Topic Editors

Department of Agroecology, Climate and Water, Aarhus University, 8830 Tjele, Denmark
College of Agriculture, Shihezi University, Shihezi 832003, China
Farmland Irrigation Research Institute, Chinese Academy of Agricultural Sciences, 380 Hongli Road, Xinxiang 453003, China
1. Department of Plant & Soil Science, Texas Tech University, 2500 Broadway, Lubbock, TX 79409, USA
2. Department of Soil and Crop Sciences, Texas A&M University, TAMU 2124, College Station, TX 77843, USA
College of Agriculture, South China Agricultural University, Guangzhou 510642, China

Advances in Smart Agriculture with Remote Sensing as the Core and Its Applications in Crops Field

Abstract submission deadline
31 October 2025
Manuscript submission deadline
31 December 2025
Viewed by 17916

Topic Information

Dear Colleagues,

In recent years, smart agriculture built on remote sensing and modeling technologies has brought significant benefits to crop fields and has also altered our understanding and management of crops. Remote sensing allows crop growth monitoring at different scales, such as “ground–low altitude–satellite”, while crop modeling provides predictive insights into crop growth and yield based on a diverse set of environmental parameters. Remote sensing and modeling are fully integrated into applications spanning crop growth, nutrient demand, irrigation management, and pest control in smart agriculture to optimize agricultural practices, enhance resource efficiency, and make substantial contributions to sustainable agricultural development. This research topic aims to seamlessly integrate remote sensing and modeling, essential components of smart agriculture, to address urgent challenges such as optimizing resource utilization and achieving sustainable agricultural development with enhanced crop production.

The scope of this research topic encompasses a broad range of subjects including but not limited to:

  • Integrating remote sensing data with plant traits into crop models to enhance prediction accuracy and decision support.
  • Applying machine learning and AI algorithms in crop modeling for increased accuracy and adaptability.
  • Utilizing the Internet of Things, sensors, and drones for real-time data collection and monitoring in smart agriculture.

We invite authors to contribute original research articles, perspectives, and reviews, providing valuable insights into “Advances in Smart Agriculture with Remote Sensing as the Core and Its Applications in Crops Field”.

Dr. Syed Tahir Ata-Ul-Karim
Dr. Yang Liu
Dr. Ben Zhao
Dr. Wenxuan Guo
Dr. Lei Zhang
Topic Editors

Keywords

  • crop
  • remote sensing
  • crop modeling
  • smart agriculture
  • machine learning

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Remote Sensing | 4.2 | 8.3 | 2009 | 23.9 days | CHF 2700
Agronomy | 3.3 | 6.2 | 2011 | 17.6 days | CHF 2600
Agriculture | 3.3 | 4.9 | 2011 | 19.2 days | CHF 2600
Crops | – | – | 2021 | 22.1 days | CHF 1000
Plants | 4.0 | 6.5 | 2012 | 18.9 days | CHF 2700

Preprints.org is a multidisciplinary platform providing a preprint service, dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of this by posting a preprint at Preprints.org prior to publication in order to:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (17 papers)

17 pages, 704 KiB  
Article
Willingness to Pay to Adopt Conservation Agriculture in Northern Namibia
by Teofilus Shiimi and David Uchezuba
Agriculture 2025, 15(5), 568; https://doi.org/10.3390/agriculture15050568 - 6 Mar 2025
Viewed by 173
Abstract
This paper explores the willingness of farmers in northern Namibia to adopt conservation agriculture (CA), employing the conditional logit model to estimate the probability of farmers choosing to adopt CA in different villages relative to all other alternatives and examining the effects of omitted variance and correlations on coefficient estimates, willingness to pay (WTP), and decision predictions. This study has practical significance, as agriculture plays a crucial role in the economic development of and livelihoods in Namibia, especially for those farmers who rely on small-scale farming as a means of subsistence. In terms of methodology, the data for the experimental choice simulation were collected using a structured questionnaire administered through a face-to-face survey approach. This paper adopts the conditional logit model to estimate the probability of farmers choosing to adopt CA in different villages, an appropriate choice as the model is capable of handling multi-option decision problems. It further enhances its rigor and reliability by simulating discrete choice experiments to investigate the impact of omitted variables and correlations on the estimation results. The research findings indicate that crop rotation and permanent soil cover are the main factors positively influencing farmers’ WTP for adopting CA, while intercropping, the time spent on soil preparation in the first season, and the frequency and rate of weeding consistently negatively influence the WTP for adopting CA. These findings provide valuable insights for formulating policy measures to promote the adoption of CA. In terms of policy recommendations, this paper puts forward targeted suggestions, including the appointment of specialized extension technicians by the Ministry of Agriculture, Water, and Land Reform to disseminate information as well as coordinate, promote, and personally implement CA activities across all regions. Additionally, to expedite the adoption of CA, stakeholders should ensure the availability of appropriate farming equipment, such as rippers and direct seeders, in local markets.
Figure 1. Location of villages in the selected study areas. Source: Authors’ compilation.
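To make the WTP mechanics concrete, here is a minimal Python sketch of conditional logit choice probabilities and the WTP ratio. The attribute names and coefficient values are invented for illustration; they are not the paper’s estimates.

```python
import numpy as np

# Hypothetical fitted conditional-logit coefficients (NOT the paper's estimates).
# Utility of alternative j: V_j = b_rotation*x1 + b_cover*x2 + b_intercrop*x3 + b_cost*cost
beta = {"rotation": 0.82, "cover": 0.57, "intercrop": -0.34, "cost": -0.011}

def choice_probabilities(X, beta_vec):
    """Conditional logit: P(choose j) = exp(V_j) / sum_k exp(V_k).

    X: (n_alternatives, n_attributes) attribute matrix for one respondent.
    """
    v = X @ beta_vec          # systematic utility of each alternative
    v -= v.max()              # stabilize the softmax
    expv = np.exp(v)
    return expv / expv.sum()

# WTP for attribute a is the marginal rate of substitution between the
# attribute and money: WTP_a = -beta_a / beta_cost.
for attr in ("rotation", "cover", "intercrop"):
    print(f"WTP for {attr}: {-beta[attr] / beta['cost']:.1f} (currency units)")

# Example choice set: 3 CA packages described by (rotation, cover, intercrop, cost).
X = np.array([[1, 1, 0, 120.0],
              [1, 0, 1,  80.0],
              [0, 1, 0,  60.0]])
beta_vec = np.array([beta["rotation"], beta["cover"], beta["intercrop"], beta["cost"]])
print(choice_probabilities(X, beta_vec))
```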
34 pages, 13743 KiB  
Article
Integration of UAV Multispectral Remote Sensing and Random Forest for Full-Growth Stage Monitoring of Wheat Dynamics
by Donghui Zhang, Hao Qi, Xiaorui Guo, Haifang Sun, Jianan Min, Si Li, Liang Hou and Liangjie Lv
Agriculture 2025, 15(3), 353; https://doi.org/10.3390/agriculture15030353 - 6 Feb 2025
Viewed by 533
Abstract
Wheat is a key staple crop globally, essential for food security and sustainable agricultural development. The results of this study highlight how innovative monitoring techniques, such as UAV-based multispectral imaging, can significantly improve agricultural practices by providing precise, real-time data on crop growth. This study utilized unmanned aerial vehicle (UAV)-based remote sensing technology at the wheat experimental field of the Hebei Academy of Agriculture and Forestry Sciences to capture the dynamic growth characteristics of wheat using multispectral data, aiming to explore efficient and precise monitoring and management strategies for wheat. A UAV equipped with multispectral sensors was employed to collect high-resolution imagery at five critical growth stages of wheat: tillering, jointing, booting, flowering, and ripening. The data covered four key spectral bands: green (560 nm), red (650 nm), red-edge (730 nm), and near-infrared (840 nm). Combined with ground-truth measurements, such as chlorophyll content and plant height, 21 vegetation indices were analyzed for their nonlinear relationships with wheat growth parameters. Statistical analyses, including Pearson’s correlation and stepwise regression, were used to identify the most effective indices for monitoring wheat growth. The Normalized Difference Red-Edge Index (NDRE) and the Triangular Vegetation Index (TVI) were selected based on their superior performance in predicting wheat growth parameters, as demonstrated by their high correlation coefficients and predictive accuracy. A random forest model was developed to comprehensively evaluate the application potential of multispectral data in wheat growth monitoring. The results demonstrated that the NDRE and TVI indices were the most effective indices for monitoring wheat growth. The random forest model exhibited superior predictive accuracy, with a mean squared error (MSE) significantly lower than that of traditional regression models, particularly during the flowering and ripening stages, where the prediction error for plant height was less than 1.01 cm. Furthermore, dynamic analyses of UAV imagery effectively identified abnormal field areas, such as regions experiencing water stress or disease, providing a scientific basis for precision agricultural interventions. This study highlights the potential of UAV-based remote sensing technology in monitoring wheat growth, addressing the research gap in systematic full-cycle analysis of wheat. It also offers a novel technological pathway for optimizing agricultural resource management and improving crop yields. These findings are expected to advance intelligent agricultural production and accelerate the implementation of precision agriculture.
Figure 1. Location of wheat experimental field. The experimental wheat-growing area is situated approximately 20 km southeast of Shijiazhuang, the capital city of Hebei Province.
Figure 2. Multispectral reflectance data (G, R, RE, NIR) of wheat collected on 1 April 2024, illustrating growth conditions and physiological characteristics during the jointing stage. (a) Green band; (b) near-infrared band; (c) red band; (d) red-edge band.
Figure 3. Temporal dynamics of wheat growth: height and chlorophyll content. (a) Wheat height across five growth stages; (b) chlorophyll content across five growth stages, for 72 experimental plots, reflecting the growth and health status of the wheat.
Figure 4. Spatial distribution maps of wheat canopy (a) height and (b) chlorophyll content across five growth stages (pre-jointing, jointing, post-jointing, booting, and flowering), revealing crop growth dynamics and regional variations.
Figure 5. Correlation between chlorophyll content and height across five wheat growth stages: (a) pre-jointing, r = −0.04 (almost no correlation); (b) jointing, r = −0.25 (weak negative, as rapid growth may obscure the relationship); (c) post-jointing, r = −0.23 (slight negative, as height stabilizes while chlorophyll declines); (d) booting, r = −0.48 (moderate negative, reflecting nutrient translocation to grains); (e) flowering, r = −0.24 (slight negative, with chlorophyll much reduced and height stable).
Figure 6. Correlation of 21 vegetation indices with (a) wheat height and (b) chlorophyll content. For height, the strongest positive correlations at tillering (20240401) were CIg (0.65), TVI (0.64), and RERI (0.62), with GNDVI most negative (−0.63); at maturity (20240521), TVI (0.67), SR (0.64), and CIg (0.63), with GNDVI again most negative (−0.59). For chlorophyll content, the strongest correlations at jointing (20240423) were TVI (0.65), SR (0.61), and CIg (0.61), with GNIRR most negative (−0.59); at maturity, TVI (0.66), SR (0.64), and CIg (0.64), with GNIRR most negative (−0.55).
Figure 7. Wheat height prediction and splitting rules based on the decision tree regression model. (a) 1 April 2024: NDVI, GNDVI, and RENDVI were the key predictive indices (height 26.7–37.8 cm); (b) 23 April 2024: RNRE, SR, and RESR (56.7–63.6 cm); (c) 30 April 2024: TVI, CIg, and SR (66.0–75.8 cm); (d) 9 May 2024: TVI, SR, and NPCI (68.0–82.7 cm); (e) 21 May 2024: TVI, SR, and GRNI (70.0–76.4 cm).
Figure 8. Wheat chlorophyll content prediction and splitting rules based on the decision tree regression model. (a) 1 April 2024: TVI, NPCI, and GRVI were the main contributing indices (predicted range 49.0–61.1 mg/g); (b) 23 April 2024: RNRE, NDVI, and GNDVI (52.0–60.7 mg/g); (c) 30 April 2024: TVI, PSRI, and GNIRR (50.0–58.2 mg/g); (d) 9 May 2024: TVI, RESR, and PSRI (53.0–60.0 mg/g); (e) 21 May 2024: TVI, GRVI, and NPCI (31.1–55.0 mg/g).
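A minimal sketch of the index-plus-random-forest workflow described above, using synthetic band reflectances in place of the UAV data. NDRE is the standard red-edge normalized difference; the TVI formula follows the common Broge–Leblanc definition, which may differ from the paper’s exact variant.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic per-plot reflectances standing in for the UAV bands
# (green 560 nm, red 650 nm, red-edge 730 nm, NIR 840 nm).
n = 72
green, red = rng.uniform(0.05, 0.15, n), rng.uniform(0.03, 0.12, n)
red_edge, nir = rng.uniform(0.15, 0.35, n), rng.uniform(0.35, 0.60, n)

# Two of the indices the study found most effective:
ndre = (nir - red_edge) / (nir + red_edge)               # Normalized Difference Red-Edge
tvi = 0.5 * (120 * (nir - green) - 200 * (red - green))  # Triangular Vegetation Index

X = np.column_stack([ndre, tvi])
height = 30 + 80 * ndre + 0.2 * tvi + rng.normal(0, 1.0, n)  # stand-in ground truth (cm)

X_tr, X_te, y_tr, y_te = train_test_split(X, height, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("test MSE:", mean_squared_error(y_te, rf.predict(X_te)))
```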
19 pages, 3491 KiB  
Article
Inversion and Fine Grading of Tidal Flat Soil Salinity Based on the CIWOABP Model
by Jin Zhu, Shuowen Yang, Shuyan Li, Nan Zhou, Yi Shen, Jincheng Xing, Lixin Xu, Zhichao Hong and Yifei Yang
Agriculture 2025, 15(3), 323; https://doi.org/10.3390/agriculture15030323 - 1 Feb 2025
Viewed by 585
Abstract
This study on soil salinity inversion in coastal tidal flats based on Sentinel-2 remote sensing imagery is significant for improving saline–alkali soils and advancing tidal flat agriculture. It proposes an improved approach for soil salinity inversion in coastal tidal flats using Sentinel-2 imagery and a new enhanced chaotic mapping adaptive whale optimization neural network (CIWOABP) algorithm. Novel spectral indices were developed to enhance correlations with salinity, significantly outperforming traditional indices. The CIWOABP model achieved superior validation accuracy (R² = 0.815) and reduced root mean square error (RMSE) and mean absolute error (MAE) compared to other machine learning models. The results enable the precise mapping of salinity levels, aiding salt-tolerant crop cultivation and sustainable agricultural management. This method offers a reliable framework for rapid salinity monitoring and precision farming in coastal regions.
Figure 1. (a) The geographical location of Yancheng. (b) The elevation model of Yancheng. (c) Sampling point area.
Figure 2. Technology flowchart of this study.
Figure 3. Correlation heatmap analysis of spectral indices.
Figure 4. Soil salinity prediction based on spectral index B11; blue dots represent predicted values.
Figure 5. Soil salinity prediction based on spectral indices B11 and SI6; blue dots represent predicted values.
Figure 6. Salinity inversion map of the study area; darker colors indicate higher soil salinity levels.
Figure 7. Vector file of farmland in the study area; white represents farmland.
Figure 8. Precision breeding map of the study area; light blue represents maize, yellow represents rice, and red represents seepweed.
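The abstract does not spell out the CIWOABP formulation, so the sketch below shows the generic ingredients it names: a logistic-map chaotic initialization feeding a standard whale optimization loop. In the paper’s setting the fitness would be the BP network’s validation error as a function of its initial weights; a sphere function stands in here to keep the sketch self-contained.

```python
import numpy as np

def chaotic_init(n_agents, dim, lo, hi, seed=0.7):
    """Logistic-map chaotic sequence used instead of uniform random initialization."""
    x = seed
    pop = np.empty((n_agents, dim))
    for i in range(n_agents):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)          # logistic map, fully chaotic at r = 4
            pop[i, j] = lo + x * (hi - lo)
    return pop

def woa(fitness, dim, n_agents=20, iters=100, lo=-1.0, hi=1.0):
    """Core whale optimization loop: encircling prey / spiral update / random search."""
    pop = chaotic_init(n_agents, dim, lo, hi)
    scores = np.apply_along_axis(fitness, 1, pop)
    best = pop[scores.argmin()].copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters            # control parameter, decreases 2 -> 0
        for i in range(n_agents):
            r, p = np.random.rand(), np.random.rand()
            A, C = 2 * a * r - a, 2 * np.random.rand()
            if p < 0.5:
                if abs(A) < 1:               # exploit: encircle the current best
                    pop[i] = best - A * np.abs(C * best - pop[i])
                else:                        # explore: move relative to a random whale
                    rand = pop[np.random.randint(n_agents)]
                    pop[i] = rand - A * np.abs(C * rand - pop[i])
            else:                            # logarithmic spiral toward the best
                D, l = np.abs(best - pop[i]), np.random.uniform(-1, 1)
                pop[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            pop[i] = np.clip(pop[i], lo, hi)
        scores = np.apply_along_axis(fitness, 1, pop)
        if scores.min() < fitness(best):
            best = pop[scores.argmin()].copy()
    return best

# Stand-in fitness; the paper would minimize the BP network's validation RMSE.
print(woa(lambda w: np.sum(w ** 2), dim=5))
```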
21 pages, 10908 KiB  
Article
Canopy Segmentation of Overlapping Fruit Trees Based on Unmanned Aerial Vehicle LiDAR
by Shiji Wang, Jie Ji, Lijun Zhao, Jiacheng Li, Mian Zhang and Shengling Li
Agriculture 2025, 15(3), 295; https://doi.org/10.3390/agriculture15030295 - 29 Jan 2025
Viewed by 463
Abstract
Utilizing LiDAR sensors mounted on unmanned aerial vehicles (UAVs) to acquire three-dimensional data of fruit orchards and extract precise information about individual trees can greatly facilitate unmanned management. To address the issue of low accuracy in traditional watershed segmentation methods based on canopy height models, this paper proposes an enhanced method to extract individual tree crowns in fruit orchards, enabling the improved detection of overlapping crown features. Firstly, a distribution curve of single-row or single-column treetops is fitted based on the detected treetops using variable window size. Subsequently, a cubic spatial region extending infinitely along the Z-axis is generated with equal width around this curve, and all crown points falling within this region are extracted and then projected onto the central plane. The projected contour of the crowns on the plane is then fitted using Gaussian functions. Treetops are detected by identifying peak points on the curve fitted by Gaussian functions. Finally, the watershed algorithm is applied to segment fruit tree crowns. The results demonstrate that in citrus orchards with pronounced crown overlap, this novel method significantly reduces the number of undetected trees with a recall of 97.04%, and the F1 score representing the detection accuracy for fruit trees reaches 98.01%. Comparisons between the traditional method and the Gaussian fitting–watershed fusion algorithm across orchards exhibiting varying degrees of crown overlap reveal that the fusion algorithm achieves high segmentation accuracy when dealing with overlapping crowns characterized by significant height variations.
Figure 1. An overview of the study area. (a) Location of the study area and plots; (b1) plot with non-overlapping trees; (b2) plot with mixed overlapping trees; (b3) plot with highly overlapping trees.
Figure 2. Workflow of point cloud data preprocessing and the Gaussian fitting–watershed fusion algorithm.
Figure 3. Canopy profile points and their projection points on the corresponding plane.
Figure 4. Curve with red markers indicating more than two peaks within a single crest and a green marker representing a valley.
Figure 5. Projection points of the crown profile in the target plane and the peak-finding results based on the Gaussian-fitted curve.
Figure 6. The difference between our algorithm and the traditional watershed algorithm in dealing with overlapping tree crowns.
Figure 7. CHM with treetop markers (red). (a) Treetop detection by the traditional algorithm; (b) treetop detection by the Gaussian fitting algorithm.
Figure 8. Vertical view of individual tree segmentation; circled areas compare local segmentation results between the two methods. (a) Traditional algorithm; (b) our algorithm.
Figure 9. CHM of three plots with different degrees of overlap. (a) Plot 1: completely non-overlapping tree crowns; (b) Plot 2: mixed overlapping tree crowns; (c) Plot 3: highly overlapping tree crowns.
Figure 10. Treetop detection by the traditional CHM algorithm. (a) Plot 1, (b) Plot 2, and (c) Plot 3, each with detected treetops.
Figure 11. Individual tree segmentation by the traditional CHM algorithm: vertical views of (a) Plot 1, (b) Plot 2, and (c) Plot 3.
Figure 12. Treetop detection by the Gaussian fitting algorithm. (a) Plot 2 with detected treetops; (b) Plot 3 with detected treetops.
Figure 13. Individual tree segmentation by the Gaussian fitting algorithm: vertical views of (a) Plot 2 and (b) Plot 3.
Figure 14. Comparison of treetop detection and crown segmentation results using traditional CHM and Gaussian fitting. (a) Treetop detection accuracy in Plot 2; (b) treetop detection accuracy in Plot 3; (c) segmentation accuracy in Plot 2; (d) segmentation accuracy in Plot 3.
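A minimal sketch of the two stages the abstract describes, on toy data: fitting Gaussians to a projected crown profile to locate treetops, then running marker-controlled watershed on a canopy height model. The profile, CHM, and marker positions are synthetic; the paper’s variable-window treetop detection and row-curve fitting are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import find_peaks
from skimage.segmentation import watershed

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# 1-D crown profile: heights of crown points projected onto the row's central plane.
x = np.linspace(0, 20, 400)
profile = (gaussian(x, 4.5, 6, 1.8) + gaussian(x, 5.2, 13, 2.1)
           + np.random.normal(0, 0.05, x.size))

# Fit a sum of two Gaussians; peaks of the fitted curve are treetop positions.
two_gauss = lambda x, a1, m1, s1, a2, m2, s2: (gaussian(x, a1, m1, s1)
                                               + gaussian(x, a2, m2, s2))
popt, _ = curve_fit(two_gauss, x, profile, p0=[4, 5, 2, 5, 14, 2])
peaks, _ = find_peaks(two_gauss(x, *popt))
print("treetop x-positions:", x[peaks])

# Marker-controlled watershed on a toy canopy height model (CHM).
yy, xx = np.mgrid[0:100, 0:100]
chm = (8 * np.exp(-((xx - 35) ** 2 + (yy - 50) ** 2) / 200)
       + 7 * np.exp(-((xx - 65) ** 2 + (yy - 50) ** 2) / 200))
markers = np.zeros_like(chm, dtype=int)
markers[50, 35], markers[50, 65] = 1, 2            # treetops become watershed markers
labels = watershed(-chm, markers, mask=chm > 0.5)  # flood the inverted CHM
print("segmented crowns:", np.unique(labels))
```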
23 pages, 7919 KiB  
Article
Interpretable LAI Fine Inversion of Maize by Fusing Satellite, UAV Multispectral, and Thermal Infrared Images
by Yu Yao, Hengbin Wang, Xiao Yang, Xiang Gao, Shuai Yang, Yuanyuan Zhao, Shaoming Li, Xiaodong Zhang and Zhe Liu
Agriculture 2025, 15(3), 243; https://doi.org/10.3390/agriculture15030243 - 23 Jan 2025
Viewed by 576
Abstract
Leaf area index (LAI) serves as a crucial indicator for characterizing the growth and development process of maize. However, the LAI inversion of maize based on unmanned aerial vehicles (UAVs) is highly susceptible to various factors such as weather conditions, light intensity, and sensor performance. In contrast to satellites, the spectral stability of UAV-based data is relatively inferior, and the phenomenon of “spectral fragmentation” is prone to occur during large-scale monitoring. This study was designed to solve the problem that maize LAI inversion based on UAVs is difficult to achieve with both high spatial resolution and spectral consistency. A two-stage remote sensing data fusion method integrating coarse and fine fusion was proposed. The SHapley Additive exPlanations (SHAP) model was introduced to investigate the contributions of 20 features in 7 categories to the LAI inversion of maize, canopy temperature extracted from thermal infrared images being one of them. Additionally, the most suitable feature sampling window was determined through multi-scale sampling experiments. The grid search method was used to optimize the hyperparameters of models such as Gradient Boosting, XGBoost, and Random Forest, and their accuracy was compared. The results showed that, by utilizing a 3 × 3 feature sampling window and the 9 features with the highest contributions, the LAI inversion accuracy over the whole growth stage based on Random Forest could reach R² = 0.90 and RMSE = 0.38 m²/m². Compared with the single UAV data source mode, the inversion accuracy was enhanced by nearly 25%. The R² in the jointing, tasseling, and filling stages was 0.87, 0.86, and 0.62, respectively. Moreover, this study verified the significant role of thermal infrared data in LAI inversion, providing a new method for fine LAI inversion of maize.
Figure 1. Location of the experimental farm and tools used in this study. (a,b) Thumbnail and enlarged views of the study area’s location; (c) false-color Sentinel-2 image of the study area (yellow box: maize-planting area; numbers: sowing sequence); (d) true-color drone image (numbers on each plot denote the plot area in mu); (e) photo of the sample-processing site; (f) DJI Mavic 3M drone; (g) DJI Mavic 3T drone.
Figure 2. Distribution of LAI samples across different growth stages.
Figure 3. Flowchart for the interpretable fine LAI inversion of maize by fusing satellite, UAV multispectral, and thermal infrared images. Step 1: data processing and integration; Step 2: feature engineering; Step 3: construction and evaluation of the LAI inversion model.
Figure 4. Average accuracy of ten-fold cross-validation for coarse fusion of UAV and Sentinel-2 data: (a) R², (b) MAE.
Figure 5. Accuracy of the feature sampling window for different sampling points corresponding to different data modes and models.
Figure 6. Feature engineering of the optimal maize LAI inversion model based on SHapley Additive exPlanations (SHAP) using UAV images. (a) Summary diagram of the optimal model; (b) bar chart of mean |SHAP value|; (c) dependency diagram of the best features ((a–c) use the Random Forest model); (d–f) performance of models with different numbers of features.
Figure 7. Grid search scores (R²) for each LAI fine-inversion model using the UAV data mode: (a–c) XGBoost, (d,e) Decision Tree, (f,g) AdaBoost, (h,i) Gradient Boosting, (j–l) Random Forest.
Figure 8. Scatter plots of training and validation accuracy for the fine-fusion mode with six LAI inversion models.
Figure 9. Maize LAI inversion results in different growth stages under different data modes; all results have a spatial resolution of 0.5 m except those in the Sentinel-2 mode (10 m).
Figure 10. Local enlarged views of the study area and LAI inverted under different data modes (tasseling stage). (a) Original UAV true-color image; (b) UAV true-color image resampled to 0.5 m; (c) original UAV false-color image; (d–h) LAI inverted under the Sentinel-2, UAV, Brovey, PCA, and fine-fusion data modes, respectively.
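A minimal sketch of the grid-searched Random Forest plus SHAP feature-ranking step, assuming the `shap` and `scikit-learn` packages. Feature names and the synthetic LAI target are illustrative stand-ins for the study’s 20 features in 7 categories.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)

# Stand-in features: e.g. vegetation indices, texture, canopy temperature.
feature_names = ["NDVI", "EVI", "texture_mean", "canopy_temp", "band_ratio"]
X = rng.normal(size=(200, len(feature_names)))
lai = 2.0 + 1.2 * X[:, 0] - 0.6 * X[:, 3] + rng.normal(0, 0.2, 200)  # synthetic LAI

# Grid search over a few Random Forest hyperparameters, as in the study's setup.
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    scoring="r2", cv=5,
)
grid.fit(X, lai)
print("best params:", grid.best_params_, "CV R2:", round(grid.best_score_, 3))

# SHAP (TreeExplainer) ranks each feature's contribution to the LAI prediction.
explainer = shap.TreeExplainer(grid.best_estimator_)
shap_values = explainer.shap_values(X)
ranking = np.abs(shap_values).mean(axis=0)
for name, val in sorted(zip(feature_names, ranking), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {val:.3f}")
```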
25 pages, 24423 KiB  
Article
A Landscape-Clustering Zoning Strategy to Map Multi-Crops in Fragmented Cropland Regions Using Sentinel-2 and Sentinel-1 Imagery with Feature Selection
by Guanru Fang, Chen Wang, Taifeng Dong, Ziming Wang, Cheng Cai, Jiaqi Chen, Mengyu Liu and Huanxue Zhang
Agriculture 2025, 15(2), 186; https://doi.org/10.3390/agriculture15020186 - 16 Jan 2025
Viewed by 615
Abstract
Crop mapping using remote sensing is a reliable and efficient approach to obtaining timely and accurate crop information. Previous studies predominantly focused on large-scale regions characterized by simple cropping structures. However, in complex agricultural regions, such as China’s Huang-Huai-Hai region, the high crop diversity and fragmented cropland in localized areas present significant challenges for accurate crop mapping. To address these challenges, this study introduces a landscape-clustering zoning strategy utilizing multi-temporal Sentinel-1 and Sentinel-2 imagery. First, crop heterogeneity zones (CHZs) are delineated using landscape metrics that capture crop diversity and cropland fragmentation. Subsequently, four types of features (spectral, phenological, textural, and radar features) are combined in various configurations to create different classification schemes. These schemes are then optimized for each CHZ using a random forest classifier. The results demonstrate that the landscape-clustering zoning strategy achieves an overall accuracy of 93.52% and a kappa coefficient of 92.67%, outperforming the no-zoning method by 2.9% and 3.82%, respectively. Furthermore, the crop mapping results from this strategy closely align with agricultural statistics at the county level, with an R² value of 0.9006. In comparison with other traditional zoning strategies, such as topographic zoning and administrative unit zoning, the proposed strategy proves to be superior. These findings suggest that the landscape-clustering zoning strategy offers a robust reference method for crop mapping in complex agricultural landscapes.
Figure 1. Illustration of crop heterogeneity. (a–d) represent landscapes; different colors indicate different crop types. (a) to (b) and (c) to (d) indicate increasing cropland fragmentation (configurational heterogeneity); (c) to (a) and (d) to (b) indicate increasing crop diversity (compositional heterogeneity).
Figure 2. Geo-location of the study area with survey samples of different crops. Panels (a1–a4) present representative examples illustrating details of crop distribution. All images were acquired by Sentinel-2 in August 2019 and are displayed as false-color composites (bands B8, B4, and B3).
Figure 3. Phenological features fitted by a double logistic regression function: (a) SOS, (b) EOS, (c) LOS, (d) BL, (e) MOS, (f) Value_MOS, (g) SA, (h) Integral, (i) Value_SOS, and (j) Value_EOS, with detailed descriptions in Table 2.
Figure 4. Statistical crop area in the study area for each county.
Figure 5. Overall framework of the study.
Figure 6. Decision tree for cropland information extraction.
Figure 7. Variance analysis of PCA with varimax rotation. (a) Eigenvalues for all components, with those over 1 marked by red squares; (b) variance and cumulative variance of components after screening.
Figure 8. Rotated principal component loadings extracted from landscape metrics; metrics with absolute loadings above 0.75, indicating significant contribution to their components, are marked with blue triangles.
Figure 9. Generation of CHZs and their crop heterogeneity. (a) Average silhouette coefficient based on Equation (4), assessing clustering effectiveness; (b) mean values of comprehensive landscape metrics for each CHZ.
Figure 10. Spatial distribution of CHZs. (a) Overall spatial distribution; (a1–a5) representative examples of each CHZ showing spatial details of arable patches, derived from Sentinel-2 August mean composites and the extracted cropland maps.
Figure 11. Percentage of statistical crop areas in each CHZ.
Figure 12. Optimal classification scheme selection for each CHZ; S1–S7 have the same meanings as in Table 4.
Figure 13. Crop map of the study area obtained from the landscape-clustering zoning strategy.
Figure 14. Spatial details of crop mapping results in five CHZs for the landscape-clustering zoning and non-zoning methods (Sentinel-2, August 2019).
Figure 15. Comparison of the mapped area of all crop types with census data at the county level; the red line indicates a 1:1 ratio of mapped to census area.
Figure 16. Accuracy evaluation based on different zoning methods.
Figure 17. Spatial distribution of zoning results for different strategies: (a) landscape-clustering zoning, (b) topographic zoning, and (c) county-level administrative zoning.
Figure 18. OA of the entire study area and each CHZ.
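A minimal sketch of the CHZ delineation step: standardized landscape metrics are reduced with PCA and clustered with k-means, with the silhouette coefficient selecting the number of zones. Note that sklearn’s PCA has no varimax rotation, so that part of the paper’s pipeline is omitted here, and the metric matrix is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Stand-in landscape metrics per grid cell, capturing crop diversity
# (compositional) and cropland fragmentation (configurational heterogeneity),
# e.g. Shannon diversity, patch density, edge density, ...
metrics = rng.normal(size=(500, 8))

z = StandardScaler().fit_transform(metrics)
comps = PCA(n_components=0.9).fit_transform(z)  # keep 90% of the variance

# Choose the number of crop heterogeneity zones (CHZs) by silhouette coefficient.
best_k, best_s = None, -1.0
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(comps)
    s = silhouette_score(comps, labels)
    if s > best_s:
        best_k, best_s = k, s
print(f"selected k = {best_k} (silhouette = {best_s:.3f})")
```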
22 pages, 6594 KiB  
Article
Rice Growth-Stage Recognition Based on Improved YOLOv8 with UAV Imagery
by Wenxi Cai, Kunbiao Lu, Mengtao Fan, Changjiang Liu, Wenjie Huang, Jiaju Chen, Zaoming Wu, Chudong Xu, Xu Ma and Suiyan Tan
Agronomy 2024, 14(12), 2751; https://doi.org/10.3390/agronomy14122751 - 21 Nov 2024
Viewed by 922
Abstract
To optimize rice yield and enhance quality through targeted field management at each growth stage, rapid and accurate identification of rice growth stages is crucial. This study presents the Mobilenetv3-YOLOv8 rice growth-stage recognition model, designed for high efficiency and accuracy using Unmanned Aerial Vehicle (UAV) imagery. A UAV captured images of rice fields across five distinct growth stages from two altitudes (3 m and 20 m) across two independent field experiments. These images were processed to create training, validation, and test datasets for model development. Mobilenetv3 was introduced to replace the standard YOLOv8 backbone, providing robust small-scale feature extraction through multi-scale feature fusion. Additionally, the Coordinate Attention (CA) mechanism was integrated into YOLOv8’s backbone, outperforming the Convolutional Block Attention Module (CBAM) by enhancing position-sensitive information capture and focusing on crucial pixel areas. Compared to the original YOLOv8, the enhanced Mobilenetv3-YOLOv8 model improved rice growth-stage identification accuracy and reduced the computational load. With an input image size of 400 × 400 pixels and the CA implemented in the second and third backbone layers, the model achieved its best performance, reaching 84.00% mAP and 84.08% recall. The optimized model achieved 6.60 M parameters and 0.9 GFLOPs (Giga Floating Point Operations), with precision values for the tillering, jointing, booting, heading, and filling stages of 94.88%, 93.36%, 67.85%, 78.31%, and 85.46%, respectively. The experimental results revealed that the optimal Mobilenetv3-YOLOv8 shows excellent performance and has potential for deployment on edge computing devices for in-field rice growth-stage recognition in the future.
Figure 1. A schematic diagram of the proposed method.
Figure 2. The study site and rice field experiment designs. (a) Spring rice field experiment, EXP.1; (b) autumn rice field experiment, EXP.2.
Figure 3. Unmanned Aerial Vehicle photography.
Figure 4. Diagram of the YOLOv8 model. The color blocks simulate YOLOv8 image input: the image enters the backbone network for feature extraction, passes through standard convolutions and the C2F convolution structure, and finally reaches the image classification module.
Figure 5. Diagram of the Mobilenetv3-YOLOv8 model. As in Figure 4, but with the YOLOv8 backbone replaced by Mobilenetv3: Conv2d is a two-dimensional convolution layer, and Bneck is Mobilenetv3’s special bottleneck structure.
Figure 6. Overview diagram of the CBAM mechanism.
Figure 7. Overview diagram of the CA mechanism.
Figure 8. Recognition performance of the Mobilenetv3-YOLOv8 model on images of different input sizes.
Figure 9. Performance comparison of different Mobilenet networks.
Figure 10. Performance comparison of different models.
Figure 11. Location map of the CA mechanism. Subfigures (a–e) show adding the attention mechanism to one through five layers of the backbone network (blue blocks: backbone working layers; red blocks: added CA attention layers). The five bottleneck layers use 3 × 3, 3 × 3, 5 × 5, 5 × 5, and 5 × 5 convolutions, with input feature spatial dimensions of 320 × 320, 160 × 160, 80 × 80, 40 × 40, and 20 × 20 and 16, 16, 24, 48, and 96 channels, respectively.
Figure 12. Confusion matrix for Mobilenetv3-YOLOv8 evaluated on the test dataset.
Figure 13. False-positive detection with (a) the booting stage falsely recognized as tillering and (b) the filling stage falsely recognized as jointing.
Figure 14. False-positive detection with the booting stage falsely recognized as tillering.
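For readers unfamiliar with the CA mechanism the paper integrates, here is a PyTorch sketch of a Coordinate Attention block following the published design (Hou et al., 2021): unlike CBAM’s global pooling, it pools along height and width separately so the attention map retains positional information. Channel counts and the activation choice are illustrative; the paper’s exact module configuration may differ.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate Attention: direction-aware pooling keeps position information."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Pool along each axis: one descriptor per row and one per column.
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (n, c, 1, w)
        return x * a_h * a_w                 # reweight features by row and column

# Example: apply CA to a backbone feature map of 24 channels at 80 x 80.
feat = torch.randn(1, 24, 80, 80)
print(CoordinateAttention(24)(feat).shape)
```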
17 pages, 7484 KiB  
Article
Prediction of the Potentially Suitable Areas of Sesame in China Under Climate Change Scenarios Using MaxEnt Model
by Guoqiang Li, Xue Wang, Jie Zhang, Feng Hu, Hecang Zang, Tongmei Gao, Youjun Li and Ming Huang
Agriculture 2024, 14(11), 2090; https://doi.org/10.3390/agriculture14112090 - 20 Nov 2024
Viewed by 901
Abstract
Sesame (Sesamum indicum L.) is an essential oil crop in China, but its growth and development are affected by climate change. To cope with the impacts of climate change on sesame cultivation, we used the Maximum Entropy (MaxEnt) model to analyze the bioclimatic variables governing the climate suitability of sesame in China and predicted the suitable areas and trends for sesame under current and future climate scenarios. The results showed that the MaxEnt model prediction was excellent. The most crucial bioclimatic variable influencing the distribution of sesame was max temperature of the warmest month, followed by annual mean temperature, annual precipitation, mean diurnal range, and precipitation of the driest month. Under the current climate scenario, the suitable areas of sesame were widely distributed in China, from south (Hainan) to north (Heilongjiang) and from east (Yellow Sea) to west (Tibet). The highly suitable areas covered 64.51 × 10⁴ km², accounting for 6.69% of the total land area of China, and were mainly located in southern central Henan, eastern central Hubei, northern central Anhui, northern central Jiangxi, and eastern central Hunan. The moderately suitable and lowly suitable areas accounted for 17.45% and 25.82%, respectively. Compared with the current climate scenario, the highly and lowly suitable areas under future climate scenarios increased by 0.10%–11.48% and 0.08%–8.67%, while the moderately suitable areas decreased by 0.31%–23.03%. In addition, the increased highly suitable areas were mainly distributed in northern Henan, and the decreased moderately suitable areas were mainly distributed in Heilongjiang, Jilin, and Liaoning. This work is practically significant for optimizing the regional layout of sesame cultivation in response to future climate conditions.
Figure 1. Distribution points of sesame in China.
Figure 2. Results of the jackknife test of bioclimatic variables for the suitability of sesame.
Figure 3. Responses of the five major bioclimatic variables: (a) max temperature of warmest month, (b) annual mean temperature, (c) annual precipitation, (d) mean diurnal range, and (e) precipitation of driest month.
Figure 4. Receiver operating characteristic curve with the corresponding area under the curve.
Figure 5. Potentially suitable areas for sesame under the current climate scenario in China.
Figure 6. Potentially suitable areas for sesame under future climate scenarios in China.
Figure 7. Changes in suitable areas for sesame in different periods.
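A rough sketch of the presence-background workflow behind MaxEnt-style suitability mapping. This is not the MaxEnt software itself: logistic regression on presence vs. background points is a commonly used approximation of MaxEnt’s exponential model, and all bioclimatic values and class thresholds below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Stand-in bioclimatic variables per location, in the spirit of BIO1 (annual mean
# temperature), BIO5 (max temperature of warmest month), BIO12 (annual precipitation).
n_presence, n_background = 300, 3000
pres = rng.normal(loc=[22, 33, 900], scale=[2, 2, 150], size=(n_presence, 3))
back = rng.normal(loc=[15, 28, 600], scale=[6, 5, 400], size=(n_background, 3))

X = np.vstack([pres, back])
y = np.concatenate([np.ones(n_presence), np.zeros(n_background)])

# Presence vs. background classification; the predicted probability serves as a
# relative habitat suitability score.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)
suit = model.predict_proba(X)[:, 1]
print("AUC:", round(roc_auc_score(y, suit), 3))

# Illustrative thresholds analogous to lowly/moderately/highly suitable classes.
classes = np.digitize(suit, bins=[0.05, 0.33, 0.66])  # 0 = unsuitable ... 3 = highly
print("class counts:", np.bincount(classes, minlength=4))
```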
38 pages, 7743 KiB  
Article
Forecasting Blue and Green Water Footprint of Wheat Based on Single, Hybrid, and Stacking Ensemble Machine Learning Algorithms Under Diverse Agro-Climatic Conditions in Nile Delta, Egypt
by Ashrakat A. Lotfy, Mohamed E. Abuarab, Eslam Farag, Bilal Derardja, Roula Khadra, Ahmed A. Abdelmoneim and Ali Mokhtar
Remote Sens. 2024, 16(22), 4224; https://doi.org/10.3390/rs16224224 - 13 Nov 2024
Viewed by 878
Abstract
The aim of this research is to develop and compare single, hybrid, and stacking ensemble machine learning models under spatial and temporal climate variations in the Nile Delta regarding the estimation of the blue and green water footprint (BWFP and GWFP) for wheat. Thus, four single machine learning models (XGB, RF, LASSO, and CatBoost) and eight hybrid machine learning models (XGB-RF, XGB-LASSO, XGB-CatBoost, RF-LASSO, CatBoost-LASSO, CatBoost-RF, XGB-RF-LASSO, and XGB-CatBoost-LASSO) were used, along with stacking ensembles, with five scenarios including climate and crop parameters and remote sensing-based indices. The highest R² value for predicting wheat BWFP was achieved with XGB-LASSO under scenario 4 at 100%, while the minimum was 0.16 with LASSO under scenario 3 (remote sensing indices). To predict wheat GWFP, the highest R² value of 100% was achieved with RF-LASSO across scenario 1 (all parameters), scenario 2 (climate parameters), scenario 4 (Peeff, Tmax, Tmin, and SA), and scenario 5 (Peeff and Tmax). The lowest value was recorded with LASSO under scenario 3. The use of individual and hybrid machine learning models showed high efficiency in predicting the blue and green water footprint of wheat, with high ratings according to statistical performance standards. However, the hybrid programs, whether binary or triple, outperformed both the single models and the stacking ensemble.
Figure 1. Geographical location of the study area and the meteorological stations.
Figure 2. Workflow summarizing input data, applied machine learning models, scenarios, and expected output. EVI: Landsat Enhanced Vegetation Index; NDVI: normalized difference vegetation index; SAVI: Soil-Adjusted Vegetation Index; NDMI: Normalized Difference Moisture Index; GCI: Green Chlorophyll Index; LST: land surface temperature; BWFP: blue water footprint; GWFP: green water footprint; Sc: scenario.
Figure 3. Stacking ensemble-learning workflow.
Figure 4. Stacking ensemble based on cross-validation of all feature subsets.
Figure 5. Climatic parameters and reference evapotranspiration from 2013 to 2022 in the study area: (A) Tmax and Tmin; (B) relative humidity and wind speed; (C) effective precipitation and reference evapotranspiration, for both governorates and the months of the wheat growing season.
Figure 6. Yield and evapotranspiration of wheat from 2013 to 2022 for both governorates and the months of the wheat growing season.
Figure 7. GWFP and BWFP of wheat from 2013 to 2022 for both governorates and the months of the wheat growing season.
Figure 8. Bar charts comparing the models in each scenario, based on U95 and accuracy for BWFP prediction.
Figure 9. Bar charts comparing the models in each scenario, based on U95 and accuracy for GWFP prediction.
Figure 10. Flower plots of the correlations (R²) between actual and predicted BWFP and GWFP values.
Figure 11. Radar charts comparing the models in each scenario, based on the RMSE criterion for the BWFP and GWFP of wheat.
Figure 12. Column charts comparing the models in each scenario, based on the SI criterion for the (A) BWFP and (B) GWFP of wheat.
Figure 13. Correlation matrix between climate parameters, crop parameters, and remote sensing indices with BWFP and GWFP for wheat in the EL-Sharkia and EL-Beheira governorates.
Figure 14. Box plots of the distribution of BWFP and GWFP estimate errors for the best model and scenarios on the test set.
Figure 15. Relative contributions of 13 input variables to the green water footprint.
27 pages, 7854 KiB  
Article
An Optimized Semi-Supervised Generative Adversarial Network Rice Extraction Method Based on Time-Series Sentinel Images
by Lingling Du, Zhijun Li, Qian Wang, Fukang Zhu and Siyuan Tan
Agriculture 2024, 14(9), 1505; https://doi.org/10.3390/agriculture14091505 - 2 Sep 2024
Viewed by 1100
Abstract
In response to the limitations imposed by meteorological conditions in global rice-growing areas and the high cost of annotating samples, this paper combines the Vertical-Vertical (VV) and Vertical-Horizontal (VH) polarization backscatter features extracted from Sentinel-1 synthetic aperture radar (SAR) images with the NDVI, NDWI, and NDSI spectral index features extracted from Sentinel-2 multispectral images. By leveraging the strengths of an optimized Semi-Supervised Generative Adversarial Network (optimized SSGAN) in combining supervised and semi-supervised learning, rice extraction can be achieved with fewer annotated image samples. Within the optimized SSGAN framework, we introduce a focal-adversarial loss function to enhance learning on challenging samples; the generator module employs the Deeplabv3+ architecture with a Wide-ResNet backbone, incorporating dropout layers and dilated convolutions to enlarge the receptive field and improve operational efficiency. Experimental results indicate that the optimized SSGAN, particularly when utilizing a 3/4 labeled-sample ratio, significantly improves rice extraction accuracy, yielding a 5.39% increase in Mean Intersection over Union (MIoU) and a 2.05% increase in Overall Accuracy (OA) over the highest accuracy achieved before optimization. Moreover, integrating SAR and multispectral data results in an OA of 93.29% and an MIoU of 82.10%, surpassing the performance of single-source data. These findings provide valuable insights for rice extraction in global rice-growing regions.
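The abstract names a "focal-adversarial" loss without giving its form; a plausible reading is the standard binary cross-entropy adversarial loss reweighted by the focal factor of Lin et al.'s focal loss, so that easy real/fake decisions contribute less. The sketch below follows that reading in PyTorch; `gamma`, `D`, and `G` are illustrative assumptions, not the authors' implementation.

```python
# A hedged sketch of a focal-style adversarial loss: per-pixel BCE on the
# discriminator logits, down-weighted for easy samples by (1 - p_t)^gamma.
import torch
import torch.nn.functional as F

def focal_adversarial_loss(logits: torch.Tensor,
                           is_real: bool,
                           gamma: float = 2.0) -> torch.Tensor:
    """Focal BCE loss on (per-pixel) discriminator logits."""
    targets = torch.ones_like(logits) if is_real else torch.zeros_like(logits)
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)   # probability of the true class
    return ((1 - p_t) ** gamma * bce).mean()      # focal factor suppresses easy pixels

# Hypothetical usage with a discriminator D and generator G:
#   d_loss = focal_adversarial_loss(D(real_maps), True) + \
#            focal_adversarial_loss(D(G(images)), False)
```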
Figures:
Figure 1. Flow chart of the proposed method.
Figure 2. Location of the study area and a true-color image map.
Figure 3. The phenology of rice on Chongming Island.
Figure 4. Map of the distribution of rice samples.
Figure 5. Variation curves of backscatter values of the VH and VV polarized features.
Figure 6. Semi-supervised generative adversarial learning strategy.
Figure 7. Improved generator network structure diagram.
Figure 8. Generator backbone network residual blocks and bottleneck layer structures: (a) basic residual module; (b) atrous convolution + dropout residual module; (c) bottleneck module; (d) atrous convolution + dropout bottleneck module.
Figure 9. Discriminator network structure.
Figure 10. VH polarization feature dataset accuracy enhancement values.
Figure 11. VV polarization feature dataset accuracy enhancement values.
Figure 12. VH + VV polarization feature dataset accuracy enhancement values.
Figure 13. VH polarization + VV polarization + NDVI + NDWI + NDSI feature dataset accuracy enhancement values.
Figure 14. Comparative plots of rice extraction results.
22 pages, 7164 KiB  
Article
LettuceNet: A Novel Deep Learning Approach for Efficient Lettuce Localization and Counting
by Aowei Ruan, Mengyuan Xu, Songtao Ban, Shiwei Wei, Minglu Tian, Haoxuan Yang, Annan Hu, Dong Hu and Linyi Li
Agriculture 2024, 14(8), 1412; https://doi.org/10.3390/agriculture14081412 - 20 Aug 2024
Cited by 3 | Viewed by 1196
Abstract
Traditional lettuce counting relies heavily on manual labor, which is laborious and time-consuming. In this study, a simple and efficient method for localizing and counting lettuce is proposed, based only on lettuce field images acquired by an unmanned aerial vehicle (UAV) equipped with an RGB camera. In this method, a new lettuce counting model based on a weakly supervised deep learning (DL) approach is developed, called LettuceNet. The LettuceNet network adopts a lightweight design that relies only on point-level labeled images to train and accurately predict the number and locations of high-density lettuce (i.e., clusters of lettuce with small planting spacing, high leaf overlap, and unclear boundaries between adjacent plants). The proposed LettuceNet is thoroughly assessed in terms of localization and counting accuracy, model efficiency, and generalizability using the Shanghai Academy of Agricultural Sciences-Lettuce (SAAS-L) and the Global Wheat Head Detection (GWHD) datasets. The results demonstrate that LettuceNet achieves superior counting accuracy, localization, and efficiency when employing the enhanced MobileNetV2 as the backbone network. Specifically, the counting accuracy metrics, including mean absolute error (MAE), root mean square error (RMSE), normalized root mean square error (nRMSE), and coefficient of determination (R²), reach 2.4486, 4.0247, 0.0276, and 0.9933, respectively, and the F-score for localization accuracy is an impressive 0.9791. Moreover, LettuceNet is compared with other widely used plant counting methods, including the Multi-Column Convolutional Neural Network (MCNN), Dilated Convolutional Neural Networks (CSRNets), the Scale Aggregation Network (SANet), TasselNet Version 2 (TasselNetV2), and Focal Inverse Distance Transform Maps (FIDTM). The results indicate that the proposed LettuceNet performs best on all evaluated metrics, with a 13.27% higher R² and a 72.83% lower nRMSE than SANet, the second most accurate method. In summary, the proposed LettuceNet demonstrates strong performance in localizing and counting high-density lettuce, showing great potential for field application.
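The four counting metrics reported above (MAE, RMSE, nRMSE, R²) are standard and easy to reproduce. The sketch below computes them from hypothetical per-image manual and predicted counts; nRMSE is assumed here to be RMSE normalized by the mean observed count, which is one common convention (the paper's exact normalization is not stated in the abstract).

```python
# A minimal sketch of the counting-accuracy metrics used to evaluate LettuceNet.
# Inputs are hypothetical arrays of per-image manual and predicted counts.
import numpy as np

def counting_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    nrmse = rmse / y_true.mean()                  # normalization convention assumed
    ss_res = (err ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return {"MAE": mae, "RMSE": rmse, "nRMSE": nrmse, "R2": r2}

# Example with made-up counts for four images:
print(counting_metrics([120, 95, 150, 80], [118, 97, 146, 83]))
```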
Figures:
Figure 1. Location of the study area and distribution of the lettuce field.
Figure 2. Examples of lettuce labeling for (a) a single variety of healthy lettuce individuals and (b) tightly packed groups of lettuce (green dots are point-level labels).
Figure 3. Architecture of LettuceNet. "Layer m (1/n)" indicates that after convolutional layer m the feature map is 1/n of the input image size; if 1/n is not given, the feature map keeps the size of the previous layer (e.g., layer 6 is 1/16 of the input size, identical to layer 5; layer 1 equals the input size, i.e., 1/1). The number of feature maps is the number of output feature maps after convolution; the original input has 3 feature maps, corresponding to the R, G, and B channels. "Rate" is the dilation rate of the atrous convolution; "k × k Conv" denotes a convolution kernel of size k × k; "Upsample by i" means increasing the feature map size by a factor of i.
Figure 4. Structure and internal details of the improved MobileNetV2. The expansion_factor and bottleneck_num parameterize each convolutional layer; layer 1 performs only one depthwise convolution; layers 2-8 share the same inverted-residual structure but differ in expansion and bottleneck coefficients; layers 3-5 output feature maps to the DM, and layer 8 outputs feature maps to the MFFM (see the LettuceNet architecture in Figure 3).
Figure 5. Comparison of LettuceNet operation efficiency using ResNet50, VGG16, MobileNetV2, and the improved MobileNetV2 as the backbone network.
Figure 6. Localization results of the LettuceNet model with the improved MobileNetV2 backbone on five test images from the SAAS-L dataset with (a) clear borders, clear texture features, and tight arrangement; (b,c) unclear borders, fuzzy texture features, and tight arrangement; (d,e) relatively clear border and texture features and a compact irregular arrangement. The red area in the second column represents the probability that a pixel belongs to the lettuce class (darker means more probable); the blue area in the third column shows blobs of neighboring pixels with probability greater than 0.5.
Figure 7. Comparison of overall LettuceNet localization performance with different backbone networks on lettuce images with (a) clear borders, clear texture features, and tight arrangement; (b,c) unclear borders, fuzzy texture features, and tight arrangement; (d,e) relatively clear border and texture features and an overly tight arrangement. Red boxes indicate undetected lettuce; green boxes indicate two or more lettuces detected as one.
Figure 8. Comparison of local visualizations of LettuceNet localization with different backbone networks in randomly selected small-area lettuce images with (a) clear borders, clear texture features, and tight arrangement; (b,c) unclear borders, fuzzy texture features, and tight arrangement; (d,e) relatively clear border and texture features and an overly tight arrangement. Red boxes indicate undetected lettuce; green boxes indicate two or more lettuces detected as one.
Figure 9. Comparison of the proposed LettuceNet with MCNN, CSRNets, SANet, TasselNetV2, and FIDTM on the operational efficiency of lettuce counting tasks.
Figure 10. LettuceNet visualization of counting results on the GWHD wheat-head dataset with (a-c) distinct features that differ strongly from the background and (d,e) similar features mixed with the background under strong light. The first column shows the original RGB test images, the second the heat map, and the third the localization and counting map; the red area in the second column represents the probability that a pixel belongs to the wheat-head class (darker means more probable), and the blue area in the third column shows blobs of neighboring pixels with probability greater than 0.5.
Figure 11. Coefficients of determination of LettuceNet on 24 original images (resolution 5472 × 4648). Orange dots represent the 24 counting experiments; green lines are 1:1 lines through the origin. The closer an orange dot is to the green line, the closer the predicted count is to the manual count.
Figure 12. Localization results of the LettuceNet model for the stitched lettuce image most affected by boundary effects. Blue areas represent individual lettuces as blobs of adjacent pixels, each with probability greater than 0.5 of belonging to the lettuce class; red boxes indicate lettuces not detected due to boundary effects; green boxes indicate lettuces repeatedly detected due to boundary effects. (A-C) Areas where lettuce was not detected due to boundary effects; (D) area where lettuce was repeatedly detected due to boundary effects.
27 pages, 7575 KiB  
Article
Improving Radiometric Block Adjustment for UAV Multispectral Imagery under Variable Illumination Conditions
by Yuxiang Wang, Zengling Yang, Haris Ahmad Khan and Gert Kootstra
Remote Sens. 2024, 16(16), 3019; https://doi.org/10.3390/rs16163019 - 17 Aug 2024
Cited by 2 | Viewed by 1308
Abstract
Unmanned aerial vehicles (UAVs) equipped with multispectral cameras offer great potential for applications in precision agriculture. A critical challenge that limits the deployment of this technology is the varying ambient illumination caused by cloud movement. Rapidly changing solar irradiance primarily affects the radiometric calibration process, resulting in reflectance distortion and heterogeneity in the final orthomosaic. In this study, we optimized the radiometric block adjustment (RBA) method, which corrects for changing illumination by comparing adjacent images and incorporating incidental observations of reference panels, to produce accurate and uniform reflectance orthomosaics regardless of variable illumination. The radiometric accuracy and uniformity of the generated orthomosaic could be enhanced by increasing the weight of the information from the reference panels and by reducing the number of tie points between adjacent images. Furthermore, especially for crop monitoring, we propose the RBA-Plant method, which extracts tie points solely from vegetation areas, to further improve the accuracy and homogeneity of the orthomosaic over vegetation. To validate the effectiveness of the optimization techniques and the proposed RBA-Plant method, visual and quantitative assessments were conducted on a UAV image dataset collected under fluctuating solar irradiance. The results demonstrate that the optimized RBA and RBA-Plant methods outperform the current empirical line method (ELM) and sensor-corrected approaches, showing significant improvements in both radiometric accuracy and homogeneity: the average root mean square error (RMSE) decreased from 0.084 (ELM) to 0.047, and the average coefficient of variation (CV) decreased from 24% (ELM) to 10.6%. The orthomosaic generated by the RBA-Plant method achieved the lowest RMSE and CV values, 0.039 and 6.8%, respectively, indicating the highest accuracy and best uniformity. In summary, although UAVs typically incorporate lighting sensors for illumination correction, this research offers additional methods for improving uniformity and obtaining more accurate reflectance values from orthomosaics.
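The core mechanism of radiometric block adjustment, as described above, is that radiance values at matching tie points in two overlapping images differ mainly by the change in illumination between acquisitions, which can be captured by a linear fit. The sketch below illustrates that idea only; variable names, the outlier rule, and the slope-through-origin model are illustrative assumptions, not the paper's code.

```python
# A hedged sketch of estimating the illumination-change slope between two
# overlapping images from their tie-point radiances.
import numpy as np

def illumination_slope(radiance_a: np.ndarray,
                       radiance_b: np.ndarray,
                       n_sigma: float = 2.0) -> float:
    """Least-squares slope s in b ~ s * a after simple outlier rejection."""
    ratio = radiance_b / radiance_a
    keep = np.abs(ratio - ratio.mean()) < n_sigma * ratio.std()  # drop outlier tie points
    a, b = radiance_a[keep], radiance_b[keep]
    return float((a * b).sum() / (a * a).sum())   # slope through the origin

rng = np.random.default_rng(1)
a = rng.uniform(0.1, 0.5, 200)                    # tie-point radiances, image A
b = 0.8 * a + rng.normal(scale=0.01, size=200)    # image B under dimmer illumination
print("estimated slope:", illumination_slope(a, b))  # close to 0.8
```

In the full method, such per-pair relations and the reference-panel observations are solved jointly in one block adjustment rather than pairwise as sketched here.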
Figures:
Figure 1. The study area is located in Wageningen, Gelderland province, the Netherlands. The right panel shows the experimental setup: red dots represent ground control points (GCPs), and each yellow five-pointed star denotes a set of reference panels. The white rectangle indicates the potato monoculture field, and the light blue one the potato-grass stripcropping field.
Figure 2. Mean reflectance values of the four sets of self-made reference panels used in this experiment.
Figure 3. Variability of solar irradiance at 560 nm (green channel) during UAV data collection under dynamic cloud conditions, as observed on 14 June between 11:20 and 12:00.
Figure 4. Workflow for the proposed radiometric block adjustment method.
Figure 5. Flowchart for selecting tie points located in the vegetation area.
Figure 6. Flowchart for identifying tie points located in vegetation areas. (a) Tie points, denoted by red dots, on the example image in the green channel. (b) Histogram of the NDVI map for the corresponding image; the red line indicates the segmentation threshold between vegetation and non-vegetation. (c) NDVI filtered to highlight vegetation areas, overlaid on the RGB base layer. (d) Tie points exclusively located in vegetation regions of the example image.
Figure 7. Conceptual framework for reducing the number of tie-point equations. (a) Tie-point extraction on the example image pair using the Metashape Python package. (b) 2D scatter plot of radiance values at matching tie points between overlapping images (blue circles); green dots highlight the points retained after outlier removal and used for regression; the fitted regression line is shown in blue, with the maximum and minimum points on this line (red) chosen to construct the tie-point equations; the black line denotes the 1:1 line.
Figure 8. Trend between the slopes obtained for each image in the dataset and the corresponding DLS-recorded irradiance. The green line denotes the variation in slopes derived for each image; the blue dashed line represents the change in irradiance. Yellow dots highlight images capturing the reference panels, whose slopes were calculated from observations of those panels.
Figure 9. Overview of all reflectance orthomosaics for each method (ELM, DLS-CRP, DLS-LCRP, optimized RBA, and RBA-Plant) across the green, red, red-edge, and NIR bands. The bottom layer displays the false-color composite image (color infrared, CIR).
Figure 10. Reflectance conversion results for tie points in side-overlapping regions between two example image pairs in the green channel under different illumination conditions. Points indicate the respective reflectance values of tie points in the two images; ellipses show the 95% confidence ellipses.
Figure 11. Trend of the coefficient of variation (CV%) of orthomosaic-extracted reflectance for sampled plants within the potato monoculture field.
Figure 12. Trend of the coefficient of variation (CV%) of orthomosaic-extracted reflectance for sampled plants within the potato stripcropping field.
Figure 13. Trend of RMSE with increasing parameter ω for each channel.
15 pages, 9712 KiB  
Article
Oilseed Rape Yield Prediction from UAVs Using Vegetation Index and Machine Learning: A Case Study in East China
by Hao Hu, Yun Ren, Hongkui Zhou, Weidong Lou, Pengfei Hao, Baogang Lin, Guangzhi Zhang, Qing Gu and Shuijin Hua
Agriculture 2024, 14(8), 1317; https://doi.org/10.3390/agriculture14081317 - 8 Aug 2024
Cited by 1 | Viewed by 1477
Abstract
Yield prediction is an important part of agricultural management and crop policy making. In recent years, unmanned aerial vehicles (UAVs) and spectral sensor technology have been widely used in crop production. This study aims to evaluate the ability of UAVs equipped with spectral sensors to predict oilseed rape yield. In an experiment, RGB and hyperspectral images were captured using a UAV at the seedling (S1), budding (S2), flowering (S3), and pod (S4) stages of oilseed rape plants. Canopy reflectance and spectral indices of oilseed rape were extracted and calculated from the hyperspectral images. After correlation analysis and principal component analysis (PCA), input spectral indices were screened to build yield prediction models using random forest regression (RF), multiple linear regression (MLR), and support vector machine regression (SVM). The results showed that UAVs equipped with spectral sensors have great potential for predicting crop yield at a large scale. Machine learning approaches such as RF can improve the accuracy of yield models compared with traditional methods (e.g., MLR). The RF-based training model had the highest coefficient of determination (R², 0.925) and the lowest relative root mean square error (RRMSE, 5.91%). In testing, the MLR-based model had the highest R² (0.732) and the lowest RRMSE (11.26%). Moreover, S2 was the best stage for predicting oilseed rape yield compared with the other growth stages. This study demonstrates relatively accurate crop yield prediction and provides valuable insight for field crop management.
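The RF/MLR/SVM comparison described here is straightforward to set up with scikit-learn. In the sketch below, `vi_features` and `yield_t_ha` are synthetic placeholders standing in for the PCA-screened spectral indices and plot-level oilseed rape yield; the metric definitions (R², RRMSE as RMSE relative to the mean observed yield) follow common usage, not necessarily the paper's exact computation.

```python
# A minimal sketch comparing RF, MLR, and SVM yield models on placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(2)
vi_features = rng.normal(size=(120, 5))            # placeholder screened spectral indices
yield_t_ha = vi_features @ rng.normal(size=5) + 3.0 + rng.normal(scale=0.2, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(vi_features, yield_t_ha, random_state=0)
models = {
    "RF": RandomForestRegressor(n_estimators=500, random_state=0),
    "MLR": LinearRegression(),
    "SVM": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    pred = m.predict(X_te)
    rrmse = 100 * np.sqrt(mean_squared_error(y_te, pred)) / y_te.mean()  # relative RMSE (%)
    print(f"{name}: R2={r2_score(y_te, pred):.3f}, RRMSE={rrmse:.2f}%")
```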
Figures:
Figure 1. Location of the study area and experiment field.
Figure 2. Development workflow for the yield prediction model.
Figure 3. Oilseed rape yield under N fertilizer (F1, F2, and F3), herbicide (NHB and HB), density (D1, D2, and D3), and special fertilizer (CF and SF) treatments.
Figure 4. Reflectance at various growth stages of oilseed rape plants (A) and correlation between reflectance and yield at various growth stages (B).
Figure 5. Relationship between VIs and oilseed rape yield at the S1, S2, S3, and S4 growth stages. * indicates a significant correlation at p < 0.05.
Figure 6. PCA plot of VIs and oilseed rape yield at the S1, S2, S3, and S4 stages.
Figure 7. Measured and predicted oilseed rape yield obtained by the RF model at the S1, S2, S3, and S4 stages.
Figure 8. Measured and predicted oilseed rape yield obtained by the MLR model at the S1, S2, S3, and S4 stages.
Figure 9. Measured and predicted oilseed rape yield obtained by the SVM model at the S1, S2, S3, and S4 stages.
Figure 10. Measured and predicted yields at the S1, S2, S3, and S4 stages.
18 pages, 5238 KiB  
Article
Research on the Maturity Detection Method of Korla Pears Based on Hyperspectral Technology
by Jiale Liu and Hongbing Meng
Agriculture 2024, 14(8), 1257; https://doi.org/10.3390/agriculture14081257 - 30 Jul 2024
Cited by 3 | Viewed by 1195
Abstract
In this study, hyperspectral imaging technology covering a wavelength range of 450 to 1000 nm was used to collect spectral data from 160 Korla pear samples at four maturity stages (immature, semimature, mature, and overripe). To ensure high-quality data, multiple preprocessing techniques, including multiplicative scatter correction (MSC), standard normal variate (SNV), and normalization, were employed. Based on these preprocessed data, a custom convolutional neural network model (CNN-S) was constructed and trained to classify the maturity stages of Korla pears. Additionally, a BP neural network model built on the sugar-content characteristic wavelengths was used for maturity assessment. The results showed that this BP model effectively discriminated the maturity stages of the pears, with comprehensive recognition rates of 98.5%, 93.5%, and 90.5% for the training, testing, and validation sets, respectively. Furthermore, the combination of hyperspectral imaging and the custom CNN-S model significantly enhanced maturity detection: compared with a traditional CNN, the CNN-S model improved test-set accuracy by nearly 10%. Moreover, the CNN-S model outperformed techniques based on partial least squares discriminant analysis (PLS-DA) and support vector machines (SVM) in capturing hyperspectral data features, showing superior generalization capability and detection efficiency. The strong performance of this method in practical applications supports its potential in smart agriculture, providing a more efficient and accurate solution for agricultural product quality detection.
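Two of the named preprocessing steps, SNV and MSC, have standard textbook definitions that the following sketch implements for a (samples × wavelengths) spectral matrix; this illustrates the techniques generically and is not the authors' pipeline.

```python
# A hedged sketch of SNV and MSC spectral preprocessing.
import numpy as np

def snv(spectra: np.ndarray) -> np.ndarray:
    """Standard normal variate: centre and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def msc(spectra: np.ndarray, reference=None) -> np.ndarray:
    """Multiplicative scatter correction against a reference (mean) spectrum."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, deg=1)  # fit s ~ slope*ref + intercept
        corrected[i] = (s - intercept) / slope        # remove additive and multiplicative scatter
    return corrected

# 160 samples x 256 wavelengths of placeholder reflectance values:
spectra = np.random.default_rng(3).uniform(0.2, 0.8, size=(160, 256))
print(snv(spectra).shape, msc(spectra).shape)
```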
Figures:
Figure 1. Location of the experimental field.
Figure 2. Procedure of spectral processing and sweetness determination in this study.
Figure 3. Block diagram of the hyperspectral imaging system. 1: computer; 2: imaging spectrometer; 3: lens; 4: fiber-optic halogen lamp; 5: motorized panning stage.
Figure 4. Measurement of Brix at this location.
Figure 5. Hyperspectral images of Korla pears at different stages.
Figure 6. Average spectra for the different stages.
Figure 7. Average Brix values of Korla pears at different stages.
Figure 8. Successive projections algorithm used to extract characteristic wavelengths.
Figure 9. Effect of different spectral preprocessing methods on the raw spectral data.
Figure 10. BP model prediction-set discrimination results.
Figure 11. Model performance summary.
20 pages, 2415 KiB  
Article
YOLOv8-RCAA: A Lightweight and High-Performance Network for Tea Leaf Disease Detection
by Jingyu Wang, Miaomiao Li, Chen Han and Xindong Guo
Agriculture 2024, 14(8), 1240; https://doi.org/10.3390/agriculture14081240 - 27 Jul 2024
Cited by 1 | Viewed by 1134
Abstract
Deploying deep convolutional neural networks on agricultural devices with limited resources is challenging due to their large number of parameters. Existing lightweight networks alleviate this problem but suffer from low performance. To this end, we propose a novel lightweight network named YOLOv8-RCAA (YOLOv8-RepVGG-CBAM-Anchorfree-ATSS), which aims to locate and detect tea leaf diseases with high accuracy and efficiency. Specifically, we employ RepVGG to replace CSPDarkNet63 to enhance feature extraction capability and inference efficiency. Then, we introduce CBAM attention into the FPN and PAN of the neck layer to enhance the model's perception of channel and spatial features. Additionally, the anchor-based detection head is replaced by an anchor-free head to further accelerate inference. Finally, we adopt the ATSS algorithm to adapt the strategy for allocating positive and negative samples during training, further enhancing performance. Extensive experiments show that our model achieves precision, recall, F1 score, and mAP of 98.23%, 85.34%, 91.33%, and 98.14%, outperforming traditional models by 4.22~6.61%, 2.89~4.65%, 3.48~5.52%, and 4.64~8.04%, respectively. Moreover, the model has near-real-time inference speed, which provides technical support for deployment on agricultural devices. This study can reduce the labor costs associated with detecting and preventing tea leaf diseases, and it is expected to promote the integration of rapid disease detection into agricultural machinery, advancing the implementation of AI in agriculture.
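The CBAM module mentioned above has a well-known published form (channel attention followed by spatial attention, per Woo et al., 2018); the following compact PyTorch sketch shows that standard block, not the authors' exact code, and the reduction ratio and kernel size are common defaults assumed here.

```python
# A compact sketch of a standard CBAM block of the kind inserted into the neck.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(                     # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: pool over space, pass through the shared MLP, gate channels.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: pool over channels, convolve, gate locations.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(1, 64, 80, 80)                    # a placeholder neck feature map
print(CBAM(64)(feat).shape)                          # torch.Size([1, 64, 80, 80])
```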
Figures:
Figure 1. Tea plantation scene.
Figure 2. Sample images of tea leaf diseases.
Figure 3. YOLOv8-RCAA network architecture.
Figure 4. Structural re-parameterization process.
Figure 5. CBAM network structure.
Figure 6. CAM network structure.
Figure 7. SAM network structure.
Figure 8. Experimental design procedure.
Figure 9. Confusion matrix of the YOLOv8-RCAA model.
Figure 10. Heat map comparison results.
17 pages, 5407 KiB  
Article
Variable-Rate Fertilization for Summer Maize Using Combined Proximal Sensing Technology and the Nitrogen Balance Principle
by Peng Zhou, Yazhou Ou, Wei Yang, Yixiang Gu, Yinuo Kong, Yangxin Zhu, Chengqian Jin and Shanshan Hao
Agriculture 2024, 14(7), 1180; https://doi.org/10.3390/agriculture14071180 - 18 Jul 2024
Viewed by 1213
Abstract
Soil is a heterogeneous medium that exhibits considerable variability in both space and time. Proper management of field variability using variable-rate fertilization (VRF) techniques is essential to maximize crop input-output ratios and resource utilization. Implementing VRF technology on a localized scale is recommended to increase crop yield, decrease input costs, and reduce the negative impact on the surrounding environment. This study assessed the agronomic and environmental viability of implementing VRF during the cultivation of summer maize, using an on-the-go detector of soil total nitrogen (STN) to measure STN content in the test fields. A spatial delineation approach was then applied to divide the experimental field into multiple management zones, and the amount of fertilizer applied in each zone was determined from the sensor-detected STN. Analysis of the final yield and economic benefits indicates that plots under VRF treatments attained an average summer maize grain yield of 7275 kg ha⁻¹, outperforming plots under uniform-rate fertilization (URF), which yielded 6713 kg ha⁻¹. One-way ANOVA gave yield p-values for the two fertilization methods of 6.406 × 10⁻¹⁵, 5.202 × 10⁻¹⁵, 2.497 × 10⁻¹⁵, and 3.199 × 10⁻¹⁵, respectively, indicating that the yield differences between the two methods were statistically significant. This led to an average yield increase of 8.37% and a gross profit margin of USD 153 ha⁻¹. In plots using VRF, the average nitrogen (N) fertilizer application rate was 627 kg ha⁻¹, versus 750 kg ha⁻¹ in plots using URF: a 16.4% reduction in N fertilizer use. As a result, production costs fell by USD 37.5 ha⁻¹, achieving increased yield with less applied fertilizer. Moreover, in plots where the VRF method was applied, STN remained balanced despite the reduced N application, as can be deduced from the variance in summer maize grain yield across fertilization treatments in the comparative experiment. Future research should address remaining constraints by incorporating supplementary soil data, such as phosphorus, potassium, organic matter, and other pertinent variables, to further optimize fertilization methodologies.
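The nitrogen-balance principle named in the title implies that each management zone's prescription is the target crop N demand minus an estimate of what the sensed soil total nitrogen already supplies. The sketch below expresses only that arithmetic; the target demand and the STN-to-available-N conversion factor are invented placeholders, not the paper's calibration.

```python
# A hedged sketch of the N-balance idea behind a graded VRF prescription.
# Both coefficients below are illustrative placeholders.
def n_rate_for_zone(stn_g_per_kg: float,
                    target_n_kg_ha: float = 750.0,
                    stn_to_available_kg_ha: float = 180.0) -> float:
    """Return a graded N fertilizer rate (kg/ha) for one management zone."""
    soil_supply = stn_to_available_kg_ha * stn_g_per_kg   # assumed STN-to-N conversion
    return max(target_n_kg_ha - soil_supply, 0.0)         # never prescribe a negative rate

for stn in (0.4, 0.7, 1.0):   # low, medium, high sensed STN (g/kg)
    print(f"STN {stn:.1f} g/kg -> {n_rate_for_zone(stn):.0f} kg N/ha")
```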
Figures:
Figure 1. Flow diagram explaining the different steps performed during the study.
Figure 2. On-the-go detector, subsoiler, and the optical path of the detection unit: (a) detector; (b) subsoiler and optical path of the detection unit.
Figure 3. Geographic location of the experimental field in Beijing: (a) schematic diagram of the experimental design of the two parts of the field; (b) position of each soil spectrum taken by the applicator throughout the two parts.
Figure 4. Schematic representation of the graded fertilization rate for VRF.
Figure 5. Map of STN content in the VRF plots (plots 1 to 4): (a) STN content at the basal dressing stage; (b) STN content at the jointing stage.
Figure 6. Fertilizing maps for the 50 MZs: (a) N fertilizer application rate at the basal dressing stage; (b) N fertilizer application rate at the jointing stage.
Figure 7. Histogram of summer maize grain yield data from plots 1 to 5.
Figure 8. Heat map of summer maize grain yield in the 50 MZs. The bottom labels denote plots 1 to 5; the right-hand scale maps the heat-map colors to yield variations in the MZs from 6600 kg ha⁻¹ to 7400 kg ha⁻¹.
20 pages, 23128 KiB  
Article
Unmanned Aerial Vehicle-Measured Multispectral Vegetation Indices for Predicting LAI, SPAD Chlorophyll, and Yield of Maize
by Pradosh Kumar Parida, Eagan Somasundaram, Ramanujam Krishnan, Sengodan Radhamani, Uthandi Sivakumar, Ettiyagounder Parameswari, Rajagounder Raja, Silambiah Ramasamy Shri Rangasami, Sundapalayam Palanisamy Sangeetha and Ramalingam Gangai Selvi
Agriculture 2024, 14(7), 1110; https://doi.org/10.3390/agriculture14071110 - 9 Jul 2024
Cited by 6 | Viewed by 1749
Abstract
Predicting crop yield before harvest is pivotal for agricultural policy and strategic decision making. Despite global agricultural targets, labour-intensive surveys for yield estimation pose challenges. Using unmanned aerial vehicle (UAV)-based multispectral sensors, this study assessed crop phenology and biotic stress conditions using various spectral vegetation indices, with the goal of enhancing the accuracy of predicting key agricultural parameters such as leaf area index (LAI), soil and plant analyser development (SPAD) chlorophyll, and grain yield of maize. The findings show that during the kharif season, the wide dynamic range vegetation index (WDRVI) achieved the best correlation coefficient (R) and coefficient of determination (R²) and the lowest root mean square error (RMSE), at 0.92, 0.86, and 0.14, respectively. During the rabi season, the atmospherically resistant vegetation index (ARVI) achieved the highest R and R² and the lowest RMSE, at 0.83, 0.79, and 0.15, respectively, indicating better accuracy in predicting LAI. Conversely, the normalised difference red-edge index (NDRE) during the kharif season and the modified chlorophyll absorption ratio index (MCARI) during the rabi season were the most accurate predictors of SPAD chlorophyll, with R values of 0.91 and 0.94, R² values of 0.83 and 0.82, and RMSE values of 2.07 and 3.10, respectively. The most effective indices for LAI prediction during the kharif season (WDRVI and NDRE) and for SPAD chlorophyll prediction during the rabi season (ARVI and MCARI) were then used to construct a yield model via stepwise regression analysis. Integrating the predicted LAI and SPAD chlorophyll values into the model produced higher accuracy than the individual predictions: R² values of 0.51 and 0.74 and RMSE values of 9.25 and 6.72 during the kharif and rabi seasons, respectively. These findings underscore the utility of UAV-based multispectral imaging in predicting crop yields, aiding sustainable crop management practices and benefiting farmers and policymakers alike.
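The indices highlighted above (WDRVI, ARVI, NDRE, MCARI) all have standard published formulas computable directly from band reflectances. The sketch below applies those published forms; the weighting coefficient alpha for WDRVI and gamma for ARVI are common default choices, not values taken from this paper.

```python
# A minimal sketch of the four vegetation indices from UAV band reflectances,
# using their standard published formulas.
import numpy as np

def wdrvi(nir, red, alpha=0.2):
    # Wide dynamic range VI: alpha damps NIR to avoid saturation at high biomass.
    return (alpha * nir - red) / (alpha * nir + red)

def arvi(nir, red, blue, gamma=1.0):
    # Atmospherically resistant VI: blue band corrects red for aerosol effects.
    rb = red - gamma * (blue - red)
    return (nir - rb) / (nir + rb)

def ndre(nir, red_edge):
    # Normalised difference red-edge index.
    return (nir - red_edge) / (nir + red_edge)

def mcari(red_edge, red, green):
    # Modified chlorophyll absorption ratio index.
    return ((red_edge - red) - 0.2 * (red_edge - green)) * (red_edge / red)

# Example with single placeholder reflectance values per band:
nir, red, green, blue, red_edge = 0.45, 0.06, 0.09, 0.04, 0.20
print(wdrvi(nir, red), arvi(nir, red, blue), ndre(nir, red_edge), mcari(red_edge, red, green))
```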
Figures:
Graphical abstract.
Figure 1. Location of the experimental farm in this study.
Figure 2. Variation in weather parameters during the crop growing period.
Figure 3. Flowchart of multispectral image processing.
Figure 4. Treatment-wise representation of various vegetation indices (VIs) and field-observed values of leaf area index (LAI) and SPAD chlorophyll by heatmap: (a) kharif maize season; (b) rabi maize season. Note: the fourteen treatments are 1: PE AA@20%; 2: PE CU@10% + AA@10%; 3: PE CU@10% + CS@10% + AA@10%; 4: PE CS@10% + AA@10% + TC@10%; 5: EPOE AA@20%; 6: EPOE CU@10% + AA@10%; 7: EPOE CU@10% + CS@10% + AA@10%; 8: EPOE CS@10% + AA@10% + TC@10%; 9: POE AA@20%; 10: POE CU@10% + AA@10%; 11: POE CU@10% + CS@10% + AA@10%; 12: POE CS@10% + AA@10% + TC@10%; 13: hand weeding; 14: weedy check (CU: cow urine, CS: common salt, AA: acetic acid, TC: Terminalia chebula, PE: pre-emergence, EPOE: early postemergence, POE: postemergence).
Figure 5. Spatial variability of vegetation indices (VIs) during kharif maize: (a) normalised difference vegetation index (NDVI); (b) green normalised difference vegetation index (GNDVI); (c) normalised difference red-edge index (NDRE); (d) enhanced vegetation index (EVI); (e) excess green vegetation index (ExGVI); (f) wide dynamic range vegetation index (WDRVI); (g) atmospherically resistant vegetation index (ARVI); (h) green chlorophyll vegetation index (GCVI); (i) modified chlorophyll absorption ratio index (MCARI).
Figure 6. Spatial variability of vegetation indices (VIs) during the rabi season: (a) normalised difference vegetation index (NDVI); (b) green normalised difference vegetation index (GNDVI); (c) normalised difference red-edge index (NDRE); (d) enhanced vegetation index (EVI); (e) excess green vegetation index (ExGVI); (f) wide dynamic range vegetation index (WDRVI); (g) atmospherically resistant vegetation index (ARVI); (h) green chlorophyll vegetation index (GCVI); (i) modified chlorophyll absorption ratio index (MCARI).
Figure 7. Correlation of vegetation indices (VIs) with leaf area index (LAI) and SPAD chlorophyll of the kharif and rabi maize. Levels of significance: * p < 0.05, ** p < 0.01, and *** p < 0.001. K: kharif; R: rabi.
Figure 8. Accuracy assessment between field-observed (o) and predicted (p) values of different maize parameters during the kharif and rabi seasons: (a) leaf area index (LAI); (b) SPAD chlorophyll; (c) yield. R²: coefficient of determination.