Novel Applications of Optical Sensors and Machine Learning in Agricultural Monitoring—2nd Edition

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Digital Agriculture".

Deadline for manuscript submissions: closed (25 June 2024) | Viewed by 17949

Special Issue Editors


Guest Editor
Key Laboratory of Quantitative Remote Sensing in Agriculture, Ministry of Agriculture and Rural Affairs, Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
Interests: UAV; biomass; nutrient management; yield mapping

Guest Editor
Institute of Botany, Chinese Academy of Sciences, Beijing 100093, China
Interests: remote sensing; climate change; machine learning; ecosystem model

Guest Editor
Agricultural Information Institute, Chinese Academy of Agricultural Sciences, Beijing 100081, China
Interests: UAV; smart orchard; pest management; pest risk mapping

Guest Editor
Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences, Hangzhou 310021, China
Interests: image segmentation; UAV; machine learning; pattern recognition; IoT

Guest Editor
College of Information and Management Science, Henan Agricultural University, Zhengzhou 450002, China
Interests: remote sensing; precision agriculture; machine learning; crop model; crop mapping

Special Issue Information

Dear Colleagues,

Agricultural production management is facing a new era of intelligence and automation. With developments in sensor technologies, the temporal, spectral, and spatial resolutions of ground-, air-, and space-based platforms have improved markedly. Optical sensors play an essential role in agricultural production management; in particular, monitoring plant health, growth condition, and insect infestation has traditionally required extensive fieldwork.

The processing and analysis of the huge volumes of data produced by different sensors still face many challenges. Machine learning can derive and process agricultural information from optical sensors onboard ground, air, and space platforms. Advances in optical imaging and machine learning have attracted widespread attention, but highly flexible solutions are still needed for the wide variety of agricultural applications.

We believe that sensors, artificial intelligence, and machine learning are not simply scientific experiments but opportunities to make our agricultural production management more efficient and cost-effective, further contributing to the healthy development of nature–human systems.

This Special Issue seeks to compile the latest research on optical sensors and machine learning in agricultural monitoring. The following provides a general (but not exhaustive) overview of subjects that might be relevant:

  • Machine learning approaches for crop health, growth, and yield monitoring.
  • Combined multisource/multi-sensor data to improve crop parameter mapping.
  • Crop-related growth models, artificial intelligence models, algorithms, and precision management.
  • Farmland environmental monitoring and management.
  • Ground, air, and space platform application in precision agriculture.
  • Development and application of field robotics.
  • High-throughput field information surveys.
  • Phenological monitoring.

Dr. Haikuan Feng
Dr. Yanjun Yang
Dr. Ning Zhang
Dr. Chengquan Zhou
Dr. Jibo Yue
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • optical sensor
  • crop mapping
  • precision agriculture

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)


Research

18 pages, 12292 KiB  
Article
Segmentation and Proportion Extraction of Crop, Crop Residues, and Soil Using Digital Images and Deep Learning
by Guangfu Gao, Shanxin Zhang, Jianing Shen, Kailong Hu, Jia Tian, Yihan Yao, Qingjiu Tian, Yuanyuan Fu, Haikuan Feng, Yang Liu and Jibo Yue
Agriculture 2024, 14(12), 2240; https://doi.org/10.3390/agriculture14122240 - 6 Dec 2024
Viewed by 473
Abstract
Conservation tillage involves covering the soil surface with crop residues after harvest, typically through reduced or no-tillage practices. This approach increases the soil organic matter, improves the soil structure, prevents erosion, reduces water loss, promotes microbial activity, and enhances root development. Therefore, accurate information on crop residue coverage is critical for monitoring the implementation of conservation tillage practices. This study collected “crop–crop residues–soil” images from wheat-soybean rotation fields using mobile phones to create calibration, validation, and independent validation datasets. We developed a deep learning model named crop–crop residue–soil segmentation network (CCRSNet) to enhance the performance of cropland “crop–crop residues–soil” image segmentation and proportion extraction. The model enhances the segmentation accuracy and proportion extraction by extracting and integrating shallow and deep image features and attention modules to capture multi-scale contextual information. Our findings indicated that (1) lightweight models outperformed deeper networks for “crop–crop residues–soil” image segmentation. When CCRSNet employed a deep network backbone (ResNet50), its feature extraction capability was inferior to that of lighter models (VGG16). (2) CCRSNet models that integrated shallow and deep features with attention modules achieved a high segmentation and proportion extraction performance. Using VGG16 as the backbone, CCRSNet achieved an mIoU of 92.73% and a PA of 96.23% in the independent validation dataset, surpassing traditional SVM and RF models. The RMSE for the proportion extraction accuracy ranged from 1.05% to 3.56%. These results demonstrate the potential of CCRSNet for the accurate, rapid, and low-cost detection of crop residue coverage. However, the generalizability and robustness of deep learning models depend on the diversity of calibration datasets. Further experiments across different regions and crops are required to validate this method’s accuracy and applicability for “crop–crop residues–soil” image segmentation and proportion extraction.
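The proportion-extraction step described above reduces to simple arithmetic on the predicted label map. Below is a minimal sketch (not the authors' code) of how class proportions and a proportion RMSE of the kind reported could be computed from any "crop / crop residue / soil" segmentation output; the masks here are random placeholders.

```python
import numpy as np

CLASSES = {0: "crop", 1: "crop residue", 2: "soil"}

def class_proportions(mask: np.ndarray) -> dict:
    """Fraction of image pixels assigned to each class (mask holds labels 0..2)."""
    return {name: float(np.mean(mask == cls)) for cls, name in CLASSES.items()}

def proportion_rmse(pred_masks, true_masks, cls: int) -> float:
    """RMSE (in percent) between predicted and reference proportions of one class."""
    p = np.array([np.mean(m == cls) for m in pred_masks])
    t = np.array([np.mean(m == cls) for m in true_masks])
    return float(np.sqrt(np.mean((p - t) ** 2)) * 100.0)

# Toy usage with random masks standing in for model output and annotations.
rng = np.random.default_rng(0)
pred = [rng.integers(0, 3, size=(256, 256)) for _ in range(4)]
true = [rng.integers(0, 3, size=(256, 256)) for _ in range(4)]
print(class_proportions(pred[0]))
print("crop-residue proportion RMSE (%):", proportion_rmse(pred, true, cls=1))
```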
Figures:
Figure 1: Study area and experimental sites. (a) Location of the study area. (b) Soybean experimental field. (c) Field “crop–crop residue–soil” digital images, fCR (crop residue coverage).
Figure 2: Original image and annotated image.
Figure 3: Methodology framework.
Figure 4: CCRSNet architecture.
Figure 5: mIoU and loss curves of the CCRSNet semantic segmentation network with different backbone networks during calibration.
Figure 6: Visualization of segmentation results for processed images using the CCRSNet model with VGG16 as the backbone network.
Figure 7: Visualization of segmentation results for original images using the CCRSNet model with VGG16 as the backbone network.
Figure 8: Class activation mapping using the CCRSNet model with VGG16 as the backbone network.
Figure 9: Proportion extraction of crop, crop residues, and soil using digital images and deep learning based on the TVD and IVD datasets. (a) Crop (TVD-vali). (b) Crop residues (TVD-vali). (c) Soil (TVD-vali). (d) Crop (IVD). (e) Crop residues (IVD). (f) Soil (IVD).
Figure A1: Ablation experiment architectures. (a) CCRSNet without the deep and shallow feature structure; (b) CCRSNet without the attention module.
20 pages, 7208 KiB  
Article
Combining UAV Multispectral Imaging and PROSAIL Model to Estimate LAI of Potato at Plot Scale
by Shuang Li, Yongxin Lin, Ping Zhu, Liping Jin, Chunsong Bian and Jiangang Liu
Agriculture 2024, 14(12), 2159; https://doi.org/10.3390/agriculture14122159 - 27 Nov 2024
Viewed by 488
Abstract
Accurate and rapid estimation of the leaf area index (LAI) is essential for assessing crop growth and nutritional status, guiding farm management, and providing valuable phenotyping data for plant breeding. Compared to the tedious and time-consuming manual measurements of the LAI, remote sensing has emerged as a valuable tool for rapid and accurate estimation of the LAI; however, the empirical inversion modeling methods face challenges of low efficiency for actual LAI measurements and poor model interpretability. The integration of radiative transfer models (RTMs) can overcome these problems to some extent. The aim of this study was to explore the potential of combining the PROSAIL model with high-resolution unmanned aerial vehicle (UAV) multispectral imaging to estimate the LAI across different growth stages at the plot scale. In this study, four inversion strategies for estimating the LAI were tested. Firstly, two types of lookup tables (LUTs) were built to estimate the potato LAI of different varieties across different growth stages. Specifically, LUT1 was based on band reflectance, and LUT2 was based on vegetation indices. Secondly, hybrid models combining LUTs generated by PROSAIL with two machine learning algorithms (random forest (RF) and Partial Least Squares Regression (PLSR)) were built to estimate the potato LAI. The coefficient of determination (R2) of the models estimating LAI from the LUTs ranged from 0.24 to 0.64. The hybrid method that integrates UAV multispectral data, PROSAIL, and machine learning significantly improved the accuracy of LAI estimation. Compared to the results based on LUT2, the hybrid model achieved higher accuracy, with the R2 of the inversion model improved by 30% to 263%. The LAI retrieval model using the PROSAIL model and PLSR achieved an R2 as high as 0.87, while the R2 using the RF algorithm ranged from 0.33 to 0.81. The proposed hybrid model, integrating UAV multispectral data, PROSAIL, and PLSR, achieves accuracy comparable to that of the empirical inversion models and can alleviate the labor-intensive process of handheld LAI measurements for building inversion models. Thus, the hybrid approach provides a feasible and efficient strategy for estimating the LAI of potato varieties across different growth stages at the plot scale.
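Lookup-table inversion of the kind described above amounts to matching an observed spectrum against a table of simulated ones. The sketch below illustrates the idea under stated assumptions: the LUT rows would come from PROSAIL simulations generated offline, random numbers stand in for them here, and averaging the k best-matching entries is one common way to stabilize the ill-posed inversion.

```python
import numpy as np

rng = np.random.default_rng(1)
n_entries, n_bands = 5000, 5                      # e.g. five multispectral bands
lut_lai = rng.uniform(0.0, 6.0, n_entries)        # LAI behind each simulated spectrum
lut_refl = rng.uniform(0.0, 0.6, (n_entries, n_bands))  # placeholder PROSAIL reflectances

def invert_lai(obs_refl: np.ndarray, k: int = 50) -> float:
    """Estimate LAI as the mean over the k best-matching LUT entries (RMSE cost)."""
    cost = np.sqrt(np.mean((lut_refl - obs_refl) ** 2, axis=1))
    best = np.argsort(cost)[:k]                   # averaging k entries damps ill-posedness
    return float(lut_lai[best].mean())

obs = rng.uniform(0.0, 0.6, n_bands)              # one plot's UAV band reflectance
print("estimated LAI:", round(invert_lai(obs), 2))
```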
Figures:
Figure 1: Location of the study area and field plot distribution.
Figure 2: Unmanned aerial vehicle multispectral image acquisition system.
Figure 3: Spectral response function of the RedEdge-P multispectral sensor used.
Figure 4: Leaf area photo background removal effect.
Figure 5: Local sensitivity analysis of PROSAIL model parameters. (a,b) Variation in spectral reflectance at 400–2500 nm for 3 < LAI < 6 and 0 < LAI < 3; (c–o) effects of Cab, Car, Cm, Cw, Cbrown, hspot, ALA, N, skyl, psoil, tts, tto, and psi on the spectral reflectance at 400–2500 nm, respectively.
Figure 6: Global sensitivity analysis of the main PROSAIL model parameters: (a) result of the global sensitivity analysis for 0 < LAI < 3; (b) result for 3 < LAI < 6.
Figure 7: Results of LAI inversion using LUT1.
Figure 8: LAI inversion results of potato varieties across all growth stages using four strategies. The results are for the model validation set, and the hybrid model results are for model validation using measured data.
16 pages, 3804 KiB  
Article
Detection of Mechanical Damage in Corn Seeds Using Hyperspectral Imaging and the ResNeSt_E Deep Learning Network
by Hua Huang, Yinfeng Liu, Shiping Zhu, Chuan Feng, Shaoqi Zhang, Lei Shi, Tong Sun and Chao Liu
Agriculture 2024, 14(10), 1780; https://doi.org/10.3390/agriculture14101780 - 10 Oct 2024
Viewed by 750
Abstract
Corn is one of the global staple grains and the largest grain crop in China. During harvesting, grain separation, and corn production, corn is susceptible to mechanical damage including surface cracks, internal cracks, and breakage. However, the internal cracks are difficult to observe. In this study, hyperspectral imaging was used to detect mechanical damage in corn seeds. The corn seeds were divided into four categories that included intact, broken, internally cracked, and surface-cracked. This study compared three feature extraction methods, including principal component analysis (PCA), kernel PCA (KPCA), and factor analysis (FA), as well as a joint feature extraction method consisting of a combination of these methods. The dimensionality reduction results of the three methods (FA + KPCA, KPCA + FA, and PCA + FA) were combined to form a new combined dataset and improve the classification. We then compared the effects of six classification models (ResNet, ShuffleNet-V2, MobileNet-V3, ResNeSt, EfficientNet-V2, and MobileNet-V4) and proposed a ResNeSt_E network based on the ResNeSt and efficient multi-scale attention modules. The accuracy of ResNeSt_E reached 99.0%, and this was 0.4% higher than that of EfficientNet-V2 and 0.7% higher than that of ResNeSt. Additionally, the number of parameters and memory requirements were reduced and the frames per second were improved. We compared two dimensionality reduction methods: KPCA + FA and PCA + FA. The classification accuracies of the two methods were the same; however, PCA + FA was much more efficient than KPCA + FA and was more suitable for practical detection. The ResNeSt_E network could detect both internal and surface cracks in corn seeds, making it suitable for mobile terminal applications. The results demonstrated that detecting mechanical damage in corn seeds using hyperspectral images was possible. This study provides a reference for mechanical damage detection methods for corn.
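A hedged sketch of the joint "PCA + FA" feature extraction named above: each spectrum is reduced with PCA and FA separately, and the two reduced representations are concatenated before classification. Shapes and component counts are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 288))          # 200 seed spectra x 288 bands (made-up sizes)

pca = PCA(n_components=10).fit(X)
fa = FactorAnalysis(n_components=10).fit(X)
X_joint = np.hstack([pca.transform(X), fa.transform(X)])  # combined feature set
print(X_joint.shape)                     # (200, 20) features for a downstream classifier
```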
Figures:
Figure 1: Hyperspectral true-color images of corn seeds: (a) IN, (b) BR, (c) IC, and (d) SC.
Figure 2: Overall process.
Figure 3: Hyperspectral image processing procedure.
Figure 4: Feature extraction.
Figure 5: Schematic depicting the functioning of ResNeSt_E.
Figure 6: Training process of ResNeSt_E on the combined dataset: (a) training set loss function, (b) validation set loss function, (c) training set accuracy, (d) assessment indicators for the classification of the validation set.
Figure 7: Confusion matrix of ResNeSt_E on the test set: (a) FA + KPCA, (b) KPCA + FA, (c) PCA + FA.
19 pages, 4569 KiB  
Article
Effects of Variety and Growth Stage on UAV Multispectral Estimation of Plant Nitrogen Content of Winter Wheat
by Meiyan Shu, Zhiyi Wang, Wei Guo, Hongbo Qiao, Yuanyuan Fu, Yan Guo, Laigang Wang, Yuntao Ma and Xiaohe Gu
Agriculture 2024, 14(10), 1775; https://doi.org/10.3390/agriculture14101775 - 9 Oct 2024
Viewed by 811
Abstract
The accurate estimation of nitrogen content in crop plants is the basis of precise nitrogen fertilizer management. Unmanned aerial vehicle (UAV) imaging technology has been widely used to rapidly estimate the nitrogen in crop plants, but the accuracy will still be affected by the variety, the growth stage, and other factors. We aimed to (1) analyze the correlation between the plant nitrogen content of winter wheat and spectral, texture, and structural information; (2) compare the accuracy of nitrogen estimation at single versus multiple growth stages; (3) assess the consistency of UAV multispectral images in estimating nitrogen content across different wheat varieties; (4) identify the best model for estimating plant nitrogen content (PNC) by comparing five machine learning algorithms. The results indicated that for the estimation of PNC across all varieties and growth stages, the random forest regression (RFR) model performed best among the five models, obtaining R2, RMSE, MAE, and MAPE values of 0.90, 0.10%, 0.08, and 0.06%, respectively. Additionally, the RFR estimation model achieved commendable accuracy in estimating PNC in three different varieties, with R2 values of 0.91, 0.93, and 0.72. For the dataset of the single growth stage, Gaussian process regression (GPR) performed best among the five regression models, with R2 values ranging from 0.66 to 0.81. Due to the varying nitrogen sensitivities, the accuracy of UAV multispectral nitrogen estimation was also different among the three varieties. Among the three varieties, the estimation accuracy of SL02-1 PNC was the worst. This study is helpful for the rapid diagnosis of crop nitrogen nutrition through UAV multispectral imaging technology.
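For readers who want the shape of the modeling step, here is an illustrative random forest regression sketch reporting the four metrics used above (R2, RMSE, MAE, MAPE); the features and PNC values are synthetic, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import (r2_score, mean_absolute_error,
                             mean_absolute_percentage_error, mean_squared_error)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                   # spectral/texture/structure features (synthetic)
y = 0.4 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.1, size=300) + 2.5  # fake PNC (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:  ", round(r2_score(y_te, pred), 3))
print("RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 3))
print("MAE: ", round(mean_absolute_error(y_te, pred), 3))
print("MAPE:", round(mean_absolute_percentage_error(y_te, pred), 3))
```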
Figures:
Figure 1: The geographical location of the study area and the distribution of experimental plots.
Figure 2: The main technical flowchart of this study. Note: PNC, plant nitrogen content; UAV, unmanned aerial vehicle; GNDVI, green-band normalized vegetation index; NDVI, normalized difference vegetation index; CARI, chlorophyll absorption ratio index; OSAVI, optimized soil-adjusted vegetation index; EVI, enhanced vegetation index; TVI, triangle vegetation index; SD, standard deviation; CV, variable coefficient; PLSR, partial least squares regression; RFR, random forest regression; SVR, support vector machine regression; GPR, Gaussian process regression; NNETR, neural network regression model; R2, coefficient of determination; RMSE, root mean square error; MAE, mean absolute error; MAPE, mean absolute percentage error. *, **, and *** indicate significant differences at p < 0.05, p < 0.01, and p < 0.001.
Figure 3: Plant nitrogen content of wheat at different nitrogen levels and growth stages. Note: PNC, plant nitrogen content; three wheat varieties: HG 35, Hengguan 35; ML 1, Malan No. 1; SL 02-1, Shiluan 02-1.
Figure 4: Estimation results of the PNC of multiple varieties in a single growth stage based on GPR. Note: PNC, plant nitrogen content; GPR, Gaussian process regression.
Figure 5: Optimal nitrogen estimation results of wheat plants of a single variety in multiple growth periods. Note: PNC, plant nitrogen content; three wheat varieties: HG 35, Hengguan 35; ML 1, Malan No. 1; SL 02-1, Shiluan 02-1.
Figure 6: Nitrogen content and measured values of wheat plants at different growth stages estimated based on RFR. Note: PNC, plant nitrogen content; RFR, random forest regression.
19 pages, 11653 KiB  
Article
Influence of Vegetation Phenology on the Temporal Effect of Crop Fractional Vegetation Cover Derived from Moderate-Resolution Imaging Spectroradiometer Nadir Bidirectional Reflectance Distribution Function–Adjusted Reflectance
by Yinghao Lin, Tingshun Fan, Dong Wang, Kun Cai, Yang Liu, Yuye Wang, Tao Yu and Nianxu Xu
Agriculture 2024, 14(10), 1759; https://doi.org/10.3390/agriculture14101759 - 5 Oct 2024
Viewed by 731
Abstract
Moderate-Resolution Imaging Spectroradiometer (MODIS) Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) products are being increasingly used for the quantitative remote sensing of vegetation. However, the assumption underlying the MODIS NBAR product’s inversion model—that surface anisotropy remains unchanged over the 16-day retrieval period—may be unreliable, especially since the canopy structure of vegetation undergoes stark changes at the start of season (SOS) and the end of season (EOS). Therefore, to investigate the MODIS NBAR product’s temporal effect on the quantitative remote sensing of crops at different stages of the growing seasons, this study selected typical phenological parameters, namely SOS, EOS, and the intervening stable growth of season (SGOS). The PROBA-V bioGEOphysical product Version 3 (GEOV3) Fractional Vegetation Cover (FVC) served as verification data, and the Pearson correlation coefficient (PCC) was used to compare and analyze the retrieval accuracy of FVC derived from the MODIS NBAR product and MODIS Surface Reflectance product. The Anisotropic Flat Index (AFX) was further employed to explore the influence of vegetation type and mixed pixel distribution characteristics on the BRDF shape under different stages of the growing seasons and different FVC; that was then combined with an NDVI spatial distribution map to assess the feasibility of using the reflectance of other characteristic directions besides NBAR for FVC correction. The results revealed the following: (1) Generally, at the SOSs and EOSs, the differences in PCCs before vs. after the NBAR correction mainly ranged from 0 to 0.1. This implies that the accuracy of FVC derived from MODIS NBAR is lower than that derived from MODIS Surface Reflectance. Conversely, during the SGOSs, the differences in PCCs before vs. after the NBAR correction ranged between –0.2 and 0, suggesting the accuracy of FVC derived from MODIS NBAR surpasses that derived from MODIS Surface Reflectance. (2) As vegetation phenology shifts, the ensuing differences in NDVI patterning and AFX can offer auxiliary information for enhanced vegetation classification and interpretation of mixed pixel distribution characteristics, which, when combined with NDVI at characteristic directional reflectance, could enable the accurate retrieval of FVC. Our results provide data support for the BRDF correction timescale effect of various stages of the growing seasons, highlighting the potential importance of considering how they differentially influence the temporal effect of NBAR corrections prior to monitoring vegetation when using the MODIS NBAR product.
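The core accuracy comparison above is a difference of Pearson correlation coefficients. A small sketch of that comparison (assumed workflow, not the authors' code), with synthetic values standing in for the three FVC products:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
fvc_geov3 = rng.uniform(0, 1, 200)                   # GEOV3 reference FVC (synthetic here)
fvc_mod09ga = fvc_geov3 + rng.normal(0, 0.08, 200)   # FVC from directional surface reflectance
fvc_mcd43a4 = fvc_geov3 + rng.normal(0, 0.10, 200)   # FVC from NBAR-corrected reflectance

pcc_before, _ = pearsonr(fvc_mod09ga, fvc_geov3)
pcc_after, _ = pearsonr(fvc_mcd43a4, fvc_geov3)
# Positive (before - after) difference: NBAR-corrected FVC tracked the reference worse
# (as reported at SOS/EOS); negative: it tracked the reference better (as during SGOS).
print("PCC difference (before - after):", round(pcc_before - pcc_after, 3))
```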
Figures:
Figure 1: Spatial extent of the Wancheng District study area (in Henan Province, China). (a) Map of land cover types showing the location of sampling points across the study area; this map came from MCD12Q1 (v061). (b–d) True-color images of the three mixed pixels, obtained from Sentinel-2. The distribution characteristics are as follows: crops above with buildings below (b); crops below with buildings above (c); and buildings in the upper-left corner, crops in the remainder (d).
Figure 2: Monthly average temperature and monthly total precipitation in the study area, from 2017 to 2021.
Figure 3: Data processing flow chart. The green rectangles from top to bottom represent three steps: crop phenological parameter extraction with TIMESAT; Fractional Vegetation Cover (FVC) derivation from MOD09GA and MCD43A4; and accuracy evaluation. Blue solid rectangles refer to a used product or derived results, while blue dashed rectangles refer to the software or model used in this study. NDVI_MOD09GA: NDVI derived from MOD09GA; NDVI_MCD43A4: NDVI derived from MCD43A4; FVC_MOD09GA: FVC derived from MOD09GA; FVC_MCD43A4: FVC derived from MCD43A4; PCC_MOD09GA: Pearson correlation coefficient (PCC) calculated for FVC_MOD09GA and GEOV3 FVC; PCC_MCD43A4: PCC calculated for FVC_MCD43A4 and GEOV3 FVC.
Figure 4: NDVI and EVI time series fitted curves and phenological parameters of crops. SOS: start of season; EOS: end of season; SGOS: stable growth of season.
Figure 5: Spatial distribution of FVC derived from MOD09GA and MCD43A4, and the difference images of FVC. (a–c) FVC derived from MOD09GA, FVC derived from MCD43A4, and the difference between FVC_MOD09GA and FVC_MCD43A4 on 15 November 2020; (d–f) the same for 10 February 2021; (g–i) the same for 30 September 2021.
Figure 6: Pearson correlation coefficients (PCCs) of FVC derived before and after the NBAR correction with GEOV3 FVC at different stages of the growing seasons. (a) PCC_MOD09GA and PCC_MCD43A4 in 2018–2021; (b) scatterplot of numerical differences between PCC_MOD09GA and PCC_MCD43A4. SOS: start of season; EOS: end of season; SGOS: stable growth of season.
Figure 7: NDVI spatial distribution maps of a crop pixel, savanna pixel, and grassland pixel in different stages of the growing seasons. (a–d) Crop. (e–h) Savanna. (i–l) Grassland. SZA: Solar Zenith Angle; FVC: Fractional Vegetation Cover; AFX_RED: Anisotropic Flat Index (AFX) in the red band; AFX_NIR: AFX in the near-infrared band.
Figure 8: NDVI spatial distribution maps of mixed pixels in different stages of the growing seasons. (a–d) Crops above and buildings below. (e–h) Crops below and buildings above. (i–l) Buildings in the upper-left corner and crops in the remainder. SZA: Solar Zenith Angle; FVC: Fractional Vegetation Cover; AFX_RED: AFX in the red band; AFX_NIR: AFX in the near-infrared band.
19 pages, 6791 KiB  
Article
Vegetation Phenology Changes and Recovery after an Extreme Rainfall Event: A Case Study in Henan Province, China
by Yinghao Lin, Xiaoyu Guo, Yang Liu, Liming Zhou, Yadi Wang, Qiang Ge and Yuye Wang
Agriculture 2024, 14(9), 1649; https://doi.org/10.3390/agriculture14091649 - 20 Sep 2024
Viewed by 590
Abstract
Extreme rainfall can severely affect all vegetation types, significantly impacting crop yield and quality. This study aimed to assess the response and recovery of vegetation phenology to an extreme rainfall event (with total weekly rainfall exceeding 500 mm in several cities) in Henan Province, China, in 2021. The analysis utilized multi-sourced data, including remote sensing reflectance, meteorological, and crop yield data. First, the Normalized Difference Vegetation Index (NDVI) time series was calculated from reflectance data on the Google Earth Engine (GEE) platform. Next, the ‘phenofit’ R language package was used to extract the phenology parameters—the start of the growing season (SOS) and the end of the growing season (EOS). Finally, the Statistical Package for the Social Sciences (SPSS, v.26.0.0.0) software was used for Duncan’s analysis, and Matrix Laboratory (MATLAB, v.R2022b) software was used to analyze the effects of rainfall on land surface phenology (LSP) and crop yield. The results showed the following. (1) The extreme rainfall event’s impact on phenology manifested directly as a delay in EOS in the year of the event. In 2021, the EOS of the second growing season was delayed by 4.97 days for cropland, 15.54 days for forest, 13.06 days for grassland, and 12.49 days for shrubland. (2) Resistance was weak in 2021, but recovery was reached in most areas by 2022 and slowed in 2023. (3) In each year, SOS was predominantly negatively correlated with total rainfall in July (64% of cropland area in the first growing season, 53% of grassland area, and 71% of shrubland area). In contrast, the EOS was predominantly positively correlated with rainfall (51% and 54% area of cropland in the first and second growing season, respectively, and 76% of shrubland area); however, crop yields were mainly negatively correlated with rainfall (71% for corn, 60% for beans) and decreased during the year of the event, with negative correlation coefficients between rainfall and yield (−0.02 for corn, −0.25 for beans). This work highlights the sensitivity of crops to extreme rainfall and underscores the need for further research on their long-term recovery.
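The study extracts SOS and EOS with the R 'phenofit' package; as a rough Python analogue (an assumption for illustration, not the paper's method), the sketch below smooths an NDVI series and takes the first and last crossings of a 50%-amplitude threshold as SOS and EOS.

```python
import numpy as np
from scipy.signal import savgol_filter

doy = np.arange(1, 366, 8)                              # 8-day NDVI time steps
ndvi = 0.2 + 0.5 * np.exp(-((doy - 200) / 60.0) ** 2)   # synthetic single-season curve
smooth = savgol_filter(ndvi, window_length=7, polyorder=2)

amp_thresh = smooth.min() + 0.5 * (smooth.max() - smooth.min())
above = np.where(smooth >= amp_thresh)[0]
sos, eos = doy[above[0]], doy[above[-1]]                # first/last crossing of threshold
print(f"SOS ~ DOY {sos}, EOS ~ DOY {eos}")
```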
Figures:
Figure 1: (a) Map of the study area, Henan Province (China), and the distribution of its land cover types. (b) Cumulative rainfall from 17 July to 23 July 2021. The irregular purple line-demarcated zone is where the cumulative rainfall exceeded 500 mm, with its cities delineated by gray lines.
Figure 2: The flowchart of this vegetation phenology study.
Figure 3: Trends in the NDVI of each vegetation type, from 2018 to 2023. Green curves show representative sample point values in the extreme rainfall zone while gray curves show those in the non-extreme rainfall zone; the vertically dashed red lines and red rectangles mark the occurrence of the extreme rainfall event in 2021 in Henan Province (China).
Figure 4: (a) Spatial distribution of resistance values of vegetation after the extreme rainfall event in 2021, in Henan Province, China. (b,c) The patterns of vegetation recovery in 2022 and 2023, respectively.
Figure 5: The patterns of rainfall recovery in (a) 2022 and (b) 2023, respectively.
Figure 6: Spatial distribution of Pearson's r coefficient values in cropland: (a–d) are the correlations of SOS1, EOS1, SOS2, and EOS2 with rainfall, respectively.
Figure 7: Spatial distribution of Pearson's r coefficient values for the forest (a) SOS and (b) EOS correlations with rainfall, in Henan Province, China.
Figure 8: Spatial distribution of Pearson's r coefficient values for the grassland (a) SOS and (b) EOS correlations with rainfall, in Henan Province, China.
Figure 9: Spatial distribution of Pearson's r coefficient values for the shrubland (a) SOS and (b) EOS correlations with rainfall, in Henan Province, China.
Figure 10: Spatial distribution of Pearson's r coefficients for the (a) corn yield and (b) bean yield correlations with rainfall, in Henan Province, China.
Figure 11: The average rainfall per city in July each year from 2018 to 2023.
23 pages, 9401 KiB  
Article
Refinement of Cropland Data Layer with Effective Confidence Layer Interval and Image Filtering
by Reza Maleki, Falin Wu, Amel Oubara, Loghman Fathollahi and Gongliu Yang
Agriculture 2024, 14(8), 1285; https://doi.org/10.3390/agriculture14081285 - 4 Aug 2024
Viewed by 1077
Abstract
Various systems have been developed to process agricultural land data for better management of crop production. One such system is Cropland Data Layer (CDL), produced by the National Agricultural Statistics Service of the United States Department of Agriculture (USDA). The CDL has been widely used for training deep learning (DL) segmentation models. However, it contains various errors, such as salt-and-pepper noise, and must be refined before being used in DL training. In this study, we used two approaches to refine the CDL for DL segmentation of major crops from a time series of Sentinel-2 monthly composite images. Firstly, different confidence intervals of the confidence layer were used to refine the CDL. Secondly, several image filters were employed to improve data quality. The refined CDLs were then used as the ground-truth in DL segmentation training and evaluation. The results demonstrate that the CDL with +45% and +55% confidence intervals produced the best results, improving the accuracy of DL segmentation by approximately 1% compared to non-refined data. Additionally, filtering the CDL using the majority and expand–shrink filters yielded the best performance, enhancing the evaluation metrics by about 1.5%. The findings suggest that pre-filtering the CDL and selecting an effective confidence interval can significantly improve DL segmentation performance, contributing to more accurate and reliable agricultural monitoring.
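Both refinement ideas are easy to express directly. The sketch below (an assumed rendering, not the authors' implementation) masks CDL pixels below a confidence threshold and then applies a 3x3 majority (modal) filter to suppress salt-and-pepper noise.

```python
import numpy as np
from scipy.ndimage import generic_filter

rng = np.random.default_rng(0)
cdl = rng.integers(1, 5, size=(64, 64))           # crop class codes (synthetic)
confidence = rng.integers(0, 101, size=(64, 64))  # CDL confidence layer, 0-100

refined = np.where(confidence >= 55, cdl, 0)      # 0 = "ignore" label below the threshold

def majority(values: np.ndarray) -> int:
    """Return the most frequent label in the filter window."""
    vals, counts = np.unique(values.astype(int), return_counts=True)
    return int(vals[np.argmax(counts)])

filtered = generic_filter(refined, majority, size=3)  # modal filter removes speckle
print(filtered.shape)
```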
Figures:
Figure 1: Geographic overview and data layers used in the study. The top-left map shows the study area within the Mississippi Delta (blue rectangle) and the T14TNK test area (red rectangle). The top-right image displays Sentinel-2 composite imagery from May 2021. The bottom-left map illustrates the distribution of crops in the CDL. The bottom-right map depicts the CDL confidence layer, indicating the confidence values associated with the CDL.
Figure 2: NDVI profiles for the major crops in the study area throughout the year 2021.
Figure 3: The 2021 CDL land cover distribution in the study area.
Figure 4: Flowchart summarizing the research approach for major crop mapping from Sentinel-2 imagery using various CDL confidence levels and image filters.
Figure 5: Distribution of crop pixels refined by different confidence levels of major crops within the study area for the year 2021.
Figure 6: Impact of varying confidence intervals on the refinement of the CDL. The percentages indicate the confidence thresholds used to refine the CDL.
Figure 7: DL segmentation results of croplands from Sentinel-2 imagery. The DL models were trained using different CDL confidence layer intervals.
Figure 8: Confusion matrices illustrating the performance of DL models trained with different CDL confidence intervals in segmenting major crops. Diagonal numbers represent the percentage of correctly classified instances for each crop, while non-diagonal numbers indicate the percentage of misclassified instances between different crops.
Figure 9: Filtered CDL and corresponding DL segmentation results for major crops using various image filters. The "No Filter" and "R-CDL" results are included for comparison.
Figure 10: Confusion matrices illustrating the performance of DL models using different image filters on the CDL for major crop segmentation. The values within the matrix represent the percentage of correctly and incorrectly classified instances, ranging from 0 to 100.
Figure 11: Comparison of DL accuracy metric results using different CDL refinement methods, including the R-CDL from Lin et al.'s study [9].
Figure 12: F1-score trends comparing the performance of different refining methods between the Mississippi Delta and T14TNK areas. The chart illustrates the accuracy metrics for various confidence intervals and filtering techniques, highlighting the generalizability and robustness of the methods across different geographical regions.
18 pages, 6138 KiB  
Article
Spectral-Frequency Conversion Derived from Hyperspectral Data Combined with Deep Learning for Estimating Chlorophyll Content in Rice
by Lei Du and Shanjun Luo
Agriculture 2024, 14(7), 1186; https://doi.org/10.3390/agriculture14071186 - 18 Jul 2024
Viewed by 1090
Abstract
As a vital pigment for photosynthesis in rice, chlorophyll content is closely correlated with growth status and photosynthetic capacity. The estimation of chlorophyll content allows for the monitoring of rice growth and facilitates precise management in the field, such as the application of fertilizers and irrigation. The advancement of hyperspectral remote sensing technology has made it possible to estimate chlorophyll content non-destructively, quickly, and effectively, offering technical support for managing and monitoring rice growth across wide areas. Although hyperspectral data have a fine spectral resolution, they also cause a large amount of information redundancy and noise. This study focuses on the issues of unstable input variables and the estimation model’s poor applicability to various periods when predicting rice chlorophyll content. By introducing the theory of harmonic analysis and the time-frequency conversion method, a deep neural network (DNN) model framework based on wavelet packet transform-first order differential-harmonic analysis (WPT-FD-HA) was proposed, which avoids the uncertainty in the calculation of spectral parameters. The accuracy of estimating rice chlorophyll content based on WPT-FD and WPT-FD-HA variables was compared at seedling, tillering, jointing, heading, grain filling, milk, and complete periods to evaluate the validity and generalizability of the suggested framework. The results demonstrated that all of the WPT-FD-HA models’ single-period validation accuracy had coefficients of determination (R2) values greater than 0.9 and RMSE values less than 1. The multi-period validation model had a root mean square error (RMSE) of 1.664 and an R2 of 0.971. Even with independent data splitting validation, the multi-period model accuracy can still achieve R2 = 0.95 and RMSE = 1.4. The WPT-FD-HA-based deep learning framework exhibited strong stability. The outcome of this study deserves to be used to monitor rice growth on a broad scale using hyperspectral data.
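As a hedged illustration of the WPT-FD preprocessing named above, the sketch below decomposes a synthetic canopy spectrum with a wavelet packet transform via pywt and then takes the first-order difference; the wavelet choice and decomposition level are assumptions, not values from the paper.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
spectrum = rng.uniform(0.0, 0.6, 512)            # synthetic hyperspectral reflectance

wp = pywt.WaveletPacket(data=spectrum, wavelet="db4", maxlevel=3)
# Concatenate level-3 node coefficients in natural order as the WPT feature vector.
nodes = wp.get_level(3, order="natural")
wpt_features = np.concatenate([node.data for node in nodes])

fd_features = np.diff(wpt_features)              # first-order differential (FD) step
print(wpt_features.shape, fd_features.shape)
```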
Figures:
Figure 1: Design for 42- and 48-plot experiments.
Figure 2: Schematic diagram of the deep learning network framework.
Figure 3: Dataset partitioning and model validation methods.
Figure 4: Changes in rice canopy spectra with SPAD and growth stage: (a) rice canopy reflectance under different chlorophyll contents; (b) changes in rice canopy spectra with growth stage; (c) correlation between rice canopy spectra and chlorophyll content.
Figure 5: Correlation analysis of different types of data with chlorophyll content in rice.
Figure 6: Correlation of HA characterization parameters with chlorophyll content in rice.
Figure 7: Comparison between measured and predicted chlorophyll content based on the DNN model at the seeding, tillering, and jointing stages in the 42-plot experiment (the black line is the fitting line and the red dotted line is the 1:1 line).
Figure 8: Comparison between measured and predicted chlorophyll content based on the DNN model at the heading, grain filling, and milk stages in the 42-plot experiment (the black line is the fitting line and the red dotted line is the 1:1 line).
Figure 9: Comparison between measured and predicted chlorophyll content based on the DNN model throughout the whole period in the 42-plot experiment (the black line is the fitting line and the red dotted line is the 1:1 line).
Figure 10: Comparison between measured and predicted chlorophyll content based on the DNN model throughout the whole period in the 48-plot experiment (the red dotted line is the 1:1 line).
19 pages, 8463 KiB  
Article
Rapid Detection of Fertilizer Information Based on Near-Infrared Spectroscopy and Machine Learning and the Design of a Detection Device
by Yongzheng Ma, Zhuoyuan Wu, Yingying Cheng, Shihong Chen and Jianian Li
Agriculture 2024, 14(7), 1184; https://doi.org/10.3390/agriculture14071184 - 18 Jul 2024
Viewed by 1174
Abstract
The online detection of fertilizer information is pivotal for precise and intelligent variable-rate fertilizer application. However, traditional methods face challenges such as the complex quantification of multiple components and sensor-induced cross-contamination. This study investigates integrating near-infrared principles with machine learning algorithms to identify fertilizer types and concentrations. We utilized near-infrared transmission spectroscopy and applied Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), and Back-Propagation Neural Network (BPNN) algorithms to analyze full spectrum data. The BPNN model, using S-G smoothing, demonstrated a superior classification performance for the nutrient ions of four fertilizer solutions: HPO42−, NH4+, H2PO4−, and K+. Optimization using the competitive adaptive reweighted sampling (CARS) method yielded BPNN model RMSE values of 0.3201, 0.7160, 0.2036, and 0.0177 for HPO42−, NH4+, H2PO4−, and K+, respectively. Building on this foundation, we designed a four-channel fertilizer detection device based on the Lambert–Beer law, enabling the real-time detection of fertilizer types and concentrations. The test results confirmed the device’s robust stability, achieving 93% accuracy in identifying fertilizer types and concentrations, with RMSE values ranging from 1.0034 to 2.4947, all within a ±8.0% error margin. This study addresses the practical requirements for online fertilizer detection in agricultural engineering, laying the groundwork for efficient water–fertilizer integration technology aligned with sustainable development goals.
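The device's quantitative step rests on the Lambert–Beer law: absorbance A = log10(I0/I) is, ideally, linear in concentration, so a calibration line fitted at the characteristic wavelength converts a transmitted-intensity reading into a concentration. A worked toy example (all numbers invented for illustration):

```python
import numpy as np

conc = np.array([5.0, 10.0, 20.0, 40.0])          # calibration concentrations (g/L)
i0 = 1.00                                         # incident intensity
i = np.array([0.89, 0.80, 0.63, 0.40])            # transmitted intensities at calibration
absorb = np.log10(i0 / i)                         # Lambert-Beer absorbance

slope, intercept = np.polyfit(conc, absorb, 1)    # calibration line: A = slope*c + intercept

def concentration(i_meas: float) -> float:
    """Invert the calibration line for a new transmitted-intensity reading."""
    a = np.log10(i0 / i_meas)
    return (a - intercept) / slope

print(round(concentration(0.70), 2), "g/L")
```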
Figures:
Figure 1: Development strategy of NIR fertilizer detection sensors: (a) acquisition and pre-processing of NIR absorption spectra of four fertilizer liquids; (b) determination of characteristic wavelengths and construction of qualitative and quantitative models; (c) design of sensor structures and amplifier circuits; and (d) detection strategy of qualitative analysis followed by quantitative assessment.
Figure 2: Fertilizer detection sensor: (a) overall structure of the sensor; (b) exterior structure of the sensor; (c) internal structure of the sensor; (d) physical drawing of the fertilizer sensor.
Figure 3: Fertilizer concentration detection strategy.
Figure 4: Raw spectra of nutrient ions of the four fertilizer solutions to be tested.
Figure 5: S-G pretreatment spectra of nutrient ions of the four fertilizer solutions to be tested.
Figure 6: Effective wavelength distribution of HPO4− after CARS selection.
Figure 7: Comparison of the original spectra with the confusion matrix of the four-classification prediction set obtained by S-G preprocessing combined with BPNN: (a) raw spectrum; (b) S-G preprocessing.
Figure 8: Correlation between true and predicted values in the BPNN prediction model for the four fertilizer solutions to be tested: (a) KH2PO4 quantity contained; (b) K2SO4 quantity contained; (c) (NH4)2SO4 quantity contained; (d) (NH4)2HPO4 quantity contained.
Figure 9: Fitting curves of concentration and absorbance at characteristic wavelengths of the four fertilizers to be tested: (a) HPO4− (980 nm); (b) NH4+ (1450 nm); (c) H2PO4− (1550 nm); (d) K+ (1600 nm).
Figure 10: Stability test of the fertilizer detection device: (a) LED irradiation voltage value; (b) transmission voltage value; (c) ambient light voltage value.
Figure 11: Confusion matrix for identifying nutrient ion types in the four fertilizer solutions.
Figure 12: The concentration detection error of the sensor for four kinds of configured fertilizer solution.
14 pages, 15703 KiB  
Article
High-Precision Peach Fruit Segmentation under Adverse Conditions Using Swin Transformer
by Dasom Seo, Seul Ki Lee, Jin Gook Kim and Il-Seok Oh
Agriculture 2024, 14(6), 903; https://doi.org/10.3390/agriculture14060903 - 7 Jun 2024
Cited by 1 | Viewed by 1030
Abstract
In the realm of agricultural automation, the efficient management of tasks like yield estimation, harvesting, and monitoring is crucial. While fruits are typically detected using bounding boxes, pixel-level segmentation is essential for extracting detailed information such as color, maturity, and shape. Furthermore, while previous studies have typically focused on controlled environments and scenes, achieving robust performance in real orchard conditions is also imperative. To prioritize these aspects, we propose the following two considerations: first, a novel peach image dataset designed for rough orchard environments, focusing on pixel-level segmentation for detailed insights; and second, utilizing a transformer-based instance segmentation model, specifically the Swin Transformer as a backbone of Mask R-CNN. We achieve superior results compared to CNN-based models, reaching 60.2 AP on the proposed peach image dataset. The proposed transformer-based approach particularly excels in detecting small or obscured peaches, making it highly suitable for practical field applications. The proposed model achieved 40.4 AP for small objects, nearly doubling that of CNN-based models. This advancement significantly enhances automated agricultural systems, especially in yield estimation, harvesting, and crop monitoring.
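For orientation, torchvision ships a ResNet-50 Mask R-CNN out of the box; the paper's contribution is swapping that backbone for a Swin Transformer, which requires a custom backbone wrapper not shown in this minimal sketch of the baseline.

```python
import torch
import torchvision

# Baseline instance-segmentation model; 2 classes: background + peach.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

with torch.no_grad():
    image = torch.rand(3, 512, 512)               # dummy orchard image tensor
    outputs = model([image])                      # dicts with boxes, labels, scores, masks
print(outputs[0]["masks"].shape)                  # (N, 1, 512, 512) predicted masks
```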
Figures:
Figure 1: Component illustrations of the Swin Transformer. The maps following Swin Transformer blocks are attention maps of each stage. (a) Swin Transformer architecture; (b) shifted window approach; (c) Swin Transformer block; (d) ViT block. (c,d) were extracted from [23] and [22], respectively.
Figure 2: Examples of image data and corresponding ground-truth masks: (a,b) tree-focused images; (c) fruit-bunch-focused image. The top row presents original images, and the bottom row shows labeled masks.
Figure 3: (a) Image with significant occlusion; (b) the corresponding ground truth. Most peaches are obscured by leaves.
Figure 4: Histograms of object area for (a) tree-focused images and (b) fruit-bunch-focused images.
Figure 5: Examples of images containing objects with minimum and maximum areas. Each target object is represented by a yellow box.
Figure 6: Confusion matrix for each model. Starting from the top left and moving clockwise, the order is TP, FN, TN, and FP. Each value in the box indicates the ratio with respect to the ground truth, and the values in parentheses indicate the real number of predictions. Mask R-CNN(Swin-T) has the fewest peach misses.
Figure 7: Examples of test images from each model. From left to right, ground-truth images and output images from Mask R-CNN(ResNet-50), YOLACT, SOLOv2, and Mask R-CNN(Swin-T) are shown. FP and FN predictions are highlighted as yellow and cyan boxes, respectively. Mask R-CNN(Swin-T) yields the fewest FPs and FNs.
Figure 8: Images from unlabeled inputs. Highlighted regions from the third image are magnified under the images. The region in the white box shows the difference in region separation between Mask R-CNN(ResNet-50) and Mask R-CNN(Swin-T): Mask R-CNN(ResNet-50) detects a group of peaches as a single peach with a high confidence score of 0.72, while Mask R-CNN(Swin-T) detects each peach individually. The white box also shows that YOLACT misses about four peaches and does not separate the peach group on the left side of the image into individual objects. SOLOv2 also misses obscured peaches in the white box and shows significant duplicated detections. The yellow box shows that Mask R-CNN(Swin-T) detects tiny peaches better than the other models.
16 pages, 2107 KiB  
Article
Phenotyping the Anthocyanin Content of Various Organs in Purple Corn Using a Digital Camera
by Zhengxin Wang, Ye Liu, Ke Wang, Yusong Wang, Xue Wang, Jiaming Liu, Cheng Xu and Youhong Song
Agriculture 2024, 14(5), 744; https://doi.org/10.3390/agriculture14050744 - 10 May 2024
Cited by 1 | Viewed by 1748
Abstract
Anthocyanins are precious industrial raw materials. Purple corn is rich in anthocyanins, with large variation in their content between organs. It is imperative to find a rapid and non-destructive method to determine the anthocyanin content in purple corn. To this end, a field experiment with ten purple corn hybrids was conducted, collecting plant images using a digital camera and determining the anthocyanin content of different organ types. The average values of red (R), green (G) and blue (B) in the images were extracted. The color indices derived from RGB arithmetic operations were applied in establishing a model for estimation of the anthocyanin content. The results showed that the specific color index varied with the organ type in purple corn, i.e., ACCR for the grains, BRT for the cobs, ACCB for the husks, R for the stems, ACCB for the sheaths and BRT for the laminae, respectively. Linear models of the relationship between the color indices and anthocyanin content for different organs were established with R2 falling in the range of 0.64–0.94. The predictive accuracy of the linear models, assessed according to the NRMSE, was validated using a 2:1 calibration-to-validation sample split. The average NRMSE value was 11.68% in the grains, 13.66% in the cobs, 8.90% in the husks, 27.20% in the stems, 7.90% in the sheaths and 15.83% in the laminae, respectively, all less than 30%, indicating that the accuracy and stability of the model was trustworthy and reliable. In conclusion, this study provided a new method for rapid, non-destructive prediction of anthocyanin-rich organs in purple corn.
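A minimal sketch of this workflow, with an invented color-index formula standing in for the organ-specific indices (ACCR, BRT, ACCB) named above: compute an index from image-mean R, G, B values, fit a linear model against measured anthocyanin content, and score it with NRMSE (normalized here by the observed range; the paper's normalization may differ).

```python
import numpy as np

rng = np.random.default_rng(0)
R, G, B = (rng.uniform(40, 200, 30) for _ in range(3))   # per-image channel means
index = (R - G) / (R + G + B)                            # placeholder color-index formula
antho = 8.0 * index + 1.0 + rng.normal(0, 0.2, 30)       # synthetic anthocyanin content (mg/g)

slope, intercept = np.polyfit(index, antho, 1)           # linear estimation model
pred = slope * index + intercept
nrmse = np.sqrt(np.mean((pred - antho) ** 2)) / (antho.max() - antho.min()) * 100
print(f"NRMSE = {nrmse:.2f}%")
```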
Figure 1
Digital image acquisition and standardization process. (A) A picture of the X-Rite ColorChecker classic chart. (B) The camera used in the experiment. (C) The sample image acquisition. (D) Creation of a DNG-format file in ColorChecker Camera Calibration. (E) Image calibration in Lightroom. (F) The image before color calibration. (G) The image after color calibration.
Figure 2
Visual heat map of the correlation between anthocyanin content and color indices in different organs.
Figure 3
Fitting the relationship between anthocyanin content and the color index. The letters in the figure indicate grains (A); cobs (B); husks (C); stems (D); sheaths (E) and laminae (F).
Figure 4
Validation of predictive models for anthocyanin content in different organs of purple corn; (a–f) are tests of the anthocyanin content prediction models for grains, cobs, husks, laminae, stems and sheaths, respectively.
20 pages, 4620 KiB  
Article
Estimating Corn Growth Parameters by Integrating Optical and Synthetic Aperture Radar Features into the Water Cloud Model
by Yanyan Wang, Zhaocong Wu, Shanjun Luo, Xinyan Liu, Shuaibing Liu and Xinxin Huang
Agriculture 2024, 14(5), 695; https://doi.org/10.3390/agriculture14050695 - 28 Apr 2024
Cited by 1 | Viewed by 1494
Abstract
Crop growth parameters are the basis for evaluating crop growth status and crop yield. The aim of this study was to develop a more accurate estimation model for corn growth parameters by combining multispectral vegetation indexes (VIopt) with the differential radar information (DRI) derived from SAR data. Targeting the estimation of corn plant height (H) and the BBCH (Biologische Bundesanstalt, Bundessortenamt and CHemical industry) phenological parameter, this study compared the estimation accuracies of various multispectral vegetation indexes (VIopt) and the corresponding VIDRI (vegetation index corrected by DRI) indexes in inverting the corn growth parameters. (1) Among the four multispectral vegetation indexes (NDVI, NDVIre1, NDVIre2, and S2REP), NDVI showed the lowest estimation accuracy for plant height, with a normalized root mean square error (nRMSE) of 20.84%, while S2REP showed the highest estimation accuracy (nRMSE = 16.05%). In addition, NDVIre2 (nRMSE = 16.18%) and S2REP (16.05%) were more accurate than NDVIre1 (nRMSE = 19.27%). Similarly, for BBCH, the nRMSEs of the four indexes were 24.17%, 22.49%, 17.04% and 16.60%, respectively. This confirmed that multispectral vegetation indexes based on the red-edge bands are more sensitive to the growth parameters, especially the Sentinel-2 red-edge 2 band. (2) The constructed VIDRI indexes were more effective than the VIopt indexes in improving the estimation accuracy of corn growth parameters. Specifically, the nRMSEs of the four VIDRI indexes (NDVIDRI, NDVIre1DRI, NDVIre2DRI, and S2REPDRI) decreased to 19.64%, 18.11%, 15.00%, and 14.64% for plant height, and to 23.24%, 21.58%, 15.79%, and 15.91% for BBCH, indicating that even under high vegetation coverage, introducing SAR DRI features can further improve the estimation accuracy of growth parameters. Our findings also demonstrated that the NDVIre2DRI and S2REPDRI indexes, constructed using the red-edge 2 band of Sentinel-2 and SAR DRI features, had greater advantages in improving the estimation accuracy of corn growth parameters.
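A schematic Python sketch of the comparison described above, assuming scikit-learn and per-sample band values: an optical index is inverted to a growth parameter with a linear model, and its nRMSE is compared against a DRI-corrected counterpart. The actual VIDRI construction follows the water cloud model, whose exact form is not given in the abstract; the additive correction and weight alpha below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def ndvi(nir, red):
    """Normalized difference vegetation index from Sentinel-2 band arrays."""
    return (nir - red) / (nir + red + 1e-9)

def vi_dri(vi, dri, alpha=0.5):
    """Hypothetical DRI-corrected index: a stand-in for the paper's
    water-cloud-model-based VIDRI, with an illustrative weight alpha."""
    return vi + alpha * dri

def nrmse(y, y_hat):
    """nRMSE (%), normalized here by the observed range (one common convention)."""
    return 100 * np.sqrt(np.mean((y - y_hat) ** 2)) / (np.max(y) - np.min(y))

def invert_accuracy(index, target):
    """Fit a linear inversion of a growth parameter (H or BBCH) from an index
    and report its nRMSE, mirroring the comparison in the abstract."""
    X = np.asarray(index).reshape(-1, 1)
    model = LinearRegression().fit(X, target)
    return nrmse(np.asarray(target), model.predict(X))
```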
Figure 1
Location of the experimental areas and some ground sampling points (yellow rectangles show the specific sampling fields).
Figure 2
The original measured data (plant height and BBCH) and the corresponding logistic fitting results (the red dashed line represents the logistic fitting curve).
Figure 3
Flow chart of corn growth parameter estimation combining multispectral and SAR features.
Figure 4
Scatter plot of VIopt and corresponding VIDRI indexes against plant height (NDVI, NDVIre1, NDVIre2 and S2REP are multispectral vegetation indexes; NDVIDRI, NDVIre1DRI, NDVIre2DRI and S2REPDRI are the corresponding VIDRI indexes; H and R represent the plant height and Pearson correlation, respectively. (a–d) show the relationships between plant height and the multispectral vegetation indexes; (e–h) show the relationships between plant height and the corresponding VIDRI indexes. Black lines represent the linear fitting results).
Figure 5
Scatter plot of VIopt and corresponding VIDRI indexes against BBCH (NDVI, NDVIre1, NDVIre2 and S2REP are multispectral vegetation indexes; NDVIDRI, NDVIre1DRI, NDVIre2DRI and S2REPDRI are the corresponding VIDRI indexes; BBCH and R represent the phenological stage and Pearson correlation, respectively. (a–d) show the relationships between BBCH and the multispectral vegetation indexes; (e–h) show the relationships between BBCH and the corresponding VIDRI indexes. Black lines represent the linear fitting results).
Figure 6
Estimated versus measured plant height based on the data from experimental area 1 ((a–d) show the plant height estimation results using NDVI, NDVIre1, NDVIre2 and S2REP; (e–h) show the results using the corresponding VIDRI indexes. The dashed line represents the 1:1 line).
Figure 7
Estimated versus measured BBCH based on the data from experimental area 1 ((a–d) show the BBCH estimation results using NDVI, NDVIre1, NDVIre2 and S2REP; (e–h) show the results using the corresponding VIDRI indexes. The dashed line represents the 1:1 line).
Figure 8
Estimated versus measured plant height and BBCH based on the data from experimental area 2 ((a–d) show the plant height estimation results and (e–h) the BBCH estimation results using the NDVIre2, S2REP, NDVIre2DRI and S2REPDRI indexes, respectively. The dashed line represents the 1:1 line).
16 pages, 4346 KiB  
Article
Winter Wheat Yield Estimation with Color Index Fusion Texture Feature
by Fuqin Yang, Yang Liu, Jiayu Yan, Lixiao Guo, Jianxin Tan, Xiangfei Meng, Yibo Xiao and Haikuan Feng
Agriculture 2024, 14(4), 581; https://doi.org/10.3390/agriculture14040581 - 6 Apr 2024
Cited by 3 | Viewed by 1677
Abstract
The rapid and accurate estimation of crop yield is of great importance for large-scale agricultural production and national food security. Using winter wheat as the research object, the effects of color indexes, texture features and fusion indexes on yield estimation were investigated based on unmanned aerial vehicle (UAV) high-definition digital images, providing a reliable technical means for high-precision yield estimation of winter wheat. In total, 22 visible color indexes were extracted from the UAV high-resolution digital images, and 24 texture features in the red, green, and blue bands were extracted with ENVI 5.3; both were correlated with yield. The color indexes and texture features with high correlation, together with fusion indexes, were then used to establish yield estimation models for the flagging, flowering and filling stages using partial least squares regression (PLSR) and random forest (RF). The yield estimation models constructed with color indexes at the flagging and flowering stages, and with texture features and fusion indexes at the filling stage, had the best accuracy, with R2 values of 0.70, 0.71 and 0.76 and RMSE values of 808.95 kg/hm2, 794.77 kg/hm2 and 728.85 kg/hm2, respectively. Yield estimation using PLSR at the flagging, flowering, and filling stages was more accurate than that using RF, and yield estimation using the fusion feature index was more accurate than that using the color or texture feature indexes alone; the resulting yield distribution maps agree well with those of the actual test fields. Thus, this study can provide a scientific reference for estimating winter wheat yield based on UAV digital images and a reference for agricultural farm management.
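A minimal sketch of the PLSR-versus-RF comparison, assuming scikit-learn and pre-screened feature matrices; the hyperparameters (number of PLS components, number of trees) and the scoring protocol are assumptions, and in practice scores should be computed on a held-out validation set rather than the training data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

def fit_and_score(X, y, n_components=4, seed=0):
    """Fit PLSR and RF on one feature set and return (R2, RMSE) for each.
    X: selected color indexes, texture features, or their fusion; y: yield."""
    models = {
        "PLSR": PLSRegression(n_components=n_components).fit(X, y),
        "RF": RandomForestRegressor(n_estimators=200, random_state=seed).fit(X, y),
    }
    scores = {}
    for name, model in models.items():
        pred = np.ravel(model.predict(X))
        scores[name] = (r2_score(y, pred),
                        np.sqrt(mean_squared_error(y, pred)))  # RMSE in kg/hm2
    return scores

# The fusion index simply concatenates the two screened feature sets, e.g.:
# X_fusion = np.hstack([X_color, X_texture])
# print(fit_and_score(X_fusion, y))
```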
Figure 1
Experimental design: (1) J9843—Jing 9843, ZM175—ZhongMai 175; (2) nitrogen treatments, N1—no nitrogen, N2—1/2 normal nitrogen, N3—normal nitrogen, N4—3/2 normal nitrogen; (3) water treatments, W1—rainfall only, W2—normal water, W3—2 times normal water.
Figure 2
Verification results of the yield prediction models at different stages using the PLSR method based on color indexes.
Figure 3
Verification results of the yield prediction models at different stages using the RF method based on color indexes.
Figure 4
Distribution of yield from the optimal model based on color feature indexes.
Figure 5
Verification results of the yield prediction models at different stages using the PLSR method based on texture features.
Figure 6
Verification results of the yield prediction models at different stages using the RF method based on texture features.
Figure 7
Distribution of yield from the optimal model based on texture features.
Figure 8
Verification results of the yield prediction models at different stages using the PLSR method based on the fusion of color indexes and texture features.
Figure 9
Verification results of the yield prediction models at different stages using the RF method based on the fusion of color indexes and texture features.
Figure 10
Distribution of yield from the optimal model based on color indexes and texture features.
18 pages, 13111 KiB  
Article
Estimation of Peanut Southern Blight Severity in Hyperspectral Data Using the Synthetic Minority Oversampling Technique and Fractional-Order Differentiation
by Heguang Sun, Lin Zhou, Meiyan Shu, Jie Zhang, Ziheng Feng, Haikuan Feng, Xiaoyu Song, Jibo Yue and Wei Guo
Agriculture 2024, 14(3), 476; https://doi.org/10.3390/agriculture14030476 - 15 Mar 2024
Cited by 1 | Viewed by 1351
Abstract
Southern blight significantly impacts peanut yield, and its severity is exacerbated by high-temperature and high-humidity conditions. The mycelium attached to the plant's interior proliferates quickly, which makes early detection and data acquisition challenging. In recent years, the integration of machine learning and remote sensing data has become a common approach for disease monitoring. However, poor-quality and imbalanced data samples can significantly degrade the performance of machine learning algorithms. This study employed the Synthetic Minority Oversampling Technique (SMOTE) algorithm to generate samples with varying severity levels and utilized Fractional-Order Differentiation (FOD) to enhance the spectral information. The 1D-CNN, SVM, and KNN models were validated and tested using experimental data from two different locations. In conclusion, our results indicate that the SMOTE-FOD-1D-CNN model enhances the ability to monitor the severity of peanut southern blight (validation OA = 88.81%, Kappa = 0.85; testing OA = 82.76%, Kappa = 0.75).
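A minimal sketch of the SMOTE-plus-FOD preprocessing, assuming numpy and the imbalanced-learn package. The Grünwald-Letnikov recursion below is one standard way to compute fractional-order derivatives of spectra; the paper's exact implementation, window length, and chosen order may differ.

```python
import numpy as np
from imblearn.over_sampling import SMOTE   # assumes imbalanced-learn is installed

def fractional_diff(spectra, alpha, window=50):
    """Grünwald-Letnikov fractional-order differentiation along the band axis.
    spectra: (n_samples, n_bands); alpha: fractional order, e.g. 0.25."""
    w = np.zeros(window)
    w[0] = 1.0
    for k in range(1, window):
        w[k] = w[k - 1] * (1 - (alpha + 1) / k)   # recursive GL coefficients
    out = np.zeros_like(spectra, dtype=float)
    for k in range(window):
        # shift-and-accumulate: out[:, j] += w[k] * spectra[:, j - k]
        out[:, k:] += w[k] * spectra[:, : spectra.shape[1] - k]
    return out

def smote_fod(X, y, alpha=0.25, k=2):
    """Balance the severity classes first, then enhance the spectra with FOD.
    k mirrors the K-nearest-neighbor values compared in the paper's figures."""
    X_bal, y_bal = SMOTE(k_neighbors=k).fit_resample(X, y)
    return fractional_diff(np.asarray(X_bal), alpha), y_bal
```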
Figure 1
Overview map of the research area for peanut southern blight.
Figure 2
Field survey display of peanut southern blight. (a) Healthy plants, (b) plants with mild disease, (c) plants with moderate disease, (d) plants with severe disease, (e) mycelium at the base of plants with mild disease, (f) leaf symptoms in plants with moderate disease.
Figure 3
Spatial distribution maps of the original data and the synthetic samples for different values of K based on PCA. (a) Original data. (b) Synthetic data for K = 1. (c) Synthetic data for K = 2. (d) Synthetic data for K = 3. The legend Class 1–4 represents the four categories in this study.
Figure 4
FOD spectral curves in the (0–1.0) fractional-order range.
Figure 5
FOD spectral curves in the (1.1–2.0) fractional-order range.
Figure 6
Correlation between FOD spectra and disease severity at different orders. (a) 3D spectral curve correlation projection diagram. (b) 2D correlation diagram.
Figure 7
Illustration of feature selection using the ReliefF algorithm for the top 5% weighted features under different FOD scenarios (0–2). The selected features are highlighted in yellow. The features for the synthesized data are shown for (a) K = 1, (b) K = 2, and (c) K = 3.
Figure 8
Overall accuracy of the 1D-CNN model under different K-nearest-neighbor values.
Figure 9
SVM model accuracy with different multiples (n) of synthetic data.
Figure 10
Spectral curves of SMOTE synthetic data and actual measured data.
Figure 11
Peanut southern blight severity detection using different models with FOD (0.2–0.29) and a step size of 0.01.
17 pages, 17511 KiB  
Article
Improvement of Winter Wheat Aboveground Biomass Estimation Using Digital Surface Model Information Extracted from Unmanned-Aerial-Vehicle-Based Multispectral Images
by Yan Guo, Jia He, Huifang Zhang, Zhou Shi, Panpan Wei, Yuhang Jing, Xiuzhong Yang, Yan Zhang, Laigang Wang and Guoqing Zheng
Agriculture 2024, 14(3), 378; https://doi.org/10.3390/agriculture14030378 - 27 Feb 2024
Cited by 3 | Viewed by 1247
Abstract
Aboveground biomass (AGB) is an important indicator for characterizing crop growth conditions. Rapid and accurate estimation of AGB is critical for guiding farmland management and achieving production potential, and it can also provide vital data for ensuring food security. In this study, under different water and nitrogen treatments, an unmanned aerial vehicle (UAV) equipped with a multispectral imaging spectrometer was used to acquire images of winter wheat during critical growth stages. The plant height (Hdsm) extracted from the digital surface model (DSM) was then used to establish and improve an AGB estimation model based on the backpropagation (BP) neural network, a machine learning method. The results show that (1) the R2, root-mean-square error (RMSE), and relative predictive deviation (RPD) of the AGB estimation model constructed directly from Hdsm are 0.58, 4528.23 kg/hm2, and 1.25, respectively, and the estimated mean AGB (16,198.27 kg/hm2) is slightly smaller than the measured mean AGB (16,960.23 kg/hm2). (2) The R2, RMSE, and RPD of the improved AGB estimation model, based on AGB/Hdsm, are 0.88, 2291.90 kg/hm2, and 2.75, respectively, and the estimated mean AGB (17,478.21 kg/hm2) is closer to the measured mean AGB (17,222.59 kg/hm2). The improved model boosts the accuracy by 51.72% compared with the model that estimates AGB directly from Hdsm. Moreover, the improved model transfers well across different water treatments and different years, but its transferability differs across nitrogen levels. (3) Differences in the characteristics of the data are the key factor leading to the different transferability of the AGB estimation model. This study provides a reference for model construction and for assessing the transferability of AGB estimation models for winter wheat. We confirm that, when different datasets have similar histogram characteristics, the model is applicable to new scenarios.
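A minimal sketch of the ratio-based improvement, assuming scikit-learn: instead of regressing AGB directly on Hdsm, the network learns the ratio AGB/Hdsm, and the prediction is multiplied back by Hdsm to recover AGB. The network architecture and the input feature set are assumptions; the abstract states only that a BP neural network was used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

def fit_ratio_model(X, h_dsm, agb, seed=0):
    """Train on the ratio AGB/Hdsm, then recover AGB = predicted ratio * Hdsm.
    X: multispectral features (n_samples, n_features); h_dsm, agb: 1D arrays."""
    ratio = agb / h_dsm
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                       random_state=seed).fit(X, ratio)
    agb_hat = net.predict(X) * h_dsm
    r2 = r2_score(agb, agb_hat)
    rmse = np.sqrt(mean_squared_error(agb, agb_hat))
    rpd = np.std(agb) / rmse   # relative predictive deviation (SD / RMSE)
    return net, (r2, rmse, rpd)
```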
Figure 1
Design of the experiment.
Figure 2
Flowchart of the study framework.
Figure 3
Flowchart of the extraction of winter wheat plant height.
Figure 4
H (left plot) and AGB (right plot) of winter wheat at different water and nitrogen levels (note: the lowercase letters a, b, c, and d in the figure indicate significant differences at the 0.05 level).
Figure 5
Relationship between Hdsm and H under different water–N treatments.
Figure 6
Relationship between Hdsm and measured AGB under different water–N treatments.
Figure 7
Relationship between the measured and estimated AGB based on Hdsm.
Figure 8
Changing pattern of the ratio of AGB to Hdsm under different water–N treatments.
Figure 9
Relationship between measured AGB and AGB estimated by the ratio-based model.
Figure 10
Transferability of the improved AGB estimation model under different water treatments: (a) transferability of the W0 model to the W1 treatment, and (b) transferability of the W1 model to the W0 treatment.
Figure 11
Transferability of the improved AGB estimation model across different years: (a) ability of the nth-year model to estimate AGB for the (n + 1)th year, and (b) ability of the (n + 1)th-year model to estimate AGB for the nth year.
Figure 12
Data characteristics of measured AGB and AGB/Hdsm under different water treatments.
Figure 13
Data characteristics of measured AGB and AGB/Hdsm across different years.
Figure 14
Data characteristics of measured AGB and AGB/Hdsm at different N levels.