Segmentation and Proportion Extraction of Crop, Crop Residues, and Soil Using Digital Images and Deep Learning
Figure 1. Study area and experimental sites. (a) Location of the study area. (b) Soybean experimental field. (c) Field “crop–crop residue–soil” digital images; f_CR denotes crop residue coverage.
Figure 2. Original image and annotated image.
Figure 3. Methodology framework.
Figure 4. CCRSNet architecture.
Figure 5. mIoU and loss curves of the CCRSNet semantic segmentation network with different backbone networks during calibration.
Figure 6. Visualization of segmentation results for processed images using the CCRSNet model with VGG16 as the backbone network.
Figure 7. Visualization of segmentation results for original images using the CCRSNet model with VGG16 as the backbone network.
Figure 8. Class activation mapping using the CCRSNet model with VGG16 as the backbone network.
Figure 9. Proportion extraction of crop, crop residues, and soil using digital images and deep learning based on the TVD and IVD datasets. (a) Crop (TVD-vali). (b) Crop residues (TVD-vali). (c) Soil (TVD-vali). (d) Crop (IVD). (e) Crop residues (IVD). (f) Soil (IVD).
Figure A1. Ablation experiment architectures. (a) CCRSNet without the deep and shallow feature structure; (b) CCRSNet without the attention module.
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Area and Experimental Design
2.1.1. Study Area
2.1.2. Field Experimental Design
- For constructing the training dataset and performing quantitative validation, images with a pixel size of 256 × 256 × 3 were extracted from the central region of the original images and labeled according to land cover categories.
- For field image segmentation applications and visual evaluations, only the edge-affected areas were manually removed.
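The center-extraction step described above can be sketched in a few lines; this is a minimal illustration under assumptions (the function name and the row/column/channel array layout are not from the paper, and the authors' actual preprocessing code is not shown):

```python
def center_crop(image, size=256):
    """Crop a size x size window from the center of an image.

    `image` is a height x width nested sequence (each element may
    itself be an RGB tuple), indexable as image[row][col].
    """
    h, w = len(image), len(image[0])
    if h < size or w < size:
        raise ValueError("image smaller than crop window")
    top = (h - size) // 2
    left = (w - size) // 2
    return [row[left:left + size] for row in image[top:top + size]]

# Toy example: a 6 x 6 "image" of (row, col) pairs, cropped to 2 x 2.
img = [[(r, c) for c in range(6)] for r in range(6)]
patch = center_crop(img, size=2)
```

For the paper's data the call would be `center_crop(image, size=256)` on each original field photograph.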
2.2. Data Pre-Processing
2.2.1. Data Annotation
2.2.2. Data Augmentation
- Rotation: A random angle between −90° and 90° was used to rotate the images, altering the positions of the categories to be segmented.
- Horizontal mirroring: This technique was applied to create left-right symmetrical versions of the original images.
- Vertical mirroring: This technique generated top–bottom symmetrical versions of the original images.
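The three augmentations above can be sketched as pure-Python operations on a nested-list image; this is an assumed implementation, not the authors' code, and the 90° rotation stands in for the paper's arbitrary-angle rotation (which would typically use an imaging library such as Pillow):

```python
import random

def horizontal_mirror(image):
    """Left-right symmetric copy (reverse each row)."""
    return [row[::-1] for row in image]

def vertical_mirror(image):
    """Top-bottom symmetric copy (reverse the row order)."""
    return image[::-1]

def rotate90(image):
    """Rotate 90 degrees clockwise. The paper uses a random angle in
    (-90, 90); that general case needs an imaging library, so a fixed
    90-degree turn is used here as a simple stand-in."""
    return [list(row) for row in zip(*image[::-1])]

def augment(image, rng=random):
    """Apply one randomly chosen augmentation from Section 2.2.2."""
    op = rng.choice([horizontal_mirror, vertical_mirror, rotate90])
    return op(image)
```

Note that the same transform must be applied to the image and its annotation mask so that pixel labels stay aligned.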
2.3. Technical Workflow
- Image acquisition: Images of crops, crop residue, and soil were collected at different times from agricultural fields within a specified region. The TVD dataset was used for calibration and validation, while the IVD dataset served as an independent validation set to assess the models’ generalization performance and robustness.
- Data preprocessing: The dataset was manually annotated, categorizing the images into three classes: “soil–crop–crop residue”. Data augmentation techniques, such as rotation and mirroring, were applied to enhance the dataset.
- Model construction: The proposed CCRSNet model architecture utilized VGG16 and ResNet50 as backbone networks.
- Accuracy verification: Based on the TVD and IVD datasets, the segmentation accuracies of CCRSNet, RF, and SVM for “soil–crop–crop residue” in farmland were compared. Ablation experiments were also conducted to assess the contributions of different components within CCRSNet.
2.4. CCRSNet Segmentation Model
2.4.1. Backbone Network
2.4.2. CCRSNet Model Architecture
- Input: The input image size is 256 × 256 × 3.
- Backbone: VGG16 and ResNet50 were selected as the backbone networks for CCRSNet.
- Pyramid pooling module (PPM): PPM is composed of multiple pooling layers, each using windows of different sizes (including 1 × 1, 2 × 2, 3 × 3, and 6 × 6 pooling windows in this study) to extract contextual information of images at different scales. Subsequently, the pooled information is adjusted to the same size and concatenated through upsampling [43].
- Attention module: The attention mechanism module in this study includes channel attention and spatial attention mechanisms, with lightweight characteristics and minimal impact on model complexity [44].
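The PPM's pool-then-upsample logic can be illustrated on a single-channel feature map; this is a simplified sketch (plain Python instead of a deep learning framework, nearest-neighbor upsampling, no channel concatenation or learned convolutions), not the CCRSNet implementation:

```python
def adaptive_avg_pool(feat, bins):
    """Average-pool a 2D feature map into a bins x bins grid."""
    h, w = len(feat), len(feat[0])
    out = []
    for i in range(bins):
        r0, r1 = i * h // bins, (i + 1) * h // bins
        row = []
        for j in range(bins):
            c0, c1 = j * w // bins, (j + 1) * w // bins
            vals = [feat[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def upsample_nearest(feat, h, w):
    """Nearest-neighbor upsampling back to an h x w map."""
    bh, bw = len(feat), len(feat[0])
    return [[feat[r * bh // h][c * bw // w] for c in range(w)]
            for r in range(h)]

def pyramid_pooling(feat, bins=(1, 2, 3, 6)):
    """Return the input plus one upsampled branch per pooling scale
    (1x1, 2x2, 3x3, 6x6, as in this study); a real PPM concatenates
    these branches along the channel axis before a fusing convolution."""
    h, w = len(feat), len(feat[0])
    branches = [feat]
    for b in bins:
        branches.append(upsample_nearest(adaptive_avg_pool(feat, b), h, w))
    return branches
```

The 1 × 1 branch carries the global mean of the map, while the 6 × 6 branch retains comparatively fine spatial context; concatenating all branches is what gives the module its multi-scale receptive field.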
2.5. Baseline Models
2.6. Deep Learning Model Parameter Settings
2.7. Segmentation and Classification Accuracy Evaluation Metrics
1. IoU (intersection over union): the ratio of the intersection of predicted results and true annotations to their union, reflecting the model’s prediction performance for each category.
2. mIoU (mean IoU): the average of IoUs across all categories, indicating the model’s average segmentation performance and serving as a crucial metric for assessing segmentation quality.
3. PA (pixel accuracy): the proportion of correctly predicted pixels among all pixels, reflecting the model’s overall prediction accuracy, although it does not consider per-category performance.
4. Recall: the proportion of correctly predicted positive samples among all actual positives, reflecting the model’s ability to detect true positives.
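These four metrics can be computed directly from flattened label maps; the sketch below is an assumed reference implementation (the paper does not publish its evaluation code):

```python
def segmentation_metrics(y_true, y_pred, num_classes=3):
    """Compute per-class IoU and recall, mIoU, and pixel accuracy (PA)
    from flat lists of true and predicted class labels (0..num_classes-1)."""
    iou, recall = [], []
    for k in range(num_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == k and p == k)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != k and p == k)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == k and p != k)
        union = tp + fp + fn
        iou.append(tp / union if union else 0.0)
        recall.append(tp / (tp + fn) if tp + fn else 0.0)
    miou = sum(iou) / num_classes
    pa = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    return {"IoU": iou, "mIoU": miou, "PA": pa, "Recall": recall}
```

For a 256 × 256 prediction the lists would simply be the row-major flattening of the true and predicted label maps for classes L0 (crop), L1 (crop residue), and L2 (soil).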
2.8. Proportion Extraction Evaluation Metrics
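Proportion extraction reduces to counting pixels per class in the segmented map and comparing the resulting fractions against reference fractions; the helper names below are assumptions for illustration, and RMSE is used here because it is the error metric reported in the Results and Conclusions:

```python
import math

def class_proportions(label_map, num_classes=3):
    """Fraction of pixels assigned to each class in a 2D label map."""
    flat = [v for row in label_map for v in row]
    return [flat.count(k) / len(flat) for k in range(num_classes)]

def rmse(estimated, reference):
    """Root-mean-square error between estimated and reference proportions."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference))
                     / len(reference))
```

Applied per image, `class_proportions` yields the crop, crop-residue, and soil fractions, and `rmse` over all validation images gives the per-class extraction error.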
3. Results
3.1. Model Calibration and Validation Based on the TVD Dataset
3.2. Model Evaluation Based on the IVD Dataset
3.3. CCRSNet Ablation Study Based on the TVD and IVD Datasets
- (i) Exp. 1: CCRSNet (Figure 4);
- (ii) Exp. 2: CCRSNet without the deep and shallow feature structure (Appendix A, Figure A1a);
- (iii) Exp. 3: CCRSNet without the attention module (Appendix A, Figure A1b).
3.4. Segmentation Results and Proportional Extraction Accuracy Based on CCRSNet
4. Discussion
4.1. Advantages of Deep Learning in Image Segmentation and Proportion Extraction
4.2. Disadvantages of Deep Learning for Crop, Crop Residues, and Soil Proportion Extraction
5. Conclusions
- (i) Compared to traditional machine learning models (e.g., RF and SVM), deep learning models are more suitable for the segmentation and proportional extraction of farmland “crop–crop residues–soil.” With VGG16 as the backbone network, CCRSNet achieved a segmentation accuracy of 96.23%, notably higher than SVM (58.00%) and RF (82.47%).
- (ii) The CCRSNet model, which fuses shallow and deep image features and incorporates an attention module, provides high-performance farmland “crop–crop residues–soil” image segmentation and proportional extraction. On the IVD independent validation dataset, the CCRSNet model with a lightweight backbone network achieved an mIoU of 92.73%, a PA of 96.23%, and proportional extraction RMSE values between 1.05% and 3.56%.
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Appendix A
References
- Liu, J.; Qiu, T.; Peñuelas, J.; Sardans, J.; Tan, W.; Wei, X.; Cui, Y.; Cui, Q.; Wu, C.; Liu, L.; et al. Crop Residue Return Sustains Global Soil Ecological Stoichiometry Balance. Glob. Chang. Biol. 2023, 29, 2203–2226. [Google Scholar] [CrossRef] [PubMed]
- Delandmeter, M.; Colinet, G.; Pierreux, J.; Bindelle, J.; Dumont, B. Combining Field Measurements and Process-based Modelling to Analyse Soil Tillage and Crop Residues Management Impacts on Crop Production and Carbon Balance in Temperate Areas. Soil Use Manag. 2024, 40, 13098. [Google Scholar] [CrossRef]
- Yue, J.; Tian, Q.; Liu, Y.; Fu, Y.; Tian, J.; Zhou, C.; Feng, H.; Yang, G. Mapping Cropland Rice Residue Cover Using a Radiative Transfer Model and Deep Learning. Comput. Electron. Agric. 2023, 215, 108421. [Google Scholar] [CrossRef]
- Su, Y.; Gabrielle, B.; Makowski, D. The Impact of Climate Change on the Productivity of Conservation Agriculture. Nat. Clim. Chang. 2021, 11, 628–633. [Google Scholar] [CrossRef]
- Gao, P.; Song, Y.; Minhui, S.; Qian, P.; Su, Y. Extract Nanoporous Gold Ligaments from SEM Images by Combining Fully Convolutional Network and Sobel Operator Edge Detection Algorithm. SSRN Electron. J. 2021, 365, 536–538. [Google Scholar] [CrossRef]
- Yue, J.; Tian, Q.; Tang, S.; Xu, K.; Zhou, C. A Dynamic Soil Endmember Spectrum Selection Approach for Soil and Crop Residue Linear Spectral Unmixing Analysis. Int. J. Appl. Earth Obs. Geoinf. 2019, 78, 306–317. [Google Scholar] [CrossRef]
- Belgiu, M.; Drăguţ, L. Random Forest in Remote Sensing: A Review of Applications and Future Directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
- Islam, M.U.; Guo, Z.; Jiang, F.; Peng, X. Does Straw Return Increase Crop Yield in the Wheat-Maize Cropping System in China? A Meta-Analysis. Field Crops Res. 2022, 279, 108447. [Google Scholar] [CrossRef]
- Yue, J.; Tian, Q.; Dong, X.; Xu, N. Using Broadband Crop Residue Angle Index to Estimate the Fractional Cover of Vegetation, Crop Residue, and Bare Soil in Cropland Systems. Remote Sens. Environ. 2020, 237, 111538. [Google Scholar] [CrossRef]
- Dolata, P.; Wróblewski, P.; Mrzygłód, M.; Reiner, J. Instance Segmentation of Root Crops and Simulation-Based Learning to Estimate Their Physical Dimensions for on-Line Machine Vision Yield Monitoring. Comput. Electron. Agric. 2021, 190, 106451. [Google Scholar] [CrossRef]
- Mishra, S.; Mishra, D.; Santra, G.H. Applications of Machine Learning Techniques in Agricultural Crop Production: A Review Paper. Indian J. Sci. Technol. 2016, 9, 1–14. [Google Scholar] [CrossRef]
- Song, H.; Wang, J.; Bei, J.; Wang, M. Modified Snake Optimizer Based Multi-Level Thresholding for Color Image Segmentation of Agricultural Diseases. Expert Syst. Appl. 2024, 255, 124624. [Google Scholar] [CrossRef]
- Shang, C.; Zhang, D.; Yang, Y. A Gradient-Based Method for Multilevel Thresholding. Expert Syst. Appl. 2021, 175, 114845. [Google Scholar] [CrossRef]
- Gupta, L.; Sortrakul, T. A Gaussian-Mixture-Based Image Segmentation Algorithm. Pattern Recognit. 1998, 31, 315–325. [Google Scholar] [CrossRef]
- Panjwani, D.K.; Healey, G. Markov Random Field Models for Unsupervised Segmentation of Textured Color Images. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 939–954. [Google Scholar] [CrossRef]
- Chen, Y.; Cheng, N.; Cai, M.; Cao, C.; Yang, J.; Zhang, Z. A Spatially Constrained Asymmetric Gaussian Mixture Model for Image Segmentation. Inf. Sci. 2021, 575, 41–65. [Google Scholar] [CrossRef]
- Trombini, M.; Solarna, D.; Moser, G.; Dellepiane, S. A Goal-Driven Unsupervised Image Segmentation Method Combining Graph-Based Processing and Markov Random Fields. Pattern Recognit. 2023, 134, 109082. [Google Scholar] [CrossRef]
- Yang, Y.; Zhao, X.; Huang, M.; Wang, X.; Zhu, Q. Multispectral Image Based Germination Detection of Potato by Using Supervised Multiple Threshold Segmentation Model and Canny Edge Detector. Comput. Electron. Agric. 2021, 182, 106041. [Google Scholar] [CrossRef]
- Hashim, F.A.; Hussien, A.G. Snake Optimizer: A Novel Meta-Heuristic Optimization Algorithm. Knowledge-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
- Ding, Z.; Li, C.; Huang, R.; Gatenby, C.J.; Metaxas, D.N.; Gore, J.C. A Level Set Method for Image Segmentation in the Presence of Intensity Inhomogeneities with Application to MRI. IEEE Trans. Image Process. 2011, 20, 2007–2016. [Google Scholar] [CrossRef]
- Wang, Z.; Wan, L.; Xiong, N.; Zhu, J.; Ciampa, F. Variational Level Set and Fuzzy Clustering for Enhanced Thermal Image Segmentation and Damage Assessment. NDT E Int. 2021, 118, 102396. [Google Scholar] [CrossRef]
- Yue, J.; Tian, Q. Estimating Fractional Cover of Crop, Crop Residue, and Soil in Cropland Using Broadband Remote Sensing Data and Machine Learning. Int. J. Appl. Earth Obs. Geoinf. 2020, 89, 102089. [Google Scholar] [CrossRef]
- Guerrero, J.M.; Pajares, G.; Montalvo, M.; Romeo, J.; Guijarro, M. Support Vector Machines for Crop/Weeds Identification in Maize Fields. Expert Syst. Appl. 2012, 39, 11149–11155. [Google Scholar] [CrossRef]
- Xu, J.; Zhou, S.; Xu, A.; Ye, J.; Zhao, A. Automatic Scoring of Postures in Grouped Pigs Using Depth Image and CNN-SVM. Comput. Electron. Agric. 2022, 194, 106746. [Google Scholar] [CrossRef]
- Wang, H.; Ma, Z.; Ren, Y.; Du, S.; Lu, H.; Shang, Y.; Hu, S.; Zhang, G.; Meng, Z.; Wen, C.; et al. Interactive Image Segmentation Based Field Boundary Perception Method and Software for Autonomous Agricultural Machinery Path Planning. Comput. Electron. Agric. 2024, 217, 108568. [Google Scholar] [CrossRef]
- Li, Y.; Zhao, H.; Qi, X.; Wang, L.; Li, Z.; Sun, J.; Jia, J. Fully Convolutional Networks for Panoptic Segmentation. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 214–223. [Google Scholar]
- Shaheed, K.; Mao, A.; Qureshi, I.; Kumar, M.; Hussain, S.; Ullah, I.; Zhang, X. DS-CNN: A Pre-Trained Xception Model Based on Depth-Wise Separable Convolutional Neural Network for Finger Vein Recognition. Expert Syst. Appl. 2022, 191, 116288. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Chen, L.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. [Google Scholar] [CrossRef]
- Beeche, C.; Singh, J.P.; Leader, J.K.; Gezer, N.S.; Oruwari, A.P.; Dansingani, K.K.; Chhablani, J.; Pu, J. Super U-Net: A Modularized Generalizable Architecture. Pattern Recognit. 2022, 128, 108669. [Google Scholar] [CrossRef]
- Qi, J.; Liu, X.; Liu, K.; Xu, F.; Guo, H.; Tian, X.; Li, M.; Bao, Z.; Li, Y. An Improved YOLOv5 Model Based on Visual Attention Mechanism: Application to Recognition of Tomato Virus Disease. Comput. Electron. Agric. 2022, 194, 106780. [Google Scholar] [CrossRef]
- Chen, H.; He, Y.; Zhang, L.; Yao, S.; Yang, W.; Fang, Y.; Liu, Y.; Gao, B. A Landslide Extraction Method of Channel Attention Mechanism U-Net Network Based on Sentinel-2A Remote Sensing Images. Int. J. Digit. Earth 2023, 16, 552–577. [Google Scholar] [CrossRef]
- Yue, J.; Tian, Q.; Dong, X.; Xu, K.; Zhou, C. Using Hyperspectral Crop Residue Angle Index to Estimate Maize and Winter-Wheat Residue Cover: A Laboratory Study. Remote Sens. 2019, 11, 807. [Google Scholar] [CrossRef]
- Ding, Y.; Zhang, H.; Wang, Z.; Xie, Q.; Wang, Y.; Liu, L.; Hall, C.C. A Comparison of Estimating Crop Residue Cover from Sentinel-2 Data Using Empirical Regressions and Machine Learning Methods. Remote Sens. 2020, 12, 1470. [Google Scholar] [CrossRef]
- Yue, J.; Fu, Y.; Guo, W.; Feng, H.; Qiao, H. Estimating Fractional Coverage of Crop, Crop Residue, and Bare Soil Using Shortwave Infrared Angle Index and Sentinel-2 MSI. Int. J. Remote Sens. 2022, 43, 1253–1273. [Google Scholar] [CrossRef]
- Zhou, D.; Li, M.; Li, Y.; Qi, J.; Liu, K.; Cong, X.; Tian, X. Detection of Ground Straw Coverage under Conservation Tillage Based on Deep Learning. Comput. Electron. Agric. 2020, 172, 105369. [Google Scholar] [CrossRef]
- Torralba, A.; Russell, B.C.; Yuen, J. LabelMe: Online Image Annotation and Applications. Proc. IEEE 2010, 98, 1467–1484. [Google Scholar] [CrossRef]
- Zhang, S.; Zhang, C. Modified U-Net for Plant Diseased Leaf Image Segmentation. Comput. Electron. Agric. 2023, 204, 107511. [Google Scholar] [CrossRef]
- Chen, J.; Mei, J.; Li, X.; Lu, Y.; Yu, Q.; Wei, Q.; Luo, X.; Xie, Y.; Adeli, E.; Wang, Y.; et al. TransUNet: Rethinking the U-Net Architecture Design for Medical Image Segmentation through the Lens of Transformers. Med. Image Anal. 2024, 97, 103280. [Google Scholar] [CrossRef]
- Qiang, J.; Liu, W.; Li, X.; Guan, P.; Du, Y.; Liu, B.; Xiao, G. Detection of Citrus Pests in Double Backbone Network Based on Single Shot Multibox Detector. Comput. Electron. Agric. 2023, 212, 108158. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar] [CrossRef]
- Hattiya, T.; Dittakan, K.; Musikasuwan, S. Diabetic Retinopathy Detection Using Convolutional Neural Network: A Comparative Study on Different Architectures. Eng. Access 2021, 7, 50–60. [Google Scholar] [CrossRef]
- Lian, X.; Pang, Y.; Han, J.; Pan, J. Cascaded Hierarchical Atrous Spatial Pyramid Pooling Module for Semantic Segmentation. Pattern Recognit. 2021, 110, 107622. [Google Scholar] [CrossRef]
- Du, L.; Lu, Z.; Li, D. Broodstock Breeding Behaviour Recognition Based on Resnet50-LSTM with CBAM Attention Mechanism. Comput. Electron. Agric. 2022, 202, 107404. [Google Scholar] [CrossRef]
- Bai, J.; Li, Y.; Li, J.; Yang, X.; Jiang, Y.; Xia, S.-T. Multinomial Random Forest. Pattern Recognit. 2022, 122, 108331. [Google Scholar] [CrossRef]
- Dong, S. Multi Class SVM Algorithm with Active Learning for Network Traffic Classification. Expert Syst. Appl. 2021, 176, 114885. [Google Scholar] [CrossRef]
- Wang, P.; Fan, E.; Wang, P. Comparative Analysis of Image Classification Algorithms Based on Traditional Machine Learning and Deep Learning. Pattern Recognit. Lett. 2021, 141, 61–67. [Google Scholar] [CrossRef]
- Wang, C.; Antos, S.E.; Triveno, L.M. Automatic Detection of Unreinforced Masonry Buildings from Street View Images Using Deep Learning-Based Image Segmentation. Autom. Constr. 2021, 132, 103968. [Google Scholar] [CrossRef]
- Zhao, H.; Zheng, J.; Wang, Y.; Yuan, X.; Li, Y. Portrait Style Transfer Using Deep Convolutional Neural Networks and Facial Segmentation. Comput. Electr. Eng. 2020, 85, 106655. [Google Scholar] [CrossRef]
- Barbedo, J.G.A. Impact of Dataset Size and Variety on the Effectiveness of Deep Learning and Transfer Learning for Plant Disease Classification. Comput. Electron. Agric. 2018, 153, 46–53. [Google Scholar] [CrossRef]
- Ghasemi, F.; Mehridehnavi, A.; Pérez-Garrido, A.; Pérez-Sánchez, H. Neural Network and Deep-Learning Algorithms Used in QSAR Studies: Merits and Drawbacks. Drug Discov. Today 2018, 23, 1784–1790. [Google Scholar] [CrossRef]
- Liang, Y.; Li, S.; Yan, C.; Li, M.; Jiang, C. Explaining the Black-Box Model: A Survey of Local Interpretation Methods for Deep Neural Networks. Neurocomputing 2021, 419, 168–182. [Google Scholar] [CrossRef]
| Label | Category | Description |
|---|---|---|
| L0 | Crop | Green plants in farmland, mainly soybeans and a small amount of weeds. |
| L1 | Crop residue | Residue after wheat harvesting, mainly consisting of stems; small residues may be labeled as soil. |
| L2 | Soil | Dry, moist, shaded, and illuminated soil; small soil regions may be labeled as crop residue under high crop residue coverage. |
| Actual | Prediction: Positive | Prediction: Negative |
|---|---|---|
| Positive | true positives (TPs) | false negatives (FNs) |
| Negative | false positives (FPs) | true negatives (TNs) |
| Metric | Label | RF | SVM | CCRSNet (VGG16) | CCRSNet (ResNet50) |
|---|---|---|---|---|---|
| Recall | L0 (crop) | 99.11% | 82.24% | 99.21% | 99.37% |
| Recall | L1 (crop residue) | 62.55% | 45.56% | 90.41% | 92.93% |
| Recall | L2 (soil) | 82.87% | 4.80% | 92.67% | 91.41% |
| PA | — | 80.70% | 33.54% | 93.70% * | 93.54% |
| mIoU | — | 71.29% | 21.64% | 87.84% * | 87.72% |
| Metric | Label | RF | SVM | CCRSNet (VGG16) | CCRSNet (ResNet50) |
|---|---|---|---|---|---|
| Recall | L0 (crop) | 98.22% | 87.87% | 98.56% | 98.51% |
| Recall | L1 (crop residues) | 64.35% | 45.56% | 94.27% | 95.43% |
| Recall | L2 (soil) | 78.83% | 4.85% | 96.16% | 94.46% |
| PA | — | 82.47% | 58.00% | 96.23% * | 96.19% |
| mIoU | — | 66.86% | 32.98% | 92.73% * | 92.65% |
| Metric | Label | Exp. 1 (TVD-vali) | Exp. 1 (IVD) | Exp. 2 (TVD-vali) | Exp. 2 (IVD) | Exp. 3 (TVD-vali) | Exp. 3 (IVD) |
|---|---|---|---|---|---|---|---|
| Recall | L0 (crop) | 99.21% | 98.56% | 98.87% | 97.73% | 98.95% | 98.66% |
| Recall | L1 (crop residues) | 90.41% | 94.27% | 90.45% | 94.34% | 92.18% | 93.64% |
| Recall | L2 (soil) | 92.67% | 96.16% | 92.10% | 95.62% | 91.79% | 95.74% |
| PA | — | 93.70% * | 96.23% ** | 93.30% | 95.83% | 93.50% | 95.91% |
| mIoU | — | 87.84% * | 92.73% ** | 87.20% | 91.98% | 87.57% | 92.13% |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Gao, G.; Zhang, S.; Shen, J.; Hu, K.; Tian, J.; Yao, Y.; Tian, Q.; Fu, Y.; Feng, H.; Liu, Y.; et al. Segmentation and Proportion Extraction of Crop, Crop Residues, and Soil Using Digital Images and Deep Learning. Agriculture 2024, 14, 2240. https://doi.org/10.3390/agriculture14122240