PMDNet: An Improved Object Detection Model for Wheat Field Weed
Figure 1. Example of Weeds.
Figure 2. Example of Data Augmentation.
Figure 3. Training Set Labels Distribution: (1) Category Instances; (2) Bounding Box Shapes; (3) Bounding Box Position (x vs. y); (4) Bounding Box Dimensions (width vs. height).
Figure 4. Network structure diagram of the PMDNet model. Note: Stem is the initial preprocessing module for feature extraction; SPPF is a spatial pyramid pooling module that enriches multi-scale contextual information; Concat merges feature maps from different layers to preserve multi-scale details; C2f is a feature fusion component derived from YOLOv8, designed to optimize feature representation; Upsampling increases spatial resolution for fine-grained localization; Downsampling reduces spatial resolution to capture high-level semantic features; P3–P5 denote the output feature maps at different levels. PKI Stage is the core module of the PKINet backbone, MultiScaleFusion is the core module of the self-designed feature fusion layer MSFPN, and DyHead serves as the detection head; these three components are described in detail in the subsequent sections.
Figure 5. Structure diagram of the PKI Stage. Note: FFN refers to the Feed-Forward Network, a fully connected layer used to process and transform feature representations; CAA refers to the Context Anchor Attention module, designed to capture long-range dependencies and enhance central feature regions, improving small-object detection in complex backgrounds; Conv denotes a standard convolutional layer that extracts local spatial features; DWConv refers to Depthwise Convolution, which reduces computation by applying a spatial convolution independently to each channel and is often used to build lightweight networks.
Figure 6. Structure diagram of MultiScaleFusion. Note: PWConv refers to Pointwise Convolution, a 1 × 1 convolution used to adjust the channel dimensions of feature maps, enabling efficient feature transformation and information fusion at minimal computational cost.
Figure 7. Structure diagram of DyHead. Note: hard sigmoid is a piecewise-linear approximation of the sigmoid used to constrain values to [0, 1]; relu denotes the ReLU activation function; avg pool denotes global average pooling; offset refers to the self-learned offsets of the deformable convolution; index indicates the corresponding channel or spatial index; sigmoid is the standard sigmoid function; fc denotes fully connected layers; normalize is the normalization operation applied to the inputs. The symbols α1, α2, β1, and β2 are parameters controlling channel activations and offsets.
Figure 8. Comparison of Training Results Between PMDNet and YOLOv8n Models.
Figure 9. Comparison of Model Prediction Results. Note: the three columns show, from left to right, the ground-truth annotations, the predictions of YOLOv8n, and the predictions of PMDNet.
Figure 10. Field detection results of the PMDNet model in wheat fields.
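The captions for Figures 5 and 6 describe the two lightweight convolutions, DWConv and PWConv, used inside the PKI Stage and MultiScaleFusion modules. As a minimal PyTorch sketch of how such a depthwise-plus-pointwise pair is commonly composed (the class name, channel sizes, and SiLU activation are illustrative assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Illustrative DWConv + PWConv pair (not the paper's exact module).

    DWConv applies one k x k filter per input channel (groups=channels),
    so its spatial cost scales with C*k*k rather than C_in*C_out*k*k.
    PWConv is a 1x1 convolution that mixes channels and adjusts their count.
    """

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.dw = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch, bias=False)
        self.pw = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()  # YOLOv8-style activation; an assumption here

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pw(self.dw(x))))

if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)        # e.g., a P3-sized feature map
    block = DepthwiseSeparableConv(64, 128)
    print(block(x).shape)                 # torch.Size([1, 128, 80, 80])
```

For a k × k kernel, the depthwise stage costs roughly C·k² multiply-accumulates per pixel versus C_in·C_out·k² for a standard convolution, which is the efficiency gain the captions allude to.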
Abstract
1. Introduction
2. Related Work
2.1. Traditional Methods for Weed Identification in Wheat Fields
2.2. Application of Deep Learning in Weed Identification
2.3. Overview of YOLO Model
2.4. Current Research on Weed Identification Based on Improved YOLO Models
- (1) Enhancements in Precision and Detection Capability
- (2) Lightweight Design and Real-Time Optimization
- (3) Multi-Class Identification and Classification
- (4) Incorporating Attention Mechanisms
3. Dataset Construction and Preprocessing
3.1. Dataset Construction
3.2. Data Annotation
3.3. Data Augmentation and Data Splitting
4. Model Optimization and Improvements
4.1. Introduction to YOLOv8 Model
4.2. Model Optimization
4.2.1. Backbone Network: Improved with the Poly Kernel Inception Network (PKINet)
4.2.2. Feature Fusion Layer: Improved with Multi-Scale Feature Pyramid Network (MSFPN)
4.2.3. Detection Head: Improved with Dynamic Head (DyHead)
- (1) Scale-aware Attention
- (2) Spatial-aware Attention
- (3) Task-aware Attention
5. Experiment Design and Result Analysis
5.1. Experimental Setup
5.2. Performance Evaluation Metrics
5.3. Model Performance Comparison
5.4. Ablation Study
5.5. Model Visualization Results
5.5.1. Training Process Visualization
5.5.2. Visualization of Prediction Results
5.6. Testing on Wheat Field Video Sequence
6. Discussion
6.1. Performance Analysis
6.2. Impact of Individual Components
6.3. Significance for Agricultural Applications
6.4. Limitations and Future Work
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Full Form
---|---
MSFPN | Multi-Scale Feature Pyramid Network
YOLO | You Only Look Once
FPN | Feature Pyramid Network
mAP | mean Average Precision
CSPNet | Cross-Stage Partial Network
SPP | Spatial Pyramid Pooling
PAN | Path Aggregation Network
PKINet | Poly Kernel Inception Network
CAA | Context Anchor Attention
FFN | Feed-Forward Network
DWConv | Depthwise Convolution
PWConv | Pointwise Convolution
DyHead | Dynamic Head
SGD | Stochastic Gradient Descent
P | Precision
R | Recall
IoU | Intersection over Union
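IoU underpins the mAP columns in the results tables below: a prediction counts toward precision and recall at mAP@0.5 only if its IoU with a matching ground-truth box reaches 0.5, while mAP@0.5:0.95 averages AP over IoU thresholds from 0.50 to 0.95 in steps of 0.05. A minimal plain-Python IoU sketch for corner-format boxes; the function name and (x1, y1, x2, y2) box format are illustrative assumptions:

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143, below the 0.5 cutoff
```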
References
- Godfray, H.C.J.; Beddington, J.R.; Crute, I.R.; Haddad, L.; Lawrence, D.; Muir, J.F.; Pretty, J.; Robinson, S.; Thomas, S.M.; Toulmin, C. Food security: The challenge of feeding 9 billion people. Science 2010, 327, 812–818.
- Food and Agriculture Organization of the United Nations. The Future of Food and Agriculture: Trends and Challenges; FAO: Rome, Italy, 2017.
- Anwar, M.P.; Islam, A.K.M.M.; Yeasmin, S.; Rashid, M.H.; Juraimi, A.S.; Ahmed, S.; Shrestha, A. Weeds and Their Responses to Management Efforts in a Changing Climate. Agronomy 2021, 11, 1921.
- Colbach, N.; Collard, A.; Guyot, S.H.M.; Mézière, D.; Munier-Jolain, N. Assessing innovative sowing patterns for integrated weed management with a 3D crop:weed competition model. Eur. J. Agron. 2014, 53, 74–89.
- Jalli, M.; Huusela, E.; Jalli, H.; Kauppi, K.; Niemi, M.; Himanen, S.; Jauhiainen, L. Effects of Crop Rotation on Spring Wheat Yield and Pest Occurrence in Different Tillage Systems: A Multi-Year Experiment in Finnish Growing Conditions. Front. Sustain. Food Syst. 2021, 5, 647335.
- Javaid, M.M.; Mahmood, A.; Bhatti, M.I.N.; Waheed, H.; Attia, K.; Aziz, A.; Nadeem, M.A.; Khan, N.; Al-Doss, A.A.; Fiaz, S.; et al. Efficacy of Metribuzin Doses on Physiological, Growth, and Yield Characteristics of Wheat and Its Associated Weeds. Front. Plant Sci. 2022, 13, 866793.
- Usman, K.; Khalil, S.K.; Khan, A.Z.; Khalil, I.H.; Khan, M.A.; Amanullah. Tillage and herbicides impact on weed control and wheat yield under rice–wheat cropping system in Northwestern Pakistan. Soil Tillage Res. 2010, 110, 101–107.
- Shamshiri, R.R.; Rad, A.K.; Behjati, M.; Balasundram, S.K. Sensing and Perception in Robotic Weeding: Innovations and Limitations for Digital Agriculture. Sensors 2024, 24, 6743.
- Reed, N.H.; Butts, T.R.; Norsworthy, J.K.; Hardke, J.T.; Barber, L.T.; Bond, J.A.; Bowman, H.D.; Bateman, N.R.; Poncet, A.M.; Kouame, K.B.J. Ecological implications of row width and cultivar selection on rice (Oryza sativa) and barnyardgrass (Echinochloa crus-galli). Sci. Rep. 2024, 14, 24844.
- Meesaragandla, S.; Jagtap, M.P.; Khatri, N.; Madan, H.; Vadduri, A.A. Herbicide spraying and weed identification using drone technology in modern farms: A comprehensive review. Results Eng. 2024, 21, 101870.
- Rasappan, P.; Delphin, A.; Rani, C.; Sughashini, K.; Kurangi, C.; Nirmala, M.; Farhana, H.; Ahmed, T.; Balamurugan, S.P. Computer Vision and Deep Learning-Enabled Weed Detection Model for Precision Agriculture. Comput. Syst. Sci. Eng. 2022, 44, 2759–2774.
- Razfar, N.; True, J.; Bassiouny, R.; Venkatesh, V.; Kashef, R. Weed detection in soybean crops using custom lightweight deep learning models. J. Agric. Food Res. 2022, 8, 100308.
- Rakhmatulin, I.; Kamilaris, A.; Andreasen, C. Deep Neural Networks to Detect Weeds from Crops in Agricultural Environments in Real-Time: A Review. Remote Sens. 2021, 13, 4486.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Gallo, I.; Rehman, A.U.; Dehkordi, R.H.; Landro, N.; La Grassa, R.; Boschetti, M. Deep Object Detection of Crop Weeds: Performance of YOLOv7 on a Real Case Dataset from UAV Images. Remote Sens. 2023, 15, 539.
- Zhang, Z.; Yang, Y.; Xu, X.; Liu, L.; Yue, J.; Ding, R.; Lu, Y.; Liu, J.; Qiao, H. GVC-YOLO: A Lightweight Real-Time Detection Method for Cotton Aphid-Damaged Leaves Based on Edge Computing. Remote Sens. 2024, 16, 3046.
- Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLOv8, Version 8.0.0; Ultralytics: Frederick, MD, USA, 2023.
- Upadhyay, A.; Sunil, G.C.; Zhang, Y.; Koparan, C.; Sun, X. Development and evaluation of a machine vision and deep learning-based smart sprayer system for site-specific weed management in row crops: An edge computing approach. J. Agric. Food Res. 2024, 18, 101331.
- Ali, M.S.; Rashid, M.R.A.; Hossain, T.; Kabir, M.A.; Kamrul, M.; Aumy, S.H.B.; Mridha, M.H.; Sajeeb, I.H.; Islam, M.M.; Jabid, T. A comprehensive dataset of rice field weed detection from Bangladesh. Data Brief 2024, 57, 110981.
- Coleman, G.R.Y.; Kutugata, M.; Walsh, M.J.; Bagavathiannan, M.V. Multi-growth stage plant recognition: A case study of Palmer amaranth (Amaranthus palmeri) in cotton (Gossypium hirsutum). Comput. Electron. Agric. 2024, 217, 108622.
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271.
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
- Jocher, G. YOLOv5 by Ultralytics, AGPL-3.0 License. 2020. Available online: https://zenodo.org/records/7347926 (accessed on 8 October 2024).
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022, arXiv:2209.02976.
- Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475.
- Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 6105–6114.
- Sportelli, M.; Apolo-Apolo, O.E.; Fontanelli, M.; Frasconi, C.; Raffaelli, M.; Peruzzi, A.; Perez-Ruiz, M. Evaluation of YOLO Object Detectors for Weed Detection in Different Turfgrass Scenarios. Appl. Sci. 2023, 13, 8502.
- Zhu, H.; Zhang, Y.; Mu, D.; Bai, L.; Wu, X.; Zhuang, H.; Li, H. Research on improved YOLOx weed detection based on lightweight attention module. Crop Prot. 2024, 177, 106563.
- Dang, F.; Chen, D.; Lu, Y.; Li, Z. YOLOWeeds: A novel benchmark of YOLO object detectors for multi-class weed detection in cotton production systems. Comput. Electron. Agric. 2023, 205, 107655.
- Chen, J.; Wang, H.; Zhang, H.; Luo, T.; Wei, D.; Long, T.; Wang, Z. Weed detection in sesame fields using a YOLO model with an enhanced attention mechanism and feature fusion. Comput. Electron. Agric. 2022, 202, 107412.
- Xu, M.; Yoon, S.; Fuentes, A.; Park, D.S. A Comprehensive Survey of Image Augmentation Techniques for Deep Learning. Pattern Recognit. 2023, 137, 109347.
- Wang, C.-Y.; Liao, H.-Y.M.; Wu, Y.-H.; Chen, P.-Y.; Hsieh, J.-W.; Yeh, I.-H. CSPNet: A New Backbone That Can Enhance Learning Capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 390–391.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768.
- Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
- Cai, X.; Lai, Q.; Wang, Y.; Wang, W.; Sun, Z.; Yao, Y. Poly Kernel Inception Network for Remote Sensing Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 27706–27716.
- Dai, X.; Chen, Y.; Xiao, B.; Chen, D.; Liu, M.; Yuan, L.; Zhang, L. Dynamic Head: Unifying Object Detection Heads with Attentions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7373–7382.
- Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814.
- Padilla, R.; Netto, S.L.; da Silva, E.A.B. A Survey on Performance Metrics for Object-Detection Algorithms. In Proceedings of the 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Niterói, Brazil, 1–3 July 2020; pp. 237–242.
- Liu, X.; Peng, H.; Zheng, N.; Yang, Y.; Hu, H.; Yuan, Y. EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 14420–14430.
- Ding, X.; Zhang, Y.; Ge, Y.; Zhao, S.; Song, L.; Yue, X.; Shan, Y. UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video, Point Cloud, Time-Series and Image Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 5513–5524.
- Woo, S.; Debnath, S.; Hu, R.; Chen, X.; Liu, Z.; Kweon, I.S.; Xie, S. ConvNeXt V2: Co-Designing and Scaling ConvNets with Masked Autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 16133–16142.
- Chen, H.; Wang, Y.; Guo, J.; Tao, D. VanillaNet: The Power of Minimalism in Deep Learning. In Proceedings of the Advances in Neural Information Processing Systems, New Orleans, LA, USA, 10–16 December 2023; Volume 36.
- Chen, J.; Kao, S.-H.; He, H.; Zhuo, W.; Wen, S.; Lee, C.-H.; Chan, S.-H.G. Run, Don't Walk: Chasing Higher FLOPS for Faster Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 12021–12031.
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458.
- Feng, C.; Zhong, Y.; Gao, Y.; Scott, M.R.; Huang, W. TOOD: Task-Aligned One-Stage Object Detection. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 3490–3499.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149.
- Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. arXiv 2017, arXiv:1708.02002.
- Zhang, S.; Chi, C.; Yao, Y.; Lei, Z.; Li, S.Z. Bridging the Gap Between Anchor-Based and Anchor-Free Detection via Adaptive Training Sample Selection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 9759–9768.
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 10781–10790.
- Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs Beat YOLOs on Real-Time Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 16965–16974.
Backbone | P/% | R/% | mAP@0.5/% | mAP@0.5:0.95/% | F1/%
---|---|---|---|---|---
YOLOv8 (baseline) | 92.0 | 76.4 | 83.6 | 65.7 | 83.5
EfficientViT | 90.0 | 75.3 | 81.9 | 62.1 | 82.0
UniRepLKNet | 92.2 | 75.1 | 82.4 | 62.1 | 82.8
ConvNeXt V2 | 91.0 | 74.7 | 80.7 | 62.3 | 82.0
VanillaNet | 90.8 | 77.1 | 82.9 | 65.0 | 83.4
FasterNet | 93.3 | 74.7 | 82.4 | 62.5 | 83.0
PKINet | 92.7 | 76.3 | 84.8 | 67.7 | 83.7
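Throughout these result tables, P, R, and F1 follow the standard detection definitions in terms of true positives (TP), false positives (FP), and false negatives (FN), with F1 the harmonic mean of P and R. As a quick worked check (my own arithmetic, using the YOLOv8 baseline row), the formula reproduces the tabulated F1:

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F1 = \frac{2PR}{P + R}

F1_{\text{YOLOv8}} = \frac{2 \times 92.0 \times 76.4}{92.0 + 76.4}
                   = \frac{14057.6}{168.4} \approx 83.5
```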
Model | P/% | R/% | mAP@0.5/% | mAP@0.5:0.95/% | F1/%
---|---|---|---|---|---
YOLOv8n (baseline) | 92.0 | 76.4 | 83.6 | 65.7 | 83.5
YOLOv5n | 92.9 | 77.3 | 83.9 | 64.5 | 84.4
YOLOv10n | 89.9 | 75.3 | 83.2 | 64.7 | 82.0
TOOD | 82.0 | 66.7 | 81.9 | 55.3 | 73.6
RT-DETR-L | 92.2 | 75.3 | 80.4 | 60.8 | 82.9
Faster R-CNN | 80.4 | 65.6 | 80.8 | 52.9 | 72.3
RetinaNet | 84.0 | 62.8 | 84.4 | 56.5 | 71.9
ATSS | 84.5 | 63.1 | 83.8 | 58.2 | 72.3
EfficientDet | 84.0 | 63.0 | 83.5 | 57.3 | 72.0
PMDNet (ours) | 94.5 | 76.0 | 85.8 | 69.6 | 84.3
Experiment Number | PKINet | MSFPN | DyHead | P/% | R/% | mAP@0.5/% | mAP@0.5:0.95/% | F1/%
---|---|---|---|---|---|---|---|---
1 | ✗ | ✗ | ✗ | 92.0 | 76.4 | 83.6 | 65.7 | 83.5
2 | ✓ | ✗ | ✗ | 92.7 | 76.3 | 84.8 | 67.7 | 83.7
3 | ✗ | ✓ | ✗ | 93.7 | 75.7 | 84.5 | 68.0 | 83.7
4 | ✗ | ✗ | ✓ | 91.3 | 76.7 | 83.9 | 67.6 | 83.4
5 | ✓ | ✓ | ✗ | 91.9 | 77.0 | 85.0 | 69.2 | 83.8
6 | ✗ | ✓ | ✓ | 93.0 | 75.4 | 84.5 | 69.9 | 83.3
7 | ✓ | ✓ | ✓ | 94.5 | 76.0 | 85.8 | 69.6 | 84.2
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Qi, Z.; Wang, J. PMDNet: An Improved Object Detection Model for Wheat Field Weed. Agronomy 2025, 15, 55. https://doi.org/10.3390/agronomy15010055