Advanced Image Collection, Processing, and Analysis in Crop and Livestock Management

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Digital Agriculture".

Deadline for manuscript submissions: 15 April 2025 | Viewed by 6291

Special Issue Editors


Guest Editor
Panhandle Research and Extension Center, University of Nebraska-Lincoln, Scottsbluff, NE, USA
Interests: application of image analysis in crop and livestock management; advanced crop and livestock modeling; Internet of Things (IoT); precision agriculture

Guest Editor
Biological Systems Engineering, College of Agriculture and Food Sciences, Florida A&M University, Tallahassee, FL, USA
Interests: agricultural engineering; digital agriculture; large-scale hydrologic and water quality modeling; remote sensing applications; environmental system optimization with a focus on green infrastructure and urban sustainability

Special Issue Information

Dear Colleagues,

The integration of imaging technology in agriculture has evolved significantly from basic field photography to sophisticated data collection systems. This transformation, driven by advances in computing and image processing, now empowers more precise agricultural practices through Artificial Intelligence (AI). Today, these technologies are essential for advancing crop and livestock management.

This Special Issue, titled "Advanced Image Collection, Processing, and Analysis in Crop and Livestock Management", underscores the value of image-based data in agriculture, providing insights into crop health and livestock management. The integration of AI, particularly through deep learning, is transforming agricultural practices by enabling more precise and informed decision-making.

The focus extends to innovative image analysis techniques that enhance crop and soil monitoring, disease detection, and yield prediction through data from sensors and images, and to remote sensing for the near real-time, comprehensive management integral to digital agriculture. In livestock management, AI-driven image analysis supports sophisticated health monitoring and behavioral analytics, enabling strategies such as early illness detection and optimized feeding.

The issue further examines how RGB, depth, multispectral, and hyperspectral imaging can augment data quality and utility. It also highlights the value of fast, mobile imaging workflows tailored to modern agriculture's pace, alongside the role of edge computing in managing the significant 'digitization footprint' that these advanced imaging technologies bring to agricultural management. Contributions are encouraged to explore the integration of spatial and edge-processing AI with web-based applications and visual analytics, aiming to enhance both productivity and sustainability in agriculture.
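As a concrete example of the sensor-derived vegetation indices that much of this imaging work builds on, the following minimal numpy sketch computes the NDVI from near-infrared and red reflectance bands. The band values are purely illustrative, not drawn from any dataset in this issue.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized Difference Vegetation Index from NIR and red reflectance bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    # eps guards against division by zero over water or shadow pixels
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance values: healthy vegetation reflects strongly in NIR.
nir_band = np.array([[0.50, 0.45], [0.40, 0.10]])
red_band = np.array([[0.08, 0.10], [0.12, 0.09]])
print(np.round(ndvi(nir_band, red_band), 3))
```

Values near +1 indicate dense, healthy canopy; values near 0 indicate bare soil or senesced vegetation.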

Dr. Weizhen Liang
Dr. Jingqiu Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • crop monitoring
  • image analysis
  • AI
  • deep learning
  • remote sensing
  • digital agriculture
  • edge computing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

30 pages, 22521 KiB  
Article
DBCA-Net: A Dual-Branch Context-Aware Algorithm for Cattle Face Segmentation and Recognition
by Xiaopu Feng, Jiaying Zhang, Yongsheng Qi, Liqiang Liu and Yongting Li
Agriculture 2025, 15(5), 516; https://doi.org/10.3390/agriculture15050516 - 27 Feb 2025
Viewed by 200
Abstract
Cattle face segmentation and recognition in complex scenarios pose significant challenges due to insufficient fine-grained feature representation in segmentation networks and limited modeling of salient regions and local–global feature interactions in recognition models. To address these issues, DBCA-Net, a dual-branch context-aware algorithm for cattle face segmentation and recognition, is proposed. The method integrates an improved TransUNet-based segmentation network with a novel Fusion-Augmented Channel Attention (FACA) mechanism in the hybrid encoder, enhancing channel attention and fine-grained feature representation to improve segmentation performance in complex environments. The decoder incorporates an Adaptive Multi-Scale Attention Gate (AMAG) module, which mitigates interference from complex backgrounds through adaptive multi-scale feature fusion. Additionally, FACA and AMAG establish a dynamic feedback mechanism that enables iterative optimization of feature representation and parameter updates. For recognition, the GeLU-enhanced Partial Class Activation Attention (G-PCAA) module is introduced after Patch Partition, strengthening salient region modeling and enhancing local–global feature interaction. Experimental results demonstrate that DBCA-Net achieves superior performance, with 95.48% mIoU and 97.61% mDSC in segmentation tasks and 95.34% accuracy and 93.14% F1-score in recognition tasks. These findings underscore the effectiveness of DBCA-Net in addressing segmentation and recognition challenges in complex scenarios, offering significant improvements over existing methods.
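The paper's FACA and AMAG modules are specific to DBCA-Net, but the segmentation metrics it reports (mIoU and mDSC) are standard. A minimal numpy sketch of both for a pair of binary masks, purely as a generic illustration:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray):
    """Intersection over Union and Dice coefficient for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0        # empty masks agree perfectly
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

pred_mask = np.array([[1, 1, 0], [0, 0, 0]])
gt_mask = np.array([[1, 1, 1], [0, 0, 0]])
print(iou_and_dice(pred_mask, gt_mask))
```

Mean IoU (mIoU) and mean Dice (mDSC) simply average these per-class or per-image scores.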
Figures: (1) structural diagram of the DBCA-Net algorithm; (2) Fusion-Augmented Channel Attention (FACA) mechanism architecture; (3) Adaptive Multi-Scale Attention Gate (AMAG) architecture; (4) inter-class feature enhancement model for cattle face recognition; (5) GeLU-enhanced Partial Class Activation Attention (G-PCAA) mechanism; (6) local class center and global class representation generation; (7) loss curve of the segmentation network for complex environments; (8) loss and accuracy curves of the G-PCAA-enhanced recognition network; (9) mIoU, mDSC, and mHD95 visualizations for segmentation networks; (10) accuracy, F1-score, and model size across recognition networks.
21 pages, 7934 KiB  
Article
Improved You Only Look Once v.8 Model Based on Deep Learning: Precision Detection and Recognition of Fresh Leaves from Yunnan Large-Leaf Tea Tree
by Chun Wang, Hongxu Li, Xiujuan Deng, Ying Liu, Tianyu Wu, Weihao Liu, Rui Xiao, Zuzhen Wang and Baijuan Wang
Agriculture 2024, 14(12), 2324; https://doi.org/10.3390/agriculture14122324 - 18 Dec 2024
Viewed by 710
Abstract
Yunnan Province, China, known for its superior ecological environment and diverse climate conditions, is home to a rich resource of tea-plant varieties. However, the subtle differences in shape, color, and size among the fresh leaves of different tea-plant varieties pose significant challenges for their identification and detection. This study proposes an improved YOLOv8 model based on a dataset of fresh leaves from five tea-plant varieties among Yunnan large-leaf tea trees. Dynamic Upsampling replaces the UpSample module in the original YOLOv8, reducing the data volume in the training process. The Efficient Pyramid Squeeze Attention Network is integrated into the backbone of the YOLOv8 network to boost the network's capability to handle multi-scale spatial information. To improve model performance and reduce the number of redundant features within the network, a Spatial and Channel Reconstruction Convolution module is introduced. Lastly, Inner-SIoU is adopted to reduce network loss and accelerate the convergence of regression. Experimental results indicate that the improved YOLOv8 model achieves precision, recall, and mAP of 88.4%, 89.9%, and 94.8%, representing improvements of 7.1%, 3.9%, and 3.4% over the original model. The proposed model not only identifies fresh leaves from different tea-plant varieties but also achieves graded recognition, effectively addressing the strong subjectivity of manual identification, the long training times of traditional deep learning models, and high hardware costs. It establishes a robust technical foundation for the intelligent and refined harvesting of tea in Yunnan's tea gardens.
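Inner-SIoU and related losses refine the standard bounding-box IoU used in detection regression. As a baseline illustration only (not the paper's loss function), plain axis-aligned box IoU can be sketched as:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) tuples."""
    # Intersection rectangle, clamped to zero if the boxes do not overlap
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

Variants such as SIoU and Inner-SIoU add angle, distance, and scaled auxiliary-box terms on top of this quantity to speed up convergence.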
Figures: (1) sample leaf images of five tea-plant varieties (Changning Daye Tea; Fo Xiang No. 3; Xiang Gui Yin Hao; Yun Kang No. 10; Yun Shan No. 1); (2) augmented tea-leaf dataset samples; (3) structure of the improved YOLOv8 network; (4) Dynamic Upsampling (DySample) structure and sampling point generator; (5) pyramid squeeze attention (PSA) module and SEWeight module; (6) SPC implementation process; (7) Spatial Reconstruction Unit (SRU) and Channel Reconstruction Unit (CRU) structure; (8) Inner-IoU structure (target, anchor, inner target, and inner anchor boxes); (9) loss-function curves of the improved and original YOLOv8; (10) detection results before and after the improvement; (11) precision, recall, and mAP50 training curves for different improvements; (12) visual heat maps for different improvements; (13) identification results of different models; (14) recognition results under normal, over-dark, and overexposed illumination.
14 pages, 4478 KiB  
Article
A New Kiwi Fruit Detection Algorithm Based on an Improved Lightweight Network
by Yi Yang, Lijun Su, Aying Zong, Wanghai Tao, Xiaoping Xu, Yixin Chai and Weiyi Mu
Agriculture 2024, 14(10), 1823; https://doi.org/10.3390/agriculture14101823 - 16 Oct 2024
Cited by 1 | Viewed by 1193
Abstract
To address the challenges associated with kiwi fruit detection methods, such as low average accuracy, inaccurate recognition of fruits, and long recognition times, this study proposes a novel kiwi fruit recognition method based on an improved lightweight network, the S-YOLOv4-tiny detection algorithm. First, the YOLOv4-tiny algorithm utilizes the CSPdarknet53-tiny network as its backbone feature extraction network, replacing the CSPdarknet53 network in the YOLOv4 algorithm to increase the speed of kiwi fruit recognition. Additionally, a squeeze-and-excitation network is incorporated into the S-YOLOv4-tiny detection algorithm to improve the extraction of kiwi fruit features from images. Finally, augmenting the dataset images with mosaic methods improved the precision of feature recognition for kiwi fruits. The experimental results demonstrate improved recognition and positioning of kiwi fruits: the mean average precision (mAP) stands at 89.75%, with a detection precision of 93.96% and a single-picture detection time of 8.50 ms. Compared to the YOLOv4-tiny detection algorithm, the network in this study exhibits a 7.07% increase in mean average precision and a 1.16% reduction in detection time. Furthermore, an enhancement method based on the Squeeze-and-Excitation Network (SENet) is proposed, as opposed to the convolutional block attention module (CBAM) and efficient channel attention (ECA). This approach effectively addresses the slow training speed and low recognition accuracy of kiwi fruit detection, offering valuable technical insights for efficient mechanical picking methods.
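The squeeze-and-excitation recalibration the authors adopt can be illustrated with a minimal numpy sketch. The random weights here stand in for learned parameters; a trained SENet learns the two fully connected layers end to end.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-excitation over a (C, H, W) feature map.
    w1: (C//r, C) reduction weights; w2: (C, C//r) expansion weights."""
    z = x.mean(axis=(1, 2))                      # squeeze: global average pool -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))    # excitation: FC -> ReLU -> FC -> sigmoid
    return x * s[:, None, None]                  # recalibrate each channel by its weight

rng = np.random.default_rng(0)
C, r = 8, 2                                      # channels and reduction ratio
x = rng.normal(size=(C, 16, 16))
w1 = rng.normal(size=(C // r, C)) * 0.1
w2 = rng.normal(size=(C, C // r)) * 0.1
y = se_block(x, w1, w2)
```

Because the per-channel weights lie in (0, 1), the block suppresses uninformative channels while preserving the feature-map shape.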
Figures: (1) kiwi fruit images with different degrees of occlusion; (2) YOLOv4-tiny network structure; (3) structure of the squeeze-and-excitation network; (4) P-R curves for the four detection methods; (5) mean average precision versus iterations at an IoU threshold of 0.5; (6) detection under different fruit occlusions; (7) detection results of different mainstream networks.
22 pages, 7012 KiB  
Article
A Multi-View Real-Time Approach for Rapid Point Cloud Acquisition and Reconstruction in Goats
by Yi Sun, Qifeng Li, Weihong Ma, Mingyu Li, Anne De La Torre, Simon X. Yang and Chunjiang Zhao
Agriculture 2024, 14(10), 1785; https://doi.org/10.3390/agriculture14101785 - 11 Oct 2024
Viewed by 910
Abstract
The body size, shape, weight, and scoring of goats are crucial indicators for assessing their growth, health, and meat production. The application of computer vision technology to measure these parameters is becoming increasingly prevalent. However, in real farm environments, obstacles such as fences, ground conditions, and dust pose significant challenges for obtaining accurate goat point cloud data. These obstacles lead to difficulties in rapid data extraction and result in incomplete reconstructions, causing substantial measurement errors. To address these challenges, we developed a system for real-time, non-contact acquisition, extraction, and reconstruction of goat point clouds using three depth cameras. The system operates in a scenario where goats walk naturally through a designated channel, and bidirectional distributed triggering logic is employed to ensure real-time acquisition of the point cloud. We also designed a noise recognition and filtering method tailored to handle complex environmental interferences found on farms, enabling automatic extraction of the goat point cloud. Furthermore, a distributed point cloud completion algorithm was developed to reconstruct missing sections of the goat point cloud caused by unavoidable factors such as railings and dust. Measurements of body height, body slant length, and chest circumference were calculated separately, with deviations of no more than 25 mm and an average error of 3.1%. The system processes each goat in an average time of 3–5 s. This method provides rapid and accurate extraction and complementary reconstruction of 3D point clouds of goats in motion on real farms, without human intervention. It offers a valuable technological solution for non-contact monitoring and evaluation of goat body size, weight, shape, and appearance.
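The paper's tailored noise recognition pipeline is more involved, but a basic radius outlier filter of the kind commonly used in point cloud cleanup can be sketched as follows (brute-force and illustrative only; production pipelines would use a KD-tree or a library such as Open3D):

```python
import numpy as np

def radius_filter(points, radius=0.05, min_neighbors=3):
    """Keep points with at least min_neighbors other points within radius.
    points: (N, 3) array. Brute-force O(N^2) pairwise distances."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbor_counts = (d <= radius).sum(axis=1) - 1  # exclude the point itself
    return points[neighbor_counts >= min_neighbors]

# A tight cluster of five points plus one isolated outlier far away.
cluster = np.array([[0, 0, 0], [0.01, 0, 0], [0, 0.01, 0],
                    [0.01, 0.01, 0], [0, 0, 0.01]], dtype=float)
pts = np.vstack([cluster, [[1.0, 1.0, 1.0]]])
kept = radius_filter(pts, radius=0.05, min_neighbors=3)
```

Isolated dust or reflection artifacts have few neighbors and are discarded, while the dense animal surface survives.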
Figures: (1) technical workflow diagram of the system; (2) acquisition equipment deployed in goat farms and goats passing through it; (3) acquisition process overview; (4) point cloud registration result; (5) radius filter schematic (retained versus removed points); (6) distribution and variance of point counts in point cloud clusters around the railing regions; (7) workflow for noise identification and filtering in a goat farm environment; (8) initial identification of the range of segmentation positions; (9) determining the actual segmentation position; (10) schematic of the distributed point cloud completion algorithm (head–neck–carcass and leg–carcass cutting planes); (11) results of the goat extraction and reconstruction algorithm; (12) filtering results for different values of parameter D; (13) comparison of fence noise removal methods; (14) comparison of completion algorithm results on chest, abdominal, and body-surface point clouds.
26 pages, 21442 KiB  
Article
DGS-YOLOv8: A Method for Ginseng Appearance Quality Detection
by Lijuan Zhang, Haohai You, Zhanchen Wei, Zhiyi Li, Haojie Jia, Shengpeng Yu, Chunxi Zhao, Yan Lv and Dongming Li
Agriculture 2024, 14(8), 1353; https://doi.org/10.3390/agriculture14081353 - 13 Aug 2024
Cited by 2 | Viewed by 1464
Abstract
In recent years, the research and application of ginseng, a famous and valuable medicinal herb, have received extensive attention in China and abroad. However, with the gradual increase in the demand for ginseng, discrepancies are inevitable when using the traditional manual method for grading the appearance and quality of ginseng. Addressing these challenges was the primary focus of this study. This study obtained a batch of ginseng samples and enhanced the dataset by data augmentation, based on which we refined the YOLOv8 network in three key dimensions: first, we used the C2f-DCNv2 module and the SimAM attention mechanism to augment the model's effectiveness in recognizing ginseng appearance features, followed by the Slim-Neck combination (GSConv + VoVGSCSP) to lighten the model. These improvements constitute our proposed DGS-YOLOv8 model, which achieved an impressive mAP50 of 95.3% for ginseng appearance quality detection. The improved model not only has fewer parameters and a smaller size but also improves by 6.86%, 2.73%, and 3.82% in precision, mAP50, and mAP50-95 over the YOLOv8n model, comprehensively outperforming the other related models. With its potential demonstrated in this experiment, this technology can be deployed in large-scale production lines to benefit the food and traditional Chinese medicine industries. In summary, the DGS-YOLOv8 model has the advantages of high detection accuracy, small model size, easy deployment, and robustness.
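The data augmentation step can be illustrated with a minimal geometric-augmentation sketch using flips and 90-degree rotations. The paper's exact augmentation pipeline is not specified here, so this is a generic stand-in:

```python
import numpy as np

def augment(image, rng):
    """Random horizontal/vertical flips and a random 90-degree rotation.
    image: (H, W) or (H, W, C) array; rng: numpy random Generator."""
    if rng.random() < 0.5:
        image = image[:, ::-1]       # horizontal flip
    if rng.random() < 0.5:
        image = image[::-1, :]       # vertical flip
    k = int(rng.integers(0, 4))
    return np.rot90(image, k)        # rotate by k * 90 degrees

rng = np.random.default_rng(42)
img = np.arange(16).reshape(4, 4)
out = augment(img, rng)
```

Each transform is a pixel permutation, so labels for whole-image grading carry over unchanged; box-level tasks would additionally transform the annotations.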
Figures: (1) ginseng image acquisition device; (2) ginseng dataset before and after data enhancement; (3) YOLOv8 model structure; (4) improved DGS-YOLOv8 model; (5) 3 × 3 deformable convolution net v2; (6) C2f-DCN network structure; (7) GSConv module structure; (8) VoVGSCSP module structure; (9) SimAM attention mechanism structure; (10) comparison of indicators before and after model improvement; (11) thermal feature visualizations before and after introducing SimAM; (12) mAP50 and mAP50-95 curves for different SimAM placements; (13) comparison of data-enhanced metrics; (14) comparison experiments with other models; (15) detection results of different models.
15 pages, 2006 KiB  
Article
Tracking Free-Ranging Pantaneiro Sheep during Extreme Drought in the Pantanal through Precision Technologies
by Gianni Aguiar da Silva, Sandra Aparecida Santos, Paulo Roberto de Lima Meirelles, Rafael Silvio Bonilha Pinheiro, Marcos Paulo Silva Gôlo, Jorge Luiz Franco, Igor Alexandre Hany Fuzeta Schabib Péres, Laysa Fontes Moura and Ciniro Costa
Agriculture 2024, 14(7), 1154; https://doi.org/10.3390/agriculture14071154 - 16 Jul 2024
Viewed by 943
Abstract
The Pantanal has been facing consecutive years of extreme drought, with an impact on the quantity and quality of available pasture. However, little is known about how locally adapted breeds respond to the distribution of forage resources in this extreme drought scenario. This study aimed to evaluate the movement of free-grazing Pantaneiro sheep using low-cost GPS devices to assess the main grazing sites, measure the daily distance traveled, and determine the energy requirements for walking, with body weight monitoring. In a herd of 100 animals, 31 were selected for weighing, and six ewes were outfitted with GPS collars. GPS data collected on these animals every 10 m from August 2020 to May 2021 were analyzed using the Python programming language. The traveled distance and activity energy requirements (ACT) for horizontal walking (Mcal/d of NEm) were determined. The 31 ewes were weighed at the beginning and end of each season. The available dry matter (DM) and floristic composition of the grazing sites were estimated at the peak of the drought. DM was predicted using power regression with the NDVI (normalized difference vegetation index) (R2 = 0.94). DM estimates averaged 450 kg/ha, ranging from traces to 3830 kg/ha, indicating overall very low values. Individual variation in the frequency of use of grazing sites was observed (p < 0.05), reflecting the distances traveled and the energetic cost of the activity. Distances traveled varied from 3.3 to 17.7 km/d, with an average of 5.9 km/d, indicating low energy expenditure for walking. The traveled distance and ACT remained consistent over time, with no significant differences between seasons (p > 0.05). On average, the ewes' initial weight did not differ from their weight at the drought peak (p > 0.05), indicating that they maintained their initial weight, which is important for locally adapted breeds as it confers robustness and resilience. This study also highlighted the importance of the breed's biodiverse diet during extreme drought, which enabled the selection of forage for energy and nutrient supplementation. The results demonstrated that precision tools such as GPS and satellite imagery enable the study of animals in extensive systems, thereby contributing to decision-making within the production system.
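Daily distance traveled from GPS fixes is typically computed by summing great-circle distances between consecutive positions with the haversine formula. A minimal sketch, assuming a sequence of latitude/longitude fixes rather than the authors' exact processing:

```python
import numpy as np

def track_distance_km(lats, lons, radius_km=6371.0):
    """Total great-circle distance along a sequence of GPS fixes (haversine)."""
    lat = np.radians(np.asarray(lats, dtype=float))
    lon = np.radians(np.asarray(lons, dtype=float))
    dlat, dlon = np.diff(lat), np.diff(lon)
    # Haversine term for each consecutive pair of fixes
    a = np.sin(dlat / 2) ** 2 + np.cos(lat[:-1]) * np.cos(lat[1:]) * np.sin(dlon / 2) ** 2
    return float(np.sum(2 * radius_km * np.arcsin(np.sqrt(a))))

# One degree of latitude along a meridian is about 111.2 km.
d = track_distance_km([0.0, 1.0], [0.0, 0.0])
print(round(d, 2))
```

Summing over all fixes in a day gives the daily distance; in practice, GPS jitter is usually filtered first so stationary noise does not inflate the total.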
Figures: (1) total rainfall of the 2020–21 hydrological year (October to September) compared with the previous hydrological year and the climatological normal (1977–2021), from the Climatological Station of Nhumirim Ranch, Pantanal, MS; (2) location of the study area, with sampling points of the grazing sites and the ten most frequently visited sites; (3) NDVI image with the eight vegetation classes of the study area; (4) random effects of individual ewes on the proximity frequency of the main grazing sites; (5) distance traveled (km/d) and activity energy requirements (Mcal/d of NEm) by season and by ewe.