Search Results (1,687)

Search Parameters:
Keywords = fast mapping

25 pages, 3461 KiB  
Article
Side-Scan Sonar Small Objects Detection Based on Improved YOLOv11
by Chang Zou, Siquan Yu, Yankai Yu, Haitao Gu and Xinlin Xu
J. Mar. Sci. Eng. 2025, 13(1), 162; https://doi.org/10.3390/jmse13010162 (registering DOI) - 18 Jan 2025
Viewed by 95
Abstract
Underwater object detection using side-scan sonar (SSS) remains a significant challenge in marine exploration, especially for small objects. Conventional methods for small object detection face various obstacles, such as difficulties in feature extraction and the considerable impact of noise on detection accuracy. To [...] Read more.
Underwater object detection using side-scan sonar (SSS) remains a significant challenge in marine exploration, especially for small objects. Conventional methods for small object detection face various obstacles, such as difficulties in feature extraction and the considerable impact of noise on detection accuracy. To address these issues, this study proposes an improved YOLOv11 network named YOLOv11-SDC. Specifically, a new Sparse Feature (SF) module is proposed, replacing the Spatial Pyramid Pooling Fast (SPPF) module from the original YOLOv11 architecture to enhance object feature selection. Furthermore, the proposed YOLOv11-SDC integrates a Dilated Reparam Block (DRB) with a C3k2 module to broaden the model’s receptive field. A Content-Guided Attention Fusion (CGAF) module is also incorporated prior to the detection module to assign appropriate weights to the various feature maps, thereby emphasizing the relevant object information. Experimental results clearly demonstrate the superiority of YOLOv11-SDC over several earlier YOLO versions in detection performance. The proposed method was validated through extensive real-world experiments, yielding a precision of 0.934, a recall of 0.698, an mAP@0.5 of 0.825, and an mAP@0.5:0.95 of 0.598. In conclusion, the improved YOLOv11-SDC offers a promising solution for detecting small objects in SSS images, showing substantial potential for marine applications. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Underwater Sonar Images)
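The precision and recall figures reported above come from matching predicted boxes to ground-truth boxes at an IoU threshold (0.5 for mAP@0.5). A minimal sketch of that matching, with illustrative function names not taken from the paper's code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching; preds assumed sorted by confidence,
    and both lists assumed non-empty."""
    matched, tp = set(), 0
    for p in preds:
        best, best_i = 0.0, None
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v > best:
                best, best_i = v, i
        if best >= thr and best_i is not None:
            matched.add(best_i)  # each ground truth matches at most once
            tp += 1
    return tp / len(preds), tp / len(gts)
```

mAP then averages precision over recall levels and IoU thresholds; the sketch shows only the single-threshold matching step.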
18 pages, 3570 KiB  
Article
A Bionic Social Learning Strategy Pigeon-Inspired Optimization for Multi-Unmanned Aerial Vehicle Cooperative Path Planning
by Yankai Shen, Xinan Liu, Xiao Ma, Hong Du and Long Xin
Appl. Sci. 2025, 15(2), 910; https://doi.org/10.3390/app15020910 (registering DOI) - 17 Jan 2025
Viewed by 251
Abstract
This paper proposes a bionic social learning strategy pigeon-inspired optimization (BSLSPIO) algorithm to tackle cooperative path planning for multiple unmanned aerial vehicles (UAVs) with cooperative detection. Firstly, a modified pigeon-inspired optimization (PIO) is proposed, which incorporates a bionic social learning strategy. In this [...] Read more.
This paper proposes a bionic social learning strategy pigeon-inspired optimization (BSLSPIO) algorithm to tackle cooperative path planning for multiple unmanned aerial vehicles (UAVs) with cooperative detection. Firstly, a modified pigeon-inspired optimization (PIO) is proposed, which incorporates a bionic social learning strategy. In this modification, the global best is replaced by the average of the top-ranked solutions in the map and compass operator, while the global center is replaced by the local center in the landmark operator. The paper also proves the algorithm’s convergence and provides complexity analysis. Comparison experiments demonstrate that the proposed method searches for the optimal solution while guaranteeing fast convergence. Subsequently, a path-planning model, detection units’ network model, and cost estimation are constructed. The developed BSLSPIO is utilized to generate feasible paths for UAVs, adhering to time consistency constraints. The simulation results show that the BSLSPIO generates feasible paths at minimum cost and effectively solves the UAVs’ cooperative path-planning problem. Full article
(This article belongs to the Special Issue Design and Application of Bionic Aircraft and Biofuels)
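The map-and-compass modification described above, replacing the single global best with the average of the top-ranked solutions, can be sketched roughly as follows. The 1-D positions, parameter names, and uniform random factor are simplifications of standard PIO, not the paper's exact operators:

```python
import math
import random

def top_k_mean(positions, fitness, k):
    """Average of the k best solutions (minimization)."""
    ranked = sorted(range(len(positions)), key=lambda i: fitness[i])
    return sum(positions[i] for i in ranked[:k]) / k

def map_compass_step(positions, velocities, fitness, t, R=0.2, k=3):
    """One map-and-compass iteration guided by the top-k mean
    instead of the single global-best pigeon."""
    guide = top_k_mean(positions, fitness, k)
    new_pos, new_vel = [], []
    for x, v in zip(positions, velocities):
        v = v * math.exp(-R * t) + random.random() * (guide - x)
        new_vel.append(v)
        new_pos.append(x + v)
    return new_pos, new_vel
```

Averaging several leaders rather than following one best pigeon reduces the pull toward a single (possibly premature) optimum, which is the intuition behind the social learning strategy.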
Show Figures

Figure 1: The standard PIO.
Figure 2: The operations of the proposed BSLSPIO: (a) the modified map and compass operator; (b) the modified landmark operator.
Figure 3: The designed sigmoid functions: (a) sigmoid function 1; (b) sigmoid function 2.
Figure 4: The comparison curves of some tested functions: (a) F3; (b) F5; (c) F9; (d) F11.
Figure 5: The coordination of UAV path planning.
Figure 6: Multi-UAV cooperative path planning.
Figure 7: The relative distances between the UAVs.
Figure 8: The comparison curves of different methods.
20 pages, 11840 KiB  
Article
DBnet: A Lightweight Dual-Backbone Target Detection Model Based on Side-Scan Sonar Images
by Quanhong Ma, Shaohua Jin, Gang Bian, Yang Cui and Guoqing Liu
J. Mar. Sci. Eng. 2025, 13(1), 155; https://doi.org/10.3390/jmse13010155 - 17 Jan 2025
Viewed by 167
Abstract
Due to the large number of parameters and high computational complexity of current target detection models, it is challenging to perform fast and accurate target detection in side-scan sonar images under the existing technical conditions, especially in environments with limited computational resources. Moreover, [...] Read more.
Due to the large number of parameters and high computational complexity of current target detection models, it is challenging to perform fast and accurate target detection in side-scan sonar images under the existing technical conditions, especially in environments with limited computational resources. Moreover, since the original waterfall map of side-scan sonar only consists of echo intensity information, which is usually of a large size, it is difficult to fuse it with other multi-source information, which limits the detection accuracy of models. To address these issues, we designed DBnet, a lightweight target detector featuring two lightweight backbone networks (PP-LCNet and GhostNet) and a streamlined neck structure for feature extraction and fusion. To solve the problem of unbalanced aspect ratios in sonar data waterfall maps, DBnet employs the SAHI algorithm with sliding-window slicing inference to improve small-target detection accuracy. Compared with the baseline model, DBnet has 33% fewer parameters and 31% fewer GFLOPs while maintaining accuracy. Tests performed on two datasets (SSUTD and SCTD) showed that the mAP values improved by 2.3% and 6.6%. Full article
(This article belongs to the Special Issue New Advances in Marine Remote Sensing Applications)
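The SAHI-style sliding-window slicing that DBnet uses for small-target inference can be sketched as follows. Tile generation is the core idea; the real pipeline also runs the detector on each tile and merges overlapping detections with NMS, and all names here are illustrative:

```python
def slice_boxes(width, height, d, overlap=0.2):
    """Return (x1, y1, x2, y2) d x d tiles covering the full image
    with the given fractional overlap between neighbours."""
    step = max(1, int(d * (1 - overlap)))
    xs = list(range(0, max(width - d, 0) + 1, step))
    ys = list(range(0, max(height - d, 0) + 1, step))
    if xs[-1] + d < width:          # make sure the right edge is covered
        xs.append(width - d)
    if ys[-1] + d < height:         # ... and the bottom edge
        ys.append(height - d)
    return [(x, y, x + d, y + d) for y in ys for x in xs]

def to_global(box, tile):
    """Shift a detection from tile-local to whole-image coordinates."""
    x1, y1, x2, y2 = box
    ox, oy = tile[0], tile[1]
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)
```

Because each small target occupies a much larger fraction of a tile than of the original long waterfall map, the detector sees it at an effectively higher resolution, which is why slicing helps with the unbalanced aspect ratio.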
Show Figures

Figure 1: Operation flow chart.
Figure 2: Diagram showing the DBnet model’s structure details.
Figure 3: A schematic of the slices generated with SAHI in a sample. The colored dashed boxes indicate the four neighboring slices P1, P2, P3, and P4 corresponding to X = 4, each of size d × d pixels.
Figure 4: Schematic of the SAHI principle. The blue border is the whole image, the red borders represent the corresponding slices, and the green borders are the detection results.
Figure 5: Schematic diagram of PP-LCNet’s structure. Conv is a standard 3 × 3 convolution. DepthSepConv denotes depth-separable convolution, where DW denotes depth-wise convolution and PW denotes point-wise convolution; SE denotes the Squeeze-and-Excitation module.
Figure 6: GhostConv operation principle.
Figure 7: Ghost bottleneck.
Figure 8: Selected samples from SSUTD and SCTD, both of which contain side-scan sonar images of airplane wrecks, shipwrecks, and drowned people.
Figure 9: (a) The original image; (b–f) the data enhancement results.
Figure 10: The distributions of targets and their sizes.
Figure 11: The normalized confusion matrix of the model.
Figure 12: mAP comparison curves of DBnet and the baseline model.
Figure 13: P-R curves of YOLOv8n and DBnet: (a) P-R curve of the YOLOv8n detector; (b) P-R curve of the DBnet detector.
Figure 14: The orange arrows represent the slicing operation on the original large-size image, and the blue arrows represent the input of each slice into the DBnet detector for prediction.
Figure 15: Comparison of detection effects: (a) results of detecting side-scan sonar images using the baseline model; (b) results of using DBnet on the same images.
20 pages, 7523 KiB  
Article
SMC-YOLO: A High-Precision Maize Insect Pest-Detection Method
by Qinghao Wang, Yongkang Liu, Qi Zheng, Rui Tao and Yong Liu
Agronomy 2025, 15(1), 195; https://doi.org/10.3390/agronomy15010195 - 15 Jan 2025
Viewed by 227
Abstract
Maize is an excellent crop with high yields and versatility, and the extent and frequency of pest outbreaks will have a serious impact on maize yields. Therefore, helping growers accurately identify pest species is important for improving corn yields. Thus, in this study, [...] Read more.
Maize is an excellent crop with high yields and versatility, and the extent and frequency of pest outbreaks have a serious impact on maize yields. Therefore, helping growers accurately identify pest species is important for improving corn yields. Thus, in this study, we propose a pest detector called SMC-YOLO, developed using You Only Look Once (YOLO) v8 as the reference model. First, the Spatial Pyramid Convolutional Pooling Module (SPCPM) is utilized in lieu of the Spatial Pyramid Pooling-Fast (SPPF) module to enrich the diversity of feature information. Subsequently, a Multi-Dimensional Feature-Enhancement Module (MDFEM) is incorporated into the neck network to augment the feature information associated with pests. Finally, a cross-scale feature-level non-local module (CSFLNLM) is placed in front of the detection head, improving its global perception. The results showed that SMC-YOLO achieved excellent results on several metrics, with its F1 Score (F1), mean Average Precision (mAP)@0.50, mAP@0.50:0.95, and mAP@0.75 reaching 83.18%, 86.7%, 60.6%, and 70%, respectively, outperforming YOLOv11. This study provides a more reliable method of pest identification for the development of smart agriculture. Full article
(This article belongs to the Section Pest and Disease Management)
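For context on what the SPCPM replaces: the standard SPPF block applies one small stride-1 max-pool repeatedly and concatenates the intermediate results with the input, so stacked 5 × 5 pools emulate 9 × 9 and 13 × 13 pooling at lower cost. A toy single-channel sketch (real implementations operate on multi-channel tensors):

```python
def maxpool_same(grid, k=5):
    """Stride-1 max-pool with 'same' padding on a 2-D list-of-lists."""
    h, w, r = len(grid), len(grid[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = max(grid[y][x]
                            for y in range(max(0, i - r), min(h, i + r + 1))
                            for x in range(max(0, j - r), min(w, j + r + 1)))
    return out

def sppf(grid, k=5):
    """SPPF: pool three times with the same small kernel and
    concatenate input + all three pooled maps along channels."""
    p1 = maxpool_same(grid, k)
    p2 = maxpool_same(p1, k)       # effective 2k-1 window
    p3 = maxpool_same(p2, k)       # effective 3k-2 window
    return [grid, p1, p2, p3]
```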
Show Figures

Figure 1: Pictures, names and serial numbers of pests.
Figure 2: Number of instances of each type of pest.
Figure 3: General framework structure of the SMC-YOLO network model.
Figure 4: (a) The SPCPM; (b) the parameter description of each module; (c,d) the comparison models used in the ablation experiments for this module.
Figure 5: (a) The MDFEM proposed in this paper; (b) the parameter description of each module; (c,d) the comparison modules used in the ablation experiments.
Figure 6: CSFLNLM general architecture.
Figure 7: Comparison of GC_Net and CSFLNLM: (a) GC_Net generates attention scores; (b) GC_Net single-pixel capture of global information; (c) CSFLNLM generates attention scores; (d) CSFLNLM single-pixel capture of global information.
Figure 8: Comparison of PR curves of SPPF and SPCPM.
Figure 9: Gradient CAM visualization results.
Figure 10: PR curves for the baseline model vs. after adding CSFLNLM.
Figure 11: YOLOv8’s confusion matrix.
Figure 12: SMC-YOLO’s confusion matrix.
Figure 13: Prediction results of SMC-YOLO and other networks. Red circles indicate missed targets; yellow circles indicate incorrectly detected targets.
13 pages, 3150 KiB  
Article
Underwater Target Detection with High Accuracy and Speed Based on YOLOv10
by Zhengliang Hu, Le Cheng, Shui Yu, Pan Xu, Peng Zhang, Rui Tian and Jingqi Han
J. Mar. Sci. Eng. 2025, 13(1), 135; https://doi.org/10.3390/jmse13010135 - 14 Jan 2025
Viewed by 305
Abstract
Underwater target detection exhibits extensive applications in marine target exploration and marine environmental monitoring. However, conventional images of underwater targets present challenges including blurred contour information, complex environmental conditions, and pronounced scattering effects. In this work, an underwater target detection method based on [...] Read more.
Underwater target detection exhibits extensive applications in marine target exploration and marine environmental monitoring. However, conventional images of underwater targets present challenges including blurred contour information, complex environmental conditions, and pronounced scattering effects. In this work, an underwater target detection method based on YOLOv10 is designed, and its detection performance is compared with the YOLOv5 model. Experimental results demonstrate that the YOLOv10 model achieves an mAP50 of 85.6% on the URPC 2020 dataset, improving mAP50 by 1.2% over YOLOv5. The model exhibits high detection accuracy and high processing speed, providing promising support for precise and fast underwater target detection. Full article
(This article belongs to the Special Issue Underwater Target Detection and Recognition)
Show Figures

Figure 1: YOLO detection process.
Figure 2: The network structure of YOLOv10.
Figure 3: The YOLOv10 updated modules: (a) PSA and (b) C2FCIB module.
Figure 4: Prediction results of underwater target detection: (a) YOLOv5; (b) YOLOv10.
Figure 5: The confusion matrix of four targets: (a) YOLOv5; (b) YOLOv10.
Figure 6: The precision-confidence curve: (a) YOLOv5; (b) YOLOv10.
Figure 7: The recall-confidence curve: (a) YOLOv5; (b) YOLOv10.
Figure 8: The F1 score-confidence curve: (a) YOLOv5; (b) YOLOv10.
16 pages, 4833 KiB  
Article
High-Quality Text-to-Image Generation Using High-Detail Feature-Preserving Network
by Wei-Yen Hsu and Jing-Wen Lin
Appl. Sci. 2025, 15(2), 706; https://doi.org/10.3390/app15020706 - 13 Jan 2025
Viewed by 509
Abstract
Multistage text-to-image generation algorithms have shown remarkable success. However, the images produced often lack detail and suffer from feature loss. This is because these methods mainly focus on extracting features from images and text, using only conventional residual blocks for post-extraction feature processing. [...] Read more.
Multistage text-to-image generation algorithms have shown remarkable success. However, the images produced often lack detail and suffer from feature loss. This is because these methods mainly focus on extracting features from images and text, using only conventional residual blocks for post-extraction feature processing. This results in the loss of features, greatly reducing the quality of the generated images and necessitating more resources for feature calculation, which will severely limit the use and application of optical devices such as cameras and smartphones. To address these issues, the novel High-Detail Feature-Preserving Network (HDFpNet) is proposed to effectively generate high-quality, near-realistic images from text descriptions. The initial text-to-image generation (iT2IG) module is used to generate initial feature maps to avoid feature loss. Next, the fast excitation-and-squeeze feature extraction (FESFE) module is proposed to recursively generate high-detail and feature-preserving images with lower computational costs through three steps: channel excitation (CE), fast feature extraction (FFE), and channel squeeze (CS). Finally, the channel attention (CA) mechanism further enriches the feature details. Compared with the state of the art, experimental results obtained on the CUB-Bird and MS-COCO datasets demonstrate that the proposed HDFpNet achieves better performance and visual presentation, especially regarding high-detail images and feature preservation. Full article
(This article belongs to the Special Issue Advanced Image Analysis and Processing Technologies and Applications)
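The channel-attention idea behind the CE/CS steps and the CA mechanism is squeeze-and-excitation-style gating: pool each channel to a single descriptor, compute a gate from it, and rescale the channel. A toy sketch with illustrative per-channel gate weights; the paper's actual FESFE block is more elaborate:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_gate(channels, w1, w2):
    """channels: list of HxW grids; w1, w2: per-channel gate weights
    (real blocks use small fully connected layers across channels)."""
    # squeeze: global average per channel
    desc = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in channels]
    # excite: ReLU then sigmoid gives a gate in (0, 1) per channel
    gates = [sigmoid(w2[i] * max(0.0, w1[i] * d))
             for i, d in enumerate(desc)]
    # rescale: emphasize informative channels, suppress the rest
    return [[[v * gates[i] for v in row] for row in ch]
            for i, ch in enumerate(channels)]
```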
Show Figures

Figure 1: Applications of the proposed HDFpNet model. (a) High-quality image generation from the warning signs on a construction site; (b) high-quality image generation from the descriptive content of machine operation steps; (c) high-quality image generation from information about birds in a tourist area.
Figure 2: Architecture of the proposed HDFpNet. It consists of the iT2IG module, the FESFE module (including the CE, FFE, and CS blocks), and the CA mechanism.
Figure 3: Visual presentation and comparison of images generated by StackGAN [9], AttnGAN [10], DM-GAN [14], and HDFpNet (ours), conditioned on text descriptions from the CUB-Bird [17] test dataset.
Figure 4: Visual presentation and comparison of images generated by StackGAN [9], AttnGAN [10], DM-GAN [14], and HDFpNet (ours), conditioned on text descriptions from the MS-COCO [18] test dataset.
Figure 5: Visual presentation and comparison of images generated by HDFpNet at different stages against the ground truth (GT) in the CUB-Bird dataset [17].
Figure 6: Different styles of images generated by HDFpNet from the same text description.
22 pages, 18757 KiB  
Article
CSGD-YOLO: A Corn Seed Germination Status Detection Model Based on YOLOv8n
by Wenbin Sun, Meihan Xu, Kang Xu, Dongquan Chen, Jianhua Wang, Ranbing Yang, Quanquan Chen and Songmei Yang
Agronomy 2025, 15(1), 128; https://doi.org/10.3390/agronomy15010128 - 7 Jan 2025
Viewed by 319
Abstract
Seed quality testing is crucial for ensuring food security and stability. To accurately detect the germination status of corn seeds during the paper medium germination test, this study proposes a corn seed germination status detection model based on YOLO v8n (CSGD-YOLO). Initially, to [...] Read more.
Seed quality testing is crucial for ensuring food security and stability. To accurately detect the germination status of corn seeds during the paper medium germination test, this study proposes a corn seed germination status detection model based on YOLO v8n (CSGD-YOLO). Initially, to alleviate the complexity encountered in conventional models, a lightweight spatial pyramid pooling fast (L-SPPF) structure is engineered to enhance the representation of features. Simultaneously, a detection module dubbed Ghost_Detection, leveraging the GhostConv architecture, is devised to boost detection efficiency while reducing parameter counts and computational overhead. Additionally, during the downsampling process of the backbone network, a downsampling module based on receptive field attention convolution (RFAConv) is designed to boost the model’s focus on areas of interest. This study further proposes a new module named C2f-UIB-iAFF, based on the faster implementation of cross-stage partial bottleneck with two convolutions (C2f), the universal inverted bottleneck (UIB), and iterative attention feature fusion (iAFF), to replace the original C2f in YOLOv8, streamlining model complexity and augmenting the feature-fusion capability of the residual structure. Experiments conducted on the collected corn seed germination dataset show that CSGD-YOLO requires only 1.91 M parameters and 5.21 G floating-point operations (FLOPs). The detection precision (P), recall (R), mAP@0.5, and mAP@0.5:0.95 achieved are 89.44%, 88.82%, 92.99%, and 80.38%, respectively. Compared with YOLO v8n, CSGD-YOLO improves performance in terms of accuracy, model size, parameter number, and floating-point operation counts by 1.39, 1.43, 1.77, and 2.95 percentage points, respectively. Therefore, CSGD-YOLO outperforms existing mainstream target detection models in detection performance and model complexity, making it suitable for detecting corn seed germination status and providing a reference for rapid germination rate detection. Full article
(This article belongs to the Section Precision and Digital Agriculture)
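The GhostConv idea underlying the Ghost_Detection module is that only part of the output channels come from an ordinary convolution; the remaining "ghost" channels are produced from them by a cheap linear map. A 1-D toy sketch where the cheap op is a per-channel scalar scale (real GhostConv uses a depthwise convolution):

```python
def ghost_features(primary, scales):
    """primary: list of channels (1-D signals) from the ordinary conv;
    scales: one cheap linear op (a scalar here) per primary channel."""
    ghosts = [[v * s for v in ch] for ch, s in zip(primary, scales)]
    return primary + ghosts      # channel concatenation doubles the output

feats = ghost_features([[1.0, 2.0], [3.0, 4.0]], [0.5, 2.0])
```

Because the cheap ops cost far less than full convolutions, the layer halves the parameter and FLOP budget for a given output width, which is exactly the lightweighting the abstract reports.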
Show Figures

Figure 1: The platform for corn germination image collection.
Figure 2: Different germination states of corn seeds in the germination test. (a) Examples of seed germination states. (b) Bounding-box annotations for seed germination states.
Figure 3: Examples of data enhancement.
Figure 4: The distribution of the number of tags.
Figure 5: The structure of YOLO v8n.
Figure 6: The structure of CSGD-YOLO.
Figure 7: The structures of SPPF and L-SPPF.
Figure 8: The structure of the C2f-UIB-iAFF module.
Figure 9: The structure of the Ghost_Detection module.
Figure 10: The structure of the downsampling convolutional module.
Figure 11: Train and test loss curves on different data.
Figure 12: Metrics curves of YOLO v8n.
Figure 13: Confusion matrices of the model test: (a) YOLO v8n; (b) CSGD-YOLO.
21 pages, 3614 KiB  
Article
Power Quality Disturbance Identification Method Based on Improved CEEMDAN-HT-ELM Model
by Ke Liu, Jun Han, Song Chen, Liang Ruan, Yutong Liu and Yang Wang
Processes 2025, 13(1), 137; https://doi.org/10.3390/pr13010137 - 7 Jan 2025
Viewed by 426
Abstract
The issue of power quality disturbances in modern power systems has become increasingly complex and severe, with multiple disturbances occurring simultaneously, leading to a decrease in the recognition accuracy of traditional algorithms. This paper proposes a composite power quality disturbance identification method based [...] Read more.
The issue of power quality disturbances in modern power systems has become increasingly complex and severe, with multiple disturbances occurring simultaneously, leading to a decrease in the recognition accuracy of traditional algorithms. This paper proposes a composite power quality disturbance identification method based on the integration of improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), the Hilbert Transform (HT), and the Extreme Learning Machine (ELM). Addressing the limitations of traditional signal processing techniques in handling nonlinear and non-stationary signals, this study first preprocesses the collected power quality signals using the improved CEEMDAN method to reduce modal aliasing and spurious components, thereby enabling a more precise decomposition of noisy signals into multiple Intrinsic Mode Functions (IMFs). Subsequently, the HT is utilized to analyze the reconstructed signals, extracting their time-amplitude information and instantaneous frequency characteristics. This feature information provides a rich data foundation for subsequent classification and identification. On this basis, an improved ELM is introduced as the classifier, leveraging its powerful nonlinear mapping capabilities and fast learning speed to perform pattern recognition on the extracted features, achieving accurate identification of composite power quality disturbances. To validate the effectiveness and practicality of the proposed method, a simulation experiment is designed. The approach retains a fault diagnosis accuracy exceeding 95%, even amidst significant noise disturbance. In contrast to conventional techniques, such as the Convolutional Neural Network (CNN) and Support Vector Machine (SVM), this method achieves an accuracy enhancement of up to 5%. Following optimization via the Particle Swarm Optimization (PSO) algorithm, the model’s accuracy is boosted by 3.6%, showcasing its favorable adaptability. Full article
(This article belongs to the Special Issue Modeling, Simulation and Control in Energy Systems)
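The HT step described above extracts instantaneous amplitude and frequency from each reconstructed mode via the analytic signal. A sketch using an FFT-based Hilbert transform (equivalent in spirit to scipy.signal.hilbert; function names are illustrative):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies,
    double the positive ones, and invert."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def inst_features(x, fs):
    """Instantaneous amplitude and frequency (Hz) of a real signal."""
    z = analytic_signal(x)
    amp = np.abs(z)
    phase = np.unwrap(np.angle(z))
    freq = np.diff(phase) * fs / (2 * np.pi)
    return amp, freq

# a clean 50 Hz tone should yield amplitude ~1 and frequency ~50 Hz
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
amp, freq = inst_features(np.sin(2 * np.pi * 50 * t), fs)
```

Applied per IMF, these amplitude and frequency traces are the features that feed the ELM classifier.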
Show Figures

Figure 1: Schematic diagram of the ELM structure framework.
Figure 2: The flowchart of the proposed method.
Figure 3: Structural diagram and related introduction of the smart meter.
Figure 4: The decomposition results of harmonic signals obtained through the CEEMDAN algorithm.
Figure 5: The decomposition results of harmonic signals obtained through the EEMD algorithm.
Figure 6: Verification of accuracy with different activation functions.
Figure 7: Structural diagram and related introduction of the fault-recording devices.
Figure 8: Structural topology of the IEEE 33-bus test system.
Figure 9: Diagnostic accuracy under different numbers of disturbance signal sources.
23 pages, 26242 KiB  
Article
The Application of Fast Fourier Transform Filtering to High Spatial Resolution Digital Terrain Models Derived from LiDAR Sensors for the Objective Mapping of Surface Features and Digital Terrain Model Evaluations
by Alberto González-Díez, Ignacio Díaz-Martínez, Pablo Cruz-Hernández, Antonio Barreda-Argüeso and Matthew Doughty
Remote Sens. 2025, 17(1), 150; https://doi.org/10.3390/rs17010150 - 4 Jan 2025
Viewed by 505
Abstract
In this paper, the application of fast Fourier transform filtering (FFT-FR) to high spatial resolution digital terrain models (HR-DTMs) derived from LiDAR sensors is investigated, assessing its efficacy in identifying genuine relief elements, including both natural geological features and anthropogenic landforms. The suitability [...] Read more.
In this paper, the application of fast Fourier transform filtering (FFT-FR) to high spatial resolution digital terrain models (HR-DTMs) derived from LiDAR sensors is investigated, assessing its efficacy in identifying genuine relief elements, including both natural geological features and anthropogenic landforms. The suitability of the derived filtered geomorphic references (FGRs) is evaluated through spatial correlation with ground truths (GTs) extracted from the topographical and geological geodatabases of Santander Bay, Northern Spain. This study reveals that existing artefacts, derived from vegetation or human infrastructure, pose challenges in the construction of the units, and that large physiographic units are better represented using low-pass filters, whereas detailed units are more accurately depicted with high-pass filters. The results indicate a propensity of high-frequency filters to detect anthropogenic elements within the DTM. The quality of the GTs used for validation proves more critical than the geodatabase scale. Additionally, this study demonstrates that the footprints of buildings are not eliminated, indicating that the model is a poorly refined digital surface model (DSM) rather than a true digital terrain model (DTM). Experiments validate the DTM’s capability to highlight contacts and constructions, with water detection showing high precision (≥60%) and varying precision for buildings. Large units are better captured with low-pass filters, whilst high-pass filters effectively detect anthropogenic elements and more detailed units. This facilitates the design of validation and correction procedures for DEMs derived from LiDAR point clouds, enhancing the potential for more accurate and objective Earth surface representation. Full article
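The low-pass/high-pass filtering described above can be sketched as a radial mask in the 2-D frequency domain. The cut-off handling below is illustrative; the paper derives its filter radii (FRs) from the magnitude-frequency plot rather than choosing them arbitrarily:

```python
import numpy as np

def fft_radial_filter(dtm, cutoff, mode="low"):
    """Keep frequencies inside (low-pass) or outside (high-pass) a
    radial cut-off in the centred 2-D spectrum of a DTM array."""
    h, w = dtm.shape
    F = np.fft.fftshift(np.fft.fft2(dtm))        # DC moved to the centre
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2)       # radius from DC
    mask = (r <= cutoff) if mode == "low" else (r > cutoff)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Since the two masks partition the spectrum, the low-pass and high-pass results sum back to the original surface, which is why large physiographic units appear in one band and fine anthropogenic detail in the other.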
Show Figures

Figure 1

Figure 1
<p>Area selected in the three scenarios framed within Santander Bay (Zone_1). The heights proceed from a clip of the DTM called 9400_MDT02-ETRS89-HU30-0035-1-COB2.tif (IGN_2024) elaborated from the 2nd coverture LiDAR (acquired from 2015 to present). (<b>a</b>) Hillshade of the DTM used (the model has an ETRS89 geographic coordinate system with UTM planimetric projection). Inner line purple box corresponds to the area selected to carry out the experiments included in the scenarios 1 and 2. Grey boxes in this zone correspond to the areas selected in the Scenario_3 (a and b). (<b>b</b>) Principal land-use units considered in this study (A, facilities; B, roads and infrastructures; C, buildings; D, natural slopes; and E, water) extracted from BTN [<a href="#B36-remotesensing-17-00150" class="html-bibr">36</a>,<a href="#B37-remotesensing-17-00150" class="html-bibr">37</a>].</p>
Full article ">Figure 2
<p>Ground truths (GTs) utilized in scenarios 1 and 2. (<b>a</b>) Distinction between the ocean and continent (ocean valued as 100, continent valued as 200); (<b>b</b>) delineation of buildings and urban areas (buildings valued as 100, natural terrains valued as 200, water valued as 300); (<b>c</b>) identification of bedrock units (bedrocks valued as 100, remaining units valued as 200); (<b>d</b>) identification of superficial units (superficial units valued as 100, remaining units valued as 200); (<b>e</b>) identification of anthropic landscapes (anthropic landscapes valued as 100, remaining units valued as 200); (<b>f</b>) identification of dolines and karstic depressions (dolines and depressions valued as 100, remaining units valued as 200).</p>
Full article ">Figure 3
<p>Four aspects of the elevation analysis of the Santander Bay area (Zone_1) are presented. (<b>a</b>) The DTM utilized emphasizes elevations in a color range (−24.8 to 130.5 m); the black box corresponds to the area selected in the three scenarios (heights in this area range from −1.02 to 52.87 m); (<b>b</b>) a histogram depicting the distribution of elevations present in the Santander Bay area (mean and standard deviation are indicated); (<b>c</b>) a general view of the magnitude-true frequency plot, with cut-off frequencies (COFs) accentuated (green and orange dots are the maxima and minima of the main harmonics, respectively); the COFs labeled with figures are the filter radii (FR) considered in this study; (<b>d</b>) a detailed view of the magnitude-true frequency plot, showcasing low and medium frequencies, with the COFs emphasized by figures.</p>
Full article ">Figure 4
<p>A visual comparison of the filtered geomorphic references (FGRs) derived in the area of Santander Bay for each cut-off frequency (COF), or filter radius (FR), is presented. Color vectors illustrate the FGRs (in the legend, each FGR is identified as the frequency symbol FC plus the FR figure), all of which are displayed on a shaded relief extracted from <a href="#remotesensing-17-00150-f001" class="html-fig">Figure 1</a> and superimposed in black.</p>
Full article ">Figure 5
<p>A visual comparison is shown between the filtered geomorphic reference models (FGRMs) from Experiment_1 of Scenario_1 and the ground truth (GT) used to distinguish between continental and sea areas (<a href="#remotesensing-17-00150-f002" class="html-fig">Figure 2</a>a). The figure is divided into the following subdivisions: (<b>a</b>) At the top of the rows, FGRMs for each filtering radius, FR (COF in the text), are indicated (each box considered includes the frequency symbol, FC, plus filter figure). The units represented are (1) area detected by the filter as water, and (2) area detected as continent; at the bottom of the rows, the corresponding validation model obtained appears (the spatial intersection units are 101, 102, 201, 202, with 101 and 202 being the hits (H), while the rest are failures). The four rows located at the right side show equivalent results obtained for each FR considered. (<b>b</b>) Global accuracy, GA, and kappa values obtained from each spatial intersection, emphasizing the FR offering the best accuracy indexes.</p>
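The accuracy indexes quoted in these captions (global accuracy, GA, and kappa) can be computed directly from the counts of the spatial-intersection units; in the two-class case above, units 101 and 202 are hits and 102 and 201 are failures. A minimal sketch with invented pixel counts:

```python
import numpy as np

def global_accuracy_and_kappa(confusion):
    """Global accuracy (GA) and Cohen's kappa from a confusion matrix of pixel counts."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    ga = np.trace(confusion) / total                                        # hits / all pixels
    p_e = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2  # chance agreement
    kappa = (ga - p_e) / (1.0 - p_e)
    return ga, kappa

# Rows = ground-truth classes (water, continent); columns = filtered-model classes.
# Diagonal cells correspond to the hit units 101 and 202; counts are invented.
cm = [[900, 100],
      [150, 850]]
ga, kappa = global_accuracy_and_kappa(cm)
```

Kappa discounts the agreement expected by chance from the class marginals, which is why a filter can score a high GA yet a modest kappa when one class dominates the scene.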
Full article ">Figure 6
<p>Visual comparison between the filtered geomorphic reference models (FGRMs) and the spatial validation models derived for the analysis of three land use classes (Experiment_2, Scenario_1). (<b>a</b>) At the top of the rows, FGRMs for each filtering radius, FR (COF in the text), are indicated (each box considered includes the frequency symbol, FC, plus filter figure). The units represented are as follows: (1) area detected by the filter as buildings; (2) area detected as natural slopes; and (3) area detected as water. At the bottom of the rows, the derived validation models appear (the spatial intersection units are: 101, 102, 103, 201, 202, 203, 301, 302, 303, with 101, 202, and 303 being the hits or H, while the rest are failures). (<b>b</b>) Global accuracy, GA, and kappa values derived from each spatial intersection, emphasizing the FR offering the best accuracy indexes.</p>
Full article ">Figure 7
<p>Validation models obtained in Experiment_3 by the spatial crossing between the GT units of substrate rocks and corresponding FGRMs presented in <a href="#app1-remotesensing-17-00150" class="html-app">Figure S1B</a>. On the left, a true magnitude–frequency plot highlighting the result that offers the best accuracy indexes (global accuracy, GA, and kappa). Positions of the frequencies used in filtering or filter radius (FR) are introduced to aid identification. On the right, validation models obtained for each spatial crossing with the FGRMs considered. Filters are presented using acronyms FC plus FR (or filter figure).</p>
Full article ">Figure 8
<p>Validation models obtained in Experiment_4 by the spatial crossing between the GT units of surface units and corresponding FGRMs presented in <a href="#remotesensing-17-00150-f002" class="html-fig">Figure 2</a>d. On the left, a magnitude–frequency plot highlighting the result that offers the best accuracy indexes (global accuracy, GA, and kappa). Positions of the frequencies used in filtering or filter radius (FR) are introduced to aid identification. Filters are presented using acronyms FC plus FR (or filter figure).</p>
Full article ">Figure 9
<p>Validation models obtained in Experiment_5 by the spatial crossing between the GT units of anthropic reliefs and corresponding FGRMs presented in <a href="#app1-remotesensing-17-00150" class="html-app">Figure S3B</a>. On the left, a magnitude-frequency plot highlighting the result that offers the best accuracy indexes (global accuracy, GA, and kappa). Positions of the frequencies used in filtering or filter radius (FR) are introduced to aid identification. Filters are presented using acronyms FC plus FR (or filter figure).</p>
Full article ">Figure 10
<p>Models obtained in Experiment_6 by the spatial crossing between the GT units of dolines and the corresponding FGRMs presented in <a href="#app1-remotesensing-17-00150" class="html-app">Figure S4B</a>. On the left, a magnitude–frequency plot highlighting the result that offers the best accuracy indexes (global accuracy, GA, and kappa). Positions of the frequencies used in filtering or filter radius (FR) are introduced to aid identification. Filters are presented using acronyms FC plus FR (or filter figure).</p>
Full article ">Figure 11
<p>Comparison between the GT of the new surface model and its corresponding FGRMs (Experiment_7). (<b>a</b>) A new GT model obtained for the surface units in the selected study area. (<b>b</b>) Detail of the GT model showing the incorporated geomorphic contacts (black lines) corresponding to the following elements: coastline, depressions, doline fields, and isolated dolines. (<b>c</b>) Filtered geomorphic reference modes (FGRMs) obtained by applying the FFT filters, whose filter radius (FR or cut-off frequencies) appear in the header. Filters are presented using acronyms FC plus FR (or filter figure).</p>
Full article ">Figure 12
<p>Validation models obtained in Experiment_7 by the spatial crossing of the new surface units ground truth (GT) and the corresponding FGRMs presented in <a href="#remotesensing-17-00150-f011" class="html-fig">Figure 11</a>c. On the left, a magnitude–frequency plot highlighting the result that offers the best accuracy indexes (global accuracy, GA, and kappa). Positions of the frequencies used in filtering or filter radius (FR) are introduced to help their identification. Filters are presented using acronyms FC plus FR (or filter figure).</p>
Full article ">Figure 13
<p>Validation of the GT generated for buildings and constructions (Experiment_8) in an urban location (black box area B of <a href="#remotesensing-17-00150-f001" class="html-fig">Figure 1</a>). The respective FGRMs obtained are not presented in the figure. Buildings have a value of 100; the remaining classes receive 200. (<b>a</b>) GT model obtained for this classification along Zone_1, where urban units have a value of 100 and the remaining receive 200. (<b>b</b>) Detail of the GT model for the urban area B (<a href="#remotesensing-17-00150-f001" class="html-fig">Figure 1</a>, area b), using the same values as the previous panel. (<b>c</b>) Validation models obtained from the spatial crossing between the GT and FGRMs. Units 101 and 202 are hits; 102 and 201 are failures. On the right, a table with the accuracy index values obtained for the spatial crossing. Filters are presented using acronyms FC plus FR (or filter figure).</p>
Full article ">
24 pages, 8310 KiB  
Article
NI-LIO: A Hybrid Approach Combining ICP and NDT for Improving Simultaneous Localization and Mapping Performance
by Jie Yu, Ting-Hai Yu, Qing-Yong Zhang and Trong-The Nguyen
Electronics 2025, 14(1), 178; https://doi.org/10.3390/electronics14010178 - 4 Jan 2025
Viewed by 442
Abstract
The accuracy and stability of front-end point cloud registration algorithms are crucial for the mapping and localization precision in laser SLAM (simultaneous localization and mapping) systems. Traditional point-to-line and point-to-plane iterative closest point (ICP) registration algorithms, widely used in SLAM front ends, often [...] Read more.
The accuracy and stability of front-end point cloud registration algorithms are crucial for the mapping and localization precision in laser SLAM (simultaneous localization and mapping) systems. Traditional point-to-line and point-to-plane iterative closest point (ICP) registration algorithms, widely used in SLAM front ends, often suffer from low efficiency, significant data dependency during the matching process, and a propensity for local optima. These registration methods exhibit a more pronounced local optimum issue in large-scale SLAM mapping, thereby diminishing matching accuracy and increasing reliance on initial values. To address these limitations, this paper introduces NI-LIO, a novel SLAM algorithm that integrates ICP with normal distributions transform (NDT) to enhance localization accuracy, computational efficiency and robustness. By combining the precision of ICP with the robustness of NDT, the proposed algorithm significantly improves system stability and localization accuracy. The analysis of mapping and localization experiments indicates a significant reduction in errors compared to traditional SLAM algorithms, with experiments showing an RMSE decrease of over 20%. Compared to the ALOAM, FAST_LIO2 and Lego-LOAM algorithms, the new NI-LIO algorithm shows improvements in both accuracy and stability, enabling the construction of a more precise and consistent global map. This algorithm exhibits excellent adaptability to various environments. Full article
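The ICP half of the hybrid can be illustrated by its core closed-form step: given point correspondences, the rigid transform minimizing the squared registration error has an SVD (Kabsch) solution. A full ICP loop alternates this step with nearest-neighbour matching, and NI-LIO additionally blends in an NDT cost; the sketch below covers only the single alignment step, with invented point data.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Invented cloud: rotate about z by 0.3 rad and translate, then recover the motion.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true

R, t = best_rigid_transform(src, dst)
```

With exact correspondences the motion is recovered in one step; the local-optimum problem the abstract describes arises when the correspondences themselves are wrong, which is where NDT's smoother distribution-based cost helps.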
(This article belongs to the Section Electrical and Autonomous Vehicles)
Show Figures

Figure 1
<p>The laser SLAM process diagram.</p>
Full article ">Figure 2
<p>ICP algorithm: Iterative process for point cloud registration.</p>
Full article ">Figure 3
<p>A workflow of the NDT registration algorithm.</p>
Full article ">Figure 4
<p>An illustration of the local optimal solution in the ICP algorithm.</p>
Full article ">Figure 5
<p>NI-LIO algorithm flowchart.</p>
Full article ">Figure 6
<p>Loose coupling LIO flowchart.</p>
Full article ">Figure 7
<p>Visual representation of cloud data processing in the map.</p>
Full article ">Figure 8
<p>Trajectory comparison between NI-LIO, Lego-LOAM, FAST_LIO2, and ALOAM algorithms on the KITTI dataset.</p>
Full article ">Figure 9
<p>Comparison of absolute pose error (APE) and root mean square error (RMSE) in mapping experiments using the NI-LIO, Lego-LOAM, and ALOAM algorithms on datasets 04, 05 and 07.</p>
Full article ">Figure 10
<p>Comparison of relative pose error (RPE) across three datasets in the KITTI benchmark.</p>
Full article ">
22 pages, 6345 KiB  
Article
Fast Dynamic Time Warping and Hierarchical Clustering with Multispectral and Synthetic Aperture Radar Temporal Analysis for Unsupervised Winter Food Crop Mapping
by Hsuan-Yi Li, James A. Lawarence, Philippa J. Mason and Richard C. Ghail
Agriculture 2025, 15(1), 82; https://doi.org/10.3390/agriculture15010082 - 2 Jan 2025
Viewed by 541
Abstract
Food sustainability has become a major global concern in recent years. Multiple complementary strategies to deal with this issue have been developed; one of these approaches is regenerative farming. The identification and analysis of crop type phenology are required to achieve sustainable regenerative [...] Read more.
Food sustainability has become a major global concern in recent years. Multiple complementary strategies to deal with this issue have been developed; one of these approaches is regenerative farming. The identification and analysis of crop type phenology are required to achieve sustainable regenerative farming. Earth Observation (EO) data have been widely applied to crop type identification using supervised Machine Learning (ML) and Deep Learning (DL) classifications, but these methods commonly rely on large amounts of ground truth data, which usually prevent historical analysis and may be impractical in very remote, very extensive or politically unstable regions. Thus, the development of a robust but intelligent unsupervised classification model is attractive for the long-term and sustainable prediction of agricultural yields. Here, we propose FastDTW-HC, a combination of Fast Dynamic Time Warping (DTW) and Hierarchical Clustering (HC), as a significantly improved method that requires no ground truth input for the classification of winter food crop varieties of barley, wheat and rapeseed in Norfolk, UK. A series of variables is first derived from the EO products, and these include spectral indices from Sentinel-2 multispectral data and backscattered amplitude values at dual polarisations from Sentinel-1 Synthetic Aperture Radar (SAR) data. Then, the phenological patterns of winter barley, winter wheat and winter rapeseed are analysed using FastDTW-HC applied to the time-series created for each variable, between November 2019 and June 2020. Future research will extend this winter food crop mapping analysis using FastDTW-HC modelling to a regional scale. Full article
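The DTW similarity at the core of FastDTW-HC can be sketched with the classic O(n·m) dynamic-programming recurrence; FastDTW approximates the same distance by refining a coarse warp path through successive resolutions. The two short series below are invented to mimic one phenological curve lagging another:

```python
import numpy as np

def dtw_distance(x, y):
    """Exact DTW distance between two 1-D series via the O(n*m) recurrence."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])          # local mismatch
            D[i, j] = cost + min(D[i - 1, j],        # step in x only
                                 D[i, j - 1],        # step in y only
                                 D[i - 1, j - 1])    # diagonal match
    return D[n, m]

# Invented index values: y is x delayed by one observation epoch.
x = [0.2, 0.4, 0.8, 0.9, 0.6, 0.3]
y = [0.2, 0.2, 0.4, 0.8, 0.9, 0.6]
warped = dtw_distance(x, y)
```

Here the warped distance stays small despite the one-step lag, whereas a lock-step (Euclidean-style) comparison of the same series would accumulate the offset at every time point, which is exactly why DTW suits phenological curves observed on shifted calendars.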
Show Figures

Figure 1
<p>The growth stages of winter barley, winter wheat and winter rapeseed from late November to June [<a href="#B33-agriculture-15-00082" class="html-bibr">33</a>,<a href="#B34-agriculture-15-00082" class="html-bibr">34</a>,<a href="#B35-agriculture-15-00082" class="html-bibr">35</a>].</p>
Full article ">Figure 2
<p>(<b>a</b>) Location of Norfolk in the UK, using a Google Earth image (inset), and a Sentinel-2 image map of Norfolk, UK, with the yellow square showing the study area; (<b>b</b>) detailed image of the study area and ground truth point locations for winter barley (orange), wheat (blue) and rapeseed (lilac) from RPA, UK.</p>
Full article ">Figure 3
<p>The flowchart and workflow of this research.</p>
Full article ">Figure 4
<p>The general concepts of the Euclidean and DTW similarity (distance) calculations between pixels X and Y in two time-series.</p>
Full article ">Figure 5
<p>Illustration of a “warp path” between the index values of two pixels in two time-series datasets, X and Y, in an n-by-m matrix of time points, where the “warp path” represents the similarity between the index values of two pixels in time-series n and m.</p>
Full article ">Figure 6
<p>An example of the Fast DTW process on an optimal warping alignment with local neighbourhood adjustments from a 1/8 resolution to the original resolution.</p>
Full article ">Figure 7
<p>A graphical illustration of the hierarchical clustering concept. Five individual (conceptual) clusters (A, B, C, D and E) are clustered according to their similarity (i.e., distance) values. Clusters A and B and clusters D and E then form new clusters of AB and DE, whilst C remains alone. Similarities among AB, DE and the individual cluster C are then used to form the second layer. Since C is more similar to AB, a new ABC cluster is formed whilst DE remains. The final layer gathers all remaining clusters into one large cluster, ABCDE, and the dendrogram of A, B, C, D and E is formed [<a href="#B48-agriculture-15-00082" class="html-bibr">48</a>].</p>
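The merge order described for Figure 7 can be reproduced with a tiny single-linkage agglomeration over five one-dimensional "clusters" A–E. The positions below are invented so that A–B and D–E are the closest pairs, C is nearer to AB than to DE, and a final merge gathers everything, matching the dendrogram narrative:

```python
from itertools import combinations

def single_linkage_merges(points):
    """Agglomerate 1-D clusters by smallest inter-cluster gap; return the merge order."""
    clusters = {name: {name} for name in points}
    merges = []
    while len(clusters) > 1:
        def gap(a, b):   # single linkage: distance between the closest members
            return min(abs(points[p] - points[q])
                       for p in clusters[a] for q in clusters[b])
        a, b = min(combinations(sorted(clusters), 2), key=lambda ab: gap(*ab))
        merged = clusters.pop(a) | clusters.pop(b)
        clusters["".join(sorted(merged))] = merged
        merges.append("".join(sorted(merged)))
    return merges

# Invented positions: A,B adjacent; D,E adjacent; C closer to the AB pair.
pts = {"A": 0.0, "B": 1.0, "C": 3.5, "D": 8.0, "E": 9.2}
order = single_linkage_merges(pts)
```

In FastDTW-HC the scalar gap would be replaced by the DTW distance between pixel time-series, but the layer-by-layer merging logic is the same.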
Full article ">Figure 8
<p>(<b>a</b>) Supervised classification results on winter crops produced by the RPA (RPA, 2021); (<b>b</b>) initial result with the NDVI and the final integration results with R1 to R5 (<b>c</b>–<b>g</b>). Orange represents barley, blue represents wheat and lilac represents rapeseed.</p>
Full article ">Figure 9
<p>Spectral index and amplitude values throughout the growing season for winter varieties of barley (orange), wheat (blue) and rapeseed (lilac).</p>
Full article ">
21 pages, 2535 KiB  
Article
A Game Model and Fault Recovery Algorithm for SDN Multi-Domain
by Tao Xu, Chen Chen, Kaiming Hu and Yi Zhuang
Sensors 2025, 25(1), 164; https://doi.org/10.3390/s25010164 - 30 Dec 2024
Viewed by 494
Abstract
Software-defined networking (SDN) offers an effective solution for flexible management of Wireless Sensor Networks (WSNs) by separating control logic from sensor nodes. This paper tackles the challenge of timely recovery from SDN controller failures and proposes a game theoretic model for multi-domain controllers. [...] Read more.
Software-defined networking (SDN) offers an effective solution for flexible management of Wireless Sensor Networks (WSNs) by separating control logic from sensor nodes. This paper tackles the challenge of timely recovery from SDN controller failures and proposes a game theoretic model for multi-domain controllers. A game-enhanced autonomous fault recovery algorithm for SDN controllers is proposed, which boasts fast fault recovery and low migration costs. Taking into account the remaining capacity of controllers and the transition relationships between devices, the target controller is first selected to establish a controller game domain. The issue of mapping the out-of-control switches within the controller game domain to the target controller is transformed into a linear programming problem for solution. A multi-population particle swarm optimization algorithm with repulsive interaction is employed to iteratively evolve the optimal mapping between controllers and switches. Finally, migration tasks are executed based on the optimal mapping results, and the role transition of the target controller is completed. Comparative experimental results demonstrate that, compared to existing SDN controller fault recovery algorithms, the proposed algorithm can balance the migration cost of switches and the load pressure on controllers while reducing propagation delay in SDN controllers, significantly decreasing the fault recovery time. Full article
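The evolutionary step of the mapping search can be sketched with a plain particle swarm over candidate switch-to-controller assignments: each particle holds a real-valued score matrix whose row-wise argmax decodes to an assignment, and fitness combines migration cost with a capacity penalty. This is a simplified single-population PSO with invented costs and capacities; the paper's multi-population variant with repulsive interaction between populations is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy instance: 6 out-of-control switches, 3 candidate target controllers.
mig_cost = rng.uniform(1.0, 10.0, size=(6, 3))  # invented per-mapping migration cost
capacity = np.array([3, 3, 3])                  # invented remaining capacities

def cost(assign):
    """Total migration cost plus a heavy penalty for exceeding controller capacity."""
    load = np.bincount(assign, minlength=3)
    penalty = 50.0 * np.clip(load - capacity, 0, None).sum()
    return mig_cost[np.arange(6), assign].sum() + penalty

# PSO over continuous score matrices; each row's argmax decodes a controller choice.
n_particles, w, c1, c2 = 20, 0.7, 1.5, 1.5
pos = rng.normal(size=(n_particles, 6, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p.argmax(axis=1)) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    for i, p in enumerate(pos):
        c = cost(p.argmax(axis=1))
        if c < pbest_cost[i]:
            pbest[i], pbest_cost[i] = p.copy(), c
    gbest = pbest[pbest_cost.argmin()].copy()

best_assign = gbest.argmax(axis=1)   # evolved switch-to-controller mapping
```

The penalty term stands in for the load-balance objective: any mapping that overloads a controller is dominated by one that respects the remaining capacities, so the swarm converges to low-cost, feasible mappings.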
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1
<p>Multi-domain controller architecture diagram.</p>
Full article ">Figure 2
<p>Framework diagram of SDN multi-domain controller fault recovery algorithm.</p>
Full article ">Figure 3
<p>Flowchart for target controller selection and game domain construction.</p>
Full article ">Figure 4
<p>Comparison of migration costs for out-of-control switches.</p>
Full article ">Figure 5
<p>Comparison of fault recovery delays.</p>
Full article ">Figure 6
<p>Remaining capacity ratio of target controllers.</p>
Full article ">
28 pages, 31885 KiB  
Article
Comparative Analysis of Mechanistic and Correlative Models for Global and Bhutan-Specific Suitability of Parthenium Weed and Vulnerability of Agriculture in Bhutan
by Sangay Dorji, Stephen Stewart, Asad Shabbir, Ali Bajwa, Ammar Aziz and Steve Adkins
Plants 2025, 14(1), 83; https://doi.org/10.3390/plants14010083 - 30 Dec 2024
Viewed by 728
Abstract
Parthenium weed (Parthenium hysterophorus L.) is one of the most noxious and fast-spreading invasive alien species, posing a major threat to ecosystems, agriculture, and public health worldwide. Mechanistic and correlative species distribution models are commonly employed to determine the potential habitat suitability [...] Read more.
Parthenium weed (Parthenium hysterophorus L.) is one of the most noxious and fast-spreading invasive alien species, posing a major threat to ecosystems, agriculture, and public health worldwide. Mechanistic and correlative species distribution models are commonly employed to determine the potential habitat suitability of parthenium weed. However, a comparative analysis of these two approaches for parthenium weed is lacking, leaving a gap in understanding their relative effectiveness and ability to describe habitat suitability of parthenium weed. This study compared the mechanistic model CLIMEX with random forest (RF), the best-performing of a suite of correlative models. When compared against occurrence records and pseudo-absences, measured by area under the receiver operating characteristic curve, true skill statistic, sensitivity, and specificity, the results revealed higher performance of RF compared to CLIMEX. Globally, RF predicted 7 million km2 (2% of the total land mass) as suitable for parthenium weed, while CLIMEX predicted 20 million km2 (13%). Based on binary maps, RF and CLIMEX identified 67 and 20 countries as suitable, respectively. For Bhutan, globally trained RF predicted 8919 km2 (23% of the country’s total 38,394 km2) as currently suitable, with high suitability in the southern, west–central, central, and eastern districts, particularly along major highways. For the future, the 10 general circulation models downscaled to Bhutan showed a decrease in suitability across four scenarios (SSP126, SSP245, SSP370, SSP585) and three periods (2021–2050, 2051–2080, 2071–2100), with a northward shift in suitable habitats ranging from 2 to 76 km. Additionally, 2049 km2 (23%) of agricultural land is currently at risk of being invaded by parthenium weed.
Correlative and mechanistic models are based on different niche concepts (i.e., realized and fundamental, respectively), and therefore combining them can provide a better understanding of actual and potential species distributions. Given the high suitability of parthenium weed under the current climate and its potential negative impacts in Bhutan, early action such as early detection and control of infested areas, regular survey and monitoring, and creating public awareness are proposed as risk mitigation strategies. Full article
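The binary maps in both models were cut at the threshold maximizing the sum of sensitivity and specificity (Youden's J). That selection can be sketched from predicted suitability scores at occurrence and pseudo-absence points; the scores below are invented for illustration:

```python
import numpy as np

def best_threshold(scores_pos, scores_neg, candidates):
    """Threshold maximizing sensitivity + specificity (Youden's J statistic)."""
    best_t, best_j = None, -np.inf
    for t in candidates:
        sens = np.mean(scores_pos >= t)   # true positive rate at occurrences
        spec = np.mean(scores_neg < t)    # true negative rate at pseudo-absences
        if sens + spec - 1.0 > best_j:
            best_t, best_j = t, sens + spec - 1.0
    return best_t, best_j

# Invented suitability scores at occurrence and pseudo-absence points.
occ = np.array([0.9, 0.8, 0.75, 0.6, 0.55])
absn = np.array([0.7, 0.4, 0.3, 0.2, 0.1])
t, j = best_threshold(occ, absn, candidates=np.arange(0.05, 1.0, 0.05))
```

The same rule yields the 0.588 cut for RF probabilities and the cut of 26 for the CLIMEX index, despite the two models scoring suitability on entirely different scales.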
(This article belongs to the Special Issue Plant Invasions across Scales)
Show Figures

Figure 1
<p>Pseudo-absences were randomly generated within each of the 14 bias layers (colored regions), equal in number to the occurrences enclosed by each bias layer (indicated by the number beside it). Solid lines around each bias layer represent the dissolved buffers, which were used to crop one of the “WorldClim” bioclimatic layers to generate the bias layers. The inset map shows the current distribution of parthenium weed in Bhutan.</p>
Full article ">Figure 2
<p>A variable importance plot of predictor variables based on mean decrease accuracy (<b>A</b>) and mean decrease Gini (<b>B</b>). The mean decrease accuracy plot shows how much the model’s predictive accuracy drops when a variable is excluded, with higher values showing greater importance. The mean decrease in Gini measures how much each variable contributes to the node purity within the decision trees, where higher values indicate a stronger role in improving classification performance. Abbreviation: bio1 = annual mean temperature (°C), bio2 = mean diurnal range (°C), bio4 = temperature seasonality (°C), bio8 = mean temperature of the wettest quarter (°C), bio9 = mean temperature of the driest quarter (°C), bio12 = annual precipitation (mm), bio14 = precipitation of the driest month (mm), bio15 = precipitation seasonality (coefficient of variation) (dimensionless), bio18 = precipitation of the warmest quarter (mm), bio19 = precipitation of the coldest quarter (mm) [<a href="#B35-plants-14-00083" class="html-bibr">35</a>].</p>
Full article ">Figure 3
<p>(<b>A</b>) The global suitability of parthenium weed predicted by random forest, where darker shades of green indicate higher suitability; and (<b>B</b>) the binary suitability map. The binary classification was generated using the threshold value 0.588, which maximized sensitivity (true positive rate) and specificity (true negative rate).</p>
Full article ">Figure 4
<p>(<b>A</b>) The global suitability of parthenium weed predicted by the CLIMEX model, where darker shades of green indicate higher suitability; and (<b>B</b>) the binary suitability map. The binary classification was generated using the threshold value of 26, in which the sum of specificity and sensitivity was maximized.</p>
Full article ">Figure 5
<p>Comparison of predicted suitability of parthenium weed between random forest and CLIMEX. The binary map of random forest (blue) is overlaid on the binary map of CLIMEX (orange) with their suitability overlap (violet). The binary maps were generated using a threshold value that maximized the sensitivity and specificity, which was 0.588 for RF and 26 for CLIMEX.</p>
Full article ">Figure 6
<p>The spatial map of predicted parthenium weed suitability in Bhutan under the current (1985–2015) and future (2051–2080) climate based on the SSP245 scenario, where darker shades of green indicate higher suitability. Higher suitability is typically found along the national highways (red lines), particularly in Punakha to Wangduephodrang, Trongsa to Zhemgang, Mongar to Lhuentse, Mongar to Trashigang, and Trashigang to Trashiyangtse. The names above each plot indicate the names of the general circulation models. The red lines are road networks.</p>
Full article ">Figure 7
<p>Projected changes in parthenium weed distribution under the SSP245 climate scenario for the 2051–2080 period. Gray regions represent areas where the predicted distribution is expected to decrease, while green and red regions highlight areas of no change and gain, respectively. Notably, areas of increased suitability were predicted along highways in Paro and Thimphu. Note: “K” and “M” in the <span class="html-italic">x</span>- and <span class="html-italic">y</span>-axis denote a thousand (×1000) and a million (×1,000,000) meters, respectively.</p>
Full article ">Figure 8
<p>The congruence and uncertainty in predictions across 10 GCMs were assessed using the ensemble mean probability (<b>A</b>), ensemble standard deviation (<b>B</b>), Threshold Agreement Index (<b>C</b>), and threshold-scaled standard deviation (<b>D</b>). The darker green areas indicate strong agreement as well as uncertainty.</p>
Full article ">Figure 9
<p>Maps of the potential impact of parthenium weed on agricultural land in Bhutan under the current and future climate of the SSP245 scenario and 2051–2080 period. The colored areas indicate the total agricultural land available in the country. Green indicates agricultural areas unlikely to be at risk, while red indicates areas likely to be at risk to parthenium weed. Note: “K” and “M” in the <span class="html-italic">x</span>- and <span class="html-italic">y</span>-axis denote a thousand (×1000) and a million (×1,000,000) meters, respectively.</p>
Full article ">Figure 10
<p>The standard deviational ellipse (SDE) and the mean location of parthenium weed distribution. Under the current climate (green), SDE is oriented towards the northeast direction at an angle of 87° (angle of rotation). In the future under the SSP245 scenario and during the 2051–2080 period, the two models that showed the lowest and the highest shift in the mean locations, ACCESS-CM2 (red) and MPI-ESM1-2-LR (blue), predicted a shift of 7 and 50 km northward, respectively.</p>
Full article ">Figure 11
<p>The map of Bhutan showing the predicted suitability of parthenium weed under the current climate (higher green intensity indicating higher suitability), which was validated through field visits. The red points (n = 92) represent locations where parthenium weed was detected during field visits conducted in April 2024. The black dots indicate parthenium weed occurrence records used for model calibration, sourced from multiple datasets (<a href="#app1-plants-14-00083" class="html-app">Supplementary Table S1</a>).</p>
Full article ">
19 pages, 10948 KiB  
Article
Detecting Plant Diseases Using Machine Learning Models
by Nazar Kohut, Oleh Basystiuk, Nataliya Shakhovska and Nataliia Melnykova
Sustainability 2025, 17(1), 132; https://doi.org/10.3390/su17010132 - 27 Dec 2024
Viewed by 462
Abstract
Sustainable agriculture is pivotal to global food security and economic stability, with plant disease detection being a key challenge to ensuring healthy crop production. The early and accurate identification of plant diseases can significantly enhance agricultural practices, minimize crop losses, and reduce the [...] Read more.
Sustainable agriculture is pivotal to global food security and economic stability, with plant disease detection being a key challenge to ensuring healthy crop production. The early and accurate identification of plant diseases can significantly enhance agricultural practices, minimize crop losses, and reduce environmental impacts. This paper presents an innovative approach to sustainable development by leveraging machine learning models to detect plant diseases, focusing on tomato crops—a vital and globally significant agricultural product. Advanced object detection models including YOLOv8 (small and nano variants), Roboflow 3.0 (Fast), EfficientDetV2 (with EfficientNetB0 backbone), and Faster R-CNN (with ResNet50 backbone) were evaluated for their precision, efficiency, and suitability for mobile and field applications. YOLOv8 nano emerged as the optimal choice, offering a mean average precision (MAP) of 98.6% with minimal computational requirements, facilitating its integration into mobile applications for real-time support to farmers. This research underscores the potential of machine learning in advancing sustainable agriculture and highlights future opportunities to integrate these models with drone technology, Internet of Things (IoT)-based irrigation, and disease management systems. Expanding datasets and exploring alternative models could enhance this technology’s efficacy and adaptability to diverse agricultural contexts. Full article
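The mean average precision figures above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes; at the common 0.5 IoU threshold, a prediction counts as a true positive only if it overlaps its label sufficiently. A minimal sketch with invented box coordinates:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Invented boxes: a predicted lesion box vs. its ground-truth label, same size,
# shifted right by half a width: intersection 20x40, union 2400, IoU = 1/3.
pred = (10, 10, 50, 50)
truth = (30, 10, 70, 50)
score = iou(pred, truth)
```

Averaging precision over recall levels for each class, and then over classes, at this matching threshold gives the MAP50 metric reported in the results figures.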
Show Figures

Figure 1
<p>Examples of tomato fruit abnormalities.</p>
Full article ">Figure 2
<p>Examples of unusual object detection labels in the Tomato-Village dataset.</p>
Full article ">Figure 3
<p>Examples of images gathered in the laboratory and natural environment.</p>
Full article ">Figure 4
<p>Class distribution.</p>
Full article ">Figure 5
<p>Dataset visualization: (<b>a</b>) Histogram of object count; (<b>b</b>) Annotation heatmap.</p>
Full article ">Figure 6
<p>Examples of images from fields but with artificially created backgrounds.</p>
Full article ">Figure 7
<p>Examples of images that came from books and websites.</p>
Full article ">Figure 8
<p>Feature pyramid network architecture.</p>
Full article ">Figure 9
<p>Illustration of the framework. (<b>a</b>) FPN backbone, (<b>b</b>) bottom-up path augmentation, (<b>c</b>) adaptive feature pooling, (<b>d</b>) box branch, and (<b>e</b>) fully-connected fusion.</p>
Full article ">Figure 10
<p>The general flow of predicting tomato diseases is shown in the photo on the mobile.</p>
Full article ">Figure 11
<p>Detailed application process flow.</p>
Full article ">Figure 12
<p>Sample of learning curves for YOLOv8 nano trained throughout 80 epochs.</p>
Full article ">Figure 13
<p>Learning curves for YOLOv8 small trained throughout 80 epochs.</p>
Full article ">Figure 14
<p>Detailed application process flow.</p>
Full article ">Figure 15
<p>Detailed application pipeline.</p>
Full article ">Figure 16
<p>Simplified YOLOv8 training pipeline.</p>
Full article ">Figure 17
<p>Model performance regarding MAP50 and inference time on the Tesla T4 GPU.</p>
Full article ">Figure 18
<p>Confusion matrices were calculated for the YOLOv8 nano model as: (<b>a</b>) YOLOv8 nano model on the train dataset; (<b>b</b>) YOLOv8 nano model on the test dataset.</p>
Full article ">Figure 19
<p>Confusion matrices were calculated for the YOLOv8 nano model on the validation dataset.</p>
Full article ">Figure 20
<p>Results of disease detection using the trained YOLOv8 nano model: (<b>a</b>) leaf photographs in field conditions and (<b>b</b>) labeled image from the created dataset.</p>
Full article ">Figure 21
<p>Results of disease detection using the trained YOLOv8 nano model: (<b>a</b>) leaf photographs in field conditions and (<b>b</b>) labeled image from the created dataset.</p>
Full article ">Figure 22
<p>Results of disease detection using the trained YOLOv8 nano model: (<b>a</b>) leaf photographs in laboratory conditions and (<b>b</b>) labeled image from the created dataset.</p>
Full article ">
18 pages, 5813 KiB  
Article
Wind, Wave, and Ice Impacts on the Coastal Zone of the Sea of Azov
by Natalia Yaitskaya and Anastasiia Magaeva
Water 2025, 17(1), 36; https://doi.org/10.3390/w17010036 - 26 Dec 2024
Viewed by 402
Abstract
The coastal zone of the Sea of Azov is a dynamic environment influenced by various natural and anthropogenic factors, including wind, wave action, beach material removal, and cultivation on cliff edges. The coastal zone of freezing seas is also influenced by ice cover during winter. This study investigates the dynamics of the Sea of Azov’s coastal zone during winter (2014–2023), focusing on the impacts of waves and ice, to identify the most vulnerable coastal areas. We analyzed high-resolution satellite imagery and employed mathematical modeling to obtain data on ice pile-up, fast ice formation, wind patterns, and storm wave dynamics within the shallow coastal zone. Long-term wind data revealed an increase in maximum wind speeds in December and January, while February and March showed a decrease or no significant trend across most coastal observation stations. Storm waves (significant wave height) during the cold season can reach heights of 3.26 m, contributing to coastal erosion and instability. While the overall ice cover in the Sea of Azov is decreasing, with fast ice rarely exceeding 0.85% of the total sea area, ice pile-up still occurs almost annually, with the eastern part of Taganrog Bay exhibiting the highest probability of these events. Our analysis identified the primary impacts affecting the shallow coastal zone of the Sea of Azov between 2014 and 2023. A map was generated to illustrate these impacts, revealing that nearly the entire coastline is subject to varying degrees of wave and ice impact. Exceptions include the eastern coast, which experiences minimal fast ice and ice pile-up, with average or lower dynamic loads, and the southern coast, where wind–wave action is the dominant factor. Full article
(This article belongs to the Special Issue Hydroclimate Extremes: Causes, Impacts, and Mitigation Plans)
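The wind finding in the abstract above rests on fitting long-term trends to station wind maxima. The following is a minimal sketch of the kind of least-squares slope such an analysis might use; the December maxima below are invented for illustration and are not the study's station data.

```python
def linear_trend(years, values):
    """Ordinary least-squares slope of values vs. years (units per year)."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den

# Hypothetical December maximum daily-average wind speeds (m/s), 2014-2023.
dec_max = [14.1, 14.5, 13.9, 15.0, 15.2, 14.8, 15.6, 15.9, 15.4, 16.2]
slope = linear_trend(list(range(2014, 2024)), dec_max)
# A positive slope indicates increasing December maxima over the period.
```

A negative or near-zero slope would correspond to the decreasing or no-trend cases the study reports for February and March.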
Show Figures

Figure 1. Study area: (A) geographical location of the study area on the map of Europe and the boundaries of the computational grid used for the WRF model; (B) boundaries of the computational grid used for the SWAN model and the locations of coastal hydrometeorological stations.
Figure 2. Frequency of fast ice occurrence in the Sea of Azov (2014–2023).
Figure 3. Ice pile-up events along the Sea of Azov coastline: (a) January 2012; (b) 24 December 2022; (c) 17 February 2023; (d) 22 March 2012.
Figure 4. Frequency of ice pile-up events along the Sea of Azov coastline.
Figure 5. Maximum daily average wind speeds (m/s) along the Sea of Azov coast during winter months (2014–2023): (a) positive long-term trend; (b) negative long-term trend; (c) no trend.
Figure 6. Frequency of days with average daily wind speeds ≥ 7 m/s along the Sea of Azov coast during winter months (2014–2023).
Figure 7. Maximum significant wave height (m) in the Sea of Azov (2010–2020).
Figure 8. Maximum dynamic load (tf/m²) from wind waves in the Sea of Azov (2010–2020).
Figure 9. Impacts on the coastal zone of the Sea of Azov during winter (2014–2023).