Search Results (947)

Search Parameters:
Keywords = cloud filtering

23 pages, 4583 KiB  
Article
Research on Fine-Scale Terrain Construction in High Vegetation Coverage Areas Based on Implicit Neural Representations
by Yi Zhang, Peipei He, Haihang Jing, Bin He, Weibo Yin, Junzhen Meng, Yuntian Ma, Haifeng Zhang, Bo Zhang and Haoxiang Shen
Sustainability 2025, 17(3), 1320; https://doi.org/10.3390/su17031320 - 6 Feb 2025
Viewed by 294
Abstract
Due to the high-density coverage of vegetation, the complexity of terrain, and occlusion issues, ground point extraction faces significant challenges. Airborne Light Detection and Ranging (LiDAR) technology plays a crucial role in complex mountainous areas. This article proposes a method for constructing fine terrain in high vegetation coverage areas based on implicit neural representation. The method consists of data preprocessing, multi-scale and multi-feature high-difference point cloud initial filtering, and an upsampling module based on implicit neural representation. First, the regional point cloud data are preprocessed; then, K-dimensional trees (K-d trees) are used to construct spatial indexes, and spherical neighborhood methods are applied to capture the geometric and physical information of the point clouds for multi-feature fusion, enhancing the distinction between terrain and non-terrain elements. Subsequently, a differential model is constructed from Digital Surface Models (DSMs) at different scales, and the elevation variation coefficient is calculated to determine the threshold for extracting the initial set of ground points. Finally, the upsampling module based on implicit neural representation refines the initial ground point set, providing a complete and uniformly dense ground point set for the subsequent construction of fine terrain. To validate the performance of the proposed method, three sets of point cloud data from mountainous terrain with different features were selected as the experimental areas. The experimental results indicate that, qualitatively, the proposed method significantly improves the classification of vegetation, buildings, and roads, with clear boundaries between different types of terrain. Quantitatively, the Type I errors for the three selected regions are 4.3445%, 5.0623%, and 5.9436%; the Type II errors are 5.7827%, 6.8516%, and 7.3478%; and the overall errors are 5.3361%, 6.4882%, and 6.7168%, respectively. The Kappa coefficients of all measurement areas exceed 80%, indicating that the proposed method performs well in complex mountainous environments. The method provides point cloud data support for the construction of wind and photovoltaic bases in China, reduces potential damage to the ecological environment caused by construction activities, and contributes to the sustainable development of ecology and energy.
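The abstract above describes indexing the point cloud with a K-d tree and querying spherical neighborhoods to compute per-point features such as an elevation variation measure. The SciPy sketch below illustrates that general idea; the neighborhood radius, the exact feature definition, and the ground threshold are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def elevation_variation(points, radius=2.0):
    """Per-point coefficient of elevation variation over a spherical neighborhood.

    points: (N, 3) array of x, y, z coordinates.
    radius: neighborhood radius in metres (assumed value).
    """
    tree = cKDTree(points)                         # K-d tree spatial index
    coeffs = np.empty(len(points))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)   # spherical neighborhood
        z = points[idx, 2]
        coeffs[i] = z.std() / (abs(z.mean()) + 1e-9)
    return coeffs

# Candidate ground points: low local elevation variation (threshold is assumed).
# ground_mask = elevation_variation(points) < 0.05
```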
Show Figures

Figure 1: Overview of the location of the wind and photovoltaic project in the experimental area.
Figure 2: Flowchart of the fine point cloud filtering method for dense vegetation coverage in complex mountainous areas.
Figure 3: Multi-feature neighborhood construction model. In the K-d tree, the red, green, and blue lines divide the cube into two, four, and eight parts, respectively; the final 8 subspaces are leaf nodes. In the spherical neighborhood map, black dots are the current point, blue dots are points within the neighborhood of the current point, and the remaining points are terrain points in the neighborhood of the previous point.
Figure 4: Application of the implicit neural representation upsampling module to point clouds of complex mountainous terrain.
Figure 5: Results obtained with different upsampling scales for the same input.
Figure 6: 4× upsampling point cloud data results.
Figure 7: Results of processing the point cloud data of Area c.
Figure 8: The DEM of the complex mountainous terrain generated after processing with the proposed method.
Figure 9: Point cloud image and DEM for Area b.
Figure 10: Maps of Areas c, d, and e, along with their corresponding DEMs.
20 pages, 4669 KiB  
Article
Monitoring Mangrove Phenology Based on Gap Filling and Spatiotemporal Fusion: An Optimized Mangrove Phenology Extraction Approach (OMPEA)
by Yu Hong, Runfa Zhou, Jinfu Liu, Xiang Que, Bo Chen, Ke Chen, Zhongsheng He and Guanmin Huang
Remote Sens. 2025, 17(3), 549; https://doi.org/10.3390/rs17030549 - 6 Feb 2025
Viewed by 318
Abstract
Monitoring mangrove phenology requires frequent, high-resolution remote sensing data, yet satellite imagery often suffers from coarse resolution and cloud interference. Traditional methods, such as denoising and spatiotemporal fusion, face limitations: denoising algorithms usually enhance temporal resolution without improving spatial quality, while spatiotemporal fusion models struggle with prolonged data gaps and heavy noise. This study proposes an optimized mangrove phenology extraction approach (OMPEA), which integrates Landsat and MODIS data with a denoising algorithm (Gap Filling and Savitzky–Golay filtering, GF–SG) and a spatiotemporal fusion model (the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model, ESTARFM). The key to OMPEA is that the GF–SG algorithm fills data gaps caused by cloud cover and satellite transit gaps, providing high-quality input to ESTARFM and improving the accuracy of its NDVI imagery reconstruction for mangrove phenology extraction. By conducting experiments on the GEE platform, OMPEA generates 1-day, 30 m NDVI imagery, from which phenological parameters (the start (SoS), end (EoS), length (LoS), and peak (PoS) of the growing season) are derived using the maximum separation (MS) method. Validation in four mangrove areas along the coast of China shows that OMPEA significantly improves the ability to capture mangrove phenology in the presence of incomplete data. OMPEA substantially increased the amount of usable data, adding 7–33 Landsat images and 318–415 MODIS images per region. The generated NDVI series exhibits strong spatiotemporal consistency with the original data (R2: 0.788–0.998, RMSE: 0.007–0.253) and reveals earlier SoS and longer LoS at lower latitudes. Cross-correlation analysis shows a 2–3 month lagged effect of temperature on mangrove growth, with precipitation having minimal impact. The proposed OMPEA improves the ability to capture mangrove phenology under non-continuous and low-resolution data, providing valuable insights for large-scale and long-term mangrove conservation and management.
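The GF–SG step named in the abstract combines gap filling with Savitzky–Golay smoothing of NDVI time series. A minimal sketch of the Savitzky–Golay part with SciPy follows; the window length, polynomial order, and the linear-interpolation gap fill are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

def fill_and_smooth_ndvi(ndvi, window_length=7, polyorder=2):
    """Linearly interpolate missing NDVI values, then apply SG smoothing.

    ndvi: 1-D array of a pixel's NDVI time series with np.nan for cloudy dates.
    """
    t = np.arange(len(ndvi))
    valid = ~np.isnan(ndvi)
    filled = np.interp(t, t[valid], ndvi[valid])        # simple gap fill
    return savgol_filter(filled, window_length, polyorder)

# Example: a short series with two cloud-contaminated observations.
series = np.array([0.55, 0.58, np.nan, 0.63, 0.66, np.nan, 0.71, 0.70])
print(fill_and_smooth_ndvi(series))
```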
Show Figures

Figure 1: Map of study area: (a) the overall distribution of the study area; (b1–b4) Zhangjiangkou National Mangrove Nature Reserve (ZNR) in Fujian Province, Qi’ao Island Provincial Nature Reserve (QPR) in Guangdong Province, Beilun Estuary National Nature Reserve (BNR) in Guangxi Province, and Dongzhaigang National Mangrove Nature Reserve (DNR) in Hainan Province.
Figure 2: Workflow of mangrove phenology extraction based on OMPEA.
Figure 3: Landsat 8 NDVI (16-day 30 m) and denoised Landsat NDVI (16-day 30 m) generated by OMPEA. Gray pixels indicate pixels with no data.
Figure 4: MODIS NDVI (1-day 500 m) and denoised MODIS NDVI (1-day 30 m) generated by OMPEA. Gray pixels indicate pixels with no data.
Figure 5: The OMPEA-generated fused NDVI imagery. Gray pixels indicate pixels with no data.
Figure 6: Scatter density plots and marginal histograms of fused NDVI and denoised Landsat NDVI.
Figure 7: Composite scatter plots and line plots of various NDVI time series.
Figure 8: Fused NDVI time-series curve and phenological parameters.
Figure 9: Boxplots of mangrove phenological parameters.
Figure 10: Time-series curves for fused NDVI, precipitation, and temperature, and their lagged time-series curves with corresponding lag days.
Figure 11: The OMPEA-generated fused NDVI in QPR from 17 January 2020 to 24 March 2021. (a) Denoised Landsat 8 NDVI over the full time range. (b) Denoised Landsat 8 NDVI across three different time ranges; (c,d) fused NDVI using (a,b) as inputs, respectively. Gray pixels indicate pixels with no data.
19 pages, 11928 KiB  
Article
Point Cloud Vibration Compensation Algorithm Based on an Improved Gaussian–Laplacian Filter
by Wanhe Du, Xianfeng Yang and Jinghui Yang
Electronics 2025, 14(3), 573; https://doi.org/10.3390/electronics14030573 - 31 Jan 2025
Viewed by 424
Abstract
In industrial environments, steel plate surface inspection plays a crucial role in quality control. However, vibrations during laser scanning can significantly impact measurement accuracy. While traditional vibration compensation methods rely on complex dynamic modeling, they often face challenges in practical implementation and generalization. This paper introduces a novel point cloud vibration compensation algorithm that combines an improved Gaussian–Laplacian filter with adaptive local feature analysis. The key innovations include (1) an FFT-based vibration factor extraction method that effectively identifies vibration trends, (2) an adaptive windowing strategy that automatically adjusts based on local geometric features, and (3) a weighted compensation mechanism that preserves surface details while reducing vibration noise. The algorithm demonstrated significant improvements in signal-to-noise ratio: 15.78% for simulated data, 6.81% for precision standard parts, and 12.24% for actual industrial measurements. Experimental validation confirms the algorithm's effectiveness across different conditions. This approach provides a practical, implementable solution for steel plate surface inspection.
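The abstract's first innovation is an FFT-based extraction of the low-frequency vibration trend from a scanned laser line. The NumPy sketch below illustrates that general idea; the number of low-frequency bins and the subtraction-based compensation are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def remove_vibration_trend(profile, cutoff_bins=5):
    """Estimate a low-frequency vibration trend via the FFT and subtract it.

    profile: 1-D array of z heights along one laser line.
    cutoff_bins: number of low-frequency bins treated as vibration (assumed).
    """
    spectrum = np.fft.rfft(profile)
    trend_spectrum = np.zeros_like(spectrum)
    trend_spectrum[1:cutoff_bins] = spectrum[1:cutoff_bins]  # slow, non-DC components
    trend = np.fft.irfft(trend_spectrum, n=len(profile))
    return profile - trend, trend

# compensated, trend = remove_vibration_trend(z_line)
```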
Show Figures

Figure 1: Algorithm flowchart.
Figure 2: Overall approach diagram. (Standard bearing diameter: 129.991 ± 0.001 mm; the solid arrow indicates processing using our algorithm.)
Figure 3: Original data, trend plot, and comparison before and after vibration compensation for a random laser line of simulated data.
Figure 4: Comparison of differentials, variance, curvature, and features before and after vibration compensation for a random laser line of simulated data.
Figure 5: Frequency–power spectral density plot before and after vibration compensation for a random laser line of simulated data.
Figure 6: On-site experimental setup.
Figure 7: Experimental workflow diagram.
Figure 8: Point cloud registration results and local magnification.
Figure 9: Original data, trend plot, and comparison before and after vibration compensation for a random laser line of the standard bearing data.
Figure 10: Comparison of differentials, variance, curvature, and features before and after vibration compensation for a random laser line of standard bearing data.
Figure 11: Frequency–power spectral density plot before and after vibration compensation for a random laser line of standard bearing data.
Figure 12: Original data, trend plot, and comparison before and after vibration compensation for a random laser line of actual plane data.
Figure 13: Comparison of differentials, variance, curvature, and features before and after vibration compensation for a random laser line of actual plane data.
Figure 14: Frequency–power spectral density plot before and after vibration compensation for a random laser line of actual plane data.
Figure 15: Comparison before and after Gaussian smoothing only, for a random laser line of actual plane data.
Figure 16: Comparison before and after the Laplacian operator only, for a random laser line of actual plane data.
Figure 17: Comparison before and after the improved Gaussian–Laplacian filter for a random laser line of actual plane data.
Figure 18: Point cloud comparison and local magnification before and after vibration compensation of actual plane data.
21 pages, 3449 KiB  
Article
Indian Land Carbon Sink Estimated from Surface and GOSAT Observations
by Lorna Nayagam, Shamil Maksyutov, Rajesh Janardanan, Tomohiro Oda, Yogesh K. Tiwari, Gaddamidi Sreenivas, Amey Datye, Chaithanya D. Jain, Madineni Venkat Ratnam, Vinayak Sinha, Haseeb Hakkim, Yukio Terao, Manish Naja, Md. Kawser Ahmed, Hitoshi Mukai, Jiye Zeng, Johannes W. Kaiser, Yu Someya, Yukio Yoshida and Tsuneo Matsunaga
Remote Sens. 2025, 17(3), 450; https://doi.org/10.3390/rs17030450 - 28 Jan 2025
Viewed by 560
Abstract
The carbon sink over land plays a key role in the mitigation of climate change by removing carbon dioxide (CO2) from the atmosphere. Accurately assessing land sink capacity across regions should contribute to better future climate projections and help guide the mitigation of global emissions towards the Paris Agreement. This study estimates terrestrial CO2 fluxes over India using a high-resolution global inverse model that assimilates surface observations from the global observation network and the Indian subcontinent, airborne sampling from Brazil, and data from the Greenhouse gases Observing SATellite (GOSAT). The inverse model optimizes terrestrial biosphere fluxes and ocean–atmosphere CO2 exchanges independently, and it obtains CO2 fluxes over large land and ocean regions that are comparable to a multi-model estimate from a previous model intercomparison study. The sensitivity of the optimized fluxes to the weights of the GOSAT data and regional surface station data in the inverse calculations is also examined. It was found that the carbon sink over the South Asian region is reduced when the weight of the GOSAT data is reduced along with stricter data filtering. Over India, our result shows a carbon sink of 0.040 ± 0.133 PgC yr−1 using both GOSAT and global surface data, while the sink increases to 0.147 ± 0.094 PgC yr−1 when data from the Indian subcontinent are added. This demonstrates that surface observations from the Indian subcontinent provide a significant additional constraint on the flux estimates, suggesting an increased sink over the region. Thus, this study highlights the importance of Indian sub-continental measurements in estimating terrestrial CO2 fluxes over India. Additionally, the findings suggest that obtaining robust estimates solely from GOSAT data could be challenging, since the yield of GOSAT data varies significantly across seasons, particularly with increased rain and cloud frequency.
(This article belongs to the Special Issue Remote Sensing of Carbon Fluxes and Stocks II)
Show Figures

Figure 1: The locations of the Indian observation sites (orange squares), global surface observations from ObsPack (blue dots), and GOSAT observations (grey dots). The data site SNG, used for independent validation, is also shown (red star).
Figure 2: The South Asian flux estimates obtained by a subset of the OCO-2 MIP models (see [15] for individual model details) and our model. CT, TM5-4DVAR, CAMS, OU, and Baker are the selected OCO-2 MIP models. ISG, ISGH, and I4SG represent the different inversions carried out with the NTFVAR model.
Figure 3: Time series of observation (green), forward (red), and optimized (blue) simulations for the selected background sites Syowa (SYO), Pallas (PAL), Hyytiälä (SMR), and Lampedusa (LMP). The smoothed values are the weekly averages of daily measures.
Figure 4: Time series of observed (obs), forward (fwd, with prior fluxes), and inversion-optimized (opt) CO2 concentration for the four Indian sites and Comilla, Bangladesh. The smoothed values are the weekly averages of daily measures.
Figure 5: Monthly bias averaged over all Indian sites and Comilla (unit: ppm).
Figure 6: Monthly bias for each Indian site and Comilla from inversions ISG, ISGH, and I4SG (unit: ppm).
Figure 7: The monthly posterior bias of GOSAT measurements averaged for each 10° latitude bin, obtained from inversion ISG (unit: ppm). The latitude on the x-axis is the lower limit of each 10° bin.
Figure 8: Same plots as Figure 4 but for the SNG validation site.
26 pages, 6721 KiB  
Article
Advanced Detection and Classification of Kelp Habitats Using Multibeam Echosounder Water Column Point Cloud Data
by Amy W. Nau, Vanessa Lucieer, Alexandre C. G. Schimel, Haris Kunnath, Yoann Ladroit and Tara Martin
Remote Sens. 2025, 17(3), 449; https://doi.org/10.3390/rs17030449 - 28 Jan 2025
Viewed by 664
Abstract
Kelps are important habitat-forming species in shallow marine environments, providing critical habitat, structure, and productivity for temperate reef ecosystems worldwide. Many kelp species are currently endangered by myriad pressures, including changing water temperatures, invasive species, and anthropogenic threats. This situation necessitates advanced methods to detect kelp density, which would allow tracking density changes, understanding ecosystem dynamics, and informing evidence-based management strategies. This study introduces an innovative approach to detecting kelp density with multibeam echosounder water column data. First, these data are filtered into a point cloud. Then, a range of variables is derived from the point cloud data, including average acoustic energy, volume, and point density. Finally, these variables are used as input to a Random Forest model, in combination with bathymetric variables, to classify sand, bare rock, sparse kelp, and dense kelp habitats. At 5 m resolution, we achieved an overall accuracy of 72.5% with an overall Area Under the Curve of 0.874. Notably, our method achieved high accuracy across the entire multibeam swath, with only a one-percentage-point decrease in model accuracy for data falling within the part of the multibeam water column data affected by sidelobe artefact noise, which significantly expands the potential of this data type for wide-scale monitoring of threatened kelp ecosystems.
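The classification step described in the abstract feeds per-cell water column and bathymetric variables to a Random Forest. A minimal scikit-learn sketch of that setup follows; the feature file names, class labels, and hyperparameters are placeholders, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed layout: one row per grid cell with water-column and bathymetry features
# (illustrative columns: mean amplitude, point count, volume, depth, slope, rugosity).
X = np.load("cell_features.npy")            # hypothetical file
y = np.load("cell_labels.npy")              # 0=sand, 1=bare rock, 2=sparse kelp, 3=dense kelp

model = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(model, X, y, cv=5) # simple k-fold check
print("mean CV accuracy:", scores.mean())

model.fit(X, y)
print("feature importances:", model.feature_importances_)
```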
(This article belongs to the Section Ocean Remote Sensing)
Show Figures

Figure 1: Overview of three study sites: The Gardens (a), North Freycinet (b), and Monroe 1 (c). The inset map of Tasmania in the top panel shows the relative locations of each site. MBES depth is displayed with a range of 0 to 40 m. Towed video tracks and classification results are shown in greyscale for sand (white), bare rock (light grey), sparse kelp (dark grey), and dense kelp (black).
Figure 2: Overview diagram of the proposed method, including data processing for towed video, multibeam bathymetry, and WCD, modelling, and model evaluation.
Figure 3: Examples of water column data variables generated for the three sites: (a) WC Mean amplitude variable at 5 m resolution, (b) WC Point count variable at 5 m resolution, (c) WC Volume variable at 5 m resolution, and (d) WC Volume variable at 1 m resolution. The units of volume for panels (c) and (d) were converted to volume per area for visual comparison between the different resolutions. For each panel, the sites correspond to The Gardens (top), North Freycinet (middle), and Monroe 1 (bottom).
Figure 4: Examples of towed video data for each class type: (a) Sand, (b) Bare rock (sea urchins present), (c) Sparse kelp, and (d) Dense kelp.
Figure 5: Average ROC curve across all CV folds for each class for the best-performing Random Forest model (5 m resolution), including the AUC. Sand is shown as a dotted line, bare rock as a dot-dash line, sparse kelp as a dashed line, and dense kelp as a solid line.
Figure 6: Variable importance (Mean Decrease Gini) for the models at 5 m resolution (left), 3 m resolution (middle), and 1 m resolution (right). Higher values of Mean Decrease Gini indicate a higher importance ranking of those variables in the Random Forest model.
Figure 7: Box plots of selected water column variables by class at 5 m (top row) and 1 m (bottom row) grid resolutions. The horizontal line inside each box is the sample median. The top and bottom edges are the upper and lower quartiles, respectively. Outliers are shown as dots.
Figure 8: Box plots of selected water column variables falling within (top) and beyond (bottom) the minimum slant range (MSR). The top and bottom edges are the upper and lower quartiles, respectively. Outliers are shown as dots.
Figure 9: Classified maps based on the Random Forest model at 5 m resolution for three sites: (a) The Gardens, (b) North Freycinet, and (c) Monroe 1.
Figure 10: Percent of each reef class (bare rock, sparse kelp, or dense kelp) within each site (The Gardens (white), North Freycinet (grey), and Monroe 1 (black)). The percentage values are shown at the top of each bar.
19 pages, 2560 KiB  
Article
Evaluation of Rapeseed Leave Segmentation Accuracy Using Binocular Stereo Vision 3D Point Clouds
by Lili Zhang, Shuangyue Shi, Muhammad Zain, Binqian Sun, Dongwei Han and Chengming Sun
Agronomy 2025, 15(1), 245; https://doi.org/10.3390/agronomy15010245 - 20 Jan 2025
Viewed by 573
Abstract
Point cloud segmentation is necessary for obtaining highly precise morphological traits in plant phenotyping. Although considerable progress has been made in point cloud segmentation, segmenting point clouds of complex plant leaves remains challenging. Rapeseed leaves are critical in cultivation and breeding, yet traditional two-dimensional imaging is susceptible to reduced segmentation accuracy due to occlusions between plants. The current study proposes the use of binocular stereo-vision technology to obtain three-dimensional (3D) point clouds of rapeseed leaves at the seedling and bolting stages. The point clouds were colorized based on elevation values in order to better process the 3D point cloud data and extract rapeseed phenotypic parameters. Denoising methods were selected based on the source and classification of point cloud noise: for ground point clouds, we combined plane fitting with pass-through filtering, while statistical filtering was used to remove outliers generated during scanning. We found that, during the seedling stage of rapeseed, a region-growing segmentation method was helpful in finding suitable parameter thresholds for leaf segmentation, and the Locally Convex Connected Patches (LCCP) clustering method was used for leaf segmentation at the bolting stage. Furthermore, the results show that combining plane fitting with pass-through filtering effectively removes ground point cloud noise, while statistical filtering successfully removes outlier noise points generated during scanning. Finally, using the region-growing algorithm during the seedling stage with a normal angle threshold of 5.0/180.0 * M_PI and a curvature threshold of 1.5 helps avoid under-segmentation and over-segmentation, achieving complete segmentation of rapeseed seedling leaves, while the LCCP clustering method fully segments rapeseed leaves at the bolting stage. The proposed method provides insights to improve the accuracy of subsequent point cloud phenotypic parameter extraction, such as rapeseed leaf area, and is beneficial for the 3D reconstruction of rapeseed.
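The denoising pipeline in the abstract pairs pass-through filtering with statistical outlier removal. A small Open3D/NumPy sketch of those two steps is given below; the input file name, height threshold, neighbor count, and standard-deviation ratio are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("rapeseed_plot.ply")      # hypothetical input file

# Pass-through filter: keep points above an assumed ground height threshold.
pts = np.asarray(pcd.points)
keep = pts[:, 2] > 0.02                                  # z threshold in metres (assumed)
pcd = pcd.select_by_index(np.where(keep)[0].tolist())

# Statistical outlier removal: drop points whose mean neighbor distance is
# more than std_ratio standard deviations above the global mean.
pcd, inlier_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

o3d.io.write_point_cloud("rapeseed_denoised.ply", pcd)
```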
(This article belongs to the Special Issue Unmanned Farms in Smart Agriculture)
Show Figures

Figure 1: Diagram of the scanning process in the field: (a) a smart phenotyping platform; (b) a sidebar camera; (c) a data flow diagram.
Figure 2: The colored point cloud diagram: (a) cross-section of point clouds; (b) three-dimensional (3D) view of rapeseed.
Figure 3: The point cloud image after fitting the plane.
Figure 4: Illustration of the Extended Convexity Criterion (CC) theory.
Figure 5: Pass-through filtering effect diagram: (a) the original point cloud image of the rapeseed plot; (b) the point cloud image of rapeseed after pass-through filtering.
Figure 6: The relationship between the number of removed points and the standard deviation multiple under various nearest-neighbor numbers.
Figure 7: Denoising results from the point cloud image of rapeseed after statistical filtering: (a) k = 5, α = 0.01; (b) k = 100, α = 0.01; (c) k = 5, α = 0.5; (d) k = 5, α = 5.
Figure 8: Segmentation results of a single rapeseed plant based on region growing under curvature values of (a) 0.5, (b) 1.0, and (c) 1.5.
Figure 9: Evaluation of the leaf area accuracy of Huyou 039.
Figure 10: Segmentation results of rapeseed leaves at the bolting stage using (a) the region-growing algorithm and (b) the LCCP algorithm.
Figure 11: The point cloud of the leaf overlaps: (a) the red circle highlights the overlapping region; (b) an enlarged view of this overlapping area.
23 pages, 12001 KiB  
Article
Enhancing Off-Road Topography Estimation by Fusing LIDAR and Stereo Camera Data with Interpolated Ground Plane
by Gustav Sten, Lei Feng and Björn Möller
Sensors 2025, 25(2), 509; https://doi.org/10.3390/s25020509 - 16 Jan 2025
Viewed by 492
Abstract
Topography estimation is essential for autonomous off-road navigation. Common methods rely on point cloud data from, e.g., Light Detection and Ranging sensors (LIDARs) and stereo cameras. Stereo cameras produce dense point clouds with larger coverage but lower accuracy. LIDARs, on the other hand, have higher accuracy and longer range but much less coverage; they are also more expensive. The research question examines whether incorporating LIDARs can significantly improve stereo camera accuracy. Current sensor fusion methods use LIDARs' raw measurements directly; thus, the improvement in estimation accuracy is limited to LIDAR-scanned locations. The main contribution of our new method is to construct a reference ground plane through the interpolation of LIDAR data so that the interpolated maps have similar coverage to the stereo camera's point cloud. The interpolated maps are fused with the stereo camera point cloud via Kalman filters to improve a larger section of the topography map. The method is tested in three environments: controlled indoor, semi-controlled outdoor, and unstructured terrain. Compared to the existing method without LIDAR interpolation, the proposed approach reduces average error by 40% in the controlled environment and 67% in the semi-controlled environment, while maintaining large coverage. The unstructured environment evaluation confirms its corrective impact.
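The fusion step in the abstract combines an interpolated LIDAR elevation map with a stereo camera elevation map via Kalman filtering. A minimal per-cell sketch of that variance-weighted update follows; treating each grid cell as an independent scalar state is a simplifying assumption for illustration.

```python
import numpy as np

def fuse_elevation_maps(z_stereo, var_stereo, z_lidar, var_lidar):
    """Per-cell Kalman (variance-weighted) fusion of two elevation maps.

    All inputs are 2-D arrays of the same shape; np.nan marks missing cells.
    """
    # Start from whichever sensor has data in each cell.
    z_fused = np.where(np.isnan(z_lidar), z_stereo, z_lidar)
    var_fused = np.where(np.isnan(z_lidar), var_stereo, var_lidar)

    # Where both sensors observe a cell, apply the scalar Kalman update.
    both = ~np.isnan(z_stereo) & ~np.isnan(z_lidar)
    gain = var_stereo[both] / (var_stereo[both] + var_lidar[both])   # Kalman gain
    z_fused[both] = z_stereo[both] + gain * (z_lidar[both] - z_stereo[both])
    var_fused[both] = (1.0 - gain) * var_stereo[both]
    return z_fused, var_fused
```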
Show Figures

Figure 1: Beam distribution dependent on distance from the sensor.
Figure 2: Sensor and software setup. (a) Visualization of how the LIDAR and stereo camera were mounted; (b) software setup for recording data.
Figure 3: Process of mapping point clouds to an elevation map. (a) Single point cloud mapped to the elevation map; (b) multiple point clouds mapped to the elevation map.
Figure 4: Example of point clouds from the stereo camera (a), LIDAR (b), and the actual ground truth at the center of the point cloud (c). Note that (c) is zoomed in.
Figure 5: Interpolation methodology. (a) Description of the interpolation direction in the gridmap. (b) Trapezoid function for variance between two measured points, p1 and p2.
Figure 6: Interpolated map and its corresponding variance; x and y are grid cell indexes.
Figure 7: Single sensor elevation map.
Figure 8: Estimation errors of the two sensors.
Figure 9: Fused elevation maps.
Figure 10: Estimation errors of the two fusion methods.
Figure 11: Photograph of the test area, with critical measurement points marked.
Figure 12: Example of raw point clouds with the objects highlighted.
Figure 13: Stereo camera and LIDAR maps with their resulting variance.
Figure 14: Fused maps with their resulting variance.
Figure 15: Photograph of the test area.
Figure 16: Example of raw point clouds.
Figure 17: Stereo camera and LIDAR maps with their resulting variance.
Figure 18: Fused maps with their resulting variance.
Figure 19: Estimation of both fusion methods along Y = 34.
24 pages, 10041 KiB  
Article
Six-Dimensional Pose Estimation of Molecular Sieve Drying Package Based on Red Green Blue–Depth Camera
by Yibing Chen, Songxiao Cao, Qixuan Wang, Zhipeng Xu, Tao Song and Qing Jiang
Sensors 2025, 25(2), 462; https://doi.org/10.3390/s25020462 - 15 Jan 2025
Viewed by 476
Abstract
This paper aims to address the challenge of precise robotic grasping of molecular sieve drying bags during automated packaging by proposing a six-dimensional (6D) pose estimation method based on a red green blue–depth (RGB-D) camera. The method consists of three components: point cloud pre-segmentation, target extraction, and pose estimation. A minimum bounding box-based pre-segmentation method was designed to minimize the impact of packaging wrinkles and skirt curling. Orientation filtering combined with Euclidean clustering and Principal Component Analysis (PCA)-based iterative segmentation was employed to accurately extract the target body. Lastly, a multi-target feature fusion method was applied for pose estimation to compute an accurate grasping pose. To validate the effectiveness of the proposed method, 102 sets of experiments were conducted and compared with classical methods such as Fast Point Feature Histograms (FPFH) and Point Pair Features (PPF). The results showed that the proposed method achieved a recognition rate of 99.02%, a processing time of 2 s, a pose error rate of 1.31%, and a spatial position error of 3.278 mm, significantly outperforming the comparative methods. These findings demonstrate the effectiveness of the method in addressing accurate 6D pose estimation of molecular sieve drying bags, with potential for future application to other complex-shaped objects.
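The extraction stage in the abstract relies on PCA over a segmented cluster to recover the target's principal axes, which together with the centroid give a candidate 6D pose. The NumPy sketch below shows that standard construction; it is a generic PCA pose estimate, not the paper's full multi-target feature fusion.

```python
import numpy as np

def pca_pose(points):
    """Return a 4x4 pose: centroid translation plus principal-axis rotation.

    points: (N, 3) array of a segmented target's points.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Right singular vectors of the centered points are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    rotation = vt.T                                   # columns = principal axes
    if np.linalg.det(rotation) < 0:                   # enforce a right-handed frame
        rotation[:, 2] *= -1.0
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = centroid
    return pose
```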
(This article belongs to the Section Industrial Sensors)
Show Figures

Figure 1: Molecular sieve drying pack. (a) NMSDP before vacuuming; (b) NMSDP after vacuuming; ① front section; ② middle section; ③ rear section; ④ target entity 1; ⑤ target entity 2; ⑥ target entity 3.
Figure 2: Holes in the point cloud among target entities. The green circle denotes the regions of the target object where point cloud data are absent. The white section indicates the missing point cloud data, while the silver region signifies the point cloud of the packaging.
Figure 3: Product cutting system and 3D camera system. The left image shows the front view of the CAD drawing, representing the layout of the on-site mechanism distribution. The right image is a schematic of the 3D camera system intended to illustrate the working range of the 3D camera.
Figure 4: The overall process framework for 6D pose estimation of the NMSD Pack. The image on the left shows the RGB image acquired by the camera during an actual operational scenario.
Figure 5: Minimum bounding box segmentation. (a) Minimum bounding box encloses the object; (b) modify the size of the minimum bounding box; (c) segment the object.
Figure 6: Point cloud pre-segmentation. (a) Point cloud C before pre-segmentation; (b) point cloud C′ after pre-segmentation; (c) corresponding RGB image of the point cloud in (a); (d) corresponding RGB image of the point cloud in (b).
Figure 7: Target rotation projection. (a) Target general projection area pose; (b) target minimum projection area pose.
Figure 8: Improved minimum bounding box. (a) Front and side of the minimum bounding box: the blue arrow represents the Z-axis direction of the minimum bounding box, the green arrow the Y-axis direction, and the red arrow the X-axis direction; (b) front and side of the improved minimum bounding box.
Figure 9: Extraction of three target entities. The left side shows the point cloud to be extracted after pre-segmentation; the right side displays the point cloud after extraction. The red regions indicate the successfully extracted point clouds of the three target objects.
Figure 10: Target and target entity in packaging. (a) Target in packaging; (b) extracted target entity.
Figure 11: Feature fusion determination process.
Figure 12: Center position compensation model. (a) Minimum bounding box enclosing the object: the red frame represents the minimum bounding box surrounding the object, the black frame the ideal minimum bounding box, c1 the centroid of the object, c2 the intersection point of the short axis of the red frame and the long axis of the black frame, c3 the ideal center of the object, and c4 the center of the bounding box; (b) the long axis of the attitude intersects the plane of the bounding box.
Figure 13: Ground truth box. (a–d) Truth box annotation for partial data.
Figure 14: Estimated pose vs. true pose comparison. (a) Differences in yaw angle between estimated and true pose; the red frame represents the manually annotated frame, and the blue frame is generated from the target pose estimated by the proposed method. (b) Differences in roll and pitch angles between estimated and true pose.
Figure 15: Effect with different initial search angle step sizes. (a) Initial bounding box; (b) Δθ0 = 0.5°; (c) Δθ0 = 1, 5, 7°; (d) Δθ0 = 3, 9°.
Figure 16: Analysis of experimental results based on different angular thresholds.
Figure 17: Analysis of experimental results based on initial weight combinations.
Figure 18: Pose estimation error of single and multi-object subjects.
Figure 19: Data types. (a) Type A data, characterized by two deep grooves, with the most distinct contour features; (b) Type B data, characterized by one deep groove and one shallow groove; (c) Type C data, characterized by two shallow grooves; (d) Type D data, characterized by the absence of grooves and an extremely indistinct contour, representing defective vacuum-sealed products in an abnormal condition.
Figure 20: Pose estimation errors using different methods.
Figure 21: Spatial coordinate error.
20 pages, 22008 KiB  
Article
A Novel Approach to Optimize Key Limitations of Azure Kinect DK for Efficient and Precise Leaf Area Measurement
by Ziang Niu, Ting Huang, Chengjia Xu, Xinyue Sun, Mohamed Farag Taha, Yong He and Zhengjun Qiu
Agriculture 2025, 15(2), 173; https://doi.org/10.3390/agriculture15020173 - 14 Jan 2025
Viewed by 479
Abstract
Maize leaf area offers valuable insights into physiological processes, playing a critical role in breeding and guiding agricultural practices. The Azure Kinect DK possesses the real-time capability to capture and analyze the spatial structural features of crops. However, its further application in maize leaf area measurement is constrained by RGB–depth misalignment and limited sensitivity to detailed organ-level features. This study proposes a novel approach to address and optimize these limitations of the Azure Kinect DK through the multimodal coupling of RGB-D data for enhanced organ-level crop phenotyping. To correct RGB–depth misalignment, a unified recalibration method was developed to ensure accurate alignment between RGB and depth data. Furthermore, a semantic information-guided depth inpainting method was proposed, designed to repair the void and flying pixels commonly observed in Azure Kinect DK outputs. The semantic information was extracted using a joint YOLOv11-SAM2 model, which utilizes supervised object recognition prompts and advanced visual large models to achieve precise RGB image semantic parsing with minimal manual input. An efficient pixel filter-based depth inpainting algorithm was then designed to inpaint void and flying pixels and restore consistent, high-confidence depth values within semantic regions. Validation of this approach through leaf area measurements in practical maize field applications (challenged by a limited workspace, constrained viewpoints, and environmental variability) demonstrated near-laboratory precision, achieving an MAPE of 6.549%, an RMSE of 4.114 cm2, an MAE of 2.980 cm2, and an R2 of 0.976 across 60 maize leaf samples. By focusing processing efforts on the image level rather than directly on 3D point clouds, this approach markedly enhanced both efficiency and accuracy while making full use of the Azure Kinect DK, making it a promising solution for high-throughput 3D crop phenotyping.
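The leaf area measurement in the abstract ultimately reduces a leaf's 3D points to a gridded representation whose cell areas are summed. The sketch below illustrates a simple planar variant of that idea, projecting points onto the leaf's best-fit plane and counting occupied grid cells; the cell size and the flat-cell assumption are illustrative, not the study's actual 3D gridding procedure.

```python
import numpy as np

def gridded_leaf_area(points, cell=0.005):
    """Approximate leaf area by summing occupied grid cells in the leaf plane.

    points: (N, 3) array of one leaf's points, in metres.
    cell:   grid cell edge length in metres (assumed value).
    """
    # Fit the leaf's best plane with PCA and express points in that plane.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ vt[:2].T                        # 2-D in-plane coordinates

    # Occupied-cell count times cell area approximates the surface area.
    cells = np.unique(np.floor(uv / cell).astype(int), axis=0)
    return len(cells) * cell * cell

# area_m2 = gridded_leaf_area(leaf_points); area_cm2 = area_m2 * 1e4
```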
(This article belongs to the Section Digital Agriculture)
Show Figures

Figure 1: Experiment setup. (a) Azure Kinect DK. (b) Experiment setup.
Figure 2: Key limitations of the data generated by the Azure Kinect DK. (a) Front-view point cloud showing RGB–depth misalignment. (b) Front-view point cloud showing void pixels inside the leaf region. (c) Side-view point cloud showing flying pixels around the edges of the leaf.
Figure 3: The recalibration strategy.
Figure 4: The framework of YOLOv11-SAM2.
Figure 5: Pixel filter depth completion algorithm (the star represents the pixel currently being inpainted).
Figure 6: The procedure of depth image inpainting.
Figure 7: Example of the 3D data gridding process for leaf area measurement.
Figure 8: Point cloud of a maize plant. (a) Total point cloud before recalibration. (b) Total point cloud after recalibration. (c) Point cloud of a specific maize plant before recalibration. (d) Point cloud of a specific maize plant after recalibration.
Figure 9: Semantic information extraction result of YOLOv11-SAM2 and depth image of leaf instances.
Figure 10: Void pixel identification. (a) Original depth image of the maize leaf. (b) Identified void pixels. (c) Original point cloud of the maize leaf. (d) Inpainted point cloud of the maize leaf.
Figure 11: Visualization of the inpainted depth image and point cloud under different edge identification ratios.
Figure 12: Visualization of anomalous pixel identification. (a) The original depth image of the leaf. (b) The depth image of known pixels without anomalous pixel identification. (c) The depth image of known pixels with anomalous pixel identification. (d) The depth image after depth completion without anomalous pixel identification. (e) The depth image after depth completion with anomalous pixel identification.
Figure 13: The histogram of the depth image with peaks.
Figure 14: Performance of the pixel filter using different search ranges.
Figure 15: Visualization of 3D models for leaf area measurement.
Figure 16: Scatter plots of measured and ground truth values of leaf area.
Figure 17: Absolute error plot of measured leaf area.
Figure 18: Absolute percentage error plot of measured leaf area.
13 pages, 1770 KiB  
Article
Exploring Musical Feedback for Gait Retraining: A Novel Approach to Orthopedic Rehabilitation
by Luisa Cedin, Christopher Knowlton and Markus A. Wimmer
Healthcare 2025, 13(2), 144; https://doi.org/10.3390/healthcare13020144 - 14 Jan 2025
Viewed by 621
Abstract
Background/Objectives: Gait retraining is widely used in orthopedic rehabilitation to address abnormal movement patterns. However, retaining walking modifications can be challenging without guidance from physical therapists. Real-time auditory biofeedback can help patients learn and maintain gait alterations. This study piloted the feasibility of the musification of feedback to medialize the center of pressure (COP). Methods: To provide musical feedback, COP and plantar pressure were captured in real time at 100 Hz from a wireless 16-sensor pressure insole. Twenty healthy subjects (29 ± 5 years old, 75.9 ± 10.5 kg, 1.73 ± 0.07 m) were recruited to walk using this system and were further analyzed via marker-based motion capture. A lowpass filter muffled a pre-selected music playlist when the real-time center of pressure exceeded a predetermined lateral threshold. The only instruction participants received was to adjust their walking to avoid the muffling of the music. Results: All participants significantly medialized their COP (−9.38% ± 4.37, range −2.3% to −19%), guided solely by musical feedback. Participants were still able to reproduce this new walking pattern when the musical feedback was removed. Importantly, no significant changes in cadence or walking speed were observed. The results of a survey showed that subjects enjoyed using the system and suggested that they would adopt such a system for rehabilitation. Conclusions: This study highlights the potential of musical feedback for orthopedic rehabilitation. In the future, a portable system will allow patients to train at home, while clinicians could track their progress remotely through cloud-enabled telemetric health data monitoring.
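The feedback rule in the abstract is simple: when the real-time COP drifts laterally past a threshold, a lowpass filter muffles the music. The SciPy sketch below shows one way such gating could be applied to audio blocks; the cutoff frequency, threshold value, and block-wise processing are assumptions for illustration, not the study's implementation.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS_AUDIO = 44100                     # audio sample rate in Hz
CUTOFF_HZ = 500.0                    # muffling cutoff (assumed)
LATERAL_THRESHOLD = 0.02             # COP lateral limit in metres (assumed)

# 4th-order Butterworth lowpass, normalized to the Nyquist frequency.
b, a = butter(4, CUTOFF_HZ / (FS_AUDIO / 2), btype="low")

def feedback_block(audio_block, cop_lateral):
    """Return the next audio block, muffled if the COP is too lateral."""
    if cop_lateral > LATERAL_THRESHOLD:
        return lfilter(b, a, audio_block)    # lowpass: muffled music
    return audio_block                        # unmodified music
```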
(This article belongs to the Special Issue 2nd Edition of the Expanding Scope of Music in Healthcare)
Show Figures

Figure 1: Flow chart of the study design with descriptions of the data collected in each condition. COP: center of pressure.
Figure 2: Representation of the insoles' geometry, sensor locations, medial (blue-colored sensors) and lateral (red-colored sensors) boundaries, and IMU positioning. Adapted from the manufacturer's user guide (Insole3–Moticon, OpenGo, Munich, Germany).
Figure 3: Musical feedback design.
Figure 4: Gait line at the baseline (measured at warmup) and during training with musical feedback. Shaded regions represent +/−1 SD.
Figure 5: Mean plantar pressure throughout the stance phase at (a) the baseline and (b) training with musical feedback for one participant. Figure generated via a MOTICON OpenGo report.
27 pages, 5429 KiB  
Article
Terrain Traversability via Sensed Data for Robots Operating Inside Heterogeneous, Highly Unstructured Spaces
by Amir Gholami and Alejandro Ramirez-Serrano
Sensors 2025, 25(2), 439; https://doi.org/10.3390/s25020439 - 13 Jan 2025
Viewed by 526
Abstract
This paper presents a comprehensive approach to evaluating the ability of multi-legged robots to traverse confined and geometrically complex unstructured environments. The proposed approach utilizes advanced point cloud processing techniques, integrating voxel-filtered clouds, boundary and mesh generation, and dynamic traversability analysis to enhance the robot's terrain perception and navigation. The proposed framework was validated through rigorous simulation and experimental testing with humanoid robots, showcasing its potential for applications and environments characterized by complex environmental features, such as navigation inside collapsed buildings. The results demonstrate that the proposed framework provides the robot with an enhanced capability to perceive and interpret its environment and adapt to dynamic environmental changes. This paper contributes to the advancement of robotic navigation and path-planning systems by providing a scalable and efficient framework for environment analysis. The integration of various point cloud processing techniques into a single architecture not only improves computational efficiency but also enhances the robot's interaction with its environment, making it more capable of operating in complex, hazardous, unstructured settings.
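Two of the processing stages named in the abstract, voxel filtering and slope estimation, are common point cloud operations. The Open3D sketch below shows a plausible minimal version: downsample to a voxel grid, estimate normals, and read per-point slope from the normal's deviation from vertical. The input file, voxel size, normal-estimation radius, and slope limit are illustrative assumptions.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")               # hypothetical scan

# Voxel-filtered cloud: one representative point per 5 cm voxel (assumed size).
voxel = pcd.voxel_down_sample(voxel_size=0.05)

# Estimate normals from a local neighborhood, then derive slope per point.
voxel.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.15, max_nn=30))
normals = np.asarray(voxel.normals)
slope_deg = np.degrees(np.arccos(np.clip(np.abs(normals[:, 2]), 0.0, 1.0)))

traversable = slope_deg < 30.0                            # assumed slope limit
print("traversable fraction:", traversable.mean())
```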
(This article belongs to the Special Issue Intelligent Control Systems for Autonomous Vehicles)
Show Figures

Figure 1: Flowchart of the proposed SRE process.
Figure 2: Projection of the robot's ellipsoid boundary onto a virtual plane. A: robot in a walking state. B: robot in a crawling state.
Figure 3: Visualization of the alpha shape construction process for a point set representing a simplified human figure (T-pose) with α = 0.8. (a) Delaunay triangulation. (b) Alpha shapes.
Figure 4: Methodology for calculating the slopeness of the environment.
Figure 5: Methodology for calculating the roughness of the environment.
Figure 6: The score metric S_n^el for the n-th candidate ellipse within the virtual plane.
Figure 7: (a) The space directly ahead of the robot (A: uneven surface, B: door, C: curved rigid body, D: closet, E: structured truss, F: flat surface). (b) The surroundings recorded by the camera mounted on the robot's head.
Figure 8: Voxel-filtered cloud (A: uneven surface, B: door, C: curved rigid body, D: closet, E: structured truss, F: flat surface).
Figure 9: Environment mesh structured using the alpha shapes method.
Figure 10: Slopeness of the environment.
Figure 11: Roughness of the environment.
Figure 12: Virtual box in front of the robot.
Figure 13: (a) Placing the projection of the ellipsoid boundary onto the virtual plane. (b) Robot's ellipsoid boundary.
23 pages, 9054 KiB  
Article
A Study on Canopy Volume Measurement Model for Fruit Tree Application Based on LiDAR Point Cloud
by Na Guo, Ning Xu, Jianming Kang, Guohai Zhang, Qingshan Meng, Mengmeng Niu, Wenxuan Wu and Xingguo Zhang
Agriculture 2025, 15(2), 130; https://doi.org/10.3390/agriculture15020130 - 9 Jan 2025
Viewed by 516
Abstract
The accurate measurement of orchard canopy volume serves as a crucial foundation for wind regulation and dosage adjustment in precision orchard management. However, existing methods for measuring canopy volume fail to satisfy the high-precision and real-time requirements necessary for accurate variable-rate applications in fruit orchards. To address these challenges, this study develops a canopy volume measurement model for orchard spraying using LiDAR point cloud data. For point cloud feature extraction, an improved Alpha Shape algorithm is proposed for extracting point cloud contours. This method improves the validity judgment for contour line segments, effectively reducing contour length errors on each 3D point cloud projection plane. Additionally, improvements to the mesh integral volume method incorporate the effects of canopy gaps in height difference calculations, significantly enhancing the accuracy of canopy volume estimation. For feature selection, a random forest-based recursive feature elimination method with cross-validation was employed to filter 10 features. Ultimately, five key features were retained for model training: the number of point clouds, the 2D point cloud contours in the X- and Z-projection directions, the 2D width in the Y-projection direction, and the 2D length in the Z-projection direction. During model construction, the study optimized the hyperparameters of partial least squares regression (PLSR), backpropagation (BP) neural networks, and gradient boosting decision trees (GBDT) to build canopy volume measurement models tailored to the dataset. Experimental results indicate that the PLSR model outperformed the other approaches, achieving optimal performance with three principal components. The resulting canopy volume measurement model achieved an R2 of 0.9742, an RMSE of 0.1879, and an MAE of 0.1161. These results demonstrate that the PLSR model exhibits strong generalization ability, minimal prediction bias, and low average prediction error, offering a valuable reference for the precision control of canopy spraying in orchards.
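The best-performing model in the abstract is a partial least squares regression with three components, evaluated with R2, RMSE, and MAE. The scikit-learn sketch below mirrors that setup; the feature file names and the train/test split are assumptions, not the study's protocol.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split

# Assumed layout: five canopy features per tree, target = reference canopy volume.
X = np.load("canopy_features.npy")      # hypothetical (n_trees, 5) array
y = np.load("canopy_volumes.npy")       # hypothetical (n_trees,) array

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

pls = PLSRegression(n_components=3)     # three components, as reported
pls.fit(X_tr, y_tr)
y_pred = pls.predict(X_te).ravel()

print("R2  :", r2_score(y_te, y_pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, y_pred)))
print("MAE :", mean_absolute_error(y_te, y_pred))
```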
(This article belongs to the Section Digital Agriculture)
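The feature-selection and regression pipeline described in the abstract above (random-forest recursive feature elimination with cross-validation, followed by partial least squares regression with three components) can be sketched with scikit-learn. This is a generic illustration under assumed placeholder data, not the study's code or measurements.

```python
# Sketch of the abstract's modelling pipeline, assuming a feature table X
# (10 candidate canopy features per tree) and measured canopy volumes y.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFECV
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((120, 10))   # placeholder: 120 trees x 10 candidate features
y = rng.random(120)         # placeholder: measured canopy volumes

# Recursive feature elimination with cross-validation, ranked by a random forest.
selector = RFECV(RandomForestRegressor(n_estimators=200, random_state=0),
                 step=1, cv=5, min_features_to_select=3)
X_sel = selector.fit_transform(X, y)   # keeps the subset preferred by the CV score

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=0)

# Partial least squares regression with three latent components, as in the abstract.
pls = PLSRegression(n_components=3)
pls.fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

print("R2  :", r2_score(y_te, y_hat))
print("RMSE:", mean_squared_error(y_te, y_hat) ** 0.5)
print("MAE :", mean_absolute_error(y_te, y_hat))
```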
Figure 1: LZY604 driverless wheeled tractor.
Figure 2: Point cloud data acquisition. (a) Acquisition site. (b) Multi-view point cloud image.
Figure 3: SOR filter denoising, comparison before and after. (a) Top view before and after denoising. (b) Front view before and after denoising.
Figure 4: Point cloud of single fruit trees.
Figure 5: 3D point cloud plane projection. (a) X-axis projection. (b) Y-axis projection. (c) Z-axis projection.
Figure 6: Triangular sectioning and edge circle drawing.
Figure 7: 2D point cloud contour lines. (a) Comparison of contours in the X-projection direction. (b) Comparison of contours in the Y-projection direction. (c) Comparison of contours in the Z-projection direction.
Figure 8: Comparison of manually labeled contour lengths from the Alpha Shape contour extraction algorithm with the actual contour lengths. (a) Manually labeled lengths from the Alpha Shape contour extraction algorithm. (b) Manually marked actual contour lengths.
Figure 9: Comparison of the grid integral volume method before and after improvement. (a) Grid integral volume method before improvement. (b) Improved grid integral volume method.
Figure 10: Body element and point cloud presentation diagrams. (a) The body element is taken as 0.5 m. (b) The body element is taken as 0.1 m.
Figure 11: Comparison of results between the body element method and the improved grid integration method.
Figure 12: Test set results of the PLSR-based canopy volume measurement model. (a) Predicted versus true values for the test set. (b) Test set scatter points versus residual fit.
Figure 13: Test set results of the BP neural network-based canopy volume measurement model. (a) Predicted versus true values for the test set. (b) Test set scatter points versus residual fit.
Figure 14: Test set results of the GBDT-based canopy volume measurement model. (a) Predicted versus true values for the test set. (b) Test set scatter points versus residual fit.
Figure 15: Comparison of sycamore canopy contour extraction and volume acquisition. (a) Comparison of contours in the X-projection direction. (b) Comparison of contours in the Y-projection direction. (c) Comparison of contours in the Z-projection direction. (d) Volume acquisition comparison.
25 pages, 13628 KiB  
Article
Gradient Enhancement Techniques and Motion Consistency Constraints for Moving Object Segmentation in 3D LiDAR Point Clouds
by Fangzhou Tang, Bocheng Zhu and Junren Sun
Remote Sens. 2025, 17(2), 195; https://doi.org/10.3390/rs17020195 - 8 Jan 2025
Viewed by 495
Abstract
The ability to segment moving objects from three-dimensional (3D) LiDAR scans is critical to advancing autonomous driving technology, facilitating core tasks like localization, collision avoidance, and path planning. In this paper, we introduce a novel deep neural network designed to enhance the performance of 3D LiDAR point cloud moving object segmentation (MOS) through the integration of image gradient information and the principle of motion consistency. Our method processes sequential range images, employing depth pixel difference convolution (DPDC) to improve the efficacy of dilated convolutions, thus boosting spatial information extraction from range images. Additionally, we incorporate Bayesian filtering to impose posterior constraints on predictions, enhancing the accuracy of motion segmentation. To handle the issue of uneven object scales in range images, we develop a novel edge-aware loss function and use a progressive training strategy to further boost performance. Our method is validated on the SemanticKITTI-based LiDAR MOS benchmark, where it significantly outperforms current state-of-the-art (SOTA) methods, all while working directly on two-dimensional (2D) range images without requiring mapping. Full article
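The Bayesian filtering mentioned in the abstract, which imposes a posterior constraint on per-point moving/static predictions across frames, is commonly realised as a recursive log-odds update. Below is a minimal sketch of that generic idea; the clipping bounds and the way points are associated across frames are assumptions, not the authors' exact formulation.

```python
# Recursive log-odds update of a per-point "moving" probability across LiDAR scans.
import numpy as np

def logit(p: np.ndarray) -> np.ndarray:
    return np.log(p / (1.0 - p))

def update_moving_belief(prior_logodds: np.ndarray,
                         pred_prob: np.ndarray,
                         clip: float = 6.0) -> np.ndarray:
    """Fuse this frame's network prediction (probability of 'moving' per point)
    into the accumulated log-odds belief, then clamp to avoid saturation."""
    posterior = prior_logodds + logit(np.clip(pred_prob, 1e-4, 1.0 - 1e-4))
    return np.clip(posterior, -clip, clip)

belief = np.zeros(5)                                  # neutral prior for 5 tracked points
for frame_prob in [np.array([0.9, 0.8, 0.2, 0.55, 0.1])] * 3:
    belief = update_moving_belief(belief, frame_prob)
moving_mask = belief > 0.0                            # positive log-odds => labelled moving
print(moving_mask)
```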
Graphical abstract
Figure 1: Visualization of 2D image.
Figure 2: Details of improved convolution methods.
Figure 3: An overview of our method. The upper part illustrates the overall workflow of the network, while the lower part details the specific implementation of each submodule.
Figure 4: Details of Depth Pixel Difference Convolution (DPDC).
Figure 5: Qualitative comparisons of various methods for LiDAR-MOS in different scenes on the SemanticKITTI-MOS validation set. Blue circles emphasize mispredictions and indistinct boundaries. For optimal viewing, refer to the images in color and zoom in for finer details.
Figure 6: Qualitative comparisons of various methods for LiDAR-MOS between consecutive frames on the SemanticKITTI-MOS validation set. Blue circles emphasize mispredictions and indistinct boundaries. For optimal viewing, refer to the images in color and zoom in for finer details.
21 pages, 1040 KiB  
Article
AIDS-Based Cyber Threat Detection Framework for Secure Cloud-Native Microservices
by Heeji Park, Abir EL Azzaoui and Jong Hyuk Park
Electronics 2025, 14(2), 229; https://doi.org/10.3390/electronics14020229 - 8 Jan 2025
Viewed by 644
Abstract
Cloud-native architectures continue to redefine application development and deployment by offering enhanced scalability, performance, and resource efficiency. However, they present significant security challenges, particularly in securing inter-container communication and mitigating Distributed Denial of Service (DDoS) attacks in containerized microservices. This study proposes an Artificial Intelligence Intrusion Detection System (AIDS)-based cyber threat detection solution to address these critical security challenges inherent in cloud-native environments. By leveraging a Resilient Backpropagation Neural Network (RBN), the proposed solution enhances system security and resilience by effectively detecting and mitigating DDoS attacks in real time in both the network and application layers. The solution incorporates an Inter-Container Communication Bridge (ICCB) to ensure secure communication between containers. It also employs advanced technologies such as eXpress Data Path (XDP) and the Extended Berkeley Packet Filter (eBPF) for high-performance and low-latency security enforcement, thereby overcoming the limitations of existing research. This approach provides robust protection against evolving security threats while maintaining the dynamic scalability and efficiency of cloud-native architectures. Furthermore, the system enhances operational continuity through proactive monitoring and dynamic adaptability, ensuring effective protection against evolving threats while preserving the inherent scalability and efficiency of cloud-native environments. Full article
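The detection core is described as a Resilient Backpropagation Neural Network (RBN). A minimal sketch of that idea, a small flow-feature classifier trained with PyTorch's Rprop optimizer on synthetic data, is given below; the feature count, layer sizes, and labels are placeholders rather than the paper's configuration.

```python
# Sketch of a resilient-backpropagation (Rprop) trained classifier for
# benign-vs-DDoS traffic. Feature dimension, architecture, and data are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)                            # placeholder: 512 flows x 20 features
y = (X[:, 0] + X[:, 1] > 0).float().unsqueeze(1)    # placeholder labels

model = nn.Sequential(
    nn.Linear(20, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),                               # logit: > 0 means flagged as attack
)
optimizer = torch.optim.Rprop(model.parameters(), lr=0.01)  # resilient backpropagation
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(50):                             # Rprop works with full-batch gradients
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    acc = ((model(X) > 0).float() == y).float().mean().item()
print(f"training accuracy: {acc:.2f}")
```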
Figure 1: Proposed AIDS-based cyber threat detection framework.
Figure 2: Inter-container communication bridge overview.
17 pages, 3425 KiB  
Article
A 6D Object Pose Estimation Algorithm for Autonomous Docking with Improved Maximal Cliques
by Zhenqi Han and Lizhuang Liu
Sensors 2025, 25(1), 283; https://doi.org/10.3390/s25010283 - 6 Jan 2025
Viewed by 570
Abstract
Accurate 6D object pose estimation is critical for autonomous docking. To address the inefficiencies and inaccuracies associated with maximal cliques-based pose estimation methods, we propose a fast 6D pose estimation algorithm that integrates feature space and space compatibility constraints. The algorithm reduces the graph size by employing Laplacian filtering to resample high-frequency signal nodes. Then, the truncated Chamfer distance derived from fusion features and spatial compatibility constraints is used to evaluate the accuracy of candidate pose alignment between source and reference point clouds, and the optimal pose transformation matrix is selected for 6D pose estimation. Finally, a point-to-plane ICP algorithm is applied to refine the 6D pose estimation for autonomous docking. Experimental results demonstrate that the proposed algorithm achieves recall rates of 94.5%, 62.2%, and 99.1% on the 3DMatch, 3DLoMatch, and KITTI datasets, respectively. On the autonomous docking dataset, the algorithm yields rotation and localization errors of 0.96° and 5.82 cm, respectively, outperforming existing methods and validating the effectiveness of our approach. Full article
(This article belongs to the Section Remote Sensors)
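The final refinement step named in the abstract, point-to-plane ICP, is a standard registration routine. A minimal Open3D sketch follows; the input files, the correspondence threshold, and the identity initial transform are placeholders standing in for the coarse pose produced by the improved maximal-cliques stage.

```python
# Point-to-plane ICP refinement of a coarse 6D pose estimate using Open3D.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("source.pcd")      # hypothetical docking-target scan
target = o3d.io.read_point_cloud("reference.pcd")   # hypothetical reference model

# Point-to-plane ICP needs normals on the target cloud.
target.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

init_T = np.eye(4)   # stand-in for the coarse pose from the registration stage
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, init_T,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print(result.transformation)               # refined 4x4 pose matrix
print(result.fitness, result.inlier_rmse)  # alignment quality metrics
```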
Figure 1: Diagram of automated docking.
Figure 2: Pipeline of MAC.
Figure 3: Spatial compatibility.
Figure 4: Pipeline of improved MAC.
Figure 5: A relative pose estimation framework for automatic docking.
Figure 6: Automatic docking dataset.