Search Results (4,863)

Search Parameters:
Keywords = cloud compare

19 pages, 421 KiB  
Article
Robust Access Control for Secure IoT Outsourcing with Leakage Resilience
by Khaled Riad
Sensors 2025, 25(3), 625; https://doi.org/10.3390/s25030625 (registering DOI) - 22 Jan 2025
Abstract
The Internet of Things (IoT) has revolutionized various industries by enabling seamless connectivity and data exchange among devices. However, the security and privacy of outsourced IoT data remain critical challenges, especially given the resource constraints of IoT devices. This paper proposes a robust and leakage-resilient access control scheme based on Attribute-Based Encryption (ABE) with partial decryption outsourcing. The proposed scheme minimizes computational overhead on IoT devices by offloading intensive decryption tasks to the cloud, while ensuring resilience against master secret key leakage, side-channel attacks, and other common security threats. Comprehensive security analysis demonstrates the scheme’s robustness under standard cryptographic assumptions, and performance evaluations show significant improvements in decryption efficiency, scalability, and computational performance compared to existing solutions. The proposed scheme offers a scalable, efficient, and secure access control framework, making it highly suitable for real-world IoT deployments across domains such as smart healthcare, industrial IoT, and smart cities. Full article
Figure 1. The system model showing the detailed communication steps among the four primary entities (Attribute Authority, Cloud Service Provider, IoT Devices, and IoT Users).
Figure 2. Decryption time against the number of attributes, ref. [55].
Figure 3. Encryption time comparison, refs. [55,56].
Figure 4. Ciphertext size comparison, refs. [55,57].
12 pages, 1335 KiB  
Article
Development of Postoperative Ocular Hypertension After Phacoemulsification for Removal of Cataracts in Dogs
by Myeong-Gon Kang, Chung-Hui Kim, Shin-Ho Lee and Jae-Hyeon Cho
Animals 2025, 15(3), 301; https://doi.org/10.3390/ani15030301 (registering DOI) - 22 Jan 2025
Abstract
A cataract is a disease in which the lens of the eye becomes clouded, causing a partial or complete loss of vision. Phacoemulsification (PHACO) is a modern surgical technique used in cataract surgery. Study findings: This study observed changes in intraocular pressure (IOP) after surgery in 31 dogs (48 eyes) with cataracts that visited a veterinary hospital. The procedure involved a lens extraction by PHACO and the implantation of an intraocular lens (IOL). Postoperative ocular hypertension (POH) was defined as a postoperative IOP of 25 mmHg or higher. To assess changes in IOP, IOP measurements were performed at 1, 2, 3, and 20 h, and at 1, 2, 3, 4, and 8 weeks after surgery. The IOP was found to be significantly higher at 1 (p < 0.05), 2 (p < 0.01), and 3 (p < 0.01) hours postoperatively compared with preoperatively. The IOP measurements were compared by dividing them into three groups according to the observation period. The IOP values were measured for three groups: before cataract surgery (Group A: 13.10 ± 8.29 mmHg), 1 to 3 h after cataract surgery (Group B: 17.84 ± 5.33 mmHg), and 20 h to 8 weeks after surgery (Group C: 13.71 ± 4.78 mmHg). The IOP values from 1 to 3 h after surgery (Group B) were significantly higher compared to both Group A (p < 0.01) and Group C (p < 0.001). Conclusions: It is suggested that POH occurring within 0 to 3 h after cataract surgery should be diagnosed as secondary glaucoma, and treatment should be performed accordingly. Full article
(This article belongs to the Special Issue Advances in Small Animal Ophthalmic Surgery (Volume II))
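A minimal sketch of the statistics named in the Figure 2 caption below (a Friedman test across the repeated IOP measurements followed by pairwise Wilcoxon signed-rank comparisons), using SciPy with invented IOP values rather than the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical IOP readings (mmHg) for the same eyes at three periods:
# preoperative (A), 1-3 h post-op (B), and 20 h-8 weeks post-op (C).
iop_a = np.array([12.1, 14.3, 10.8, 13.5, 11.9, 15.2])
iop_b = np.array([17.5, 19.2, 16.8, 18.1, 17.0, 20.3])
iop_c = np.array([13.0, 14.1, 12.5, 13.8, 12.2, 15.0])

# Omnibus non-parametric test across the three repeated measurements.
stat, p = stats.friedmanchisquare(iop_a, iop_b, iop_c)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Pairwise follow-up with Wilcoxon signed-rank tests (paired samples).
for name, (x, y) in {"A vs B": (iop_a, iop_b),
                     "B vs C": (iop_b, iop_c),
                     "A vs C": (iop_a, iop_c)}.items():
    w, p_pair = stats.wilcoxon(x, y)
    print(f"{name}: W = {w:.1f}, p = {p_pair:.4f}")
```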
Figure 1. PHACO for cataract removal: (A) corneal incision, (B) staining of the anterior lens capsule using trypan blue, (C) round incision with a diameter of 5 mm in the center of the eye lens anterior chamber, (D) hydrodissection to separate the eye lens, (E) removal of the lens nucleus using phaco handpieces, and (F) intraocular lens implantation and corneal suture.
Figure 2. Intraocular pressures (IOPs) of the operated dogs, measured according to the follow-up periods (Groups A, B, and C). Values are presented as mean ± standard deviation (SD). The p-values were obtained using the non-parametric Friedman test, followed by pairwise comparisons with the Wilcoxon signed-rank test, with p < 0.05 considered significant. Group A: preoperative data; Group B: data from 1 to 3 h post-surgery; Group C: data from 20 h to 8 weeks post-surgery. ** p < 0.01, *** p < 0.001.
Figure 3. Time course of intraocular pressure (IOP) after cataract surgery using PHACO in dogs. Data are expressed as mean ± SD; p-values were acquired by paired t-test. * p < 0.05, ** p < 0.01. Pre-OP: pre-surgery; Post: post-surgery.
17 pages, 3431 KiB  
Article
Interchangeability of Cross-Platform Orthophotographic and LiDAR Data in DeepLabV3+-Based Land Cover Classification Method
by Shijun Pan, Keisuke Yoshida, Satoshi Nishiyama, Takashi Kojima and Yutaro Hashimoto
Land 2025, 14(2), 217; https://doi.org/10.3390/land14020217 (registering DOI) - 21 Jan 2025
Abstract
Riverine environmental information includes important data to collect, and data collection still requires field surveys by personnel. These on-site tasks face significant limitations (i.e., sites that are hard or dangerous to access). In recent years, as one of the efficient approaches for data collection, air-vehicle-based Light Detection and Ranging technologies have already been applied in global environmental research, i.e., land cover classification (LCC) or environmental monitoring. For this study, the authors specifically focused on seven types of LCC (i.e., bamboo, tree, grass, bare ground, water, road, and clutter) that can be parameterized for flood simulation. A validated airborne LiDAR bathymetry system (ALB) and a UAV-borne green LiDAR system (GLS) were applied in this study for cross-platform analysis of LCC. Furthermore, LiDAR data were visualized using high-contrast color scales to improve the accuracy of land cover classification methods through image fusion techniques. If high-resolution aerial imagery is available, it must be downscaled to match the resolution of the low-resolution point clouds. Cross-platform data interchangeability was assessed using an interchangeability metric, defined as the absolute difference in overall accuracy (OA) or macro-F1 between cross-platform runs. It is noteworthy that relying solely on aerial photographs is inadequate for achieving precise labeling, particularly under limited sunlight conditions that can lead to misclassification; in such cases, LiDAR plays a crucial role in facilitating target recognition. All the approaches (i.e., low-resolution digital imagery, LiDAR-derived imagery, and image fusion) achieve over 0.65 OA and around 0.6 macro-F1. The authors found that the vegetation (bamboo, tree, grass) and road classes perform comparatively better than the clutter and bare ground classes. Under the stated conditions, differences between the classes derived from different years (ALB from 2017 and GLS from 2020) are the main reason: because the clutter class includes all items except the other named classes in this research, its RGB-based features cannot be substituted easily across the 3-year gap, and, owing to on-site reconstruction, the bare ground class also shows a further color change between ALB and GLS that reduces interchangeability. For individual classes, without considering seasons and platforms, image fusion classifies bamboo and trees with higher F1 scores than low-resolution digital imagery and LiDAR-derived imagery, which particularly demonstrates cross-platform interchangeability for the high-vegetation types. In recent years, high-resolution photography (UAV), high-precision LiDAR measurement (ALB, GLS), and satellite imagery have been used; LiDAR measurement equipment is expensive, and measurement opportunities are limited. It would therefore be desirable if ALB and GLS data could be classified continuously by artificial intelligence, and in this study, the authors investigated such data interchangeability. A unique and crucial aspect of this study is exploring the interchangeability of land cover classification models across different LiDAR platforms.
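The interchangeability metric above (the absolute difference in OA or macro-F1 between cross-platform runs) can be computed directly from per-pixel labels; a hedged sketch with scikit-learn, using invented label arrays rather than the ALB/GLS data:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def interchangeability(y_true_a, y_pred_a, y_true_b, y_pred_b):
    """Absolute difference in OA and macro-F1 between two cross-platform runs."""
    oa_a = accuracy_score(y_true_a, y_pred_a)
    oa_b = accuracy_score(y_true_b, y_pred_b)
    f1_a = f1_score(y_true_a, y_pred_a, average="macro")
    f1_b = f1_score(y_true_b, y_pred_b, average="macro")
    return abs(oa_a - oa_b), abs(f1_a - f1_b)

# Toy example with 7 land cover classes (0-6), e.g. bamboo ... clutter.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 7, 1000)
pred_alb = np.where(rng.random(1000) < 0.70, y_true, rng.integers(0, 7, 1000))
pred_gls = np.where(rng.random(1000) < 0.65, y_true, rng.integers(0, 7, 1000))

d_oa, d_f1 = interchangeability(y_true, pred_alb, y_true, pred_gls)
print(f"|dOA| = {d_oa:.3f}, |d macro-F1| = {d_f1:.3f}")
```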
Figure 1. Perspective of the airborne LiDAR bathymetry and green LiDAR measurement area: (a) location of the Asahi River in Japan with kilo post (KP) values representing longitudinal distance (km) from the river mouth; (b) aerial-captured photographs at the positions marked in (a); (c) drone-captured photographs at the positions marked in (b).
Figure 2. Overland and underwater surveys with Light Detection and Ranging (LiDAR) using near-infrared (NIR) and green laser (GL) from ALB (left side, NIR and GL) and GLS (right side, GL); laser points are shown in grayscale.
Figure 3. Processes of different data types and corresponding operations (LR-TL, LR-DI, LiDAR-I, and image fusion).
Figure 4. Comparison of data-style-based averaged 2 m/pixel resolution cross-platform interchangeability. Left vertical axis: OA and macro-F1 values; right vertical axis: absolute difference values.
Figure 5. Water areas that are not extractable using GLS alone (zoomed in from LiDAR-I, Oct. 2020). HC means high contrast.
19 pages, 5395 KiB  
Article
Optimizing 3D Point Cloud Reconstruction Through Integrating Deep Learning and Clustering Models
by Seyyedbehrad Emadi and Marco Limongiello
Electronics 2025, 14(2), 399; https://doi.org/10.3390/electronics14020399 - 20 Jan 2025
Abstract
Noise in 3D photogrammetric point clouds—both close-range and UAV-generated—poses a significant challenge to the accuracy and usability of digital models. This study presents a novel deep learning-based approach to improve the quality of point clouds by addressing this issue. We propose a two-step methodology: first, a variational autoencoder reduces features, followed by clustering models to assess and mitigate noise in the point clouds. This study evaluates four clustering methods—k-means, agglomerative clustering, Spectral clustering, and Gaussian mixture model—based on photogrammetric parameters, reprojection error, projection accuracy, angles of intersection, distance, and the number of cameras used in tie point calculations. The approach is validated using point cloud data from the Temple of Neptune in Paestum, Italy. The results show that the proposed method significantly improves 3D reconstruction quality, with k-means outperforming other clustering techniques based on three evaluation metrics. This method offers superior versatility and performance compared to traditional and machine learning techniques, demonstrating its potential to enhance UAV-based surveying and inspection practices. Full article
(This article belongs to the Special Issue Point Cloud Data Processing and Applications)
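A minimal sketch of the clustering stage on photogrammetric tie-point parameters (reprojection error, projection accuracy, intersection angle, camera count) with scikit-learn's k-means; the feature values are invented, and the variational-autoencoder feature-reduction step described in the abstract is omitted:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-tie-point photogrammetric quality parameters.
rng = np.random.default_rng(42)
features = np.column_stack([
    rng.gamma(2.0, 0.5, 5000),      # reprojection error [px]
    rng.gamma(2.0, 1.0, 5000),      # projection accuracy
    rng.uniform(2.0, 40.0, 5000),   # mean intersection angle [deg]
    rng.integers(2, 15, 5000),      # number of cameras per tie point
])

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Rank clusters by mean reprojection error and drop the worst one as "noise".
order = np.argsort([features[labels == k, 0].mean() for k in range(4)])
keep = np.isin(labels, order[:-1])
print(f"Kept {keep.sum()} of {len(keep)} tie points")
```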
Figure 1. Illustration of the reprojection error.
Figure 2. Illustration of the angle of intersection.
Figure 3. Conceptual illustration of the proposed methodology, highlighting the main steps involved in optimizing point cloud data using deep learning clustering models.
Figure 4. Application example: Temple of Neptune in Paestum (Italy).
Figure 5. A selected section of the Temple of Neptune.
Figure 6. Visualizations of point cloud data under different single-parameter noise reduction analyses: (a) reprojection errors, (b) average intersection angles, (c) number of images, and (d) projection accuracy.
Figure 7. Distribution of clusters generated by (a) GMM, (b) k-means, (c) agglomerative, and (d) Spectral clustering algorithms.
10 pages, 1433 KiB  
Article
Increasing Serious Illness Conversations in Patients at High Risk of One-Year Mortality Using Improvement Science: A Quality Improvement Study
by Kanishk D. Sharma, Sandip A. Godambe, Prachi P. Chavan, Agatha Parks-Savage and Marissa Galicia-Castillo
Healthcare 2025, 13(2), 199; https://doi.org/10.3390/healthcare13020199 - 20 Jan 2025
Abstract
Background: Serious illness conversation (SIC) is an important skillset for clinicians. A review of mortality meetings from an urban academic hospital highlighted the need for early engagement in SICs and advance care planning (ACP) to align medical treatments with patient-centered outcomes. The aim of this study was to increase SICs and their documentation in patients with low one-year survival probability identified by updated Charlson Comorbidity Index (CCI) scores. Methods: This was a quality improvement study with data collected pre- and post-intervention at a large urban level one trauma center in Virginia, which also serves as a primary teaching hospital to about 400 residents and fellows. Patient chart reviews were completed to assess medical records and hospitalization data. Chi-square tests were used to identify statistical significance with the alpha level set at <0.05. Integrated care managers were trained to identify and discuss high CCI scores during interdisciplinary rounds. Providers were encouraged to document SICs with identified patients in extent of care (EOC) notes within the hospital's cloud-based electronic health record, EPIC. Results: Sixty-two patients with high CCI scores were documented, with 16 (25.81%, p = 0.0001) having EOC notes. Patients with documented EOC notes were significantly more likely to change their focus of care, prompting palliative care (63.04% vs. 50%, p = 0.007) and hospice consults (93.48% vs. 68.75%, p = 0.01), compared to those without. Post-intervention surveys revealed that although 50% of providers conducted SICs, fewer used EOC notes for documentation. Conclusions: This initial intervention suggests that the documentation of SICs increases engagement in ACP, palliative care, hospice consultations, and do-not-resuscitate decisions.
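A small sketch of the chi-square comparison mentioned in the Methods (alpha < 0.05), using SciPy on an invented 2 x 2 contingency table rather than the study's counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: palliative-care consult (yes / no) for patients
# with vs. without a documented EOC note (counts invented for illustration).
table = np.array([[10, 6],     # EOC note documented
                  [23, 23]])   # no EOC note
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```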
Figure 1. Cause-and-effect diagram. "*" items were prioritized during data collection.
Figure 2. Driver diagram. Abbreviations: CCI = Charlson Comorbidity Index.
Figure 3. Map of the new process to increase serious illness conversations. ICM: Integrated Care Manager; CCI: Charlson Comorbidity Index; ACP: Advance Care Planning.
Figure 4. Barriers reported by physicians to having serious illness conversations with patients.
20 pages, 32621 KiB  
Article
A Novel Rapeseed Mapping Framework Integrating Image Fusion, Automated Sample Generation, and Deep Learning in Southwest China
by Ruolan Jiang, Xingyin Duan, Song Liao, Ziyi Tang and Hao Li
Land 2025, 14(1), 200; https://doi.org/10.3390/land14010200 - 19 Jan 2025
Abstract
Rapeseed mapping is crucial for refined agricultural management and food security. However, existing remote sensing-based methods for rapeseed mapping in Southwest China are severely limited by insufficient training samples and persistent cloud cover. To address the above challenges, this study presents an automatic rapeseed mapping framework that integrates multi-source remote sensing data fusion, automated sample generation, and deep learning models. The framework was applied in Santai County, Sichuan Province, Southwest China, which has typical topographical and climatic characteristics. First, MODIS and Landsat data were used to fill the gaps in Sentinel-2 imagery, creating time-series images through the object-level processing version of the spatial and temporal adaptive reflectance fusion model (OL-STARFM). In addition, a novel spectral phenology approach was developed to automatically generate training samples, which were then input into the improved TS-ConvNeXt ECAPA-TDNN (NeXt-TDNN) deep learning model for accurate rapeseed mapping. The results demonstrated that the OL-STARFM approach was effective in rapeseed mapping. The proposed automated sample generation method proved effective in producing reliable rapeseed samples, achieving a low Dynamic Time Warping (DTW) distance (<0.81) when compared to field samples. The NeXt-TDNN model showed an overall accuracy (OA) of 90.12% and a mean Intersection over Union (mIoU) of 81.96% in Santai County, outperforming other models such as random forest, XGBoost, and UNet-LSTM. These results highlight the effectiveness of the proposed automatic rapeseed mapping framework in accurately identifying rapeseed. This framework offers a valuable reference for monitoring other crops in similar environments. Full article
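A minimal sketch of the Dynamic Time Warping (DTW) distance used to compare automatically generated samples with field samples; the implementation below is a textbook DTW, not the authors' code, and the time-series values are invented:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic time warping distance between two 1-D time series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

# Hypothetical monthly vegetation-index profiles for an automatically generated
# sample and a field-collected rapeseed sample (values invented for illustration).
auto_sample = np.array([0.21, 0.35, 0.55, 0.72, 0.68, 0.40, 0.25])
field_sample = np.array([0.20, 0.33, 0.58, 0.70, 0.66, 0.42, 0.24])
print(f"DTW distance: {dtw_distance(auto_sample, field_sample):.3f}")
```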
Figure 1. (a) The location of the study area in China; (b) the spatial distribution of the study area.
Figure 2. Phenological calendar of three typical crops in Santai County. "E", "M", and "L" represent the early, middle, and late periods of the month, respectively.
Figure 3. Framework of the proposed three-stage rapeseed mapping.
Figure 4. (a) The number of valid observations from the monthly synthesis of Sentinel-2 during the rapeseed growth period in Santai; (b) the framework of image fusion.
Figure 5. The temporal features of rapeseed and other crops; the dashed line shows the gap between rapeseed and the other feature types.
Figure 6. Framework of NeXt-TDNN.
Figure 7. (a) The potential samples generated by the Rapeseed Sample_pheno rule. (b-e) True-color (R: red; G: green; B: blue) Sentinel-2 images acquired in March; blue and yellow represent potential sample objects proposed according to the rule, and the yellow and blue dots represent positive and negative sample points randomly selected from the potential objects.
Figure 8. Statistical distributions of the DTW distance between all samples obtained from the rapeseed rule and field surveys in the study area; "Rapeseed" denotes the DTW distance between rapeseed samples, and "Non-Rapeseed" the DTW distance between non-rapeseed samples.
Figure 9. Comparison of time-series curves between automatically generated rapeseed samples and field-collected rapeseed samples.
Figure 10. Rapeseed distribution map of Santai County (a), with detailed maps of the distribution in different areas (b-d), 2024.
Figure 11. The recognition results of the four classifiers.
Figure 12. A comparison of the recognition results of different classifiers; the specific positions of (a-d) are shown in Figure 11. Blue circles mark better recognition results, and red circles indicate erroneous or missing recognition.
20 pages, 7483 KiB  
Article
An Enhanced LiDAR-Based SLAM Framework: Improving NDT Odometry with Efficient Feature Extraction and Loop Closure Detection
by Yan Ren, Zhendong Shen, Wanquan Liu and Xinyu Chen
Processes 2025, 13(1), 272; https://doi.org/10.3390/pr13010272 - 19 Jan 2025
Abstract
Simultaneous localization and mapping (SLAM) is crucial for autonomous driving, drone navigation, and robot localization, relying on efficient point cloud registration and loop closure detection. Traditional Normal Distributions Transform (NDT) odometry frameworks provide robust solutions but struggle with real-time performance due to the high computational complexity of processing large-scale point clouds. This paper introduces an improved NDT-based LiDAR odometry framework to address these challenges. The proposed method enhances computational efficiency and registration accuracy by introducing a unified feature point cloud framework that integrates planar and edge features, enabling more accurate and efficient inter-frame matching. To further improve loop closure detection, a parallel hybrid approach combining Radius Search and Scan Context is developed, which significantly enhances robustness and accuracy. Additionally, feature-based point cloud registration is seamlessly integrated with full cloud mapping in global optimization, ensuring high-precision pose estimation and detailed environmental reconstruction. Experiments on both public datasets and real-world environments validate the effectiveness of the proposed framework. Compared with traditional NDT, our method achieves trajectory estimation accuracy increases of 35.59% and over 35%, respectively, with and without loop detection. The average registration time is reduced by 66.7%, memory usage is decreased by 23.16%, and CPU usage drops by 19.25%. These results surpass those of existing SLAM systems, such as LOAM. The proposed method demonstrates superior robustness, enabling reliable pose estimation and map construction in dynamic, complex settings. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
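A hedged sketch of how trajectory estimation accuracy figures like the ones above can be computed as an absolute trajectory error (ATE) RMSE; the trajectories are synthetic stand-ins rather than KITTI data, and this is not the paper's evaluation code:

```python
import numpy as np

def ate_rmse(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """Absolute trajectory error (RMSE) between time-aligned pose positions."""
    return float(np.sqrt(np.mean(np.sum((est_xyz - gt_xyz) ** 2, axis=1))))

# Invented trajectories standing in for ground truth and two odometry estimates.
rng = np.random.default_rng(1)
gt = np.cumsum(rng.normal(0, 1, (500, 3)), axis=0)
ndt_est = gt + rng.normal(0, 0.90, gt.shape)    # baseline NDT-like error level
ours_est = gt + rng.normal(0, 0.58, gt.shape)   # roughly 35% lower error

e_ndt, e_ours = ate_rmse(ndt_est, gt), ate_rmse(ours_est, gt)
print(f"NDT ATE: {e_ndt:.2f} m, improved: {e_ours:.2f} m, "
      f"reduction: {100 * (e_ndt - e_ours) / e_ndt:.1f}%")
```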
Figure 1. The system structure.
Figure 2. Combined feature point cloud: (a) the raw point cloud acquired by LiDAR; (b) the feature point cloud, composed of planar points, edge points, and ground points, with outlier points and small-scale points removed and only large-scale point clouds retained. Compared to the original point cloud, the feature point cloud significantly reduces the number of points while effectively preserving environmental features.
Figure 3. (a) KITTI data acquisition platform, equipped with an inertial navigation system (GPS/IMU) OXTS RT 3003, a Velodyne HDL-64E LiDAR, two 1.4 MP grayscale cameras, two 1.4 MP color cameras, and four zoom lenses. (b) Sensor installation positions on the platform.
Figure 4. Comparison of trajectories across different algorithm frameworks for Sequences 00-10; the trajectories generated during mapping by LOAM, LeGO-LOAM, DLO, the original NDT, and our method are compared.
Figure 5. Loop closure detection results for various methods on Sequence 09; the improved method effectively identifies the loop closure, and the parallel strategy using two loop closure detection methods greatly improves detection accuracy.
Figure 6. (a-c) Inter-frame registration time, memory usage, and CPU usage before and after the improvement; the improved method effectively reduces matching time and computational load.
Figure 7. Mobile robot platform.
Figure 8. Maps generated using the improved method: (a-d) the one-way corridor, round-trip corridor, loop corridor, and long, feature-sparse corridor, respectively.
Figure 9. (a-d) Maps generated by the original method; significant mapping errors occurred in larger environments, such as (c,d).
Figure 10. Detailed comparison between the improved (a) and original (b) methods; the improved method balances detail preservation and computation speed, while the original sacrifices some environmental accuracy in the mapping results.
Figure 11. Map comparison: (a) the Google Earth image; (b) LeGO-LOAM failed to close the loop due to the lack of IMU data, leading to Z-axis drift; (c) the original NDT framework experienced significant drift in large-scale complex environments; (d) the improved method produced maps closely matching the real environment.
Figure 12. Detail of Scenario 2; the improved method preserved environmental details without artifacts or mismatches.
Figure 13. (a-c) Scenario 2 map comparison: (b) the map generated by the original NDT method lacked details; (c) the improved method effectively preserved details.
44 pages, 24354 KiB  
Article
Estimating Subcanopy Solar Radiation Using Point Clouds and GIS-Based Solar Radiation Models
by Daniela Buchalová, Jaroslav Hofierka, Jozef Šupinský and Ján Kaňuk
Remote Sens. 2025, 17(2), 328; https://doi.org/10.3390/rs17020328 - 18 Jan 2025
Abstract
This study explores advanced methodologies for estimating subcanopy solar radiation using LiDAR (Light Detection and Ranging)-derived point clouds and GIS (Geographic Information System)-based models, with a focus on evaluating the impact of different LiDAR data types on model performance. The research compares the performance of two modeling approaches—r.sun and the Point Cloud Solar Radiation Tool (PCSRT)—in capturing solar radiation dynamics beneath tree canopies. The models were applied to two contrasting environments: a forested area and a built-up area. The r.sun model, based on raster data, and the PCSRT model, which uses voxelized point clouds, were evaluated for their accuracy and efficiency in simulating solar radiation. Data were collected using terrestrial laser scanning (TLS), unmanned laser scanning (ULS), and aerial laser scanning (ALS) to capture the structural complexity of canopies. Results indicate that the choice of LiDAR data significantly affects model outputs. PCSRT, with its voxel-based approach, provides higher precision in heterogeneous forest environments. Among the LiDAR types, ULS data provided the most accurate solar radiation estimates, closely matching in situ pyranometer measurements, due to its high-resolution coverage of canopy structures. TLS offered detailed local data but was limited in spatial extent, while ALS, despite its broader coverage, showed lower precision due to insufficient point density under dense canopies. These findings underscore the importance of selecting appropriate LiDAR data for modeling solar radiation, particularly in complex environments. Full article
(This article belongs to the Section Remote Sensing for Geospatial Science)
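A minimal NumPy sketch of the voxelization step that a voxel-based tool such as PCSRT starts from (the subsequent tracing of sun rays through the voxels is omitted); the point cloud is invented rather than the TLS/ULS/ALS data:

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.5):
    """Map XYZ points into voxel indices and count points per occupied voxel."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    voxels, counts = np.unique(idx, axis=0, return_counts=True)
    return voxels, counts

# Invented canopy-like point cloud standing in for a laser scan.
rng = np.random.default_rng(7)
cloud = np.column_stack([
    rng.uniform(0, 20, 50_000),       # x [m]
    rng.uniform(0, 20, 50_000),       # y [m]
    rng.beta(2, 2, 50_000) * 15.0,    # z [m], densest at mid-canopy height
])

voxels, counts = voxelize(cloud, voxel_size=0.5)
print(f"{len(voxels)} occupied voxels; max points per voxel: {counts.max()}")
```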
Figure 1. Locations of study areas. (A): forested area; (B): built-up area; (C): side view of the forested area; (D): side view of the built-up area (Jesenná Street). Green lines indicate canopy areas.
Figure 2. Data collection methods used in the study areas: TLS (terrestrial laser scanning), ALS (aerial laser scanning), ULS (unmanned laser scanning).
Figure 3. TLS positions in (A) the forested area and (B) the built-up area.
Figures 4-8. Point cloud densities ((A) total points, vegetation and ground; (B) ground points) for TLS in the forested and built-up areas, ULS in the forested area, and ALS in the forested and built-up areas.
Figures 9-10. Localization of the pyranometers, with detailed photos at locations A-D in the forested area and at location A in the built-up area.
Figure 11. Selected polygons for detailed data analysis in the forested area. P1: high vegetation; P2: meadow; P3: low vegetation; P4: high vegetation with canopy gaps.
Figures 12-15. Comparison of TLS, ALS, and ULS data from the top and side views of polygons 1-4 (10 × 10 m): high vegetation, meadow, low vegetation, and high vegetation with a gap in the vegetation.
Figure 16. Selected polygons for detailed data analysis in the built-up area. P1: high vegetation; P2: roof; P3: parking lot; P4: high vegetation.
Figures 17-20. Comparison of TLS and ALS data from the top and side views of polygons 1-4 (10 × 10 m): high vegetation, roof, parking lot, and high vegetation.
Figures 21-23. Estimated subcanopy solar radiation by PCSRT in the forested area using ALS, ULS, and TLS data; 27 September 2023, 12 a.m.
Figures 24-26. Estimated subcanopy solar radiation by r.sun using ULS, ALS, and TLS data; 27 September 2023, 10 a.m.; the white line marks the computing region for LPI.
Figures 27-28. Estimated subcanopy solar radiation by PCSRT in the built-up area using ALS and TLS data; 27 September 2023, 12 a.m.
Figures 29-30. Estimated subcanopy solar radiation by r.sun using ALS and TLS data; 28 September 2023, 10 a.m.; the white line marks the computing region for LPI.
Figures 31 and 33. Solar irradiance difference maps and histograms between the r.sun and PCSRT models using TLS and ALS data in the built-up area: (A) r.sun TLS minus r.sun ALS, (B) PCSRT TLS minus PCSRT ALS, (C) r.sun TLS minus PCSRT TLS, (D) r.sun ALS minus PCSRT ALS.
Figures 32 and 34. Solar irradiance difference maps and histograms between the r.sun and PCSRT models using ULS, ALS, and TLS data in the forested area: (A-I) pairwise differences across data sources (ULS, ALS, TLS) and models (r.sun, PCSRT).
16 pages, 6160 KiB  
Article
Package Positioning Based on Point Registration Network DCDNet-Att
by Juan Zhu, Chunrui Yang, Guolyu Zhu, Xiaofeng Yue and Qingming Zhao
Electronics 2025, 14(2), 352; https://doi.org/10.3390/electronics14020352 - 17 Jan 2025
Abstract
The application of robot technology in the automatic transportation process of packaging bags is becoming increasingly common. Point cloud registration is the key to applying industrial robots to automatic transportation systems. However, current point cloud registration models cannot effectively solve the registration of deformed targets like packaging bags. In this study, a new point cloud registration network, DCDNet-Att, is proposed, which uses a variable weight dynamic graph convolution module to extract point cloud features. A feature interaction module is used to extract common features between the source point cloud and the template point cloud. The same geometric features between the two pairs of point clouds are strengthened through a bottleneck module. A channel attention model is used to obtain the channel attention weights. The attention weight of each spatial position is calculated, and a rotation translation structure is used to sequentially obtain quaternions and translation vectors. A feature fitting loss function is used to constrain the parameters of the neural network model to have a larger receptive field. Compared with seven methods, including the ICP algorithm, GO-ICP algorithm, and FGR algorithm, the proposed method had rotation errors (MAE, RMSE, and Error of 1.458, 2.541, and 1.024 in the ModelNet40 dataset, respectively) and translation errors (MAE, RMSE, and Error of 0.0048, 0.0114, and 0.0174, respectively). When registering the ModelNet40 dataset with Gaussian noise, the rotation errors (MAE, RMSE, and Error) were 2.028, 3.437, and 2.478, respectively, and the translation errors (MAE, RMSE, and Error) were 0.0107, 0.0327, and 0.0285, respectively. The experimental results were superior to those of the other methods, and the model was effective at registering packaging bag point clouds. Full article
(This article belongs to the Special Issue Advanced Intelligent Control and Automation in Industrial 4.0 Era)
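A small sketch of how rotation and translation MAE/RMSE between predicted and ground-truth registrations can be computed from quaternions and translation vectors with SciPy; the batch of transforms is synthetic, and this is not the DCDNet-Att evaluation code:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def registration_errors(q_pred, t_pred, q_gt, t_gt):
    """MAE and RMSE of rotation (degrees) and translation for a batch of registrations."""
    rot_err = np.degrees((R.from_quat(q_pred) * R.from_quat(q_gt).inv()).magnitude())
    trans_err = np.linalg.norm(np.asarray(t_pred) - np.asarray(t_gt), axis=1)
    mae = (rot_err.mean(), trans_err.mean())
    rmse = (np.sqrt((rot_err ** 2).mean()), np.sqrt((trans_err ** 2).mean()))
    return mae, rmse

# Toy batch: ground-truth transforms and slightly perturbed predictions.
rng = np.random.default_rng(3)
q_gt = R.random(100, random_state=0).as_quat()
t_gt = rng.uniform(-0.5, 0.5, (100, 3))
q_pred = (R.from_quat(q_gt) * R.from_euler("z", rng.normal(0, 2, 100), degrees=True)).as_quat()
t_pred = t_gt + rng.normal(0, 0.01, (100, 3))

(mae_r, mae_t), (rmse_r, rmse_t) = registration_errors(q_pred, t_pred, q_gt, t_gt)
print(f"Rotation MAE {mae_r:.3f} deg, RMSE {rmse_r:.3f} deg; "
      f"translation MAE {mae_t:.4f}, RMSE {rmse_t:.4f}")
```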
Figure 1. Point cloud data collection platform for packaging bags.
Figure 2. Image data.
Figure 3. Point cloud of packaging bags.
Figure 4. Preprocessing of the packaging bag point cloud.
Figure 5. DCDNet-Att model structure.
Figure 6. DCDNet-Att rotation error line chart with different iterations.
Figure 7. DCDNet-Att translation error line chart with different iterations.
Figure 8. DCDNet-Att point cloud registration results on the noiseless ModelNet40 dataset.
Figure 9. DCDNet-Att registration results on point clouds with Gaussian noise.
Figure 10. Schematic diagram of point cloud registration for packaging bags.
22 pages, 4103 KiB  
Article
Seasonally Dependent Daytime and Nighttime Formation of Oxalic Acid Vapor and Particulate Oxalate in Tropical Coastal and Marine Atmospheres
by Le Yan, Yating Gao, Dihui Chen, Lei Sun, Yang Gao, Huiwang Gao and Xiaohong Yao
Atmosphere 2025, 16(1), 98; https://doi.org/10.3390/atmos16010098 (registering DOI) - 17 Jan 2025
Abstract
Oxalic acid is the most abundant low-molecular-weight dicarboxylic acid in the atmosphere, and it plays a crucial role in the formation of new particles and cloud condensation nuclei. However, most observational studies have focused on particulate oxalate, leaving a significant knowledge gap on oxalic acid vapor. This study investigated the concentrations and formation of oxalic acid vapor and oxalate in PM2.5 at a rural tropical coastal island site in south China across different seasons, based on semi-continuous measurements using an Ambient Ion Monitor-Ion Chromatograph (AIM-IC) system. We replaced the default 25 μL sampling loop on the AIM-IC with a 250 μL loop, improving the ability to distinguish the signal of oxalic acid vapor from noise. The data revealed clear seasonal patterns in the dependent daytime and nighttime formation of oxalic acid vapor, benefiting from high signal-to-noise ratios. Specifically, concentrations were 0.059 ± 0.15 μg m−3 in February and April 2023, exhibiting consistent diurnal variations similar to those of O3, likely driven by photochemical reactions. These values decreased to 0.021 ± 0.07 μg m−3 in November and December 2023, with higher nighttime concentrations likely related to dark chemistry processes, amplified by accumulation due to low mixing layer height. The concentrations of oxalate in PM2.5 were comparable to those of oxalic acid vapor, but exhibited (3–7)-day variations, superimposed on diurnal fluctuations to varying degrees. Additionally, thermodynamic equilibrium calculations were performed on the coastal data, and independent size distributions of particulate oxalate in the upwind marine atmosphere were analyzed to support the findings. Full article
(This article belongs to the Section Aerosols)
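A minimal pandas sketch of extracting a mean diurnal profile from an hourly concentration series, the kind of analysis behind the diurnal variations reported above; the time series is invented, not the AIM-IC data:

```python
import numpy as np
import pandas as pd

# Invented hourly oxalic acid vapor concentrations (ug m^-3) over four weeks,
# standing in for the AIM-IC time series (values are not the measured data).
idx = pd.date_range("2023-02-01", periods=28 * 24, freq="h")
hours = idx.hour.to_numpy()
rng = np.random.default_rng(11)
daytime = 0.06 * np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)  # daytime photochemical peak
conc = pd.Series(daytime + rng.normal(0.005, 0.01, len(idx)), index=idx).clip(lower=0)

# Mean diurnal profile: average and spread of the concentration for each hour of day.
diurnal = conc.groupby(conc.index.hour).agg(["mean", "std"])
print(diurnal.round(3))
```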
Figure 1. Map of the sampling site: (a) high-resolution terrain nearby from Google Earth (b,c); photos (d-g) taken within ~1 km of the sampling site. Red stars in (a-c) mark the location of the sampling site.
Figure 2. Time series of oxalic-acid-vapor* and oxalate-in-PM2.5 concentrations during Period 1 (a) and Period 2 (b); their correlations during Period 1 (c) and Period 2 (d); and diurnal variations in the averaged oxalic-acid-vapor* concentrations during Period 1 (e) and Period 2 (f) (blue shading: standard deviation).
Figure 3. Correlations between oxalate and SO4^2- in PM2.5 during Period 1 (a) and Period 2 (b); the blue and red dots in (a) represent data from 18-22 February and 20-23 April, respectively.
Figure 4. Predicted versus observed oxalate concentrations in PM2.5 (a-d) and the difference between predicted and observed oxalic-acid-vapor concentrations (e-h) for February, April, November-December (with H2O2), and December (with H2O); black and dark red markers in (e-h) denote cases with predicted vapor concentrations above and below the observations, respectively.
Figure 5. Modeled (a-d) and observed (e-h) ratios of oxalic acid vapor* to oxalate in PM2.5 versus the modeled aerosol pH for February, April, November-December, and December (pure H2O used in the wet denuder); the color bar represents LWC; black markers in (e-h) mark the ~60% of cases with unrealistically high modeled partitioning ratios of oxalic acid vapor.
Figure 6. Mass size distributions of oxalate ((a) dominant supermicron mode; (b) minor or comparable supermicron mode), nss-SO4^2- (c), DMA+ (d), Na+ (e), NO3- (f), and nss-K+ (g) in atmospheric particles, and (h) geographical distribution of oxalate mass concentrations in PM10 over the SCS in 2017 together with the mass ratio of oxalate in 1-3 μm particles to that in PM1.0; the red star in (h) marks the coastal sampling site in Sanya during the 2023-2024 observations.
Full article ">
23 pages, 12001 KiB  
Article
Enhancing Off-Road Topography Estimation by Fusing LIDAR and Stereo Camera Data with Interpolated Ground Plane
by Gustav Sten, Lei Feng and Björn Möller
Sensors 2025, 25(2), 509; https://doi.org/10.3390/s25020509 - 16 Jan 2025
Abstract
Topography estimation is essential for autonomous off-road navigation. Common methods rely on point cloud data from, e.g., Light Detection and Ranging sensors (LIDARs) and stereo cameras. Stereo cameras produce dense point clouds with larger coverage but lower accuracy. LIDARs, on the other hand, have higher accuracy and longer range but much less coverage. LIDARs are also more expensive. The research question examines whether incorporating LIDARs can significantly improve stereo camera accuracy. Current sensor fusion methods use LIDARs' raw measurements directly; thus, the improvement in estimation accuracy is limited to LIDAR-scanned locations. The main contribution of our new method is to construct a reference ground plane through the interpolation of LIDAR data so that the interpolated maps have coverage similar to the stereo camera's point cloud. The interpolated maps are fused with the stereo camera point cloud via Kalman filters to improve a larger section of the topography map. The method is tested in three environments: controlled indoor, semi-controlled outdoor, and unstructured terrain. Compared to the existing method without LIDAR interpolation, the proposed approach reduces average error by 40% in the controlled environment and 67% in the semi-controlled environment, while maintaining large coverage. The unstructured environment evaluation confirms its corrective impact.
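A per-cell sketch of the Kalman-style fusion of a stereo elevation map with an interpolated LIDAR ground plane described above; the grid values and variances are invented, and the interpolation and variance model itself (the trapezoid function of Figure 5) is not reproduced:

```python
import numpy as np

def fuse_cells(z_cam, var_cam, z_lidar, var_lidar):
    """Per-cell Kalman (inverse-variance) fusion of two elevation estimates."""
    gain = var_cam / (var_cam + var_lidar)      # Kalman gain for the camera prior
    z_fused = z_cam + gain * (z_lidar - z_cam)
    var_fused = (1.0 - gain) * var_cam
    return z_fused, var_fused

# Toy 2x3 elevation grids [m]: noisy stereo estimate vs. interpolated LIDAR plane.
z_cam = np.array([[0.10, 0.22, 0.35], [0.12, 0.25, 0.40]])
var_cam = np.full_like(z_cam, 0.04)             # stereo: large coverage, high variance
z_lidar = np.array([[0.05, 0.18, 0.30], [0.08, 0.20, 0.33]])
var_lidar = np.array([[0.005, 0.02, 0.05], [0.005, 0.02, 0.05]])  # grows away from scanned cells

z_f, var_f = fuse_cells(z_cam, var_cam, z_lidar, var_lidar)
print(np.round(z_f, 3), np.round(var_f, 4), sep="\n")
```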
Figure 1. Beam distribution dependent on distance from the sensor.
Figure 2. Sensor and software setup: (a) how the LIDAR and stereo camera were mounted; (b) software setup for recording data.
Figure 3. Process of mapping point clouds to the elevation map: (a) single point cloud; (b) multiple point clouds.
Figure 4. Example of point clouds from the stereo camera (a) and LIDAR (b), and the actual ground truth at the center of the point cloud (c, zoomed in).
Figure 5. Interpolation methodology: (a) the interpolation direction in the grid map; (b) trapezoid function for the variance between two measured points, p1 and p2.
Figure 6. Interpolated map and its corresponding variance; x and y are grid cell indexes.
Figure 7. Single-sensor elevation map.
Figure 8. Estimation errors of the two sensors.
Figure 9. Fused elevation maps.
Figure 10. Estimation errors of the two fusion methods.
Figure 11. Photograph of the test area, with critical measurement points marked.
Figure 12. Example of raw point clouds with the objects highlighted.
Figure 13. Stereo camera and LIDAR maps with their resulting variance.
Figure 14. Fused maps with their resulting variance.
Figure 15. Photograph of the test area.
Figure 16. Example of raw point clouds.
Figure 17. Stereo camera and LIDAR maps with their resulting variance.
Figure 18. Fused maps with their resulting variance.
Figure 19. Estimation of both fusion methods along Y = 34.
17 pages, 30535 KiB  
Article
A Method to Evaluate Orientation-Dependent Errors in the Center of Contrast Targets Used with Terrestrial Laser Scanners
by Bala Muralikrishnan, Xinsu Lu, Mary Gregg, Meghan Shilling and Braden Czapla
Sensors 2025, 25(2), 505; https://doi.org/10.3390/s25020505 - 16 Jan 2025
Abstract
Terrestrial laser scanners (TLS) are portable dimensional measurement instruments used to obtain 3D point clouds of objects in a scene. While TLSs do not require the use of cooperative targets, they are sometimes placed in a scene to fuse or compare data from different instruments or data from the same instrument but from different positions. A contrast target is an example of such a target; it consists of alternating black/white squares that can be printed using a laser printer. Because contrast targets are planar as opposed to three-dimensional (like a sphere), the center of the target might suffer from errors that depend on the orientation of the target with respect to the TLS. In this paper, we discuss a low-cost method to characterize such errors and present results obtained from a short-range TLS and a long-range TLS. Our method involves comparing the center of a contrast target against the center of spheres and, therefore, does not require the use of a reference instrument or calibrated objects. For the short-range TLS, systematic errors of up to 0.5 mm were observed in the target center as a function of the angle for the two distances (5 m and 10 m) and resolutions (30 points-per-degree (ppd) and 90 ppd) considered for this TLS. For the long-range TLS, systematic errors of about 0.3 mm to 0.8 mm were observed in the target center as a function of the angle for the two distances (5 m and 10 m) at low resolution (28 ppd). Errors of under 0.3 mm were observed in the target center as a function of the angle for the two distances at high resolution (109 ppd). Full article
(This article belongs to the Special Issue Laser Scanning and Applications)
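A hedged sketch of the sphere-center estimation that the contrast-target centers are compared against, using an algebraic least-squares sphere fit on simulated points from a 38.1 mm sphere; this is not the authors' processing chain:

```python
import numpy as np

def fit_sphere(points: np.ndarray):
    """Algebraic least-squares sphere fit; returns (center, radius)."""
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius

# Simulated scan of the upper half of a 38.1 mm (diameter) sphere with 0.2 mm noise.
rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 2000)
phi = rng.uniform(0, np.pi / 2, 2000)
r_true, c_true = 0.01905, np.array([5.0, 0.2, 1.3])   # radius [m], center [m]
pts = c_true + r_true * np.column_stack([np.sin(phi) * np.cos(theta),
                                         np.sin(phi) * np.sin(theta),
                                         np.cos(phi)])
pts += rng.normal(0, 0.0002, pts.shape)

center, radius = fit_sphere(pts)
print(f"center error: {np.linalg.norm(center - c_true) * 1000:.3f} mm, "
      f"radius: {radius * 1000:.2f} mm")
```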
Figure 1. (a) Commercially procured contrast target with magnetic/adhesive backing; (b) contrast target printed on cardstock using a laser printer; (c) contrast target mounted on a two-axis gimbal; (d) contrast target with a partial 38.1 mm (1.5 in) sphere on the back.
Figure 2. Artifact comprising four spheres and a contrast target to study errors as a function of orientation.
Figure 3. Different orientations of the artifact: (a-c) rotation about the vertical axis (yaw); (d-f) rotation about the horizontal axis (pitch); photos of the artifact oriented so that (g) yaw = 0°, pitch = 0°, (h) yaw = 40°, pitch = 0°, (i) yaw = 0°, pitch = -40°. The TLS is located directly in front of the target in part (g) at a distance of either 5 m or 10 m.
Figure 4. (a) Intensity plot of the entire artifact; (b) intensity plot of the contrast target and the edge points (transitions between the black and white regions of the target).
Figure 5. The 68% data ellipses visualizing the pooled within-sample covariance matrices for the four distance/resolution scenarios of TLS I; text annotations give the standard deviations in the X (horizontal) and Y (vertical) coordinates for the far-distance (10 m), low-resolution (30 ppd) scenario (dashed lines) and the near-distance (5 m), high-resolution (90 ppd) scenario (solid lines), per Table 1.
Figures 6-7. The 95% data ellipses from low-resolution (30 ppd) and high-resolution (90 ppd) scans from TLS I for (a) 5 m and (b) 10 m distances; the ranges of the average X and Y coordinates from Table 2 are annotated.
Figure 8. The 68% data ellipses visualizing the pooled within-sample covariance matrices for the four distance/resolution scenarios of TLS II; annotations give the standard deviations in X and Y for the far-distance (10 m), low-resolution (28 ppd) scenario (dashed lines) and the near-distance (5 m), high-resolution (109 ppd) scenario (solid lines), per Table 3.
Figures 9-10. The 95% data ellipses from low-resolution (28 ppd) and high-resolution (109 ppd) scans from TLS II for (a) 5 m and (b) 10 m distances; the ranges of the average X and Y coordinates from Table 4 are annotated.
20 pages, 6221 KiB  
Article
Evaluation of HY-2B SMR Sea Surface Temperature Products from 2019 to 2024
by Ping Liu, Yili Zhao, Wu Zhou and Shishuai Wang
Remote Sens. 2025, 17(2), 300; https://doi.org/10.3390/rs17020300 - 16 Jan 2025
Viewed by 312
Abstract
Haiyang 2B (HY-2B), the second Chinese ocean dynamic environment monitoring satellite, has been operational for nearly six years. The scanning microwave radiometer (SMR) onboard HY-2B provides global sea surface temperature (SST) observations. Comprehensive validation of these data is essential before they can be effectively applied. This study evaluates the operational SST product from the SMR, covering the period from 1 January 2019 to 31 August 2024, using direct comparison and extended triple collocation (ETC) methods. The direct comparison assesses bias and root mean square error (RMSE), while ETC analysis estimates the random error of the SST measurement systems and evaluates their ability to detect SST variations. Additionally, the spatial and temporal variations in error characteristics, as well as the crosstalk effects of sea surface wind speed, columnar water vapor, and columnar cloud liquid water, are analyzed. Compared with iQuam SST, the total RMSEs of SMR SST for ascending and descending passes are 0.88 °C and 0.85 °C, with total biases of 0.1 °C and −0.08 °C, respectively. ETC analysis indicates that the random errors for ascending and descending passes are 0.87 °C and 0.80 °C, respectively. The SMR's ability to detect SST variations decreases significantly at high latitudes and near 10°N latitude. Error analysis reveals that the uncertainty in SMR SSTs has increased over time, and the presence of crosstalk effects in SMR SST retrieval has been confirmed.
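As a rough outline of the two evaluation approaches named above (not the authors' code), the direct comparison reduces to a bias/RMSE calculation over collocated satellite and reference SSTs, while ETC builds on the classical triple-collocation estimator of each system's random-error variance. The sketch below assumes the three data sets (e.g., SMR, in situ, and reanalysis SSTs) have already been collocated into equal-length arrays; the specific ETC refinements used in the paper are not reproduced.

```python
import numpy as np

def bias_rmse(sat_sst, ref_sst):
    """Direct comparison: mean bias and RMSE of satellite SST vs. reference SST."""
    diff = np.asarray(sat_sst, dtype=float) - np.asarray(ref_sst, dtype=float)
    return diff.mean(), np.sqrt(np.mean(diff ** 2))

def triple_collocation_errors(x, y, z):
    """Classical triple-collocation estimate of the random-error standard
    deviation (ESD) of three collocated SST systems, assuming linear
    calibration and mutually independent, zero-mean errors."""
    c = np.cov(np.vstack([x, y, z]))            # 3x3 sample covariance matrix
    var = np.array([
        c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2],  # system x
        c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2],  # system y
        c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1],  # system z
    ])
    return np.sqrt(np.clip(var, 0.0, None))     # clip guards against sampling noise
```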
Show Figures

Figure 1: Spatial distribution of triple collocations.
Figure 2: Scatter plots of SMR SST against iQuam SST for ascending and descending passes. (a) Ascending. (b) Descending.
Figure 3: Seasonal spatial distribution of SMR SST bias relative to iQuam SST for ascending passes. (a) Bias averaged over the period from December to February; (b) bias averaged over the period from March to May; (c) bias averaged over the period from June to August; (d) bias averaged over the period from September to November.
Figure 4: Seasonal spatial distribution of SMR SST bias relative to iQuam SST for descending passes. (a) Bias averaged over the period from December to February; (b) bias averaged over the period from March to May; (c) bias averaged over the period from June to August; (d) bias averaged over the period from September to November.
Figure 5: Scatter plots of ERA5 SST versus Argo SST: (a) comparison during the ascending pass of SMR, which corresponds to a time close to sunset; (b) comparison during the descending pass of the SMR, which corresponds to a time close to sunrise.
Figure 6: Scatter plots of SMR SST against ERA5 SST for ascending and descending passes. (a) Ascending. (b) Descending.
Figure 7: Temporal variation in error characteristics. (a) ESD. (b) SNR_sub. (c) Bias. (d) RMSE.
Figure 8: Latitudinal variation in error characteristics. (a) ESD. (b) SNR_sub. (c) Bias. (d) RMSE.
Figure 9: Variation in error characteristics related to SST. (a) ESD. (b) SNR_sub. (c) Bias. (d) RMSE.
Figure 10: Variation in error characteristics related to ERA5 sea surface wind speed. (a) ESD. (b) SNR_sub. (c) Bias. (d) RMSE.
Figure 11: Variation in error characteristics related to ERA5 columnar water vapor. (a) ESD. (b) SNR_sub. (c) Bias. (d) RMSE.
Figure 12: Variation in error characteristics related to ERA5 columnar cloud liquid water. (a) ESD. (b) SNR_sub. (c) Bias. (d) RMSE.
23 pages, 3787 KiB  
Article
Cloud-Based License Plate Recognition: A Comparative Approach Using You Only Look Once Versions 5, 7, 8, and 9 Object Detection
by Christine Bukola Asaju, Pius Adewale Owolawi, Chuling Tu and Etienne Van Wyk
Information 2025, 16(1), 57; https://doi.org/10.3390/info16010057 - 16 Jan 2025
Viewed by 335
Abstract
Cloud-based license plate recognition (LPR) systems have emerged as essential tools in modern traffic management and security applications. Determining the best approach remains paramount in the field of computer vision. This study presents a comparative analysis of several versions of the YOLO (You Only Look Once) object detection model, namely YOLOv5, YOLOv7, YOLOv8, and YOLOv9, applied to LPR tasks in a cloud computing environment. Using live video, we performed experiments on the YOLOv5, YOLOv7, YOLOv8, and YOLOv9 models to detect number plates in real time. According to the results, YOLOv8 is reported to be the most effective model for real-world deployment due to its strong cloud performance. It achieved an accuracy of 78% during cloud testing, while YOLOv5 showed consistent performance with 71%. YOLOv7 performed poorly in cloud testing (52%), indicating potential issues, while YOLOv9 reached 70% accuracy. Apart from the YOLOv7 outlier, these results are closely aligned, indicating consistent, although modest, performance across scenarios. The findings highlight the evolution of the YOLO architecture and its impact on enhancing LPR accuracy and processing efficiency. The results provide valuable insights into selecting the most appropriate YOLO model for cloud-based LPR systems, balancing the trade-offs between real-time performance and detection precision. This research contributes to advancing the field of intelligent transportation systems by offering a detailed comparison that can guide future implementations and optimizations of LPR systems in cloud environments.
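The per-model accuracy figures quoted above amount to a frame-level detection tally against annotated plates. The following sketch is one plausible way to compute such a tally, not the authors' evaluation script: it assumes each model's cloud-side detections have already been exported as bounding boxes (one per frame, or None when nothing was detected) alongside ground-truth boxes.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def detection_accuracy(detections, ground_truth, thr=0.5):
    """Fraction of frames in which the plate is detected with IoU >= thr.

    detections / ground_truth: per-frame lists of [x1, y1, x2, y2] boxes,
    with None where a model produced no detection for that frame.
    """
    hits = sum(
        1 for det, gt in zip(detections, ground_truth)
        if det is not None and iou(det, gt) >= thr
    )
    return hits / len(ground_truth)

# Running detection_accuracy once per model (v5, v7, v8, v9) on the same
# frame set yields directly comparable accuracy percentages.
```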
(This article belongs to the Section Information and Communications Technology)
Show Figures

Figure 1: Cloud-based license plate recognition framework.
Figure 2: Confusion matrix for YOLOv5 validation experiment.
Figure 3: Precision/recall progress between epochs 0 and 100 for YOLOv5 experiment.
Figure 4: Confusion matrix for YOLOv7 validation result.
Figure 5: Precision/recall progress between epochs 0 and 100 for YOLOv7 experiment.
Figure 6: Confusion matrix for YOLOv8 validation result.
Figure 7: Precision/recall progress between epochs 0 and 100 for YOLOv8 experiment.
Figure 8: Confusion matrix for YOLOv9 validation result.
Figure 9: Precision/recall progress between epochs 0 and 100 for YOLOv9 experiment.
Figure 10: Cloud-based framework.
Figure 11: Output of YOLOv5 deployed on cloud.
Figure 12: Output of YOLOv7 deployed on cloud.
Figure 13: Output of YOLOv8 deployed on cloud.
Figure 14: Output of YOLOv9 deployed on cloud.
15 pages, 3290 KiB  
Article
Tomato Stem and Leaf Segmentation and Phenotype Parameter Extraction Based on Improved Red Billed Blue Magpie Optimization Algorithm
by Lina Zhang, Ziyi Huang, Zhiyin Yang, Bo Yang, Shengpeng Yu, Shuai Zhao, Xingrui Zhang, Xinying Li, Han Yang, Yixing Lin and Helong Yu
Agriculture 2025, 15(2), 180; https://doi.org/10.3390/agriculture15020180 - 15 Jan 2025
Viewed by 298
Abstract
As the structure of tomato seedlings changes, traditional image-based techniques struggle to accurately quantify key morphological parameters, such as leaf area, internode length, and mutual occlusion between organs. Therefore, this paper proposes a tomato point cloud stem and leaf segmentation framework based on the Elite Strategy-based Improved Red-billed Blue Magpie Optimization (ES-RBMO) algorithm. The framework combines a four-layer Convolutional Neural Network (CNN) with the improved swarm intelligence algorithm for stem and leaf segmentation, achieving an accuracy of 0.965. Four key phenotypic parameters of the plant were extracted. The phenotypic parameters of plant height, stem thickness, leaf area, and leaf inclination were analyzed by comparing values obtained by manual measurement with values extracted using the 3D point cloud technique. The results showed that the coefficients of determination (R²) for these parameters were 0.932, 0.741, 0.938 and 0.935, respectively, indicating high correlation. The root mean square errors (RMSE) were 0.511, 0.135, 0.989 and 3.628, reflecting the level of error between the measured and extracted values. The absolute percentage errors (APE) were 1.970, 4.299, 4.365 and 5.531, which further quantifies the measurement accuracy. In this study, an efficient and adaptive intelligent optimization framework was constructed, capable of optimizing data processing strategies to achieve efficient and accurate processing of tomato point cloud data. This study provides a new technical tool for plant phenotyping and helps to improve intelligent management in agricultural production.
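A short illustration of how the three agreement statistics quoted above can be computed from paired values is given below. This is a generic sketch, not the authors' code: the exact definitions of R² and APE used in the paper are not stated in the abstract, so one common formulation (coefficient of determination of extracted against measured values, and mean absolute percentage error) is assumed.

```python
import numpy as np

def agreement_metrics(measured, extracted):
    """R^2, RMSE, and mean absolute percentage error between manually
    measured phenotype values and values extracted from the 3D point cloud."""
    measured = np.asarray(measured, dtype=float)
    extracted = np.asarray(extracted, dtype=float)
    resid = extracted - measured
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(resid ** 2))
    ape = 100.0 * np.mean(np.abs(resid) / np.abs(measured))
    return r2, rmse, ape

# Applied separately to plant height, stem thickness, leaf area, and leaf
# inclination, this yields one (R^2, RMSE, APE) triple per parameter.
```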
(This article belongs to the Section Digital Agriculture)
Show Figures

Figure 1: Image acquisition method. (a) Tomato plant sample point cloud data acquisition; (b) point cloud data acquisition scene; (c) point cloud data acquisition; (d) visualization of the preprocessed image.
Figure 2: 3DCNN model hierarchical flowchart.
Figure 3: Comparison of metrics for the ES-RBMO comparison test. (a) Accuracy; (b) Recall rate; (c) F1 score; (d) IoU; (e) ACC.
Figure 4: Comparison of metrics for the ES-RBMO ablation experiments. (a) Accuracy; (b) Recall rate; (c) F1 score; (d) IoU; (e) ACC.
Figure 5: Measurements of phenotypic parameters. (a) Plant height; (b) Stem thickness; (c) Leaf area; (d) Leaf inclination angle.
Figure 6: Point cloud results of tomato plants with different growth conditions. The green boxes indicate sites missed by the other models compared to ES-RBMO. (a) Normal tomato plants identified by ES-RBMO; (b) more complex tomato plants identified by ES-RBMO; (c) normal tomato plants identified by AC-UNet; (d) more complex tomato plants identified by AC-UNet; (e) normal tomato plants identified by UNet; (f) more complex tomato plants identified by UNet; (g) normal tomato plants identified by PointNet++; (h) more complex tomato plants identified by PointNet++; (i) normal tomato plants identified by PCNN; (j) more complex tomato plants identified by PCNN; (k) normal tomato plants identified by DeepLabV3; (l) more complex tomato plants identified by DeepLabV3.