Search Results (1,524)

Search Parameters:
Keywords = cross-ground

22 pages, 17029 KiB  
Article
Cross-Line Fusion of Ground Penetrating Radar for Full-Space Localization of External Defects in Drainage Pipelines
by Yuanjin Fang, Feng Yang, Xu Qiao, Maoxuan Xu, Liang Fang, Jialin Liu and Fanruo Li
Remote Sens. 2025, 17(2), 194; https://doi.org/10.3390/rs17020194 - 8 Jan 2025
Viewed by 135
Abstract
Drainage pipelines face significant threats to underground safety due to external defects. Ground Penetrating Radar (GPR) is a primary tool for detecting such defects from within the pipeline. However, existing methods are limited to single or multiple axial scan lines, which cannot provide the precise spatial coordinates of the defects. To address this limitation, this study introduces a novel GPR-based drainage pipeline inspection robot system integrated with multiple sensors. The system incorporates MEMS-IMU, encoder modules, and ultrasonic ranging modules to control the GPR antenna for axial and circumferential scanning. A novel Cross-Line Fusion of GPR (CLF-GPR) method is introduced to integrate axial and circumferential scan data for the precise localization of external pipeline defects. Laboratory simulations were performed to assess the effectiveness of the proposed technology and method, while its practical applicability was further validated through real-world drainage pipeline inspections. The results demonstrate that the proposed approach achieves axial positioning errors of less than 2.0 cm, spatial angular positioning errors below 2°, and depth coordinate errors within 2.3 cm. These findings indicate that the proposed approach is reliable and has the potential to support the transparency and digitalization of urban underground drainage networks. Full article
(This article belongs to the Special Issue Advanced Ground-Penetrating Radar (GPR) Technologies and Applications)
Figures:
Figure 1: Schematic diagram of the GPR detection principle for drainage pipelines. (a) Actual working conditions. (b) Detection principle.
Figure 2: Diagram of the GPR pipeline robot. (a) Extended state of the prototype robot. (b) Collapsed state of the telescopic antenna module. (c) Collapsed state of the telescopic driving module. (d) Circular scanning state of the GPR pipeline robot.
Figure 3: Operational principle of the GPR pipeline robot system.
Figure 4: Definition of coordinate systems. (a) Robot coordinate system. (b) Pipeline coordinate system and ground coordinate system.
Figure 5: Geometric diagram of (a) Δn and (b) the circumferential scan.
Figure 6: Diagram of the simulated defect and experimental setup.
Figure 7: Processing of the axial scan in experiment 1#: (a) raw image of the circumferential GPR signal; (b) GPR image after denoising and reconstruction; (c) edge detection and center point determination.
Figure 8: Processing of the circular scan in experiment 1#: (a) raw image of the circumferential GPR signal; (b) GPR image after denoising and reconstruction; (c) edge detection and center point determination.
Figure 9: Zero-point calibration for the GPR image: (a) raw image of the GPR signal; (b) detailed waveforms of the initial GPR signal traces.
Figure 10: CLF-GPR resolution: (a) the CLF-GPR image of experiment 1#; (b) antenna angles corresponding to the different GPR traces selected; (c) filtered and reconstructed GPR signals selected.
Figure 11: Results of the estimated distance between the simulated defect and the inner pipe wall.
Figure 12: The CLF-GPR images and results. (a,e,i) CLF-GPR images of 2#, 3#, and 4#, respectively; (b,f,j) antenna angles corresponding to the different GPR traces selected in 2#, 3#, and 4#; (c,g,k) GPR signals selected after filtering and reconstruction in 2#, 3#, and 4#; (d,h,l) results of the estimated distance between the simulated defect and the inner pipe wall in 2#, 3#, and 4#.
Figure 13: Field testing. (a) Field overview. (b) Simulated defect location. (c) Wooden box placement. (d) Robot advancement. Antenna swing to (e) angle 1, (f) angle 2, and (g) angle 3.
Figure 14: CLF-GPR resolution of the experiment: (a) the CLF-GPR image; (b) antenna angles corresponding to the different GPR traces selected; (c) filtered and reconstructed GPR signals selected; (d) estimated distance between the simulated defect and the inner pipe wall.
Figure 15: Variation in MSE with the number of selected points.
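">

The localization step described in the abstract above reduces to simple geometry once the robot's axial travel, the antenna azimuth, and the GPR echo delay are known. The following is a minimal, hypothetical sketch of that conversion; it is not the authors' CLF-GPR code, and the pipe radius and wave velocity are assumed placeholder values:

```python
import numpy as np

def defect_position(s_axial, phi_deg, t_ns, r_inner=0.25, v_gpr=0.1):
    """Locate an external pipeline defect in a pipeline-fixed frame (toy geometry).

    s_axial : axial travel of the robot along the pipe axis [m] (encoder)
    phi_deg : antenna azimuth angle around the pipe axis [deg] (IMU)
    t_ns    : two-way GPR travel time from the inner wall to the defect [ns]
    r_inner : inner pipe radius [m] (assumed value)
    v_gpr   : assumed wave velocity in the wall/soil [m/ns]
    """
    depth = 0.5 * t_ns * v_gpr             # one-way distance beyond the inner wall
    r = r_inner + depth                     # radial distance from the pipe axis
    phi = np.deg2rad(phi_deg)
    # pipeline frame: x along the pipe axis, y horizontal, z vertical
    return np.array([s_axial, r * np.cos(phi), r * np.sin(phi)])

# toy example: defect 4.20 m along the pipe, antenna at 30 deg, 1.6 ns echo delay
print(defect_position(4.20, 30.0, 1.6))
```

In the paper, the encoder, MEMS-IMU, and ultrasonic modules supply these inputs, and the cross-line fusion step refines them before any such coordinate conversion.
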
26 pages, 43142 KiB  
Article
Can Measurement and Input Uncertainty Explain Discrepancies Between the Wheat Canopy Scattering Model and SMAPVEX12 Observations?
by Lilangi Wijesinghe, Andrew W. Western, Jagannath Aryal and Dongryeol Ryu
Remote Sens. 2025, 17(1), 164; https://doi.org/10.3390/rs17010164 - 6 Jan 2025
Viewed by 270
Abstract
Realistic representation of microwave backscattering from vegetated surfaces is important for developing accurate soil moisture retrieval algorithms that use synthetic aperture radar (SAR) imagery. Many studies have reported considerable discrepancies between the simulated and observed backscatter. However, there has been limited effort to identify the sources of errors and contributions quantitatively using process-based backscatter simulation in comparison with extensive ground observations. This study examined the influence of input uncertainties on simulated backscatter from a first-order radiative transfer model, named the Wheat Canopy Scattering Model (WCSM), using ground-based and airborne data collected during the SMAPVEX12 campaign. Input uncertainties to WCSM were simulated using error statistics for two crop growth stages. The Sobol’ method was adopted to analyze the uncertainty in WCSM-simulated backscatters originating from different inputs before and after the wheat ear emergence. The results show that despite the presence of wheat ears, uncertainty in root mean square (RMS) height of 0.2 cm significantly influences simulated co-polarized backscatter uncertainty. After ear emergence, uncertainty in ears dominates simulated cross-polarized backscatter uncertainty. In contrast, uncertainty in RMS height before ear emergence dominates the accuracy of simulated cross-polarized backscatter. These findings suggest that considering wheat ears in the model structure and precise representation of surface roughness is essential to accurately simulate backscatter from a wheat field. Since the discrepancy between the simulated and observed backscatter coefficients is due to both model and observation uncertainty, the uncertainty of the UAVSAR data was estimated by analyzing the scatter between multiple backscatter coefficients obtained from the same targets near-simultaneously, assuming the scatter represents the observation uncertainty. Observation uncertainty of UAVSAR backscatter for HH, VV, and HV polarizations are 0.8 dB, 0.87 dB, and 0.86 dB, respectively. Discrepancies between WCSM-simulated backscatter and UAVSAR observations are discussed in terms of simulation and observation uncertainty. Full article
Figures:
Graphical abstract
Figure 1: Schematic diagram of the scattering mechanisms considered in the Wheat Canopy Scattering Model (WCSM) (adapted from Yan et al. [32]).
Figure 2: Map of the study area and wheat sampling fields used in the present study, overlaid on the UAVSAR-HH backscatter intensity image acquired on 17 July 2012 for line ID 31606. The inset illustrates the layout of the 16 sampling locations within each wheat field. Wheat crop measurements were made at sampling locations #2, #11, and #14 in each wheat field.
Figure 3: Schematic diagram of the methods followed in the study: steps followed in UAVSAR data processing and backscatter extraction, simulation of L-band backscatter using the Wheat Canopy Scattering Model (WCSM) with SMAPVEX12 ground measurements and allometric relationships from Yan et al. [32], and simulation and observation uncertainty analyses.
Figure 4: Wheat Canopy Scattering Model (WCSM) simulated total backscatter for HH, VV, and HV polarizations as a function of the incidence angle.
Figure 5: Total sensitivity (S_T) values from the Sobol' method for WCSM-simulated HH-, VV-, and HV-polarized backscatters for two different crop growth stages: (a) crop height 20 cm (before heading, without wheat ears) and (b) crop height 80 cm (after heading, with wheat ears).
Figure 6: Correlation plots between observed backscatter for different line ID combinations: 31604 vs. 31603 (a) HH, (b) VV, (c) HV; 31605 vs. 31603 (d) HH, (e) VV, (f) HV; 31606 vs. 31603 (g) HH, (h) VV, (i) HV; 31605 vs. 31604 (j) HH, (k) VV, (l) HV; 31606 vs. 31604 (m) HH, (n) VV, (o) HV; 31606 vs. 31605 (p) HH, (q) VV, (r) HV.
Figure 7: Comparison of observed backscatter from UAVSAR and simulated backscatter from the WCSM, where rows represent flight line IDs 31603, 31604, 31605, and 31606, and columns represent HH-, VV-, and HV-polarized backscatter: (a) 31603-HH, (b) 31603-VV, (c) 31603-HV, (d) 31604-HH, (e) 31604-VV, (f) 31604-HV, (g) 31605-HH, (h) 31605-VV, (i) 31605-HV, (j) 31606-HH, (k) 31606-VV, (l) 31606-HV. Uncertainty in observations and simulations is shown via x and y error bars (±standard deviation), respectively, at the mid-right of each panel.
Figure 8: Total backscatter and relative contributions of soil (attenuated soil scattering) and vegetation (volume scattering of ear, leaf, and stem) simulated from the WCSM for all three polarizations at sampling locations #2, #11, and #14 of wheat fields #44, #45, #65, #73, #74, #81, #85, and #91 on 17 July 2012 for line ID 31606.
Figure 9: Time series of UAVSAR backscatter for HH, VV, and HV polarizations and soil moisture measurements in wheat field #42 (sampling location #2) during the SMAPVEX12 campaign.
Figure 10: Sensitivity of WCSM-simulated total backscatter for HH, VV, and HV polarizations at L-band with changes in the gravimetric water content of wheat (a) ears, (b) leaves, and (c) stems.
Figure A1: Evolution of sensitivity indices with ensemble size: first-order sensitivity (S_1) for (a) HH, (b) VV, (c) HV polarizations and total sensitivity (S_T) for (d) HH, (e) VV, (f) HV polarizations.
Figure A2: Rank of (a) S_1 and (b) S_T of the 20 input factors to the WCSM from the Sobol' method for HH, VV, and HV polarizations at two different crop heights, 20 cm and 80 cm.
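">

The Sobol' analysis used in this study apportions the variance of the simulated backscatter among the model inputs. A hedged sketch of the same idea with the SALib package is shown below; the stand-in model, input names, and ranges are illustrative only and do not reproduce the WCSM:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Placeholder problem: three WCSM-style inputs (names and ranges are illustrative only)
problem = {
    "num_vars": 3,
    "names": ["rms_height_cm", "soil_moisture", "ear_water_content"],
    "bounds": [[0.3, 1.5], [0.05, 0.40], [0.1, 0.8]],
}

def toy_backscatter(x):
    """Stand-in for a radiative-transfer backscatter simulator, returning sigma0 in dB."""
    s, mv, mg = x
    return -15.0 + 8.0 * np.log10(s) + 20.0 * mv + 3.0 * mg * mv

X = saltelli.sample(problem, 1024)              # Saltelli sampling for Sobol' indices
Y = np.apply_along_axis(toy_backscatter, 1, X)  # run the toy model on every sample
Si = sobol.analyze(problem, Y)                  # first-order and total-order indices
for name, st in zip(problem["names"], Si["ST"]):
    print(f"total-order sensitivity S_T({name}) = {st:.3f}")
```

Substituting the actual WCSM (and its roughly 20 inputs with measured error statistics) for the toy function is the part that the paper carries out; the sampling and analysis steps are the same.
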
21 pages, 3337 KiB  
Article
Combining UAS LiDAR, Sonar, and Radar Altimetry for River Hydraulic Characterization
by Monica Coppo Frias, Alexander Rietz Vesterhauge, Daniel Haugård Olesen, Filippo Bandini, Henrik Grosen, Sune Yde Nielsen and Peter Bauer-Gottwein
Drones 2025, 9(1), 31; https://doi.org/10.3390/drones9010031 - 6 Jan 2025
Viewed by 305
Abstract
Accurate river hydraulic characterization is fundamental to assess flood risk, parametrize flood forecasting models, and develop river maintenance workflows. River hydraulic roughness and riverbed/floodplain geometry are the main factors controlling inundation extent and water levels. However, gauging stations providing hydrometric observations are declining worldwide, and they provide point measurements only. To describe hydraulic processes, spatially distributed data are required. In situ surveys are costly and time-consuming, and they are sometimes limited by local accessibility conditions. Satellite earth observation (EO) techniques can be used to measure spatially distributed hydrometric variables, reducing the time and cost of traditional surveys. Satellite EO provides high temporal and spatial frequency, but it can only measure large rivers (wider than ca. 50 m) and only provides water surface elevation (WSE), water surface slope (WSS), and surface water width data. UAS hydrometry can provide WSE, WSS, water surface velocity, and riverbed geometry at a high spatial resolution, making it suitable for rivers of all sizes. The use of UAS hydrometry can enhance river management, with cost-effective surveys offering large coverage and high-resolution data, which are fundamental in flood risk assessment, especially in areas that are difficult to access. In this study, we proposed a combination of UAS hydrometry techniques to fully characterize the hydraulic parameters of a river. The land elevation adjacent to the river channel was measured with LiDAR, the riverbed elevation was measured with a sonar payload, and the WSE was measured with a UAS radar altimetry payload. The survey provided 57 river cross-sections with riverbed elevation and 8 km of WSE and land elevation, and took around 2 days of survey work in the field. Simulated WSE values were compared to radar altimetry observations to fit hydraulic roughness, which cannot be directly observed. The riverbed elevation cross-sections have an average error of 32 cm relative to RTK GNSS ground-truth measurements. This error was a consequence of dense vegetation on land, which prevents the LiDAR signal from reaching the ground, and of underwater vegetation, which degrades the quality of the sonar measurements; it could be mitigated by performing surveys during winter, when submerged vegetation is less prevalent. Despite the error in the riverbed elevation cross-sections, the hydraulic model gave good estimates of the WSE, with an RMSE below 3 cm. The estimated roughness is also in good agreement with the values measured at a gauging station, with a Gauckler–Manning–Strickler coefficient of M = 16–17 m^(1/3)/s. Hydraulic modeling results demonstrate that both bathymetry and roughness measurements are necessary to obtain a unique and robust hydraulic characterization of the river. Full article
Figures:
Figure 1: Area of interest and overview of the survey. The left image shows the LiDAR data over the entire river reach. The middle image shows the photogrammetry data and the cross-sections with the ground-truth locations. The right image shows the map of Denmark indicating the location of the stream.
Figure 2: Rigid arm solution attached to the DJI Matrice 600 pro. (a) The unfolded rigid arm in contact with water, which makes it possible to perform sonar depth measurements. (b) The folded setup that facilitates take-off, landing, and flying between cross-sections.
Figure 3: Processing workflow for UAS hydrometry data and hydraulic characterization.
Figure 4: (a) Processed high-resolution orthophoto example. (b) Processed LiDAR point cloud, which provides the digital surface model (DSM).
Figure 5: Data processing steps to combine LiDAR and sonar cross-sections from raw LiDAR (green points) and sonar (light green dots) data to create the final cross-section (black line).
Figure 6: Combined LiDAR–sonar cross-sections with the corresponding orthophotos and ground-truth measurements from RTK at different chainage points: (a) 0 m, (b) 46 m, (c) 2471 m, (d) 2541 m, (e) 4972 m, (f) 5004 m, and (g) 5037 m.
Figure 7: Radar altimetry WSE results. The left image corresponds to the WSE profile along the chainage; the dark blue line represents the 200 m average, the light blue dots are the raw radar altimetry returns, and the dashed lines are the standard deviation.
Figure 8: Hydraulic model characterization: (a) results considering uniform depth and M, (b) uniform M characterization with variable depth geometry, (c) distributed M characterization. The results using observed bathymetry and conceptual bathymetry are both represented.
Full article ">
13 pages, 509 KiB  
Communication
Consensus-Based Guidelines for Best Practices in the Selection and Use of Examination Gloves in Healthcare Settings
by Jorge Freitas, Alexandre Lomba, Samuel Sousa, Viviana Gonçalves, Paulo Brois, Esmeralda Nunes, Isabel Veloso, David Peres and Paulo Alves
Nurs. Rep. 2025, 15(1), 9; https://doi.org/10.3390/nursrep15010009 - 2 Jan 2025
Viewed by 429
Abstract
Background/Objectives: Healthcare-associated infections (HAIs) and antimicrobial resistance (AMR) present significant challenges in modern healthcare, leading to increased morbidity, mortality, and healthcare costs. Examination gloves play a critical role in infection prevention by serving as a barrier to reduce the risk of cross-contamination between healthcare workers and patients. This manuscript aims to provide consensus-based guidelines for the optimal selection, use, and disposal of examination gloves in healthcare settings, addressing both infection prevention and environmental sustainability. Methods: The guidelines were developed using a multi-stage Delphi process involving healthcare experts from various disciplines. Recommendations were structured to ensure compliance with international regulations and sustainability frameworks aligned with the One Health approach and Sustainable Development Goals (SDGs). Results: Key recommendations emphasize selecting gloves based on clinical needs and compliance with EN 455 standards. Sterile gloves are recommended for surgical and invasive procedures, while non-sterile gloves are suitable for routine care involving contact with blood and other body fluids or contaminated surfaces. Proper practices include performing hand hygiene before and after glove use, avoiding glove reuse, and training healthcare providers on donning and removal techniques to minimize cross-contamination. Disposal protocols should follow local clinical waste management regulations, promoting sustainability through recyclable or biodegradable materials whenever feasible. Conclusions: These consensus-based guidelines aim to enhance infection control, improve the safety of patients and healthcare workers, and minimize environmental impact. By adhering to these evidence-based practices, grounded in European regulations, healthcare settings can establish safe and sustainable glove management systems that serve as a model for global practices. Full article
Figures:
Scheme 1: Types of gloves used in the provision of healthcare.
14 pages, 3950 KiB  
Article
Ground Testing of a Miniature Turbine Jet Engine for Specific Flight Conditions
by Ryszard Chachurski, Łukasz Omen, Andrzej J. Panas and Piotr Zalewski
Energies 2025, 18(1), 73; https://doi.org/10.3390/en18010073 - 28 Dec 2024
Viewed by 360
Abstract
This paper presents the design and development project of an engine test stand specifically constructed for ground testing of miniature turbine jet engines (MTJEs), along with conclusive results of the conducted investigations. The tested engines serve as the propulsion system for an unmanned aerial vehicle (UAV) platform. The engine test stand was used to determine various operating parameters of the engine, with a particular focus on recording variations and changes in temperature and pressure at the engine control cross-sections: behind the compressor, the combustion chamber, and at the final cross-section of the nozzle. The analysis of the direct test results allowed the evaluation of the engine's behavior under hydration conditions and documented the quantitative and qualitative response of the engine's control system. Of particular interest are the results showing an increase in exhaust system temperature with a decrease in the temperature in the combustion chamber under hydrated conditions. The test program assumed and considered the acting loads and forces in both standard and specific flight conditions, including scenarios for heavy rain. The preliminary evaluation of the investigation results provided data and insights required for further analysis. Quantitatively, the measured temperature value in the exhaust system does not exceed 700 °C, and the temperature increase resulting from the introduction of water and the engine's response to the out-of-operation event is approximately 50 °C for the JetCat 140. Qualitatively different effects were observed at the moment of combustion, consisting of a drop in temperature values during the introduction of water into the engine flow channel. The introduction of water into the GTM 140 inlet revealed no significant changes in the variations of pressure and temperature measured in selected engine design sections. Based on the knowledge and experience gained, a fully operational test stand to monitor the parameters and performance of the MTJEs, which are used for aerial target propulsion, was developed. Full article
Figures:
Figure 1: Miniature turbine jet engines: (a) GTM 140; (b) JetCat 140.
Figure 2: MTJE main components: 1—electric starter, 2—inlet, 3—compressor rotor, 4—diffuser, 5—combustion chamber cover, 6—combustion chamber, 7—vaporizer, 8—fuel manifold with injectors, 9—turbine nozzle, 10—turbine rotor, 11—exhaust nozzle, 12, 13—shaft, 14—shaft tunnel, 15—bearings.
Figure 3: Photographs of the engine test stand: (a) GTM 140 engine; (b) dynamometer components: 1—electrical signal emulator, 2—fuel pump, 3—fuel flow meter, 4—control valves, 5—electronic control module (ECU).
Figure 4: GTM 140 test stand front panel: 1—Ground Support Unit (GSU); 2—thrust indicator; 3—potentiometer with power switch.
Figure 5: JetCat 140 engine test stand: 1—engine; 2—fuel tank; 3—CL-363 strain gauge; 4—battery; 5—electronic thrust gauge; 6—Ground Support Unit (GSU); 7—platform.
Figure 6: JetCat 140 engine with markings: control cross-sections X1 and X2 and the location of temperature sensors in the cross-sections (X2—looking in the direction of flight); T1–T9—thermocouple designations.
Figure 7: Plots of rotor speed and temperature variation over time: (a) rotor speed characteristics; (b) temperature measurement results in cross-section X2.
Figure 8: Temperature measurement results in control cross-sections X1 and X2.
Figure 9: Effect of flow hydration on temperature change in the control cross-section; 1–4—consecutive numbers of water introduction to the flow in a quadruple cycle.
Figure 10: GTM 140 engine with markings: research cross-sections and location of temperature sensors (X2, X3—looking in the direction of flight); pressure hose (A) and thermocouple (B); T1–T10—thermocouple designations, P1–P4—pressure sensor designations.
Figure 11: Temperature and pressure in selected control sections.
Figure 12: Mobile ground stand for pre-flight engine inspection: (a) fully operational system with the checked engine; (b) mobile kit ready for transportation.
Figure 13: Aerial target Jet2 driven by two MTJE JetCat 140s: (a) during take-off from the launcher; (b) airborne in flight.
21 pages, 7071 KiB  
Article
Optimizing Automated Hematoma Expansion Classification from Baseline and Follow-Up Head Computed Tomography
by Anh T. Tran, Dmitriy Desser, Tal Zeevi, Gaby Abou Karam, Julia Zietz, Andrea Dell’Orco, Min-Chiun Chen, Ajay Malhotra, Adnan I. Qureshi, Santosh B. Murthy, Shahram Majidi, Guido J. Falcone, Kevin N. Sheth, Jawed Nawabi and Seyedmehdi Payabvash
Appl. Sci. 2025, 15(1), 111; https://doi.org/10.3390/app15010111 - 27 Dec 2024
Viewed by 340
Abstract
Hematoma expansion (HE) is an independent predictor of poor outcomes and a modifiable treatment target in intracerebral hemorrhage (ICH). Evaluating HE in large datasets requires segmentation of hematomas on admission and follow-up CT scans, a process that is time-consuming and labor-intensive in large-scale studies. Automated segmentation of hematomas can expedite this process; however, cumulative errors from segmentation on admission and follow-up scans can hamper accurate HE classification. In this study, we combined a tandem deep-learning classification model with automated segmentation to generate probability measures for false HE classifications. With this strategy, we can limit expert review of automated hematoma segmentations to a subset of the dataset, tailored to the research team’s preferred sensitivity or specificity thresholds and their tolerance for false-positive versus false-negative results. We utilized three separate multicentric cohorts for cross-validation/training, internal testing, and external validation (n = 2261) to develop and test a pipeline for automated hematoma segmentation and to generate ground truth binary HE annotations (≥3, ≥6, ≥9, and ≥12.5 mL). Applying a 95% sensitivity threshold for HE classification showed a practical and efficient strategy for HE annotation in large ICH datasets. This threshold excluded 47–88% of test-negative predictions from expert review of automated segmentations for different HE definitions, with less than 2% false-negative misclassification in both internal and external validation cohorts. Our pipeline offers a time-efficient and optimizable method for generating ground truth HE classifications in large ICH datasets, reducing the burden of expert review of automated hematoma segmentations while minimizing misclassification rate. Full article
(This article belongs to the Special Issue Novel Technologies in Radiology: Diagnosis, Prediction and Treatment)
Figures:
Figure 1: An example of an HE classification workflow with a high-sensitivity (95%) threshold. The combined segmentation and classification pipeline identifies the majority of subjects with HE (141 out of 148, 95.2%), and expert review of automated segmentations is limited to 35.5% of the subjects, correcting false-positive cases. This results in 99.21% accurate HE classification in the whole dataset, with a final 0.7% false-negative rate. Notably, expert reviewers spend only a third of the time required for examining segmentations in the entire dataset by focusing on test-positive subjects, significantly improving efficiency. The approach is practical and efficient for generating ground truth annotations of HE in large ICH datasets.
Figure 2: The pipeline for HE classification. Head CT scans were preprocessed for skull stripping, adjustment of intensities to the brain window/level, and resampling and registration to a common size space. The segmentation masks, along with the baseline and follow-up CTs, were used as input for a classification CNN to predict HE. The classifier outputs probability scores for each subject. From the threshold, sensitivity, specificity, and F1 score arrays, one can choose the optimal threshold, for example, a threshold based on the maximum F1 score [33], and then form the confusion matrix elements at that threshold. Using ROC analysis of the final prediction probabilities [34], the 100%, 95%, and 90% sensitivity and specificity thresholds were established in the internal test cohort and evaluated in the external validation cohort.
Figure 3: Classification of ≥3 mL HE using the CNN model and thresholds for 100%, 95%, and 90% sensitivity and specificity, as well as the highest accuracy threshold, in the internal test cohort (ATACH-2); these thresholds were then applied to the external validation cohort (Charité). The solid and dashed lines in the ROC curve refer to same-color sensitivity/specificity thresholds (as color coded in the table cells) in the internal and external validation cohorts, respectively.
Figures 4–6: As in Figure 3, for classification of ≥6 mL, ≥9 mL, and ≥12.5 mL HE, respectively.
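">

The key step in the workflow above is converting the CNN's probability scores into a review threshold that guarantees a chosen sensitivity. A small sketch using scikit-learn's ROC utilities on synthetic scores (not the study's data or model) illustrates the idea:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# synthetic HE labels and classifier probabilities, stand-ins for the CNN outputs
y_true = rng.integers(0, 2, 500)
y_prob = np.clip(0.6 * y_true + 0.4 * rng.random(500), 0.0, 1.0)

fpr, tpr, thresholds = roc_curve(y_true, y_prob)

def threshold_at_sensitivity(target=0.95):
    """Return the threshold with the lowest FPR whose sensitivity (TPR) is >= target."""
    ok = np.where(tpr >= target)[0]
    return thresholds[ok[0]]

thr = threshold_at_sensitivity(0.95)
flagged = y_prob >= thr                       # cases sent for expert review of segmentations
missed = np.sum((~flagged) & (y_true == 1))   # false negatives escaping review
print(f"threshold={thr:.3f}, review {flagged.mean():.1%} of scans, miss {missed} HE cases")
```

Fixing the threshold on an internal cohort and re-applying it to an external cohort, as the paper does, then quantifies how well the chosen operating point transfers.
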
20 pages, 11657 KiB  
Article
Assessment of PPP Using BDS PPP-B2b Products with Short-Time-Span Observations and Backward Smoothing Method
by Lewen Zhao and Wei Zhai
Remote Sens. 2025, 17(1), 25; https://doi.org/10.3390/rs17010025 - 25 Dec 2024
Viewed by 309
Abstract
The BeiDou Navigation Satellite System (BDS) offers orbit and clock corrections through the B2b signal, enabling Precise Point Positioning (PPP) without relying on ground communication networks. This capability supports applications such as aerial and maritime mapping. However, achieving high precision during the convergence period remains challenging, particularly for missions with short observation durations. To address this, we analyze the performance of PPP over short periods using PPP-B2b products and propose using the backward smoothing method to enhance the accuracy during the convergence period. Evaluation of the accuracy of PPP-B2b products shows that the orbit and clock accuracy of the BDS surpass those of GPS. Specifically, the BDS achieves orbit accuracies of 0.059 m, 0.178 m, and 0.186 m in the radial, along-track, and cross-track components, respectively, with a clock accuracy within 0.13 ns. The hourly static PPP achieves 0.5 m and 0.1 m accuracies with convergence times of 4.5 and 25 min at a 50% proportion, respectively. Nonetheless, 7.07% to 23.79% of sessions fail to converge to 0.1 m due to the limited availability of GPS and BDS corrections at certain stations. Simulated kinematic PPP requires an additional 1–4 min to reach the same accuracy as the static PPP. Using the backward smoothing method significantly enhances accuracy, achieving 0.024 m, 0.046 m, and 0.053 m in the north, east, and up directions, respectively. For vehicle-based positioning, forward PPP can achieve a horizontal accuracy better than 0.5 m within 4 min; however, during the convergence period, positioning errors may exceed 1.5 m and 3.0 m in the east and up direction. By applying the smoothing method, horizontal accuracy can reach better than 0.2 m, while the vertical accuracy can improve to better than 0.3 m. Full article
Figures:
Figure 1: Time series of GPS and BDS orbit errors in the radial, along-track, and cross-track directions for PPP, referenced using WUM products.
Figure 2: Statistics of GPS and BDS orbit errors in the radial, along-track, and cross-track directions, using WUM products as the reference.
Figure 3: Clock STD and RMS for the GPS products of PPP-B2b, using the WUM products as reference.
Figure 4: Clock STD and RMS for the BDS products of PPP-B2b, using the WUM products as reference.
Figure 5: SISRE time series for the GPS and BDS satellites from PPP-B2b products, with different colors representing the different satellites.
Figure 6: Distribution of the MGEX stations.
Figure 7: Convergence time for static PPP using PPP-B2b products (left) and WHR products (right), with green dots representing the original positioning errors.
Figure 8: Station-specific convergence time using different products to achieve 0.5 m and 0.1 m.
Figure 9: Station-specific three-dimensional positioning RMS and its corresponding average number of satellites.
Figure 10: Convergence time of kinematic PPP to reach 0.1 m and 0.5 m accuracy using PPP-B2b and WHR products.
Figure 11: Variation in 3D positioning errors with respect to the number of visible satellites for hourly PPP.
Figure 12: Distribution of hourly PPP positioning errors in the N/E/U directions calculated using PPP-B2b (left) and WHR (right) products.
Figure 13: Positioning time series of station BIK0 on DOY 075 in the north, east, and up directions, along with the variations in the number of satellites (Nsat) and GDOP values.
Figure 14: Vehicle trajectory for the dynamic experiment starting at point '1' and ending at point '2' (left), and time series of RTK positioning along with its ambiguity fixing status, where green represents the fixed solution.
Figure 15: Time series of positioning errors at the base station using PrideLab and NavEngine software with WHR products.
Figure 16: Time series of positioning errors at the base station using WHR and PPP-B2b products processed with NavEngine.
Figure 17: Time series of positioning errors for forward PPP using WHR and PPP-B2b products.
Figure 18: Positioning error time series of backward PPP using the WHR and PPP-B2b products.
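">

The backward smoothing that drives the accuracy gains reported above combines a forward filter with a backward pass over the stored filter states. A minimal Rauch–Tung–Striebel (RTS) smoother on a scalar random-walk coordinate, shown purely to illustrate the principle rather than the authors' PPP engine (all noise values are assumed), could be written as:

```python
import numpy as np

def forward_filter(z, q=1e-4, r=0.25, x0=0.0, p0=10.0):
    """Scalar random-walk Kalman filter: returns filtered states/variances and one-step priors."""
    n = len(z)
    x_f, p_f = np.zeros(n), np.zeros(n)
    x_p, p_p = np.zeros(n), np.zeros(n)
    x, p = x0, p0
    for k in range(n):
        x_p[k], p_p[k] = x, p + q            # predict (state transition = identity)
        K = p_p[k] / (p_p[k] + r)            # Kalman gain
        x = x_p[k] + K * (z[k] - x_p[k])     # measurement update
        p = (1.0 - K) * p_p[k]
        x_f[k], p_f[k] = x, p
    return x_f, p_f, x_p, p_p

def rts_smoother(x_f, p_f, x_p, p_p):
    """Backward (RTS) pass: refines early epochs using information from later ones."""
    n = len(x_f)
    x_s, p_s = x_f.copy(), p_f.copy()
    for k in range(n - 2, -1, -1):
        C = p_f[k] / p_p[k + 1]              # smoother gain
        x_s[k] = x_f[k] + C * (x_s[k + 1] - x_p[k + 1])
        p_s[k] = p_f[k] + C ** 2 * (p_s[k + 1] - p_p[k + 1])
    return x_s, p_s

rng = np.random.default_rng(1)
truth = 2.0
z = truth + 0.5 * rng.standard_normal(240)   # noisy "coordinate" observations, 1 per epoch
x_f, p_f, x_p, p_p = forward_filter(z)
x_s, _ = rts_smoother(x_f, p_f, x_p, p_p)
print("forward error at epoch 10 :", abs(x_f[10] - truth))
print("smoothed error at epoch 10:", abs(x_s[10] - truth))
```

As in the paper, the benefit is largest during the convergence period: early epochs that the forward filter estimates poorly are corrected by information arriving later in the session.
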
21 pages, 3068 KiB  
Article
Analytical Solutions for Thermo-Mechanical Coupling Bending of Cross-Laminated Timber Panels
by Chen Li, Shengcai Li, Kong Yue, Peng Wu, Zhongping Xiao and Biqing Shu
Buildings 2025, 15(1), 26; https://doi.org/10.3390/buildings15010026 - 25 Dec 2024
Viewed by 299
Abstract
This study presents analytical solutions grounded in three-dimensional (3D) thermo-elasticity theory to predict the bending behavior of cross-laminated timber (CLT) panels under thermo-mechanical conditions, incorporating the orthotropic and temperature-dependent properties of wood. The model initially utilizes Fourier series expansion based on heat transfer theory to address non-uniform temperature distributions. By restructuring the governing equations into eigenvalue equations, the general solutions for stresses and displacements in the CLT panel are derived, with coefficients determined through the transfer matrix method. A comparative analysis shows that the proposed solution aligns well with finite element results while offering superior computational efficiency. The solution based on the plane section assumption closely matches the proposed solution for thinner panels; however, discrepancies increase as panel thickness rises. Finally, this study explores the thermo-mechanical bending behavior of the CLT panel and proposes a modified superposition principle. The parameter study indicates that the normal stress is mainly affected by modulus and thermal expansion coefficients, while the deflection of the panel is largely dependent on thermal expansion coefficients but less affected by modulus. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
Figures:
Figure 1: Schematic diagram of the CLT panel under the thermo-mechanical coupling condition.
Figure 2: Flow diagram for the eigenvalue method.
Figure 3: Schematic diagram of the transfer matrix relationship.
Figure 4: Flow diagram for the present analytical process.
Figure 5: Comparisons between experimental curves and the numerical curve.
Figure 6: Distribution of stress and displacement along the thickness of the CLT panel under the four kinds of action.
Figure 7: Distribution of stress and displacement along the z-direction in the CLT panel for six kinds of wood under PT action.
Figure 8: Distribution of z-direction thermal stresses and displacements in the CLT panel made of redwood under PT action with three kinds of temperature differences.
Figure 9: Effects of material constants on σ_x,max^i and w_max^i of the CLT panel under PT action.
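">

The transfer matrix method mentioned in the abstract chains the state of the panel across its laminations: the state vector at the bottom face is mapped to the top face by the ordered product of layer matrices. The sketch below uses a deliberately simplified 2x2 axial-rod analogue with assumed stiffness and thickness values, only to show the chaining; the paper's state vector couples stresses and displacements from the 3D thermo-elastic solution and is larger:

```python
import numpy as np

def layer_matrix(axial_stiffness, thickness):
    """Placeholder 2x2 transfer matrix of one lamination (axial-rod analogue).

    State vector is [displacement, force]; axial_stiffness plays the role of EA [N].
    The real layer matrices in the paper follow from the thermo-elastic eigenvalue solution.
    """
    return np.array([[1.0, thickness / axial_stiffness],
                     [0.0, 1.0]])

def panel_transfer_matrix(layers):
    """Chain layer matrices from the bottom face to the top face."""
    T = np.eye(2)
    for stiffness, thickness in layers:
        T = layer_matrix(stiffness, thickness) @ T
    return T

# 5-layer CLT-like stack: alternating stiff/compliant laminations (toy values)
layers = [(11e9, 0.02), (0.6e9, 0.02), (11e9, 0.02), (0.6e9, 0.02), (11e9, 0.02)]
T_total = panel_transfer_matrix(layers)
state_bottom = np.array([0.0, 1.0e4])   # [displacement, force] at the bottom face
print("state at top face:", T_total @ state_bottom)
```

The unknown coefficients are then fixed by imposing the boundary conditions at the two faces on the chained matrix, which is the step the paper solves analytically.
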
23 pages, 9152 KiB  
Article
Multi-Band Scattering Characteristics of Miniature Masson Pine Canopy Based on Microwave Anechoic Chamber Measurement
by Kai Du, Yuan Li, Huaguo Huang, Xufeng Mao, Xiulai Xiao and Zhiqu Liu
Sensors 2025, 25(1), 46; https://doi.org/10.3390/s25010046 - 25 Dec 2024
Viewed by 248
Abstract
Using microwave remote sensing to invert forest parameters requires clear canopy scattering characteristics, which can be intuitively investigated through scattering measurements. However, there are very few ground-based measurements on forest branches, needles, and canopies. In this study, a quantitative analysis of the canopy branches, needles, and ground contribution of Masson pine scenes in C-, X-, and Ku-bands was conducted based on a microwave anechoic chamber measurement platform. Four canopy scenes with different densities by defoliation in the vertical direction were constructed, and the backscattering data for each scene were collected in the C-, X-, and Ku-bands across eight incidence angles and eight azimuth angles, respectively. The results show that in the vertical observation direction, the backscattering energy of the C- and X-bands was predominantly contributed by the ground, whereas the Ku-band signal exhibited higher sensitivity to the canopy structure. The backscattering energy of the scene was influenced by the incident angle, particularly in the cross-polarization, where backscattering energy increased with larger incident angles. The scene’s backscattering energy was influenced by the scattering and extinction of canopy branches and needles, as well as by ground scattering, resulting in a complex relationship with canopy density. In addition, applying orientation correction to the polarization scattering matrix can mitigate the impact of the incident angle and reduce the decomposition energy errors in the Freeman–Durden model. In order to ensure the reliability of forest parameter inversion based on SAR data, a greater emphasis should be placed on physical models that account for signal scattering and the extinction process, rather than relying on empirical models. Full article
Figures:
Figure 1: (a) Interior view of the microwave characteristic measurement and simulation imaging science experiment platform (LAMP, Deqing, China); (b) geometric diagram of the platform.
Figure 2: (a) The scene with all needles (S1); (b) the first defoliation scene (S2); (c) the second defoliation scene (S3); (d) the scene without needles (S4).
Figure 3: Workflow of this study.
Figure 4: Illustration of the backscatter energy profile and the signal locations of canopy and ground.
Figure 5: Statistics of the ground and canopy energy contribution ratios for different canopy structure scenes: (a) scene S1; (b) scene S2; (c) scene S3; (d) scene S4.
Figure 6: Cumulative backscattering energy distribution curves for various scenes under different polarization modes in the C-band: (a) HH; (b) VV; (c) HV; (d) VH.
Figure 7: Cumulative backscattering energy distribution curves for various scenes under different polarization modes in the X-band: (a) HH; (b) VV; (c) HV; (d) VH.
Figure 8: Cumulative backscattering energy distribution curves for various scenes under different polarization modes in the Ku-band: (a) HH; (b) VV; (c) HV; (d) VH.
Figure 9: Variation of backscattering energy with observation incidence angle for scene S1: (a) C-band; (b) X-band; (c) Ku-band.
Figure 10: Variation of backscattering energy with observation incidence angle for scene S1 after de-orientation: (a) C-band; (b) X-band; (c) Ku-band.
Figure 11: Side-looking backscattering energy for different canopy structure scenes of Masson pine: (a–c) C-band at incidence angles of 35°, 45°, and 55°; (d–f) X-band at 35°, 45°, and 55°; (g–i) Ku-band at 35°, 45°, and 55°.
Figure 12: Side-looking backscattering energy for different canopy structure scenes of Masson pine after orientation correction: (a–c) C-band at incidence angles of 35°, 45°, and 55°; (d–f) X-band at 35°, 45°, and 55°; (g–i) Ku-band at 35°, 45°, and 55°.
Figure 13: Decomposition energy error statistics based on different polarization decomposition algorithms: (a–d) energy error distribution under different incidence angles for scenes S1–S4 using the Freeman–Durden model decomposition; (e–h) for scenes S1–S4 using the Freeman–Durden model combined with orientation correction; (i–l) for scenes S1–S4 using the modified Freeman–Durden model combined with orientation correction.
Figure 14: Scattering-mechanism energy proportions of each scene obtained by the Freeman–Durden model: (a–c) C-band at incidence angles of 35°, 45°, and 55°; (d–f) X-band at 35°, 45°, and 55°; (g–i) Ku-band at 35°, 45°, and 55°.
Figure 15: Scattering-mechanism energy proportions of each scene obtained by the modified Freeman–Durden model combined with orientation correction: (a–c) C-band at incidence angles of 35°, 45°, and 55°; (d–f) X-band at 35°, 45°, and 55°; (g–i) Ku-band at 35°, 45°, and 55°.
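">

Orientation correction (de-orientation) rotates the measured scattering matrix about the radar line of sight so that cross-polarized power is minimized before model-based decomposition. A hedged sketch using a brute-force angle search on a toy dihedral target is given below; the paper's exact correction formula may differ:

```python
import numpy as np

def rotate_scattering_matrix(S, theta):
    """Rotate a 2x2 complex scattering matrix about the radar line of sight."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R.T @ S @ R

def deorient(S, n_grid=1801):
    """Grid-search the rotation that minimizes cross-pol power |S_hv|^2."""
    thetas = np.linspace(-np.pi / 4, np.pi / 4, n_grid)
    powers = [abs(rotate_scattering_matrix(S, t)[0, 1]) ** 2 for t in thetas]
    best = thetas[int(np.argmin(powers))]
    return rotate_scattering_matrix(S, best), best

# toy scatterer: a dihedral-like target rotated by 20 deg about the line of sight
S0 = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
S_obs = rotate_scattering_matrix(S0, np.deg2rad(20.0))
S_corr, est = deorient(S_obs)                 # correction rotation, here about -20 deg
print("correction angle [deg]:", np.rad2deg(est))
print("cross-pol power before/after:", abs(S_obs[0, 1]) ** 2, abs(S_corr[0, 1]) ** 2)
```

Removing the orientation-induced cross-pol term in this way is what limits the overestimation of volume scattering when a Freeman–Durden-type decomposition is applied afterwards.
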
16 pages, 787 KiB  
Article
Instrumenting Parkrun: Usefulness and Validity of Inertial Sensors
by Rachel Mason, Yunus Celik, Gill Barry, Alan Godfrey and Samuel Stuart
Sensors 2025, 25(1), 30; https://doi.org/10.3390/s25010030 - 24 Dec 2024
Viewed by 288
Abstract
The analysis of running gait has conventionally taken place within an expensive and restricted laboratory space, with wearable technology offering a practical, cost-effective, and unobtrusive way to examine running gait in more natural environments. This pilot study presents a wearable inertial measurement unit (IMU) setup for the continuous analysis of running gait during an outdoor parkrun (i.e., 5 km). The study aimed to (1) provide analytical validation of running gait measures compared to time- and age-graded performance and (2) explore performance validation. Ten healthy adults (7 females, 3 males, mean age 37.2 ± 11.7 years) participated. The participants wore Axivity AX6 IMUs on the talus joint of each foot, recording tri-axial accelerometer and gyroscope data at 200 Hz. Temporal gait characteristics—gait cycle, ground contact time, swing time, and duty factor—were extracted using zero-crossing algorithms. The data were analyzed for correlations between the running performance, foot strike type, and fatigue-induced changes in temporal gait characteristics. Strong correlations were found between the performance time and both the gait cycle and ground contact time, with weak correlations for foot strike types. The analysis of asymmetry and fatigue highlighted modest changes in gait as fatigue increased, but no significant gender differences were found. This setup demonstrates potential for in-field gait analysis for running, providing insights for performance and injury prevention strategies. Full article
(This article belongs to the Special Issue Inertial Sensing System for Motion Monitoring)
Figures:
Figure 1: Time series analyses of running gait characteristics over the 5 km run (%): (A) gait cycle, (B) ground contact time, (C) swing time, (D) duty factor. Each panel includes the mean (solid black line), standard deviation (shaded gray area), and elevation profile (dashed green line).
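">

The zero-crossing segmentation mentioned in the abstract can be illustrated with a few lines of signal processing: rising zero crossings of a foot-mounted gyroscope trace delimit gait cycles, from which stride time follows. The sketch below uses a synthetic signal and a simplified event definition; real initial- and final-contact detection is more involved:

```python
import numpy as np

FS = 200  # Hz, sampling rate of the foot-worn AX6 IMUs

def zero_crossings(signal, direction="up"):
    """Indices where the signal crosses zero (rising crossings by default)."""
    neg = np.signbit(signal)
    if direction == "up":
        idx = np.where(neg[:-1] & ~neg[1:])[0] + 1   # negative to non-negative
    else:
        idx = np.where(~neg[:-1] & neg[1:])[0] + 1   # non-negative to negative
    return idx

# synthetic sagittal angular velocity: ~1.4 s gait cycles plus sensor noise
t = np.arange(0, 20, 1 / FS)
gyro = np.sin(2 * np.pi * t / 1.4) + 0.05 * np.random.default_rng(2).standard_normal(t.size)

cycle_starts = zero_crossings(gyro, "up")
stride_times = np.diff(cycle_starts) / FS            # gait-cycle durations [s]
print(f"detected {len(stride_times)} cycles, mean stride time {stride_times.mean():.2f} s")
```

Ground contact time, swing time, and duty factor then follow from pairing the contact and toe-off events detected within each cycle.
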
26 pages, 2478 KiB  
Article
An Approach for Detecting Parkinson’s Disease by Integrating Optimal Feature Selection Strategies with Dense Multiscale Sample Entropy
by Minh Tai Pham Nguyen, Minh Khue Phan Tran, Tadashi Nakano, Thi Hong Tran and Quoc Duy Nam Nguyen
Information 2025, 16(1), 1; https://doi.org/10.3390/info16010001 - 24 Dec 2024
Viewed by 407
Abstract
Parkinson’s disease (PD) is a neurological disorder that severely affects motor function, especially gait, requiring accurate diagnosis and assessment instruments. This study presents Dense Multiscale Sample Entropy (DM-SamEn) as an innovative method for diminishing feature dimensions while maintaining the uniqueness of signal features. DM-SamEn employs a weighting mechanism that considers the dynamic properties of the signal, thereby reducing redundancy and improving the distinctiveness of features extracted from vertical ground reaction force (VGRF) signals in patients with Parkinson’s disease. Subsequent to the extraction process, correlation-based feature selection (CFS) and sequential backward selection (SBS) refine feature sets, improving algorithmic accuracy. To validate the feature extraction and selection stage, three classifiers—Adaptive Weighted K-Nearest Neighbors (AW-KNN), Radial Basis Function Support Vector Machine (RBF-SVM), and Multilayer Perceptron (MLP)—were employed to evaluate classification efficacy and ascertain optimal performance across selection strategies, including CFS, SBS, and the hybrid SBS-CFS approach. K-fold cross-validation was employed to provide improved evaluation of model performance by assessing the model on various data subsets, thereby mitigating the risk of overfitting and augmenting the robustness of the results. As a result, the model demonstrated a significant ability to differentiate between PD patients and healthy controls, with classification accuracy reported as ACC [CI 95%: 97.82–98.5%] for disease identification and ACC [CI 95%: 96.3–97.3%] for severity assessment. Optimal performance was primarily achieved through feature sets chosen using SBS and the integrated SBS-CFS methods. The findings highlight the model’s potential as an effective instrument for diagnosing PD and assessing its severity, contributing to advancements in clinical management of the condition. Full article
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence 2024)
Figures:
Graphical abstract
Figure 1: (a) Gender and label distribution across each class in the severity classification task; (b) gender and label distribution across each class in the PD classification task.
Figure 2: The methodology procedure has four stages: preprocessing, feature extraction, feature selection, and classification.
Figure 3: The process of signal division using the time-slicing window method and outlier removal through the quartile approach and histogram analysis.
Figure 4: Boxplot analysis of model consistency for the PD and severity classification tasks.
Figure 5: (a) Adjusted p-value matrix of paired t-tests comparing accuracies between classifier–feature selection method pairs (M-SamEn); (b) adjusted p-value matrix of paired t-tests comparing accuracies between classifier–feature selection method pairs (DM-SamEn).
Figure 6: (a) Correlation matrix of the feature set extracted using the M-SamEn method; (b) correlation matrix of the feature set extracted using the DM-SamEn method.
Figure 7: (a) Distribution of selected features by signal source (original and computed signals from Equations (1)–(3)) after the feature selection stage; (b) distribution of selected features by feature extraction method after the feature selection stage.
Figure 8: Comparison of feature count across feature selection methods (*: p-value < 0.05; **: p-value < 0.01).
14 pages, 1953 KiB  
Article
Artificial Intelligence Unveils the Unseen: Mapping Novel Lung Patterns in Bronchiectasis via Texture Analysis
by Athira Nair, Rakesh Mohan, Mandya Venkateshmurthy Greeshma, Deepak Benny, Vikram Patil, SubbaRao V. Madhunapantula, Biligere Siddaiah Jayaraj, Sindaghatta Krishnarao Chaya, Suhail Azam Khan, Komarla Sundararaja Lokesh, Muhlisa Muhammaed Ali Laila, Vadde Vijayalakshmi, Sivasubramaniam Karunakaran, Shreya Sathish and Padukudru Anand Mahesh
Diagnostics 2024, 14(24), 2883; https://doi.org/10.3390/diagnostics14242883 - 21 Dec 2024
Viewed by 494
Abstract
Background and Objectives: Thin-section CT (TSCT) is currently the most sensitive imaging modality for detecting bronchiectasis. However, conventional TSCT or HRCT may overlook subtle lung involvement such as alveolar and interstitial changes. Artificial Intelligence (AI)-based analysis offers the potential to identify novel information on lung parenchymal involvement that is not easily detectable with traditional imaging techniques. This study aimed to assess lung involvement in patients with bronchiectasis using the Bronchiectasis Radiologically Indexed CT Score (BRICS) and AI-based quantitative lung texture analysis software (IMBIO, Version 2.2.0). Methods: A cross-sectional study was conducted on 45 subjects diagnosed with bronchiectasis. The BRICS severity score was used to classify the severity of bronchiectasis into four categories: Mild, Moderate, Severe, and tractional bronchiectasis. Lung texture mapping using the IMBIO AI software tool was performed to identify abnormal lung textures, specifically focusing on detecting alveolar and interstitial involvement. Results: Based on the Bronchiectasis Radiologically Indexed CT Score (BRICS), the severity of bronchiectasis was classified as Mild in 4 (8.9%) participants, Moderate in 14 (31.1%), Severe in 11 (24.4%), and tractional in 16 (35.6%). AI-based lung texture analysis using IMBIO identified significant alveolar and interstitial abnormalities, offering insights beyond conventional HRCT findings. This study revealed trends in lung hyperlucency, ground-glass opacity, reticular changes, and honeycombing across severity levels, with advanced disease stages showing more pronounced structural and vascular alterations. Elevated pulmonary vascular volume (PVV) was noted in cases with higher BRICSs, suggesting increased vascular remodeling in severe and tractional types. Conclusions: AI-based lung texture analysis provides valuable insights into lung parenchymal involvement in bronchiectasis that may not be detectable through conventional HRCT. Identifying significant alveolar and interstitial abnormalities underscores the potential impact of AI on improving the understanding of disease pathology and disease progression, and guiding future therapeutic strategies. Full article
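As a rough illustration of how AI-derived texture outputs can be related to the BRICS, the snippet below computes Spearman correlations between per-patient texture percentages and the severity score. The column names and the handful of rows are hypothetical placeholders, not IMBIO output or data from this study.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-patient table; all values are placeholders, not study data.
df = pd.DataFrame({
    "brics":        [3, 7, 11, 14, 5, 9],
    "ground_glass": [2.1, 5.4, 9.8, 12.3, 4.0, 7.7],   # % of lung volume
    "reticular":    [1.0, 3.2, 6.5, 9.1, 2.4, 5.0],
    "honeycombing": [0.0, 0.8, 2.2, 4.5, 0.3, 1.9],
})

for pattern in ["ground_glass", "reticular", "honeycombing"]:
    rho, p = spearmanr(df["brics"], df[pattern])
    print(f"{pattern}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```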
Figure 1. Detection of parenchymal pathologies by AI and radiologist.
Figure 2. Correlation between various parenchymal pathologies detected by Artificial Intelligence and BRICS.
Figure 3. Lung texture analysis report: the DICOM representative image of each category of pattern. These were chosen based on the predominance of lung parenchymal patterns including hyperlucent, ground glass [GG], reticular, and honeycombing. Imbio LTA provided their regional distribution within three different lung zones [upper, middle, lower] in each lung. The platform provided an image in which each lung pattern was colored differently, allowing the evaluation of disease extent, composition, and location at a glance. Specific images were chosen because of the predominance of one of the patterns in each of them; at the same time, they had other patterns, also showing that all pathologies could coexist in a single case of bronchiectasis, yet have a predominance of a single pattern that may have a bearing on their clinical symptoms.
17 pages, 4246 KiB  
Article
Seismic Response Analysis of Continuous Girder Bridges Crossing Faults with Assembled Rocking-Self-Centering Piers
by Tianyi Zhou, Yingxin Hui, Junlu Liu and Jiale Lv
Buildings 2024, 14(12), 4061; https://doi.org/10.3390/buildings14124061 - 21 Dec 2024
Viewed by 367
Abstract
Under the action of cross-fault ground motion, bridge piers can experience significant residual displacements, which can irreversibly impact the integrity and reliability of the bridge structure. Traditional seismic mitigation measures struggle to effectively prevent multi-span chain collapses caused by the tilting of bridge piers. Therefore, it is of practical engineering significance to explore the effectiveness of rocking self-centering (RSC) piers as seismic mitigation measures for such bridges. In this paper, cross-fault ground motion with sliding effects is artificially synthesized based on the characteristics of the fault seismogenic mechanism. A finite element model of a cross-fault bridge is established using the OpenSees platform, and the applicability of RSC piers to cross-fault bridges is explored. The results show that RSC piers significantly reduce residual displacement under cross-fault ground motions, facilitating rapid post-earthquake recovery, with the most notable vibration reduction effects observed in piers adjacent to the fault. When an 80 cm fault displacement occurs, the vibration reduction rate reaches 48%. Additionally, when the fault's permanent displacement increases the risk of pier toppling, the vibration reduction effect of the RSC pier is positively correlated with the degree of fault displacement. However, the amplification effect of RSC piers on the maximum relative displacement of bearings in cross-fault bridges cannot be ignored. In this study, for the first time, RSC piers were assembled on bridges spanning faults to investigate their seismic damping effect. When the fault offset exceeds 60 cm, the damping effect of RSC piers is positively correlated with the degree of fault offset, and their amplifying effect on the maximum relative displacement of the bearings becomes increasingly pronounced as the permanent displacement grows. For example, the vibration reduction rate is 39% at a 60 cm fault offset and 54% at a 90 cm offset. Designers should configure RSC piers according to the specific bridge and site conditions to achieve optimal vibration reduction effects. Full article
(This article belongs to the Section Building Structures)
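The abstract quotes vibration reduction rates (e.g., 48% at 80 cm of fault displacement) without stating the formula. A minimal sketch follows, under the common assumption that the rate is the relative decrease in residual pier displacement of the RSC system versus a conventional pier; the numerical values are illustrative, not the study's results.

```python
def vibration_reduction_rate(conventional_disp: float, rsc_disp: float) -> float:
    """Relative reduction (%) in residual displacement achieved by the RSC pier.

    Assumed definition: (conventional - RSC) / conventional * 100.
    """
    return 100.0 * (conventional_disp - rsc_disp) / conventional_disp

# Illustrative only: a 48% rate would arise, e.g., from a 0.25 m residual
# drift with a conventional pier reduced to 0.13 m with an RSC pier.
print(f"{vibration_reduction_rate(0.25, 0.13):.0f}%")
```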
Figure 1. Bridge layout (unit: m).
Figure 2. Constitutive relationship of bearing.
Figure 3. Constitutive model of concrete.
Figure 4. Constitutive model of reinforcing steel.
Figure 5. RSC bridge column numerical analysis model.
Figure 6. Validation of constitutive model and rationality of RSC bridge pier model. (a) Hysteretic constitutive model; (b) reasonableness verification.
Figure 7. The target and selected seismic ground motion response spectrum.
Figure 8. The low frequency component of displacement time history.
Figure 9. Acceleration, velocity and displacement time histories. (a) Acceleration; (b) velocity; (c) displacement.
Figure 10. Method of seismic ground motion input.
Figure 11. Comparison of residual displacement of columns. (a) P3; (b) P2; (c) P1.
Figure 12. Comparison of bending moment–curvature curves of column bottom. (a) P3; (b) P2; (c) P1.
Figure 13. Comparison of force–displacement relationship of bearing. (a) P3; (b) P2; (c) P1.
Figure 14. The influence of permanent displacement on the vibration reduction ratio of RSC columns.
Figure 15. Impact of permanent displacement on max relative bearing displacement.
Figure 16. Impact of permanent displacement on RSC seismic performance.
20 pages, 37875 KiB  
Article
Unsupervised Domain Adaptation Semantic Segmentation of Remote Sensing Imagery with Scene Covariance Alignment
by Kangjian Cao, Sheng Wang, Ziheng Wei, Kexin Chen, Runlong Chang and Fu Xu
Electronics 2024, 13(24), 5022; https://doi.org/10.3390/electronics13245022 - 20 Dec 2024
Viewed by 345
Abstract
Remote sensing imagery (RSI) segmentation plays a crucial role in environmental monitoring and geospatial analysis. However, in real-world practical applications, the domain shift problem between the source domain and target domain often leads to severe degradation of model performance. Most existing unsupervised domain adaptation methods focus on aligning global-local domain features or category features, neglecting the variations of ground object categories within local scenes. To capture these variations, we propose the scene covariance alignment (SCA) approach to guide the learning of scene-level features in the domain. Specifically, we propose a scene covariance alignment model to address the domain adaptation challenge in RSI segmentation. Unlike traditional global feature alignment methods, SCA incorporates a scene feature pooling (SFP) module and a covariance regularization (CR) mechanism to extract and align scene-level features effectively and focuses on aligning local regions with different scene characteristics between source and target domains. Experiments on both the LoveDA and Yanqing land cover datasets demonstrate that SCA exhibits excellent performance in cross-domain RSI segmentation tasks, particularly outperforming state-of-the-art baselines across various scenarios, including different noise levels, spatial resolutions, and environmental conditions. Full article
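To make the covariance-alignment idea concrete, the sketch below pools a feature map into scene-level vectors and penalizes the Frobenius distance between source and target covariances, in the spirit of CORAL-style alignment. It is a minimal sketch, not the authors' SFP or CR modules; the grid size, tensor shapes, and loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def scene_pool(feature_map: torch.Tensor, grid: int = 4) -> torch.Tensor:
    """Pool a (B, C, H, W) feature map into grid*grid scene vectors of size C."""
    pooled = F.adaptive_avg_pool2d(feature_map, grid)             # (B, C, grid, grid)
    return pooled.flatten(2).transpose(1, 2).reshape(-1, feature_map.size(1))

def covariance(feats: torch.Tensor) -> torch.Tensor:
    """Covariance matrix (C, C) of a batch of feature vectors (N, C)."""
    centered = feats - feats.mean(dim=0, keepdim=True)
    return centered.t() @ centered / (feats.size(0) - 1)

def covariance_alignment_loss(src_map: torch.Tensor, tgt_map: torch.Tensor,
                              grid: int = 4) -> torch.Tensor:
    """Frobenius distance between source and target scene-feature covariances."""
    cs = covariance(scene_pool(src_map, grid))
    ct = covariance(scene_pool(tgt_map, grid))
    d = src_map.size(1)
    return ((cs - ct) ** 2).sum() / (4 * d * d)   # CORAL-style normalization

# Hypothetical training objective (names assumed, not from the paper):
# total_loss = segmentation_loss + lam * covariance_alignment_loss(f_src, f_tgt)
```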
Figure 1. The framework of the scene covariance alignment (SCA).
Figure 2. The LoveDA dataset under different noise conditions. Subplots (a–c) introduce Gaussian noise with σ values of 0.0, 0.05, and 0.1, to simulate sensor noise and atmospheric interference.
Figure 3. The LoveDA dataset under different contrast conditions. Subplots (a–e) apply linear contrast stretches with values of −0.4, 0.0, 0.4, 0.8, and 1.2, to adjust the image contrast.
Figure 4. Comparison of model adaptation effects under different contrast train and evaluation datasets.
Figure 5. Comparison of model adaptation effects under different noise level training and evaluation datasets.
Figure 6. Comparison of model segmentation effects at different contrasts. The upper part shows the same remote sensing image processed with varying contrast levels, increasing gradually from left to right. The lower part displays the segmentation results of the SCA model.
Figure 7. Comparison of model segmentation effects at different resolutions. The upper part shows the same remote sensing image processed with decreasing resolution levels from left to right. The lower part displays the segmentation results of the SCA model.
Figure 8. Visual comparison of model segmentation effects under different scenarios.
24 pages, 7818 KiB  
Article
Application of the U-Net Deep Learning Model for Segmenting Single-Photon Emission Computed Tomography Myocardial Perfusion Images
by Ahmad Alenezi, Ali Mayya, Mahdi Alajmi, Wegdan Almutairi, Dana Alaradah and Hamad Alhamad
Diagnostics 2024, 14(24), 2865; https://doi.org/10.3390/diagnostics14242865 - 20 Dec 2024
Viewed by 465
Abstract
Background: Myocardial perfusion imaging (MPI) is a type of single-photon emission computed tomography (SPECT) used to evaluate patients with suspected or confirmed coronary artery disease (CAD). Detection and diagnosis of CAD are complex processes requiring precise and accurate image processing. Proper segmentation is critical for accurate diagnosis, but segmentation issues can pose significant challenges, leading to diagnostic difficulties. Machine learning (ML) algorithms have demonstrated superior performance in addressing segmentation problems. Methods: In this study, a deep learning (DL) algorithm, U-Net, was employed to enhance the accuracy of image segmentation in MPI. Data were collected from 1100 patients who underwent MPI studies at Al-Jahra Hospital between 2015 and 2024. To train the U-Net model, 100 studies were segmented by nuclear medicine (NM) experts to create a ground truth (gold-standard coordinates). The dataset was divided into a training set (n = 100 images) and a validation set (n = 900 images). The performance of the U-Net model was evaluated using multiple evaluation metrics, including accuracy, precision, intersection over union (IoU), recall, and F1 score. Results: A dataset of 4560 images and corresponding masks was generated. Both holdout and k-fold (k = 5) validation strategies were applied, utilizing cross-entropy and Dice score as evaluation metrics. The best results were achieved with the holdout split and cross-entropy loss function, yielding a test accuracy of 98.9%, a test IoU of 89.6%, and a test Dice coefficient of 94%. The k-fold validation scenario provided a more balanced true positive and false positive rate. The U-Net segmentation results were comparable to those produced by expert nuclear medicine technologists, with no significant difference (p = 0.1). Conclusions: The findings demonstrate that the U-Net model effectively addresses some segmentation challenges in MPI, facilitating improved diagnosis and analysis of large datasets. Full article
(This article belongs to the Special Issue New Perspectives in Cardiac Imaging)
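The IoU and Dice figures quoted above follow the standard overlap definitions; a minimal NumPy sketch is shown below. The toy masks exist only to show the call and are not from the study. The Dice loss referenced in the k-fold experiments is typically taken as 1 minus this coefficient.

```python
import numpy as np

def dice_and_iou(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient and IoU for binary masks (arrays of 0/1)."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    dice = (2 * inter + eps) / (pred.sum() + true.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

# Toy 2 x 3 masks: Dice = 2*2/(3+3) ≈ 0.67, IoU = 2/4 = 0.50.
pred = np.array([[1, 1, 0], [0, 1, 0]])
true = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_and_iou(pred, true))
```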
Figure 1. The structure of the U-net model represents the input image (original image) and output mask.
Figure 2. The full labeling process. The red circle in image 1 indicates the ground truth region of interest.
Figure 3. The proposed data augmentation operations were applied to a training sample (images are colored using a pseudo color to clarify the differences between these various copies).
Figure 4. Original is the raw data image, ground truth is the mask provided by experts, and random prediction is true positives (TPs), true negatives (TNs), false positives (FPs), and false negatives (FNs) in a random supposed segmentation result of a sample of the utilized dataset. TPs indicate correctly predicted foreground pixels, while TNs indicate correctly predicted background pixels. FPs are background pixels incorrectly predicted as foreground, and FNs are foreground pixels incorrectly predicted as background.
Figure 5. IoU example on a sample of the dataset and its corresponding prediction.
Figure 6. Model performance showing multiple iterations of epochs for the training set (left graph). Model performance increases with multiple iterations of epochs of the validation set (right graph).
Figure 7. Model evaluation metrics for all sets.
Figure 8. Actual example of some random SPECT projection segmentation results via U-Net.
Figure 9. Confusion matrices for model evaluation: true vs. predicted labels. (a) For training data; (b) for test data.
Figure 10. ROC curves illustrating model performance: individual class ROC curves.
Figure 11. Confusion matrix and ROC plot for the extra scenarios: (a) k-fold split scenario, (b) k-fold with Dice loss scenario.
Figure 12. Actual example of SPECT frame segmentation results via U-Net (k-fold and Dice loss experiments). K-fold¹ represents k-fold cross-entropy loss mask, and K-fold² represents Dice loss mask.
Figure 13. Model performance using different loss functions and split concepts (entropy loss, Dice loss, and k-fold split).
Figure 14. An image representing corresponding sinograms (S1 and S2) and various axes of original and U-Net images. Images are vertical long axis (1 and 2, for original and U-Net images, respectively), horizontal long axis (3 and 4, for original and U-Net images, respectively), and short axis (5 and 6, for original and U-Net images, respectively). Original images were processed using Myovation® segmentation while U-Net images were segmented using U-Net code.
Figure 15. The designed GUI: (a) the main design, (b) test example 1, and (c) test example 2.