
Search Results (21,677)

Search Parameters:
Keywords = real-time systems

26 pages, 981 KiB  
Review
State of the Art in Automated Operational Modal Identification: Algorithms, Applications, and Future Perspectives
by Hasan Mostafaei and Mahdi Ghamami
Machines 2025, 13(1), 39; https://doi.org/10.3390/machines13010039 - 9 Jan 2025
Abstract
This paper presents a comprehensive review of automated modal identification techniques, focusing on various established and emerging methods, particularly Stochastic Subspace Identification (SSI). Automated modal identification plays a crucial role in structural health monitoring (SHM) by extracting key modal parameters such as natural frequencies, damping ratios, and mode shapes from vibration data. To address the limitations of traditional manual methods, several approaches have been developed to automate this process. Among these, SSI stands out as one of the most effective time-domain methods due to its robustness in handling noisy environments and closely spaced modes. This review examines SSI-based algorithms, covering essential components such as system identification, noise mode elimination, stabilization diagram interpretation, and clustering techniques for mode identification. Advanced SSI implementations that incorporate real-time recursive estimation, adaptive stabilization criteria, and automated mode selection are also discussed. Additionally, the review covers frequency-domain methods like Frequency Domain Decomposition (FDD) and Enhanced Frequency Domain Decomposition (EFDD), highlighting their application in spectral analysis and modal parameter extraction. Techniques based on machine learning (ML), deep learning (DL), and artificial intelligence (AI) are explored for their ability to automate feature extraction, classification, and decision making in large-scale SHM systems. This review concludes by highlighting the current challenges, such as computational demands and data management, and proposing future directions for research in automated modal analysis to support resilient, sustainable infrastructure.
(This article belongs to the Section Automation and Control Systems)
Figure 1: Summary of the current state of the art in AOMA.
Figure 2: Trend of reviewed papers on AOMA per year.
Figure 3: (a) A Venn diagram illustrating the nested relationships among AI, ML, and DL. AI encompasses ML, while DL is a specialized subset of ML. (b) Data processing in AI systems: a flowchart showing the pathways of input data through three AI systems.
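As a concrete illustration of the frequency-domain methods surveyed above, here is a minimal Frequency Domain Decomposition (FDD) sketch in Python (NumPy and SciPy assumed): the cross-spectral density matrix of a multi-channel record is assembled from Welch cross-spectra, and its first singular value is scanned for modal peaks. This is an editor's sketch of the textbook technique, not code from the review; the two-channel test signal is synthetic.

```python
# Minimal FDD sketch (NumPy/SciPy assumed); an illustration of the textbook
# method named in the review, not the authors' code.
import numpy as np
from scipy import signal

def fdd_first_singular_values(acc, fs, nperseg=1024):
    """acc: (n_channels, n_samples) acceleration record."""
    n_ch = acc.shape[0]
    freqs, G = None, None
    for i in range(n_ch):
        for j in range(n_ch):
            # Welch cross-spectral density between channels i and j
            f, Pij = signal.csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
            if G is None:
                freqs = f
                G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
            G[:, i, j] = Pij
    # First singular value of G(f) at each frequency line; peaks indicate modes
    s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0]
                   for k in range(len(freqs))])
    return freqs, s1

# Synthetic two-channel record with a 2.5 Hz mode buried in noise
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / 256)
acc = np.vstack([np.sin(2 * np.pi * 2.5 * t) + 0.5 * rng.standard_normal(t.size)
                 for _ in range(2)])
freqs, s1 = fdd_first_singular_values(acc, fs=256)
print("dominant peak near", freqs[np.argmax(s1)], "Hz")
```

Automated SSI pipelines replace this manual peak-picking step with stabilization diagrams and clustering, which is precisely the automation the review surveys.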
17 pages, 310 KiB  
Article
AI-Driven Innovations in Tourism: Developing a Hybrid Framework for the Saudi Tourism Sector
by Abdulkareem Alzahrani, Abdullah Alshehri, Maha Alamri and Saad Alqithami
AI 2025, 6(1), 7; https://doi.org/10.3390/ai6010007 - 9 Jan 2025
Abstract
In alignment with Saudi Vision 2030’s strategic objectives to diversify and enhance the tourism sector, this study explores the integration of Artificial Intelligence (AI) in the Al-Baha district, a prime tourist destination in Saudi Arabia. Our research introduces a hybrid AI-based framework that leverages sentiment analysis to assess and enhance tourist satisfaction, capitalizing on data extracted from social media platforms such as YouTube. This framework seeks to improve the quality of tourism experiences and augment the business value within the region. By analyzing sentiments expressed in user-generated content, the proposed AI system provides real-time insights into tourist preferences and experiences, enabling targeted interventions and improvements. The conducted experiments demonstrated the framework’s efficacy in identifying positive, neutral and negative sentiments, with the Multinomial Naive Bayes classifier showing superior performance in terms of precision and recall. These results indicate significant potential for AI to transform tourism practices in Al-Baha, offering enhanced experiences to visitors and driving the economic sustainability of the sector in line with the national vision. This study underscores the transformative potential of AI in refining operational strategies and aligning them with evolving tourist expectations, thereby supporting the broader goals of Saudi Vision 2030 for the tourism industry.
Figure 1: Illustration of one-versus-all SVM classifiers with hyperplanes in a 2D feature space.
Figure 2: Illustration of K-Nearest Neighbors (KNN) classification for sentiment analysis. The test point (yellow square) is classified based on the majority sentiment of its neighbors. Blue circles represent positive sentiment points, while red squares indicate negative sentiment points. The dashed circle marks the boundary of the neighborhood defined by the value of k (e.g., 3 nearest neighbors). The classification is determined by the dominant sentiment among the neighbors within this boundary.
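Since the abstract singles out the Multinomial Naive Bayes classifier, here is a hedged sketch of that step using scikit-learn on a toy corpus. The comments and labels below are invented placeholders, and the paper's YouTube data collection and preprocessing pipeline are not reproduced.

```python
# Multinomial Naive Bayes sentiment sketch (scikit-learn assumed); toy corpus,
# not the study's dataset.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled comments standing in for social media data
texts = ["amazing views and friendly people", "the hotel was dirty and noisy",
         "average trip, nothing special", "loved the mountains of Al-Baha",
         "terrible service, would not return", "it was okay overall"]
labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

# Bag-of-words counts (unigrams and bigrams) feeding a multinomial NB model
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["the weather was lovely and the staff helpful"]))
```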
27 pages, 553 KiB  
Systematic Review
Integrating Artificial Intelligence, Internet of Things, and Sensor-Based Technologies: A Systematic Review of Methodologies in Autism Spectrum Disorder Detection
by Georgios Bouchouras and Konstantinos Kotis
Algorithms 2025, 18(1), 34; https://doi.org/10.3390/a18010034 - 9 Jan 2025
Abstract
This paper presents a systematic review of the emerging applications of artificial intelligence (AI), Internet of Things (IoT), and sensor-based technologies in the diagnosis of autism spectrum disorder (ASD). The integration of these technologies has led to promising advances in identifying unique behavioral, physiological, and neuroanatomical markers associated with ASD. Through an examination of recent studies, we explore how technologies such as wearable sensors, eye-tracking systems, virtual reality environments, neuroimaging, and microbiome analysis contribute to a holistic approach to ASD diagnostics. The analysis reveals how these technologies facilitate non-invasive, real-time assessments across diverse settings, enhancing both diagnostic accuracy and accessibility. The findings underscore the transformative potential of AI-, IoT-, and sensor-based tools in providing personalized and continuous ASD detection, advocating for data-driven approaches that extend beyond traditional methodologies. Ultimately, this review emphasizes the role of technology in improving ASD diagnostic processes, paving the way for targeted and individualized assessments.
Graphical abstract
Figure 1: PRISMA flow diagram for selected papers.
Figure 2: A framework proposed for ASD detection combining new technologies and traditional methods. Figure created using https://whimsical.com (accessed 24 December 2024).
18 pages, 7420 KiB  
Article
LEO-SOP Differential Doppler/INS Tight Integration Method Under Weak Observability
by Lelong Zhao, Ming Lei, Yue Liu, Yiwei Wang, Jian Ge, Xinnian Guo and Zhibo Fang
Electronics 2025, 14(2), 250; https://doi.org/10.3390/electronics14020250 - 9 Jan 2025
Abstract
The utilization of low Earth orbit (LEO) satellites’ signals of opportunity (SOPs) for absolute positioning and navigation in global navigation satellite system (GNSS)-denied environments has emerged as a significant area of research. Among various methodologies, tightly integrated Doppler/inertial navigation system (INS) frameworks present a promising solution for achieving real-time LEO-SOP-based positioning in dynamic scenarios. However, existing integration schemes generally overlook the key characteristics of LEO opportunity signals, including the limited number of visible satellites and the random nature of signal broadcasts. These factors exacerbate the weak observability inherent in LEO-SOP Doppler/INS positioning, resulting in difficulty in obtaining reliable solutions and degraded positioning accuracy. To address these issues, this paper proposes a novel LEO-SOP Doppler/INS tight integration method that incorporates trending information to alleviate the problem of weak observability. The method leverages a parallel filtering structure combining extended Kalman filter (EKF) and Rauch–Tung–Striebel (RTS) smoothing, extracting trend information from the quasi-real-time high-precision RTS filtering results to optimize the EKF positioning solution for the current epoch. This approach effectively avoids the overfitting problem commonly associated with directly using batch data to estimate the current epoch state. The experimental results validate the improved positioning accuracy and robustness of the proposed method.
Figure 1: Overall logical structure of the proposed method.
Figure 2: Number of visible satellites during the simulation experiment.
Figure 3: Comparison of positioning results of the two methods in the simulation experiment.
Figure 4: Positioning deviations of the two methods in the simulation experiment.
Figure 5: Comparison of the estimated velocity results in the simulation experiment.
Figure 6: Comparison of the attitude estimation results in the simulation experiment.
Figure 7: Visible satellites and Doppler measurement information during the on-board experiment: (a) satellite distribution sky map during the experiment; (b) raw Doppler measurements during the experiment.
Figure 8: Comparison of positioning results of the two methods in the on-board experiment.
Figure 9: Variation in the positioning results over time in all directions.
Figure 10: Positioning deviations of the two methods in the on-board experiment.
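To make the EKF/RTS parallel structure concrete, below is a minimal linear Kalman filter with a Rauch–Tung–Striebel backward pass on a toy 1-D constant-velocity model (NumPy assumed). It shows only the filter/smoother mechanics the abstract refers to; the paper's Doppler/INS state model and trend-extraction step are not reproduced.

```python
# Forward Kalman filter plus RTS smoother on a toy constant-velocity model;
# a simplified, linear stand-in for the paper's EKF/RTS structure.
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])        # we observe position only
Q = 0.01 * np.eye(2)              # process noise covariance
R = np.array([[0.5]])             # measurement noise covariance

rng = np.random.default_rng(1)
truth = np.cumsum(np.ones(50))            # unit-velocity trajectory
z = truth + rng.normal(0, 0.7, 50)        # noisy position measurements

x, P = np.zeros(2), np.eye(2)
xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
for zk in z:
    # predict
    x_p, P_p = F @ x, F @ P @ F.T + Q
    xs_p.append(x_p); Ps_p.append(P_p)
    # update with the position measurement
    K = P_p @ H.T @ np.linalg.inv(H @ P_p @ H.T + R)
    x = x_p + K @ (np.array([zk]) - H @ x_p)
    P = (np.eye(2) - K @ H) @ P_p
    xs_f.append(x); Ps_f.append(P)

# RTS backward pass: refine each filtered state with information from the future
xs_s = [xs_f[-1]]
for k in range(len(z) - 2, -1, -1):
    C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
    xs_s.insert(0, xs_f[k] + C @ (xs_s[0] - xs_p[k + 1]))
print("filtered last pos %.2f, smoothed first pos %.2f" % (xs_f[-1][0], xs_s[0][0]))
```

The paper's contribution sits on top of this machinery: trend information extracted from the smoothed (RTS) track is fed back to condition the real-time EKF solution, which a plain smoother alone does not do.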
14 pages, 3165 KiB  
Article
Exploring a Software Framework for Posture Tracking and Haptic Feedback Control: A Virtual Reality-Based Approach for Upper Limb Rehabilitation on the Oculus Quest 2
by Joaquin Dillen, Antonio H. J. Moreira and João L. Vilaça
Sensors 2025, 25(2), 340; https://doi.org/10.3390/s25020340 - 9 Jan 2025
Abstract
Virtual reality (VR) has gained significant attention in various fields including healthcare and industrial applications. Within healthcare, an interesting application of VR can be found in the field of physiotherapy. The conventional methodology for rehabilitating upper limb lesions is often perceived as tedious and uncomfortable. The manual nature of the process, performed by physicians, leaves patients in an environment lacking motivation and engagement. This presents an opportunity for implementing VR as a tool to enhance the rehabilitation process and improve the quality, efficiency, and evolution of recovery. However, physiotherapy often lacks relevant data to track the recovery process effectively, further compounding concerns about its efficacy. To address this, we propose the development of a posture control system using the Oculus Quest 2, a VR device. Our primary objective was to validate the performance aspects of this device and assess its potential as a rehabilitation tool, providing valuable support to healthcare professionals. Through a series of tests, we evaluated the effectiveness of our VR solution by integrating it into specific therapeutic exercises. This approach enhances patient involvement by offering real-time feedback on exercise execution and providing clear instructions for posture correction. The results demonstrate a notable impact on exercise performance, highlighting the feasibility of developing physiotherapeutically adapted solutions utilizing VR technology. By leveraging the Oculus Quest 2 system and the proposed framework, our research contributes to the advancement of VR-based rehabilitation practices. The findings offer valuable insights into the potential benefits of integrating immersive technologies into the field of physiotherapy, empowering healthcare professionals in their treatment approaches.
(This article belongs to the Section Sensing and Imaging)
Figure 1: Experience representation: a visual depiction of the occlusion test. The ten blue spheres were evenly spaced along a circular arc around the headset in the virtual environment, at a distance of approximately 2 m from the headset, inclined 65 degrees both upward and downward from the front relative to the headset’s horizon line. The Polaris Vega was positioned directly in front of the headset at a distance of 2.5 m and a height of 2 m.
Figure 2: Visual representation of the point sequence: illustration of the test execution process, where the headset was sequentially aimed at each sphere from P1 to P10. The sequence followed the predefined arrangement of points, with the Polaris Vega’s origin reference tracker positioned on the headset.
Figure 3: Passive external rotation exercise illustration adapted from the American Academy of Orthopedic Surgeons Rotator Cuff and Shoulder Conditioning Program [30].
Figure 4: Exercise tool representation: adaptation of the passive external rotation exercise using Oculus Quest controllers. The tool consisted of a 60 cm long wooden cylinder with a diameter of 2.2 cm. Two custom base adapters were attached at either end of the cylinder to securely hold the Oculus Quest controllers, enabling their integration into the exercise setup.
Figure 5: Experience representation: on the left, the blue target represents the virtual target used by the posture tracking system. On the right, the red circles indicate the placement of the Polaris passive 4-marker rigid bodies used for tracking within the Polaris Vega; the one placed on the headset serves as the origin reference for the one placed on the chest controller.
Figure 6: Standard deviation comparison: comparison of the standard deviations for each point and coordinate across both systems during the occlusion test. The data are presented separately by axis, with red representing the Polaris Vega system and blue representing the Oculus Quest system.
Figure 7: Standard deviation differential: a comparison of the standard deviation differentials derived from the data collected from 10 participants during the occlusion test. The results are color-coded, with blue representing the Oculus Quest system and red representing the Polaris Vega system.
Figure 8: Posture analysis data from various tests conducted using the tracking framework. Each color corresponds to a specific test subject, illustrating variations in posture across exercises and individuals.
Figure 9: Bidimensional distance between the chest controller pointer and the target center across tests (T2–T7), showing the medians, interquartile ranges, and variability.
10 pages, 2095 KiB  
Article
Stable Field Emissions from Zirconium Carbide Nanoneedle Electron Source
by Yimeng Wu, Jie Tang, Shuai Tang, You-Hu Chen, Ta-Wei Chiu, Masaki Takeguchi, Ayako Hashimoto and Lu-Chang Qin
Nanomaterials 2025, 15(2), 93; https://doi.org/10.3390/nano15020093 - 9 Jan 2025
Abstract
In this study, a single zirconium carbide (ZrC) nanoneedle structure oriented in the <100> direction was fabricated by a dual-beam focused ion beam (FIB-SEM) system, and its field emission characteristics and emission current stability were evaluated. Benefiting from controlled fabrication with real-time observation, the ZrC nanoneedle has a smooth surface and a tip with a radius of curvature smaller than 20 nm and a length greater than 2 μm. Due to its low work function and well-controlled morphology, the ZrC nanoneedle emitter, positioned in a high-vacuum chamber, was able to generate a single and collimated electron beam with a current of 1.2 nA at a turn-on voltage of 210 V, and the current increased to 100 nA when the applied voltage reached 325 V. After the treatment of the nanoneedle tip, the emitter exhibited stable emission for 150 min with a fluctuation of 1.4% and an emission current density as high as 1.4 × 10^10 A m^−2. This work presents an efficient and controllable method for fabricating nanostructures, and this method is applicable to the transition metal compound ZrC as a field emission emitter, demonstrating its potential as an electron source for electron-beam devices.
(This article belongs to the Section Synthesis, Interfaces and Nanostructures)
Figure 1: Schematic of (a) the fabrication process of ZrC nanoneedles using the FIB-SEM system and (b) the experimental setup for the field emission test.
Figure 2: (a) Schematic of the ZrC nanoneedle field emission electron source with hairpin structure. (b) SEM image of the ZrC nanoneedle during Ga-ion milling. (c) SEM image of the ZrC nanoneedle after fabrication was completed. (d) TEM image and electron diffraction pattern (inset) of the sharpened ZrC nanoneedle tip. (e) High-resolution TEM image near the surface region.
Figure 3: Field emission characteristics of the ZrC nanoneedle emitter. (a) I-V curve of field emission and (b) its corresponding F-N plot. (c) FEM pattern of the ZrC nanoneedle with a single emission spot in the axial direction. (d) Field emission intensity following a Gaussian distribution with FWHM of 7.1 mm.
Figure 4: The 30 min field emission stability before (red line) and after (black line) the ZrC nanoneedle emitter stabilized, under emission currents of (a) 3 nA, (b) 10 nA, and (c) 50 nA, with fluctuations of 0.30%, 0.31%, and 0.60%, respectively. (d) Long-term stability with a fluctuation of 1.41% after 2.5 h of measurement.
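For readers unfamiliar with the F-N plot referenced in Figure 3, the sketch below shows the standard linearization: field emission current is plotted as ln(I/V²) against 1/V, where Fowler–Nordheim behavior appears as a straight line with negative slope. The data here are synthetic placeholders shaped like F-N emission, not the measured ZrC values.

```python
# Illustrative Fowler-Nordheim linearization (NumPy assumed); synthetic data.
import numpy as np

V = np.linspace(210, 325, 20)             # applied voltage, volts
I = 1e-3 * V**2 * np.exp(-2500.0 / V)     # synthetic F-N-like current (arbitrary units)

x = 1.0 / V
y = np.log(I / V**2)
slope, intercept = np.polyfit(x, y, 1)    # straight line confirms F-N behaviour
print(f"F-N slope = {slope:.1f} V (negative slope expected)")
```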
23 pages, 6144 KiB  
Article
Based on the Geometric Characteristics of Binocular Imaging for Yarn Remaining Detection
by Ke Le and Yanhong Yuan
Sensors 2025, 25(2), 339; https://doi.org/10.3390/s25020339 - 9 Jan 2025
Abstract
The automated detection of yarn margins is crucial for ensuring the continuity and quality of production in textile workshops. Traditional methods rely on workers visually inspecting the yarn margin to determine the timing of replacement; these methods fail to provide real-time data and cannot meet the precise scheduling requirements of modern production. The complex environmental conditions in textile workshops, combined with the cylindrical shape and repetitive textural features of yarn bobbins, limit the application of traditional visual solutions. Therefore, we propose a visual measurement method based on the geometric characteristics of binocular imaging: First, all contours in the image are extracted, and the distance sequence between each contour and its centroid is computed. This sequence is then matched with a predefined template to identify the contour information of the yarn bobbin. Additionally, four equations for the tangent line from the camera optical center to the edge points of the yarn bobbin contour are established, and the angle bisectors of each pair of tangents are found. By solving the system of equations for these two angle bisectors, their intersection point is determined, giving the radius of the yarn bobbin. This method overcomes the limitations of monocular vision systems, which lack depth information and suffer from size measurement errors due to the insufficient repeat positioning accuracy when patrolling back and forth. Next, to address the self-occlusion issues and matching difficulties during binocular system measurements caused by the yarn bobbin surface’s repetitive texture, an imaging model is established based on the yarn bobbin’s cylindrical characteristics. This avoids pixel-by-pixel matching in binocular vision and enables the accurate measurement of the remaining yarn margin. The experimental data show that the measurement method exhibits high precision within the recommended working distance range, with an average error of only 0.68 mm.
(This article belongs to the Section Sensing and Imaging)
Figure 1: The distribution of weaving machines in a textile workshop: (a) a real textile workshop; (b) the production layout of the textile workshop.
Figure 2: A schematic diagram of binocular stereovision measurement: (a) the principle of binocular triangulation, where P is a point in the world coordinate system, p1 and p2 are the image points on the image planes L and R, and l1 and l2 are the epipolar lines; (b) a basic model of a pinhole camera, in which a length in world coordinates is imaged as a pixel on the imaging plane through the camera’s optical center O, f is the camera focal length, and Z is the distance between the point and the binocular camera; (c) the process of binocular pixel matching.
Figure 3: The monocular camera imaging process, where light rays pass tangent to the cylinder, cross the image plane at points r1 and r2, and converge at the optical center O2. Point O represents the center of the cylinder’s circular cross-section.
Figure 4: Binocular camera imaging process.
Figure 5: The imaging process of the cross-section of the yarn bobbin along a vertical axis. Here, c1 and c2 are the horizontal coordinates of the camera optical centers on the pixel plane; the outer contour points l1, l2, r1, and r2 image the real-space points L1, L2, R1, and R2; f is the camera focal length; and b is the baseline of the binocular camera.
Figure 6: The process of locating the contour of the yarn bobbin using centroid distance: (a) the centroid distance sequence template of the yarn bobbin contour; (b) the centroid distance sequence of the yarn bobbin contour; (c) a schematic of the matching process, where the first row shows the extracted contours, the second row shows the corresponding centroid distance sequences, and the third row shows the results of the cross-correlation function between the extracted centroid distance sequences and the template.
Figure 7: A schematic of epipolar geometry, where P is a point in the world coordinate system, p1 and p2 are the image points on the image planes L and R, and the epipoles e1 and e2 are the intersections of the baseline O1O2 with the image planes L and R. The plane formed by O1, O2, and P is called the epipolar plane: (a) the original epipolar geometry diagram; (b) the epipolar geometry diagram after epipolar rectification.
Figure 8: The contour localization process: (a) original image; (b) contours detected using structured forests; (c) contours after filtering; (d) extracted contour centroids; (e) centroid distance sequence image; (f) image after the localization of yarn bobbin contours.
Figure 9: Measurement results at different distances: (a) a schematic of measurements at different distances for the yarn bobbin, where numbers 1-12 represent the sequential positions on the camera mount and the arrow indicates the direction of camera movement; (b) the measured yarn bobbin samples, from left to right: yarn bobbin 1 and yarn bobbin 2.
Figure 10: Measurement results at different distances: (a) the measurement results of yarn bobbin 1 at different distances; (b) the measurement results of yarn bobbin 2 at different distances.
Figure 11: Measurement results at different angles: (a) a schematic of measurements at different camera positions, where numbers 1-14 represent the sequential positions on the camera mount and the arrow indicates the direction of camera movement; (b) measured yarn bobbin samples, from left to right: yarn bobbin 1, yarn bobbin 2, and yarn bobbin 3; (c) measured sizes at different camera positions.
Figure 12: Yarn bobbin samples, numbered in ascending order of bobbin size: (a) a sample captured in the laboratory; (b) a sample captured in the production workshop.
Figure 13: Measurement results. Points that the binocular vision method failed to match produced excessively large errors and are not displayed in the error bar chart: (a) a comparison of measurement errors in yarn bobbin radius using different methods in a laboratory environment; (b) the same comparison in a production workshop environment.
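The centroid-distance matching step of Figure 6 can be sketched in a few lines of NumPy: each contour is reduced to a normalized sequence of point-to-centroid distances, and a circular cross-correlation against a template scores the match independently of the contour's starting point. The contours below are synthetic stand-ins for the paper's edge-detected bobbin contours, not its actual data.

```python
# Centroid-distance contour matching sketch (NumPy assumed); synthetic shapes.
import numpy as np

def centroid_distance(contour):
    """contour: (N, 2) array of x, y points along a closed contour."""
    c = contour.mean(axis=0)
    d = np.linalg.norm(contour - c, axis=1)
    return (d - d.mean()) / (d.std() + 1e-9)   # normalize for scale invariance

def circular_match_score(seq, template):
    """Best circular cross-correlation between equal-length sequences."""
    # FFT-based circular correlation handles the unknown starting point
    corr = np.fft.ifft(np.fft.fft(seq) * np.conj(np.fft.fft(template))).real
    return corr.max() / len(seq)

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
template = np.c_[np.cos(theta), 0.6 * np.sin(theta)]               # elliptical template
candidate = np.c_[np.cos(theta + 1.0), 0.6 * np.sin(theta + 1.0)]  # same shape, shifted start
lobed = np.c_[np.cos(theta), np.sin(theta)] * (1 + 0.3 * np.cos(4 * theta))[:, None]

t = centroid_distance(template)
print("same shape :", round(circular_match_score(centroid_distance(candidate), t), 3))
print("other shape:", round(circular_match_score(centroid_distance(lobed), t), 3))
```

A matching shape scores near 1.0 regardless of where its contour tracing starts, while a different shape scores much lower, which is what makes the sequence usable as a bobbin detector.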
17 pages, 2037 KiB  
Article
Application of Deep Learning to Identify Flutter Flight Testing Signals Parameters and Analysis of Real F-18 Flutter Flight Test Data
by Sami Abou-Kebeh, Roberto Gil-Pita and Manuel Rosa-Zurera
Aerospace 2025, 12(1), 34; https://doi.org/10.3390/aerospace12010034 - 9 Jan 2025
Abstract
Aircraft envelope expansion during the installation of new underwing stores presents significant challenges, particularly due to the aeroelastic flutter phenomenon. Accurate modeling of aeroelastic behavior often necessitates flight testing, which poses risks due to the potential catastrophic consequences of reaching the flutter point. Traditional methods, like frequency sweeps, are effective but require prolonged exposure to flutter conditions, making them less suitable for transonic flight validations. This paper introduces a robust deep learning approach to process sine dwell signals from aeroelastic flutter flight tests, characterized by short data lengths (less than 5 s) and low frequencies (less than 10 Hz). We explore the preliminary viability of different deep learning networks and compare their performances to existing methods such as the PRESTO algorithm and Laplace Wavelet Matching Pursuit estimation. Deep learning algorithms demonstrate substantial accuracy and robustness, providing reliable parameter identification for flutter analysis while significantly reducing the time spent near flutter conditions. Although the trained networks are less accurate than the PRESTO algorithm, they are more accurate than the Laplace Wavelet estimation, and the results are promising enough to justify extended investigation in this area. This approach is validated using both synthetic data and real F-18 flight test signals, which highlights its potential for real-time analysis and broader applicability in aeroelastic testing.
(This article belongs to the Special Issue Recent Advances in Flight Testing)
Figure 1: Diagram of data preparation for MLP and DNN networks.
Figure 2: Construction of the input matrix for CNN processing. Once the time series dataset is transformed into a complex frequency spectrum, each point (shown as a blue vector) is represented in the basis B (red vectors) and projected onto the vector space V (black vectors). Note that the unit vectors of B are a subset of V.
Figure 3: Multi-layer perceptron sample network diagram. This example depicts an MLP with 20 neurons in the hidden layer.
Figure 4: DNN sample. In this case, a DNN with one input layer, three hidden layers, and one output layer is depicted. Each hidden layer has 40 neurons.
Figure 5: Sample CNN. This example illustrates a CNN with one input layer, two convolutional layers, one connection layer, and one regression layer. Each convolutional layer employs a different number of neurons and convolutional processes.
Figure 6: Error histograms for frequency and damping for PRESTO, Laplace Wavelet Matching Pursuit, CNN 100 × 6, and DNN 100 × 100 × 100, chosen as sample methods. The other neural network methods exhibit a similar distribution to those depicted here. The damping plot for the PRESTO method was truncated at −6 relative damping due to the tail extending to −35.
Figure 7: Scatter plot comparing the regression curves of real data and synthetic reconstructed data for Laplace Wavelet Matching Pursuit, PRESTO, and a CNN 100 × 6 taken as a sample method. The horizontal axis shows the normalized original time series data, while the vertical axis represents the respective reconstructed normalized signals. The red line represents the linear regression curve.
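As a simple baseline for the identification task described above, the sketch below fits an exponentially damped sinusoid to a short, low-frequency record with SciPy's curve_fit, recovering frequency and damping ratio. It is an editor's stand-in under the paper's signal regime (under 5 s, under 10 Hz), not PRESTO, the Laplace wavelet estimator, or the trained networks.

```python
# Damped-sinusoid fit for frequency/damping identification (NumPy/SciPy assumed).
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A, f, zeta, phi):
    w = 2 * np.pi * f
    # free decay of an underdamped single-degree-of-freedom response
    return A * np.exp(-zeta * w * t) * np.sin(w * np.sqrt(1 - zeta**2) * t + phi)

fs, dur = 200.0, 4.0                       # short record, as in the paper's regime
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)
y = damped_sine(t, 1.0, 6.5, 0.03, 0.4) + 0.05 * rng.standard_normal(t.size)

p0 = [1.0, 6.0, 0.05, 0.0]                 # rough initial guess
(A, f, zeta, phi), _ = curve_fit(damped_sine, t, y, p0=p0)
print(f"f = {f:.2f} Hz, damping ratio = {zeta:.3f}")
```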
21 pages, 702 KiB  
Review
The Role of Artificial Intelligence and Emerging Technologies in Advancing Total Hip Arthroplasty
by Luca Andriollo, Aurelio Picchi, Giulio Iademarco, Andrea Fidanza, Loris Perticarini, Stefano Marco Paolo Rossi, Giandomenico Logroscino and Francesco Benazzo
J. Pers. Med. 2025, 15(1), 21; https://doi.org/10.3390/jpm15010021 - 9 Jan 2025
Abstract
Total hip arthroplasty (THA) is a widely performed surgical procedure that has evolved significantly due to advancements in artificial intelligence (AI) and robotics. As demand for THA grows, reliable tools are essential to enhance diagnosis, preoperative planning, surgical precision, and postoperative rehabilitation. AI applications in orthopedic surgery offer innovative solutions, including automated hip osteoarthritis (OA) diagnosis, precise implant positioning, and personalized risk stratification, thereby improving patient outcomes. Deep learning models have transformed OA severity grading and implant identification by automating traditionally manual processes with high accuracy. Additionally, AI-powered systems optimize preoperative planning by predicting the hip joint center and identifying complications using multimodal data. Robotic-assisted THA enhances surgical precision with real-time feedback, reducing complications such as dislocations and leg length discrepancies while accelerating recovery. Despite these advancements, barriers such as cost, accessibility, and the steep learning curve for surgeons hinder widespread adoption. Postoperative rehabilitation benefits from technologies like virtual and augmented reality and telemedicine, which enhance patient engagement and adherence. However, limitations, particularly among elderly populations with lower adaptability to technology, underscore the need for user-friendly platforms. To ensure comprehensiveness, a structured literature search was conducted using PubMed, Scopus, and Web of Science. Keywords included “artificial intelligence”, “machine learning”, “robotics”, and “total hip arthroplasty”. Inclusion criteria emphasized peer-reviewed studies published in English within the last decade focusing on technological advancements and clinical outcomes. This review evaluates AI and robotics’ role in THA, highlighting opportunities and challenges and emphasizing further research and real-world validation to integrate these technologies into clinical practice effectively.
Figure 1: Clinical application workflow of key artificial intelligence tools and new technologies (OA: osteoarthritis; ML: machine learning; DL: deep learning; THA: total hip arthroplasty).
25 pages, 4764 KiB  
Article
Leveraging Deep Learning for Real-Time Coffee Leaf Disease Identification
by Opeyemi Adelaja and Bernardi Pranggono
AgriEngineering 2025, 7(1), 13; https://doi.org/10.3390/agriengineering7010013 - 8 Jan 2025
Abstract
Agriculture is vital for providing food and economic benefits, but crop diseases pose significant challenges, including coffee cultivation. Traditional methods for disease identification are labor-intensive and lack real-time capabilities. This study aims to address existing methods’ limitations and provide a more efficient, reliable, and cost-effective solution for coffee leaf disease identification. It presents a novel approach to the real-time identification of coffee leaf diseases using deep learning. We implemented several transfer learning (TL) models, including ResNet101, Xception, CoffNet, and VGG16, to evaluate the feasibility and reliability of our solution. The experiment results show that the proposed models achieved high accuracy rates of 97.30%, 97.60%, 97.88%, and 99.89%, respectively. CoffNet, our proposed model, showed a notable processing speed of 125.93 frames per second (fps), making it suitable for real-time applications. Using a diverse dataset of mixed images from multiple devices, our approach reduces the workload of farmers and simplifies the disease detection process. The findings lay the groundwork for the development of practical and efficient systems that can assist coffee growers in disease management, promoting sustainable farming practices, and food security.
(This article belongs to the Special Issue The Future of Artificial Intelligence in Agriculture)
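A minimal transfer-learning setup in the spirit of the VGG16 baseline above, sketched with PyTorch/torchvision: the pretrained convolutional features are frozen and only a new classification head is trained. The class count, frozen layers, and training step are illustrative assumptions, not the paper's recipe, and the CoffNet architecture itself is not reproduced.

```python
# Transfer-learning sketch (PyTorch/torchvision assumed); illustrative setup only.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5                    # e.g., healthy + four coffee leaf diseases (assumed)

model = models.vgg16(weights="DEFAULT")              # ImageNet-pretrained backbone
for p in model.features.parameters():
    p.requires_grad = False                          # freeze convolutional features
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)   # new head for leaf classes

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (3-channel 224x224 images)
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("step loss:", float(loss))
```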
17 pages, 19075 KiB  
Article
A Channel Attention-Driven Optimized CNN for Efficient Early Detection of Plant Diseases in Resource Constrained Environment
by Sana Parez, Naqqash Dilshad and Jong Weon Lee
Agriculture 2025, 15(2), 127; https://doi.org/10.3390/agriculture15020127 - 8 Jan 2025
Abstract
Agriculture is a cornerstone of economic prosperity, but plant diseases can severely impact crop yield and quality. Identifying these diseases accurately is often difficult due to limited expert availability and ambiguous information. Early detection and automated diagnosis systems are crucial to mitigate these challenges. To address this, we propose a lightweight convolutional neural network (CNN) designed for resource-constrained devices, termed LeafNet. LeafNet draws inspiration from the block-wise VGG19 architecture but incorporates several optimizations, including a reduced number of parameters, smaller input size, and faster inference time while maintaining competitive accuracy. The proposed LeafNet leverages small, uniform convolutional filters to capture fine-grained details of plant disease features, with an increasing number of channels to enhance feature extraction. Additionally, it integrates channel attention mechanisms to prioritize disease-related features effectively. We evaluated the proposed method on four datasets: the benchmark plant village (PV), the data repository of leaf images (DRLIs), the newly curated plant composite (PC) dataset, and the BARI Sunflower (BARI-Sun) dataset, which includes diverse and challenging real-world images. The results show that the proposed LeafNet performs comparably to state-of-the-art methods in terms of accuracy, false positive rate (FPR), model size, and runtime, highlighting its potential for real-world applications.
Figure 1: The proposed optimized LeafNet for efficient plant disease detection.
Figure 2: The channel attention module, made up of global average pooling (GAP), two fully connected layers, and a multiplication operation; this module can re-calibrate the input feature maps.
Figure 3: Confusion matrices of the proposed LeafNet for every dataset in the experiment: (a) PV; (b) DRLI; (c) PC; (d) BARI-Sun.
Figure 4: The accuracy and loss of the proposed LeafNet method during training and validation on the PC dataset: (a) accuracy; (b) loss.
Figure 5: Qualitative evaluation of LeafNet on the included datasets. Accurate predictions of the input images are highlighted in blue, while red marks the inaccurate ones.
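The channel attention module of Figure 2 (global average pooling, two fully connected layers, and a channel-wise multiplication) closely resembles a squeeze-and-excitation block. The PyTorch sketch below illustrates that pattern; the reduction ratio and layer sizes are assumptions, not LeafNet's exact values.

```python
# Channel attention sketch in the squeeze-and-excitation style (PyTorch assumed).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # GAP: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # re-calibrate the feature maps

feat = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feat).shape)               # torch.Size([2, 64, 32, 32])
```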
12 pages, 1457 KiB  
Article
Machine Learning Models for the Early Real-Time Prediction of Deterioration in Intensive Care Units—A Novel Approach to the Early Identification of High-Risk Patients
by Dominik Thiele, Reitze Rodseth, Richard Friedland, Fabian Berger, Chris Mathew, Caroline Maslo, Vanessa Moll, Christoph Leithner, Christian Storm, Alexander Krannich and Jens Nee
J. Clin. Med. 2025, 14(2), 350; https://doi.org/10.3390/jcm14020350 - 8 Jan 2025
Abstract
Background: Predictive machine learning models have made use of a variety of scoring systems to identify clinical deterioration in ICU patients. However, most of these scores include variables that are dependent on medical staff examining the patient. We present the development of a real-time prediction model using clinical variables that are digital and automatically generated for the early detection of patients at risk of deterioration. Methods: Routine monitoring data were used in this analysis. ICU patients with at least 24 h of vital sign recordings were included. Deterioration was defined as qSOFA ≥ 2. Model development and validation were performed internally by splitting the cohort into training and test datasets and validating the results on the test dataset. Five different models were trained, tested, and compared against each other: an artificial neural network (ANN), a random forest (RF), a support vector machine (SVM), a linear discriminant analysis (LDA), and a logistic regression (LR). Results: In total, 7156 ICU patients were screened for inclusion in the study, which resulted in models trained from a total of 28,348 longitudinal measurements. The artificial neural network showed a superior predictive performance for deterioration, with an area under the curve of 0.81 over 0.78 (RF), 0.78 (SVM), 0.77 (LDA), and 0.76 (LR), by using only four vital parameters. The sensitivity was higher than the specificity for the artificial neural network. Conclusions: The artificial neural network, only using four automatically recorded vital signs, was best able to predict deterioration, 10 h before documentation in clinical records. This real-time prediction model has the potential to flag at-risk patients to the healthcare providers treating them, for closer monitoring and further investigation.
(This article belongs to the Section Intensive Care)
Figure 1: Schematic representation of the data infrastructure and the final best model. Data are transferred from the ICU to a database and then mirrored into an anonymized database. The different models are trained in an RStudio Server environment with access to the anonymized database. The final best model is an artificial neural network (ANN). Abbreviations: sbp, systolic blood pressure; dbp, diastolic blood pressure; hr, heart rate; spo2, peripheral oxygen saturation; ANN, artificial neural network; ICU, intensive care unit.
Figure 2: Exemplary presentation of the sliding median window of a patient's heart rate. The two-hour window (red box) is shifted over the course of the measurements; within the window, the median heart rate is calculated to smooth the curve. The gray curve shows the smoothed signal, the yellow curve the original heart rate, and the red bar the timepoint at which the patient deteriorates.
Figure 3: Exemplary presentation of the course of vital signs in one patient with qSOFA ≥ 2. The prediction time period is marked in gray, and the timepoint when qSOFA ≥ 2 is marked in red (dbp, diastolic blood pressure; hr, heart rate; mbp, mean arterial blood pressure; rr, respiratory rate; spo2, peripheral capillary oxygen saturation; sbp, systolic blood pressure).
Figure 4: Receiver operating characteristic (ROC) curves of the model comparison 10 h before deterioration.
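The two-hour sliding median of Figure 2 is straightforward to reproduce in pandas; the sketch below applies it to a synthetic minutely heart-rate series (the study's clinical data are, of course, not reproduced, and the window parameters here are taken from the figure caption).

```python
# Two-hour sliding median smoothing of a vital sign (NumPy/pandas assumed).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
idx = pd.date_range("2024-01-01", periods=24 * 60, freq="min")   # one day, minutely
hr = pd.Series(80 + 5 * np.sin(np.arange(idx.size) / 90) +
               rng.normal(0, 4, idx.size), index=idx, name="heart_rate")

# Time-based rolling window: the median suppresses short noise spikes
hr_smooth = hr.rolling("2h", min_periods=30).median()
print(hr_smooth.tail(3))
```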
31 pages, 6635 KiB  
Article
Optimization of Multi-Vehicle Cold Chain Logistics Distribution Paths Considering Traffic Congestion
by Zhijiang Lu, Kai Wu, E Bai and Zhengning Li
Symmetry 2025, 17(1), 89; https://doi.org/10.3390/sym17010089 - 8 Jan 2025
Abstract
Urban road traffic congestion has become a serious issue for cold chain logistics in terms of delivery time, distribution cost, product freshness, and even organization revenue and reputation. This study focuses on the cold chain distribution path by considering road traffic congestion with transportation, real-time vehicle delivery speeds, and multiple-vehicle conditions. Therefore, a vehicle routing optimization model has been established with the objectives of minimizing costs, reducing carbon emissions, and maintaining cargo freshness, and a multi-objective hybrid genetic algorithm has been developed in combination with large neighborhood search (LNSNSGA-III) for leveraging strong local search capabilities, optimizing delivery routes, and enhancing delivery efficiency. Moreover, by reasonably adjusting departure times, product freshness can be effectively enhanced. The vehicle combination strategy performs well across multiple indicators, particularly the three-type vehicle strategy. The results show that costs and carbon emissions are influenced by environmental and refrigeration temperature factors, providing a theoretical basis for cold chain management. This study highlights the harmonious optimization of cold chain coordination, balancing multiple constraints, ensuring efficient logistic system operation, and maintaining equilibrium across all dimensions, all of which reflect the concept of symmetry. In practice, these research findings can be applied to urban traffic management, delivery optimization, and cold chain logistics control to improve delivery efficiency, minimize operational costs, reduce carbon emissions, and enhance corporate competitiveness and customer satisfaction. Future research should focus on integrating complex traffic and real-time data to enhance algorithm adaptability and explore customized delivery strategies, thereby achieving more efficient and environmentally friendly logistics solutions.
(This article belongs to the Special Issue Symmetry in Civil Transportation Engineering)
Figure 1: Service process of the distribution center.
Figure 2: Relationship between vehicle speed and TCC.
Figure 3: Sub-region set and distance set.
Figure 4: Chromosome encoding principles.
Figure 5: Reference points illustration.
Figure 6: Chromosome coding diagram.
Figure 7: LNSNSGA-III algorithm flowchart.
Figure 8: TCC-S display.
Figure 9: Comparison of two-dimensional Pareto frontiers for the four algorithms.
Figure 10: Comparison of three-dimensional Pareto frontiers for the four algorithms.
Figure 11: Delivery paths before and after optimization.
Figure 12: Delivery scheme without adjusting vehicle departure time.
Figure 13: Delivery scheme after adjusting vehicle departure time.
Figure 14: Multi-vehicle model delivery route map.
Figure 15: Freshness variation with T0 under different T*.
Figure 16: Cost and carbon emissions variation with T0 under different T*.
30 pages, 2076 KiB  
Article
Real-Time Detection, Evaluation, and Mapping of Crowd Panic Emergencies Based on Geo-Biometrical Data and Machine Learning
by Ilias Lazarou, Anastasios L. Kesidis and Andreas Tsatsaris
Digital 2025, 5(1), 2; https://doi.org/10.3390/digital5010002 - 8 Jan 2025
Abstract
Crowd panic emergencies can pose serious risks to public safety, and effective detection and mapping of such events are crucial for rapid response and mitigation. In this paper, we propose a real-time system for detecting and mapping crowd panic emergencies based on machine learning and georeferenced biometric data from wearable devices and smartphones. The system uses a Gaussian SVM machine learning classifier to predict whether a person is stressed or not and then performs real-time spatial analysis to monitor the movement of stressed individuals. To further enhance emergency detection and response, we introduce the concept of CLOT (Classifier Confidence Level Over Time) as a parameter that influences the system’s noise filtering and detection speed. Concurrently, we introduce a newly developed metric called DEI (Domino Effect Index). The DEI is designed to assess the severity of panic-induced crowd behavior by considering factors such as the rate of panic transmission, density of panicked people, and alignment with the road network. This metric offers immeasurable benefits by assessing the magnitude of the cascading impact, enabling emergency responders to quickly determine the severity of the event and take necessary actions to prevent its escalation. Based on individuals’ trajectories and adjacency, the system produces dynamic areas that represent the development of the phenomenon’s spatial extent in real time. The results show that the proposed system is effective in detecting and mapping crowd panic emergencies in real time. The system generates three types of dynamic areas: a dynamic Crowd Panic Area based on the initial stressed locations of the persons, a dynamic Crowd Panic Area based on the current stressed locations of the persons, and the dynamic geometric difference between these two. These areas provide emergency responders with a real-time understanding of the extent and development of the crowd panic emergency, allowing for a more targeted and effective response. By incorporating the CLOT and the DEI, emergency responders can better understand crowd behavior and develop more effective response strategies to mitigate the risks associated with panic-induced crowd movements. In conclusion, our proposed system, enhanced by the incorporation of these two new metrics, proves to be a dependable and efficient tool for detecting, mapping, and assessing the severity of crowd panic emergencies, leading to a more efficient response and ultimately safeguarding public safety.
(This article belongs to the Special Issue Hybrid Artificial Intelligence for Systems and Applications)
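As a hedged illustration of the Gaussian (RBF-kernel) SVM stress classifier named in the abstract, the scikit-learn sketch below trains on synthetic heart-rate and electrodermal-activity features; the feature choice and value ranges are assumptions, not the paper's wearable pipeline.

```python
# RBF-kernel SVM stress classifier sketch (NumPy/scikit-learn assumed);
# synthetic biometric features, not the paper's wearable data.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
calm = np.c_[rng.normal(70, 6, 200), rng.normal(2.0, 0.5, 200)]      # [HR bpm, EDA uS]
stressed = np.c_[rng.normal(105, 10, 200), rng.normal(6.0, 1.2, 200)]
X = np.vstack([calm, stressed])
y = np.r_[np.zeros(200), np.ones(200)]                               # 1 = stressed

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
print("P(stressed):", clf.predict_proba([[98, 5.1]])[0, 1].round(2))
```

A probability output of this kind is also what makes a confidence-over-time parameter like CLOT natural: sustained high confidence, rather than a single prediction, triggers the alarm.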
29 pages, 4271 KiB  
Article
Maximum Mixture Correntropy Criterion-Based Variational Bayesian Adaptive Kalman Filter for INS/UWB/GNSS-RTK Integrated Positioning
by Sen Wang, Peipei Dai, Tianhe Xu, Wenfeng Nie, Yangzi Cong, Jianping Xing and Fan Gao
Remote Sens. 2025, 17(2), 207; https://doi.org/10.3390/rs17020207 - 8 Jan 2025
Abstract
The safe operation of unmanned ground vehicles (UGVs) demands fundamental and essential requirements for continuous and reliable positioning performance. Traditional coupled navigation systems, combining the global navigation satellite system (GNSS) with an inertial navigation system (INS), provide continuous, drift-free position estimation. However, challenges like GNSS signal interference and blockage in complex scenarios can significantly degrade system performance. Moreover, ultra-wideband (UWB) technology, known for its high precision, is increasingly used as a complementary system to the GNSS. To tackle these challenges, this paper proposes a novel tightly coupled INS/UWB/GNSS-RTK integrated positioning system framework, leveraging a variational Bayesian adaptive Kalman filter based on the maximum mixture correntropy criterion. This framework is introduced to provide a high-precision and robust navigation solution. By incorporating the maximum mixture correntropy criterion, the system effectively mitigates interference from anomalous measurements. Simultaneously, variational Bayesian estimation is employed to adaptively adjust noise statistical characteristics, thereby enhancing the robustness and accuracy of the integrated system’s state estimation. Furthermore, sensor measurements are tightly integrated with the inertial measurement unit (IMU), facilitating precise positioning even in the presence of interference from multiple signal sources. A series of real-world and simulation experiments were carried out on a UGV to assess the proposed approach’s performance. Experimental results demonstrate that the approach provides superior accuracy and stability in integrated system state estimation, significantly mitigating position drift error caused by uncertainty-induced disturbances. In the presence of non-Gaussian noise disturbances introduced by anomalous measurements, the proposed approach effectively implements error control, demonstrating substantial advantages in positioning accuracy and robustness.
(This article belongs to the Topic Multi-Sensor Integrated Navigation Systems)
Figure 1: Message exchange process in the DS-TWR.
Figure 2: Schematic diagram of multilateration positioning.
Figure 3: Overview of the TC INS/UWB/GNSS-RTK integrated positioning system.
Figure 4: Flowchart of the MMCC-based VBAKF algorithm.
Figure 5: Overview of the UGV equipment and reference trajectory: (a) experimental data collection platform; (b) top view of reference trajectory.
Figure 6: Positioning error sequences in the ENU directions for various solution strategies in Case 1.
Figure 7: Overview of estimated position trajectories for various solution strategies in Case 1.
Figure 8: CDF curves of horizontal positioning errors for various solution strategies in Case 1.
Figure 9: Distribution of horizontal positioning errors for various solution strategies in Case 1.
Figure 10: Improvement percentage of the proposed MMCC-VBAKF in Case 1.
Figure 11: Positioning error sequences in the ENU directions for various solution strategies in Case 2.
Figure 12: Overview of estimated position trajectories for various solution strategies in Case 2.
Figure 13: CDF curves of horizontal positioning errors for various solution strategies in Case 2.
Figure 14: Distribution of horizontal positioning errors for various solution strategies in Case 2.
Figure 15: Improvement percentage of the proposed MMCC-VBAKF in Case 2.
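The mechanism by which the maximum mixture correntropy criterion suppresses anomalous measurements can be seen in a few lines: the measurement innovation is weighted by a mixture of Gaussian kernels, so large residuals receive small weights. The kernel bandwidths and mixture coefficient below are illustrative assumptions, not the paper's tuned values, and the full VBAKF update is not reproduced.

```python
# Mixture-correntropy innovation weighting sketch (NumPy assumed).
import numpy as np

def mixture_correntropy_weight(residual, sigma1=1.0, sigma2=5.0, alpha=0.5):
    """Weight in (0, 1]: a mixture of two Gaussian kernels of the residual."""
    g1 = np.exp(-residual**2 / (2 * sigma1**2))   # narrow kernel: strict on outliers
    g2 = np.exp(-residual**2 / (2 * sigma2**2))   # wide kernel: tolerant of moderate noise
    return alpha * g1 + (1 - alpha) * g2

for r in (0.1, 1.0, 5.0, 20.0):
    print(f"residual {r:5.1f} -> weight {mixture_correntropy_weight(r):.3f}")
```

In a correntropy-based Kalman update, such weights effectively inflate the measurement noise covariance for suspect observations, which is how the filter above remains robust under non-Gaussian disturbances.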