Search Results (3,269)

Search Parameters:
Keywords = sensor fusion

24 pages, 3395 KiB  
Article
Drone-Based Wildfire Detection with Multi-Sensor Integration
by Akmalbek Abdusalomov, Sabina Umirzakova, Makhkamov Bakhtiyor Shukhratovich, Mukhriddin Mukhiddinov, Azamat Kakhorov, Abror Buriboev and Heung Seok Jeon
Remote Sens. 2024, 16(24), 4651; https://doi.org/10.3390/rs16244651 (registering DOI) - 12 Dec 2024
Viewed by 299
Abstract
Wildfires pose a severe threat to ecological systems, human life, and infrastructure, making early detection critical for timely intervention. Traditional fire detection systems rely heavily on single-sensor approaches and are often hindered by environmental conditions such as smoke, fog, or nighttime scenarios. This paper proposes Adaptive Multi-Sensor Oriented Object Detection with Space–Frequency Selective Convolution (AMSO-SFS), a novel deep learning-based model optimized for drone-based wildfire and smoke detection. AMSO-SFS combines optical, infrared, and Synthetic Aperture Radar (SAR) data to detect fire and smoke under varied visibility conditions. The model introduces a Space–Frequency Selective Convolution (SFS-Conv) module to enhance the discriminative capacity of features in both spatial and frequency domains. Furthermore, AMSO-SFS utilizes weakly supervised learning and adaptive scale and angle detection to identify fire and smoke regions with minimal labeled data. Extensive experiments show that the proposed model outperforms current state-of-the-art (SoTA) models, achieving robust detection performance while maintaining computational efficiency, making it suitable for real-time drone deployment. Full article
Figures

Figure 1: Overall architecture of the AMSO-SFS model for drone-based wildfire and smoke detection. The model leverages data from three distinct sensor modalities: optical, IR, and SAR. These inputs are preprocessed and aligned to ensure they represent the same scene before being fed into the multi-sensor fusion module, a critical component of the architecture where complementary information from the optical, IR, and SAR data is combined.
Figure 2: Design and functionality of the SPU and FPU, key components of the SFS-Conv module. These units work together to enhance feature extraction by capturing complementary spatial and frequency-domain information. The SPU dynamically adjusts its receptive field to capture multi-scale spatial features, employing kernels of varying sizes that adapt to the scale of the detected objects, such as small ignition points or extensive smoke plumes.
Figure 3: The CSU, which adaptively fuses the spatial and frequency-domain features extracted by the SPU and the FPU, ensuring that only the most informative features are retained for wildfire and smoke detection. The CSU calculates channel-wise attention scores for the spatial and frequency feature maps; the scores are derived using learned weights and a sigmoid activation function, which assigns importance to each feature channel based on its contribution to the detection task.
Figure 4: A FLAME dataset example of wildfire and smoke captured by three different sensor types: optical, IR, and SAR.
Figure 5: Robustness of the detection model in identifying fire and smoke under different environmental conditions, including dense smoke, snow, and varying distances. The bounding boxes and confidence scores validate the accuracy of the system in these challenging scenarios.
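The channel-wise fusion described for the CSU in Figure 3 (per-channel attention scores from learned weights and a sigmoid, used to weight the spatial and frequency feature maps) can be illustrated with a short sketch. This is a minimal, hypothetical interpretation for illustration only; the module name, layer sizes, and fusion rule are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelSelectiveFusion(nn.Module):
    """Illustrative channel-attention fusion of a spatial and a frequency feature map."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)               # per-channel descriptor
        self.fc = nn.Sequential(                          # learned weights
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                 # attention scores in (0, 1)
        )

    def forward(self, f_spatial, f_freq):
        b, c, _, _ = f_spatial.shape
        desc = torch.cat([self.pool(f_spatial), self.pool(f_freq)], dim=1).view(b, 2 * c)
        alpha = self.fc(desc).view(b, c, 1, 1)            # per-channel weight
        return alpha * f_spatial + (1.0 - alpha) * f_freq # weighted fusion of the two branches

# usage on dummy feature maps
fuse = ChannelSelectiveFusion(channels=64)
out = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```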
21 pages, 20775 KiB  
Article
Sensor Fusion Method for Object Detection and Distance Estimation in Assisted Driving Applications
by Stefano Favelli, Meng Xie and Andrea Tonoli
Sensors 2024, 24(24), 7895; https://doi.org/10.3390/s24247895 - 10 Dec 2024
Viewed by 310
Abstract
The fusion of multiple sensors’ data in real-time is a crucial process for autonomous and assisted driving, where high-level controllers need classification of objects in the surroundings and estimation of relative positions. This paper presents an open-source framework to estimate the distance between a vehicle equipped with sensors and different road objects on its path using the fusion of data from cameras, radars, and LiDARs. The target application is an Advanced Driving Assistance System (ADAS) that benefits from the integration of the sensors’ attributes to plan the vehicle’s speed according to real-time road occupation and distance from obstacles. Based on geometrical projection, a low-level sensor fusion approach is proposed to map 3D point clouds into 2D camera images. The fusion information is used to estimate the distance of objects detected and labeled by a Yolov7 detector. The open-source pipeline implemented in ROS consists of a sensors’ calibration method, a Yolov7 detector, 3D point cloud downsampling and clustering, and finally a 3D-to-2D transformation between the reference frames. The goal of the pipeline is to perform data association and estimate the distance of the identified road objects. The accuracy and performance are evaluated in real-world urban scenarios with commercial hardware. The pipeline running on an embedded Nvidia Jetson AGX achieves good accuracy on object identification and distance estimation, running at 5 Hz. The proposed framework introduces a flexible and resource-efficient method for data association from common automotive sensors and proves to be a promising solution for enabling effective environment perception ability for assisted driving. Full article
(This article belongs to the Special Issue Sensors and Sensor Fusion Technology in Autonomous Vehicles)
Figures

Figure 1: ApproximateTime policy graphical representation with four data streams. The red dots represent the pivot data points used to synchronize the different messages into one sample.
Figure 2: Overview of the sensor fusion pipeline with the sensors proposed for the experiments.
Figure 3: Hesai PandarXT-32 LiDAR and Stereolabs ZED2 camera integrated sensor setup.
Figure 4: LiDAR and camera mounting positions on the vehicle used for on-road data acquisition.
Figure 5: Extrinsic parameters with standard deviation from the calibration procedure [29].
Figure 6: Distance measure evaluation in the parking lot scenario with a target vehicle.
Figure 7: Distance measure evaluation in the parking lot scenario with a yield sign target.
Figure 8: Parking lot scenario point cloud visualization (left: vehicle, right: yield sign).
Figure 9: Comparison of distance estimation methods on vehicle detection in the parking lot.
Figure 10: Comparison of distance estimation methods on yield sign detection in the parking lot.
Figure 11: Longitudinal and lateral distance estimation performance. The blue data points represent the longitudinal distance estimation, while the red points represent the lateral one.
Figure 12: Test performed on a car-following scenario on a suburban road.
Figure 13: Detection in the car-following scenario with filtering on the lateral coordinate.
Figure 14: Relative distance acquisition: raw data in blue and filtered data in red.
Figure 15: Single-lane road in a neighborhood.
Figure 16: Two-lane road without a traffic light.
Figure 17: Three-lane road with a traffic light.
Figure 18: Projected 3D point clouds on 2D camera images.
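The core geometric step of this pipeline, mapping 3D LiDAR points into the 2D camera image through the extrinsic and intrinsic calibration (Figure 18), can be sketched as a standard pinhole projection. The calibration matrices below are placeholders; this is a generic illustration, not the authors' ROS implementation.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K, image_shape):
    """Project Nx3 LiDAR points into pixel coordinates of a pinhole camera.

    points_lidar : (N, 3) points in the LiDAR frame
    T_cam_lidar  : (4, 4) extrinsic transform, LiDAR frame -> camera frame
    K            : (3, 3) camera intrinsic matrix
    image_shape  : (height, width) of the camera image
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # transform into the camera frame
    in_front = pts_cam[:, 2] > 0.1                       # keep only points ahead of the camera
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                          # perspective divide
    h, w = image_shape
    in_view = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[in_view], pts_cam[in_view, 2]              # pixel coordinates and depths

# usage with placeholder calibration values
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
T = np.eye(4)                                            # identity extrinsics for the sketch
pixels, depths = project_lidar_to_image(np.random.randn(1000, 3) + [0.0, 0.0, 10.0], T, K, (720, 1280))
print(pixels.shape, depths.shape)
```

The depths of the points that fall inside a detector bounding box are what a pipeline like this would aggregate (for example, by clustering) into a per-object distance estimate.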
12 pages, 1136 KiB  
Article
Research on GNSS/IMU/Visual Fusion Positioning Based on Adaptive Filtering
by Ao Liu, Hang Guo, Min Yu, Jian Xiong, Huiyang Liu and Pengfei Xie
Appl. Sci. 2024, 14(24), 11507; https://doi.org/10.3390/app142411507 - 10 Dec 2024
Viewed by 333
Abstract
The accuracy of satellite positioning results depends on the number of available satellites in the sky. In complex environments such as urban canyons, the effectiveness of satellite positioning is often compromised. To enhance the positioning accuracy of low-cost sensors, this paper combines the visual odometer data output by Xtion with the GNSS/IMU integrated positioning data output by the satellite receiver and MEMS IMU both in the mobile phone through adaptive Kalman filtering to improve positioning accuracy. Studies conducted in different experimental scenarios have found that in unobstructed environments, the RMSE of GNSS/IMU/visual fusion positioning accuracy improves by 50.4% compared to satellite positioning and by 24.4% compared to GNSS/IMU integrated positioning. In obstructed environments, the RMSE of GNSS/IMU/visual fusion positioning accuracy improves by 57.8% compared to satellite positioning and by 36.8% compared to GNSS/IMU integrated positioning. Full article
Figures

Figure 1: GNSS/IMU/visual fusion positioning algorithm flowchart.
Figure 2: Deviation between satellite positioning and visual odometer poses.
Figure 3: Experimental equipment setup.
Figure 4: Position trajectory solution diagram.
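The adaptive Kalman filtering mentioned in the abstract can be illustrated with an innovation-based sketch in which the measurement noise covariance R is re-estimated from a window of recent innovations. The state and measurement models here are generic placeholders, not the paper's exact GNSS/IMU/visual formulation.

```python
import numpy as np
from collections import deque

class AdaptiveKalmanFilter:
    """Linear Kalman filter with innovation-based adaptation of the measurement noise R."""
    def __init__(self, F, H, Q, R, x0, P0, window=30):
        self.F, self.H, self.Q, self.R = F, H, Q, R
        self.x, self.P = x0, P0
        self.innovations = deque(maxlen=window)

    def step(self, z):
        # prediction
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # innovation
        v = z - self.H @ self.x
        self.innovations.append(v)
        # adapt R from the sample covariance of recent innovations
        if len(self.innovations) == self.innovations.maxlen:
            C_v = np.cov(np.array(self.innovations).T)
            R_est = C_v - self.H @ self.P @ self.H.T
            R_est = 0.5 * (R_est + R_est.T)               # keep it symmetric
            if np.all(np.linalg.eigvalsh(R_est) > 0):     # accept only a positive-definite estimate
                self.R = R_est
        # update
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ v
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x

# usage: fuse noisy 2D position fixes with a static-state model
kf = AdaptiveKalmanFilter(np.eye(2), np.eye(2), 0.01 * np.eye(2), np.eye(2),
                          np.zeros(2), np.eye(2))
for z in np.random.default_rng(0).normal(0.0, 0.5, size=(200, 2)):
    est = kf.step(z)
print(est)
```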
23 pages, 6025 KiB  
Article
Integrating Vision and Olfaction via Multi-Modal LLM for Robotic Odor Source Localization
by Sunzid Hassan, Lingxiao Wang and Khan Raqib Mahmud
Sensors 2024, 24(24), 7875; https://doi.org/10.3390/s24247875 - 10 Dec 2024
Viewed by 344
Abstract
Odor source localization (OSL) technology allows autonomous agents like mobile robots to localize a target odor source in an unknown environment. This is achieved by an OSL navigation algorithm that processes an agent’s sensor readings to calculate action commands to guide the robot to locate the odor source. Compared to traditional ‘olfaction-only’ OSL algorithms, our proposed OSL algorithm integrates vision and olfaction sensor modalities to localize odor sources even if olfaction sensing is disrupted by non-unidirectional airflow or vision sensing is impaired by environmental complexities. The algorithm leverages the zero-shot multi-modal reasoning capabilities of large language models (LLMs), negating the requirement of manual knowledge encoding or custom-trained supervised learning models. A key feature of the proposed algorithm is the ‘High-level Reasoning’ module, which encodes the olfaction and vision sensor data into a multi-modal prompt and instructs the LLM to employ a hierarchical reasoning process to select an appropriate high-level navigation behavior. Subsequently, the ‘Low-level Action’ module translates the selected high-level navigation behavior into low-level action commands that can be executed by the mobile robot. To validate our algorithm, we implemented it on a mobile robot in a real-world environment with non-unidirectional airflow environments and obstacles to mimic a complex, practical search environment. We compared the performance of our proposed algorithm to single-sensory-modality-based ‘olfaction-only’ and ‘vision-only’ navigation algorithms, and a supervised learning-based ‘vision and olfaction fusion’ (Fusion) navigation algorithm. The experimental results show that the proposed LLM-based algorithm outperformed the other algorithms in terms of success rates and average search times in both unidirectional and non-unidirectional airflow environments. Full article
Figures

Figure 1: Flow diagram of the OSL system. The robot platform is equipped with a camera for vision and a chemical detector and an anemometer for olfactory sensing. The proposed algorithm utilizes a multi-modal LLM for navigation decision making.
Figure 2: The framework of the proposed multi-modal LLM-based navigation algorithm. The three main modules are the ‘Environment Sensing’ module, ‘High-level Reasoning’ module, and ‘Low-level Action’ module.
Figure 3: Robot notation. Robot position (x, y) and heading ψ are monitored by the built-in localization system. Wind speed u and wind direction are measured from the additional anemometer in the body frame. Wind direction in the inertial frame, φ_Inertial, is derived from the robot heading ψ and the wind direction in the body frame.
Figure 4: Implementation of the prompt. The system prompt includes the task, actions, hints, and output instructions. The final prompt (orange box) includes the system prompt (green box) and the olfactory description (blue box).
Figure 5: Querying the LLM with image and prompt. The input of the model is the visual frame and the prompt; the output of the model is the high-level action selection.
Figure 6: The flow diagram of the ‘High-level Reasoning’ module. It illustrates how the proposed LLM-based agent integrates visual and olfactory sensory observations to make high-level navigation behavior decisions.
Figure 7: (a) Moth mate-seeking behaviors (figure retrieved from [73]). (b) Moth-inspired ‘surge’ and (c) ‘casting’ navigation behaviors.
Figure 8: (a) The search area. The size of the search area is 8.2 m × 3.3 m. The odor source is a humidifier that generates ethanol plumes. An obstacle initially prevents vision of the plume and obstructs navigation. Two perpendicular electric fans are used to create unidirectional or non-unidirectional airflow. Objects are placed to test the visual reasoning capability of the LLM model. (b) Schematic diagram of the search area. Four different robot initial positions in the downwind area were selected for the repeated tests.
Figure 9: (a) The robot platform includes a camera for vision sensing and a chemical sensor and an anemometer for olfaction sensing. (b) The computation system consists of the robot platform and a remote PC. The dotted line represents a wireless link and the solid line represents a physical connection.
Figure 10: Trajectory graph of a successful sample run with the proposed multi-modal LLM-based OSL algorithm in a unidirectional airflow environment. The navigation behaviors are color-separated. The obstacle is indicated by an orange box, and the odor source is represented by a red point with the surrounding circular source declaration region.
Figure 11: Examples of ‘environment sensing’ and ‘reasoning output’ by the GPT-4o model.
Figure 12: Robot trajectories of repeated tests in the unidirectional airflow environment: (a–d) ‘olfaction-only’ (OO); (e–h) ‘vision-only’ (VO); (i–l) ‘vision and olfaction fusion’ (Fusion); and (m–p) ‘LLM-based’ (LLM) navigation algorithms.
Figure 13: Robot trajectories of repeated tests in the non-unidirectional airflow environment: (a–d) ‘olfaction-only’ (OO); (e–h) ‘vision-only’ (VO); (i–l) ‘vision and olfaction fusion’ (Fusion); and (m–p) ‘LLM-based’ (LLM) navigation algorithms.
Figure 14: Mean differences of success rates of the four navigation algorithms. The positive differences are statistically significant at a family-wise error rate (FWER) of 5%.
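The ‘High-level Reasoning’ step (Figures 4 and 5) encodes the olfactory readings and a camera frame into one multi-modal prompt and asks the LLM to pick a navigation behavior. The sketch below builds such a message in the common OpenAI-style chat schema; the prompt wording, behavior names, and sensor fields are assumptions for illustration, not the authors' exact prompt.

```python
import base64

HIGH_LEVEL_BEHAVIORS = ["surge", "casting", "obstacle-avoid", "declare-source"]  # assumed names

def build_osl_prompt(image_path, wind_speed_ms, wind_dir_deg, chemical_ppm):
    """Build a multi-modal chat message asking an LLM to select a navigation behavior."""
    system_prompt = (
        "You are guiding a mobile robot that is searching for an odor source. "
        f"Choose exactly one behavior from {HIGH_LEVEL_BEHAVIORS} based on the camera "
        "image and the olfactory readings. Answer with the behavior name only."
    )
    olfactory_description = (
        f"Wind speed: {wind_speed_ms:.2f} m/s, wind direction: {wind_dir_deg:.0f} deg "
        f"(body frame), chemical concentration: {chemical_ppm:.1f} ppm."
    )
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": [
            {"type": "text", "text": olfactory_description},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ]},
    ]

# The returned message list would be sent to a multi-modal chat model (e.g., a GPT-4o-class
# endpoint); the behavior name in the reply is then handed to the 'Low-level Action' module.
```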
18 pages, 11734 KiB  
Data Descriptor
Multi-Modal Dataset of Human Activities of Daily Living with Ambient Audio, Vibration, and Environmental Data
by Thomas Pfitzinger, Marcel Koch, Fabian Schlenke and Hendrik Wöhrle
Data 2024, 9(12), 144; https://doi.org/10.3390/data9120144 - 9 Dec 2024
Viewed by 363
Abstract
The detection of human activities is an important step in automated systems to understand the context of given situations. It can be useful for applications like healthcare monitoring, smart homes, and energy management systems for buildings. To achieve this, a sufficient data basis is required. The presented dataset contains labeled recordings of 25 different activities of daily living performed individually by 14 participants. The data were captured by five multisensors in supervised sessions in which a participant repeated each activity several times. Flawed recordings were removed, and the different data types were synchronized to provide multi-modal data for each activity instance. Apart from this, the data are presented in raw form, and no further filtering was performed. The dataset comprises ambient audio and vibration, as well as infrared array data, light color and environmental measurements. Overall, 8615 activity instances are included, each captured by the five multisensor devices. These multi-modal and multi-channel data allow various machine learning approaches to the recognition of human activities, for example, federated learning and sensor fusion. Full article
Figures

Figure 1: Distribution of activity instance duration within each class, as well as the minimum, median, and maximum duration, separated into short and long activities.
Figure 2: Three examples of activity recordings from multisensor 4. The measurements for the different environmental readings are not depicted, as they each consist of a single value for the example.
Figure 3: The infrared array data from the ‘Walk to room’ example shown in Figure 2. The participant enters the sensor’s field of view from the left and then walks away from the sensor. Each second, an 8 × 8 matrix of IR-temperature readings is captured, displayed as a heatmap.
Figure 4: Folders and files in the dataset.
Figure 5: Table structure of the data in each HDF5 file. For one recording, the high-frequency data consist of an array, and a single value is given for the environmental data. infrared-array and light-color have multiple values, each with a corresponding timestamp in the neighboring column.
Figure 6: Participant IDs and total time of the recordings. ID 999 is used for No activity, where no participant was involved.
Figure 7: Recording sequence of one activity set. The red arrows represent inputs by the observer.
Figure 8: Front and side views and the internal circuit board of the multisensors used for recording the data. (a) Multisensor front view. (b) Multisensor side view. (c) Circuit board of a multisensor: ESP32 (A), microphone (B), accelerometer (C), infrared array (D), light color sensor (E), environmental sensor (F).
Figure 9: Layout of the two rooms that composed the recording environment. The multisensor positions are marked with blue rectangles and a triangle pointing in the direction they are facing. The rotation of the sensors along the facing axis is also included.
Figure 10: Audio and vibration before and after cross-correlation.
Figure A1: Variation of audio data for each activity class using the standard deviation per entry.
Figure A2: Variation of vibration data for each activity class. For each entry, the standard deviation of the Euclidean norms was calculated.
Figure A3: Variation of infrared array data for each activity class. For each entry, the standard deviation of the means was calculated.
Figure A4: Variation of light color data for each activity class. For each entry, the standard deviation of the Euclidean norms was calculated.
Figure A5: Temperature distribution for each activity class.
Figure A6: Humidity distribution for each activity class.
Figure A7: Pressure distribution for each activity class.
Figure A8: Air quality index distribution for each activity class.
Figure A9: VOC distribution for each activity class.
Figure A10: CO₂ equivalent distribution for each activity class.
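The HDF5 layout described in Figure 5 (high-frequency audio/vibration arrays, timestamped infrared-array and light-color tables, and single-value environmental readings per recording) can be explored with a short script. The file name and the commented dataset keys are hypothetical; the script simply prints whatever structure a given file actually contains.

```python
import h5py

def describe(name, obj):
    """Print every group and dataset with its shape and dtype."""
    if isinstance(obj, h5py.Dataset):
        print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    else:
        print(f"{name}/ (group)")

# "activity_recording.h5" is a placeholder name, not the dataset's real file naming scheme
with h5py.File("activity_recording.h5", "r") as f:
    f.visititems(describe)
    # Once the actual paths are known, individual modalities can be loaded, e.g.:
    # audio = f["audio"][:]             # high-frequency waveform (hypothetical key)
    # ir = f["infrared-array"][:]       # timestamped 8x8 IR frames (hypothetical key)
```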
21 pages, 1344 KiB  
Review
Tackling Heterogeneous Light Detection and Ranging-Camera Alignment Challenges in Dynamic Environments: A Review for Object Detection
by Yujing Wang, Abdul Hadi Abd Rahman, Fadilla ’Atyka Nor Rashid and Mohamad Khairulamirin Md Razali
Sensors 2024, 24(23), 7855; https://doi.org/10.3390/s24237855 - 9 Dec 2024
Viewed by 366
Abstract
Object detection is an essential computer vision task that identifies and locates objects within images or videos and is crucial for applications such as autonomous driving, robotics, and augmented reality. Light Detection and Ranging (LiDAR) and camera sensors are widely used for reliable object detection. These sensors produce heterogeneous data due to differences in data format, spatial resolution, and environmental responsiveness. Existing review articles on object detection predominantly focus on the statistical analysis of fusion algorithms, often overlooking the complexities of aligning data from these distinct modalities, especially dynamic environment data alignment. This paper addresses the challenges of heterogeneous LiDAR-camera alignment in dynamic environments by surveying over 20 alignment methods for three-dimensional (3D) object detection, focusing on research published between 2019 and 2024. This study introduces the core concepts of multimodal 3D object detection, emphasizing the importance of integrating data from different sensor modalities for accurate object recognition in dynamic environments. The survey then delves into a detailed comparison of recent heterogeneous alignment methods, analyzing critical approaches found in the literature, and identifying their strengths and limitations. A classification of methods for aligning heterogeneous data in 3D object detection is presented. This paper also highlights the critical challenges in aligning multimodal data, including dynamic environments, sensor fusion, scalability, and real-time processing. These limitations are thoroughly discussed, and potential future research directions are proposed to address current gaps and advance the state-of-the-art. By summarizing the latest advancements and highlighting open challenges, this survey aims to stimulate further research and innovation in heterogeneous alignment methods for multimodal 3D object detection, thereby pushing the boundaries of what is currently achievable in this rapidly evolving domain. Full article
(This article belongs to the Section Optical Sensors)
Figures

Figure 1: Structure and organization of this paper.
Figure 2: Frequency distribution of datasets based on (a) dataset size, (b) number of classes, and (c) number of sensors.
Figure 3: Magnetization frame of multimodal alignment and fusion.
Figure 4: Magnetization frame of multimodal alignment and fusion.
Figure 5: Summary of alignment approaches.
Figure 6: Structure and keywords of the challenges and research directions.
29 pages, 4333 KiB  
Review
Sensors, Techniques, and Future Trends of Human-Engagement-Enabled Applications: A Review
by Zhuangzhuang Dai, Vincent Gbouna Zakka, Luis J. Manso, Martin Rudorfer, Ulysses Bernardet, Johanna Zumer and Manolya Kavakli-Thorne
Algorithms 2024, 17(12), 560; https://doi.org/10.3390/a17120560 - 6 Dec 2024
Viewed by 622
Abstract
Human engagement is a vital test research area actively explored in cognitive science and user experience studies. The rise of big data and digital technologies brings new opportunities into this field, especially in autonomous systems and smart applications. This article reviews the latest sensors, current advances of estimation methods, and existing domains of application to guide researchers and practitioners to deploy engagement estimators in various use cases from driver drowsiness detection to human–robot interaction (HRI). Over one hundred references were selected, examined, and contrasted in this review. Specifically, this review focuses on accuracy and practicality of use in different scenarios regarding each sensor modality, as well as current opportunities that greater automatic human engagement estimation could unlock. It is highlighted that multimodal sensor fusion and data-driven methods have shown significant promise in enhancing the accuracy and reliability of engagement estimation. Upon compiling the existing literature, this article addresses future research directions, including the need for developing more efficient algorithms for real-time processing, generalization of data-driven approaches, creating adaptive and responsive systems that better cater to individual needs, and promoting user acceptance. Full article
(This article belongs to the Special Issue AI Algorithms for Positive Change in Digital Futures)
Figures

Figure 1: An illustration of the sub-processes of engagement. The human appraisal system, cognitive system, motivation system, and motor system all reveal important information about engagement. Emotions are part of the appraisal process of the behaviour of the interaction partner. A certain level of cognitive load will be associated with engagement. Motor responses such as gross motor, gaze, and facial expression can be captured through observation. Many techniques can measure engagement sub-processes even while not directly measuring engagement as a whole.
Figure 2: Three typical DNN architectures for human engagement classification or regression. Left: a video containing multiple frames is processed with CNN+LSTM in conjunction with fully connected layers for engagement output, as in [14,57]. Top right: engagement is evaluated from a single frame with CNN+MLP, as tested in [131,138]. Bottom right: a feature vector concatenating sole or multimodal features is curated before learning hidden states with Recurrent Neural Networks (RNNs), such as GRU and LSTM in [13].
Figure 3: Example applications enabled by engagement estimation. (A) E-learning can benefit from automatic learner engagement evaluation. (B) Driver drowsiness detection is critical for safety and reducing road accidents (data from [68]). (C) Engagement estimation plays a key role in designing human–computer interfaces, social robots, and autonomous systems for HRI.
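A minimal sketch of the first architecture in Figure 2, a per-frame CNN feeding an LSTM and fully connected layers that regress an engagement score, is given below. Layer sizes are arbitrary and are not taken from any of the cited works.

```python
import torch
import torch.nn as nn

class CnnLstmEngagement(nn.Module):
    """Video-level engagement regression: CNN per frame -> LSTM over frames -> FC head."""
    def __init__(self, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                        # tiny per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # -> 32-dim feature per frame
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, video):                            # video: (batch, frames, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)                   # last hidden state summarizes the clip
        return self.head(h_n[-1])                        # one engagement score per clip

score = CnnLstmEngagement()(torch.randn(2, 8, 3, 64, 64))
print(score.shape)  # torch.Size([2, 1])
```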
26 pages, 548 KiB  
Systematic Review
A Systematic Review of Cutting-Edge Radar Technologies: Applications for Unmanned Ground Vehicles (UGVs)
by Can Ersü, Eduard Petlenkov and Karl Janson
Sensors 2024, 24(23), 7807; https://doi.org/10.3390/s24237807 - 6 Dec 2024
Viewed by 488
Abstract
This systematic review evaluates the integration of advanced radar technologies into unmanned ground vehicles (UGVs), focusing on their role in enhancing autonomy in defense, transportation, and exploration. A comprehensive search across IEEE Xplore, Google Scholar, arXiv, and Scopus identified relevant studies from 2007 to 2024. The studies were screened, and 54 were selected for full analysis based on inclusion criteria. The review details advancements in radar perception, machine learning integration, and sensor fusion while also discussing the challenges of radar deployment in complex environments. The findings reveal both the potential and limitations of radar technology in UGVs, particularly in adverse weather and unstructured terrains. The implications for practice, policy, and future research are outlined. Full article
(This article belongs to the Section Radar Sensors)
Figures

Figure 1: Annual research publications in radar applications from 2014 to 2023, categorized by application area: object detection, point-cloud improvements, navigation, and object tracking.
Figure 2: Key terms associated with sensor fusion technologies and related concepts discussed in the literature.
24 pages, 3438 KiB  
Review
Advances in Global Remote Sensing Monitoring of Discolored Pine Trees Caused by Pine Wilt Disease: Platforms, Methods, and Future Directions
by Hao Shi, Liping Chen, Meixiang Chen, Danzhu Zhang, Qiangjia Wu and Ruirui Zhang
Forests 2024, 15(12), 2147; https://doi.org/10.3390/f15122147 - 5 Dec 2024
Viewed by 456
Abstract
Pine wilt disease (PWD), caused by pine wood nematodes, is a major forest disease that poses a serious threat to global pine forest resources. Therefore, the prompt identification of PWD-discolored trees is crucial for controlling its spread. Currently, remote sensing is the primary approach for monitoring PWD. This study comprehensively reviews advances in the global remote sensing monitoring of PWD. It explores the remote sensing platforms and identification methods used in the detection of PWD-discolored trees, evaluates their precision, and provides prospects for existing problems. Three observations were made from existing studies: First, unmanned aerial vehicles (UAVs) are the dominant remote sensing platforms, and RGB data sources are the most commonly used for identifying PWD-discolored trees. Second, deep-learning methods are increasingly applied to identify PWD-discolored trees. Third, the early monitoring of PWD-discolored trees has gained increasing attention. This study reveals the problems associated with the acquisition of remote sensing images and identification algorithms. Future research directions include the fusion of multiple sensors to enhance the identification precision and early monitoring of PWD-discolored trees to obtain an optimal detection window period. This study aimed to provide technical references and scientific foundations for the comprehensive monitoring and control of PWD. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
Figures

Figure 1: (a) Monochamus alternatus; (b) PWN under an optical microscope.
Figure 2: Global distribution map and first record timeline of pine wilt disease (data from https://www.cabidigitallibrary.org/doi/10.1079/cabicompendium.10448, accessed on 21 May 2024). Black represents the native countries of PWD, while green, blue, and red represent the countries where PWD has been introduced.
Figure 3: Pine forests affected by PWD: (a) a forest affected by PWD from a drone’s perspective; (b,c) typical pine trees after infection.
Figure 4: UAV images of pine trees at different stages of infection.
Figure 5: The spectral reflectance curves of infected trees at different infection stages [65].
25 pages, 5891 KiB  
Article
Discrete Event System Specification for IoT Applications
by Iman Alavi Fazel and Gabriel Wainer
Sensors 2024, 24(23), 7784; https://doi.org/10.3390/s24237784 - 5 Dec 2024
Viewed by 341
Abstract
The Internet of Things (IoT) has emerged as a transformative technology with a variety of applications across various industries. However, the development of IoT systems is hindered by challenges such as interoperability, system complexity, and the need for streamlined development and maintenance processes. In this study, we introduce a robust architecture grounded in discrete event system specification (DEVS) as a model-driven development solution to overcome these obstacles. Our proposed architecture utilizes the publish/subscribe paradigm, and it also adds to the robustness of the proposed solution with the incorporation of the Brooks–Iyengar algorithm to enhance fault tolerance against unreliable sensor readings. We detail the DEVS specification that is used to define this architecture and validate its effectiveness through a detailed home automation case study that integrates multiple sensors and actuators. Full article
(This article belongs to the Special Issue Wireless Sensor Networks: Signal Processing and Communications)
Figures

Figure 1: The coupled DEVS model developed to execute on IoT devices.
Figure 2: Multiple nodes communicating with a message broker.
Figure 3: Input and output relations of the NetworkMedium model.
Figure 4: The coupling of IoT nodes to the message broker for simulation.
Figure 5: Nodes’ values in each iteration of message exchange in simulation #1.
Figure 6: Nodes’ values in each iteration of message exchange in simulation #2.
Figure 7: The architecture of the case study.
Figure 8: The electrical connection between the ESP32 and the Grove Temperature sensor.
Figure 9: The electrical connection between the ESP32 and the MH-Z19 sensor.
Figure 10: DEVS models for the HVAC control node.
Figure 11: DEVS models for the smart blind device.
Figure 12: The models and their coupling to simulate the PIDControl model.
Figure 13: The plant’s output subjected to the input from the PIDControl model.
Figure 14: Angle of the servomotor under simulation at different timestamps.
Figure 15: Deployment of the sensors on the VSim scale model room.
Figure 16: Inexact agreement in the case study in different iterations.
Figure 17: Actuating command generated by the PIDControl model.
Figure 18: The fused CO₂ sensor readings in the Base model.
Figure 19: The temperature the nodes agree on at different timestamps.
Figure 20: The actuating command the PIDControl model generates.
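The Brooks–Iyengar step mentioned in the abstract (fusing interval estimates from several sensors while tolerating up to f faulty ones) can be sketched roughly as below. This is a simplified illustration of the algorithm's idea, not the paper's DEVS-based implementation.

```python
def brooks_iyengar(intervals, f):
    """Fuse sensor intervals [(lo, hi), ...] while tolerating up to f faulty sensors.

    Returns a fused point estimate, or None if no region is covered by at least
    n - f intervals. Simplified sketch of the classic algorithm.
    """
    n = len(intervals)
    endpoints = sorted({p for lo, hi in intervals for p in (lo, hi)})
    weighted_sum, total_weight = 0.0, 0
    for a, b in zip(endpoints[:-1], endpoints[1:]):
        mid = 0.5 * (a + b)
        cover = sum(1 for lo, hi in intervals if lo <= mid <= hi)  # sensors agreeing on this region
        if cover >= n - f:
            weighted_sum += mid * cover
            total_weight += cover
    return weighted_sum / total_weight if total_weight else None

# Example: five temperature readings as intervals, the last sensor clearly faulty
readings = [(20.1, 20.9), (20.3, 21.1), (19.9, 20.7), (20.2, 21.0), (35.0, 36.0)]
print(brooks_iyengar(readings, f=1))  # ~20.5, unaffected by the outlier
```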
22 pages, 6724 KiB  
Article
An FPGA-Based Trigonometric Kalman Filter Approach for Improving the Measurement Quality of a Multi-Head Rotational Encoder
by Dariusz Janiszewski
Energies 2024, 17(23), 6122; https://doi.org/10.3390/en17236122 - 5 Dec 2024
Viewed by 343
Abstract
This article introduces an advanced theoretical approach, named the Trigonometric Kalman Filter (TKF), to enhance measurement accuracy for multi-head rotational encoders. Leveraging the processing capabilities of a Field-Programmable Gate Array (FPGA), the proposed TKF algorithm uses trigonometric functions and sophisticated signal fusion techniques to provide highly accurate real-time angle estimation with rapid response. The inclusion of the Coordinate Rotation Digital Computer (CORDIC) algorithm enables swift and efficient computation of trigonometric values, facilitating precise tracking of angular position and rotational speed. This approach represents a notable advancement in control systems, where high accuracy and minimal latency are essential for optimal performance. The paper addresses key challenges in angle measurement, particularly the signal fusion inaccuracies that often impede precision in high-demand applications. Implementing the TKF with an FPGA-based pure fixed-point method not only enhances computational efficiency but also significantly reduces latency when compared to conventional software-based solutions. This FPGA-based implementation is particularly advantageous in real-time applications where processing speed and accuracy are critical, and it demonstrates the effective integration of hardware acceleration in improving measurement fidelity. To validate the effectiveness of this approach, the TKF was rigorously tested on a precision drive control system, configured for a direct PMSM drive in an astronomical telescope mount equipped with a standard 0.5m telescope frequently used by astronomers. This real-world application highlights the TKF’s ability to meet the stringent positioning and measurement accuracy requirements characteristic of astronomical observation, a field where minute angular adjustments are critical. The FPGA-based design enables high-frequency updates, essential for managing the minor, precise adjustments required for telescope control. The study includes a comprehensive computational analysis and experimental testing on an Altera Stratix FPGA board, presenting a detailed comparison of the TKF’s performance with other known methods, including fusion techniques such as differential methods, αβ filters, and related Kalman filtering applied to one sensors. The study demonstrates that the four-head fusion configuration of the TKF outperforms traditional methods in terms of measurement accuracy and responsiveness. Full article
(This article belongs to the Section F3: Power Electronics)
Figures

Figure 1: Typical computing step of the CORDIC algorithm.
Figure 2: Rotating encoder ring (ER) and four fixed read heads (RH1–RH4): real assembly (a), schematic view (b).
Figure 3: TKF diagram.
Figure 4: The laboratory prototype astronomical two-axis direct-drive mount with an 11″ telescope [30].
Figure 5: FPGA controller (DE4).
Figure 6: Schematic diagram of the computation system in Quartus II software.
Figure 7: Trial scenario: torque demand (i_q) to produce a small movement.
Figure 8: TKF results with floating-point (a) and fixed-point (b) operations.
Figure 9: Read and estimated position in dynamic (a) and steady-state (b) conditions.
Figure 10: Comparison of the estimated speed ω_r in dynamic (a) and steady-state (b) operation.
Figure 11: Comparison of the dynamic behavior of the estimated speed ω_r during large changes in value, for the positive (a) and negative (b) response.
Figure 12: Comparison of dynamic behavior results with other methods for the whole scenario (a) and an enlarged part with the biggest changes (b).
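The CORDIC iteration referenced in Figure 1 computes sine and cosine with only shifts, adds, and a small arctangent table, which is what makes it attractive inside an FPGA. The floating-point sketch below shows the rotation-mode recurrence; the article's design runs the same idea in pure fixed point.

```python
import math

def cordic_sin_cos(theta, iterations=16):
    """Rotation-mode CORDIC for |theta| <= pi/2 (floating-point illustration)."""
    gain = 1.0                                   # product of 1 / sqrt(1 + 2^(-2i))
    for i in range(iterations):
        gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0            # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)            # precomputed arctangent table entry
    return y * gain, x * gain                    # (sin, cos)

s, c = cordic_sin_cos(math.pi / 6)
print(round(s, 4), round(c, 4))   # approximately 0.5 and 0.866
```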
19 pages, 3861 KiB  
Article
A Novel Temporal Fusion Channel Network with Multi-Channel Hybrid Attention for the Remaining Useful Life Prediction of Rolling Bearings
by Cunsong Wang, Junjie Jiang, Heng Qi, Dengfeng Zhang and Xiaodong Han
Processes 2024, 12(12), 2762; https://doi.org/10.3390/pr12122762 - 5 Dec 2024
Viewed by 393
Abstract
The remaining useful life (RUL) prediction of rolling bearings is crucial for optimizing maintenance schedules, reducing downtime, and extending machinery lifespan. However, existing multi-channel feature fusion methods do not fully capture the correlations between channels and time points in multi-dimensional sensor data. To address the above problems, this paper proposes a multi-channel feature fusion algorithm based on a hybrid attention mechanism and temporal convolutional networks (TCNs), called MCHA-TFCN. The model employs a dual-channel hybrid attention mechanism, integrating self-attention and channel attention to extract spatiotemporal features from multi-channel inputs. It uses causal dilated convolutions in TCNs to capture long-term dependencies and incorporates enhanced residual structures for global feature fusion, effectively extracting high-level spatiotemporal degradation information. The experimental results on the PHM2012 dataset show that MCHA-TFCN achieves excellent performance, with an average Root-Mean-Square Error (RMSE) of 0.091, significantly outperforming existing methods like the DANN and CNN-LSTM. Full article
Figures

Figure 1: Schematic diagram of traditional convolution calculation.
Figure 2: Schematic diagram of dilated convolution calculation.
Figure 3: Schematic diagram of dilated causal convolution calculation.
Figure 4: Structural diagram of the TCN residual block.
Figure 5: Schematic diagram of dilated convolution with a residual connection.
Figure 6: Schematic diagram of the channel attention mechanism.
Figure 7: The RUL prediction process of the proposed method.
Figure 8: Diagram of the DCHA structure.
Figure 9: MCHA-TFCN structure diagram.
Figure 10: Rolling bearing test bench.
Figure 11: Bearing prediction results under working condition 1.
Figure 12: Bearing prediction results under working condition 2.
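The dilated causal convolution at the heart of the TCN part of MCHA-TFCN (Figures 3-5) can be sketched as below: the input is padded only on the past side so each output depends solely on current and earlier samples, and stacking layers with growing dilation expands the receptive field exponentially. Layer sizes here are arbitrary.

```python
import torch
import torch.nn as nn

class DilatedCausalConv1d(nn.Module):
    """1D convolution that never looks into the future, with a configurable dilation."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation          # pad only the past side
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                                      # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.left_pad, 0))           # causal left padding
        return self.conv(x)

# Dilations 1, 2, 4 give an exponentially growing receptive field over the time axis
block = nn.Sequential(
    DilatedCausalConv1d(1, 16, dilation=1), nn.ReLU(),
    DilatedCausalConv1d(16, 16, dilation=2), nn.ReLU(),
    DilatedCausalConv1d(16, 16, dilation=4), nn.ReLU(),
)
print(block(torch.randn(2, 1, 128)).shape)  # torch.Size([2, 16, 128])
```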
13 pages, 2146 KiB  
Article
Real-Time Postural Disturbance Detection Through Sensor Fusion of EEG and Motion Data Using Machine Learning
by Zhuo Wang, Avia Noah, Valentina Graci, Emily A. Keshner, Madeline Griffith, Thomas Seacrist, John Burns, Ohad Gal and Allon Guez
Sensors 2024, 24(23), 7779; https://doi.org/10.3390/s24237779 - 5 Dec 2024
Viewed by 392
Abstract
Millions of people around the globe are impacted by falls annually, making it a significant public health concern. Falls are particularly challenging to detect in real time, as they often occur suddenly and with little warning, highlighting the need for innovative detection methods. This study aimed to assist in the advancement of an accurate and efficient fall detection system using electroencephalogram (EEG) data to recognize the reaction to a postural disturbance. We employed a state-space-based system identification approach to extract features from EEG signals indicative of reactions to postural perturbations and compared its performance with those of traditional autoregressive (AR) and Shannon entropy (SE) methods. Using EEG epochs starting from 80 ms after the onset of the event yielded improved performance compared with epochs that started from the onset. The classifier trained on the EEG data achieved promising results, with a sensitivity of up to 90.9%, a specificity of up to 97.3%, and an accuracy of up to 95.2%. Additionally, a real-time algorithm was developed to integrate the EEG and accelerometer data, which enabled accurate fall detection in under 400 ms and achieved an over 99% accuracy in detecting unexpected falls. This research highlights the potential of using EEG data in conjunction with other sensors for developing more accurate and efficient fall detection systems, which can improve the safety and quality of life for elderly adults and other vulnerable individuals. Full article
(This article belongs to the Special Issue Sensors and Sensing Technologies for Neuroscience)
Figures

Figure 1: (A) Experimental setup illustration, showing the “upright stance” (left) and “balance perturbation” (right) stages. (B) Scalp EEG channel locations. (C) Full view of the cEEGrid piece. (D) Illustration of a subject wearing the cEEGrids. (E) Mastoid channel locations.
Figure 2: Exemplary EEG (Ch. R6) activity during two unpredictable events and ten predictable events. Minimal differences in the EEG activity were observed between these two types of events up to 80 ms following the event onset. The vertical line marks the onset of each event.
Figure 3: Exemplar event data illustrating EEG (Ch. R6) and platform-based acceleration. The EEG data shown were detrended, baseline corrected, and filtered using a 2nd-order Butterworth bandpass filter (2.5 Hz–30 Hz) and a 60 Hz notch filter. The acceleration data represent one-dimensional raw measurements in the antero-posterior direction from the platform. The vertical line marks the onset of the event.
Figure 4: Block diagram of the proposed dynamic state-space model.
Figure 5: Algorithm workflow for real-time fall detection using EEG and acceleration signals.
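Two of the baseline feature extractors compared in the abstract, autoregressive (AR) coefficients and Shannon entropy computed from an EEG epoch, can be sketched as follows; the state-space identification approach itself is more involved and is not reproduced here. Epoch length, AR order, and bin count are arbitrary choices.

```python
import numpy as np

def ar_coefficients(epoch, order=6):
    """Least-squares fit of an AR(order) model to a 1-D EEG epoch."""
    x = np.asarray(epoch, dtype=float)
    rows = [x[i - order:i][::-1] for i in range(order, len(x))]  # lagged samples x[i-1..i-order]
    A, b = np.vstack(rows), x[order:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

def shannon_entropy(epoch, bins=32):
    """Shannon entropy (bits) of the amplitude distribution of the epoch."""
    hist, _ = np.histogram(epoch, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# usage on a synthetic epoch (e.g., 400 ms at 250 Hz -> 100 samples)
epoch = np.random.default_rng(0).standard_normal(100)
features = np.concatenate([ar_coefficients(epoch), [shannon_entropy(epoch)]])
print(features.shape)  # (7,)
```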
24 pages, 6400 KiB  
Article
Innovative Modeling of IMU Arrays Under the Generic Multi-Sensor Integration Strategy
by Benjamin Brunson, Jianguo Wang and Wenbo Ma
Sensors 2024, 24(23), 7754; https://doi.org/10.3390/s24237754 - 4 Dec 2024
Viewed by 362
Abstract
This research proposes a novel modeling method for integrating IMU arrays into multi-sensor kinematic positioning/navigation systems. This method characterizes sensor errors (biases/scale factor errors) for each IMU in an IMU array, leveraging the novel Generic Multisensor Integration Strategy (GMIS) and the framework for comprehensive error analysis in Discrete Kalman filtering developed through the authors’ previous research. This work enables the time-varying estimation of all individual sensor errors for an IMU array, as well as rigorous fault detection and exclusion for outlying measurements from all constituent sensors. This research explores the feasibility of applying Variance Component Estimation (VCE) to IMU array data, using separate variance components to characterize the performance of each IMU’s gyroscopes and accelerometers. This analysis is only made possible by directly modeling IMU inertial measurements under the GMIS. A real land-vehicle kinematic dataset was used to demonstrate the proposed technique. The a posteriori positioning/attitude standard deviations were compared between multi-IMU and single IMU solutions, with the multi-IMU solution providing an average accuracy improvement of ca. 14–16% in the estimated position, 30% in the estimated roll and pitch, and 40% in the estimated heading. The results of this research demonstrate that IMUs in an array do not generally exhibit homogeneous behavior, even when using the same model of tactical-grade MEMS IMU. Furthermore, VCE was used to compare the performance of three IMU sensors, which is not possible under other IMU array data fusion techniques. This research lays the groundwork for the future evaluation of IMU array sensor configurations. Full article
Show Figures

Figure 1
Flowchart of the traditional sensor integration strategy for a simple GPS/IMU sensor integration. This workflow was adapted from those used in [17,18,19,20].
Figure 2
The general workflow of integrating positioning sensors in the GMIS.
Figure 3
The top-down view of the trajectory of the kinematic dataset. The coordinates presented are local geodetic coordinates relative to the starting location.
Figure 4
The velocity profile of the kinematic dataset. Velocity values are expressed in the navigation frame of local geodetic coordinates.
Figure 5
The acceleration profile of the kinematic dataset. Acceleration values are expressed in the navigation frame of local geodetic coordinates.
Figure 6
The roll, pitch, and heading profiles for the kinematic dataset. Attitude is presented in the local navigation frame.
Figure 7
The estimated time derivatives for the roll, pitch, and heading values over the kinematic dataset. These values are presented in the local navigation frame.
Figure 8
The estimated accelerometer specific-force residuals of each IMU in the array for the kinematic dataset.
Figure 9
The estimated gyroscope angular rate residuals of each IMU in the array for the kinematic dataset.
Figure 10
Histograms of the gyroscope standardized residuals for all three constituent IMUs. Standard normal distribution superimposed for reference (a simple thresholding sketch based on such standardized residuals follows this figure list).
Figure 11
Histograms of the accelerometer standardized residuals for all three constituent IMUs. Standard normal distribution superimposed for reference.
Figure 12
The estimated accelerometer bias for (a) the first IMU in the array, (b) the second IMU in the array, and (c) the third IMU in the array.
Figure 13
The estimated accelerometer scale factor errors for (a) the first IMU in the array, (b) the second IMU in the array, and (c) the third IMU in the array.
Figure 14
The estimated gyroscope bias for (a) the first IMU in the array, (b) the second IMU in the array, and (c) the third IMU in the array.
Figure 15
The estimated gyroscope scale factor errors for (a) the first IMU in the array, (b) the second IMU in the array, and (c) the third IMU in the array.
Figure 16
The estimated overall standard error of unit weight for the MIMU-integrated system (moving window: 20 s).
Figure 17
The estimated standard errors of unit weight for the specific force measurements of the three sets of IMU accelerometers in the MIMU-integrated system (moving window: 20 s).
Figure 18
The estimated standard errors of unit weight for the angular rate measurements of the three sets of IMU gyroscopes in the MIMU-integrated system (moving window: 20 s).
Figure 19
The ratio of the a posteriori MIMU position standard deviations to the a posteriori SIMU position standard deviations (Note: the dashed lines plot the average ratios for each sensor axis).
Figure 20
The ratio of the a posteriori MIMU attitude standard deviations to the a posteriori SIMU attitude standard deviations (Note: the dashed lines plot the average ratios for each sensor axis).
Figure 21
The ratio of the a posteriori MIMU attitude time-derivative standard deviations to the a posteriori SIMU attitude time-derivative standard deviations (Note: the dashed lines plot the average ratios for each sensor axis).
Figure 22
The estimated standard errors of unit weight for the SIMU-integrated system (moving window: 20 s).
16 pages, 468 KiB  
Article
Modeling and Analysis of Dispersive Propagation of Structural Waves for Vibro-Localization
by Murat Ambarkutuk and Paul E. Plassmann
Sensors 2024, 24(23), 7744; https://doi.org/10.3390/s24237744 - 4 Dec 2024
Viewed by 316
Abstract
The dispersion of structural waves, where wave speed varies with frequency, introduces significant challenges in accurately localizing occupants in a building based on vibrations caused by their movements. This study presents a novel multi-sensor vibro-localization technique that accounts for dispersion effects, enhancing the accuracy and robustness of occupant localization. The proposed method utilizes a model-based approach to parameterize key propagation phenomena, including wave dispersion and attenuation, which are fitted to observed waveforms. The localization is achieved by maximizing the joint likelihood of the occupant’s location based on sensor measurements. The effectiveness of the proposed technique is validated using two experimental datasets: one from a controlled environment involving an aluminum plate and the other from a building-scale experiment conducted at Goodwin Hall, Virginia Tech. Results for the proposed algorithm demonstrate a significant improvement in localization accuracy compared to benchmark algorithms. Specifically, in the aluminum plate experiments, the proposed technique reduced the average localization precision from 7.77 cm to 1.97 cm, representing a ∼74% improvement. Similarly, in the Goodwin Hall experiments, the average localization error decreased from 0.67 m to 0.3 m, with a ∼55% enhancement in accuracy. These findings indicate that the proposed approach outperforms existing methods in accurately determining occupant locations, even in the presence of dispersive wave propagation. Full article
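As a rough illustration of the model-based approach described above, the following Python sketch propagates a source signature through a frequency-dependent phase speed and attenuation, then localizes the source by maximizing a joint Gaussian log-likelihood over a grid of candidate positions. This is not the authors' algorithm: the dispersion relation, attenuation constant, sensor layout, and impact pulse are all invented, and the real method fits the propagation parameters to observed waveforms rather than assuming them.

import numpy as np

def propagate(signature, distance, fs, phase_speed, alpha):
    # Dispersive propagation sketch: each frequency component travels at its
    # own phase speed c(f) and is attenuated as exp(-alpha * f * distance).
    spectrum = np.fft.rfft(signature)
    freqs = np.fft.rfftfreq(len(signature), d=1.0 / fs)
    delay = distance / phase_speed(freqs)              # per-frequency travel time
    transfer = np.exp(-alpha * freqs * distance) * np.exp(-2j * np.pi * freqs * delay)
    return np.fft.irfft(spectrum * transfer, n=len(signature))

def localize(grid, sensors, measurements, signature, fs, phase_speed, alpha, noise_var):
    # Grid search maximizing the joint Gaussian log-likelihood of the source
    # location given the waveforms observed at all sensors.
    best_xy, best_ll = None, -np.inf
    for xy in grid:
        ll = 0.0
        for sensor_xy, observed in zip(sensors, measurements):
            d = float(np.linalg.norm(np.subtract(xy, sensor_xy)))
            predicted = propagate(signature, d, fs, phase_speed, alpha)
            ll += -0.5 * np.sum((observed - predicted) ** 2) / noise_var
        if ll > best_ll:
            best_xy, best_ll = xy, ll
    return best_xy

def phase_speed(freqs):
    # Invented plate-like dispersion relation: wave speed grows with sqrt(f).
    return 50.0 * np.sqrt(np.maximum(freqs, 1.0))

fs = 10_000.0
alpha = 1e-3                                             # invented attenuation constant
t = np.arange(0.0, 0.1, 1.0 / fs)
signature = np.exp(-((t - 0.01) ** 2) / 1e-6)            # synthetic impact pulse
sensors = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
true_xy = (0.30, 0.20)
measurements = [propagate(signature, float(np.linalg.norm(np.subtract(true_xy, s))),
                          fs, phase_speed, alpha) for s in sensors]
grid = [(x, y) for x in np.linspace(0.0, 0.5, 26) for y in np.linspace(0.0, 0.5, 26)]
print(localize(grid, sensors, measurements, signature, fs, phase_speed, alpha, 1e-4))
# Expected to recover (approximately) the true location (0.30, 0.20).

In this sketch, propagate plays the role of the propagation operator P illustrated in Figure 2 of the figure list below, and the grid search stands in for maximizing the joint likelihood of the occupant's location over the sensor measurements.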
Show Figures

Figure 1
This figure illustrates the wave propagation process. Left: an illustration of the geometric layout of the floor, the sensors, and the occupant location. Right: the wave propagation process from the occupant to the sensors. As can be seen in the figure, both sensors i and j are affected by the wave propagation process. Sensor i is further away from the occupant than sensor j, which results in a greater delay and more attenuation relative to sensor j.
Figure 2
This figure illustrates the process of transforming the sensor measurements z_i and z_j into estimated signatures and assessing their similarity. The shorthand P denotes the propagation operator, which is used to convert sensor data into meaningful signatures. Initially, the unpropagation step transforms the measurements into the estimated signatures P^{-1}{z_i} and P^{-1}{z_j}, producing the estimated signature s̄. Subsequently, the similarity between the propagated signature P{s̄} and the original measurement z_j is assessed, allowing for a comparison of the sensor outputs.
Figure 3
Layout of the aluminum plate experiment, showing impact locations and sensor placements. The 6061 aluminum alloy plate (50 × 50 × 2 cm) was impacted at 81 distinct locations, spaced 5 cm apart. Training points (n = 40) are represented by squares (■), while testing points (n = 41) are shown as filled circles (●). The positions of the four piezoelectric PCB 333B50 sensors are marked with green squares (■). This configuration was used to collect vibrational data for calibrating and validating the vibro-localization model.
Figure 4
Layout of the experimental setup conducted in Goodwin Hall at Virginia Tech. The figure illustrates the predefined walking path along which participants moved, with step locations marked by black squares (■) for training and black circles (●) for testing. Green squares (■) indicate the sensor locations distributed along the corridor to capture vibrational data. This layout, previously utilized in studies [22,24], serves as the benchmark for building-scale experimental data collection.
Figure 5
This figure presents a comparison of the joint likelihoods and sensor data for an impact location near the sensor array. The top-left subfigure shows the joint likelihood computed using the baseline method [18], while the top-right subfigure displays the joint likelihood obtained from the proposed technique. The bottom-left subfigure illustrates the raw sensor measurements, and the bottom-right subfigure shows the estimated signatures derived from these measurements. This comparison highlights the performance of both methods in accurately estimating the impact location based on vibrational data.
Figure 6
PDF and CDF of the localization error for the proposed method and the baseline approach. In the PDF (left), the proposed method exhibits a sharp peak around 20 m, indicating a higher frequency of lower localization errors compared to the baseline, which shows a more distributed error profile. The CDF (right) further supports this observation, as the proposed method achieves 80% cumulative frequency at a lower error range than the baseline, demonstrating more consistent and accurate performance. These results suggest that the proposed technique significantly reduces localization error, achieving more reliable estimates than the baseline method.
Figure 7
Representative examples of localization results using the proposed method on the building dataset. Each subfigure illustrates the joint likelihood calculated from the measured waveforms, with the true occupant location marked by a red cross (×) and the estimated location by a black plus sign (+). (a) shows the occupant at the leftmost end of the corridor, (b) at the center, and (c) at the rightmost end. These results demonstrate the reliability and effectiveness of the proposed technique in estimating impact locations across different occupant positions.
Figure 8
PDF and CDF of the localization error for both occupants in the Goodwin Hall dataset. The results indicate that the error distributions for Occupant A and Occupant B are similar, demonstrating that the proposed technique is robust to inter-occupant differences. Despite variations in walking patterns and body dynamics between individuals, the method maintains relatively consistent performance. (a) PDF of the localization error observed in the Occupant A and Occupant B data. (b) CDF of the localization error observed in the Occupant A and Occupant B data.