Search Results (24)

Search Parameters:
Keywords = robotics olfaction

23 pages, 6025 KiB  
Article
Integrating Vision and Olfaction via Multi-Modal LLM for Robotic Odor Source Localization
by Sunzid Hassan, Lingxiao Wang and Khan Raqib Mahmud
Sensors 2024, 24(24), 7875; https://doi.org/10.3390/s24247875 - 10 Dec 2024
Viewed by 487
Abstract
Odor source localization (OSL) technology allows autonomous agents like mobile robots to localize a target odor source in an unknown environment. This is achieved by an OSL navigation algorithm that processes the agent's sensor readings to calculate action commands that guide the robot to the odor source. Compared to traditional 'olfaction-only' OSL algorithms, our proposed OSL algorithm integrates vision and olfaction sensor modalities to localize odor sources even if olfaction sensing is disrupted by non-unidirectional airflow or vision sensing is impaired by environmental complexities. The algorithm leverages the zero-shot multi-modal reasoning capabilities of large language models (LLMs), negating the requirement for manual knowledge encoding or custom-trained supervised learning models. A key feature of the proposed algorithm is the 'High-level Reasoning' module, which encodes the olfaction and vision sensor data into a multi-modal prompt and instructs the LLM to employ a hierarchical reasoning process to select an appropriate high-level navigation behavior. Subsequently, the 'Low-level Action' module translates the selected high-level navigation behavior into low-level action commands that can be executed by the mobile robot. To validate our algorithm, we implemented it on a mobile robot in a real-world environment with non-unidirectional airflow and obstacles to mimic a complex, practical search environment. We compared the performance of our proposed algorithm to single-sensory-modality 'olfaction-only' and 'vision-only' navigation algorithms, and to a supervised learning-based 'vision and olfaction fusion' (Fusion) navigation algorithm. The experimental results show that the proposed LLM-based algorithm outperformed the other algorithms in terms of success rate and average search time in both unidirectional and non-unidirectional airflow environments.
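The 'High-level Reasoning' module is described here only at the block-diagram level. As a rough illustration of the idea, the sketch below (hypothetical names throughout, not the authors' code) encodes olfaction readings as text, pairs them with a camera frame, and asks a multi-modal LLM to choose one of a fixed set of high-level behaviors; a 'Low-level Action' module would then translate the choice into wheel commands. `query_multimodal_llm` is a stand-in for whatever LLM client is actually used.

```python
# Minimal sketch of an LLM-driven 'High-level Reasoning' step for odor source
# localization. Behavior names and prompt wording are illustrative guesses.

BEHAVIORS = ("surge_upwind", "crosswind_cast", "approach_visual_target", "avoid_obstacle")

SYSTEM_PROMPT = (
    "You are the navigation planner of a mobile robot searching for an odor source. "
    "Given the camera image and the olfaction summary, reply with exactly one of: "
    + ", ".join(BEHAVIORS)
)

def olfaction_description(concentration_ppm, wind_speed_ms, wind_dir_deg):
    """Encode raw olfaction readings as text for the multi-modal prompt."""
    return (f"Odor concentration: {concentration_ppm:.1f} ppm. "
            f"Wind: {wind_speed_ms:.2f} m/s from {wind_dir_deg:.0f} deg (body frame).")

def query_multimodal_llm(system_prompt, user_text, image_bytes):
    """Placeholder for the real multi-modal LLM client call."""
    raise NotImplementedError

def select_behavior(image_bytes, concentration_ppm, wind_speed_ms, wind_dir_deg):
    """One 'High-level Reasoning' tick: sensor readings in, behavior label out."""
    text = olfaction_description(concentration_ppm, wind_speed_ms, wind_dir_deg)
    reply = query_multimodal_llm(SYSTEM_PROMPT, text, image_bytes).strip().lower()
    # Fall back to crosswind casting if the LLM returns an unknown label.
    return reply if reply in BEHAVIORS else "crosswind_cast"
```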
Figure 1: Flow diagram of the OSL system. The robot platform is equipped with a camera for vision and a chemical detector and an anemometer for olfactory sensing. The proposed algorithm utilizes a multi-modal LLM for navigation decision making.
Figure 2: The framework of the proposed multi-modal LLM-based navigation algorithm. The three main modules are the 'Environment Sensing' module, the 'High-level Reasoning' module, and the 'Low-level Action' module.
Figure 3: Robot notation. Robot position (x, y) and heading ψ are monitored by the built-in localization system. Wind speed u and wind direction are measured by the added anemometer in the body frame. Wind direction in the inertial frame, φ_Inertial, is derived from the robot heading ψ and the wind direction in the body frame.
Figure 4: Implementation of the prompt. The system prompt includes the task, actions, hints, and output instructions. The final prompt (orange box) combines the system prompt (green box) and the olfactory description (blue box).
Figure 5: Querying the LLM with image and prompt. The input to the model is the visual frame plus the prompt; the output is the high-level action selection.
Figure 6: Flow diagram of the 'High-level Reasoning' module, illustrating how the proposed LLM-based agent integrates visual and olfactory sensory observations to make high-level navigation behavior decisions.
Figure 7: (a) Moth mate-seeking behaviors (figure retrieved from [73]); (b) moth-inspired 'surge' and (c) 'casting' navigation behaviors.
Figure 8: (a) The search area, measuring 8.2 m × 3.3 m. The odor source is a humidifier that generates ethanol plumes. An obstacle initially blocks the robot's view of the plume and obstructs navigation. Two perpendicular electric fans create unidirectional or non-unidirectional airflow, and additional objects test the visual reasoning capability of the LLM model. (b) Schematic diagram of the search area; four different robot initial positions in the downwind area were used in the repeated tests.
Figure 9: (a) The robot platform includes a camera for vision sensing and a chemical sensor and an anemometer for olfaction sensing. (b) The computation system consists of the robot platform and a remote PC; the dotted line represents a wireless link and the solid line a physical connection.
Figure 10: Trajectory graph of a successful sample run with the proposed multi-modal LLM-based OSL algorithm in a unidirectional airflow environment. The navigation behaviors are color-separated. The obstacle is indicated by an orange box, and the odor source is represented by a red point with the surrounding circular source declaration region.
Figure 11: Examples of 'environment sensing' and 'reasoning output' by the GPT-4o model.
Figure 12: Robot trajectories of repeated tests in the unidirectional airflow environment: (a–d) 'olfaction-only' (OO); (e–h) 'vision-only' (VO); (i–l) 'vision and olfaction fusion' (Fusion); and (m–p) 'LLM-based' (LLM) navigation algorithms.
Figure 13: Robot trajectories of repeated tests in the non-unidirectional airflow environment: (a–d) 'olfaction-only' (OO); (e–h) 'vision-only' (VO); (i–l) 'vision and olfaction fusion' (Fusion); and (m–p) 'LLM-based' (LLM) navigation algorithms.
Figure 14: Mean differences in success rates of the four navigation algorithms. The positive differences are statistically significant at a family-wise error rate (FWER) of 5%.
19 pages, 3190 KiB  
Article
Robotic Odor Source Localization via Vision and Olfaction Fusion Navigation Algorithm
by Sunzid Hassan, Lingxiao Wang and Khan Raqib Mahmud
Sensors 2024, 24(7), 2309; https://doi.org/10.3390/s24072309 - 5 Apr 2024
Cited by 2 | Viewed by 1835
Abstract
Robotic odor source localization (OSL) is a technology that enables mobile robots or autonomous vehicles to find an odor source in unknown environments. An effective navigation algorithm that guides the robot to approach the odor source is the key to successfully locating it. While traditional OSL approaches primarily utilize an olfaction-only strategy, guiding robots to find the odor source by tracing emitted odor plumes, our work introduces a fusion navigation algorithm that combines both vision- and olfaction-based techniques. This hybrid approach addresses challenges such as turbulent airflow, which disrupts olfaction sensing, and physical obstacles inside the search area, which may impede vision detection. In this work, we propose a hierarchical control mechanism that dynamically shifts the robot's search behavior among four strategies: Crosswind maneuver, Obstacle-Avoid Navigation, Vision-Based Navigation, and Olfaction-Based Navigation. Our methodology includes a custom-trained deep-learning model for visual target detection and a moth-inspired algorithm for Olfaction-Based Navigation. To assess the effectiveness of our approach, we implemented the proposed algorithm on a mobile robot in a search environment with obstacles. Experimental results demonstrate that our Vision and Olfaction Fusion algorithm significantly outperforms vision-only and olfaction-only methods, reducing average search time by 54% and 30%, respectively.
(This article belongs to the Section Sensors and Robotics)
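As a concrete reading of the hierarchical control mechanism described in the abstract, the sketch below arbitrates among the four behaviors by priority: obstacle avoidance pre-empts vision, vision pre-empts olfaction, and the crosswind maneuver is the fallback when no plume cue is available. The thresholds and names are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of priority-based behavior switching for vision/olfaction
# fusion OSL. Threshold values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Sensors:
    min_obstacle_dist: float   # m, from the laser distance sensor
    plume_visible: bool        # vision model detected the odor plume
    odor_ppm: float            # chemical sensor reading

def pick_behavior(s: Sensors, odor_threshold: float = 50.0) -> str:
    if s.min_obstacle_dist < 0.30:
        return "obstacle_avoid"      # safety pre-empts everything else
    if s.plume_visible:
        return "vision_based"        # steer toward the visually detected plume
    if s.odor_ppm > odor_threshold:
        return "olfaction_based"     # moth-inspired upwind tracing
    return "crosswind_maneuver"      # sweep across the wind to find the plume

# Usage: one decision per control cycle.
print(pick_behavior(Sensors(min_obstacle_dist=1.2, plume_visible=False, odor_ppm=80.0)))
```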
Figure 1: Flow diagram of the proposed method for the OSL experiment. We utilized the Turtlebot3 robot platform, equipped with a camera, a Laser Distance Sensor, an airflow sensor, a chemical sensor, etc. The robot uses three navigation behaviors (Obstacle-Avoid Navigation, Vision-Based Navigation, and Olfaction-Based Navigation) to output robot heading and linear velocity.
Figure 2: Flow diagram of the proposed OSL algorithm. There are four navigation behaviors: 'Crosswind maneuver', 'Obstacle-Avoid Navigation', 'Vision-Based Navigation', and 'Olfaction-Based Navigation'.
Figure 3: Robot notations. Robot position (x, y) and heading ψ are monitored by the built-in localization system. Wind speed u and wind direction are measured by the added anemometer in the body frame. Wind direction in the inertial frame, φ_Inertial, is derived from the robot heading ψ and the wind direction in the body frame.
Figure 4: Five directions in the robot's laser distance sensing: Left, Slightly Left, Front, Slightly Right, and Right. laser[x] denotes the distance between the robot and the object at angle x, measured by the onboard laser distance sensor.
Figure 5: Two sample frames that include humidifier odor plumes in different lighting and spatial conditions, sampled from the 243 frames used to train the vision model. All frames were captured by the Turtlebot robot in the experiment area.
Figure 6: (a) The experimental setup. The robot is initially placed in a downwind area with the objective of finding the odor source. A humidifier loaded with ethanol generates odor plumes, two electric fans placed perpendicularly create artificial wind fields, and two obstacles are placed in the search area. (b) The Turtlebot3 Waffle Pi mobile robot used in this work. In addition to the camera and Laser Distance Sensor, the robot is equipped with a chemical sensor and an anemometer for measuring chemical concentration, wind speed, and wind direction.
Figure 7: (a) Schematic diagram of the search area with the e1 (laminar airflow) setup. The five robot starting positions are used for testing the Olfaction-Based Navigation, Vision-Based Navigation, and Vision and Olfaction Fusion Navigation algorithms. (b) Schematic diagram of the search area with the e2 (turbulent airflow) setup.
Figure 8: System configuration. The system contains two main components: the Turtlebot3 and the remote PC. The solid line represents a physical connection, and the dotted line represents a wireless link.
Figure 9: (a) Flow diagram of the Olfaction-Only Navigation algorithm, with three navigation behaviors: 'Crosswind maneuver', 'Obstacle-Avoid Navigation', and 'Olfaction-Based Navigation'. (b) Flow diagram of the Vision-Only Navigation algorithm, with three navigation behaviors: 'Crosswind maneuver', 'Obstacle-Avoid Navigation', and 'Vision-Based Navigation'.
Figure 10: Robot trajectory graphs and snapshots of OSL tests with the Vision and Olfaction Fusion Navigation algorithm in the turbulent airflow environment.
Figure 11: Robot trajectories of repeated tests in six navigation algorithm and airflow environment combinations. Trajectories in the laminar airflow environment: (a) e1o, Olfaction-Only Navigation; (b) e1v, Vision-Only Navigation; (c) e1vo, Vision and Olfaction Fusion Navigation. Trajectories in the turbulent airflow environment: (d) e2o, Olfaction-Only Navigation; (e) e2v, Vision-Only Navigation; (f) e2vo, Vision and Olfaction Fusion Navigation. The behavior the robot was following at each point is shown in the trajectory: Crosswind (Crosswind maneuver), Obstacle (Obstacle-Avoid Navigation), Olfaction (Olfaction-Based Navigation), and Vision (Vision-Based Navigation). The five robot starting positions are highlighted with blue stars, the obstacles are the orange boxes, and the odor source is the red point with the surrounding circular source declaration region.
24 pages, 6985 KiB  
Article
Adaptive Space-Aware Infotaxis II as a Strategy for Odor Source Localization
by Shiqi Liu, Yan Zhang and Shurui Fan
Entropy 2024, 26(4), 302; https://doi.org/10.3390/e26040302 - 29 Mar 2024
Cited by 1 | Viewed by 1065
Abstract
Mobile robot olfaction of toxic and hazardous odor sources is of great significance in anti-terrorism, disaster prevention, and control scenarios. To address the low search efficiency of current odor source localization strategies and their tendency to fall into local optima, this paper proposes the adaptive space-aware Infotaxis II (ASAInfotaxis II) algorithm. To improve tracking efficiency, a new reward function is designed that incorporates spatial information and emphasizes the robot's exploration behavior. Because this enhancement can make exploration excessive, an adaptive navigation-update mechanism adjusts the robot's movement range in real time through information entropy, preventing the over-exploration that can trap the robot in a local optimum during the search. Subsequently, an improved adaptive cosine salp swarm algorithm is applied to determine the optimal information-adaptive parameter. Comparative simulation experiments between ASAInfotaxis II and classical search strategies, carried out in 2D and 3D scenarios and assessing search efficiency and search behavior, show that ASAInfotaxis II improves search efficiency to a larger extent and achieves a better balance between exploration and exploitation behaviors.
(This article belongs to the Section Multidisciplinary Applications)
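For readers unfamiliar with the Infotaxis family this abstract builds on, the sketch below shows its core step under simplified assumptions: keep a grid belief over the source location and pick the admissible move that minimizes the expected posterior entropy. ASAInfotaxis II adds the space-aware reward and the adaptive movement range on top of this; those extensions, and the plume encounter model supplying the detection likelihoods, are not reproduced here.

```python
# Minimal sketch of the core Infotaxis step: choose the move that minimizes
# the expected entropy of the source-location belief after the next reading.
# The detection-likelihood grids would come from a plume dispersion model.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def expected_entropy(belief, likelihood_hit):
    """Expected posterior entropy after sensing at a candidate position.
    likelihood_hit[i, j] = P(detection at that position | source in cell (i, j))."""
    p_hit = float((belief * likelihood_hit).sum())  # marginal detection probability
    h = 0.0
    for like, p_outcome in ((likelihood_hit, p_hit), (1.0 - likelihood_hit, 1.0 - p_hit)):
        post = belief * like
        s = post.sum()
        if s > 0 and p_outcome > 0:
            h += p_outcome * entropy(post / s)   # Bayes update, then entropy
    return h

def infotaxis_step(belief, pos, moves, likelihood_for):
    """Pick the admissible move with the lowest expected posterior entropy.
    likelihood_for(pos) returns the detection-likelihood grid for that position."""
    def score(move):
        nxt = (pos[0] + move[0], pos[1] + move[1])
        return expected_entropy(belief, likelihood_for(nxt))
    return min(moves, key=score)
```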
Figure 1: Schematic diagram of admissible action sets in a 2D space: (a) 4-direction, (b) 6-direction, and (c) 8-direction sets.
Figure 2: Schematic diagram of admissible action sets in a 3D space: (a) 6-direction set, left view; (b) 6-direction set, main view; (c) 6-direction set, top view; (d) 14-direction set, left view; (e) 14-direction set, main view; (f) 14-direction set, top view; (g) 26-direction set, left view; (h) 26-direction set, main view; and (i) 26-direction set, top view.
Figure 3: Variation curves of the navigation correction factor under different information adaptive parameters.
Figure 4: Multi-peak optimization problem. 2D parameters: D = 0.5 m²/s, τ = 100 s, R = 0.6 Hz, V = 1 m/s, a = 0.1 m, x ∈ [0, 9], y ∈ [0, 8], odor source location (1, 6.4). 3D parameters: D = 0.6 m²/s, τ = 200 s, R = 5 Hz, V = 1 m/s, a = 0.2 m, x ∈ [0, 18], y ∈ [0, 18], z ∈ [0, 18], odor source location (2, 2, 5). (a) The 4-direction set in 2D; (b) the 6-direction set in 3D.
Figure 5: (a) Convergence factor curve for a conventional scheme. (b) Cosine convergence factor variation curve.
Figure 6: Adaptive weight change curve.
Figure 7: Box plots of ASAInfotaxis II against the other cognitive search strategies for various admissible action sets in 2D scenarios, where A = Infotaxis, B = Infotaxis II, C = Entrotaxis, D = Sinfotaxis, E = Space-aware Infotaxis, F = SAInfotaxis II, G = ESAInfotaxis II, and H = ASAInfotaxis II: (a) 4-direction, (b) 6-direction, and (c) 8-direction sets. The black "+" indicates the mean of the data, the red line the median, and the black "○" the outliers.
Figure 8: Mean search time in a 2D scene, where A = Infotaxis, B = Infotaxis II, C = Entrotaxis, D = Sinfotaxis, E = Space-aware Infotaxis, F = SAInfotaxis II, G = ESAInfotaxis II, and H = ASAInfotaxis II.
Figure 9: Comparison of search paths under the 4-direction admissible action set: (a) Infotaxis, 209 steps; (b) Infotaxis II, 167 steps; (c) Entrotaxis, 187 steps; (d) Sinfotaxis, 267 steps; (e) Space-aware Infotaxis, 195 steps; (f) SAInfotaxis II, 199 steps; (g) ESAInfotaxis II, 118 steps; and (h) ASAInfotaxis II, 96 steps. The orange star indicates the true source location, the green square the initial robot position, and the red line the robot's trajectory; red dots indicate zero measurements and black crosses non-zero measurements.
Figure 10: Comparison of the information-gathering rate curves in a 2D scene, where SAInfotaxis II = SAI II, ESAInfotaxis II = ESAI II, and ASAInfotaxis II = ASAI II.
Figure 11: Comparison of the arrival-time PDFs in a 2D scene, from the point (7, 4), for ASAInfotaxis II and several cognitive strategies, where SAInfotaxis II = SAI II, ESAInfotaxis II = ESAI II, and ASAInfotaxis II = ASAI II.
Figure 12: Box plots of ASAInfotaxis II against the other cognitive search strategies for various admissible action sets in 3D scenarios, where A = Infotaxis, B = Infotaxis II, C = Entrotaxis, D = Sinfotaxis, E = Space-aware Infotaxis, F = SAInfotaxis II, G = ESAInfotaxis II, and H = ASAInfotaxis II: (a) 6-direction, (b) 14-direction, and (c) 26-direction sets. The black "+" indicates the mean of the data, the red line the median, and the black "○" the outliers.
Figure 13: Mean search time in 3D scenarios, where A = Infotaxis, B = Infotaxis II, C = Entrotaxis, D = Sinfotaxis, E = Space-aware Infotaxis, F = SAInfotaxis II, G = ESAInfotaxis II, and H = ASAInfotaxis II.
Figure 14: Comparison of search paths under the 6-direction admissible action set: (a) Infotaxis, 77 steps; (b) Infotaxis II, 283 steps; (c) Entrotaxis, 129 steps; (d) Sinfotaxis, 81 steps; (e) Space-aware Infotaxis, 113 steps; (f) SAInfotaxis II, 500 steps; (g) ESAInfotaxis II, 21 steps; and (h) ASAInfotaxis II, 12 steps. The orange star indicates the true source location, the green square the initial robot position, and the red line the robot's trajectory; red dots indicate zero measurements and black crosses non-zero measurements.
Figure 15: Comparison of information-gathering rate curves in 3D scenarios, where SAInfotaxis II = SAI II, ESAInfotaxis II = ESAI II, and ASAInfotaxis II = ASAI II.
Figure 16: Comparison of the arrival-time PDFs in 3D scenarios, from the point (6, 14, 5), for ASAInfotaxis II and several cognitive strategies, where SAInfotaxis II = SAI II, ESAInfotaxis II = ESAI II, and ASAInfotaxis II = ASAI II.
Figure A1: Convergence curves of the swarm intelligence algorithms.
22 pages, 6079 KiB  
Article
Information-Driven Gas Distribution Mapping for Autonomous Mobile Robots
by Andres Gongora, Javier Monroy, Faezeh Rahbar, Chiara Ercolani, Javier Gonzalez-Jimenez and Alcherio Martinoli
Sensors 2023, 23(12), 5387; https://doi.org/10.3390/s23125387 - 7 Jun 2023
Cited by 3 | Viewed by 1981
Abstract
The ability to sense airborne pollutants with mobile robots provides a valuable asset for domains such as industrial safety and environmental monitoring. Oftentimes, this involves detecting how certain gases are spread out in the environment, commonly referred to as a gas distribution map, to subsequently take actions that depend on the collected information. Since the majority of gas transducers require physical contact with the analyte to sense it, the generation of such a map usually involves slow and laborious data collection from all key locations. In this regard, this paper proposes an efficient exploration algorithm for 2D gas distribution mapping with an autonomous mobile robot. Our proposal combines a Gaussian Markov random field estimator based on gas and wind flow measurements, devised for very sparse sample sizes and indoor environments, with a partially observable Markov decision process to close the robot's control loop. The advantage of this approach is that the gas map is not only continuously updated, but can also be leveraged to choose the next location based on how much information it provides. The exploration consequently adapts to how the gas is distributed during run time, leading to an efficient sampling path and, in turn, a complete gas map with a relatively low number of measurements. Furthermore, it also accounts for wind currents in the environment, which improves the reliability of the final gas map even in the presence of obstacles or when the gas distribution diverges from an ideal gas plume. Finally, we report various simulation experiments to evaluate our proposal against a computer-generated fluid dynamics ground truth, as well as physical experiments in a wind tunnel.
(This article belongs to the Special Issue Robotics for Environment Sensing)
Figure 1: Our proposed control loop.
Figure 2: Functional diagram of the GW-GMRF algorithm. It takes as inputs a set of gas and wind samples from known positions, usually from an e-nose and anemometer carried by a robot, as well as a map of the environment showing the presence of obstacles and possible wind inlets and outlets. The outputs are the estimated gas and wind maps and their associated uncertainties, represented as Gaussian distributions on a grid map.
Figure 3: Illustration of multi-step path planning. Although each movement step is a straight line, steps can be combined in sequence to assess the reward for a path that (a) bends around corners and obstacles in complex environments or (b) reaches high-reward areas that would otherwise be blocked off by a low-reward area. In the latter example, a robot planning only one step would stay in the medium-reward area (a reward of 3 instead of 1), but by planning several steps it realizes that the blue path has a better total reward than the red one and should thus be preferred (a total reward of 21 instead of 9).
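The trade-off in Figure 3 is easy to reproduce with a toy enumeration. The grid values and four-connected moves below are invented to match the caption's numbers; the point is only that summing per-cell rewards over a multi-step path can justify crossing a low-reward cell to reach a high-reward region.

```python
# Toy multi-step lookahead: enumerate short move sequences and sum per-cell
# rewards. Grid values are invented to mirror the Figure 3 example.
from itertools import product

GRID = {  # (x, y) -> information reward of visiting that cell
    (0, 1): 3, (0, 2): 3, (0, 3): 3,    # medium-reward area to the north
    (1, 0): 1, (2, 0): 10, (3, 0): 10,  # low-reward cell guarding a high-reward area
}
MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def best_path(start, horizon):
    best_seq, best_reward = None, float("-inf")
    for seq in product(MOVES, repeat=horizon):
        pos, total, visited = start, 0, {start}
        for step in seq:
            dx, dy = MOVES[step]
            pos = (pos[0] + dx, pos[1] + dy)
            if pos in visited:            # forbid revisiting (cf. Figure 5)
                total = float("-inf")
                break
            visited.add(pos)
            total += GRID.get(pos, 0)
        if total > best_reward:
            best_seq, best_reward = seq, total
    return best_seq, best_reward

print(best_path((0, 0), 1))  # ('N',), 3: one-step planning stays in the medium area
print(best_path((0, 0), 3))  # ('E', 'E', 'E'), 21: multi-step crosses the low-reward cell
```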
Figure 4: Control loop for IGDM. After acquiring the first gas and wind samples, and as long as the halting criterion is not met, IGDM estimates the gas map with a coarse (low-resolution) GW-GMRF estimator. This map is then fed into the POMDP to simulate the outcome of the reward function R for the possible movement combinations in A (accounting for obstacles using the robot's obstacle map) and choose the optimal movement path. After, or while, executing the first action in that path, the robot takes new sensor readings and repeats the process. All gas and wind observations are stored along with information on when and where they were collected, so that IGDM can produce a high-resolution GW-GMRF estimate at the end of the exploration or (although omitted in the illustration) at any other moment the robot might need the latest maps.
Figure 5: Example of the possible movement options for the robot. The current position is the filled circle, and the first movement is assumed to be North (A). After reaching position A, the robot may move in any direction except South, to avoid backtracking (red arrows) and revisiting the same location twice. Limiting the movement options reduces the number of possible combinations to be checked. Note that some positions are reachable over different paths (e.g., D can be reached over either B or C) that might have a different POMDP reward and/or avoid obstacles.
Figure 6: Floor plans of the simulated scenarios. Red crosses depict the locations of the gas source (release point), and labels A–F mark the starting points for the robot. Only one inlet/outlet in each scenario is left open, while the others are marked in red.
Figure 7: IGDM exploration of scenario I, showing the ground truth (top row) and the estimates after 20, 80, and 120 m of exploration. On the robot path map, the starting position is denoted by a yellow square, and the robot's position in the gas and wind maps is indicated by an arrow. The gas source is shown as a red circle in the ground truth.
Figure 8: IGDM exploration of scenario II, showing the ground truth (top row) and the estimates after 40 and 100 m of exploration. On the robot path map, the starting position is denoted by a yellow square, and the robot's position in the gas and wind maps is indicated by an arrow. The gas source is shown as a red circle in the ground truth.
Figure 9: Example paths for the random exploration strategies. (a) Brownian strategy that selects uniformly among movements in the N, W, S, or E direction with the same step size as IGDM. (b) Similar to (a), but undoing the last movement is forbidden. (c) Advance-until-collision strategy, turning by a random angle when an obstacle is reached.
Figure 10: Comparison of the RMSE between the estimated gas map and the simulation ground truth with respect to the length of the robot's exploration path. Each line represents the average RMSE over the six starting positions (A–F) in each scenario, and the colored band around it represents one standard deviation.
Figure 11: Maps of the wind tunnel where the physical experiments were conducted, (a) without and (b) with obstacles. The tunnel is 18 m long, but the experimental area (depicted in white) where the robot can move was limited to 12 m to allow for safety and service areas on each side (highlighted in gray). The wind speed was 1 m/s from right to left, as indicated by the arrows; the gas source was located at the border of the experimental area at the position marked with a cross; and the robot started close to the entrance of the tunnel, marked with a circle.
Figure 12: Photos of the experimental setup in the wind tunnel. (left) The obstacle course was built with stacked cardboard boxes to block the gas from the emission point (the obstacles in front of it were moved aside for the picture). (right) The Khepera IV robot equipped with a gas and wind sensor board, next to some obstacles for size comparison.
Figure 13: Results of the physical experiments in (a) the empty wind tunnel after 5 m and (b) after 25 m of exploration, and (c) the wind tunnel with obstacles. The robot's starting position is highlighted as a yellow square, and its position when the estimates were computed is highlighted with a cross. The red arrows on the estimated gas maps indicate the side of the wind tunnel (outside the experiment area) where the gas source was placed.
22 pages, 3002 KiB  
Article
A Comparison of Multiple Odor Source Localization Algorithms
by Marshall Staples, Chris Hugenholtz, Alex Serrano-Ramirez, Thomas E. Barchyn and Mozhou Gao
Sensors 2023, 23(10), 4799; https://doi.org/10.3390/s23104799 - 16 May 2023
Cited by 1 | Viewed by 2019
Abstract
There are two primary algorithms for autonomous multiple odor source localization (MOSL) in an environment with turbulent fluid flow: Independent Posteriors (IP) and Dempster–Shafer (DS) theory algorithms. Both of these algorithms use a form of occupancy grid mapping to map the probability that a given location is a source. They have potential applications to assist in locating emitting sources using mobile point sensors. However, the performance and limitations of these two algorithms are currently unknown, and a better understanding of their effectiveness under various conditions is required prior to application. To address this knowledge gap, we tested the response of both algorithms to different environmental and odor search parameters. The localization performance of the algorithms was measured using the earth mover's distance. Results indicate that the IP algorithm outperformed the DS theory algorithm by minimizing source attribution in locations where there were no sources, while correctly identifying source locations. The DS theory algorithm also identified actual sources correctly but incorrectly attributed emissions to many locations where there were no sources. These results suggest that the IP algorithm offers a more appropriate approach for solving the MOSL problem in environments with turbulent fluid flow.
(This article belongs to the Section Physical Sensors)
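Both algorithms compared here maintain a per-cell source probability map. As a simplified illustration of the Independent Posteriors idea (not the paper's exact formulation), the sketch below updates each cell's Bernoulli posterior independently from a sensor reading, with the likelihood grids assumed to come from a plume dispersion model.

```python
# Minimal sketch of an Independent Posteriors occupancy-grid update: every
# cell keeps its own P(cell contains a source), updated independently.
import numpy as np

def ip_update(p_source, like_if_source, like_if_empty):
    """One Bayes update per cell, all cells treated independently.
    p_source:        (H, W) prior P(a source occupies the cell)
    like_if_source:  (H, W) P(observed reading | cell holds a source)
    like_if_empty:   (H, W) P(observed reading | cell holds no source)
    """
    num = p_source * like_if_source
    den = num + (1.0 - p_source) * like_if_empty
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(den > 0, num / den, p_source)

# Usage: start uninformative and fold in each measurement's likelihood grids.
p = np.full((20, 30), 0.5)
L_src = np.full((20, 30), 0.7)   # placeholders; a dispersion model supplies these
L_emp = np.full((20, 30), 0.4)
p = ip_update(p, L_src, L_emp)
```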
Figure 1: The airmass path.
Figure 2: Configuration of the MOSL simulation.
Figure 3: Plume dispersion model.
Figure 4: The source configurations with the wind field. The orange points indicate the source locations, and the arrows represent the wind velocity, with arrow length representing magnitude. (a) Crosswind source configuration; (b) staggered source configuration; (c) inline source configuration.
Figure 5: Occupancy grid maps at the normal parameter level for (a) the IP algorithm with the crosswind source configuration, (b) the DS algorithm with the crosswind source configuration, (c) the IP algorithm with the staggered source configuration, (d) the DS algorithm with the staggered source configuration, (e) the IP algorithm with the inline source configuration, and (f) the DS algorithm with the inline source configuration.
Figure 6: Occupancy grid maps for (a) the IP algorithm with the crosswind source configuration at the low wind speed parameter level, (b) the DS algorithm with the crosswind source configuration at the low wind speed parameter level, (c) the IP algorithm with the crosswind source configuration at the high wind speed parameter level, (d) the DS algorithm with the crosswind source configuration at the high wind speed parameter level, (e) the IP algorithm with the inline source configuration at the low release rate parameter level, and (f) the DS algorithm with the inline source configuration at the low release rate parameter level.
Figure 7: Violin plots of EMD scores for (a) the staggered, (b) the inline, and (c) the crosswind source configurations.
12 pages, 12530 KiB  
Article
Robust Moth-Inspired Algorithm for Odor Source Localization Using Multimodal Information
by Shunsuke Shigaki, Mayu Yamada, Daisuke Kurabayashi and Koh Hosoda
Sensors 2023, 23(3), 1475; https://doi.org/10.3390/s23031475 - 28 Jan 2023
Cited by 9 | Viewed by 2780
Abstract
Odor-source localization, by which one finds the source of an odor by detecting the odor itself, is an important ability for searching for leaking gases, explosives, and disaster survivors. Although many animals possess this ability, research on implementing olfaction in robotics is still developing. We developed a novel algorithm that enables a robot to localize an odor source indoors and outdoors, taking inspiration from the adult male silk moth, which we used as the target organism. We measured the female-localization behavior of the silk moth using a virtual reality (VR) system to obtain the relationship between multiple sensory stimuli and behavior during localization. The results showed that there were two types of search, active and inactive, depending on the direction of odor and wind detection. In the active search, the silk moth moved faster as the odor-detection frequency increased, whereas in the inactive search it always moved slower, at all odor-detection frequencies. This phenomenon was formulated as a robust moth-inspired (RMI) algorithm and implemented on a ground-running robot. Experiments on odor-source localization in three environments with different degrees of environmental complexity showed that the RMI algorithm has the best localization performance among conventional moth-inspired algorithms. Analysis of the trajectories showed that the robot could move smoothly through the odor plume even as the environment became more complex. This indicates that switching and modulating behavior based on the direction of odor and wind detection contributes to the adaptability and robustness of odor-source localization.
(This article belongs to the Special Issue Sensors for Olfaction and Taste)
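The RMI algorithm modulates the classic moth-inspired surge/cast pattern with the odor-detection frequency and the direction of odor and wind detection. The sketch below shows only the underlying surge/cast skeleton with invented timing constants; the active/inactive speed modulation that distinguishes RMI is omitted.

```python
# Minimal sketch of a moth-inspired surge/cast state machine: surge upwind
# after an odor hit, then cast across the wind once the plume is lost.
import time

def surge_cast_step(odor_detected, state):
    """One control tick: return (behavior, steering offset from upwind in rad)."""
    now = time.monotonic()
    if odor_detected:
        state["last_hit"] = now
    elif "last_hit" not in state:
        state["last_hit"] = now - 10.0   # no detection yet: start by casting
    lost_for = now - state["last_hit"]
    if lost_for < 1.0:
        return "surge", 0.0              # recent odor hit: drive straight upwind
    # Casting: sweep across the wind, alternating sides as the time since the
    # last hit grows, to re-acquire the intermittent plume.
    side = 1 if int(lost_for) % 2 == 0 else -1
    return "cast", side * 0.8

# Usage: call once per control cycle; steering is applied relative to the wind.
state = {}
print(surge_cast_step(odor_detected=False, state=state))  # ('cast', 0.8)
```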
Figure 1: Schematic diagram of the VR system for an insect. The VR device has odor, wind, and visual stimulators and is connected to a virtual space on the computer, allowing the insect to perform odor-source localization virtually.
Figure 2: Analysis of silk moth velocity changes in the VR experiments. (a) Odor detection frequency and changes in translational and angular velocity in Case A. (b) Odor detection frequency and changes in translational and angular velocity in Case B. (c) Effect of angular velocity on visual stimulus change.
Figure 3: Outline of the typical behavioral patterns during female localization by the silk moth, and flowchart of the proposed algorithm.
Figure 4: Image of the constructed robot and the system configuration.
Figure 5: Schematic diagram of the experimental field for each scenario.
Figure 6: Typical trajectories for Scenarios A and B.
Figure 7: Results of 20 repeated experiments: (a) search success rate, (b) localization time, and (c) RMSE of the trajectory. Groups sharing the same letter are not significantly different.
Figure 8: Experimental results for Scenario C. (a) Typical trajectory for each algorithm. (b) Search success rate. (c) Localization time. (d) RMSE of the trajectory. Groups sharing the same letter are not significantly different.
32 pages, 3700 KiB  
Article
A Novel Nanosafety Approach Using Cell Painting, Metabolomics, and Lipidomics Captures the Cellular and Molecular Phenotypes Induced by the Unintentionally Formed Metal-Based (Nano)Particles
by Andi Alijagic, Nikolai Scherbak, Oleksandr Kotlyar, Patrik Karlsson, Xuying Wang, Inger Odnevall, Oldřich Benada, Ali Amiryousefi, Lena Andersson, Alexander Persson, Jenny Felth, Henrik Andersson, Maria Larsson, Alexander Hedbrant, Samira Salihovic, Tuulia Hyötyläinen, Dirk Repsilber, Eva Särndahl and Magnus Engwall
Cells 2023, 12(2), 281; https://doi.org/10.3390/cells12020281 - 11 Jan 2023
Cited by 12 | Viewed by 4014
Abstract
Additive manufacturing (AM), or industrial 3D printing, uses cutting-edge technologies and materials to produce a variety of complex products. However, the effects on human cells of the unintentionally emitted AM (nano)particles (AMPs) following inhalation require further investigation. The physicochemical characterization of the AMPs, extracted from the filter of a Laser Powder Bed Fusion (L-PBF) 3D printer of iron-based materials, disclosed their complexity in terms of size, shape, and chemistry. Cell Painting, a high-content screening (HCS) assay, was used to detect the subtle morphological changes elicited by the AMPs at single-cell resolution. Profiling of the cell morphological phenotypes disclosed prominent concentration-dependent effects on the cytoskeleton, mitochondria, and the membranous structures of the cell. Furthermore, lipidomics confirmed that the AMPs induced extensive membrane remodeling in the lung epithelial and macrophage co-culture cell model. To further elucidate the biological mechanisms of action, targeted metabolomics unveiled several inflammation-related metabolites regulating the cell response to AMP exposure. Overall, AMP exposure led to internalization, oxidative stress, cytoskeleton disruption, mitochondrial activation, membrane remodeling, and metabolic reprogramming of the lung epithelial cells and macrophages. We propose the integration of Cell Painting with metabolomics and lipidomics as an advanced nanosafety methodology that increases the ability to capture the cellular and molecular phenotypes and the biological mechanisms relevant to (nano)particle exposure.
Graphical abstract

Figure 1: Schematic overview of the experimental design. To investigate the toxic effects on human cells of the (nano)particles unintentionally emitted in metal AM occupational settings (AMPs), the study was designed in three main phases. (A) Extensive physicochemical characterization of the AMPs, performed using a wide spectrum of analytical methods to support data interpretation and comparability; in addition, the AMP dispersions were tested for the presence of endotoxin. (B) Plate-based assays were employed to understand the AMP effects on cell viability, ROS production, and internalization. High-content screening (HCS) with a Cell Painting assay, followed by univariate, unsupervised multivariate, and supervised multivariate analyses, was performed to understand the impact of AMPs on the cells' morphological profiles (U-2 OS cells). Mito: mitochondria; ER: endoplasmic reticulum; AGP: actin, Golgi, plasma membrane. (C) Lipidomic and metabolomic analyses were performed in a co-culture model (A549/THP-1 cells) mimicking the lung tissue, a potential key target organ for AMPs, to close the knowledge gap on the AMP mechanisms of action (MoAs). Figure created with Biorender.com.
Figure 2: Physicochemical characterization of the (nano)particles unintentionally emitted in metal AM occupational settings (AMPs). (A–F) Scanning electron microscopy (SEM) images demonstrating the size and shape characteristics of the collected AMPs. The blue arrows indicate spherical micron-sized particles (possibly the feedstock material, based on size/shape/chemical composition), the yellow arrow highlights micron-sized particles with an altered shape, and the green arrows indicate large, irregularly shaped particle clusters. The red rectangular areas in (A,C,E) are magnified in (B,D,F). The red arrows in (C,E) show nanosatellites attached to the surface of the micron-sized particles, and the white dashed areas in (C,E) demonstrate alterations in the micron-sized particle surface topography. The green arrows in (E,F) show the tendency of the micron-sized particles to adhere to a large number of nanoparticle aggregates/agglomerates. The transmission electron microscopy (TEM) image and the graph in the upper right corner of (F) report the nanoparticle size distribution and shape. (G) SEM combined with energy dispersive spectroscopy (EDS) analysis of the bulk chemical composition (relative metal composition of Fe, Cr, Mn, Mo, Al, Si, and V) of the AMPs (see also Figure S1). (H) X-ray photoelectron spectroscopy (XPS) analysis of the relative oxidized metal composition (Fe, Cr, Mn) of the outermost surface of the AMPs (both the micron-sized particles (surface) and the nanoparticles (surface/bulk)).
Figure 3: Impact of the AMPs on cell viability/metabolic activity, ROS production, and internalization. Plate-based assays with U-2 OS cells exposed to a range of AMP concentrations (0–100 µg/mL) disclose that: (A) AMPs did not impair cell viability (blue: Hoechst 33342 labelling viable nuclei; green: Image-iT DEAD Green viability stain labelling unviable nuclei); however, AMPs reduced the metabolic activity of cells, as shown by the Alamar Blue assay (graph on the right; RFI: relative fluorescence units). (B) AMPs induced ROS production and exerted oxidative stress in the exposed cells (blue: Hoechst 33342 nuclear labelling; red: CellROX Deep Red ROS labelling). The graph on the right summarizes the relative fluorescence obtained from the cells after ROS labelling; the fluorescent signal was quantified using ImageJ software, and the data are reported as the mean of N = 150 cells per condition (CTCF: corrected total cell fluorescence). * p < 0.05; *** p < 0.001. (C) Unexposed U-2 OS cells observed under the scanning electron microscope (SEM) at 10,000× magnification. (D–F) U-2 OS cells exposed to 50 µg/mL AMPs for 24 h and imaged under SEM. Red arrows indicate AMP aggregates/agglomerates associated with the cell's outer surface (D) or partially covered by the cell membrane (E). Energy dispersive spectroscopy (EDS) spectra in the upper right corners of (E,F) indicate the composition of the electron-dense AMP areas (Pt: platinum).
Figure 4: Cell Painting labelling patterns in the U-2 OS cells. Representative images of control cells and cells exposed to 0.156, 0.313, 0.625, 1.25, 2.5, 5, 25, 50, and 100 µg/mL of the AMPs, live-labeled for the mitochondria (Mito), then fixed, permeabilized, and labeled with the remaining fluorescent probes for the nuclei (DNA), actin/Golgi/plasma membrane (AGP), endoplasmic reticulum (ER), and RNA/nucleoli (RNA). Distinct morphological effects of the AMPs, observed qualitatively, are evident in each channel, with the exception of the DNA-related features, where morphological changes can be observed only to a lower extent. All images were acquired at 20× magnification.
Figure 5: Quantitative summary of the morphological effects of the AMPs. Treatment-level feature data were normalized and scaled per plate (N = 3), corrected for batch effect, and then averaged. Three heatmaps were established, corresponding to the morphological features organized by compartment: (A) cytoplasm (962 features), (B) nuclei (976 features), and (C) cell (998 features); by feature group: correlation, intensity, radial distribution, and texture; and by fluorescent channel: nuclei (DNA), actin cytoskeleton/Golgi/plasma membrane (AGP), endoplasmic reticulum (ER), RNA/nucleoli (RNA), and mitochondria (Mito). The colors represent the fold change in each measured feature with respect to the unexposed control cells. Rows correspond to the individual AMP concentrations (0.156–100 µg/mL), in descending order from top to bottom; columns represent the individual morphological features. Data were derived from 435,678 single-cell profiles distributed across six technical replicate wells of three microplates/biological replicates (N = 18 wells in total). (D) Boxplots for the feature Texture_Entropy_AGP_5_02_256, shown as an example of a feature that clearly depends on the AMP concentration (x-axis). (E) Uniform manifold approximation and projection (UMAP) applied at the image level to the median morphological profiles of the U-2 OS cells exposed to the AMPs; the color legend indicates the AMP concentrations (µg/mL). (F) Results of the sparse partial least squares discriminant analysis (sPLS-DA) predictions, averaged over three cross-validation runs, as a scatterplot. The predicted concentration class (y-axis) is shown as dependent on the true concentration class (x-axis); circle radius visualizes the frequency of each result, with numbers given as text for values larger than 20. Sensitivity for the prediction of large AMP concentrations was 82%; specificity was 94%. AMP concentrations were considered "small" if less than or equal to 1.25 µg/mL. A clear concentration dependency, with predictive potential for new samples, was found for AMP concentrations larger than 2.5 µg/mL.
Figure 6: Effects of the AMPs on the lipidome and polar metabolome. The fold changes (blue and red) of the top 25 of 73 lipids identified via untargeted lipidomics (A) and of the top 25 of 63 polar metabolites identified via targeted metabolomics (C) are represented as feature-clustered heatmaps. Each column within the heatmaps represents one of three biological replicates with two technical repetitions. SiO2 was used as a positive control. Volcano plots show the up-regulation (red) or down-regulation (blue) of lipids (B) and polar metabolites (D) after 24 h of exposure to 25 μg/mL AMPs, plotting the log2 fold change (FC) of the relative abundance of lipids/polar metabolites in AMP-exposed versus control cells against the −log10 (p-value) between exposed and control samples. The volcano plots (B,D) summarize three biological replicates with two technical repetitions.
17 pages, 5209 KiB  
Article
Mobile Robot Gas Source Localization Using SLAM-GDM with a Graphene-Based Gas Sensor
by Wan Abdul Syaqur Norzam, Huzein Fahmi Hawari, Kamarulzaman Kamarudin, Zaffry Hadi Mohd Juffry, Nurul Athirah Abu Hussein, Monika Gupta and Abdulnasser Nabil Abdullah
Electronics 2023, 12(1), 171; https://doi.org/10.3390/electronics12010171 - 30 Dec 2022
Cited by 4 | Viewed by 2717
Abstract
Mobile olfaction is one of the applications of mobile robots. Metal oxide (MOX) sensors are the most popular gas sensors for mobile robots. However, they have drawbacks such as high power consumption, a high operating temperature, and a long recovery time. This research compares a reduced graphene oxide (RGO) sensor with the traditionally used MOX sensor on a mobile robot. The method uses a map created by simultaneous localization and mapping (SLAM), combined with gas distribution mapping (GDM), to draw the gas distribution on the map and locate the gas source. RGO and MOX were tested in the lab for their response to 100 and 300 ppm ethanol: RGO showed 56% and 54% faster response times and 33% and 57% shorter recovery times than MOX, respectively. In the experiment, one gas source (a 95% ethanol solution) was placed in the lab, and the mobile robot ran through the map 7 min and 12 min after the source was set, with five repetitions. The average distance error of the predicted source from the actual location was 19.52 cm and 30.28 cm using MOX and 25.24 cm and 30.60 cm using the RGO gas sensor for the 7th- and 12th-min trials, respectively. The errors show that the gas source location predicted with MOX is slightly closer (by 1.0% in the 12th-min trial) to the actual site than that predicted with RGO. However, RGO also shows a 0.35–8.33% larger gas sensing area than MOX based on the binary image of the SLAM-GDM map, which indicates that RGO was more sensitive than MOX in the trial runs. Regarding power consumption, RGO consumes an average of 294.605 mW, 56.33% less than MOX, which consumes an average of 674.565 mW. The experiment shows that RGO can perform as well as MOX in mobile olfaction applications but with lower power consumption and a lower operating temperature.
(This article belongs to the Special Issue Recent Advances in Industrial Robots)
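The SLAM-GDM combination overlays a gas distribution map on the SLAM occupancy map and takes the concentration peak as the predicted source location. Below is a minimal kernel-based GDM sketch with invented grid and kernel parameters; the paper's exact GDM formulation is not reproduced here.

```python
# Minimal sketch of a kernel-based gas distribution map (GDM): each reading,
# taken at a SLAM-estimated pose, is spread onto the grid with a Gaussian
# kernel, and the map stores the weighted mean concentration per cell.
import numpy as np

def gdm_mean_map(positions, readings, shape=(50, 80), cell=0.1, sigma=0.3):
    """positions: (x, y) sample poses in metres (from SLAM); readings: gas values."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    cx, cy = xs * cell, ys * cell            # metric coordinates of cell centres
    weights = np.zeros(shape)
    weighted = np.zeros(shape)
    for (px, py), r in zip(positions, readings):
        w = np.exp(-((cx - px) ** 2 + (cy - py) ** 2) / (2 * sigma ** 2))
        weights += w
        weighted += w * r
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(weights > 1e-9, weighted / weights, 0.0)

# Usage: the cell with the highest mean concentration is the predicted source.
m = gdm_mean_map([(1.0, 2.0), (1.2, 2.1)], [420.0, 510.0])
print(np.unravel_index(m.argmax(), m.shape))
```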
Figure 1: Bveeta mobile robot with the gas sensing PCB.
Figure 2: Full-duplex communication from (a) the base station through the wireless communication of (b) the router to (c) the workstation.
Figure 3: ROS Master and ROS Node with published and subscribed topics.
Figure 4: Illustration of the RGO preparation process from GO.
Figure 5: GDM 2D mean gas concentration map.
Figure 6: Experimental setup of the indoor lab location.
Figure 7: RGO and MOX sensor response (%) towards 100 ppm and 300 ppm ethanol. RGO: (a) 100 ppm, (b) 300 ppm; MOX: (c) 100 ppm, (d) 300 ppm.
Figure 8: Gas sensor response and recovery time measurement: (a) RGO, (b) MOX.
Figure 9: Stability testing of the gas sensors: (a) RGO, (b) MOX.
13 pages, 4804 KiB  
Article
A Simulation Framework for the Integration of Artificial Olfaction into Multi-Sensor Mobile Robots
by Pepe Ojeda, Javier Monroy and Javier Gonzalez-Jimenez
Sensors 2021, 21(6), 2041; https://doi.org/10.3390/s21062041 - 14 Mar 2021
Cited by 10 | Viewed by 3416
Abstract
The simulation of how a gas disperses in an environment is a necessary asset for the development of olfaction-based autonomous agents. A variety of simulators already exist for this purpose, but none of them allows for a sufficiently convenient integration with other types of sensing (such as vision), which hinders the development of advanced, multi-sensor olfactory robotics applications. In this work, we present a framework for the simulation of gas dispersal and sensing alongside vision by integrating GADEN, a state-of-the-art gas dispersion simulator, with Unity 3D, a video game development engine that is used in many different areas of research and helps with the creation of visually realistic, complex environments. We discuss the motivation for the development of this tool, describe its characteristics, and present some potential use cases that are based on cutting-edge research in the field of olfactory robotics. Full article
(This article belongs to the Special Issue Chemical Gas Sensors for Environment Monitoring)
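Figure 6 of this article describes how the gas puff is rendered volumetrically. As a rough illustration of that idea, the sketch below marches a view ray through a concentration field, accumulating Beer–Lambert transmittance and single-scattered light. The step counts, extinction coefficient, and the sample_conc callback are all assumptions for the sketch, not the framework's actual shader.

```python
import numpy as np

def march_ray(origin, direction, sample_conc, n_steps=64, step=0.05,
              extinction=1.5, light_dir=np.array([0.0, 1.0, 0.0])):
    """Accumulate radiance/transmittance of a gas puff along one view ray.

    sample_conc(p) -> concentration at 3D point p (a stand-in for a lookup
    into a gas snapshot produced by a dispersion simulator).
    """
    transmittance = 1.0
    radiance = 0.0
    p = origin.astype(float).copy()
    for _ in range(n_steps):
        sigma = extinction * sample_conc(p)       # local extinction coefficient
        # Secondary march towards the light to attenuate in-scattering.
        light_t, q = 1.0, p.copy()
        for _ in range(8):
            q += light_dir * step
            light_t *= np.exp(-extinction * sample_conc(q) * step)
        radiance += transmittance * light_t * sigma * step  # in-scattered light
        transmittance *= np.exp(-sigma * step)              # Beer–Lambert
        p += direction * step
    return radiance, transmittance
```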
Figures

Figure 1: Illustration of the integration of artificial vision and gas sensing systems in a realistic simulated environment. (a) Recreation of a small flat including a wide variety of objects. (b) Simulated image captured by a virtual camera, where objects have been detected and localized (blue bounding boxes). (c) Visualization of a gas plume within a similar environment.
Figure 2: Currently, the results of GADEN simulations have only been integrated for visualization with Rviz. Despite its clarity for user feedback, this visualization does not allow for a high degree of realism and is not suitable for testing image-based algorithms.
Figure 3: Flow diagram showing the different steps of the simulation. Most of the work is conducted offline to keep the computational load of the online phase low. (A) Mesh model that defines the environment; (B) airflow vector field computed by CFD; (C) filament dispersion simulation in GADEN; (D) integration and visualization of the result in Unity.
Figure 4: (a) Measurements obtained with a simulated photoionization detector at a given point over time. (b) RGB and depth images obtained from Unity's simulated camera.
Figure 5: The same gas plume, as visualized with each of the presented methods: (A) billboarded 2D assets, (B) procedural textures from OpenSimplex noise, and (C) volumetric rendering through ray marching.
Figure 6: The volumetric rendering of the gas plume is based on taking concentration samples at several points along the view ray to compute the optical depth, and with it the transmittance of the gas puff. Additional rays are cast towards the light source at each sampling point to simulate the scattering of light towards the camera.
Figure 7: Examples of simulations carried out with the proposed framework, as represented in the Unity engine. (a) The high degree of visual fidelity of the environments is especially relevant for semantics-based applications; in this example, the YOLO neural network [47] recognizes relevant items that could be sources of certain gas emissions. (b) Simulating the visuals of the gas plume is key for testing methods where the robots rely on vision to locate a gas release.
15 pages, 3764 KiB  
Project Report
Olfaction, Vision, and Semantics for Mobile Robots. Results of the IRO Project
by Javier Monroy, Jose-Raul Ruiz-Sarmiento, Francisco-Angel Moreno, Cipriano Galindo and Javier Gonzalez-Jimenez
Sensors 2019, 19(16), 3488; https://doi.org/10.3390/s19163488 - 9 Aug 2019
Cited by 7 | Viewed by 4140
Abstract
Olfaction is a valuable source of information about the environment that has not yet been sufficiently exploited in mobile robotics. Certainly, odor information can complement other sensing modalities, e.g., vision, to accomplish high-level robot activities, such as task planning or execution in human environments. This paper organizes and puts together the developments and experiences on combining olfaction and vision in robotics applications, as the result of our five-year project IRO: Improvement of the sensory and autonomous capability of Robots through Olfaction. In particular, it investigates mechanisms to exploit odor information (usually coming in the form of the type of volatile and its concentration) in problems such as object recognition and scene–activity understanding. A distinctive aspect of this research is the special attention paid to the role of semantics within the robot's perception and decision-making processes. The obtained results have improved the robot's capabilities in terms of efficiency, autonomy, and usefulness, as reported in our publications. Full article
(This article belongs to the Special Issue Gas Sensors and Smart Sensing Systems)
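The project's gas-classification experiments (Figures 3, 5 and 6 below) revolve around classifying windows of e-nose time series. A minimal sketch of that pipeline, assuming a Gaussian naive Bayes classifier from scikit-learn and hypothetical recording arrays, might look like this:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def window_features(recording, start, length):
    # recording: (n_samples, n_sensors) e-nose time series; one window
    # of it is flattened into a single feature vector.
    return recording[start:start + length].ravel()

def sliding_window_accuracy(train_recs, train_y, test_recs, test_y,
                            start, length):
    """Train on one window position/length and report test accuracy,
    mirroring the kind of sweep behind the accuracy-vs-window analysis."""
    X_tr = np.array([window_features(r, start, length) for r in train_recs])
    X_te = np.array([window_features(r, start, length) for r in test_recs])
    clf = GaussianNB().fit(X_tr, train_y)
    return clf.score(X_te, test_y)
```

Sweeping start and length over held-out recordings would reproduce the shape of the analysis shown in Figure 5, though the paper's exact features and classifier settings may differ.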
Figures

Figure 1: Picture of the e-nose prototype built for the IRO project. Its modular and compact design allows it to be easily mounted on a mobile robot and adapted to the application requirements.
Figure 2: The robots employed in the experiments: (a) Rhodon; (b) Giraff.
Figure 3: Illustration of the sensor and time-point relevance for the classification of gases: (a–d) normalized sensor relevance of an e-nose composed of five gas sensors when exposed to four different gas classes; (e) time-point relevance profile averaged over all classes; (f) mean prediction accuracy over time for window lengths of ≈{1, 10, 20} s. These results correspond to an e-nose dataset collected under semi-controlled measurement conditions as described in [34].
Figure 4: The Rhodon robot with the robotic arm used in the experiments.
Figure 5: Average classification accuracy of a naive Bayes classifier for different lengths and positions of the sliding window within the time-series e-nose data.
Figure 6: Average classification accuracy for different motion speeds using two classifiers (naive Bayes and radial-basis-function support vector machine, RBF SVM): (left) classification accuracy when training the classifiers with static data samples; (right) results when the classifiers have been trained with data collected in motion.
Figure 7: (Left) Scene from the NYUv2 dataset with segmented patches and their ids (x1 … x12). (Right) Conditional Random Field (CRF) graph built according to the patches in the NYUv2 scene (the node and relations of the wall, x10, have been omitted for clarity). The orange area illustrates the scope of a pairwise factor modeling the relations between two objects, while the blue one stands for the scope of a unary factor classifying an object according to its features. Black boxes represent the expected results from a probabilistic inference process over such a CRF.
Figure 8: Examples of information from the Robot@Home dataset. The first column presents reconstructed scenes from the sequences within the dataset; the second column shows labeled reconstructed scenes; the third to fifth columns are examples of individual point clouds from RGB-D observations labeled by propagating the annotations within the reconstructed scenes.
Figure 9: Diagram of a traditional teleoperation system (in black) and extended olfactory telerobotics (in blue). The latter requires equipping the mobile robot with additional sensors (e.g., an e-nose or an anemometer), and enhances the teleoperation user interface to display this new sensory data.
Figure 10: (Left) Ultrasonic scent diffuser and one of the gas source candidates. (Middle) User interface for teleoperating the robot, running on a laptop. (Right) Giraff telepresence robot equipped with an e-nose and an anemometer for remote sensing, and a LIDAR for self-localization.
34 pages, 2022 KiB  
Article
A Comparative Study of Bio-Inspired Odour Source Localisation Strategies from the State-Action Perspective
by João Macedo, Lino Marques and Ernesto Costa
Sensors 2019, 19(10), 2231; https://doi.org/10.3390/s19102231 - 14 May 2019
Cited by 29 | Viewed by 3502
Abstract
Locating odour sources with robots is an interesting problem with many important real-world applications. In the past years, the robotics community has adapted several bio-inspired strategies to search for odour sources in a variety of environments. This work studies and compares some of the most common strategies from a behavioural perspective with the aim of determining: (1) how different the behaviours exhibited by the strategies are for the same perceptual state; and (2) which actions are the most consensual for each perceptual state in each environment. The first step of this analysis consists of clustering the perceptual states and building histograms of the actions taken for each cluster. For (1), a histogram is made for each strategy separately, whereas for (2), a single histogram containing the actions of all strategies is produced for each cluster of states. Finally, statistical hypothesis tests are used to find the statistically significant differences between the behaviours of the strategies in each state. The data used in this study were gathered from a purpose-built simulator which accurately simulates the real-world phenomena of odour dispersion and air flow, whilst being sufficiently fast to be employed in learning and evolutionary robotics experiments. This paper also proposes an XML-inspired structure for the generated datasets that are used to store the perceptual information of the robots over the course of the simulations. These datasets may be used in learning experiments to estimate the quality of a candidate solution or to measure its novelty. Full article
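As a concrete reading of the first analysis step, the sketch below clusters perceptual states with k-means, choosing k by the average silhouette value (as in Figures 6 and 8 below), and builds per-cluster action histograms. The feature layout and action encoding are illustrative assumptions, not the paper's exact representation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def state_action_histograms(states, actions, k_range=range(2, 11)):
    """Cluster perceptual states and histogram the actions taken in each.

    states: (n, d) array of perceptual variables (e.g., odour concentration,
    wind direction, time since last detection); actions: length-n array of
    discrete action ids.
    """
    # Pick k by the average silhouette value.
    best_k = max(k_range, key=lambda k: silhouette_score(
        states, KMeans(n_clusters=k, n_init=10).fit_predict(states)))
    labels = KMeans(n_clusters=best_k, n_init=10).fit_predict(states)

    # Relative-frequency histogram of actions within each state cluster.
    hists = {}
    for c in range(best_k):
        ids, counts = np.unique(actions[labels == c], return_counts=True)
        hists[c] = dict(zip(ids.tolist(), (counts / counts.sum()).tolist()))
    return labels, hists
```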
Figures

Figure 1: Flow charts of the modified Silkworm Moth (left) and Dung Beetle (right) algorithms, adapted from [2]. θ and s are user-defined parameters that, respectively, control the amplitude of the rotations and the length of the straight motions.
Figure 2: Flow chart of the Multiphase strategy proposed by Ishida et al. [11].
Figure 3: Screenshots of the developed simulator showing the advection-dominated (left) and diffusion-dominated (right) environments used in this work. Each environment is an enclosed rectangular space containing a single odour source (thick green circle). In the advection-dominated environment, the wind flows from the left to the right of the environment, carrying the odour filaments (thin green circles). Conversely, in the diffusion-dominated environment, the weak and unstable air flow is unable to carry the odour filaments away from the source, so they cluster in its vicinity. The molecular dispersion of odour is represented by the increase of the diameter of the filaments. The wind velocity is computed on a grid covering the entire arena; the wind vector of each vertex of the grid is shown as a black line indicating its direction and speed. Each screenshot contains one mobile robot (black circle), equipped with the necessary sensors to perform odour source localisation. The red lines drawn over the robot represent the beams from its simulated laser range finder, used for obstacle detection.
Figure 4: (Top) A 100 m by 50 m cut-out of the meandering plume generated by the proposed simulator. (Bottom) Time-averaged chemical concentration emitted over 600 s (faded curves, whose transparency is proportional to the mean concentration), surrounded by the Gaussian plume model (dashed lines). The chemical plumes and Gaussian plume model were generated using the same parameters as Farrell et al. [4].
Figure 5: Plot of the chemical concentrations sensed at 2, 5, 10 and 15 m downwind from the source, averaged over 600 s. Each dataset is normalised by the data collected at 2 m downwind from the source.
Figure 6: Average silhouette values for k between 2 and 10, applied to the results of Environment 1.
Figure 7: Five state–action mappings for Environment 1. The top of each sub-figure presents the centroid of the respective state cluster; the bottom presents the stacked histograms of the relative frequency of the actions taken by each strategy in that state. A fourth bar is added to the histogram of each strategy, representing the percentage of simulation steps spent perceiving the corresponding state. (a) State 1: no odour detected yet; (b) State 2: low odour concentrations; (c) State 3: high odour concentrations; (d) State 4: contact with the plume recently lost; (e) State 5: no odour detected for an extended time period.
Figure 8: Average silhouette values for k between 2 and 10, applied to the results of Environment 2.
Figure 9: Five state–action mappings for Environment 2, with the same layout and state definitions as Figure 7.
Figure 10: Five state–action mappings for Environment 1, where the bottom of each sub-figure presents the histograms of the relative frequency of the actions taken by all strategies in the respective state, characterised by the direction of motion. A fourth column represents the relative number of simulation steps that all strategies spent perceiving the respective state. States (a–e) as in Figure 7.
Figure 11: Five state–action mappings for Environment 2, with the same layout as Figure 10.
21 pages, 6309 KiB  
Article
Multi-Domain Airflow Modeling and Ventilation Characterization Using Mobile Robots, Stationary Sensors and Machine Learning
by Victor Hernandez Bennetts, Kamarulzaman Kamarudin, Thomas Wiedemann, Tomasz Piotr Kucner, Sai Lokesh Somisetty and Achim J. Lilienthal
Sensors 2019, 19(5), 1119; https://doi.org/10.3390/s19051119 - 5 Mar 2019
Cited by 4 | Viewed by 4612
Abstract
Ventilation systems are critically important components of many public buildings and workspaces. Proper ventilation is often crucial for preventing accidents, such as explosions in mines, and for avoiding health issues, for example, through long-term exposure to harmful respirable matter. Validation and maintenance of ventilation systems are thus of key interest for plant operators and authorities. However, methods for ventilation characterization, which would allow us to monitor whether the ventilation system in place works as desired, hardly exist. This article addresses the critical challenge of ventilation characterization: measuring and modelling air flow at micro-scales, that is, creating a high-resolution model of wind speed and direction from airflow measurements. Models of the near-surface micro-scale flow fields are not only useful for ventilation characterization, but they also provide critical information for planning energy-efficient paths for aerial robots and for many applications in mobile robot olfaction. In this article, we propose a heterogeneous measurement system composed of static, continuously sampling sensing nodes, complemented by localized measurements collected during occasional sensing missions with a mobile robot. We introduce a novel, data-driven, multi-domain airflow modelling algorithm that estimates (1) fields of posterior distributions over wind direction and speed ("ventilation maps", spatial domain); (2) sets of ventilation calendars that capture the evolution of important airflow characteristics at measurement positions (temporal domain); and (3) a frequency-domain analysis that can reveal periodic changes of airflow in the environment. The ventilation map and the ventilation calendars make use of an improved estimation pipeline that incorporates a wind sensor model and a transition model to better filter out sporadic, noisy airflow changes. These sudden changes may originate from turbulence or irregular activity in the surveyed environment and can, therefore, disturb modelling of the relevant airflow patterns. We tested the proposed multi-domain airflow modelling approach with simulated data and with experiments in a semi-controlled environment, and we present results that verify the accuracy of our approach and its sensitivity to different turbulence levels and other disturbances. Finally, we deployed the proposed system in two different real-world industrial environments (foundry halls) with different ventilation regimes for three weeks during full operation. Since airflow ground truth cannot be obtained, we present a qualitative discussion of the generated airflow models with plant operators, who concluded that the computed models accurately depict the expected airflow patterns and are useful for understanding how pollutants spread in the work environment. This analysis may then provide the basis for decisions about corrective actions to avoid long-term exposure of workers to harmful respirable matter. Full article
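The ventilation maps and calendars in this article are built on distributions over wind direction; the mvM(θ) models in the figures below are mixtures of von Mises distributions. As a simplified, single-component illustration, a moment-based von Mises fit to wind-direction samples can be sketched as follows; the paper itself fits mixtures within a larger estimation pipeline.

```python
import numpy as np

def fit_von_mises(theta):
    """Moment-based fit of a unimodal von Mises distribution to wind
    direction samples theta (radians). Returns (mu, kappa)."""
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mu = np.arctan2(S, C)          # mean wind direction
    R = np.hypot(C, S)             # mean resultant length in [0, 1]
    # Standard piecewise approximation for the concentration kappa.
    if R < 0.53:
        kappa = 2 * R + R**3 + 5 * R**5 / 6
    elif R < 0.85:
        kappa = -0.4 + 1.39 * R + 0.43 / (1 - R)
    else:
        kappa = 1 / (R**3 - 4 * R**2 + 3 * R)
    return mu, kappa
```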
Figures

Figure 1: Block diagram of the proposed ventilation characterization system. Data from mobile robots and sensing nodes are combined to produce temporal, spatial and frequency domain models.
Figure 2: Comparison between the sensor-aware probability density function (PDF) wind models (boxplots using different values of σ̂ν and σ̂θ) and standard airflow mixture models (dashed blue line), using the Cramér–von Mises criterion as evaluation metric. (a) Wind direction. (b) Wind speed.
Figure 3: Examples of simulated wind direction signals under different noise levels. (a) Square waveform. (b) Sinusoidal waveform.
Figure 4: Results summary of the f-AFM algorithm. The median highest energy spectral density (HESD, red lines) closely follows the period of the simulated signals (dashed lines). (a) Square signal. (b) Sinusoidal signal.
Figure 5: Results for the experimental run where the distance d between the fan and the anemometer was 0.50 m. (a) Correlogram. (b) Frequency spectrum, with the HESD highlighted in red.
Figure 6: Measurement system used in this paper. (a) Mobile robot. (b) Sensing nodes.
Figure 7: Bi-modal Weibull distributions bWB(ν) computed at the different locations of the sensing nodes in foundry-A.
Figure 8: Ventilation calendars for sensor location S-1. (a) Wind direction calendar. (b) Turbulence calendar. (c) mvM(θ) computed from data collected from September 6th to September 16th.
Figure 9: Ventilation calendars for location S-3. (a) Wind direction calendar. (b) Turbulence calendar. (c) Global mvM(θ) calendar. (d) Hourly distributions mvM(θ) computed with data collected during the mornings of September 10th and 11th, 2018.
Figure 10: Ventilation maps for foundry-A. (a) Expected wind map, where the arrows indicate the wind direction and the color code denotes the wind speed. (b) Turbulence map.
Figure 11: Summary of the characterization performed at foundry-B. At each sensor location, S-1 to S-5, the models mvM(θ) are shown.
Figure 12: (a) Correlogram computed from data collected with sensor S-5 during October 18th and 19th. (b) The corresponding frequency spectrum.
28 pages, 5949 KiB  
Article
Towards Gas Discrimination and Mapping in Emergency Response Scenarios Using a Mobile Robot with an Electronic Nose
by Han Fan, Victor Hernandez Bennetts, Erik Schaffernicht and Achim J. Lilienthal
Sensors 2019, 19(3), 685; https://doi.org/10.3390/s19030685 - 7 Feb 2019
Cited by 40 | Viewed by 6425
Abstract
Emergency personnel, such as firefighters, bomb technicians, and urban search and rescue specialists, can be exposed to a variety of extreme hazards during the response to natural and human-made disasters. In many of these scenarios, a risk factor is the presence of hazardous airborne chemicals. The recent and rapid advances in robotics and sensor technologies allow emergency responders to deal with such hazards from relatively safe distances. Mobile robots with gas-sensing capabilities can convey useful information, such as the possible source positions of different chemicals in the emergency area. However, common gas sampling procedures for laboratory use are not applicable due to the complexity of the environment and the need for fast deployment and analysis. In addition, conventional gas identification approaches, based on supervised learning, cannot handle situations where the number and identities of the present chemicals are unknown. For the purpose of emergency response, all the information derived from gas detection events during the robot exploration should be delivered in real time. To address these challenges, we developed an online gas-sensing system using an electronic nose. Our system can automatically perform unsupervised learning and update the discrimination model as the robot explores a given environment. The online gas discrimination results are further integrated with geometrical information to derive a multi-compound gas spatial distribution map. The proposed system is deployed on a robot built to operate in harsh environments in support of fire brigades, and it is validated in several real-world experiments of discriminating and mapping multiple chemical compounds in an indoor open environment. Our results show that the proposed system achieves high accuracy in gas discrimination in an online, unsupervised, and computationally efficient manner. The subsequently created gas distribution maps accurately indicate the presence of different chemicals in the environment, which is of practical significance for emergency response. Full article
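To illustrate how class posteriors from a discrimination module can be fused into a multi-compound distribution map (the coupling shown in Figure 7 below), here is a minimal kernel-based sketch. The grid resolution, kernel width, and normalisation are assumptions, not the paper's exact formulation.

```python
import numpy as np

def update_class_maps(maps, weights, pos, posterior, sigma=0.4, res=0.1):
    """Kernel update of per-compound gas maps from one labelled measurement.

    maps: (n_classes, H, W) accumulated class evidence; weights: (H, W)
    accumulated kernel mass; pos: (x, y) robot position in metres;
    posterior: length-n_classes class probabilities for this measurement.
    """
    H, W = weights.shape
    ys, xs = np.mgrid[0:H, 0:W]
    d2 = (xs * res - pos[0])**2 + (ys * res - pos[1])**2
    k = np.exp(-d2 / (2 * sigma**2))            # Gaussian spatial kernel
    weights += k
    for c, p in enumerate(posterior):
        maps[c] += k * p                        # spread class evidence
    return maps / np.maximum(weights, 1e-9)     # normalised class maps
```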
Figures

Figure 1: Diagram of the robotic gas-sensing system.
Figure 2: Graphical representation of considering two layers of neighborhoods for a given measurement r. The data points in red correspond to the first-layer neighbors, the data points in blue to the second-layer neighbors, and the data points in green lie outside the two layers of neighborhoods of r. In this example, the first layer was selected with neighborhood size J = 4 and the second layer with neighborhood size K = 3.
Figure 3: An example of baseline drift of a MOX sensor and adjusted baseline offsets (the red segments after period B0).
Figure 4: The state machine of a gas-sensing process. The transition conditions are explained further in the text.
Figure 5: Diagram of the gas discrimination module. The learning path is triggered by the state machine, while the prediction path is used for all incoming measurements.
Figure 6: An example of determining the search space of the number of clusters. The dataset used here includes ethanol and acetone.
Figure 7: The gas distribution mapping module coupled with the class posteriors from the gas discrimination module.
Figure 8: (a,b) Real-world working environments for the SmokeBot, imitated in a firefighter training facility; (c) SmokeBot sensor set-ups; (d) experimental environment.
Figure 9: (a) Responses of the MOX sensors and corresponding assigned states in the trial of Experiment 2; in this case, the ground-truth baseline is never fully recovered. (b) Baseline offsets learned in each periodic update (T_UB = 50 s) during the baseline states; the drift effect is observable for all three MOX sensors.
Figure 10: Distributions of OCNN scores (s_NN) of baseline responses and the measurements in the first exposure state in three experiments: (a) Experiment 1, (b) Experiment 2, (c) Experiment 3. In each subfigure, the first box plot corresponds to the OCNN training set C (the measurements in the first exposure state); the second to the data with the top 25% s_GM scores in C, denoted C−; the third to the baseline responses collected during T_B; the fourth to the identified baseline responses after T_B; and the fifth to all data labeled as baseline responses. In all three experiments, the s_NN values of the baseline responses are distributed over a very narrow interval that does not overlap with C−.
Figure 11: Distributions of the accuracy rate H of the multi-compound discrimination models in each trial. The discrimination models are periodically updated (T_UD = 30 s) during the exposure state, which may result in different performance.
Figure 12: Snapshot of the classification map learned for clean air (a), ethanol (b) and acetone (c) in the trial of Experiment 1.
Figure 13: Snapshot of the classification map learned for clean air (a), ethanol (b) and acetone (c) in the trial of Experiment 2.
Figure 14: Snapshot of the classification map learned for clean air (a), ethanol (b) and acetone (c) in the trial of Experiment 3.
Figure 15: The baseline and exposure states found in Experiment 1. Before the second determined exposure state, some gas detection events are missed by the proposed gas detection module.
24 pages, 1376 KiB  
Article
Analysis of Model Mismatch Effects for a Model-Based Gas Source Localization Strategy Incorporating Advection Knowledge
by Thomas Wiedemann, Achim J. Lilienthal and Dmitriy Shutin
Sensors 2019, 19(3), 520; https://doi.org/10.3390/s19030520 - 26 Jan 2019
Cited by 14 | Viewed by 4155
Abstract
In disaster scenarios where toxic material is leaking, gas source localization is a common but also dangerous task. To reduce threats to human operators, we propose an intelligent sampling strategy that enables a multi-robot system to autonomously localize unknown gas sources based on gas concentration measurements. This paper discusses a probabilistic, model-based approach for incorporating physical process knowledge into the sampling strategy. We model the spatial and temporal dynamics of the gas dispersion with a partial differential equation that accounts for diffusion and advection effects. We consider the exact number of sources as unknown, but assume that gas sources are sparsely distributed. To incorporate the sparsity assumption, we make use of sparse Bayesian learning techniques. Probabilistic modeling can account for possible model mismatch effects that would otherwise undermine the performance of deterministic methods. In the paper, we evaluate the proposed gas source localization strategy in simulations using synthetic data. Compared to real-world experiments, a simulated environment provides us with ground truth data and the reproducibility necessary to gain a deeper insight into the proposed strategy. The investigation shows that (i) the probabilistic model can compensate for imperfect modeling; (ii) the sparsity assumption significantly accelerates the source localization; and (iii) a priori advection knowledge is advantageous for source localization; however, it is only required to have a certain level of accuracy. These findings will help in the future to parameterize the proposed algorithm in real-world applications. Full article
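The process model at the core of this strategy is an advection–diffusion PDE. A minimal explicit finite-difference integration of such a PDE can be sketched as follows; the periodic boundaries (via np.roll) and the first-order upwind advection term are simplifications of our choosing, and the paper embeds the PDE in a sparse Bayesian estimator rather than simply forward-integrating it.

```python
import numpy as np

def pde_step(c, q, vx, vy, D=0.1, dx=1.0, dt=0.1):
    """One explicit Euler step of  dc/dt = D*lap(c) - v . grad(c) + q
    on a 2D grid (c: concentration, q: source term, vx/vy: wind field).
    dt must satisfy the usual CFL stability condition.
    """
    # Five-point Laplacian with periodic boundaries.
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    # First-order upwind differences keep the explicit scheme stable.
    dcdx = np.where(vx > 0, c - np.roll(c, 1, 1), np.roll(c, -1, 1) - c) / dx
    dcdy = np.where(vy > 0, c - np.roll(c, 1, 0), np.roll(c, -1, 0) - c) / dx
    return c + dt * (D * lap - vx * dcdx - vy * dcdy + q)
```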
Figures

Figure 1: Comparison of the simulated gas concentration based on mantaflow in (a) and the PDE (1) in (b). While in (b) the space discretization is 26 × 26, in (a) the resolution of the simulation is 96 × 96 for a more accurate visualization.
Figure 2: Comparison of the exploration performance with and without the sparsity assumption on the source distribution. (a) Error of the estimated concentration field, as NMSE. (b) Error of the estimated source distribution, by means of the EMD. The curves are averaged over several simulation runs.
Figure 3: Effect of different ratios of τm/τs. On the left, (a,c) show the case with a sparsity-inducing prior; on the right, (b,d) the case without. The first row (a,b) depicts the exploration performance by means of the NMSE of the estimated concentration field; the second row (c,d) plots the number of measurement locations. Note that in each iteration 5 measurements are carried out, possibly at the same locations as before.
Figure 4: Example sampling patterns for the strategy with a sparsity prior (a) and without (b), with parameters τm = 10^5 and τs = 10^4. The white stars indicate the measurement locations; the color map represents the ground-truth gas concentration field, with the sources located at the points of highest concentration. Both scenes are snapshots after 15 iterations.
Figure 5: Analysis of a wrong wind prior on the performance of the exploration strategy, measured by the EMD of the estimated source distribution. (a) The exploration strategy was informed by the wind prior that there is no wind; the different curves show the performance for different simulated wind speeds contradicting the prior. (b) The wind speed encoded in the prior coincides with the simulation, but the wind direction differs. For the sparsity prior, relaxation parameter, and measurement noise, the best values found in the previous section were used.
Figure 6: Studies of the precision of the wind prior τv on the performance of the exploration strategy, measured by the EMD of the estimated source distribution. (a) The wind prior matches the simulated wind. (b) The wind direction differs by 30 degrees.
Figure 7: Comparison of different values of Δt, the time discretization of the PDE (note that Δt can be interpreted as the inverse sampling rate). (a) Exploration performance measured by the EMD of the estimated source distribution. (b) Number of measurement locations (5 measurements per iteration, possibly at repeated locations). (c,d) The same data with the x-axis scaled according to time rather than iterations.
Figure 8: Performance of the exploration strategy in the case of the mantaflow simulation. (a) NMSE of the estimated concentration field. (b) EMD of the estimated source distribution, together with the Euclidean distance between the simulated source location and the peak of the estimated source distribution.
Figure 9: Exploration of a gas distribution simulated with mantaflow after 10 iterations (50 measurements). (a) Snapshot of the instantaneous gas concentration field simulated by mantaflow. (b) Estimated concentration field. (c) Estimated source distribution.
25 pages, 10669 KiB  
Article
Smelling Nano Aerial Vehicle for Gas Source Localization and Mapping
by Javier Burgués, Victor Hernández, Achim J. Lilienthal and Santiago Marco
Sensors 2019, 19(3), 478; https://doi.org/10.3390/s19030478 - 24 Jan 2019
Cited by 105 | Viewed by 16621
Abstract
This paper describes the development and validation of the currently smallest aerial platform with olfaction capabilities. The developed Smelling Nano Aerial Vehicle (SNAV) is based on a lightweight commercial nano-quadcopter (27 g) equipped with a custom gas-sensing board that can host up to two in situ metal oxide semiconductor (MOX) gas sensors. Due to its small form factor, the SNAV is not a hazard for humans, enabling its use in public areas or inside buildings. It can autonomously carry out gas-sensing missions in hazardous environments inaccessible to terrestrial robots and bigger drones, for example searching for victims and hazardous gas leaks inside pockets that form within the wreckage of collapsed buildings in the aftermath of an earthquake or explosion. The first contribution of this work is assessing the impact of the nano-propellers on the MOX sensor signals at different distances to a gas source. A second contribution is adapting the 'bout' detection algorithm proposed by Schmuker et al. (2016), which extracts specific features from the derivative of the MOX sensor response, for real-time operation. The third and main contribution is the experimental validation of the SNAV for gas source localization (GSL) and mapping in a large indoor environment (160 m²) with a gas source placed in positions challenging for the drone, for example hidden in the ceiling of the room or inside a power outlet box. Two GSL strategies are compared, one based on the instantaneous gas sensor response and the other based on the bout frequency. From the measurements collected (in motion) along a predefined sweeping path, we built (in less than 3 min) a 3D map of the gas distribution and identified the most likely source location. Using the bout frequency yielded on average a higher localization accuracy than using the instantaneous gas sensor response (1.38 m versus 2.05 m error); however, accurate tuning of an additional parameter (the noise threshold) is required in the former case. The main conclusion of this paper is that a nano-drone has the potential to perform gas-sensing tasks in complex environments. Full article
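Since the 'bout' feature is central to this article, here is a rough, real-time-flavoured sketch of bout detection on a MOX signal: an exponentially smoothed derivative, bouts delimited by its zero crossings, an amplitude threshold b_thr, and a sliding-window bout frequency. The smoothing constant and thresholds below are placeholders; the paper derives b_thr from the bout amplitudes of blank (gas-free) signals.

```python
import numpy as np

def detect_bouts(x, fs, tau=0.3, b_thr=0.2, win=5.0):
    """x: MOX signal (ppm); fs: sampling rate (Hz).
    Returns bout start indices and a sliding-window bout frequency (Hz)."""
    # Exponentially smoothed derivative of the sensor response.
    alpha = 1.0 / (1.0 + tau * fs)
    dx = np.diff(x, prepend=x[0]) * fs
    xs = np.zeros_like(dx)
    for i in range(1, len(dx)):
        xs[i] = (1 - alpha) * xs[i - 1] + alpha * dx[i]

    # A bout spans one positive excursion of the smoothed derivative;
    # keep those whose peak exceeds the amplitude threshold.
    rising = (xs > 0).astype(int)
    starts = np.flatnonzero(np.diff(rising) == 1) + 1
    ends = np.flatnonzero(np.diff(rising) == -1) + 1
    if ends.size and starts.size and ends[0] <= starts[0]:
        ends = ends[1:]
    bouts = np.array([s for s, e in zip(starts, ends)
                      if xs[s:e].max() >= b_thr], dtype=int)

    # Bout frequency: bouts per second within a trailing window of win s.
    freq = np.zeros(len(x))
    for s in bouts:
        freq[s:s + int(win * fs)] += 1.0 / win
    return bouts, freq
```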
Figures

Figure 1: Overview of the UAV landscape, from insect-sized drones to military aircraft, classified according to approximate weight and size. The graphic shows the large range of UAV sizes, which spans seven orders of magnitude.
Figure 2: Gas source localization strategies. (Left) Reactive plume tracking. (Center) Plume modelling. (Right) Map-based.
Figure 3: The CrazyFlie 2.0 equipped with the MOX deck and the UWB tag (center) gets its 3D position from an external localization system composed of six ultra-wideband anchors (left). The location and sensor data are communicated to the ground station (right) over the 2.4 GHz ISM band.
Figure 4: Schematic of the conditioning electronic circuit for each MOX sensor in the MOX deck, using PWM for powering and a voltage divider for read-out.
Figure 5: Experimental arena. (Left) Frontal picture. (Right) Schematic top view; the green squares indicate the positions of the UWB anchors, arranged along two inverted triangles (green lines).
Figure 6: Gas source location in the three experiments. (a) Experiment 1: inside a small room. (b) Experiment 2: hidden in a suspended ceiling. (c) Experiment 3: hidden in a power outlet box.
Figure 7: Flow diagram of the improved bout computation. The meaning of each symbol is given in the text.
Figure 8: Setup for assessing the effect of the rotors on the MOX sensor signals. (a) Top view of the stand used to hold the drone at different heights while minimizing interference with the rotors' air flow. (b) Photo of an experiment with the drone placed 25 cm above an ethanol bottle (gas source), overlaid with an illustration of a gas cloud.
Figure 9: Predefined navigation strategy based on zig-zag sweeping at two heights (0.9 and 1.8 m). The green squares indicate the location of the UWB anchors.
Figure 10: (A) 2D map of the MOX sensor response during 15 min of random exploration of the target area without gas. (B) Histogram of blank readings, with a Gaussian curve N(μ, σ²) superimposed.
Figure 11: (A) Calibration line in the range 1–50 ppm (log–log plot), with blank variability superimposed at each concentration level (see inset); the LOD is estimated using Equation (2). (B) Histogram of amplitudes of bouts detected in the calibrated blank signals; b_thr (Equation (5)) is indicated by a red dashed vertical line.
Figure 12: Sensor signals (log scale) near an evaporating source. (A) Propellers switched off. (B) Propellers switched on. The ethanol bottle is opened at t = 2 min.
Figure 13: Smoothed derivative (i.e., x′_s in Figure 7) of the sensor signals at 50 cm in front of the source (blue line), 65 cm above the source (yellow line) and 25 cm above the source (green line). Bouts with amplitude higher than μ + 3σ are highlighted in red. In the left column the propellers are switched off; in the right column they are switched on. The ethanol bottle is opened at t = 2 min.
Figure 14: Aerodynamics of the Crazyflie 2.0 when the four rotors are spinning, visualized using a Deskbreeze wind tunnel (courtesy of Bitcraze AB). The drone is fixed to one of the walls of the tunnel using a 3D-printed stand, and dry-ice fog is emitted from (A) below the drone or (B) above the drone. The images show the downwash of the propellers and how part of the fog reaches the MOX gas sensor (red arrow). The MOX deck has been overlaid on the original images for visual clarity.
Figure 15: Results of Experiment 1. (A) 2D map of the instantaneous concentration (ppm) in log scale, with bouts represented by blue circles (b_thr = 0.52 ppm/s); a hand-drawn ellipse outlines the approximate plume shape based on the location of bouts; the average bout frequency along the y-axis is shown in the leftmost panel, and the box plot below the map represents the instantaneous concentration along the x-axis. (B) Trajectory of the drone along the z-axis. (C) Temporal evolution of the instantaneous concentration (ppm) on a log scale, with detected bouts highlighted in red (the black star indicates the start of a bout). The identifiers R1–R4 between panels (B) and (C) indicate the area of the map in which the drone is flying at each moment. The maximum instantaneous concentration and the maximum bout frequency are indicated by a green star and a blue triangle, respectively.
Figure 16: Effect of the bout amplitude threshold on the results of Experiment 1. The blue circles represent bouts with amplitude higher than 0.04 ppm/s (μ + 3σ threshold) and the green circles bouts with amplitude higher than 1.0 ppm/s. In each case, a hand-drawn ellipse outlines the approximate plume shape, and a green or blue star indicates the corresponding source location estimate according to the maximum bout frequency.
Figure 17: 3D map of the instantaneous concentration (ppm) in Experiment 2. The black square indicates the gas source location (x, y, z) = (14.0, 5.2, 2.7) m, the black arrow the wind direction (positive x-axis), and the letter 'S' the starting point of the drone (x, y, z) = (13.5, 5.2, 0.0) m.
Figure 18: Results of Experiment 2, with the same layout as Figure 15 (b_thr = 0.04 ppm/s). The bout frequency (gray line) is computed using a sliding window of 5 s.
Figure 19: Results of Experiment 2 when b_thr is increased to 0.18 ppm/s. (Top) 2D map of the instantaneous concentration (ppm), with odor hits represented by blue circles and a hand-drawn ellipse outlining the approximate plume shape. (Bottom) Temporal evolution of the instantaneous concentration (ppm), with detected bouts highlighted in red and the maximum bout frequency indicated by a blue triangle.
Figure 20: Results of Experiment 3, with the same layout as Figure 15 (b_thr = 0.20 ppm/s).