Collective Mobile Robotics: From Theory to Real-World Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (30 September 2024) | Viewed by 7090

Special Issue Editor


Prof. Dr. Frédéric Guinand
Guest Editor
1. LITIS Laboratory, Normandy University of Le Havre, Le Havre, Normandy, France
2. Faculty of Mathematics and Natural Sciences, Cardinal Stefan Wyszynski University, 01-815 Warsaw, Poland
Interests: interaction networks; dynamic graphs; nature-inspired computing; complex systems; swarm robotics

Special Issue Information

Dear Colleagues,

From theoretical models to the deployment of real robots, Collective Mobile Robotics (CMR) has attracted growing interest over the last two decades. In the context of this Special Issue, a swarm is a set of autonomous machines exhibiting collective behavior without relying on any centralized mechanism. This Special Issue aims to offer a venue for specialists in any domain related to robot swarming, from theoretical foundations to real-world applications, including robotic platforms and testbeds. Measuring the distance that may exist between theory and practical experiments, and raising open questions in swarm robotics, are among its objectives. Contributions presenting mature work as well as emerging ideas, from theoretical models to existing hardware solutions enabling robots to behave as swarms, are welcome. Topics of interest include (but are not limited to) the following:

  • Theoretical Models (e.g., Look-Compute-Move, graph-based models)
  • Distributed Algorithms
  • Cooperation versus Competition
  • Bio-Inspired Approaches
  • Communication issues
  • Optimization, Robustness, Fault Tolerance and Resilience
  • Simulations
  • Pattern Formation
  • Environment Perception
  • Technological Hardware/Software Solutions and Platforms
  • Collective Environment Perception
  • Human-CMR Interactions
  • Applications

Prof. Dr. Frédéric Guinand
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

29 pages, 5715 KiB  
Article
Decentralized Coordination of a Multi-UAV System for Spatial Planar Shape Formations
by Etienne Petitprez, François Guérin, Frédéric Guinand, Florian Germain and Nicolas Kerthe
Sensors 2023, 23(23), 9553; https://doi.org/10.3390/s23239553 - 1 Dec 2023
Cited by 2 | Viewed by 1316
Abstract
Motivated by feedback from firefighters in Normandy, this work aims to provide a simple technique for a set of identical drones to collectively describe an arbitrary planar virtual shape in a 3D space in a decentralized manner. The original problem involved surrounding a toxic cloud to monitor its composition and short-term evolution. In the present work, the pattern is described using Fourier descriptors, a convenient mathematical formulation for that purpose. Starting from a reference point, which can be the center of a fire, Fourier descriptors allow for more precise description of a shape as the number of harmonics increases. This pattern needs to be evenly occupied by the fleet of drones under consideration. To optimize the overall view, the drones must be evenly distributed angularly along the shape. The proposed method enables virtual planar shape description, decentralized bearing angle assignment, drone movement from takeoff positions to locations along the shape, and collision avoidance. Furthermore, the method allows for the number of drones to change during the mission. The method has been tested in simulation, through emulation, and in outdoor experiments with real drones. The obtained results demonstrate that the method is applicable in real-world contexts.
(This article belongs to the Special Issue Collective Mobile Robotics: From Theory to Real-World Applications)
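The two geometric ingredients of the abstract, a closed curve built from Fourier descriptors and an even angular distribution of drones along it, can be sketched in a few lines. This is a hedged illustration only, not the authors' implementation: the function names (`shape_points`, `bearing_slots`) and the single-harmonic circle example are assumptions for demonstration.

```python
import numpy as np

def shape_points(coeffs, n_points=100):
    """Sample a closed planar curve from complex Fourier descriptors c_k:
    z(t) = sum_k c_k * exp(2j*pi*k*t), t in [0, 1).
    A longer coeffs list (more harmonics) gives a more precise shape."""
    t = np.linspace(0.0, 1.0, n_points, endpoint=False)
    k = np.arange(len(coeffs))
    z = (np.asarray(coeffs)[None, :]
         * np.exp(2j * np.pi * k[None, :] * t[:, None])).sum(axis=1)
    return z.real, z.imag  # x(t), y(t) relative to the reference point

def bearing_slots(n_drones):
    """Evenly distributed bearing angles (radians) for the fleet."""
    return [2.0 * np.pi * i / n_drones for i in range(n_drones)]

# Single harmonic c_1 = 5 (with c_0 = 0): a circle of radius 5 around C.
x, y = shape_points([0.0, 5.0], n_points=8)
angles = bearing_slots(4)  # four drones, one slot every pi/2
```

Adding higher-order coefficients tightens the approximation, which matches the paper's observation that the shape description becomes more precise as the number of harmonics increases.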
Figures:
Figure 1. Example of an angular equidistribution of UAVs around an elliptical shape with its center at C. The red and green dots distinguish the front of each drone (red) from the back (green).
Figure 2. Example of a closed curve defined by d_β with Fourier descriptors from the reference point C.
Figure 3. Average total number of messages broadcast by drones to reach consensus during the bearing-angle choice phase. On average, 2.5 messages are broadcast by the drones. Each point on the graph corresponds to 100 runs. In this scenario, it is assumed that no messages are lost.
Figure 4. Average number of broadcast messages needed when a given percentage of messages is lost. The considered loss rates range from 10% to 50%. The increase in the number of broadcast messages remains limited even when the loss rate is high.
Figure 5. Average number of broadcast messages when drones join or leave the group; the considered numbers of changes are 0, 5, and 10.
Figure 6. Increase in the number of broadcast messages with respect to the number of drones joining the group. The number of changes is always 10; thus, three joins means that three drones joined the group and seven left it.
Figure 7. Definition of the position errors for UAV 1.
Figure 8. Collision avoidance scheme based on attractive and repulsive speeds, as described by Formulas (16) and (18).
Figure 9. From left to right: the astroidal, peanut, pear, shell, and square signal shapes.
Figure 10. Squadrone Systems "MiniSim" simulators and their Raspberry Pi 3B+ companion computers.
Figure 11. Flight visualisation in FlightGear during the emulation phase [28].
Figure 12. Evolution of the retrieval error according to the curve factor of the peanut and star shapes, based on Tables A1 and A2 in Appendix A.
Figure 13. Evolution of the retrieval error according to the number of harmonics (based on Table A3 in Appendix A).
Figure 14. Astroidal shape transformation from the previous parameters to new ones: scale = 100, rotation = π/3, reference point = (15, 5).
Figure 15. Simulation of the astroidal, pear, and shell shapes with twenty drones.
Figure 16. Simulation of the astroidal, pear, and shell shapes with twelve drones after removing eight drones from the initial formation of twenty.
Figure 17. Pear shape simulation results, showing the signature shape (blue), the approximated shape (red), and the drone's path during the simulation (black).
Figure 18. Astroidal shape simulation, showing the signature shape (blue), the approximated shape (red), and the drone's path during the simulation (black).
Figure 19. Pear shape simulation, showing the signature shape (blue), the rotated approximated shape (pink), and the drone's path during the simulation (black).
Figure 20. Minimum distance between drones during flight in the simulation. After takeoff, the drones move to their final positions. During the first two seconds, while the drones remain close to each other (less than 6 m apart), they never collide; afterwards, they move further apart. From 4 s to 6 s, drones 0 and 2 move closer, allowing the effect of the collision avoidance mechanism described in Section 4.1.4 to be observed. The same phenomenon occurs from 7 s to 9 s, when drones 0 and 1 move closer to each other without colliding.
Figure 21. Our experimental platform, consisting of DJI F450 drones. The upper left-hand picture shows the computing devices (MiniSim and Raspberry Pi), the same as those used for the simulation (Figure 10). The images at the bottom were taken during the experiments; the red circles in the bottom left-hand image indicate the drones' positions.
Figure 22. Pear shape followed by the UAV during the real experiment. The reference point and shape description were provided as parameters of the method before the experiment. After takeoff, the drone successfully followed the shape while pointing toward the reference point at every moment.
Figure 23. Collision avoidance of drones with obstacles and with each other. Two drones were programmed to move back and forth between points A and B, the first drone (yellow dot) starting at A and the second (red dot) at B; a static obstacle, C, sits in the middle of the path. The black dots are the drone coordinates logged during the real flight experiments. Three periods can be distinguished: both drones first move toward their targets and approach the obstacle at time 1; at time 2, the collision avoidance process lets only the yellow drone advance toward its target, forcing the red one to hold its position; finally, at time 3, the red drone has enough room to advance toward its target.
Figure A1. Peanut shape concave factor (n) variation. From left to right, n equals 0.1, 0.2, 0.5, and 0.8.
Figure A2. Star shape concave factor (n) variation. From left to right, n equals 0.15, 0.3, 0.5, and 0.8.
Figure A3. Shapes retrieved for various numbers of harmonics. From top to bottom, the number of harmonics equals 3, 5, 10, 50, and 250.
24 pages, 3072 KiB  
Article
Intelligent Drone Swarms to Search for Victims in Post-Disaster Areas
by Matheus Nohra Haddad, Andréa Cynthia Santos, Christophe Duhamel and Amadeu Almeida Coco
Sensors 2023, 23(23), 9540; https://doi.org/10.3390/s23239540 - 30 Nov 2023
Cited by 1 | Viewed by 2162
Abstract
This study presents the Drone Swarms Routing Problem (DSRP), which consists of identifying the maximum number of victims in post-disaster areas. The post-disaster area is modeled as a complete graph, where each search location is represented by a vertex and the edges are the shortest paths between destinations, each with an associated weight corresponding to the battery consumed to fly to a location. In addition, in the DSRP addressed here, a set of drones is deployed in a cooperative drone-swarm approach to boost the search. In this context, a V-shaped formation with leader replacements is applied, which saves energy. We propose a computational model for the DSRP that considers each drone as an agent selecting the next search location to visit through a simple and efficient method, the Drone Swarm Heuristic. To evaluate the proposed model, scenarios based on the 2020 Beirut port explosion are used. Numerical experiments are presented for the offline and online versions of the proposed method. The results from these scenarios show the efficiency of the proposed approach, attesting not only to the coverage capacity of the computational model but also to the advantage of adopting V-shaped formation flight with leader replacements.
(This article belongs to the Special Issue Collective Mobile Robotics: From Theory to Real-World Applications)
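The graph model described in the abstract lends itself to a small sketch: each agent repeatedly picks an unvisited vertex it can still reach on its remaining battery. The paper's Drone Swarm Heuristic is not reproduced here; the greedy rule below (maximize expected victims per unit of battery) is a hypothetical stand-in, and all names and numbers in the toy instance are illustrative.

```python
def next_location(current, unvisited, battery, cost, victims):
    """Greedy stand-in for an agent's next-location choice: among
    unvisited vertices reachable with the remaining battery, pick the
    one maximizing expected victims per unit of battery consumed.
    Returns None when no vertex is reachable."""
    best, best_ratio = None, -1.0
    for v in unvisited:
        c = cost[(current, v)]  # edge weight: battery needed to fly there
        if c <= battery and victims[v] / c > best_ratio:
            best, best_ratio = v, victims[v] / c
    return best

# Toy instance: two search areas reachable from the base.
cost = {("base", "A"): 2.0, ("base", "B"): 4.0}
victims = {"A": 3, "B": 10}
choice = next_location("base", {"A", "B"}, battery=5.0,
                       cost=cost, victims=victims)  # "B": 10/4 > 3/2
```

In an online setting, the same selection could simply be re-run whenever the heatmap of expected victims or the drone's battery estimate is updated.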
Figures:
Figure 1. Example of a DSRP with eight drones, one base, and eight search areas.
Figure 2. Flowchart of the computational model for the DSRP.
Figure 3. How to build the heatmap and transform it into a complete graph.
Figure 4. Case study: Beirut Port explosion.
Figure 5. Example of a solution loaded in CoppeliaSim.
Figure 6. Scenario 1: results when varying the number of drones, in terms of time and percentage of expected victims.
Figure 7. Scenario 2: results when varying the number of drones, in terms of time and percentage of expected victims.
Figure 8. Scenario 3: results when varying the number of drones, in terms of time and percentage of expected victims.
Figure 9. Additional cost required without V-shaped formation savings.
Figure 10. Number of extra expected victims identified with V-shaped formation savings.
22 pages, 8488 KiB  
Article
Swarm Metaverse for Multi-Level Autonomy Using Digital Twins
by Hung Nguyen, Aya Hussein, Matthew A. Garratt and Hussein A. Abbass
Sensors 2023, 23(10), 4892; https://doi.org/10.3390/s23104892 - 19 May 2023
Cited by 4 | Viewed by 2737
Abstract
Robot swarms are becoming popular in domains that require spatial coordination. Effective human control over swarm members is pivotal for ensuring swarm behaviours align with the dynamic needs of the system. Several techniques have been proposed for scalable human–swarm interaction. However, these techniques were mostly developed in simple simulation environments without guidance on how to scale them up to the real world. This paper addresses this research gap by proposing a metaverse for scalable control of robot swarms and an adaptive framework for different levels of autonomy. In the metaverse, the physical/real world of a swarm symbiotically blends with a virtual world formed from digital twins representing each swarm member and logical control agents. The proposed metaverse drastically decreases swarm control complexity due to human reliance on only a few virtual agents, with each agent dynamically actuating on a sub-swarm. The utility of the metaverse is demonstrated by a case study where humans controlled a swarm of uncrewed ground vehicles (UGVs) using gestural communication, and via a single virtual uncrewed aerial vehicle (UAV). The results show that humans could successfully control the swarm under two different levels of autonomy, while task performance increases as autonomy increases.
(This article belongs to the Special Issue Collective Mobile Robotics: From Theory to Real-World Applications)
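The core idea, one virtual control agent actuating an entire sub-swarm, can be sketched as a pose mapping: the human steers a single digital agent, and each physical member tracks a fixed offset from it. This is a minimal conceptual illustration under stated assumptions, not the paper's software; the class name, fields, and offsets below are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ControlAgent:
    """Virtual agent in the metaverse; the human steers only this pose."""
    x: float = 0.0
    y: float = 0.0
    # Fixed (dx, dy) formation offsets, one per physical sub-swarm member.
    members: list = field(default_factory=list)

    def setpoints(self):
        """Map the single virtual pose to one setpoint per member."""
        return [(self.x + dx, self.y + dy) for dx, dy in self.members]

# One virtual UAV-like agent commanding three UGVs in formation.
agent = ControlAgent(members=[(-1.0, 0.0), (1.0, 0.0), (0.0, 1.5)])
agent.x, agent.y = 10.0, 5.0   # a single human command...
targets = agent.setpoints()    # ...fans out to all sub-swarm members
```

The control-complexity reduction claimed in the abstract follows from this fan-out: the human issues one command per virtual agent rather than one per robot, regardless of sub-swarm size.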
Figures:
Figure 1. Proximal (a) and remote (b) swarm control techniques.
Figure 2. The proposed digital-twin-enabled metaverse for swarm control. * Updated agents' positions can be the updated positions of the control agents or of the swarm members, as discussed in Section 3.
Figure 3. Implementation of the digital-twin-enabled metaverse as used in the case study.
Figure 4. Schematic diagram of the task.
Figure 5. Physical environment.
Figure 6. The simulated environment in Gazebo 9.
Figure 7. The GUI used in the case study. The right-hand side shows the colour coding for the height of the UAV.
Figure 8. Low-level gestures.
Figure 9. High-level gestures.
Figure 10. Human control.
Figure 11. Examples of trajectories from episodes with a low level of autonomy.
Figure 12. Examples of trajectories from episodes with a high level of autonomy.