Human–Robot Interface for Embedding Sliding Adjustable Autonomy Methods
Figure 1. Storage tank in a refinery. The weld beads are highlighted in red.
Figure 2. Autonomous Inspection Robot 1 (AIR1), a robot conceived and developed to inspect weld beads in storage tanks containing liquefied petroleum gas. A profile sensor, facing downwards, is mounted at the front of the robot.
Figure 3. Sensors for perception of the environment previously attached to AIR1. Depth cameras and Lidar sensors were used to map the tanks and to predict the sizes of spherical pressure vessels (adapted from [34]).
Figure 4. System architecture. An industrial joystick sends signals via radio frequency to a receiver; the data are processed by an Arduino DUE and forwarded to the computer that controls the robot. The operator also wears a Myo armband to assist with robot control.
Figure 5. Diagram of the implemented fuzzy controller.
Figure 6. Membership functions of the variable MyoRMS.
Figure 7. Membership functions of the variable MyoRoll.
Figure 8. Membership functions of the variable JoyAngular.
Figure 9. Membership functions of the variable WeldPos.
Figure 10. Membership functions of the variable LoA (level of autonomy).
Figure 11. Robot controls. Data from an industrial joystick and a Myo armband are fused to control the robot.
Figure 12. The velocities of the robot are published in ROS topics.
Figure 13. Block diagram of the autonomous mode. The control system reported by Terres [52] was adopted: a fuzzy controller drives the robot autonomously to follow the previously identified weld bead.
Figure 14. Output of the fuzzy controller for the input vector (50, −1.5, −0.5, −1). The output is Shared Mode, with 81% of the final speed controlled by the operator and 19% by the autonomous system.
Figure 15. Output of the fuzzy controller for the input vector (50, −1.5, −0.5, −0.3). The level of autonomy in the output is Manual Mode.
Figure 16. Autonomous Inspection Robot 1 (AIR1) in the V-REP simulator.
Figure 17. Refinery scene with storage tanks and weld beads used in the V-REP simulator.
Figure 18. Navigation goal of the experiments. Point A is the initial position of the robot and point B is the final position. The optimal course is shown in orange and the turns are numbered.
Figure 19. Manual Mode. The yellow line is the route traveled by the robot.
Figure 20. Variation in the alignment error of the robot during experiment 1 in Manual Mode. The spikes in the error occur during turns, due to the robot's topology, and are circled in red with a number indicating the associated curve.
Figure 21. Experiment performed in Shared Mode, with 50% of the velocity controlled by the operator and 50% by the robot.
Figure 22. Variation in the alignment error during experiment 3 in Shared Mode.
Figure 23. Experiment carried out in Supervisory Mode.
Figure 24. Variation in the alignment error of the robot during experiment 2 in Supervisory Mode.
Figure 25. Experiment performed in Autonomous Mode.
Figure 26. Variation in the alignment error during experiment 3 in Autonomous Mode.
Figure 27. Sliding Autonomy experiment.
Figure 28. Autonomy and alignment error during the sliding autonomy experiment. The upper panel shows the output of the fuzzy controller, representing the level of autonomy; the lower panel shows the alignment error. The dashed lines mark the instants when spikes occurred in the error and in the level of autonomy.
Figure 29. Path described by AIR1 during a curve.
Figure 30. Close-up of a curve where difficulties were encountered.
Abstract
1. Introduction
2. Overview of Levels of Autonomy in Robotics
3. Implementation of the Human–Robot Interaction
3.1. Tank Inspection Task
3.2. Autonomous Inspection Robot
3.3. Input Devices
- /myo_raw/myo_emg: This topic contains a vector with the raw data from eight electrodes;
- /myo_raw/myo_ori: Contains the value, in radians, of the roll, pitch, and yaw rotation angles from the IMU;
- /myo_raw/myo_gest_str: This string contains the name of the gesture being performed by the user. These gestures are standardized and comprise six hand positions: rest, fist, wave in, wave out, fingers spread, and thumb to pinky;
- /myo_raw/vibrate: Contains a value ranging from 0 to 3. A value of 0 corresponds to no vibration, 1 is weak vibration, 2 is medium vibration, and 3 indicates strong vibration, providing haptic feedback to the user. A minimal subscriber sketch for these topics follows this list.
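These topics can be consumed with standard ROS subscribers. The sketch below shows one possible listener; the message types (`EmgArray` from the open-source ros_myo driver and `Vector3` for orientation) are assumptions, since the exact driver configuration is not reproduced here. The root mean square over the eight EMG channels is the kind of quantity from which the MyoRMS input of the fuzzy controller is built.

```python
# Sketch of a ROS node consuming the Myo topics above.
# Assumed message types: ros_myo/EmgArray (uint16[] data) and
# geometry_msgs/Vector3 (roll, pitch, yaw in radians); the driver
# used by the authors may publish different types.
import math

import rospy
from geometry_msgs.msg import Vector3
from ros_myo.msg import EmgArray


def emg_callback(msg):
    # Root mean square over the eight electrodes, a common EMG activation measure.
    rms = math.sqrt(sum(float(e) ** 2 for e in msg.data) / len(msg.data))
    rospy.loginfo("MyoRMS: %.1f", rms)


def ori_callback(msg):
    # The roll angle feeds the MyoRoll input of the fuzzy controller.
    rospy.loginfo("MyoRoll: %.2f rad", msg.x)


if __name__ == "__main__":
    rospy.init_node("myo_listener")
    rospy.Subscriber("/myo_raw/myo_emg", EmgArray, emg_callback)
    rospy.Subscriber("/myo_raw/myo_ori", Vector3, ori_callback)
    rospy.spin()
```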
3.4. Autonomy Levels and Sliding Autonomy
3.4.1. Manual Mode
3.4.2. Shared Mode
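In Shared Mode, the operator and the autonomous system command the robot simultaneously, and the fuzzy controller's output fixes how the final velocity is split between them (Figure 14, for instance, reports a split of 81% operator and 19% autonomous). Below is a minimal sketch of such a blend; the function name and the weight `alpha` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical blending of operator and autonomous velocity commands.
# alpha is the autonomy weight in [0, 1]: 0 = fully manual, 1 = fully autonomous.
from geometry_msgs.msg import Twist


def blend_commands(operator_cmd, autonomous_cmd, alpha):
    """Weighted mix of two Twist commands published to the robot."""
    cmd = Twist()
    cmd.linear.x = (1.0 - alpha) * operator_cmd.linear.x + alpha * autonomous_cmd.linear.x
    cmd.angular.z = (1.0 - alpha) * operator_cmd.angular.z + alpha * autonomous_cmd.angular.z
    return cmd
```

With `alpha = 0.19`, 81% of the final speed comes from the operator and 19% from the autonomous system, matching the example in Figure 14; `alpha = 0.5` reproduces the 50/50 split used in the Shared Mode experiments (Figure 21).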
3.4.3. Supervisory Mode
3.4.4. Autonomous Mode
3.4.5. Application Examples
4. Experiments and Results
4.1. Experiments with a Fixed Level of Autonomy
4.1.1. Manual Mode Experiments
4.1.2. Shared Mode Experiments
4.1.3. Supervisory Mode Experiments
4.1.4. Autonomous Mode Experiments
4.2. Experiments with Sliding Autonomy
- First, the participant was accompanied by the researcher to the computer, with the simulator and scene already open.
- The goal of the experiment (going from point A to point B, covering as much of the weld bead as possible, in as little time as possible) was presented orally to the participant.
- The participant was asked to hold the joystick and wear the Myo armband, and the controls of both devices were explained orally.
- The participant was allowed a 15 min practice session with the same scene in manual mode.
- At the end of the practice session, the scene was reset to the initial position and the fuzzy controller was activated to start the experiment.
4.3. Results and Discussion
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Heyer, C. Human-robot interaction and future industrial robotics applications. In Proceedings of the IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 4749–4754.
- Masinga, P.; Campbell, H.; Trimble, J.A. A framework for human collaborative robots, operations in South African automotive industry. In Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management, Singapore, 6–9 December 2016; pp. 1494–1497.
- Wojtara, T.; Uchihara, M.; Murayama, H.; Shimoda, S.; Sakai, S.; Fujimoto, H.; Kimura, H. Human-robot collaboration in precise positioning of a three-dimensional object. Automatica 2009, 45, 333–342.
- Goodrich, M.A.; Schultz, A.C. Human-Robot Interaction: A Survey. Available online: https://dl.acm.org/doi/10.1561/1100000005 (accessed on 10 March 2020).
- Lasota, P.A.; Fong, T.; Shah, J.A. A Survey of Methods for Safe Human-Robot Interaction. Found. Trends Robot. 2017, 5, 261–349.
- Tellaeche, A.; Maurtua, I.; Ibarguren, A. Human robot interaction in industrial robotics. Examples from research centers to industry. In Proceedings of the 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), Luxembourg, 8–11 September 2015; pp. 1–6.
- Jiang, S.; Arkin, R.C. Mixed-Initiative Human-Robot Interaction: Definition, Taxonomy, and Survey. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2015), Kowloon, China, 9–12 October 2015; pp. 954–961.
- Beavers, G.; Hexmoor, H. Types and Limits of Agent Autonomy. In Agents and Computational Autonomy: Potential, Risks, and Solutions; Springer: Berlin/Heidelberg, Germany, 2004; pp. 95–102.
- Çürüklü, B.; Dodig-Crnkovic, G.; Akan, B. Towards industrial robots with human-like moral responsibilities. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI ’10), Osaka, Japan, 2–5 March 2010; p. 85.
- Ball, M.; Callaghan, V. Explorations of autonomy: An investigation of adjustable autonomy in intelligent environments. In Proceedings of the 8th International Conference on Intelligent Environments, Guanajuato, Mexico, 26–29 June 2012; pp. 114–121.
- Desai, M.; Yanco, H.A. Blending human and robot inputs for sliding scale autonomy. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, Nashville, TN, USA, 13–15 August 2005; pp. 537–542.
- Hardin, B.; Goodrich, M.A. On using mixed-initiative control: A perspective for managing large-scale robotic teams. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, La Jolla, CA, USA, 11–13 March 2009; pp. 165–172.
- Crandall, J.; Goodrich, M. Experiments in adjustable autonomy. In Proceedings of the 2001 IEEE International Conference on Systems, Man and Cybernetics, Tucson, AZ, USA, 7–10 October 2001; Volume 3, pp. 1624–1629.
- Lewis, B.; Tastan, B.; Sukthankar, G. An adjustable autonomy paradigm for adapting to expert-novice differences. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 1656–1662.
- Muszynski, S.; Stuckler, J.; Behnke, S. Adjustable autonomy for mobile teleoperation of personal service robots. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 933–940.
- Milliken, L.; Hollinger, G.A. Modeling user expertise for choosing levels of shared autonomy. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017; pp. 2285–2291.
- Alhammadi, M.; Svetinovic, D. Autonomy requirements engineering for micro-satellite systems: CubeSat case study. In Proceedings of the 26th International Conference on Information, Communication and Automation Technologies (ICAT 2017), Sarajevo, Bosnia-Herzegovina, 26–28 October 2017; pp. 1–6.
- Heger, F.; Singh, S. Sliding autonomy for complex coordinated multi-robot tasks: Analysis & experiments. In Proceedings of Robotics: Science and Systems; MIT Press: Philadelphia, PA, USA, 2006.
- Saeidi, H.; Wagner, J.R.; Wang, Y. A Mixed-Initiative Haptic Teleoperation Strategy for Mobile Robotic Systems Based on Bidirectional Computational Trust Analysis. IEEE Trans. Robot. 2017, 4, 1500–1507.
- Ho, N.; Johnson, W.; Panesar, K.; Sadler, G.; Wilson, N.; Lachter, J. Application of Human-Autonomy Teaming to an Advanced Ground Station for Reduced Crew Operations. In Proceedings of the 2017 IEEE/AIAA 36th Digital Avionics Systems Conference (DASC), St. Petersburg, FL, USA, 17–21 September 2017.
- Whitlow, S.D.; Dorneich, M.C.; Funk, H.B.; Miller, C.A. Providing appropriate situation awareness within a mixed-initiative control system. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Yasmine Hammamet, Tunisia, 6–9 October 2002; Volume 5.
- Bush, L.A.; Wang, A.J.; Williams, B.C. Risk-based sensing in support of adjustable autonomy. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2012.
- U.S. Department of Transportation’s New Policy on Automated Vehicles Adopts SAE International’s Levels of Automation for Defining Driving Automation in On-Road Motor Vehicles. Available online: https://www.newswire.com/news/u-s-dept-of-transportations-new-policy-on-automated-vehicles-adopts-sae-5217899 (accessed on 15 September 2019).
- Naser, F.; Dorhout, D.; Proulx, S.; Pendleton, S.D.; Andersen, H.; Schwarting, W.; Paull, L.; Alonso-Mora, J.; Ang, M.H.; Karaman, S.; et al. A parallel autonomy research platform. IEEE Intell. Veh. Symp. 2017, 4, 933–940.
- Rezvani, T.; Driggs-Campbell, K.; Bajcsy, R. Optimizing interaction between humans and autonomy via information constraints on interface design. In Proceedings of the IEEE Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–6.
- Van Der Vecht, B.; Dignum, F.; Meyer, J.J.C. Autonomy and coordination: Controlling external influences on decision making. In Proceedings of the 2009 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, Milan, Italy, 15–18 September 2009; Volume 2, pp. 92–95.
- McGill, S.; Yi, S.J.; Lee, D.D. Team THOR’s adaptive autonomy for disaster response humanoids. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Seoul, Korea, 3–5 November 2015; pp. 453–460.
- Barnes, M.J.; Chen, J.Y.; Jentsch, F. Designing for Mixed-Initiative Interactions between Human and Autonomous Systems in Complex Environments. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Kowloon, China, 9–12 October 2015; pp. 1386–1390.
- Freedy, A.; Sert, O.; Freedy, E.; McDonough, J.; Weltman, G.; Tambe, M.; Gupta, T.; Grayson, W.; Cabrera, P. Multiagent Adjustable Autonomy Framework (MAAF) for multi-robot, multi-human teams. In Proceedings of the 2008 International Symposium on Collaborative Technologies and Systems, Irvine, CA, USA, 19–23 May 2008; pp. 498–505.
- Rovani, A. Desenvolvimento do protótipo de um robô para inspeção de cordões de solda em superfícies metálicas verticais. Bachelor’s Thesis, Universidade Tecnológica Federal do Paraná, Curitiba, Brazil, December 2013.
- Espinoza, R.V. Estimador de adesão de um robô escalador com rodas magnéticas. Master’s Thesis, Universidade Tecnológica Federal do Paraná, Curitiba, Brazil, July 2014.
- Rosa, A.B.; Gnoatto, R. Reprojeto e construção de protótipo de um robô de inspeção de cordões de solda em superfícies metálicas verticais e esféricas (segunda geração). Bachelor’s Thesis, Universidade Tecnológica Federal do Paraná, Curitiba, Brazil, February 2015.
- Teixeira, M.A.S. Predição de mapeamento para navegação autônoma de um robô de inspeção em vasos de pressão esféricos. Master’s Thesis, Universidade Tecnológica Federal do Paraná, Curitiba, Brazil, March 2017.
- Teixeira, M.A.S.; Santos, H.B.; Dalmedico, N.; de Arruda, L.V.R.; Neves-Jr, F.; de Oliveira, A.S. Intelligent environment recognition and prediction for NDT inspection through autonomous climbing robot. J. Intell. Robot. Syst. 2018, 92, 323–342.
- Krishnan, K.S.; Saha, A.; Ramachandran, S.; Kumar, S. Recognition of human arm gestures using Myo armband for the game of hand cricket. In Proceedings of the 2017 IEEE 5th International Symposium on Robotics and Intelligent Sensors, Ottawa, ON, Canada, 5–7 October 2017; pp. 389–394.
- Tortora, S.; Moro, M.; Menegatti, E. Dual-Myo Real-Time Control of a Humanoid Arm for Teleoperation. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Daegu, Korea, 11–14 March 2019; pp. 624–625.
- Xu, Y.; Yang, C.; Liang, P.; Zhao, L.; Li, Z. Development of a hybrid motion capture method using MYO armband with application to teleoperation. In Proceedings of the 2016 IEEE International Conference on Mechatronics and Automation, Harbin, China, 7–10 August 2016; pp. 1179–1184.
- Kim, H.J.; Lee, Y.S.; Kim, D. Arm motion estimation algorithm using MYO armband. In Proceedings of the 2017 1st IEEE International Conference on Robotic Computing, Taichung, Taiwan, 10–12 April 2017; Volume 4, pp. 376–381.
- Çoban, M.; Gelen, G. Wireless teleoperation of an industrial robot by using myo arm band. In Proceedings of the 2018 International Conference on Artificial Intelligence and Data Processing, Malatya, Turkey, 28–30 September 2018; pp. 1–6.
- Pambudi, M.R.; Sigit, R.; Harsono, T. The bionic hand movement using myo sensor and neural networks. In Proceedings of the 2016 International Conference on Knowledge Creation and Intelligent Computing, Manado, Indonesia, 15–17 November 2016; pp. 259–264.
- Xu, Y.; Yang, C.; Liu, X.; Li, Z. A Teleoperated Shared Control Scheme for Mobile Robot Based sEMG. In Proceedings of the 2018 3rd International Conference on Advanced Robotics and Mechatronics, Singapore, 18–20 July 2018; pp. 288–293.
- Luh, G.C.; Ma, Y.H.; Yen, C.J.; Lin, H.A. Muscle-gesture robot hand control based on sEMG signals with wavelet transform features and neural network classifier. In Proceedings of the International Conference on Machine Learning and Cybernetics, Jeju, Korea, 10–13 July 2016; Volume 2, pp. 627–632.
- Kurniawan, S.R.; Pamungkas, D. MYO Armband sensors and Neural Network Algorithm for Controlling Hand Robot. In Proceedings of the 2018 International Conference on Applied Engineering, Batam, Indonesia, 3–4 October 2018; pp. 1–6.
- Li, W.; Yang, C.; Zhang, X.; Li, Z. Teleoperation System for Omnidirectional Mobile Robot Based on Shared Control Scheme. In Proceedings of the 8th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, Tianjin, China, 19–23 July 2018; pp. 741–746.
- Kaya, E.; Kumbasar, T. Hand gesture recognition systems with the wearable myo armband. In Proceedings of the 2018 6th International Conference on Control Engineering and Information Technology, Istanbul, Turkey, 25–27 October 2018; pp. 1–6.
- Côté-Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Laviolette, F.; Gosselin, B. Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 760–771.
- Wijayasinghe, I.B.; Saadatzi, M.N.; Peetha, S.; Popa, D.O.; Cremer, S. Adaptive Interface for Robot Teleoperation using a Genetic Algorithm. In Proceedings of the IEEE International Conference on Automation Science and Engineering, Munich, Germany, 20–24 August 2018; pp. 50–56.
- Chen, W.; Yang, C.; Feng, Y. Shared Control For Omnidirectional Mobile Robots. In Proceedings of the 31st Chinese Control and Decision Conference, Nanchang, China, 3–5 June 2019; pp. 6185–6190.
- Kunapipat, M.; Phukpattaranont, P.; Neranon, P.; Thongpull, K. Sensor-assisted EMG data recording system. In Proceedings of the 2018 15th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), Chiang Rai, Thailand, 18–21 July 2018; pp. 772–775.
- Geryes, M.; Charara, J.; Skaiky, A.; Mcheick, A.; Girault, J.M. A novel biomedical application for the Myo gesture control armband. In Proceedings of the International Conference on Microelectronics (ICM), Beirut, Lebanon, 10–13 December 2017; pp. 1–4.
- Côté, N.; Canu, A.; Bouzid, M.; Mouaddib, A.I. Humans-robots sliding collaboration control in complex environments with adjustable autonomy. In Proceedings of the 2012 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, Macau, China, 4–7 December 2012; Volume 2, pp. 146–153.
- Terres, V.V. Controle de seguimento de cordão de solda para um robô de inspeção. Bachelor’s Thesis, Universidade Tecnológica Federal do Paraná, Curitiba, Brazil, December 2019.
| Scope | Levels | Offline | Online |
|---|---|---|---|
| Mobile Robots | 2–4 | [15,16,12,14] | - |
| Aviation and Space | 3–4 | [17,18,19] | [20,21,22] |
| Self-Driving Cars | 3–6 | [23] | [24,25] |
| Search and Rescue | 2–3 | [26] | [27] |
| Military | 2–3 | [28] | [29] |
| JoyAngular \ WeldPos | LH | LL | CT | RL | RH |
|---|---|---|---|---|---|
| LH | MN | SH | SP | AT | AT |
| LL | SH | MN | SH | SP | AT |
| CT | SP | SH | MN | SH | SP |
| RL | AT | SP | SH | MN | SH |
| RH | AT | AT | SP | SH | MN |
| MyoRoll \ WeldPos | LH | LL | CT | RL | RH |
|---|---|---|---|---|---|
| NH | MN | SH | SP | AT | AT |
| NL | SH | MN | SH | SP | AT |
| ZR | SP | SH | MN | SH | SP |
| CL | AT | SP | SH | MN | SH |
| CH | AT | AT | SP | SH | MN |
| MyoRMS | Level of Autonomy |
|---|---|
| MH | Supervisory |
| HG | Autonomous |
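The two rule tables share the same structure: when the operator's command agrees with the weld bead position (the main diagonal), control stays Manual, and as the disagreement grows, authority slides through Shared and Supervisory toward Autonomous; a high MyoRMS forces Supervisory or Autonomous regardless. As a sketch only (the actual system uses fuzzy inference over the membership functions of Figures 6–10, not a crisp lookup), the first rule table could be encoded as:

```python
# Hypothetical crisp encoding of the JoyAngular x WeldPos rule table.
# The terms (LH, LL, CT, RL, RH) are the linguistic labels of the membership
# functions in Figures 8 and 9; consequents: MN = Manual, SH = Shared,
# SP = Supervisory, AT = Autonomous.
RULES = {
    "LH": {"LH": "MN", "LL": "SH", "CT": "SP", "RL": "AT", "RH": "AT"},
    "LL": {"LH": "SH", "LL": "MN", "CT": "SH", "RL": "SP", "RH": "AT"},
    "CT": {"LH": "SP", "LL": "SH", "CT": "MN", "RL": "SH", "RH": "SP"},
    "RL": {"LH": "AT", "LL": "SP", "CT": "SH", "RL": "MN", "RH": "SH"},
    "RH": {"LH": "AT", "LL": "AT", "CT": "SP", "RL": "SH", "RH": "MN"},
}


def rule_consequent(joy_angular, weld_pos):
    """Level-of-autonomy consequent for one (JoyAngular, WeldPos) rule."""
    return RULES[joy_angular][weld_pos]
```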
| Experiment | Elapsed Time (s) | Average Alignment Error (m) |
|---|---|---|
| 1 | 173.26 | 0.1283 |
| 2 | 173.27 | 0.1484 |
| 3 | 169.53 | 0.1349 |
| 4 | 176.73 | 0.1437 |
| 5 | 181.48 | 0.1460 |
| Average | 174.85 | 0.1403 |
| Standard deviation | 4.49 | 0.0084 |
| Experiment | Elapsed Time (s) | Average Alignment Error (m) |
|---|---|---|
| 1 | 198.06 | 0.0992 |
| 2 | 176.91 | 0.1051 |
| 3 | 184.46 | 0.0955 |
| 4 | 180.81 | 0.1014 |
| 5 | 185.97 | 0.1073 |
| Average | 185.24 | 0.1017 |
| Standard deviation | 7.98 | 0.0047 |
| Experiment | Elapsed Time (s) | Average Alignment Error (m) |
|---|---|---|
| 1 | 160.25 | 0.1078 |
| 2 | 158.02 | 0.0797 |
| 3 | 159.01 | 0.0954 |
| 4 | 158.85 | 0.0927 |
| 5 | 156.99 | 0.0883 |
| Average | 158.62 | 0.0928 |
| Standard deviation | 1.21 | 0.0103 |
| Experiment | Elapsed Time (s) | Average Alignment Error (m) |
|---|---|---|
| 1 | 162.29 | 0.1067 |
| 2 | 156.47 | 0.1066 |
| 3 | 156.59 | 0.1021 |
| 4 | 156.21 | 0.1029 |
| 5 | 157.44 | 0.1068 |
| Average | 157.80 | 0.1050 |
| Standard deviation | 2.55 | 0.0024 |
| Exp. | Time (s) | Manual (%) | Shared (%) | Supervisory (%) | Autonomous (%) | Avg. Autonomy | Transitions | Avg. Error (m) |
|---|---|---|---|---|---|---|---|---|
| 1 | 204.51 | 43.01 | 55.45 | 1.54 | 0.00 | 1.06 | 108 | 0.1032 |
| 2 | 195.01 | 13.55 | 83.45 | 3.00 | 0.00 | 1.29 | 60 | 0.1019 |
| 3 | 189.76 | 5.18 | 83.98 | 10.84 | 0.00 | 1.53 | 37 | 0.0907 |
| 4 | 215.25 | 10.32 | 76.32 | 13.36 | 0.00 | 1.51 | 66 | 0.1029 |
| 5 | 202.00 | 13.36 | 81.73 | 4.26 | 0.65 | 1.34 | 55 | 0.1068 |
| Level of Autonomy | Average Alignment Error (m) | Average Time (s) |
|---|---|---|
| Supervisory | 0.0922 | 158.28 |
| Sliding | 0.0982 | 204.09 |
| Shared | 0.1019 | 185.26 |
| Autonomous | 0.1054 | 158.32 |
| Manual | 0.1415 | 175.91 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).