Intuitive Spatial Tactile Feedback for Better Awareness about Robot Trajectory during Human–Robot Collaboration
Figure 1. Prototype of HMI: (a) 3D model of the HMI prototype, with the HMI cover set as translucent to visualise the internal placement of the components; (b) implemented prototype.
Figure 2. HMI hardware design: main dimensions.
Figure 3. The concept of improved HRC combines principles of active collision avoidance with the increased awareness provided by the wearable haptic feedback device.
Figure 4. (a) Trajectory execution: already executed segment of the trajectory, current state, and future segment (planned trajectory segment yet to be executed); (b) human worker equipped with the haptic feedback device, which provides a vibration alert about proximity to the future trajectory of the robot; the collision vector is denoted as $\vec{d}$.
Figure 5. Tactor placement around the hand: (a) function of each tactor (positive and negative directions of the X, Y, Z axes); (b) the spatial organisation represents a coordinate frame.
Figure 6. In directional feedback mode, the HMI notifies the user about the direction of a possible collision through differences in the activation of the tactors located around the hand.
Figure 7. Visualisation of the vibration intensity of the tactors: the lengths of the red vectors depict the vibration intensity of the right HMI's tactors, which depends on the magnitude of the collision vector (blue arrow) and the relative orientation of the corresponding HMI. The robot moves from the bottom (a) through an intermediate position (b) to the upper position (c) until no vibration is provided (d), since the magnitude of the collision vector exceeds the reaction distance. The red sphere represents the right HMI; red points represent the HMI point cloud; the HMI axes are shown by the XYZ frame (d).
Figure 8. Experimental workspace with the locations of the three RGB-D sensors (1–3). View directions (Z-axes) are depicted by the blue vectors.
Figure 9. Determining the HMI pose.
Figure 10. HMI tracker data flow chart.
Figure 11. Example of how the collision vector changes during a motion sequence: the robot moves from left to right (a–c); the collision vector is depicted by a blue arrow; the HMIs are represented by green and red spheres.
Figure 12. Hand position above the table during the experiment. The participants were instructed to identify the direction communicated by the device by moving the hand in the opposite direction (repulsive prompt) according to the tactors' placement. The movement directions corresponding to the activation of each tactor (+X, −X, +Y, etc.) are visualised as a coordinate frame.
Figure 13. Confusion matrix of all combinations of stimulation intensities (S: strong; W: weak) and directions (+X, −X, +Y, etc.) for all participants. The actual stimuli (ground truth) applied to the participants are listed along the vertical axis; the participants' responses are listed along the horizontal axis. Darker values indicate combinations reported with higher frequency. Precision and accuracy values are given below and to the right of the confusion matrix, respectively.
Figure 14. Preview of the workplace and the robot's goal positions (1–5); hand positions are marked with red (right hand) and green (left hand) spheres; the robot is in its initial position.
Figure 15. Average task completion time with standard errors for all 17 participants (lower is better).
Figure 16. Standard deviations of the task completion time (lower is better).
Figure 17. Average task success rate for all 17 participants (higher is better).
Figure 18. Average scores with standard errors for questions Q1–Q4 used in the user study. A score of 5 denotes "totally agree" and 1 "totally disagree". Questions Q1, Q3: higher is better; Q2, Q4: lower is better.
Figure 19. Average scores with standard errors for questions QH1–QH9 used for the evaluation of HMI usability. A score of 5 denotes "totally agree" and 1 "totally disagree". Questions QH1, QH3, QH7: higher is better; QH2, QH4, QH5, QH6, QH8, QH9: lower is better.
Figure 20. Average scores with standard errors for questions QHD1–QHD3 used for the evaluation of the usability of the V3 interface. A score of 5 denotes "totally agree" and 1 "totally disagree". Questions QHD1, QHD2: higher is better; QHD3: lower is better.
Figure 21. Collision vector computed during the movement of the robot to the first goal position: (a) names of the related robot links; (b) collision vector starting from the upper arm of the robot.
Abstract
1. Introduction
2. Materials and Methods
2.1. Improved Mutual Awareness for Human–Robot Collaboration
- maximum vibration intensity for distance notification,
- minimum vibration intensity for distance notification,
- reaction distance: the distance threshold at which the distance notification is activated,
- current collision vector $\vec{d}$ (see Figure 4); a minimal sketch combining these parameters follows this list.
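The body text with the governing formula did not survive extraction, so the sketch below is a minimal Python reading of how these parameters could combine, assuming a linear ramp between the minimum and maximum intensities (60% and 80%, per the notification table later in this document) within the reaction distance. The function name, the interpolation law, and the 0.40 m reaction distance are assumptions of this sketch, not the paper's stated implementation.

```python
import numpy as np

def distance_notification_intensity(d, i_max=0.80, i_min=0.60, d_reaction=0.40):
    """Vibration intensity as a function of the collision vector d.

    i_max, i_min -- max/min intensity for the distance notification
                    (80% / 60%, taken from the notification table)
    d_reaction   -- reaction distance in metres (0.40 m is a placeholder;
                    the excerpt does not state the actual value)
    A linear ramp between i_min and i_max is assumed here.
    """
    dist = np.linalg.norm(d)      # magnitude of the collision vector
    if dist >= d_reaction:
        return 0.0                # hand beyond the reaction distance: no vibration
    # closer hand -> stronger vibration, clamped to [i_min, i_max]
    return i_min + (i_max - i_min) * (1.0 - dist / d_reaction)
```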
2.2. Improved Haptic Notification Devices
2.3. Experimental Workspace
2.4. Improved Hand Tracking Stability
2.5. Motion Planning
Algorithm 1. Calculation of the collision vectors.
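Only the algorithm's caption survives in this excerpt. The sketch below is one assumed reading of such a calculation, consistent with Figures 4 and 21: the collision vector is taken as the shortest vector between the not-yet-executed part of the planned trajectory (sampled as points) and the tracked hand. The sampling strategy and the function name are illustrative, not taken from Algorithm 1.

```python
import numpy as np

def collision_vector(hand_pos: np.ndarray, future_points: np.ndarray) -> np.ndarray:
    """Shortest vector from the robot's future trajectory to the tracked hand.

    hand_pos      -- (3,) HMI/hand position in the world frame
    future_points -- (N, 3) points sampled along the not-yet-executed part
                     of the planned trajectory (e.g., robot link positions
                     at future waypoints); the sampling is an assumption
    Returns d, pointing from the closest future point towards the hand,
    matching the depiction in Figure 21 (vector starting at a robot link).
    """
    dists = np.linalg.norm(future_points - hand_pos, axis=1)
    closest = future_points[np.argmin(dists)]   # nearest point on the future trajectory
    return hand_pos - closest                   # collision vector d (robot -> hand)
```

The distance notification uses only the magnitude of $\vec{d}$; the directional mode (variant V3 in Section 4.1) additionally uses its direction.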
3. First User Study—Stimuli Differentiation
3.1. Experiment Description
3.2. Results
4. Second User Study—Trajectory Awareness
4.1. Experiment Description
- V1 – without HMI: the volunteer is not equipped with HMIs and receives no feedback on approaching the robot's future trajectory. Only visual information is available for determining the goal position of the robot (the volunteer sees the whole robot).
- V2 – HMIs in simultaneous motor activation mode (SMAM): the volunteer wears HMIs on both hands; the vibration motors are activated simultaneously, without directional differentiation. The volunteer can use the HMI feedback (when moving the hands) to determine the goal position of the robot and is instructed to watch their hands instead of watching the robot.
- V3 – HMIs in directional motor activation mode (DMAM): the volunteer wears HMIs on both hands; the activation of each vibration motor depends on the relative orientation between the collision vector and the corresponding HMI. The volunteer can use the HMI feedback (when moving the hands) to determine the goal position of the robot and is instructed to watch their hands instead of watching the robot. A sketch of this directional activation follows this list.
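To make the directional mode concrete, the sketch below shows one way per-tactor intensities could be derived from the collision vector and the HMI orientation, consistent with the tactor layout in Figure 5. The proportional split across tactors, the choice of which side fires, and all names are assumptions of this sketch; the excerpt states only that activation depends on the relative orientation between the collision vector and the HMI.

```python
import numpy as np

def dmam_tactor_intensities(d_world, R_hmi, overall_intensity):
    """Directional motor activation (V3): distribute the overall intensity
    across six tactors (+X, -X, +Y, -Y, +Z, -Z) arranged as a coordinate
    frame around the hand (Figure 5).

    d_world           -- collision vector in the world frame (robot -> hand)
    R_hmi             -- 3x3 rotation matrix of the HMI frame expressed in
                         the world frame (from the RGB-D hand tracker)
    overall_intensity -- scalar (0..1) from the distance notification
    """
    d_local = R_hmi.T @ d_world                             # collision vector in the HMI frame
    threat = -d_local / (np.linalg.norm(d_local) + 1e-9)    # direction towards the trajectory
    intensities = {}
    for axis, pos, neg in ((0, "+X", "-X"), (1, "+Y", "-Y"), (2, "+Z", "-Z")):
        c = threat[axis]
        # Convention assumed here: the tactor on the side facing the
        # trajectory fires, prompting a repulsive move away from it
        # (cf. the repulsive prompt in Figure 12).
        intensities[pos] = overall_intensity * max(c, 0.0)
        intensities[neg] = overall_intensity * max(-c, 0.0)
    return intensities
```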
4.2. Hypotheses
- H1.1: The efficiency of the test subjects will be greater with equipped HMIs (in both modes, V2 and V3) than without HMIs. Higher efficiency is defined as lower task completion time. This hypothesis is based on the expectation that both HMI modes contribute to task performance.
- H1.2: The efficiency of the test subjects will be greater with HMIs in directional feedback mode (V3) than with HMIs in simultaneous motor activation mode (V2). Higher efficiency is defined as lower task completion time. This hypothesis is based on the expectation that directional haptic feedback significantly contributes to the user's awareness of the future trajectory of the robot.
- H1.3: The time taken by each test subject to correctly determine the robot's goal position will be similar across all rounds when equipped with HMIs (V2, V3). In contrast, there will be high variation in task completion times between rounds when the determination is based solely on the available visual information (V1). To test this hypothesis, the task completion time will be measured for each subject in each condition, and the standard deviations of the measurements will be compared (see the analysis sketch after this list). This hypothesis is based on the expectation that user awareness enhanced by haptic feedback is not influenced by differences between trajectories, whereas with visual feedback alone, awareness depends on the actual trajectory shape.
- H2: The task efficacy (measured as task success rate) of the test subjects will be greater with the V3 interface than with V2. This hypothesis is based on the expectation that the additional information provided by directional feedback is more intuitive and comprehensive. It is also expected that the success rates of the V1 and V3 interfaces will be similar, as V3 is assumed to provide enough information for users to recognise the goal position of the robot even though they do not explicitly observe the robot's trajectory.
- H3.1: Volunteers will subjectively perceive the task as simpler when performing it equipped with HMIs (V2, V3) than when relying solely on the available visual information (V1). This hypothesis is based on the expectation that haptic feedback significantly contributes to the user's awareness of the future trajectory of the robot, thus making the task cognitively easier.
- H3.2: Volunteers will subjectively perceive the task as simpler when equipped with the HMI in directional feedback mode (V3) than with the other two interfaces (V1, V2). This hypothesis is based on the expectation that directional haptic feedback significantly contributes to the user's awareness of the future trajectory of the robot, thus making the task cognitively easier.
- H3.3: Volunteers will report better subjective ratings when equipped with the HMI in directional feedback mode (V3) than with the HMI without directional feedback (V2). This hypothesis is based on the expectation that directional haptic feedback further improves the user's awareness of the future trajectory of the robot.
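As a minimal illustration of how H1.3 could be evaluated, the sketch below computes the per-subject standard deviation of completion times within each interface condition and averages them per condition. The data layout and the aggregation are assumptions for illustration, not the paper's exact analysis.

```python
import numpy as np

def per_condition_variability(times):
    """Average per-subject variability of completion times per condition.

    times -- dict: condition -> {subject_id: [per-round completion times in s]}
             (this layout is an assumption of the sketch)
    Returns: condition -> mean of per-subject sample standard deviations;
             lower means more consistent performance across rounds.
    """
    summary = {}
    for condition, by_subject in times.items():
        sds = [np.std(rounds, ddof=1) for rounds in by_subject.values()]
        summary[condition] = float(np.mean(sds))
    return summary

# H1.3 predicts similar, low variability for V2 and V3 (with HMIs) and
# markedly higher variability for V1 (visual information only).
```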
4.3. Results—Objective Data Evaluation
4.4. Results—Subjective Data Evaluation
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
| Notification | Vibration Intensity | Directional | Duration | Description |
|---|---|---|---|---|
| Inaccessible goal notification | 85% | No | Continuous | The robot was not able to find a feasible path to the goal |
| Replanning notification | 95% | No | 0.3 s | The robot has replanned its motion in order to avoid a collision |
| Distance notification (maximum) | 80% | Yes | Continuous | The user's hand is about to block the currently planned robot trajectory |
| Distance notification (minimum) | 60% | Yes | Continuous | The user's hand is approaching the future segment of the robot's trajectory |
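For illustration, the notification types above could be encoded as a small configuration structure on the HMI driver side. The representation below (type, field, and key names) is invented for this sketch; only the numeric values are taken from the table.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Notification:
    """One haptic notification type from the table above."""
    intensity: float             # fraction of maximum motor power
    directional: bool            # True: tactors driven per DMAM; False: all together
    duration_s: Optional[float]  # None encodes a continuous notification

NOTIFICATIONS = {
    "inaccessible_goal": Notification(0.85, False, None),  # no feasible path to the goal
    "replanning":        Notification(0.95, False, 0.3),   # 0.3 s pulse after replanning
    "distance_max":      Notification(0.80, True,  None),  # hand about to block trajectory
    "distance_min":      Notification(0.60, True,  None),  # hand approaching future segment
}
```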
| General Questions |
|---|
| Q1. The task was clear for me |
| Q2. The task was demanding |
| Q3. It was simple to determine the goal position of the robot |
| Q4. More information was needed to accurately determine goal position of the robot |
| HMI-Related Questions |
|---|
| QH1. HMI improved my awareness of the robot's future trajectory |
| QH2. Work with HMI required long training |
| QH3. Work with HMI improved my confidence in safety during the task |
| QH4. Haptic feedback (vibration) from HMI was misleading |
| QH5. Haptic feedback (vibration) from HMI was too strong |
| QH6. Haptic feedback (vibration) from HMI was too weak |
| QH7. Haptic feedback (vibration) from HMI was sufficient |
| QH8. Haptic feedback (vibration) from HMI overwhelmed my perceptions |
| QH9. Use of HMI was inconvenient or caused unpleasant sensations during activation |
| HMI DMAM-Related Questions |
|---|
| QHD1. Haptic feedback (vibration) from HMI about the expected direction of the collision was clear for me |
| QHD2. Haptic feedback (vibration) from HMI about the expected direction of the collision captured the real trajectory of the robot correctly |
| QHD3. I was not able to determine the direction of the haptic feedback |
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).