Abstract
This work presents a panorama of resources for remote HRI identified in a systematic literature review focused on mixed reality solutions. Through the definition of the terms organized in the resulting ontology, the study maps the field of HRI with mixed reality and offers a reference that facilitates the creation of new robotic solutions relying on these resources.
Keywords
- HRI
- Human robot interaction
- Remote control
- Teleoperation
- Augmented reality
- Virtual reality
- Mixed reality
- Augmented virtuality
1 Introduction
Robotics is a rapidly expanding field. Improvements in remote human-robot interaction (HRI), together with the falling cost of robotic systems, have made robots accessible to the masses in the same way personal computers became part of everyday life; the iRobot Roomba and other home solutions are examples. Many studies [1] focus on the design of human-robot interaction, but much work remains to improve interaction with users.
Robots have evolved over the years toward more complex tasks; however, completely autonomous robots are usually specialized for a few specific tasks. When a task requires a higher level of comprehension of the surroundings, the robot may need help from a human operator to execute the adequate action.
Remotely operated robots have been developed to perform complex activities in inaccessible places or places that present high risks to humans, for example the maintenance of nuclear plants with high levels of radiation [2] and space exploration missions [3]. Robots are also used for search and rescue after urban disasters such as the World Trade Center attack [4] and earthquakes, and robots for urban search and rescue are continually improved through international competitions [5].
The objective of this paper is to produce an ontology that maps the current possibilities of HRI and the use of mixed reality as a facilitator for remote operations. To achieve this goal, a systematic review was conducted covering articles published up to July 2015. The contents of this paper are organized in five sections. Related works are presented in Sect. 2, followed by the methodology design in Sect. 3. The resulting ontology is presented in Sect. 4, followed by the conclusion and future works in Sect. 5.
2 Related Works
The field of HRI was the subject of a survey [1] that mapped the key topics of the area. After presenting a brief history of robotics and interaction, the study discussed HRI problems regarding robot design, information exchange, application areas, and possible solutions. Goodrich and Schultz also pointed to information fusion as a possible way to provide operational presence to the robot in remote interaction, and claimed that new ways to integrate that information have to be found. In this ontology we present characteristics of mixed reality displays for HRI and resources to create interfaces with integrated information. Based on this study, we identified the first four classes of our ontology and named them Robots, Interaction, Human Factors, and Scenarios.
In turn, Green et al. [6] conducted a literature review of human-robot collaboration with emphasis on AR solutions to aid interaction. Our ontology expands this approach in its mixed reality branch by including solutions not only with AR, but also with Virtual Reality, Augmented Virtuality, and videocentric displays.
Sheridan defined the concept of Levels of Automation (LOA) in [7] on a scale of ten levels of automation for a teleoperated system. He subsequently compiled in [8] frameworks that made this scale more flexible under the concept of adaptive autonomy. This concept was added as a subclass of the Robots branch in this study, since the robot's range of autonomy variation directly impacts the design of HRI.
Milgram [9] defined a taxonomy of mixed reality visual displays and a virtuality continuum to classify the level of reality or virtuality in a visual display. In our work we adapt the continuum to the field of HRI and present display possibilities as branches of the interaction output, next to the resources that arise from these mixed reality interfaces. The extremes of Milgram's continuum are the virtual environment and the real environment. In the context of remote operation the operator is never in the same environment as the robot; therefore, the most realistic interface we identified in our field of interest was the videocentric display. We believe that, by means of an extensive literature review, it is possible to organize the knowledge field of remote HRI and mixed reality displays into an ontology.
3 Methodology
The search for articles was performed in five search engines (ACM Digital Library, IEEE Xplore, Science Direct, Springer Link, and Scopus), covering 2005 to 2015 and accessed in June 2015. Search terms included variations of Robot, Teleoperation, Interaction, HRI, "augmented reality", "virtual reality", "augmented virtuality", and "mixed reality". The result was 893 articles, 46 of which were duplicates, leaving 847 for the initial analysis. Articles within the scope of the review were identified by analysis of title, abstract, and keywords; 316 articles passed to the next phase. In addition, 12 articles were included manually through related studies and manual search in the HRI and IFAC conferences, totaling 328.
Inclusion and exclusion criteria were then applied to the 328 remaining articles to ensure their scope related to the field of interest of this study. The inclusion criteria were the presence of remote interaction and of a mixed reality display. Studies without human factors or robots were excluded from the review. As a result, 222 articles were eliminated, leaving 106 articles for reading and quality evaluation.
To ensure that the articles were complete and adequate to empirical methods, according to the proposed solutions, they were meticulously evaluated with regard to completeness and quality. Each article was graded on 16 parameters, with grades from 0 to 1. The final score was the weighted average of these grades (13 parameters with weight 1 and 3 with weight 2), expressed on a scale from zero to 100 %. The three parameters with greater weight refer to: the presence of a detailed description of user testing; whether the paper proposes an interface or interaction tool; and whether the proposed interface is clearly described in the study. These parameters were given greater weight because they certify the presence of information relevant to this review. At the end of the process, 32 articles were selected for the data extraction phase, and elements of the proposed HRI-with-mixed-reality solutions were selected, based on their occurrence, to build the ontology.
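To make the scoring scheme concrete, the sketch below computes the final grade from the 16 parameter grades. The weight split (13 parameters with weight 1, 3 with weight 2) and the 0 to 1 grading come from the description above; the example grades are hypothetical.

```python
# Sketch of the quality-scoring scheme described above.
# Parameter grades are illustrative; only the weighting (13 parameters
# with weight 1, 3 parameters with weight 2) follows the text.

def final_score(weight1_grades, weight2_grades):
    """Weighted average of 0-1 grades, returned as a percentage."""
    assert len(weight1_grades) == 13 and len(weight2_grades) == 3
    total = sum(weight1_grades) + 2 * sum(weight2_grades)
    max_total = 13 * 1 + 3 * 2  # = 19
    return 100 * total / max_total

# Example: an article graded 1.0 on every parameter scores 100%.
print(final_score([1.0] * 13, [1.0] * 3))  # 100.0
```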
According to Noy and McGuinness [10], an ontology is a means to share and annotate information and a common vocabulary for a field of study. This study aims to group relevant terms of remote HRI and mixed realities. After the articles were selected, the relevant information about the four initial classes of the ontology (Robots, Interaction, Human Factors, and Scenarios) and the Mixed Reality Display subclass was extracted and organized to compose the subsequent branches of the ontology.
The identified factors were cataloged hierarchically, building a structure of classes and subclasses. Classes and subclasses were graphically organized as branches to represent the relations between them; because of this representation, classes and subclasses are often referred to as branches in this study. Interaction was the class with the most branches, since the scope of the ontology covers the factors that influence interface design using mixed reality displays for HRI.
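As an illustration of this class/subclass structure, the sketch below encodes the branches named in Sect. 4 as a nested dictionary and enumerates every path; the nesting reflects only the branches discussed in this paper and is not the authors' formal ontology artifact.

```python
# Sketch of the ontology's class/subclass hierarchy as nested dicts.
# Branch names are taken from Sect. 4; depth and ordering are illustrative.
ontology = {
    "Scenarios": {},
    "Robots": {"Operations": {}, "Automation": {}, "Components": {}},
    "Interaction": {
        "Output": {
            "Mixed Reality Display": {
                "Video Centric": {}, "Augmented Reality": {},
                "Augmented Virtuality": {}, "Virtual Reality": {},
            },
            "Resources": {},
        },
        "Input": {},
    },
    "Human Factors": {},
}

def branches(tree, prefix=""):
    """Yield every class/subclass path in the hierarchy."""
    for name, sub in tree.items():
        path = f"{prefix}/{name}" if prefix else name
        yield path
        yield from branches(sub, path)

for path in branches(ontology):
    print(path)
```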
4 Mixed Reality in HRI Ontology
The ontology was organized with four initial branches (Fig. 1): Human Factors, Robots, Interaction, and Scenarios. These subdivide into subsequent branches, detailing the universe of remote HRI and mixed reality displays. The resulting terms were defined according to their adequate meaning for the field of mixed reality in remote HRI.
4.1 Scenarios
Scenarios are the environments in which robot solutions are designed to operate. Each environment presents different challenges that remotely controlled robot applications must overcome to execute their tasks. Chen and Barnes [11] developed an interface to control a robot in a military scenario for target recognition tasks, simulating a multitask environment to evaluate the interfaces proposed in their experiments.
To enhance the performance of an Urban Search and Rescue (USAR) robotic system, Nielsen et al. [12] developed an interface to aid navigation tasks, since USAR scenarios involve challenging navigation terrain. It is important to identify and understand the peculiar characteristics of each environment, and the tasks specific to each scenario, in order to design HRI systems.
4.2 Robots
Remotely controlled robots receive instructions from the remote user and perform actions in their surroundings. Robots also send relevant information and feedback to the operator. With several building possibilities and different levels of automation, a robot's design should suit the nature of the operations it will execute, with proper components.
Operations. According to Steinfeld et al. [13], robot operations may be classified in five categories: navigation, manipulation, social, perception, and monitoring. The remote interface design must take into consideration the specific needs of the task to be performed in order to provide the necessary information in an objective way.
Automation. Robots have different levels of automation (LOA). Sheridan [8] presents frameworks and automation classification models that evolved over the years into the concept of adaptive autonomy, where the level of automation varies according to necessity. In adaptive autonomy there is an authority-allocation agent, a role that can be performed by a computer or by a human. In the model defined by Parasuraman et al. [14], adaptive automation is partitioned into four information-processing stages: information acquisition, analysis, decision, and action implementation. Which stages should be automated and which should be controlled by an operator must be taken into consideration when designing a remotely operated system.
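A minimal sketch of this stage-wise allocation idea follows: each of the four information-processing stages is assigned to the human or to the computer, with an authority-allocation agent that may reassign stages at run time. The workload-threshold policy is an assumption for illustration, not a model taken from [8] or [14].

```python
# Sketch: adaptive allocation of the four information-processing stages
# (Parasuraman et al. [14]) between human and computer. The workload
# threshold rule is a made-up policy for illustration only.

STAGES = ["information acquisition", "analysis",
          "decision", "action implementation"]

def allocate(operator_workload, threshold=0.7):
    """Authority-allocation agent: shift perception-side stages to the
    computer when the operator's (normalized) workload is too high."""
    allocation = {stage: "human" for stage in STAGES}
    if operator_workload > threshold:
        # Automate acquisition and analysis first; keep the decision
        # and its implementation under human authority.
        allocation["information acquisition"] = "computer"
        allocation["analysis"] = "computer"
    return allocation

print(allocate(operator_workload=0.9))
```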
Components. When building a robot, the components needed to execute the desired task must be taken into consideration. Sensors are the equipment responsible for detecting environmental signals, for instance cameras, laser radars, and GPS. Actuators are the mechanisms through which the robot interacts with the environment; robotic arms, wheels, and claws are common examples. The components that compose the robot affect the way remote control interfaces are designed.
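As a small illustration, the sketch below describes a robot configuration in terms of its sensors and actuators, since both constrain what a remote interface must display and command. The component names are examples from the text; the data structure itself is an assumption.

```python
# Sketch: a robot described by its sensors and actuators, both of which
# constrain what a remote interface can display and command.
from dataclasses import dataclass, field

@dataclass
class Robot:
    name: str
    sensors: list = field(default_factory=list)    # e.g. camera, laser, GPS
    actuators: list = field(default_factory=list)  # e.g. wheels, arm, claw

usar_robot = Robot(
    name="usar-rover",
    sensors=["camera", "laser rangefinder", "GPS"],
    actuators=["wheels", "robotic arm"],
)

# A designer can query the configuration when deciding which feedback
# channels (video, range data, position) the display must support.
print(usar_robot.sensors)
```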
4.3 Interaction
Interaction covers the means by which robot and user exchange information: the operator sends control commands, and the robot sends information that is presented to the operator. This branch covers control techniques and information design resources to create HRI interfaces. Mixed reality display modalities and interactive resources are described in the Output branches; in the Input branch, some possibilities that enable the user to send commands to the robot are exemplified.
Output. This is the information reaching the user through the system. Defining how and which information is presented is part of the interface design process, as is how the information will be arranged for the operator. Subsequent branches refer to mixed reality displays and the resources that can be used to build them.
Mixed Reality Display. This branch relates to which information is displayed to the user and how it is organized; each modality has characteristics that benefit usability in distinct ways. Milgram and Kishino [9] define a taxonomy of mixed reality displays and represent the variation of display environments in a virtuality continuum. In this study, from the point of view of remote HRI, we consider the videocentric display to be the reality extreme of the continuum. To design the remote operation interface, a display must be chosen to suit the system's purpose.
- Video Centric. A teleoperation interface that displays the streaming video sent by the robot as the center of the user's attention. It may or may not include additional adjacent information, but such information remains independent and separate from the video image. This type of interface can serve as a baseline for comparison with other interfaces in the mixed reality continuum because it does not overlay virtual elements. In [15], an interface denominated videocentric was developed with the objective of comparing it to an augmented reality interface, and [16] compared multiple displays with mixed reality elements to a videocentric interface.
- Augmented Reality. According to Azuma [17], an AR interface is defined by the coexistence of real and virtual objects in a real environment, aligned with each other and running interactively in real time. In remote HRI, AR is a powerful resource to help the operator perform teleoperated robot tasks. Often used to enhance depth perception by introducing distance information, AR interfaces may help reduce collision rates: in [18], AR interfaces were developed to aid robot navigation tasks and enhance accuracy by reducing collisions and close calls. A sketch of such a distance-cue overlay appears after this list.
- Augmented Virtuality. Real images or real objects are added to the virtual environment. Interfaces with these features were shown to assist robot navigation tasks in [15], and in [12] streaming video from the environment was combined with virtual maps.
- Virtual Reality. Composed only of virtual objects, without images of the real environment being displayed [9]. A laser-based teleoperation interface for navigation tasks that renders a virtual reality environment to the operator was developed in [19]; the laser-based data transferred faster than streaming video, reducing the delay between operator and robot.
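As a concrete illustration of the distance-cue overlays mentioned in the Augmented Reality item above, the sketch below draws a proximity warning onto a video frame with OpenCV. The frame source, range reading, and drawing choices are placeholders; this is not the code of the cited studies.

```python
# Sketch: overlaying a distance cue on a teleoperation video frame, in
# the spirit of the AR interfaces discussed above. The frame and the
# range reading are stand-ins; a real system would use the robot's
# camera stream and range sensors.
import numpy as np
import cv2

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder camera frame
distance_m = 0.8                                  # placeholder range reading

# Color the cue by proximity: red when close, green otherwise.
color = (0, 0, 255) if distance_m < 1.0 else (0, 255, 0)
cv2.putText(frame, f"obstacle: {distance_m:.1f} m", (20, 40),
            cv2.FONT_HERSHEY_SIMPLEX, 1.0, color, 2)
cv2.line(frame, (320, 470), (320, 350), color, 2)  # heading indicator

cv2.imwrite("ar_overlay.png", frame)
```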
Resources. This branch presents recurrent resources used in the reviewed studies. Other resources may be added to this topic as new solutions are developed.
- Force Feedback. It can be conveyed to the user both by haptic devices and by vibration. Barros et al. [20] developed an interface that informs the user of object proximity through vibration feedback when teleoperating a robot: distance is conveyed through the intensity of the vibration, and the position of objects is indicated by a belt attached to the operator's waist, with vibrotactile motors distributed according to the objects' position relative to the robot (a sketch of such a distance-to-intensity mapping appears after this list). In [21], haptic feedback was applied to indicate object proximity in a navigation task, resulting in better performance and presence than the other interaction variables in the conducted experiment.
- Sound. Audible feedback can be used as verbal warnings or abstractly, with specific meanings assigned to sounds in the system. In [22], spatial audio was used as one of the elements of a robot control interface in a search task in order to reduce the operator's workload.
- Graphic Elements. Points, lines, planes, and icons are recurrent elements in interface design. In mixed reality displays for remote HRI, these elements can be integrated into the robot's environment. In [18], an interface was developed that allows the user to plan the robot's trajectory by drawing the line of the path the robot will navigate beforehand.
- Stereoscopic View. A resource that enhances depth perception by displaying stereo images to the user. In [21], stereoscopic view was used in navigation tasks, with enhanced performance and sense of presence noted when the user received only stereoscopic feedback.
- 3D Models. Objects can be represented in a mixed reality display with 3D models; reviewed studies represented the robot with virtual 3D models with the aim of enhancing situational awareness in manipulation [23] and navigation [20] tasks. The objects can be represented realistically or with simplified elements.
- Map. Especially useful in navigation tasks, maps can be displayed in the mixed reality display beside the video streaming or virtual environment, or integrated with it. Maps can be static or animated to match the robot's position, and can be represented in 2D or 3D. In [24], the impact of different map representations on teleoperating a robot in a home environment was studied.
- Point of View. The position of the virtual or real camera changes the remote interface. In remote HRI, two possibilities are the exocentric view, which shows the robot on the interface, and the egocentric view, which shows the scenario from the robot's point of view; the camera can also be mobile, controlled by the operator. In [3], the impact of camera position on task performance in space telerobotic manipulation was studied by comparing exocentric and egocentric frames of reference; results indicated that the egocentric view may promote a potential improvement in performance.
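As flagged in the Force Feedback item, the sketch below maps an obstacle's distance and bearing to a vibrotactile command, loosely in the spirit of the waist belt of Barros et al. [20]; the motor count, range limit, and linear scaling are assumptions.

```python
# Sketch: mapping obstacle distance and bearing to vibrotactile output,
# loosely inspired by the waist-belt feedback of Barros et al. [20].
# Motor count, distance range, and the linear scaling are assumptions.
import math

N_MOTORS = 8          # motors spaced evenly around the operator's waist
MAX_RANGE_M = 2.0     # beyond this distance, no vibration

def vibration_command(distance_m, bearing_rad):
    """Return (motor index, intensity 0-1) for one detected obstacle."""
    if distance_m >= MAX_RANGE_M:
        return None
    # Closer obstacles vibrate more strongly.
    intensity = 1.0 - distance_m / MAX_RANGE_M
    # Pick the motor nearest the obstacle's bearing relative to the robot.
    motor = round(bearing_rad / (2 * math.pi) * N_MOTORS) % N_MOTORS
    return motor, intensity

print(vibration_command(0.5, math.pi / 2))  # (2, 0.75)
```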
Input. This regards the means by which the user sends commands to the robot: controllers and sensors focused on the operator. Gesture recognition by image [18] or haptic devices [21], joypads and joysticks [16], and the more common mouse and keyboard are the devices identified in the literature review. User input has to fit the mixed reality device to afford an appropriate user experience.
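As a small sketch of the input side, the snippet below maps discrete operator inputs to robot velocity commands, with keyboard keys standing in for the joysticks and gesture devices identified above; the bindings and speed values are illustrative assumptions.

```python
# Sketch: translating operator input into robot velocity commands.
# Keyboard keys stand in for the joysticks, joypads, and gesture devices
# identified in the review; bindings and speeds are illustrative.

KEY_BINDINGS = {
    "w": (0.5, 0.0),   # forward: (linear m/s, angular rad/s)
    "s": (-0.5, 0.0),  # backward
    "a": (0.0, 0.5),   # turn left
    "d": (0.0, -0.5),  # turn right
}

def command_from_key(key):
    """Return a (linear, angular) velocity command, or stop if unbound."""
    return KEY_BINDINGS.get(key, (0.0, 0.0))

print(command_from_key("w"))  # (0.5, 0.0)
```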
4.4 Human Factors
Human Factors is the collection of user-derived variables that should be taken into consideration when interaction is designed or evaluated. To evaluate human factors, the reviewed studies used both qualitative and quantitative approaches. Validated questionnaires were used to gather qualitative information; for example, [15] measured subjective operational workload based on the NASA Task Load Index (NASA-TLX) [25].
Variables such as time to completion, number of collisions, and robot idle time without instructions were quantified to estimate and compare performance. To evaluate performance, [20] analyzed quantitative data combining the time taken to complete the search task, average robot speed, the number of collisions, and other quantitative variables.
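A minimal sketch of aggregating such measures from a per-trial log follows; the log fields and values are assumptions for illustration.

```python
# Sketch: computing the quantitative performance measures mentioned above
# from a per-trial log. The log structure is an assumption.

trial = {
    "start_s": 0.0,
    "end_s": 312.4,
    "collisions": 2,
    "idle_intervals_s": [4.1, 7.8],  # periods with no operator command
}

time_to_completion = trial["end_s"] - trial["start_s"]
idle_time = sum(trial["idle_intervals_s"])

print(f"completion: {time_to_completion:.1f} s, "
      f"collisions: {trial['collisions']}, idle: {idle_time:.1f} s")
```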
5 Conclusion and Future Works
A literature review was conducted to identify resources and mixed reality display solutions for remote HRI. Starting from 893 reviewed articles, 32 were selected after assessing their scope, completeness, and quality. After article selection, the identified factors were organized into an ontology with four initial classes and 18 subclasses.
The categories were described to organize a specific vocabulary of remote HRI and mixed reality display solutions, highlighting the particular potential and challenges of each branch. Through the definition of the terms present in this ontology, this study builds a panorama of HRI with mixed reality displays, which serves as a reference to facilitate the creation of new robotic solutions with integrated information. With 27 branches, this study also aims at identifying subjects for further study, as well as aiding the design of experiments with users, based on the proposed field mapping.
The scope of this work focused on interaction techniques; in future works we aim to expand this ontology with regard to scenarios, human factors, and robot construction and development. The ontology will also be expanded as technology develops, to embody that progress.
References
Goodrich, M.A., Schultz, A.C.: Human-robot interaction: a survey. Found. Trends Hum.-Comput. Interact. 1(3), 203–275 (2007)
Heemskerk, C., Eendebak, P., Schropp, G., Hermes, H., Elzendoorn, B., Magielsen, A.: Introducing artificial depth cues to improve task performance in ITER maintenance actions. Fus. Eng. Des. 88(9–10), 1969–1972 (2013). Proceedings of the 27th Symposium on Fusion Technology, Belgium, pp. 24–28 (2012)
Lamb, P., Owen, D.: Human performance in space telerobotic manipulation. In: ACM Symposium on Virtual Reality Software and Technology, VRST 2005, pp. 31–37. ACM, New York (2005)
Casper, J., Murphy, R.R.: Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Trans. Syst. Man Cybern. Part B Cybern. 33(3), 367–385 (2003)
Liu, Y., Nejat, G.: Robotic urban search and rescue: a survey from the control perspective. J. Intell. Robot. Syst. 72(2), 147–165 (2013)
Green, S., Billinghurst, M., Chen, X., Chase, G.: Human-robot collaboration: a literature review and augmented reality approach in design. J. ARS, 1–18 (2007)
Sheridan, T.B., Verplanck, W.L.: Human and computer control of undersea teleoperators. Technical report, MIT Man-Machine Laboratory, Cambridge, MA (1978)
Sheridan, T.B.: Adaptive automation, level of automation, allocation authority, supervisory control, and adaptive control: distinctions and modes of adaptation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 41(4), 662–667 (2011)
Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. E77-D(12), 1321–1329 (1994)
Noy, N.F., McGuinness, D.L.: Ontology development 101: a guide to creating your first ontology. Stanford Knowledge Systems Laboratory Technical Report KSL-01-05 and Stanford Medical Informatics Technical Report SMI-2001-0880, Stanford, CA (2001)
Chen, J.Y., Barnes, M.J.: Robotics operator performance in a military multi-tasking environment. In: 3rd ACM/IEEE International Conference on Human Robot Interaction, HRI 2008, pp. 279–286. ACM, New York (2008)
Nielsen, C.W., Goodrich, M.A., Ricks, R.W.: Ecological interfaces for improving mobile robot teleoperation. IEEE Trans. Robot. 23(5), 927–941 (2007)
Steinfeld, A., Fong, T., Kaber, D., Lewis, M., Scholtz, J., Schultz, A., Goodrich, M.: Common metrics for human-robot interaction. In: 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, HRI 2006, pp. 33–40. ACM, New York (2006)
Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 30(3), 286–297 (2000)
Sanguino, J.T.M., Márquez, M.J.A., Carlson, T., Millán, J.: Improving skills and perception in robot navigation by an augmented virtuality assistance system. J. Intell. Robot. Syst. 76(2), 255–266 (2014)
Michaud, F., Boissy, P., Labonté, D., Brière, S., Perreault, K., Corriveau, H., Grant, A., Lauria, M., Cloutier, R., Roux, M.-A., Iannuzzi, D., Royer, M.-P., Ferland, F., Pomerleau, F., Létourneau, D.: Exploratory design and evaluation of a homecare teleassistive mobile robotic system. Mechatronics 20(7), 751–766 (2010). Special Issue on Design and Control Methodologies in Telerobotics (2010)
Azuma, R.T.: A survey of augmented reality. Presence Teleoper. Virtual Environ. 6(4), 355–385 (1997)
Green, S.A., Chase, J.G., Chen, V., Billinghurst, M.: Evaluating the augmented reality human-robot collaboration system. In: 2008 15th International Conference on Mechatronics and Machine Vision in Practice, M2VIP 2008, pp. 521–526 (2008)
Livatino, S., Muscato, G., Sessa, S., Neri, V.: Depth-enhanced mobile robot teleguide based on laser images. Mechatronics 20(7), 739–750 (2010). Special Issue on Design and Control Methodologies in Telerobotics (2010)
de Barros, P.G., Lindeman, R.W.: Performance effects of multi-sensory displays in virtual teleoperation environments. In: 1st Symposium on Spatial User Interaction, SUI 2013, pp. 41–48. ACM, New York (2013)
Lee, S., Kim, G.J.: Effects of haptic feedback, stereoscopy, and image resolution on performance and presence in remote navigation. Int. J. Hum.-Comput. Stud. 66(10), 701–717 (2008)
Haas, E., Stachowiak, C.: Multimodal displays to enhance human robot interaction on-the-move. In: 2007 Workshop on Performance Metrics for Intelligent Systems, PerMIS 2007, pp. 135–140. ACM, New York (2007)
Sauer, M., Leutert, F., Schilling, K.: An augmented reality supported control system for remote operation and monitoring of an industrial work cell. In: 2nd IFAC Symposium on Telematics Applications, pp. 83–88 (2010)
Ryu, H., Lee, W.: Where you point is where the robot is. In: 7th ACM SIGCHI New Zealand Chapter’s International Conference on Computer-Human Interaction: Design Centered HCI, CHINZ 2006, pp. 33–42. ACM, New York (2006)
Hart, S.G.: NASA-task load index (NASA-TLX); 20 years later. Hum. Factors Ergonomics Soc. Annu. Meet. 50(9), 904–908 (2006)