Virtual Object Manipulation by Combining Touch and Head Interactions for Mobile Augmented Reality
Figure 1. Conceptual diagram of manipulating a virtual object: an interaction that combines touch interaction and head movements in a mobile augmented reality (AR) environment on a head-mounted display (HMD). (A) The user selects the virtual object with ray casting using touch interaction and head motion. (B) The user manipulates the selected virtual object using touch interaction and head motion. (C) Picture of the index finger on a nail-mounted inertial measurement unit (IMU) sensor.

Figure 2. Flow diagram of virtual object manipulation.

Figure 3. Conceptual illustration of manipulating the height of a virtual object with head rotation.

Figure 4. Example image of how to manipulate the position of a virtual object using Head Interaction.

Figure 5. Pictures of each task involving the target object (brown car) and objective object (green car). (A) Translating the target object to the objective object along the y-axis (Manipulation 1). (B) Translating the target object to the objective object on the XY plane (Manipulation 2). (C) Rotating the target object to match the objective object around the y- and z-axes (Manipulation 3). (D) Translating and rotating the target object to match the objective object (Manipulation 4).

Figure 6. Mean completion time for each interaction (± standard deviation of the average).

Figure 7. Position error (± standard deviation of the average): mean (a) overall position error, (b) x-axis position error, (c) y-axis position error, and (d) z-axis position error for each interaction.

Figure 8. Angle error (± standard deviation of the average): mean (a) overall angle error, (b) x-axis angle error, (c) y-axis angle error, and (d) z-axis angle error for each interaction.

Figure 9. Average total simulator sickness questionnaire (SSQ) score and the three subscores for nausea, oculomotor symptoms, and disorientation for each interaction (± standard deviation of the average).

Figure 10. Average computer system usability questionnaire (CSUQ) score for each interaction (± standard deviation of the average).

Figure 11. Average questionnaire score for each interaction (± standard deviation of the average).
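The captions above outline the core mechanics: Figure 1A describes selection by casting a ray along the user's head direction and confirming with a touch, and Figure 3 describes raising or lowering an object with head rotation. As a rough illustration only (not the authors' implementation), the Python sketch below shows one way such head-ray selection and pitch-to-height mapping could work; the forward-axis convention, the bounding-sphere hit test, and the `gain` parameter are all assumptions.

```python
import numpy as np

def head_ray(head_position, head_rotation):
    """Ray from the head pose: origin at the head, direction along the
    head's forward axis (assumed here to be the rotation matrix's +z column)."""
    direction = head_rotation @ np.array([0.0, 0.0, 1.0])
    return head_position, direction / np.linalg.norm(direction)

def ray_hits_sphere(origin, direction, center, radius):
    """Test the head ray against an object's bounding sphere.
    Returns the hit distance along the ray, or None on a miss."""
    oc = center - origin
    t = float(np.dot(oc, direction))   # projection length to closest approach
    if t < 0.0:
        return None                    # object lies behind the user
    closest = origin + t * direction
    return t if np.linalg.norm(center - closest) <= radius else None

def select_object(head_position, head_rotation, objects, touch_down):
    """On touch-down, return the nearest object hit by the head ray, else None."""
    if not touch_down:
        return None
    origin, direction = head_ray(head_position, head_rotation)
    hits = []
    for obj in objects:
        t = ray_hits_sphere(origin, direction, obj["center"], obj["radius"])
        if t is not None:
            hits.append((t, obj))
    return min(hits, key=lambda h: h[0])[1] if hits else None

def height_from_pitch(start_height, pitch_delta, gain=0.5):
    """Linear mapping from a change in head pitch (radians) to a change in
    object height, in the spirit of Figure 3; `gain` is a made-up sensitivity."""
    return start_height + gain * pitch_delta
```

For example, a selected object's height could be updated each frame as `height_from_pitch(h0, current_pitch - pitch_at_grab)`, so looking up raises the object and looking down lowers it.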
Abstract
1. Introduction
- We design two interaction methods that combine touch interaction and head movements, allowing a user to conveniently manipulate a virtual object in a mobile AR environment.
- We confirm that, compared with mid-air hand interaction, the proposed interaction methods make the task of repeatedly manipulating a virtual object more convenient.
- Through user evaluation, we confirm that the interaction methods using head movements do not cause serious dizziness.
2. Related Work
2.1. Virtual Object Manipulation Method Using a Mouse
2.2. Virtual Object Manipulation Method with Touch Interaction
2.3. Virtual Object Manipulation Method with Hand Gestures
2.4. Virtual Object Manipulation Method with More Than Two Input Modalities
3. Methods
3.1. Finger Interaction: Touch Interaction for Manipulating a Virtual Object
3.2. Head Interaction: Head Motion (Rotation) to Manipulate a Virtual Object
3.3. Hand Interaction: Hand Gesture for Manipulating a Virtual Object
4. Experiments
4.1. Task
4.2. Apparatus
4.3. Participants
5. Results
5.1. Completion Time
5.2. Position Accuracy
5.3. Angle Accuracy
5.4. Dizziness
5.5. Usability
6. Discussion
6.1. Completion Time
6.2. Position Accuracy
6.3. Angle Accuracy
6.4. Dizziness
6.5. Usability
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
| Manipulation | Finger Interaction | Head Interaction | Hand Interaction |
|---|---|---|---|
| Selection | | | |
| Translation | | | |
| Rotation | | | |
No. | Question |
---|---|
1 | This interaction is convenient to select the target object. |
2 | This interaction is convenient to translate the target object. |
3 | This interaction is convenient to rotate the target object. |
4 | This interaction is convenient for manipulating a virtual object. |
5 | This interaction is useful for manipulating virtual objects repeatedly. |
6 | This interaction is convenient for use in public places. |
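Figure 11 reports the mean and standard deviation of these questionnaire scores per interaction. The following Python sketch shows how such an aggregation could be computed; the 7-point Likert scale and the sample responses are made up for illustration, and the study's real data are not reproduced here.

```python
import numpy as np

# Hypothetical 7-point Likert responses: rows = participants,
# columns = the six questionnaire items listed above.
responses = {
    "Finger Interaction": np.array([[6, 5, 5, 6, 6, 5],
                                    [5, 6, 4, 5, 6, 6],
                                    [6, 6, 5, 5, 5, 6]]),
    "Head Interaction":   np.array([[6, 6, 5, 6, 5, 6],
                                    [5, 5, 6, 6, 6, 5],
                                    [6, 5, 5, 6, 6, 6]]),
    "Hand Interaction":   np.array([[4, 3, 4, 4, 3, 4],
                                    [3, 4, 3, 4, 4, 3],
                                    [4, 4, 3, 3, 4, 4]]),
}

for name, scores in responses.items():
    per_participant = scores.mean(axis=1)   # mean score per participant
    mean = per_participant.mean()           # average, as plotted per interaction
    sd = per_participant.std(ddof=1)        # sample standard deviation
    print(f"{name}: {mean:.2f} ± {sd:.2f}")
```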