Cross-Device Augmented Reality Annotations Method for Asynchronous Collaboration in Unprepared Environments
<p>Figure 1: Testing our AR annotation method in different scenarios with different devices: HoloLens (<b>a</b>,<b>b</b>) and iPad (<b>c</b>).</p>
<p>Figure 2: Isometric projection (<b>a</b>) and plan view (<b>b</b>) of the calibration method for the low-specs device.</p>
<p>Figure 3: 360-degree image of the environment in which the experimental study was carried out.</p>
<p>Figure 4: Screenshots of the application during the calibration process.</p>
<p>Figure 5: User testing the application (<b>a</b>) and screenshot of the application during the AR annotation search (<b>b</b>).</p>
<p>Figure 6: AR annotations that users had to find during the experimental study.</p>
<p>Figure 7: Number of successful AR annotations for each user.</p>
<p>Figure 8: Percentage of hits for each of the AR annotations: large (L1 and H3), medium (L2 and H2) and small (L3 and H1).</p>
<p>Figure 9: Overlay of the AR annotations found by the 40 study participants.</p>
<p>Figure 10: Total time taken by participants to complete all tasks (calibration and annotation search).</p>
<p>Figure 11: Time spent by participants to calibrate the device.</p>
<p>Scheme 1: XML that implements an AR annotation for any device.</p>
Abstract
1. Introduction
2. Related Work
3. System Description
- High-specs smartphone configuration. In this configuration the device relies on Google’s ARCore platform to track its position and the surrounding environment. We developed the app with the Unity3D engine, using its AR abstraction layer, ARFoundation, which gives access to the functionalities of both ARCore on Android and ARKit on iOS devices.
- Low-specs smartphone configuration. To simulate this device, all ARCore functionalities are disabled and tracking relies only on the device’s gyroscope. This type of tracking can only estimate the rotation of the device, with no 3D position information, and limits the movement of users: they can only rotate the device around themselves to find annotations. The accuracy of this tracking system may depend heavily on how steadily users hold the device.
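The rotation-only tracking of the low-specs configuration can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: the `angular_offset` and `in_view` helpers, the field-of-view values, and the representation of annotations as yaw/pitch directions measured from the calibrated viewpoint are all hypothetical.

```python
def angular_offset(device_yaw, device_pitch, ann_yaw, ann_pitch):
    """Angular distance (degrees) from the screen center to an annotation.

    Rotation-only tracking: the device pose is just a yaw/pitch pair read
    from the gyroscope, and each annotation is stored as a direction from
    the user's fixed position, so no 3D translation is involved.
    """
    # Wrap the yaw difference into [-180, 180) so that, e.g., looking at
    # 350 degrees and an annotation at 10 degrees gives +20, not -340.
    d_yaw = (ann_yaw - device_yaw + 180.0) % 360.0 - 180.0
    d_pitch = ann_pitch - device_pitch
    return d_yaw, d_pitch


def in_view(device_yaw, device_pitch, ann_yaw, ann_pitch,
            h_fov=60.0, v_fov=45.0):
    """True if the annotation falls inside the camera frustum."""
    d_yaw, d_pitch = angular_offset(device_yaw, device_pitch,
                                    ann_yaw, ann_pitch)
    return abs(d_yaw) <= h_fov / 2.0 and abs(d_pitch) <= v_fov / 2.0
```

With such a scheme, annotations outside the frustum can still be used to drive off-screen guidance arrows, since their angular offset indicates which way the user should rotate.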
3.1. Calibration Method
3.2. Data Model
4. Study
4.1. Protocol Design
4.2. Task Description
- Set 1 (low-specs configuration): (L1) a computer monitor placed next to two other similar monitors, (L2) some filing cabinets placed between other objects of similar characteristics and (L3) an A4-size poster on the wall, placed next to others of the same size.
- Set 2 (high-specs configuration): (H1) an A4-size poster on the wall, placed next to others of the same size, (H2) a projector placed between other objects of similar characteristics and (H3) a computer monitor placed next to two other similar monitors.
4.3. Participants and Groups
5. Results and Discussion
5.1. Annotations Found and Objects Correctly Identified
5.2. User Satisfaction
5.3. Execution Times
6. Conclusions and Future Work
Author Contributions
Funding
Informed Consent Statement
Acknowledgments
Conflicts of Interest
References
- Bullen, C.V.; Johansen, R. Groupware, A Key to Managing Business Teams; Technical Report; MIT Sloan School of Management: Cambridge, MA, USA, 1988. [Google Scholar]
- Ellis, C.A.; Gibbs, S.J.; Rein, G. Groupware: Some issues and experiences. Commun. ACM 1991, 34, 39–58. [Google Scholar] [CrossRef] [Green Version]
- Irlitti, A.; Smith, R.T.; Itzstein, S.V.; Billinghurst, M.; Thomas, B.H. Challenges for Asynchronous Collaboration in Augmented Reality. In Proceedings of the 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Merida, Mexico, 19–23 September 2016; pp. 31–35. [Google Scholar] [CrossRef]
- Pidel, C.; Ackermann, P. Collaboration in Virtual and Augmented Reality: A Systematic Overview. In Augmented Reality, Virtual Reality, and Computer Graphics; Springer: Cham, Switzerland, 2020; pp. 141–156. [Google Scholar] [CrossRef]
- Ens, B.; Lanir, J.; Tang, A.; Bateman, S.; Lee, G.; Piumsomboon, T.; Billinghurst, M. Revisiting collaboration through mixed reality: The evolution of groupware. Int. J. Hum. Comput. Stud. 2019, 131, 81–98. [Google Scholar] [CrossRef]
- Sereno, M.; Wang, X.; Besancon, L.; Mcguffin, M.J.; Isenberg, T. Collaborative Work in Augmented Reality: A Survey. IEEE Trans. Vis. Comput. Graph. 2020. [Google Scholar] [CrossRef] [PubMed]
- Speicher, M.; Hall, S.D.; Yu, A.; Zhang, B.; Zhang, H.; Nebeling, J. XD-AR: Challenges and Opportunities in Cross-Device Augmented Reality Application Development. Proc. ACM Hum. Comput. Interact. 2018, 2, 7:1–7:24. [Google Scholar] [CrossRef]
- Wither, J.; DiVerdi, S.; Höllerer, T. Annotation in outdoor augmented reality. Comput. Graph. 2009, 33, 679–689. [Google Scholar] [CrossRef]
- Irizarry, J.; Gheisari, M.; Williams, G.; Walker, B.N. InfoSPOT: A mobile Augmented Reality method for accessing building information through a situation awareness approach. Autom. Constr. 2013, 33, 11–23. [Google Scholar] [CrossRef]
- Jalo, H.; Pirkkalainen, H.; Torro, O.; Kärkkäinen, H.; Puhto, J.; Kankaanpää, T. How Can Collaborative Augmented Reality Support Operative Work in the Facility Management Industry? In Proceedings of the 10th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, Seville, Spain, 18–20 September 2018; pp. 41–51. [Google Scholar] [CrossRef]
- Ioannidi, A.; Gavalas, D.; Kasapakis, V. Flaneur: Augmented exploration of the architectural urbanscape. In Proceedings of the 2017 IEEE Symposium on Computers and Communications (ISCC), Heraklion, Greece, 3–6 July 2017; pp. 529–533. [Google Scholar] [CrossRef]
- Daiber, F.; Kosmalla, F.; Krüger, A. BouldAR: Using augmented reality to support collaborative boulder training. In CHI ’13 Extended Abstracts on Human Factors in Computing Systems; ACM: New York, NY, USA, 2013; pp. 949–954. [Google Scholar] [CrossRef]
- Kasahara, S.; Heun, V.; Lee, A.S.; Ishii, H. Second surface: Multi-user spatial collaboration system based on augmented reality. In SIGGRAPH Asia 2012 Emerging Technologies; ACM: New York, NY, USA, 2012; pp. 1–4. [Google Scholar] [CrossRef] [Green Version]
- Martín-Gutiérrez, J.; Fabiani, P.; Benesova, W.; Meneses, M.D.; Mora, C.E. Augmented reality to promote collaborative and autonomous learning in higher education. Comput. Hum. Behav. 2015, 51, 752–761. [Google Scholar] [CrossRef]
- Huang, F.; Zhou, Y.; Yu, Y.; Wang, Z.; Du, S. Piano AR: A Markerless Augmented Reality Based Piano Teaching System. In Proceedings of the 2011 Third International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, 26–27 August 2011; Volume 2, pp. 47–52. [Google Scholar] [CrossRef]
- Ahuja, K.; Pareddy, S.; Xiao, R.; Goel, M.; Harrison, C. LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 20–23 October 2019; pp. 189–196. [Google Scholar] [CrossRef] [Green Version]
- Tregel, T.; Dutz, T.; Hock, P.; Müller, P.N.; Achenbach, P.; Göbel, S. StreetConqAR: Augmented Reality Anchoring in Pervasive Games. In Serious Games; Springer: Cham, Switzerland, 2020; pp. 3–16. [Google Scholar] [CrossRef]
- Lee, T.; Hollerer, T. Hybrid Feature Tracking and User Interaction for Markerless Augmented Reality. In Proceedings of the 2008 IEEE Virtual Reality Conference, Reno, NV, USA, 8–12 March 2008; pp. 145–152. [Google Scholar] [CrossRef]
- Azuma, R.; Weon Lee, J.; Jiang, B.; Park, J.; You, S.; Neumann, U. Tracking in unprepared environments for augmented reality systems. Comput. Graph. 1999, 23, 787–793. [Google Scholar] [CrossRef]
- Höllerer, T.; Wither, J.; DiVerdi, S. “Anywhere Augmentation”: Towards Mobile Augmented Reality in Unprepared Environments. In Location Based Services and TeleCartography; Gartner, G., Cartwright, W., Peterson, M.P., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 393–416. [Google Scholar] [CrossRef] [Green Version]
- Afif, F.N.; Basori, A.H. Orientation Control for Indoor Virtual Landmarks based on Hybrid-based Markerless Augmented Reality. Procedia Soc. Behav. Sci. 2013, 97, 648–655. [Google Scholar] [CrossRef] [Green Version]
- Xu, K.; Prince, S.J.D.; Cheok, A.D.; Qiu, Y.; Kumar, K.G. Visual registration for unprepared augmented reality environments. Pers. Ubiquitous Comput. 2003, 7, 287–298. [Google Scholar] [CrossRef]
- Langlotz, T.; Wagner, D.; Mulloni, A.; Schmalstieg, D. Online Creation of Panoramic Augmented Reality Annotations on Mobile Phones. IEEE Pervasive Comput. 2012, 11, 56–63. [Google Scholar] [CrossRef]
- Casas, S.; Portalés, C.; García-Pereira, I.; Gimeno, J. Mixing Different Realities in a Single Shared Space: Analysis of Mixed-Platform Collaborative Shared Spaces. In Harnessing the Internet of Everything (IoE) for Accelerated Innovation Opportunities; IGI Global: Hershey, PA, USA, 2019; pp. 175–192. [Google Scholar] [CrossRef] [Green Version]
- García-Pereira, I.; Gimeno, J.; Pérez, M.; Portalés, C.; Casas, S. MIME: A Mixed-Space Collaborative System with Three Immersion Levels and Multiple Users. In Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Munich, Germany, 16–20 October 2018; pp. 179–183. [Google Scholar] [CrossRef] [Green Version]
- Hoppe, A.H.; Westerkamp, K.; Maier, S.; van de Camp, F.; Stiefelhagen, R. Multi-user Collaboration on Complex Data in Virtual and Augmented Reality. In Proceedings of the HCI International 2018—Posters’ Extended Abstracts, Las Vegas, NV, USA, 15–20 July 2018; pp. 258–265. [Google Scholar]
- Butz, A.; Hollerer, T.; Feiner, S.; MacIntyre, B.; Beshers, C. Enveloping users and computers in a collaborative 3D augmented reality. In Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR’99), Washington, DC, USA, 20–21 October 1999; pp. 35–44. [Google Scholar] [CrossRef]
- MacWilliams, A.; Sandor, C.; Wagner, M.; Bauer, M.; Klinker, G.; Bruegge, B. Herding sheep: Live system for distributed augmented reality. In Proceedings of the Second IEEE and ACM International Symposium on Mixed and Augmented Reality, Tokyo, Japan, 10 October 2003; pp. 123–132. [Google Scholar] [CrossRef]
- Baillard, C.; Fradet, M.; Alleaume, V.; Jouet, P.; Laurent, A. Multi-device mixed reality TV: A collaborative experience with joint use of a tablet and a headset. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, New York, NY, USA, 8–10 November 2017; pp. 1–2. [Google Scholar] [CrossRef]
- Azure Spatial Anchors|Microsoft Azure. Available online: https://azure.microsoft.com/es-es/services/spatial-anchors/ (accessed on 5 August 2021).
- Portalés, C.; Casanova-Salas, P.; Casas, S.; Gimeno, J.; Fernández, M. An interactive cameraless projector calibration method. Virtual Real. 2020, 24, 109–121. [Google Scholar] [CrossRef]
- García-Pereira, I.; Gimeno, J.; Morillo, P.; Casanova-Salas, P. A Taxonomy of Augmented Reality Annotations. Valletta, Malta, 2020; pp. 412–419. Available online: https://www.scitepress.org/Link.aspx?doi=10.5220/0009193404120419 (accessed on 13 April 2021).
- Witmer, B.G.; Singer, M.J. Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence Teleoperators Virtual Environ. 1998, 7, 225–240. [Google Scholar] [CrossRef]
- Juan, M.-C.; García-García, I.; Mollá, R.; López, R. Users’ Perceptions Using Low-End and High-End Mobile-Rendered HMDs: A Comparative Study. Computers 2018, 7, 15. [Google Scholar] [CrossRef] [Green Version]
- Polvi, J.; Taketomi, T.; Yamamoto, G.; Dey, A.; Sandor, C.; Kato, H. SlidAR: A 3D positioning method for SLAM-based handheld augmented reality. Comput. Graph. 2016, 55, 33–43. [Google Scholar] [CrossRef] [Green Version]
Question | Factor |
---|---|
It was easy to calibrate the device (mark the three initial points). | RF |
It was easy to find annotations with the high-specs configuration. | RF |
It was easy to find annotations with the low-specs configuration. | RF |
It was easy to find out which objects were annotated with the high-specs configuration. | RF |
It was easy to find out which objects were annotated with the low-specs configuration. | RF |
Not being able to move around with the low-specs configuration was NOT a problem. | CF |
The use of the application did NOT require a great mental effort. | SF |
The amount of information displayed on the screen was adequate. | SF |
The information displayed on the screen was easy to read. | SF |
The information displayed on the screen was easy to understand. | SF |
The use of the application did NOT require a great physical effort. | EF |
The use of the smartphone during the experiment was comfortable (neck, shoulders, back, etc.). | EF |
At no time did I feel that the smartphone was going to fall out of my hands. | CF |
The handling of the application was simple and without complications. | CF |
The handling of the application was natural. | CF |
The application responded to my actions adequately. | CF |
I did NOT feel delays between my actions and the expected results. | CF |
I quickly got used to the application. | CF |
I focused on the contents within the application and not on the mobile device. | DF |
I think I have learned concepts and ideas about Augmented Reality annotations. | CF |
I would like to use a similar application for other purposes. | OF |
At the end of the experience I felt like an expert in handling the application. | CF |
I felt motivated during the experience. | OF |
I liked the experience. | OF |
What did you like most about the Augmented Reality annotation tool? | |
What improvements or changes would you suggest? | |
Rate the high-specs system. | |
Rate the low-specs system. | |
Parameter | Mean ± SD | t | p | Cohen’s d |
---|---|---|---|---|
RF | 5.145 ± 1.009 | −1.037 | 0.306 | −0.505 |
CF | 5.251 ± 0.834 | −2.046 | 0.048 | −1.089 |
SF | 5.350 ± 0.681 | −3.072 | 0.004 | −0.930 |
EF | 5.675 ± 0.685 | −1.160 | 0.253 | −1.083 |
DF | 5.410 ± 0.715 | 0.091 | 0.928 | −1.520 |
OF | 5.375 ± 0.632 | −1.990 | 0.054 | −4.349 |
SCORE H-S | 9.218 ± 0.951 | −1.594 | 0.119 | −0.511 |
SCORE L-S | 7.885 ± 1.855 | −1.182 | 0.245 | −0.379 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
García-Pereira, I.; Casanova-Salas, P.; Gimeno, J.; Morillo, P.; Reiners, D. Cross-Device Augmented Reality Annotations Method for Asynchronous Collaboration in Unprepared Environments. Information 2021, 12, 519. https://doi.org/10.3390/info12120519