DOI: 10.1145/2836041.2841207 · MUM '15 Conference Proceedings
Poster

InfoFinder: just-in-time information interface from the combination of an HWD with a smartwatch

Published: 30 November 2015

Abstract

We present InfoFinder, a novel interface for wearable devices that lets a user retrieve information about an object with a simple finger-framing gesture. InfoFinder combines a see-through head-worn display (HWD) with a smartwatch. When the smartwatch detects the user's finger-framing gesture, it activates the HWD camera, which extracts the finger-framed area and displays information about the recognized object. Because InfoFinder avoids continuous tracking with the HWD camera, it offers fast, intuitive information acquisition and reduces the chance of misrecognition. An experiment shows that InfoFinder responds to information-acquisition requests in a timely manner (4.1 seconds), more than 5 times faster than an HWD's conventional controller. We also verified that InfoFinder's task success rate was 32.8% higher than that of a reference prototype using real-time gesture tracking on the HWD.
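The pipeline the abstract describes can be sketched as an event-driven flow: the watch spots the framing gesture, and only then is the HWD camera asked for a single frame, so the HWD never tracks hands continuously. The sketch below is purely illustrative; the paper does not publish its implementation, and all function names (`detect_framing_gesture`, `crop_framed_region`, `on_watch_event`), the threshold-based gesture spotter, and the recognition callback are assumptions standing in for the real smartwatch classifier and recognition service.

```python
# Hypothetical sketch of InfoFinder's gesture-triggered capture pipeline.
# All names and the thresholding heuristic are illustrative assumptions,
# not the authors' implementation.
from dataclasses import dataclass

@dataclass
class Frame:
    width: int
    height: int

def detect_framing_gesture(accel_samples, threshold=1.5):
    """Toy watch-side gesture spotter: fires when peak acceleration
    magnitude exceeds a threshold. A real system would use a trained
    classifier over inertial data."""
    return max(abs(a) for a in accel_samples) > threshold

def crop_framed_region(frame, box):
    """Clamp the finger-framed area (x, y, w, h) to the frame bounds;
    stands in for cropping the captured image."""
    x, y, w, h = box
    return (x, y, min(w, frame.width - x), min(h, frame.height - y))

def on_watch_event(accel_samples, capture_fn, recognize_fn, box):
    """Only capture and recognize when the watch detects the framing
    gesture -- the HWD camera stays idle otherwise."""
    if not detect_framing_gesture(accel_samples):
        return None              # no gesture: no capture, no tracking
    frame = capture_fn()         # single on-demand HWD capture
    region = crop_framed_region(frame, box)
    return recognize_fn(frame, region)
```

The design point this illustrates is the one the abstract claims: because capture is gated on the watch's gesture event, the HWD performs no continuous vision-based hand tracking, which is what reduces both latency and misrecognition.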


Cited By

• (2022) DVF: Toward Semiautomatic Composition of Perceptual Images of a Virtual Scene Through Hand Gesture Interface. 2022 International Conference on Cyberworlds (CW), 169–170. DOI: 10.1109/CW55638.2022.00040. Online publication date: September 2022.


Published In
    MUM '15: Proceedings of the 14th International Conference on Mobile and Ubiquitous Multimedia
    November 2015
    442 pages
    ISBN:9781450336055
    DOI:10.1145/2836041
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Sponsors

    • FH OOE: University of Applied Sciences Upper Austria
    • Johannes Kepler Univ Linz: Johannes Kepler Universität Linz

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. HWD
    2. combination
    3. just-in-time
    4. smartwatch

    Qualifiers

    • Poster

    Conference

    MUM '15

    Acceptance Rates

MUM '15 Paper Acceptance Rate: 33 of 89 submissions, 37%
Overall Acceptance Rate: 190 of 465 submissions, 41%

