AHs 2021: Rovaniemi, Finland
- Jonna Häkkilä, Paul Strohmeier:
AHs '21: Augmented Humans Conference 2021, Rovaniemi, Finland, February 22-24, 2021. ACM 2021, ISBN 978-1-4503-8428-5
Session 1: Remixed Bodies
- Reiji Miura, Shunichi Kasahara, Michiteru Kitazaki, Adrien Verhulst, Masahiko Inami, Maki Sugimoto:
MultiSoma: Distributed Embodiment with Synchronized Behavior and Perception. 1-9
- Ryo Takizawa, Takayoshi Hagiwara, Adrien Verhulst, Masaaki Fukuoka, Michiteru Kitazaki, Maki Sugimoto:
Dynamic Shared Limbs: An Adaptive Shared Body Control Method Using EMG Sensors. 10-18
- Hideki Shimobayashi, Tomoya Sasaki, Arata Horie, Riku Arakawa, Zendai Kashino, Masahiko Inami:
Independent Control of Supernumerary Appendages Exploiting Upper Limb Redundancy. 19-30
- Ryoichi Ando, Isao Uebayashi, Hayato Sato, Hayato Ohbayashi, Shota Katagiri, Shuhei Hayakawa, Kouta Minamizawa:
Research on the transcendence of bodily differences, using sport and human augmentation medium. 31-39
- Yukiko Iwasaki, Hiroyasu Iwata:
Ubiquitous Body: Effect of Spatial Arrangement of Task's View on Managing Multiple Tasks. 40-44
Session 2: Augmented Cameras
- Nicole Han, Sudhanshu Srivastava, Aiwen Xu, Devi Klein, Michael Beyeler:
Deep Learning-Based Scene Simplification for Bionic Vision. 45-54
- Hiroaki Aoki, Ayumi Ohnishi, Naoya Isoyama, Tsutomu Terada, Masahiko Tsukamoto:
FaceRecGlasses: A Wearable System for Recognizing Self Facial Expressions Using Compact Wearable Cameras. 55-65
- Takumi Tochimoto, Yuichi Hiroi, Yuta Itoh:
CircadianVisor: Image Presentation with an Optical See-Through Display in Consideration of Circadian Illuminance. 66-76
- Chloe Eghtebas, Francisco Kiss, Marion Koelle, Pawel W. Wozniak:
Advantage and Misuse of Vision Augmentation - Exploring User Perceptions and Attitudes using a Zoom Prototype. 77-85
Session 3: Future of Speech Interfaces
- Hirotaka Hiraki, Jun Rekimoto:
SilentMask: Mask-type Silent Speech Interface with Measurement of Mouth Movement. 86-90
- Jun Rekimoto, Yu Nishimura:
Derma: Silent Speech Interaction Using Transcutaneous Motion Sensing. 91-100
- Jacob Logas, Georgianna Lin, Kelsie Belan, Advait Gogate, Thad Starner:
Conversational Partner's Perception of Subtle Display Use for Monitoring Notifications. 101-110
Session 4: Wearables Beyond the Wrist
- Tobias Röddiger, Michael Beigl, Michael Hefenbrock, Daniel Wolffram, Erik Pescara:
Detecting Episodes of Increased Cough Using Kinetic Earables. 111-115
- Kohei Aso, Dong-Hyun Hwang, Hideki Koike:
Portable 3D Human Pose Estimation for Human-Human Interaction using a Chest-Mounted Fisheye Camera. 116-120
- Denys J. C. Matthies, Chamod Weerasinghe, Bodo Urban, Suranga Nanayakkara:
CapGlasses: Untethered Capacitive Sensing with Smart Glasses. 121-130
- Atieh Taheri, Ziv Weissman, Misha Sra:
Exploratory Design of a Hands-free Video Game Controller for a Quadriplegic Individual. 131-140
- Fumihiko Nakamura, Adrien Verhulst, Kuniharu Sakurada, Maki Sugimoto:
Virtual Whiskers: Spatial Directional Guidance using Cheek Haptic Stimulation in a Virtual Environment. 141-151
Session 5: Physical Interfaces for Movement
- Swagata Das, Velika Wongchadakul, Yuichi Kurita:
SmartAidView Jacket: Providing visual aid to lower the underestimation of assistive forces. 152-156
- Jens Reinhardt, Marco Kurzweg, Katrin Wolf:
Virtual Physical Task Training: Comparing Shared Body, Shared View and Verbal Task Explanation. 157-168
- Mikihito Matsuura, Shio Miyafuji, Erwin Wu, Satoshi Kiyofuji, Taichi Kin, Takeo Igarashi, Hideki Koike:
CV-Based Analysis for Microscopic Gauze Suturing Training. 169-173
Session 6: Augmented Vision
- Yuki Kubota, Atsushi Hiyama, Masahiko Inami:
A Machine Learning Model Perceiving Brightness Optical Illusions: Quantitative Evaluation with Psychophysical Data. 174-182
- Yura Tamai, Maho Oki, Koji Tsukada:
POV Display and Interaction Methods extending Smartphone. 183-191
- Mikko Kytö, Ilyena Hirskyj-Douglas, David K. McGookin:
From Strangers to Friends: Augmenting Face-to-face Interactions with Faceted Digital Self-Presentations. 192-203
- Kenta Yamamoto, Ippei Suzuki, Kosaku Namikawa, Kaisei Sato, Yoichi Ochiai:
Interactive Eye Aberration Correction for Holographic Near-Eye Display. 204-214
Session 7: Augmentations from Head to Toes
- Kai Washino, Ayumi Ohnishi, Tsutomu Terada, Masahiko Tsukamoto:
Wearable System for Promoting Salivation. 215-222
- Hiroo Yamamura, Holger Baldauf, Kai Kunze:
HemodynamicVR - Adapting the User's Field Of View during Virtual Reality Locomotion Tasks to Reduce Cybersickness using Wearable Functional Near-Infrared Spectroscopy. 223-227
- Don Samitha Elvitigala, Jochen Huber, Suranga Nanayakkara:
Augmented Foot: A Comprehensive Survey of Augmented Foot Interfaces. 228-239
- Natsuki Hamanishi, Jun Rekimoto:
Motion-specific browsing method by mapping to a circle for personal video Observation with Head-Mounted Displays. 240-250
- Myung Jin Kim, Andrea Bianchi:
Exploring Pseudo Hand-Eye Interaction on the Head-Mounted Display. 251-258
Posters, Demos, and Design Exhibition
- Jessica Broscheit, Susanne Draheim, Kai von Luck, Qi Wang:
REFLECTIONS ON AIR: An Interactive Mirror for the Multisensory Perception of Air. 259-264
- Mina Khan, Glenn Fernandes, Pattie Maes:
PAL: Wearable and Personalized Habit-support Interventions in Egocentric Visual and Physiological Contexts. 265-267
- Soohyun Shin, JaeKyung Cho, Seong-Woo Kim:
Jumple: Interactive Contents for the Virtual Physical Education Classroom in the Pandemic Era. 268-270
- Adarsh Ravi, Hsin-Liu Cindy Kao:
Sparkle: A Detachable and Versatile Wearable Sensing Platform in a Sustainable Casing. 271-273
- Dávid Rozenberszki, Gábor Sörös:
Demo: Towards Universal User Interfaces for Mobile Robots. 274-276
- Shuyi Sun, Neha Deshmukh, Xin Chen, Hao-Chuan Wang, Katia Vega:
GemiN' I: Seamless Skin Interfaces Aiding Communication through Unconscious Behaviors. 277-279
- Yuma Akimoto, Kazuya Murao:
Design and Implementation of an Input Interface for Wearable Devices using Pulse Wave Control by Compressing the Upper Arm. 280-282
- Ryoya Onishi, Tao Morisaki, Shun Suzuki, Saya Mizutani, Takaaki Kamigaki, Masahiro Fujiwara, Yasutoshi Makino, Hiroyuki Shinoda:
DualBreath: Input Method Using Nasal and Mouth Breathing. 283-285
- Likun Fang, Tobias Röddiger, Felix Schmid, Michael Beigl:
EarRecorder: A Multi-Device Earable Data Collection Toolkit. 286-288
- Arinobu Niijima, Toki Takeda, Ryosuke Aoki, Yukio Koike:
Reducing Muscle Activity when Playing Tremolo by Using Electrical Muscle Stimulation. 289-291
- Michi Kanda, Kai Kunze:
Tranquillity at Home: Designing Plant-mediated Interaction for Fatigue Assessment. 292-294
- Timo Luukkonen, Ashley Colley, Tapio Seppänen, Jonna Häkkilä:
Cough Activated Dynamic Face Visor. 295-297
- Edouard Ferrand, Adrien Verhulst, Masahiko Inami, Maki Sugimoto:
Exploring a Dynamic Change of Muscle Perception in VR, Based on Muscle Electrical Activity and/or Joint Angle. 298-300
- Zhuoqi Fu, Jiawen Han, Dingding Zheng, Moe Sugawa, Taichi Furukawa, George Chernyshov, Danny Hynds, Marcelo Padovani, Karola Marky, Kouta Minamizawa, Jamie A. Ward, Kai Kunze:
Boiling Mind - A Dataset of Physiological Signals during an Exploratory Dance Performance. 301-303
- Çaglar Genç, Jonna Häkkilä:
Using Body Tracking for Involving Museum Visitors in Digital Storytelling. 304-306
- Christian Nordstrøm Rasmussen, Minna Pakanen, Marianne Graves Petersen:
Designing Socially Acceptable Light Therapy Glasses for Self-managing Seasonal Affective Disorder. 307-312
- Justin Kasowski, Nathan Wu, Michael Beyeler:
Towards Immersive Virtual Reality Simulations of Bionic Vision. 313-315
- Eiichi Hasegawa, Naoya Isoyama, Nobuchika Sakata, Kiyoshi Kiyokawa:
Moving Visual Stimuli on Smart Glasses Affects the Performance of Subsequent Tasks. 316-318
- Pramod Vadiraja, Andreas Dengel, Shoya Ishimaru:
Text Summary Augmentation for Intelligent Reading Assistant. 319-321