AH 2014: Kobe, Japan
- Tsutomu Terada, Masahiko Inami, Kai Kunze, Takuya Nojima: 5th Augmented Human International Conference, AH '14, Kobe, Japan, March 7-9, 2014. ACM 2014, ISBN 978-1-4503-2761-9
- Yuki Muramatsu, Takatsugu Hirayama, Kenji Mase: Video generation method based on user's tendency of viewpoint selection for multi-view video contents. 1:1-1:4
- Marina Mitani, Yasuaki Kakehi: Tearsense: a sensor system for illuminating and recording teardrops. 2:1-2:4
- Kozue Nojiri, Suzanne Low, Koki Toda, Yuta Sugiura, Yoichi Kamiyama, Masahiko Inami: Present information through afterimage with eyes closed. 3:1-3:4
- Yuichi Kurita, Jumpei Sato, Takayuki Tanaka, Minoru Shinohara, Toshio Tsuji: Unloading muscle activation enhances force perception. 4:1-4:4
- Shunsuke Koyama, Yuta Sugiura, Masa Ogata, Anusha I. Withana, Yuji Uema, Makoto Honda, Sayaka Yoshizu, Chihiro Sannomiya, Kazunari Nawa, Masahiko Inami: Multi-touch steering wheel for in-car tertiary applications using infrared sensors. 5:1-5:4
- Kazuya Murao, Tsutomu Terada: Evaluating effect of types of instructions for gesture recognition with an accelerometer. 6:1-6:4
- Asako Hosobori, Yasuaki Kakehi: Eyefeel & EyeChime: a face to face communication environment by augmenting eye gaze information. 7:1-7:4
- Tsutomu Terada, Seiji Takeda, Masahiko Tsukamoto, Yutaka Yanagisawa, Yasue Kishino, Takayuki Suyama: On achieving dependability for wearable computing by device bypassing. 8:1-8:4
- Erik Hill, Hiroyuki Hatano, Masahiro Fujii, Yu Watanabe: Haptic foot interface for language communication. 9:1-9:4
- Emi Tamaki, Ken Iwasaki: A half-implant device on fingernails. 10:1-10:4
- Suzanne Low, Yuta Sugiura, Dixon Lo, Masahiko Inami: Pressure detection on mobile phone by camera and flash. 11:1-11:4
- Jingyuan Cheng, Ayano Okoso, Kai Kunze, Niels Henze, Albrecht Schmidt, Paul Lukowicz, Koichi Kise: On the tip of my tongue: a non-invasive pressure-based tongue interface. 12:1-12:4
- Kei Nitta, Keita Higuchi, Jun Rekimoto: HoverBall: augmented sports with a flying ball. 13:1-13:4
- Gilang Andi Pradana, Adrian David Cheok, Masahiko Inami, Jordan Tewell, Yongsoon Choi: Emotional priming of mobile text messages with ring-shaped wearable device using color lighting and tactile expressions. 14:1-14:8
- Shoya Ishimaru, Kai Kunze, Koichi Kise, Jens Weppner, Andreas Dengel, Paul Lukowicz, Andreas Bulling: In the blink of an eye: combining head motion and eye blink frequency for activity recognition with Google Glass. 15:1-15:4
- Kei Nitta, Toshiki Sato, Hideki Koike, Takuya Nojima: PhotoelasticBall: a touch detectable ball using photoelasticity. 16:1-16:4
- Kohei Matsumura, Yasuyuki Sumi: CarCast: a framework for situated in-car conversation sharing. 17:1-17:4
- Markus Funk, Robin Boldt, Bastian Pfleging, Max Pfeiffer, Niels Henze, Albrecht Schmidt: Representing indoor location of objects on wearable computers with head-mounted displays. 18:1-18:4
- Kazutaka Kurihara, Yoko Sasaki, Jun Ogata, Masataka Goto: Two-level fast-forwarding using speech detection for rapidly perusing video. 19:1-19:2
- Risa Ishijima, Kayo Ogawa, Masakazu Higuchi, Takashi Komuro: Real-time typing action detection in a 3D pointing gesture interface. 20:1-20:2
- Victor A. Mateevitsi, Khairi Reda, Jason Leigh, Andrew E. Johnson: The health bar: a persuasive ambient display to improve the office worker's well being. 21:1-21:2
- Yaming Xu, Yoshikazu Nakajima: Single-trial decoding for an event-related potential-based brain-computer interface. 22:1-22:2
- Hyeonjoong Cho, Chulwon Kim: BubStack: a self-revealing chorded keyboard on touch screens to type for remote wall displays. 23:1-23:2
- Li-Wei Chan, Chien-Ting Weng, Rong-Hao Liang, Bing-Yu Chen: AnyButton: unpowered, modeless and highly available mobile input using unmodified clothing buttons. 24:1-24:2
- Ini Ryu, Itiro Siio: TongueDx: a tongue diagnosis for health care on smartphones. 25:1-25:2
- Takashi Ogata, Shohei Imabuchi, Taisuke Akimoto: Narratology and narrative generation: expanded literary theory and the integration as a narrative generation system (2). 26:1-26:2
- Keijiro Nakagawa, Hill Hiroki Kobayashi, Kaoru Sezaki: Carrier pigeon-like sensing system: animal-computer interface design for opportunistic data exchange interaction for a wildlife monitoring application. 27:1-27:2
- Ming Chang, Hiroyuki Iizuka, Yasushi Naruse, Hideyuki Ando, Taro Maeda: An interface for unconscious learning using mismatch negativity neurofeedback. 28:1-28:2
- Haruna Ishimatsu, Ryoko Ueoka: BITAIKA: development of self posture adjustment system. 30:1-30:2
- Tomomi Takashina, Miyuki Yanagi, Yoshiyuki Yamariku, Yoshikazu Hirayama, Ryota Horie, Michiko Ohkura: Toward practical implementation of emotion driven digital camera using EEG. 31:1-31:2
- Masakazu Iwamura, Kai Kunze, Yuya Kato, Yuzuko Utsumi, Koichi Kise: Haven't we met before?: a realistic memory assistance system to remind you of the person in front of you. 32:1-32:4
- Kouya Ishigaki, Ryoko Ueoka: Development of tactile biofeedback system for amplifying horror experience. 34:1-34:2
- Shintaro Kawabata, Shoji Sano, Tsutomu Terada, Masahiko Tsukamoto: A fault diagnostic system by line status monitoring for ubiquitous computers connected with multiple communication lines. 35:1-35:2
- Monica Perusquía-Hernández, Hella Kriening, Ana Carina Palumbo, Barbara Wajda: User-centered design of a lamp customization tool. 36:1-36:2
- Nuth Otanasap, Poonpong Boonbrahm: Fall prevention using head velocity extracted from visual based VDO sequences. 37:1-37:2
- Agnes Grünerbl, Venet Osmani, Gernot Bahle, José C. Carrasco-Jiménez, Stefan Oehler, Oscar Mayora, Christian Haring, Paul Lukowicz: Using smart phone mobility traces for the diagnosis of depressive and manic episodes in bipolar patients. 38:1-38:8
- Takehiro Niikura, Yoshihiro Watanabe, Masatoshi Ishikawa: Anywhere surface touch: utilizing any surface as an input area. 39:1-39:8
- Takanori Komatsu, Kazuki Kobayashi, Seiji Yamada, Kotaro Funakoshi, Mikio Nakano: Augmenting expressivity of artificial subtle expressions (ASEs): preliminary design guideline for ASEs. 40:1-40:10
- Eiji Suzuki, Takuji Narumi, Sho Sakurai, Tomohiro Tanikawa, Michitaka Hirose: Illusion cup: interactive controlling of beverage consumption based on an illusion of volume perception. 41:1-41:8
- Marc Cavazza, Fred Charles, Gabor Aranyi, Julie Porteous, Stephen W. Gilroy, Gal Raz, Nimrod Jakob Keynan, Avihay Cohen, Gilan Jackont, Yael Jacob, Eyal Soreq, Ilana Klovatch, Talma Hendler: Towards emotional regulation through neurofeedback. 42:1-42:8
- Junya Tominaga, Kensaku Kawauchi, Jun Rekimoto: Around me: a system with an escort robot providing a sports player's self-images. 43:1-43:8
- Marcus Tönnis, Gudrun Klinker: Boundary conditions for information visualization with respect to the user's gaze. 44:1-44:8
- Alireza Sahami Shirazi, Mariam Hassib, Niels Henze, Albrecht Schmidt, Kai Kunze: What's on your mind?: mental task awareness using single electrode brain computer interfaces. 45:1-45:4
- Shunichi Kasahara, Jun Rekimoto: JackIn: integrating first-person view with out-of-body vision generation for human-human augmentation. 46:1-46:8
- Masayuki Nakao, Tsutomu Terada, Masahiko Tsukamoto: An information presentation method for head mounted display considering surrounding environments. 47:1-47:8
- Max Pfeiffer, Stefan Schneegass, Florian Alt, Michael Rohs: Let me grab this: a comparison of EMS and vibration for haptic feedback in free-hand interaction. 48:1-48:8
- Kevin Fan, Jochen Huber, Suranga Nanayakkara, Masahiko Inami: SpiderVision: extending the human field of view for augmented awareness. 49:1-49:8
- Tomoya Ohta, Shumpei Yamakawa, Takashi Ichikawa, Takuya Nojima: TAMA: development of trajectory changeable ball for future entertainment. 50:1-50:8
- Hirotaka Sumitomo, Takuya Katayama, Tsutomu Terada, Masahiko Tsukamoto: Implementation and evaluation on a concealed interface using abdominal circumference. 51:1-51:8
- Makoto Tomioka, Sei Ikeda, Kosuke Sato: Pseudo-transparent tablet based on 3D feature tracking. 52:1-52:2
- Kohki Ikeuchi, Tomoaki Otsuka, Akihito Yoshii, Mizuki Sakamoto, Tatsuo Nakajima: KinecDrone: enhancing somatic sensation to fly in the sky with Kinect and AR.Drone. 53:1-53:2
- Ryo Izuta, Kazuya Murao, Tsutomu Terada, Masahiko Tsukamoto: Early gesture recognition method with an accelerometer. 54:1-54:2
- Shuhei Tsuchida, Tsutomu Terada, Masahiko Tsukamoto: A system for practicing formations in dance performance using a two-axis movable electric curtain track. 55:1-55:2
- Ayano Nishimura, Itiro Siio: iMake: computer-aided eye makeup. 56:1-56:2
- Naoya Isoyama, Tsutomu Terada, Masahiko Tsukamoto: An interactive system for recognizing user actions on a surface using accelerometers. 57:1-57:2
- Yoshiyuki Tei, Tsutomu Terada, Masahiko Tsukamoto: A multi-modal interface for performers in stuffed suits. 58:1-58:2
- Hiroki Watanabe, Tsutomu Terada, Masahiko Tsukamoto: A sound-based lifelog system using ultrasound. 59:1-59:2