IUI 2014: Haifa, Israel
- Tsvi Kuflik, Oliviero Stock, Joyce Yue Chai, Antonio Krüger:
19th International Conference on Intelligent User Interfaces, IUI 2014, Haifa, Israel, February 24-27, 2014. ACM 2014, ISBN 978-1-4503-2184-6
Keynote talks
- Wolfgang Wahlster:
Multiadaptive interfaces to cyber-physical environments. 1-2
- Mark Billinghurst:
Using augmented reality to create empathic experiences. 5-6
John Riedl session
- Lin Luo, Fei Wang, Michelle X. Zhou, Yingxin Pan, Hang Chen:
Who have got answers?: growing the pool of answerers in a smart enterprise social QA system. 7-16
- Amit Tiroshi, Shlomo Berkovsky, Mohamed Ali Kâafar, David Vallet, Terence Chen, Tsvi Kuflik:
Improving business rating predictions using graph based features. 17-26
- Stephen Wan, Cécile Paris:
Improving government services with social media feedback. 27-36
- Sunghyun Park, Philippa Shoemark, Louis-Philippe Morency:
Toward crowdsourcing micro-level behavior annotations: the challenges of interface, training, and generalization. 37-46
From touch through air to brain
- Zhensong Zhang, Fengjun Zhang, Hui Chen, Jiasheng Liu, Hongan Wang, Guozhong Dai:
Left and right hand distinction for multi-touch tabletop interactions. 47-56
- Daniel Buschek, Oliver Schoenleben, Antti Oulasvirta:
Improving accuracy in back-of-device multitouch typing: a clustering-based approach to keyboard updating. 57-66
- Philipp Mock, Jörg Edelmann, Andreas Schilling, Wolfgang Rosenstiel:
User identification using raw sensor data from typing on interactive displays. 67-72
- Ankit Kamal, Yang Li, Edward Lank:
Teaching motion gestures via recognizer feedback. 73-82
- Thomas Lampe, Lukas Dominique Josef Fiederer, Martin Voelker, Alexander Knorr, Martin A. Riedmiller, Tonio Ball:
A brain-computer interface for high-level remote control of an autonomous, reinforcement-learning-based robotic system for reaching and grasping. 83-88
- Hanaë Rateau, Laurent Grisoni, Bruno De Araujo:
Mimetic interaction spaces: controlling distant displays in pervasive environments. 89-94
Learning and skills
- Pascual Martínez-Gómez, Akiko Aizawa:
Recognition of understanding level and language skill using measurements of reading behavior. 95-104
- Dereck Toker, Ben Steichen, Matthew Gingerich, Cristina Conati, Giuseppe Carenini:
Towards facilitating user skill acquisition: identifying untrained visualization users through eye tracking. 105-114
- Cheng-Zhi Anna Huang, David Duvenaud, Kenneth C. Arnold, Brenton Partridge, Josiah W. Oberholtzer, Krzysztof Z. Gajos:
Active learning of intuitive control knobs for synthesizers using gaussian processes. 115-124
- Or Seri, Kobi Gal:
Visualizing expert solutions in exploratory learning environments using plan recognition. 125-132
- Jennifer C. Lai, Jie Lu, Shimei Pan, Danny Soroker, Mercan Topkara, Justin D. Weisz, Jeff Boston, Jason Crawford:
Expediting expertise: supporting informal social learning in the enterprise. 133-142
Intelligent visual interaction
- Jakub Dostal, Uta Hinrichs, Per Ola Kristensson, Aaron J. Quigley:
SpiderEyes: designing attention- and proximity-aware collaborative interfaces for wall-sized displays. 143-152
- Adam Perer, Fei Wang:
Frequence: interactive mining and visualization of temporal frequent event sequences. 153-162
- Ron Artstein, David R. Traum, Oleg Alexander, Anton Leuski, Andrew Jones, Kallirroi Georgila, Paul E. Debevec, William R. Swartout, Heather Maio, Stephen Smith:
Time-offset interaction with a holocaust survivor. 163-168
Users and motion
- Fangzhou Wang, Yang Li, Daisuke Sakamoto, Takeo Igarashi:
Hierarchical route maps for efficient navigation. 169-178
- Freddy Lécué, Simone Tallevi-Diotallevi, Jer Hayes, Robert Tucker, Veli Bicer, Marco Luca Sbodio, Pierpaolo Tommasi:
STAR-CITY: semantic traffic analytics and reasoning for CITY. 179-188
- Jierui Xie, Bart P. Knijnenburg, Hongxia Jin:
Location sharing privacy preference: analysis and personalized recommendation. 189-198
- Jaime Sánchez, Márcia de Borba Campos, Matías Espinoza, Lotfi B. Merabet:
Audio haptic videogaming for developing wayfinding skills in learners who are blind. 199-208
- Melissa Roemmele, Haley Archer-McClellan, Andrew S. Gordon:
Triangle charades: a data-collection game for recognizing actions in motion trajectories. 209-214
- Hansjörg Hofmann, Vanessa Tobisch, Ute Ehrlich, André Berton, Angela Mahr:
Comparison of speech-based in-car HMI concepts in a driving simulation study. 215-224
Leveraging social competencies
- Gianluca Schiavo, Alessandro Cappelletti, Eleonora Mencarini, Oliviero Stock, Massimo Zancanaro:
Overt or subtle?: supporting group conversations with automatically targeted directives. 225-234
- Denis Parra, Peter Brusilovsky, Christoph Trattner:
See what you want to see: visual user-driven approach for hybrid recommendation. 235-240
- Ronald Denaux, Vania Dimitrova, Lydia Lau, Paul Brna, Dhavalkumar Thakker, Christina M. Steiner:
Employing linked data and dialogue for modelling cultural awareness of a user. 241-246
- Kyumin Lee, Jalal Mahmud, Jilin Chen, Michelle X. Zhou, Jeffrey Nichols:
Who will retweet this?: automatically identifying and engaging strangers on Twitter to spread information. 247-256
Adaptive user interfaces
- Tina Walber, Chantal Neuhaus, Ansgar Scherp:
Tagging-by-search: automatic image region labeling using gaze information obtained from image search. 257-266
- Florian Alt, Stefan Schneegass, Jonas Auda, Rufat Rzayev, Nora Broy:
Using eye-tracking to support interaction with layered 3D interfaces on stereoscopic displays. 267-272
- Benjamin Rosman, Subramanian Ramamoorthy, M. M. Hassan Mahmud, Pushmeet Kohli:
On user behaviour adaptation under interface change. 273-278
- André Freitas, Edward Curry:
Natural language queries over heterogeneous linked data graphs: a distributional-compositional semantics approach. 279-288
- David A. Joyner, Ashok K. Goel, Nicolas M. Papin:
MILA-S: generation of agent-based simulations from conceptual models of complex systems. 289-298
- Louis Li, Krzysztof Z. Gajos:
Adaptive click-and-cross: adapting to both abilities and task improves performance of users with impaired dexterity. 299-304
Posters
- Pontus Wärnestål, Fredrik Kronlid:
Towards a user experience design framework for adaptive spoken dialogue in automotive contexts. 305-310
- Salman Cheema, Sarah Buchanan, Sumit Gulwani, Joseph J. LaViola Jr.:
A practical framework for constructing structured drawings. 311-316
- Bo Wu, Pedro A. Szekely, Craig A. Knoblock:
Minimizing user effort in transforming data by example. 317-322
- Corey Pittman, Joseph J. LaViola Jr.:
Exploring head tracked head mounted displays for first person robot teleoperation. 323-328
- Takumi Toyama, Daniel Sonntag, Andreas Dengel, Takahiro Matsuda, Masakazu Iwamura, Koichi Kise:
A mixed reality head-mounted text translation system using eye gaze input. 329-334
- Giusy Di Lorenzo, Marco Luca Sbodio, Francesco Calabrese, Michele Berlingerio, Rahul Nair, Fabio Pinelli:
AllAboard: visual exploration of cellphone mobility data to optimise public transport. 335-340
- Ingo R. Keck, Robert J. Ross:
Exploring customer specific KPI selection strategies for an adaptive time critical user interface. 341-346
- Spyros Kotoulas, Vanessa López, Marco Luca Sbodio, Pierpaolo Tommasi, Martin Stephenson, Pol Mac Aonghusa:
Improving cross-domain information sharing in care coordination using semantic web technologies. 347-352
- Jeongyun Kim, Jonghoon Seo, Tack-Don Han:
AR Lamp: interactions on projection-based augmented reality for interactive learning. 353-358
- Henry Lieberman, Elizabeth Rosenzweig, Christopher Fry:
Steptorials: mixed-initiative learning of high-functionality applications. 359-364
- Mark Cartwright, Bryan Pardo, Josh Reiss:
MIXPLORATION: rethinking the audio mixer interface. 365-370