37th UIST 2024: Pittsburgh, PA, USA
- Lining Yao, Mayank Goel, Alexandra Ion, Pedro Lopes:
Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, UIST 2024, Pittsburgh, PA, USA, October 13-16, 2024. ACM 2024, ISBN 979-8-4007-0628-8
Body as the interface
- Yijing Jiang, Julia Kleinau, Till Max Eckroth, Eve E. Hoggan, Stefanie Mueller, Michael Wessely:
MouthIO: Fabricating Customizable Oral User Interfaces with Integrated Sensing and Actuation. 1:1-1:16
- Akifumi Takahashi, Yudai Tanaka, Archit Tamhane, Alan Shen, Shan-Yuan Teng, Pedro Lopes:
Can a Smartwatch Move Your Fingers? Compact and Practical Electrical Muscle Stimulation in a Smartwatch. 2:1-2:15
- Andy Kong, Daehwa Kim, Chris Harrison:
Power-over-Skin: Full-Body Wearables Powered By Intra-Body RF Energy. 3:1-3:13
- Yu Lu, Dian Ding, Hao Pan, Yijie Li, Juntao Zhou, Yongjian Fu, Yongzhao Zhang, Yi-Chao Chen, Guangtao Xue:
HandPad: Make Your Hand an On-the-go Writing Pad via Human Capacitance. 4:1-4:16
Future of Typing
- Andreas Fender, Mohamed Kari:
OptiBasePen: Mobile Base+Pen Input on Passive Surfaces by Sensing Relative Base Motion Plus Close-Range Pen Position. 5:1-5:9
- Jisu Yim, Seoyeon Bae, Taejun Kim, Sunbum Kim, Geehyuk Lee:
Palmrest+: Expanding Laptop Input Space with Shear Force on Palm-Resting Area. 6:1-6:14
- Paul Streli, Mark Richardson, Fadi Botros, Shugao Ma, Robert Wang, Christian Holz:
TouchInsight: Uncertainty-aware Rapid Touch and Text Input for Mixed Reality from Egocentric Vision. 7:1-7:16
- Piyawat Lertvittayakumjorn, Shanqing Cai, Billy Dou, Cedric Ho, Shumin Zhai:
Can Capacitive Touch Images Enhance Mobile Keyboard Decoding? 8:1-8:17
Programming UI
- Yang Ouyang, Leixian Shen, Yun Wang, Quan Li:
NotePlayer: Engaging Computational Notebooks for Dynamic Presentation of Analytical Processes. 9:1-9:20
- Harrison Goldstein, Jeffrey Tao, Zac Hatfield-Dodds, Benjamin C. Pierce, Andrew Head:
Tyche: Making Sense of PBT Effectiveness. 10:1-10:16
- Ryan Yen, Jiawen Stefanie Zhu, Sangho Suh, Haijun Xia, Jian Zhao:
CoLadder: Manipulating Code Generation via Multi-Level Blocks. 11:1-11:20
- Yuan Tian, Jonathan K. Kummerfeld, Toby Jia-Jun Li, Tianyi Zhang:
SQLucid: Grounding Natural Language Database Queries with Interactive Explanations. 12:1-12:20
Beyond mobile
- Ryo Takahashi, Eric Whitmire, Roger Boldu, Shiu S. Ng, Wolf Kienzle, Hrvoje Benko:
picoRing: battery-free rings for subtle thumb-to-index input. 13:1-13:11
- Anandghan Waghmare, Ishan Chatterjee, Vikram Iyer, Shwetak N. Patel:
WatchLink: Enhancing Smartwatches with Sensor Add-Ons via ECG Interface. 14:1-14:13
- Riku Arakawa, Hiromu Yakura, Mayank Goel:
PrISM-Observer: Intervention Agent to Help Users Perform Everyday Procedures Sensed using a Smartwatch. 15:1-15:16
Dynamic Objects & Materials
- Lingyun Sun, Yitao Fan, Boyu Feng, Yifu Zhang, Deying Pan, Yiwen Ren, Yuyang Zhang, Qi Wang, Ye Tao, Guanyun Wang:
MagneDot: Integrated Fabrication and Actuation Methods of Dot-Based Magnetic Shape Displays. 16:1-16:18
- Aditya Retnanto, Emilie Faracci, Anup Sathya, Yukai Hung, Ken Nakagaki:
CARDinality: Interactive Card-shaped Robots with Locomotion and Haptics using Vibration. 17:1-17:14
- Yunyi Zhu, Cedric Honnet, Yixiao Kang, Junyi Zhu, Angelina J. Zheng, Kyle Heinz, Grace Tang, Luca Musk, Michael Wessely, Stefanie Mueller:
PortaChrome: A Portable Contact Light Source for Integrated Re-Programmable Multi-Color Textures. 18:1-18:13
- Mustafa Doga Dogan, Eric J. Gonzalez, Karan Ahuja, Ruofei Du, Andrea Colaço, Johnny Lee, Mar González-Franco, David Kim:
Augmented Object Intelligence with XR-Objects. 19:1-19:15
Manipulating Text
- Philippe Laban, Jesse Vig, Marti A. Hearst, Caiming Xiong, Chien-Sheng Wu:
Beyond the Chat: Executable and Verifiable Text-Editing with LLMs. 20:1-20:23
- Anyi Rao, Jean-Peïc Chou, Maneesh Agrawala:
ScriptViz: A Visualization Tool to Aid Scriptwriting based on a Large Movie Database. 21:1-21:13
- Zheer Xu, Shanqing Cai, Mukund Varma T., Subhashini Venugopalan, Shumin Zhai:
SkipWriter: LLM-Powered Abbreviated Writing on Tablets. 22:1-22:13
- Josh Pollock, Catherine Mei, Grace Huang, Elliot Evans, Daniel Jackson, Arvind Satyanarayan:
Bluefish: Composing Diagrams with Declarative Relations. 23:1-23:21
Hacking Perception
- Martin Feick, Kora Persephone Regitz, Lukas Gehrke, André Zenner, Anthony Tang, Tobias Patrick Jungbluth, Maurice Rekrut, Antonio Krüger:
Predicting the Limits: Tailoring Unnoticeable Hand Redirection Offsets in Virtual Reality to Individuals' Perceptual Boundaries. 24:1-24:13
- Andreia Valente, Dajin Lee, Seungmoon Choi, Mark Billinghurst, Augusto Esteves:
Modulating Heart Activity and Task Performance using Haptic Heartbeat Feedback: A Study Across Four Body Placements. 25:1-25:13
- Jas Brooks, Alex Mazursky, Janice Hixon, Pedro Lopes:
Augmented Breathing via Thermal Feedback in the Nose. 26:1-26:11
- Yatharth Singhal, Daniel Honrales, Haokun Wang, Jin Ryong Kim:
Thermal In Motion: Designing Thermal Flow Illusions with Tactile and Thermal Interaction. 27:1-27:13
Movement-based UIs
- Li Qiwei, Francesca Lameiro, Shefali Patel, Cristi Isaula-Reyes, Eytan Adar, Eric Gilbert, Sarita Schoenebeck:
Feminist Interaction Techniques: Social Consent Signals to Deter NCIM Screenshots. 28:1-28:14
- Munjeong Kim, Sunjun Kim:
Effects of Computer Mouse Lift-off Distance Settings in Mouse Lifting Action. 29:1-29:10
- Guanhua Zhang, Zhiming Hu, Andreas Bulling:
DisMouse: Disentangling Information from Mouse Movement Data. 30:1-30:13
- Md. Touhidul Islam, Noushad Sojib, Imran Kabir, Ashiqur Rahman Amit, Mohammad Ruhul Amin, Syed Masum Billah:
Wheeler: A Three-Wheeled Input Device for Usable, Efficient, and Versatile Non-Visual Interaction. 31:1-31:20
New Visualizations
- Liqi Cheng, Hanze Jia, Lingyun Yu, Yihong Wu, Shuainan Ye, Dazhen Deng, Hui Zhang, Xiao Xie, Yingcai Wu:
VisCourt: In-Situ Guidance for Interactive Tactic Training in Mixed Reality. 32:1-32:14
- Vishnu Sarukkai, Lu Yuan, Mia Tang, Maneesh Agrawala, Kayvon Fatahalian:
Block and Detail: Scaffolding Sketch-to-Image Generation. 33:1-33:13
- Jun Wang, Chun-Cheng Chang, Jiafei Duan, Dieter Fox, Ranjay Krishna:
EVE: Enabling Anyone to Train Robots using Augmented Reality. 34:1-34:13
- Dizhi Ma, Xiyun Hu, Jingyu Shi, Mayank Patel, Rahul Jain, Ziyi Liu, Zhengzhe Zhu, Karthik Ramani:
avaTTAR: Table Tennis Stroke Training with Embodied and Detached Visualization in Augmented Reality. 35:1-35:16
Big to Small Fab
- Ilan E. Moyer, Samuelle Bourgault, Devon Frost, Jennifer Jacobs:
Don't Mesh Around: Streamlining Manual-Digital Fabrication Workflows with Domain-Specific 3D Scanning. 36:1-36:16
- Xiaolong Li, Cheng Yao, Shang Shi, Shuyue Feng, Yujie Zhou, Haoye Dong, Shichao Huang, Xueyan Cai, Kecheng Jin, Fangtian Ying, Guanyun Wang:
E-Joint: Fabrication of Large-Scale Interactive Objects Assembled by 3D Printed Conductive Parts with Copper Plated Joints. 37:1-37:18
- Daniel Campos Zamora, Liang He, Jon E. Froehlich:
MobiPrint: A Mobile 3D Printer for Environment-Scale Design and Fabrication. 38:1-38:10
- Zezhou Sun, Devin J. Balkcom, Emily Whiting:
StructCurves: Interlocking Block-Based Line Structures. 39:1-39:11
Shared Spaces
- Shwetha Rajaram, Nels Numan, Balasaravanan Thoravi Kumaravel, Nicolai Marquardt, Andrew D. Wilson:
BlendScape: Enabling End-User Customization of Video-Conferencing Environments through Generative AI. 40:1-40:19
- Nels Numan, Shwetha Rajaram, Balasaravanan Thoravi Kumaravel, Nicolai Marquardt, Andrew D. Wilson:
SpaceBlender: Creating Context-Rich Collaborative Spaces Through Generative 3D Scene Blending. 41:1-41:25
- Clemens Nylandsted Klokmose, James R. Eagan, Peter van Hardenberg:
MyWebstrates: Webstrates as Local-first Software. 42:1-42:12
- Zhipeng Li, Christoph Gebhardt, Yves Inglin, Nicolas Steck, Paul Streli, Christian Holz:
SituationAdapt: Contextual UI Optimization in Mixed Reality with Situation Awareness via LLM Reasoning. 43:1-43:13
- Ludwig Sidenmark, Tianyu Zhang, Leen Al Lababidi, Jiannan Li, Tovi Grossman:
Desk2Desk: Optimization-based Mixed Reality Workspace Integration for Remote Side-by-side Collaboration. 44:1-44:15
Machine Learning for User Interfaces
- Jason Wu, Yi-Hao Peng, Xin Yue Amanda Li, Amanda Swearngin, Jeffrey P. Bigham, Jeffrey Nichols:
UIClip: A Data-driven Model for Assessing User Interface Design. 45:1-45:16
- Peitong Duan, Chin-Yi Cheng, Gang Li, Bjoern Hartmann, Yang Li:
UICrit: Enhancing Automated Design Evaluation with a UI Critique Dataset. 46:1-46:17
- Yue Jiang, Zixin Guo, Hamed Rezazadegan Tavakoli, Luis A. Leiva, Antti Oulasvirta:
EyeFormer: Predicting Personalized Scanpaths with Transformer-Guided Reinforcement Learning. 47:1-47:15
- Minh Duc Vu, Han Wang, Jieshan Chen, Zhuang Li, Shengdong Zhao, Zhenchang Xing, Chunyang Chen:
GPTVoiceTasker: Advancing Multi-step Mobile Task Efficiency Through Dynamic Interface Exploration and Learning. 48:1-48:17
- Yunpeng Song, Yiheng Bian, Yongtao Tang, Guiyu Ma, Zhongmin Cai:
VisionTasker: Mobile Task Automation Using Vision Based UI Understanding and LLM Task Planning. 49:1-49:17
Bodily Signals
- Liang Wang, Jiayan Zhang, Jinyang Liu, Devon McKeon, David Guy Brizan, Giles Blaney, Robert J. K. Jacob:
Empower Real-World BCIs with NIRS-X: An Adaptive Learning Framework that Harnesses Unlabeled Brain Signals. 50:1-50:16
- Hechuan Zhang, Xuewei Liang, Ying Lei, Yanjun Chen, Zhenxuan He, Yu Zhang, Lihan Chen, Hongnan Lin, Teng Han, Feng Tian:
Understanding the Effects of Restraining Finger Coactivation in Mid-Air Typing: from a Neuromechanical Perspective. 51:1-51:18
- Devyani McLaren, Jian Gao, Xiulun Yin, Rúbia Reis Guerra, Preeti Vyas, Chrys Morton, Xi Laura Cang, Yizhong Chen, Yiyuan Sun, Ying Li, John David Wyndham Madden, Karon E. MacLean:
What is Affective Touch Made Of? A Soft Capacitive Sensor Array Reveals the Interplay between Shear, Normal Stress and Individuality. 52:1-52:31
- Tianren Luo, Gaozhang Chen, Yijian Wen, Pengxiang Wang, Yachun Fan, Teng Han, Feng Tian:
Exploring the Effects of Sensory Conflicts on Cognitive Fatigue in VR Remappings. 53:1-53:16
Vision-based UIs
- Soroush Shahi, Vimal Mollyn, Cori Tymoszek Park, Runchang Kang, Asaf Liberman, Oron Levy, Jun Gong, Abdelkareem Bedri, Gierad Laput:
Vision-Based Hand Gesture Customization from a Single Demonstration. 54:1-54:14
- Xincheng Huang, Michael Yin, Ziyi Xia, Robert Xiao:
VirtualNexus: Enhancing 360-Degree Video AR/VR Collaboration with Environment Cutouts and Virtual Replicas. 55:1-55:12
- Nhan Tran, Ethan Yang, Angelique Taylor, Abe Davis:
Personal Time-Lapse. 56:1-56:13
- Ruyu Yan, Jiatian Sun, Abe Davis:
Chromaticity Gradient Mapping for Interactive Control of Color Contrast in Images and Video. 57:1-57:16
AI & Automation
- Ryan Yen, Jian Zhao:
Memolet: Reifying the Reuse of User-AI Conversational Memories. 58:1-58:22
- Anindya Das Antar, Somayeh Molaei, Yan-Ying Chen, Matthew L. Lee, Nikola Banovic:
VIME: Visual Interactive Model Explorer for Identifying Capabilities and Limitations of Machine Learning Models for Sequential Decision-Making. 59:1-59:21
- Sera Lee, Dae R. Jeong, Junyoung Choi, Jaeheon Kwak, Seoyun Son, Jean Y. Song, Insik Shin:
SERENUS: Alleviating Low-Battery Anxiety Through Real-time, Accurate, and User-Friendly Energy Consumption Prediction of Mobile Applications. 60:1-60:20
Future Fabrics
- Mackenzie Leake, Ross Daly:
ScrapMap: Interactive Color Layout for Scrap Quilting. 61:1-61:17
- Hannah Twigg-Smith, Yuecheng Peng, Emily Whiting, Nadya Peek:
What's in a cable? Abstracting Knitting Design Elements with Blended Raster/Vector Primitives. 62:1-62:20
- Yu Jiang, Alice C. Haynes, Narjes Pourjafarian, Jan O. Borchers, Jürgen Steimle:
Embrogami: Shape-Changing Textiles with Machine Embroidery. 63:1-63:15
- Megan Hofmann:
KODA: Knit-program Optimization by Dependency Analysis. 64:1-64:15
- Guanyun Wang, Junzhe Ji, Yunkai Xu, Lei Ren, Xiaoyang Wu, Chunyuan Zheng, Xiaojing Zhou, Xin Tang, Boyu Feng, Lingyun Sun, Ye Tao, Jiaji Li:
X-Hair: 3D Printing Hair-like Structures with Multi-form, Multi-property and Multi-function. 65:1-65:14
- Junyi Zhao, Pornthep Preechayasomboon, Tyler Christensen, Amirhossein H. Memar, Zhenzhen Shen, Nicholas Colonnese, Michael Khbeis, Mengjia Zhu:
TouchpadAnyWear: Textile-Integrated Tactile Sensors for Multimodal High Spatial-Resolution Touch Inputs with Motion Artifacts Tolerance. 66:1-66:14
Poses as Input
- Erwin Wu, Rawal Khirodkar, Hideki Koike, Kris Kitani:
SolePoser: Full Body Pose Estimation using a Single Pair of Insole Sensor. 67:1-67:9
- Ching-Yi Tsai, Ryan Yen, Daekun Kim, Daniel Vogel:
Gait Gestures: Examining Stride and Foot Strike Variation as an Input Method While Walking. 68:1-68:16
- Vimal Mollyn, Chris Harrison:
EgoTouch: On-Body Touch Input Using AR/VR Headset Cameras. 69:1-69:11
- Vasco Xu, Chenfeng Gao, Henry Hoffmann, Karan Ahuja:
MobilePoser: Real-Time Full-Body Pose Estimation and 3D Human Translation from IMUs in Mobile Consumer Devices. 70:1-70:11
- Xinshuang Liu, Yizhong Zhang, Xin Tong:
Touchscreen-based Hand Tracking for Remote Whiteboard Interaction. 71:1-71:14
- Tianhong Catherine Yu, Manru Mary Zhang, Peter He, Chi-Jung Lee, Cassidy Cheesman, Saif Mahmud, Ruidong Zhang, François Guimbretière, Cheng Zhang:
SeamPose: Repurposing Seams as Capacitive Sensors in a Shirt for Upper-Body Pose Tracking. 72:1-72:13
Storytime
- Jan Henry Belz, Lina Madlin Weilke, Anton Winter, Philipp Hallgarten, Enrico Rukzio, Tobias Grosse-Puppendahl:
Story-Driven: Exploring the Impact of Providing Real-time Context Information on Automated Storytelling. 73:1-73:15
- Zhihao Yao, Yao Lu, Qirui Sun, Shiqing Lyu, Hanxuan Li, Xing-Dong Yang, Xuezhu Wang, Guanhong Liu, Haipeng Mi:
Lumina: A Software Tool for Fostering Creativity in Designing Chinese Shadow Puppets. 74:1-74:15
- Tongyu Zhou, Joshua Kong Yang, Vivian Hsinyueh Chan, Ji-Won Chung, Jeff Huang:
PortalInk: 2.5D Visual Storytelling with SVG Parallax and Waypoint Transitions. 75:1-75:16
- Karl Toby Rosenberg, Rubaiat Habib Kazi, Li-Yi Wei, Haijun Xia, Ken Perlin:
DrawTalking: Building Interactive Worlds by Sketching and Speaking. 76:1-76:25
- John Joon Young Chung, Max Kreminski:
Patchview: LLM-powered Worldbuilding with Generative Dust and Magnet Visualization. 77:1-77:19
- Rui He, Huaxin Wei, Ying Cao:
An Interactive System for Supporting Creative Exploration of Cinematic Composition Designs. 78:1-78:15
New realities
- Florian Fischer, Aleksi Ikkala, Markus Klar, Arthur Fleig, Miroslav Bachinski, Roderick Murray-Smith, Perttu Hämäläinen, Antti Oulasvirta, Jörg Müller:
SIM2VR: Towards Automated Biomechanical Testing in VR. 79:1-79:15
- Mathias N. Lystbæk, Thorbjørn Mikkelsen, Roland Krisztandl, Eric J. Gonzalez, Mar González-Franco, Hans Gellersen, Ken Pfeuffer:
Hands-on, Hands-off: Gaze-Assisted Bimanual 3D Interaction. 80:1-80:12
- Yeonsu Kim, Jisu Yim, Kyunghwan Kim, Yohan Yun, Geehyuk Lee:
Pro-Tact: Hierarchical Synthesis of Proprioception and Tactile Exploration for Eyes-Free Ray Pointing on Out-of-View VR Menus. 81:1-81:11
- Hyuna Seo, Juheon Yi, Rajesh Balan, Youngki Lee:
GradualReality: Enhancing Physical Object Interaction in Virtual Reality via Interaction State-Aware Blending. 82:1-82:14
- Mark Richardson, Fadi Botros, Yangyang Shi, Pinhao Guo, Bradford J. Snow, Linguang Zhang, Jingming Dong, Keith Vertanen, Shugao Ma, Robert Wang:
StegoType: Surface Typing from Egocentric Cameras. 83:1-83:14
- Uta Wagner, Andreas Asferg Jacobsen, Tiare Feuchtner, Hans Gellersen, Ken Pfeuffer:
Eye-Hand Movement of Objects in Near Space Extended Reality. 84:1-84:13
A11y
- Jaylin Herskovitz, Andi Xu, Rahaf Alharbi, Anhong Guo:
ProgramAlly: Creating Custom Visual Access Programs via Multi-Modal End-User Programming. 85:1-85:15
- Dan Zhang, Zhi Li, Vikas Ashok, William H. Seiple, I. V. Ramakrishnan, Xiaojun Bi:
Accessible Gesture Typing on Smartphones for People with Low Vision. 86:1-86:11
- Vinitha Ranganeni, Varad Dhat, Noah Ponto, Maya Cakmak:
AccessTeleopKit: A Toolkit for Creating Accessible Web-Based Interfaces for Tele-Operating an Assistive Robot. 87:1-87:12
- Shuchang Xu, Chang Chen, Zichen Liu, Xiaofu Jin, Linping Yuan, Yukang Yan, Huamin Qu:
Memory Reviver: Supporting Photo-Collection Reminiscence for People with Visual Impairment via a Proactive Chatbot. 88:1-88:17
- Joshua Gorniak, Yoon Kim, Donglai Wei, Nam Wook Kim:
VizAbility: Enhancing Chart Accessibility with LLM-based Conversational Interaction. 89:1-89:19
- Yuhao Zhu, Ethan Chen, Colin Hascup, Yukang Yan, Gaurav Sharma:
Computational Trichromacy Reconstruction: Empowering the Color-Vision Deficient to Recognize Colors Using Augmented Reality. 90:1-90:17
AI as Copilot
- Chengbo Zheng, Yuanhao Zhang, Zeyu Huang, Chuhan Shi, Minrui Xu, Xiaojuan Ma:
DiscipLink: Unfolding Interdisciplinary Information Seeking Process via Human-AI Co-Exploration. 91:1-91:20
- Majeed Kazemitabaar, Jack Williams, Ian Drosos, Tovi Grossman, Austin Zachary Henley, Carina Negreanu, Advait Sarkar:
Improving Steering and Verification in AI-Assisted Data Analysis with Interactive Task Decomposition. 92:1-92:19
- Xiaohang Tang, Sam Wong, Kevin Pu, Xi Chen, Yalong Yang, Yan Chen:
VizGroup: An AI-assisted Event-driven System for Collaborative Programming Learning Analytics. 93:1-93:22
- Johanna K. Didion, Krzysztof Wolski, Dennis Wittchen, David Coyle, Thomas Leimkühler, Paul Strohmeier:
Who did it? How User Agency is influenced by Visual Properties of Generated Images. 94:1-94:17
- Nabin Khanal, Chun Meng Yu, Jui-Cheng Chiu, Anav Chaudhary, Ziyue Zhang, Kakani Katija, Angus G. Forbes:
FathomGPT: A natural language interface for interactively exploring ocean science data. 95:1-95:15
- Lei Zhang, Jin Pan, Jacob Gettig, Steve Oney, Anhong Guo:
VRCopilot: Authoring 3D Layouts with Generative AI Models in VR. 96:1-96:13
Prototyping
- Hongbo Zhang, Pei Chen, Xuelong Xie, Chaoyi Lin, Lianyan Liu, Zhuoshu Li, Weitao You, Lingyun Sun:
ProtoDreamer: A Mixed-prototype Tool Combining Physical Model and Generative AI to Support Conceptual Design. 97:1-97:18
- Willa Yunqi Yang, Yifan Zou, Jingle Huang, Raouf Abujaber, Ken Nakagaki:
TorqueCapsules: Fully-Encapsulated Flywheel Actuation Modules for Designing and Prototyping Movement-Based and Kinesthetic Interaction. 98:1-98:15
- Boyu Li, Linping Yuan, Zhe Yan, Qianxi Liu, Yulin Shen, Zeyu Wang:
AniCraft: Crafting Everyday Objects as Physical Proxies for Prototyping 3D Character Animation in Mixed Reality. 99:1-99:14
- Peizhong Gao, Fan Liu, Di Wen, Yuze Gao, Linxin Zhang, Chikelei Wang, Qiwei Zhang, Yu Zhang, Shao-en Ma, Qi Lu, Haipeng Mi, Yingqing Xu:
Mul-O: Encouraging Olfactory Innovation in Various Scenarios Through a Task-Oriented Development Platform. 100:1-100:17
Hot Interfaces
- Haokun Wang, Yatharth Singhal, Hyunjae Gil, Jin Ryong Kim:
Fiery Hands: Designing Thermal Glove through Thermal and Tactile Integration for Virtual Object Manipulation. 101:1-101:15
- Ximing Shen, Youichi Kamiyama, Kouta Minamizawa, Jun Nishida:
DexteriSync: A Hand Thermal I/O Exoskeleton for Morphing Finger Dexterity Experience. 102:1-102:12
- Seongjun Kang, Gwangbin Kim, Seokhyun Hwang, Jeongju Park, Ahmed Ibrahim Ahmed Mohamed Elsharkawy, SeungJun Kim:
Flip-Pelt: Motor-Driven Peltier Elements for Rapid Thermal Stimulation and Congruent Pressure Feedback in Virtual Reality. 103:1-103:15
- Sosuke Ichihashi, Masahiko Inami, Hsin-Ni Ho, Noura Howell:
Hydroptical Thermal Feedback: Spatial Thermal Feedback Using Visible Lights and Water. 104:1-104:19
Generating Visuals
- Amrita Ganguly, Chuan Yan, John Joon Young Chung, Tong Steven Sun, Yoon Kiheon, Yotam I. Gingold, Sungsoo Ray Hong:
ShadowMagic: Designing Human-AI Collaborative Support for Comic Professionals' Shadowing. 105:1-105:15
- Nicholas Jennings, Han Wang, Isabel Li, James Smith, Bjoern Hartmann:
What's the Game, then? Opportunities and Challenges for Runtime Behavior Generation. 106:1-106:13
- Mingxu Zhou, Dengming Zhang, Weitao You, Ziqi Yu, Yifei Wu, Chenghao Pan, Huiting Liu, Tianyu Lao, Pei Chen:
StyleFactory: Towards Better Style Alignment in Image Creation through Style-Strength-Based Control and Evaluation. 107:1-107:15
- Liuqing Chen, Qianzhi Jing, Yixin Tsang, Qianyi Wang, Ruocong Liu, Duowei Xia, Yunzhan Zhou, Lingyun Sun:
AutoSpark: Supporting Automobile Appearance Design Ideation with Kansei Engineering and Generative AI. 108:1-108:19
Sustainable Interfaces
- Qiuyu Lu, Semina Yi, Mengtian Gan, Jihong Huang, Xiao Zhang, Yue Yang, Chenyi Shen, Lining Yao:
Degrade to Function: Towards Eco-friendly Morphing Devices that Function Through Programmed Sequential Degradation. 109:1-109:24
- Ruowang Zhang, Stefanie Mueller, Gilbert Louis Bernstein, Adriana Schulz, Mackenzie Leake:
WasteBanned: Supporting Zero Waste Fashion Design Through Linked Edits. 110:1-110:13
- Sutirtha Roy, Moshfiq-Us-Saleheen Chowdhury, Jurjaan Onayza Noim, Richa Pandey, Aditya Shekhar Nittala:
HoloChemie - Sustainable Fabrication of Soft Biochemical Holographic Devices for Ubiquitous Sensing. 111:1-111:19
Next Gen Input
- Unai Javier Fernández, Iosune Sarasate, Iñigo Fermín Ezcurdia, Manuel López-Amo, Ivan Fernández, Asier Marzo:
PointerVol: A Laser Pointer for Swept Volumetric Displays. 112:1-112:8
- Ratchanon Wattanaparinton, Kotaro Kitada, Kentaro Takemura:
RFTIRTouch: Touch Sensing Device for Dual-sided Transparent Plane Based on Repropagated Frustrated Total Internal Reflection. 113:1-113:10
- Maruchi Kim, Antonio Glenn, Bandhav Veluri, Yunseo Lee, Eyoel Gebre, Aditya Bagaria, Shwetak N. Patel, Shyamnath Gollakota:
IRIS: Wireless ring for vision-based smart home interaction. 114:1-114:16
- Junyong Park, Saelyne Yang, Sungho Jo:
Silent Impact: Tracking Tennis Shots from the Passive Arm. 115:1-115:15
LLM: New applications
- Akhil Padmanabha, Jessie Yuan, Janavi Gupta, Zulekha Karachiwalla, Carmel Majidi, Henny Admoni, Zackory Erickson:
VoicePilot: Harnessing LLMs as Speech Interfaces for Physically Assistive Robots. 116:1-116:18
- Tianjian Liu, Hongzheng Zhao, Yuheng Liu, Xingbo Wang, Zhenhui Peng:
ComPeer: A Generative Conversational Agent for Proactive Peer Support. 117:1-117:22
- Wanli Qian, Chenfeng Gao, Anup Sathya, Ryo Suzuki, Ken Nakagaki:
SHAPE-IT: Exploring Text-to-Shape-Display for Generative Shape-Changing Behaviors with LLMs. 118:1-118:29
- Liwenhan Xie, Chengbo Zheng, Haijun Xia, Huamin Qu, Zhu-Tian Chen:
WaitGPT: Monitoring and Steering Conversational LLM Agent in Data Analysis with On-the-Fly Code Visualization. 119:1-119:14
FABulous
- Daniel Ashbrook, Wei-Ju Lin, Nicholas Bentley, Diana Soponar, Zeyu Yan, Valkyrie Savage, Lung-Pan Cheng, Huaishu Peng, Hyunyoung Kim:
Rhapso: Automatically Embedding Fiber Materials into 3D Prints for Enhanced Interactivity. 120:1-120:20
- Mehmet Özdemir, Marwa Alalawi, Mustafa Doga Dogan, Jose Francisco Martinez Castro, Stefanie Mueller, Zjenja Doubrovski:
Speed-Modulated Ironing: High-Resolution Shade and Texture Gradients in Single-Material 3D Printing. 121:1-121:13
- Jaime Gould, Camila Friedman-Gerlicz, Leah Buechley:
TRAvel Slicer: Continuous Extrusion Toolpaths for 3D Printing. 122:1-122:17
- Johann Felipe Gonzalez, Thomas Pietrzak, Audrey Girouard, Géry Casiez:
Facilitating the Parametric Definition of Geometric Properties in Programming-Based CAD. 123:1-123:12
- Felix Hähnlein, Gilbert Bernstein, Adriana Schulz:
Understanding and Supporting Debugging Workflows in CAD. 124:1-124:14
Sound & Music
- Hyunsung Cho, Naveen Sendhilnathan, Michael Nebeling, Tianyi Wang, Purnima Padmanabhan, Jonathan Browder, David Lindlbauer, Tanya R. Jonker, Kashyap Todi:
SonoHaptics: An Audio-Haptic Cursor for Gaze-Based Object Selection in XR. 125:1-125:19
- Hyunsung Cho, Alexander Wang, Divya Kartik, Emily Liying Xie, Yukang Yan, David Lindlbauer:
Auptimize: Optimal Placement of Spatial Audio Cues for Extended Reality. 126:1-126:14
- Alexander Wang, David Lindlbauer, Chris Donahue:
Towards Music-Aware Virtual Assistants. 127:1-127:14
- Xia Su, Jon E. Froehlich, Eunyee Koh, Chang Xiao:
SonifyAR: Context-Aware Sound Generation in Augmented Reality. 128:1-128:13
- Shunta Suzuki, Takashi Amesaka, Hiroki Watanabe, Buntarou Shizuki, Yuta Sugiura:
EarHover: Mid-Air Gesture Recognition for Hearables Using Sound Leakage Signals. 129:1-129:13
Validation in AI/ML
- Susanne Schmidt, Tim Rolff, Henrik Voigt, Micha Offe, Frank Steinicke:
Natural Expression of a Machine Learning Model's Uncertainty Through Verbal and Non-Verbal Behavior of Intelligent Virtual Agents. 130:1-130:15
- Shreya Shankar, J. D. Zamfirescu-Pereira, Bjoern Hartmann, Aditya G. Parameswaran, Ian Arawjo:
Who Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences. 131:1-131:14
- Li Zhang, Shihe Wang, Xianqing Jia, Zhihan Zheng, Yunhe Yan, Longxi Gao, Yuanchun Li, Mengwei Xu:
LlamaTouch: A Faithful and Scalable Testbed for Mobile UI Task Automation. 132:1-132:13
- Yoonho Lee, Michelle S. Lam, Helena Vasconcelos, Michael S. Bernstein, Chelsea Finn:
Clarify: Improving Model Robustness With Natural Language Corrections. 133:1-133:19
- Yu Fu, Shunan Guo, Jane Hoffswell, Victor S. Bursztyn, Ryan A. Rossi, John T. Stasko:
"The Data Says Otherwise" - Towards Automated Fact-checking and Communication of Data Claims. 134:1-134:20
Haptics
- Tetsushi Ikeda, Kazuyuki Fujita, Kumpei Ogawa, Kazuki Takashima, Yoshifumi Kitamura:
LoopBot: Representing Continuous Haptics of Grounded Objects in Room-scale VR. 135:1-135:10
- Zining Zhang, Jiasheng Li, Zeyu Yan, Jun Nishida, Huaishu Peng:
JetUnit: Rendering Diverse Force Feedback in Virtual Reality Using Water Jets. 136:1-136:15
- Takeru Hashimoto, Yutaro Hirao:
Selfrionette: A Fingertip Force-Input Controller for Continuous Full-Body Avatar Manipulation and Diverse Haptic Interactions. 137:1-137:14
- Chia-An Fan, En-Huei Wu, Chia-Yu Cheng, Yu-Cheng Chang, Alvaro Lopez, Yu Chen, Chia-Chen Chi, Yi-Sheng Chan, Ching-Yi Tsai, Mike Y. Chen:
SpinShot: Optimizing Both Physical and Perceived Force Feedback of Flywheel-Based, Directional Impact Handheld Devices. 138:1-138:15
Contextual Augmentations
- Gaurav Jain, Basel Hindi, Zihao Zhang, Koushik Srinivasula, Mingyu Xie, Mahshid Ghasemi, Daniel Weiner, Sophie Ana Paris, Xin Yi Therese Xu, Michael C. Malcolm, Mehmet Kerem Türkcan, Javad Ghaderi, Zoran Kostic, Gil Zussman, Brian A. Smith:
StreetNav: Leveraging Street Cameras to Support Precise Outdoor Navigation for Blind Pedestrians. 139:1-139:21
- Ruei-Che Chang, Yuxuan Liu, Anhong Guo:
WorldScribe: Towards Context-Aware Live Visual Descriptions. 140:1-140:18
- Jaewook Lee, Andrew D. Tjahjadi, Jiho Kim, Junpu Yu, Minji Park, Jiawen Zhang, Jon E. Froehlich, Yapeng Tian, Yuhang Zhao:
CookAR: Affordance Augmentations in Wearable AR to Support Kitchen Tool Interactions for People with Low Vision. 141:1-141:16
- Mina Huh, Amy Pavel:
DesignChecker: Visual Design Support for Blind and Low Vision Web Developers. 142:1-142:19
Learning to Learn
- Siyi Zhu, Robert Haisfield, Brendan Langen, Joel Chan:
Patterns of Hypertext-Augmented Sensemaking. 143:1-143:17
- Aditya Gunturu, Yi Wen, Nandi Zhang, Jarin Thundathil, Rubaiat Habib Kazi, Ryo Suzuki:
Augmented Physics: Creating Interactive and Embedded Physics Simulations from Static Textbook Diagrams. 144:1-144:12
- Raymond Fok, Joseph Chee Chang, Tal August, Amy X. Zhang, Daniel S. Weld:
Qlarify: Recursively Expandable Abstracts for Dynamic Information Retrieval over Scientific Papers. 145:1-145:21
- Haoxiang Fan, Guanzheng Chen, Xingbo Wang, Zhenhui Peng:
LessonPlanner: Assisting Novice Teachers to Prepare Pedagogy-Driven Lesson Plans with Large Language Models. 146:1-146:20