- research-article, December 2024
DiffH2O: Diffusion-Based Synthesis of Hand-Object Interactions from Textual Descriptions
- Sammy Christen,
- Shreyas Hampali,
- Fadime Sener,
- Edoardo Remelli,
- Tomas Hodan,
- Eric Sauser,
- Shugao Ma,
- Bugra Tekin
SA '24: SIGGRAPH Asia 2024 Conference Papers, Article No.: 145, Pages 1–11. https://doi.org/10.1145/3680528.3687563
We introduce DiffH2O, a new diffusion-based framework for synthesizing realistic, dexterous hand-object interactions from natural language. Our model employs a temporal two-stage diffusion process, dividing hand-object motion generation into grasping and ...
- research-article, November 2024
EgoHDM: A Real-time Egocentric-Inertial Human Motion Capture, Localization, and Dense Mapping System
ACM Transactions on Graphics (TOG), Volume 43, Issue 6, Article No.: 236, Pages 1–12. https://doi.org/10.1145/3687907
We present EgoHDM, an online egocentric-inertial human motion capture (mocap), localization, and dense mapping system. Our system uses 6 inertial measurement units (IMUs) and a commodity head-mounted RGB camera. EgoHDM is the first human mocap system ...
- Article, October 2024
GraspXL: Generating Grasping Motions for Diverse Objects at Scale
Abstract: Human hands possess the dexterity to interact with diverse objects, such as grasping specific parts of the objects and/or approaching them from desired directions. More importantly, humans can grasp objects of any shape without object-specific ...
- research-article, June 2024
MARLUI: Multi-Agent Reinforcement Learning for Adaptive Point-and-Click UIs
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), Volume 8, Issue EICS, Article No.: 253, Pages 1–27. https://doi.org/10.1145/3661147
As the number of selectable items increases, point-and-click interfaces rapidly become complex, leading to a decrease in usability. Adaptive user interfaces can reduce this complexity by automatically adjusting an interface to only display the most ...
- research-article, March 2021
The Six Hug Commandments: Design and Evaluation of a Human-Sized Hugging Robot with Visual and Haptic Perception
HRI '21: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Pages 380–388. https://doi.org/10.1145/3434073.3444656
Receiving a hug is one of the best ways to feel socially supported, and the lack of social touch can have severe negative effects on an individual's well-being. Based on previous research both within and outside of HRI, we propose six tenets ("...
- research-article, May 2019
Demonstration-Guided Deep Reinforcement Learning of Control Policies for Dexterous Human-Robot Interaction
2019 International Conference on Robotics and Automation (ICRA), Pages 2161–2167. https://doi.org/10.1109/ICRA.2019.8794065
In this paper, we propose a method for training control policies for human-robot interactions such as handshakes or hand claps via Deep Reinforcement Learning. The policy controls a humanoid Shadow Dexterous Hand, attached to a robot arm. We propose a ...