It is our great pleasure to welcome you to the first ACM Symposium on Spatial User Interaction -- SUI'13. This new event focuses on the user interface challenges that arise where the flat, two-dimensional, digital world meets the volumetric, physical, three-dimensional (3D) space we live in. The symposium considers both spatial input and spatial output, with an emphasis on the interaction between humans and systems. Its goal is to provide an intensive exchange between academic and industrial researchers working in the area of SUI and to foster discussion among participants. The first SUI symposium was held July 20-21, 2013 in Los Angeles, USA.
The call for papers attracted 31 submissions from Asia, Europe, Australia, and North and South America, spanning all areas of Spatial User Interaction research. The international program committee, consisting of 15 experts in the topic areas and the two program chairs, handled the highly competitive and selective review process. Every submission received at least four detailed reviews: two from members of the international program committee and two or more from external reviewers. The reviewing process was double-blind; only the program chairs and the program committee member assigned to each paper to recruit external reviewers knew the identity of the authors.
In the end, the program committee accepted 12 of the 31 submissions (8 full papers plus 4 short papers), which corresponds to an acceptance rate of 26% for full papers (and 38% overall). Additionally, 12 posters and demonstrations complement the program and appear in the proceedings. The topics range from spatial interaction techniques, vision in 3D space, and applications to interaction with multi-touch technologies and in augmented reality. We hope that these proceedings will serve as a valuable reference for Spatial User Interaction researchers and developers.
Putting together the content for SUI'13 was a team effort. We first thank the authors for providing the content of the program. Special thanks go to the members of the international program committee, who successfully dealt with the reviewing load. We also thank the external reviewers.
Proceedings Contents
Visualization of off-surface 3D viewpoint locations in spatial augmented reality
Spatial Augmented Reality (SAR) systems can be used to convey guidance in a physical task from a remote expert. Sometimes that remote expert is provided with a single camera view of the workspace but, if they are given a live captured 3D model and can ...
To touch or not to touch?: comparing 2D touch and 3D mid-air interaction on stereoscopic tabletop surfaces
Recent developments in touch and display technologies have laid the groundwork to combine touch-sensitive display systems with stereoscopic three-dimensional (3D) display. Although this combination provides a compelling user experience, interaction with ...
Novel metrics for 3D remote pointing
We introduce new metrics to help explain 3D pointing device movement characteristics. We present a study to assess these by comparing two cursor control modes using a Sony PS Move. "Laser" mode used ray casting, while "position" mode mapped absolute ...
Spatial user interface for experiencing Mogao caves
In this paper, we describe the design and implementation of Pure Land AR, an installation that employs a spatial user interface and allows users to virtually visit the UNESCO World Heritage site -- the Mogao Caves -- using handheld devices. The ...
Seamless interaction using a portable projector in perspective corrected multi display environments
In this work, we study ways to use a portable projector to extend the workspace in a perspective corrected multi display environment (MDE). This system uses the relative position between the user and displays in order to show the content perpendicularly ...
Free-hands interaction in augmented reality
The ability to use free-hand gestures is extremely important for mobile augmented reality applications. This paper proposes a computer vision-driven model for natural free-hands interaction in augmented reality. The novelty of the research is the use of ...
Performance effects of multi-sensory displays in virtual teleoperation environments
Multi-sensory displays provide information to users through multiple senses, not only through visuals. They can be designed for the purpose of creating a more-natural interface for users or reducing the cognitive load of a visual-only display. However, ...
Evaluating performance benefits of head tracking in modern video games
We present a study that investigates user performance benefits of using head tracking in modern video games. We explored four different carefully chosen commercial games with tasks which can potentially benefit from head tracking. For each game, ...
Volume cracker: a bimanual 3D interaction technique for analysis of raw volumetric data
Analysis of volume datasets often involves peering inside the volume to understand internal structures. Traditional approaches involve removing part of the volume through slicing, but this can result in the loss of context. Focus+context visualization ...
Direct 3D object manipulation on a collaborative stereoscopic display
IllusionHole (IH) is an interactive stereoscopic tabletop display that allows multiple users to interactively observe and directly point at a particular position of a stereoscopic object in a shared workspace. We explored a mid-air direct multi-finger ...
FocalSpace: multimodal activity tracking, synthetic blur and adaptive presentation for video conferencing
We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this ...
Effects of stereo and head tracking in 3D selection tasks
We report a 3D selection study comparing stereo and head-tracking with both mouse and pen pointing. Results indicate stereo was primarily beneficial to the pen mode, but slightly hindered mouse speed. Head tracking had fewer noticeable effects.
Towards bi-manual 3D painting: generating virtual shapes with hands
We aim to combine surface generation by hands with 3D painting in a large space, from 10 to ~200 m² (for a stage setup). Our long-term goal is to phase 3D surface generation in choreography, in order to produce augmented dance shows where the dancer ...
User-defined SUIs: an exploratory study
In this poster we present an exploratory bottom-up experiment to assess users' choices of bodily interactions when facing a set of tasks. 29 subjects were asked to perform basic tasks on a large-screen TV in three positions: standing, ...
Fusing depth, color, and skeleton data for enhanced real-time hand segmentation
As sensing technology has evolved, spatial user interfaces have become increasingly popular platforms for interacting with video games and virtual environments. In particular, recent advances in consumer-level motion tracking devices such as the ...
A virtually tangible 3D interaction system using an autostereoscopic display
We propose a virtually tangible 3D interaction system that enables direct interaction with three dimensional virtual objects which are presented on an autostereoscopic display.
Up- and downwards motions in 3D pointing
We present an experiment that examines 3D pointing in fish tank VR using the ISO 9241-9 standard. The experiment used three pointing techniques: mouse, ray, and touch using a stylus. It evaluated user pointing performance with stereoscopically displayed ...
Autonomous control of human-robot spacing: a socially situated approach
To enable socially situated human-robot interaction, a robot must both understand and control proxemics, the social use of space, to employ communication mechanisms analogous to those used by humans. In this work, we investigate speech and gesture ...
Real-time image-based animation using morphing with human skeletal tracking
We propose a real-time image-based animation technique for virtual fitting applications. Our method finds key images in a database using skeletal data as a search key, and then creates in-between images by image morphing. Compared to ...
Augmenting multi-touch with commodity devices
We describe two approaches to augmenting multi-touch user input with commodity devices (Kinect and Wii Remote).
Effectiveness of commodity BCI devices as means to control an immersive virtual environment
This poster focuses on research investigating the control of an immersive virtual environment using the Emotiv EPOC, a consumer-grade brain computer interface. The primary emphasis of the work is to determine the feasibility of the Emotiv EPOC at ...