SAP 2016: Anaheim, California, USA
- Eakta Jain, Sophie Jörg:
  Proceedings of the ACM Symposium on Applied Perception, SAP 2016, Anaheim, California, USA, July 22-23, 2016. ACM 2016, ISBN 978-1-4503-4383-1
- Jan Ondrej, Cathy Ennis, Niamh A. Merriman, Carol O'Sullivan:
  FrankenFolk: distinctiveness and attractiveness of voice and motion. 1
- Eakta Jain, Lisa Anthony, Aishat Aloba, Amanda Castonguay, Isabella Cuba, Alex Shaw, Julia Woodward:
  Is the motion of a child perceivably different from the motion of an adult? 1
- Atul Rungta, Sarah Rust, Nicolás Morales, Roberta L. Klatzky, Ming C. Lin, Dinesh Manocha:
  Psychoacoustic characterization of propagation effects in virtual environments. 1
- Elham Ebrahimi, Sabarish V. Babu, Christopher Pagano, Sophie Jörg:
  An empirical evaluation of visuo-haptic feedback on physical reaching behaviors during 3D interaction in real and immersive virtual environments. 1
- Nicholas T. Swafford, José Antonio Iglesias Guitián, Charalampos Koniaris, Bochang Moon, Darren Cosker, Kenny Mitchell:
  User, metric, and computational evaluation of foveated rendering methods. 7-14
- Kamran Binaee, Gabriel J. Diaz, Jeff B. Pelz, Flip Phillips:
  Binocular eye tracking calibration during a virtual ball catching task using head mounted display. 15-18
- Wenyan Bi, Bei Xiao:
  Perceptual constancy of mechanical properties of cloth under variation of external forces. 19-23
- Pisut Wisessing, John Dingliana, Rachel McDonnell:
  Perception of lighting and shading for animated virtual characters. 25-29
- Jonathan Gandrud, Victoria Interrante:
  Predicting destination using head orientation and gaze direction during locomotion in VR. 31-38
- Ylva Ferstl, Elena Kokkinara, Rachel McDonnell:
  Do I trust you, abstract creature?: a study on personality perception of abstract virtual faces. 39-43
- Sai Krishna Allani, Brendan John, Javier Ruiz, Saurabh Dixit, Jackson Carter, Cindy Grimm, Ravi Balasubramanian:
  Evaluating human gaze patterns during grasping tasks: robot versus human hand. 45-52
- Bochao Li, Anthony Nordman, James W. Walker, Scott A. Kuhl:
  The effects of artificially reduced field of view and peripheral frame stimulation on distance judgments in HMDs. 53-56
- Yuanyuan Jiang, Elizabeth E. O'Neal, Pooya Rahimian, Junghum Paul Yon, Jodie M. Plumert, Joseph K. Kearney:
  Action coordination with agents: crossing roads with a computer-generated character in a virtual environment. 57-64
- Manfred Lau, Kapil Dev, Julie Dorsey, Holly E. Rushmeier:
  Learning a human-perceived softness measure of virtual 3D objects. 65-68
- Lorraine Lin, Sophie Jörg:
  Need a hand?: how appearance affects the virtual hand illusion. 69-76
- Colin Ware, Daniel Bolan, Ricky Miller, David H. Rogers, James P. Ahrens:
  Animated versus static views of steady flow patterns. 77-84
- Florian Soyka, Markus Leyrer, Joe Smallwood, Chris Ferguson, Bernhard E. Riecke, Betty J. Mohler:
  Enhancing stress management techniques using virtual reality. 85-88
- Pallavi Raiturkar, Andrea Kleinsmith, Andreas Keil, Arunava Banerjee, Eakta Jain:
  Decoupling light reflex from pupillary dilation to measure emotional arousal in videos. 89-96
- Marc Spicker, Diana Arellano, Ulrich Max Schaller, Reinhold Rauh, Volker Helzle, Oliver Deussen:
  Emotion recognition in autism spectrum disorder: does stylization help? 97-104
- Justin K. Bennett, Srinivas Sridharan, Brendan John, Reynold J. Bailey:
  Looking at faces: autonomous perspective invariant facial gaze analysis. 105-112
- Timofey Grechkin, Jerald Thomas, Mahdi Azmandian, Mark T. Bolas, Evan A. Suma:
  Revisiting detection thresholds for redirected walking: combining translation and curvature gains. 113-120
- Takahiro Kawabe, Shin'ya Nishida:
  Seeing jelly: judging elasticity of a transparent object. 121-128
- Katherine Breeden, Pat Hanrahan:
  Analyzing gaze synchrony in cinema: a pilot study. 129
- Daniel Simon, Srinivas Sridharan, Shagan Sah, Raymond W. Ptucha, Chris Kanan, Reynold Bailey:
  Automatic scanpath generation with deep recurrent neural networks. 130
- Michihiro Mikamo, Kotaro Mori, Bisser Raytchev, Toru Tamaki, Kazufumi Kaneda:
  Binocular tone reproduction display for an HDR panorama image. 131
- Nargess Hassani, Michael J. Murdoch:
  Color appearance modeling in augmented reality. 132
- Mehul Bhatt, Jakob Suchan, Vasiliki Kondyli, Carl Schultz:
  Embodied visuo-locomotive experience analysis: immersive reality based summarisation of experiments in environment-behaviour studies. 133
- Purnendu Kaul, Vijay Rajanna, Tracy Hammond:
  Exploring users' perceived activities in a sketch-based intelligent tutoring system through eye movement data. 134
- Anahita Sanandaji, Cindy Grimm, Ruth West:
  How experts' mental model affects 3D image segmentation. 135
- Jasper LaFortune, Kristen L. Macuga:
  Learning movements from a virtual instructor. 136
- Ishwarya Thirunarayanan, Sanjeev J. Koppal, John M. Shea, Eakta Jain:
  Leveraging gaze data for segmentation and effects on comics. 137
- Pallavi Raiturkar, Susan Jacobson, Beida Chen, Kartik Chaturvedi, Isabella Cuba, Andrew Lee, Melissa Franklin, Julian Tolentino, Nia Haynes, Rebecca Soodeen, Eakta Jain:
  Measuring viewers' heart rate response to environment conservation videos. 138
- Yugo Sato, Takuya Kato, Naoki Nozawa, Shigeo Morishima:
  Perception of drowsiness based on correlation with facial image features. 139
- Srinivas Sridharan, Reynold J. Bailey:
  Saliency and optical flow for gaze guidance in videos. 140
- Pallavi Raiturkar, Andrew Lee, Eakta Jain:
  Scan path and movie trailers for implicit annotation of videos. 141
- Jakob Suchan, Mehul Bhatt, Stella X. Yu:
  The perception of symmetry in the moving image: multi-level computational analysis of cinematographic scene structure and its visual reception. 142
- Veronica U. Weser, Joel Hesch, Johnny Lee, Dennis R. Proffitt:
  User sensitivity to speed- and height-mismatch in VR. 143