Abstract
Depth-related visual effects are a key feature of many virtual environments. In stereo-based systems, the depth effect can be produced by delivering frames of disparate image pairs, while in monocular environments the viewer has to extract this depth information from a single image by examining details such as perspective and shadows. This paper investigates, via a number of psychophysical experiments, whether we can reduce computational effort and still achieve perceptually high-quality rendering of stereo imagery. We examined selective rendering of the image pairs, exploiting the fusing capability and depth perception that underlie human stereo vision. In ray-tracing-based global illumination systems, a higher image resolution adds computation to the rendering process since many more rays must be traced. We first investigated whether we could exploit the human binocular fusing ability to significantly reduce the resolution of one image of the pair and yet retain high perceptual quality under stereo viewing conditions. Secondly, we evaluated subjects' performance on a specific visual task that required accurate depth perception. We found that subjects required far fewer rendered depth cues in the stereo viewing environment to perform the task well, and omitting these detailed cues saved significant computation time. In fact, it was possible to achieve better task performance in the stereo viewing condition with a combined rendering time for the image pair that was less than that required for the single monocular image. The outcome of this study suggests that we can produce more efficient stereo images for depth-related visual tasks by rendering them selectively and exploiting inherent features of human stereo vision.
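As a rough illustration of the ray-count argument above, the following sketch estimates primary-ray counts for a full-quality monocular image versus an asymmetric stereo pair. The resolutions, per-pixel sample counts, and the factor by which selective rendering reduces samples are illustrative assumptions, not the paper's experimental settings; the point is only that reducing one eye's resolution, combined with cheaper selective rendering of depth cues, can bring the combined cost of the pair below that of the single monocular image.

```python
# Back-of-envelope sketch (assumed numbers, not the authors' settings):
# in a ray tracer the primary-ray count scales with resolution and
# samples per pixel, so halving one eye's resolution per axis cuts
# that image's ray count to roughly a quarter.

def primary_rays(width: int, height: int, samples_per_pixel: int) -> int:
    """Primary rays needed to render one image."""
    return width * height * samples_per_pixel

# Reference: a single monocular image at full resolution and quality.
mono = primary_rays(1024, 768, samples_per_pixel=4)

# Stereo pair: left eye at full resolution, right eye at half resolution per axis.
left = primary_rays(1024, 768, samples_per_pixel=4)
right = primary_rays(512, 384, samples_per_pixel=4)
print(f"monocular rays:   {mono}")
print(f"stereo pair rays: {left + right}  ({(left + right) / mono:.2f}x monocular)")

# If selective rendering also lets each stereo image use fewer samples
# (e.g. because fewer rendered depth cues are needed under stereo viewing),
# the combined cost can drop below the single high-quality monocular image.
left_sel = primary_rays(1024, 768, samples_per_pixel=2)
right_sel = primary_rays(512, 384, samples_per_pixel=2)
print(f"selective stereo: {left_sel + right_sel}  "
      f"({(left_sel + right_sel) / mono:.2f}x monocular)")
```

With these assumed numbers the naive stereo pair costs 1.25 times the monocular image, while the selectively rendered pair costs about 0.63 times it, mirroring the kind of saving the abstract describes.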