DOI: 10.1145/984952.984960

Visual attention models for producing high fidelity graphics efficiently

Published: 24 April 2003

Abstract

Despite the ready availability of modern high-performance graphics cards, the complexity of the scenes being modelled and the realism required of the images mean that rendering high-fidelity computer images is still not possible in a reasonable time, let alone in real time. Knowing that a human will be looking at the resultant images can be exploited to significantly reduce the computation time required for high-fidelity graphical images, for although the human visual system is good, it does have limitations. The key is knowing where the user will be looking in the image.

This paper describes high-level task maps and low-level saliency maps. For a large number of applications, these visual attention models can determine, with high accuracy, where the user will be looking in a scene. This information is then used to selectively render different parts of a complex scene at different qualities. We show that viewers performing a known visual task within the environment consistently fail to notice the difference in rendering quality between benchmark high-quality images and the selectively rendered images, which were produced at a fraction of the computational cost.
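The idea of driving rendering effort with a saliency map can be sketched in a few lines. This is a toy illustration, not the paper's method (which uses Itti-Koch-style saliency maps and task maps): here "saliency" is simple centre-surround contrast on a grayscale image, and the per-pixel sampling budget (`saliency_map`, `sample_budget`, and all parameter values are hypothetical) allocates more rays to salient regions and fewer to the background.

```python
# Toy sketch of saliency-driven selective rendering (NOT the paper's
# implementation): centre-surround contrast as "saliency", mapped to a
# per-pixel ray-sampling budget. Image is a list of rows of floats in [0, 1].

def saliency_map(img):
    """Centre-surround contrast: |pixel - mean of its 3x3 neighbourhood|."""
    h, w = len(img), len(img[0])
    sal = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))]
            sal[y][x] = abs(img[y][x] - sum(neigh) / len(neigh))
    peak = max(max(row) for row in sal) or 1.0   # avoid divide-by-zero
    return [[v / peak for v in row] for row in sal]  # normalise to [0, 1]

def sample_budget(sal, base=1, extra=15):
    """Map saliency to rays per pixel: salient pixels get up to base+extra."""
    return [[base + round(extra * v) for v in row] for row in sal]

# Flat background with one bright 2x2 "object": its pixels are salient,
# so they receive many more samples than the uniform background.
img = [[0.0] * 8 for _ in range(8)]
img[3][3] = img[3][4] = img[4][3] = img[4][4] = 1.0
budget = sample_budget(saliency_map(img))
```

In a real selective renderer the budget would come from a task map or a full multi-channel saliency model, and would control ray count, shading quality, or level of detail per region rather than a simple sample count.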





    Published In

    SCCG '03: Proceedings of the 19th Spring Conference on Computer Graphics
    April 2003
    267 pages
    ISBN: 158113861X
    DOI: 10.1145/984952
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. realistic computer graphics
    2. saliency maps
    3. task maps
    4. visual perception


    Conference

    SCCG '03: Spring Conference on Computer Graphics 2003
    April 24 - 26, 2003
    Budmerice, Slovakia

    Acceptance Rates

    Overall Acceptance Rate 67 of 115 submissions, 58%

    Cited By
    • (2021) To See or Not to See. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5(1), 1-25. DOI: 10.1145/3448123
    • (2018) Computational Models of Visual Attention. Computer Vision, 1-26. DOI: 10.4018/978-1-5225-5204-8.ch001
    • (2014) Computational Models of Visual Attention. Research Developments in Computer Vision and Image Processing, 54-76. DOI: 10.4018/978-1-4666-4558-5.ch004
    • (2008) Fast and Robust Generation of Feature Maps for Region-Based Visual Attention. IEEE Transactions on Image Processing 17(5), 633-644. DOI: 10.1109/TIP.2008.919365
    • (2007) Visual Equivalence. ACM Transactions on Graphics 26(3). DOI: 10.1145/1276377.1276472
    • (2007) Visual Equivalence. ACM SIGGRAPH 2007 Papers. DOI: 10.1145/1275808.1276472
    • (2006) Human Visual Perception of Region Warping Distortions. Proceedings of the 29th Australasian Computer Science Conference, Vol. 48, 217-226. DOI: 10.5555/1151699.1151724
    • (2006) Exploiting Perception in High-Fidelity Virtual Environments. ACM SIGGRAPH 2006 Courses. DOI: 10.1145/1185657.1185814
    • (2006) Human Visual Perception of Region Warping Distortions with Different Display and Scene Characteristics. Proceedings of the 4th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, 357-365. DOI: 10.1145/1174429.1174490
