
Emptying, refurnishing, and relighting indoor spaces

Published: 05 December 2016

Abstract

Visualizing changes to indoor scenes is important for many applications. When looking for a new place to live, we want to see how the interior looks not with the current inhabitant's belongings, but with our own furniture. Before purchasing a new sofa, we want to visualize how it would look in our living room. In this paper, we present a system that takes an RGBD scan of an indoor scene and produces a scene model of the empty room, including light emitters, materials, and the geometry of the non-cluttered room. Our system enables realistic rendering not only of the empty room under the original lighting conditions, but also with various scene edits, including adding furniture, changing the material properties of the walls, and relighting. These types of scene edits enable many mixed reality applications in areas such as real estate, furniture retail, and interior design. Our system contains two novel technical contributions: a 3D radiometric calibration process that recovers the appearance of the scene in high dynamic range, and a global-illumination-aware inverse rendering framework that simultaneously recovers reflectance properties of scene surfaces and lighting properties for several light source types, including generalized point and line lights.
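The two technical contributions can be sketched concretely. For the radiometric calibration, the core operation is fusing differently exposed LDR observations of the same surface point into a single HDR radiance estimate, in the spirit of Debevec and Malik's classic HDR recovery method. The Python sketch below is a minimal illustration under assumed names: merge_ldr_to_hdr, the hat weighting, and the inv_response callable are hypothetical, not the paper's actual API.

```python
import numpy as np

def merge_ldr_to_hdr(pixels, exposures, inv_response):
    """Fuse LDR observations of one surface point into HDR radiance.
    pixels: normalized values in [0, 1]; exposures: shutter times (s);
    inv_response: maps pixel value to relative sensor irradiance."""
    pixels = np.asarray(pixels, dtype=np.float64)
    exposures = np.asarray(exposures, dtype=np.float64)
    # Hat weights: trust mid-range pixels, downweight near-black/saturated ones.
    weights = 1.0 - np.abs(2.0 * pixels - 1.0)
    samples = inv_response(pixels) / exposures   # per-observation radiance
    return np.sum(weights * samples) / (np.sum(weights) + 1e-12)

# Example: three exposures of one point, assuming a gamma-2.2 response.
hdr = merge_ldr_to_hdr([0.9, 0.5, 0.2], [1/30, 1/125, 1/500],
                       lambda p: np.power(p, 2.2))
```

For the inverse rendering step, a toy diffuse (radiosity) model shows why observing HDR radiance everywhere makes the problem tractable: with known geometry, the irradiance each patch gathers is computable directly from the observed radiosities, and the radiosity equation B = E + ρ(FB) then decouples per patch. This is a hedged sketch under strong assumptions (purely Lambertian scene, known form factors, a nominal emitter reflectance); the paper's solver additionally recovers generalized point and line lights, which this toy omits.

```python
import numpy as np

def recover_reflectance_and_emission(B, F, emitter_mask):
    """Toy radiosity inversion of B = E + rho * (F @ B).
    B: observed HDR radiosity per patch, shape (n,);
    F: form-factor matrix, shape (n, n);
    emitter_mask: boolean (n,), True where a patch is a light source."""
    H = F @ B                        # irradiance gathered by each patch
    rho = np.zeros_like(B)
    E = np.zeros_like(B)
    lit = H > 1e-8
    # Non-emitters: B = rho * H, so rho = B / H (clamped to [0, 1]).
    free = ~emitter_mask & lit
    rho[free] = np.clip(B[free] / H[free], 0.0, 1.0)
    # Emitters: assume a nominal reflectance and attribute the rest to emission.
    rho[emitter_mask] = 0.5          # hypothetical choice for this sketch
    E[emitter_mask] = np.maximum(B[emitter_mask] - 0.5 * H[emitter_mask], 0.0)
    return rho, E
```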

Published In

ACM Transactions on Graphics, Volume 35, Issue 6
November 2016
1045 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/2980179

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 December 2016
Published in TOG Volume 35, Issue 6


Author Tags

  1. diminished reality
  2. indoor reconstruction
  3. inverse lighting
  4. inverse rendering
  5. lighting models
  6. reflectance capture

Qualifiers

  • Research-article

Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 192
  • Downloads (last 6 weeks): 25

Reflects downloads up to 11 Dec 2024.

Citations

Cited By

  • (2024) Deep SVBRDF Acquisition and Modelling: A Survey. Computer Graphics Forum 43:6. DOI: 10.1111/cgf.15199. Online publication date: 16-Sep-2024.
  • (2024) DeepDR: Deep Structure-Aware RGB-D Inpainting for Diminished Reality. 2024 International Conference on 3D Vision (3DV), 750-760. DOI: 10.1109/3DV62453.2024.00037. Online publication date: 18-Mar-2024.
  • (2024) Deep indoor illumination estimation based on spherical gaussian representation with scene prior knowledge. Journal of King Saud University - Computer and Information Sciences 36:10, 102222. DOI: 10.1016/j.jksuci.2024.102222. Online publication date: Dec-2024.
  • (2024) Virtual home staging and relighting from a single panorama under natural illumination. Machine Vision and Applications 35:4. DOI: 10.1007/s00138-024-01559-7. Online publication date: 11-Jul-2024.
  • (2023) Virtual Reality Solutions Employing Artificial Intelligence Methods: A Systematic Literature Review. ACM Computing Surveys 55:10, 1-29. DOI: 10.1145/3565020. Online publication date: 2-Feb-2023.
  • (2023) Local-to-Global Panorama Inpainting for Locale-Aware Indoor Lighting Prediction. IEEE Transactions on Visualization and Computer Graphics 29:11, 4405-4416. DOI: 10.1109/TVCG.2023.3320233. Online publication date: Nov-2023.
  • (2023) Real-Time Lighting Estimation for Augmented Reality via Differentiable Screen-Space Rendering. IEEE Transactions on Visualization and Computer Graphics 29:4, 2132-2145. DOI: 10.1109/TVCG.2022.3141943. Online publication date: 1-Apr-2023.
  • (2023) MILO: Multi-Bounce Inverse Rendering for Indoor Scene With Light-Emitting Objects. IEEE Transactions on Pattern Analysis and Machine Intelligence 45:8, 10129-10142. DOI: 10.1109/TPAMI.2023.3244658. Online publication date: Aug-2023.
  • (2023) Conditional 360-degree Image Synthesis for Immersive Indoor Scene Decoration. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 4455-4465. DOI: 10.1109/ICCV51070.2023.00413. Online publication date: 1-Oct-2023.
  • (2023) Multi-view Inverse Rendering for Large-scale Real-world Indoor Scenes. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12499-12509. DOI: 10.1109/CVPR52729.2023.01203. Online publication date: Jun-2023.
