
Towards Learning Neural Representations from Shadows

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13693))


Abstract

We present a method that learns neural shadow fields, which are neural scene representations that are only learnt from the shadows present in the scene. While traditional shape-from-shadow (SfS) algorithms reconstruct geometry from shadows, they assume a fixed scanning setup and fail to generalize to complex scenes. Neural rendering algorithms, on the other hand, rely on photometric consistency between RGB images, but largely ignore physical cues such as shadows, which have been shown to provide valuable information about the scene. We observe that shadows are a powerful cue that can constrain neural scene representations to learn SfS, and even outperform NeRF to reconstruct otherwise hidden geometry. We propose a graphics-inspired differentiable approach to render accurate shadows with volumetric rendering, predicting a shadow map that can be compared to the ground truth shadow. Even with just binary shadow maps, we show that neural rendering can localize the object and estimate coarse geometry. Our approach reveals that sparse cues in images can be used to estimate geometry using differentiable volumetric rendering. Moreover, our framework is highly generalizable and can work alongside existing 3D reconstruction techniques that otherwise only use photometric consistency. Code is available here.

K. Tiwary and T. Klinghoffer—Equal contribution.
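The core rendering step the abstract describes — predicting a shadow map by volumetric rendering along rays cast toward the light — can be sketched in isolation. The snippet below is a minimal illustration, not the authors' implementation: the `density_fn` interface, the uniform mid-point sampling, and the single point-light setup are all assumptions, and NumPy stands in for a differentiable framework such as PyTorch.

```python
import numpy as np

def transmittance(sigmas, delta):
    # Volumetric transmittance T = exp(-sum_i sigma_i * delta):
    # the fraction of light surviving the march toward the source.
    return np.exp(-np.sum(sigmas) * delta)

def render_shadow_map(density_fn, surface_pts, light_pos, n_samples=64):
    # For each surface point, march a ray toward the point light and
    # accumulate density; low transmittance means the point is shadowed.
    shadow = np.zeros(len(surface_pts))
    for i, p in enumerate(surface_pts):
        # mid-point samples on the segment from the surface to the light
        ts = np.linspace(0.0, 1.0, n_samples, endpoint=False) + 0.5 / n_samples
        pts = p[None, :] + ts[:, None] * (light_pos - p)[None, :]
        delta = np.linalg.norm(light_pos - p) / n_samples
        shadow[i] = 1.0 - transmittance(density_fn(pts), delta)
    return shadow  # ~1 where occluded, ~0 where lit
```

In the paper's setting, `density_fn` would be the learned neural field, and the rendered map would be compared against the binary ground-truth shadow map with a per-pixel loss; because transmittance is a smooth function of the densities, gradients can flow back into the field.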




Acknowledgement

This research was supported by the SMART Contract IARPA Grant #2021-20111000004. We also thank Systems & Technology Research (STR), and Professor Voicu Popescu (Purdue University) for his generous time and the valuable discussions that came from our meetings.

Author information


Corresponding author

Correspondence to Kushagra Tiwary.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 3071 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Tiwary, K., Klinghoffer, T., Raskar, R. (2022). Towards Learning Neural Representations from Shadows. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13693. Springer, Cham. https://doi.org/10.1007/978-3-031-19827-4_18


  • DOI: https://doi.org/10.1007/978-3-031-19827-4_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19826-7

  • Online ISBN: 978-3-031-19827-4

  • eBook Packages: Computer Science (R0)
