VR-NeRF: High-Fidelity Virtualized Walkable Spaces

Published: 11 December 2023

Abstract

We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to densely capture walkable spaces in high fidelity and with multi-view high dynamic range images in unprecedented quality and density. We extend instant neural graphics primitives with a novel perceptual color space for learning accurate HDR appearance, and an efficient mip-mapping mechanism for level-of-detail rendering with anti-aliasing, while carefully optimizing the trade-off between quality and speed. Our multi-GPU renderer enables high-fidelity volume rendering of our neural radiance field model at the full VR resolution of dual 2K × 2K at 36 Hz on our custom demo machine. We demonstrate the quality of our results on our challenging high-fidelity datasets, and compare our method and datasets to existing baselines. We release our dataset on our project website: https://vr-nerf.github.io.
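The "perceptual color space" mentioned above builds on perceptual transfer functions such as the Perceptual Quantizer (PQ) standardized in SMPTE ST 2084. As a rough illustration only (the paper's exact formulation and constants may differ), the PQ inverse EOTF, which maps linear luminance to an approximately perceptually uniform code value, can be sketched as:

```python
# Sketch of the SMPTE ST 2084 Perceptual Quantizer (PQ) inverse EOTF.
# Maps linear luminance, normalized so 1.0 = 10,000 cd/m^2, to a code
# value in [0, 1]. This illustrates the kind of perceptual encoding the
# paper builds on; the paper's actual HDR color space is not reproduced here.

M1 = 2610 / 16384         # ST 2084 exponents and constants
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance: float) -> float:
    """Linear luminance in [0, 1] -> PQ code value in [0, 1]."""
    y = max(luminance, 0.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def pq_decode(code: float) -> float:
    """PQ code value in [0, 1] -> linear luminance in [0, 1]."""
    v = max(code, 0.0) ** (1.0 / M2)
    return (max(v - C1, 0.0) / (C2 - C3 * v)) ** (1.0 / M1)
```

Training a radiance field against values encoded this way makes a simple per-pixel loss roughly perceptually uniform across the very large luminance range of HDR captures, rather than dominated by the brightest regions.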




Published In

SA '23: SIGGRAPH Asia 2023 Conference Papers
December 2023
1113 pages
ISBN: 9798400703157
DOI: 10.1145/3610548

This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. High Dynamic Range Imaging
  2. Multi-View Capture
  3. Neural Radiance Fields
  4. Novel-View Synthesis
  5. Real-Time

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SA '23: SIGGRAPH Asia 2023
December 12–15, 2023
Sydney, NSW, Australia

Acceptance Rates

Overall Acceptance Rate 178 of 869 submissions, 20%

Article Metrics

  • Downloads (last 12 months): 999
  • Downloads (last 6 weeks): 155

Reflects downloads up to 15 Jan 2025.

Cited By

  • Integration of 3D Gaussian Splatting and Neural Radiance Fields in Virtual Reality Fire Fighting. Remote Sensing 16(13), 2448 (2024). https://doi.org/10.3390/rs16132448
  • VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality. In ACM SIGGRAPH 2024 Conference Papers. https://doi.org/10.1145/3641519.3657448
  • NeRF as a Non-Distant Environment Emitter in Physics-based Inverse Rendering. In ACM SIGGRAPH 2024 Conference Papers. https://doi.org/10.1145/3641519.3657404
  • The Metamorphosis of Storytelling: Time-based Interactivity in Virtual Reality Filmmaking. In CHI 2024 Extended Abstracts. https://doi.org/10.1145/3613905.3648672
  • Exploring Radiance Field Content Generation for Virtual Reality. In IEEE VRW 2024, 697–698. https://doi.org/10.1109/VRW62533.2024.00145
  • AMVP: Adaptive Multi-Volume Primitives for Auto-Driving Novel View Synthesis. IEEE Robotics and Automation Letters 9(10), 8306–8313 (2024). https://doi.org/10.1109/LRA.2024.3444671
  • DiVa-360: The Dynamic Visual Dataset for Immersive Neural Fields. In CVPR 2024, 22466–22476. https://doi.org/10.1109/CVPR52733.2024.02120
  • Real Acoustic Fields: An Audio-Visual Room Acoustics Dataset and Benchmark. In CVPR 2024, 21886–21896. https://doi.org/10.1109/CVPR52733.2024.02067
  • HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces. In CVPR 2024, 19647–19656. https://doi.org/10.1109/CVPR52733.2024.01858
  • RoGUENeRF: A Robust Geometry-Consistent Universal Enhancer for NeRF. In ECCV 2024, 54–71. https://doi.org/10.1007/978-3-031-73254-6_4
