DOI: 10.1145/3664647.3681168

research-article

Digging into Contrastive Learning for Robust Depth Estimation with Diffusion Models

Published: 28 October 2024

Abstract

Recently, diffusion-based depth estimation methods have drawn widespread attention due to their elegant denoising patterns and promising performance. However, they are typically unreliable under the adverse conditions prevalent in real-world scenarios, such as rain and snow. In this paper, we propose a novel robust depth estimation method called D4RD, featuring a customized contrastive learning mode tailored for diffusion models to mitigate performance degradation in complex environments. Concretely, we integrate the strength of knowledge distillation into contrastive learning, building the 'trinity' contrastive scheme. This scheme uses the noise sampled in the forward diffusion process as a natural reference, guiding the noise predicted in diverse scenes toward a more stable and precise optimum. Moreover, we extend the noise-level trinity to the more generic feature and image levels, establishing a multi-level contrast that distributes the burden of robust perception across the overall network. Before addressing complex scenarios, we enhance the stability of the baseline diffusion model with three straightforward yet effective improvements, which facilitate convergence and remove depth outliers. Extensive experiments demonstrate that D4RD surpasses existing state-of-the-art solutions on synthetic corruption datasets and under real-world weather conditions. Source code and data are available at https://github.com/wangjiyuan9/D4RD.
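The noise-level 'trinity' idea described above can be sketched in code. This is a hypothetical illustration based only on the abstract: the noise sampled in the forward diffusion process serves as a stable reference (as in knowledge distillation), and the noises predicted from a clean view and a weather-corrupted view of the same scene are both pulled toward it while being kept consistent with each other. All function and variable names, and the exact loss form, are assumptions; the paper's actual formulation may differ.

```python
import torch
import torch.nn.functional as F


def trinity_contrastive_loss(eps_sampled: torch.Tensor,
                             eps_pred_clean: torch.Tensor,
                             eps_pred_adverse: torch.Tensor,
                             w: float = 1.0) -> torch.Tensor:
    """Hypothetical sketch of a noise-level 'trinity' contrastive loss.

    eps_sampled      -- noise drawn in the forward diffusion step (reference)
    eps_pred_clean   -- noise predicted by the model from the clean image
    eps_pred_adverse -- noise predicted from the corrupted (e.g. rainy) image
    """
    # Distillation-style terms: each prediction regresses the sampled noise.
    l_clean = F.mse_loss(eps_pred_clean, eps_sampled)
    l_adverse = F.mse_loss(eps_pred_adverse, eps_sampled)
    # Consistency term: predictions from the two views should agree,
    # pushing both toward the same stable optimum.
    l_consist = F.mse_loss(eps_pred_adverse, eps_pred_clean)
    return l_clean + l_adverse + w * l_consist
```

Under this sketch, the multi-level extension mentioned in the abstract would apply the same three-way scheme to intermediate features and to the final depth/image outputs, not only to the predicted noise.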



      Published In

      MM '24: Proceedings of the 32nd ACM International Conference on Multimedia
      October 2024
      11719 pages
      ISBN:9798400706868
      DOI:10.1145/3664647

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Author Tags

      1. depth estimation
      2. diffusion methods
      3. robust perception
      4. self-supervised learning

      Qualifiers

      • Research-article

      Conference

MM '24: The 32nd ACM International Conference on Multimedia
October 28 - November 1, 2024
Melbourne, VIC, Australia

      Acceptance Rates

MM '24 paper acceptance rate: 1,150 of 4,385 submissions (26%)
Overall acceptance rate: 2,145 of 8,556 submissions (25%)

