
UDCGN: Uncertainty-Driven Cross-Guided Network for Depth Completion of Transparent Objects

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2023 (ICANN 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14262)


Abstract

In the field of robotics, most perception methods rely on depth information captured by RGB-D cameras. However, depth sensors struggle to measure transparent objects because light is reflected and refracted at their surfaces. Existing methods for completing the depth of transparent objects are often impractical, as they require fixtures or suffer from unacceptably slow inference. To address this challenge, we propose an efficient multi-stage architecture called UDCGN. It progressively learns completion functions from sparse inputs by dividing the overall recovery process into more manageable steps. To enhance the interaction between the branches, a Cross-Guided Fusion Block (CGFB) is introduced into each stage. The CGFB dynamically generates convolution kernel parameters from the guidance features and convolves them with the input features. Furthermore, the Adaptive Uncertainty-Driven Loss Function (AUDL) is developed to handle the uncertainty inherent in sparse depth; it optimizes high-uncertainty pixels by adaptively fitting different distributions. Comprehensive experiments on representative datasets demonstrate that UDCGN significantly outperforms state-of-the-art methods in terms of both performance and efficiency.
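The abstract describes the CGFB and AUDL only at a high level, so the snippet below is a minimal PyTorch-style sketch of the two generic ideas they build on: a guided dynamic convolution whose kernel weights are predicted from a guidance branch and applied to the input branch, and an uncertainty-weighted regression loss that down-weights high-uncertainty pixels. All names and design choices here (GuidedDynamicConv, kernel_head, uncertainty_weighted_l1, log_var, the per-image depthwise kernels) are illustrative assumptions and do not reproduce the paper's exact block or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GuidedDynamicConv(nn.Module):
    """Sketch of a cross-guided dynamic convolution (assumed design, not the
    paper's CGFB): one depthwise kernel per sample and channel is predicted
    from the guidance features and convolved with the input features."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Predict C * k * k kernel weights from globally pooled guidance features.
        self.kernel_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels * kernel_size * kernel_size, 1),
        )

    def forward(self, x: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k = self.kernel_size
        # (B, C*k*k, 1, 1) -> (B*C, 1, k, k): a separate kernel per sample and channel.
        weight = self.kernel_head(guide).view(b * c, 1, k, k)
        # Fold the batch into the channel dimension so each sample uses its own kernels.
        out = F.conv2d(x.view(1, b * c, h, w), weight, padding=k // 2, groups=b * c)
        return out.view(b, c, h, w)


def uncertainty_weighted_l1(pred, target, log_var, valid_mask):
    """Generic uncertainty-weighted L1 loss (Laplace-style attenuation), shown
    only to illustrate the idea behind uncertainty-driven losses; the paper's
    AUDL adapts the distribution per pixel and is not reproduced here."""
    err = (pred - target).abs()
    # Pixels with large predicted log-variance contribute less to the error term,
    # while the log_var term penalises trivially inflating the uncertainty.
    loss = err * torch.exp(-log_var) + log_var
    return loss[valid_mask].mean()


# Tiny usage example with random tensors standing in for branch features.
x = torch.randn(2, 32, 48, 64)      # input-branch features (e.g. depth branch)
g = torch.randn(2, 32, 48, 64)      # guidance-branch features (e.g. RGB branch)
out = GuidedDynamicConv(32)(x, g)   # -> (2, 32, 48, 64)
```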



Acknowledgement

This work is supported by the Key Research and Development Program of Zhejiang Province (No. 2023C01168) and the Foundation of Zhejiang University City College (No. J202316).

Author information


Corresponding author

Correspondence to Zheng Wang.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Hu, Y., Wang, Z., Chen, J., Qian, Y., Wang, W. (2023). UDCGN: Uncertainty-Driven Cross-Guided Network for Depth Completion of Transparent Objects. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14262. Springer, Cham. https://doi.org/10.1007/978-3-031-44201-8_39


  • DOI: https://doi.org/10.1007/978-3-031-44201-8_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44200-1

  • Online ISBN: 978-3-031-44201-8

  • eBook Packages: Computer Science (R0)
