DOI: 10.1145/3191442.3191464

Kinect Depth Map Inpainting using a Multi-Scale Deep Convolutional Neural Network

Published: 24 February 2018

Abstract

Consumer-level RGB-D cameras such as the Kinect are among the most important devices for capturing depth data in 3D vision. However, it is difficult to obtain a high-quality depth map that matches the resolution of its corresponding color image and aligns perfectly with it. Most previous depth map inpainting methods focus on denoising and filling small holes, but they are ineffective at recovering large missing regions. Large-area depth loss has several causes, such as strong specular reflection and misalignment between the color and depth cameras. In this paper, we present a novel depth map inpainting method for the Kinect based on a multi-scale deep Convolutional Neural Network (CNN). The method has three stages: depth map pre-processing, multi-scale network training, and image optimization. It enables a comprehensive refinement of Kinect depth maps, covering denoising, filling small holes, and inpainting large missing regions, and the recovered depth map aligns perfectly with its corresponding color image. We evaluate the method on the SUNCG dataset and on real scenes captured with a Kinect 2.0; the experimental results show that it inpaints depth maps more effectively than prior approaches.
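Since this page carries no code, the sketch below illustrates the kind of multi-scale coarse-to-fine network the abstract describes, written in PyTorch. Everything here is an assumption for illustration only: the two-scale split, the layer sizes, the mask-based compositing, and all names (MultiScaleDepthInpainter, rgb, depth, mask) are hypothetical and do not come from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiScaleDepthInpainter(nn.Module):
        """Toy two-scale CNN: a coarse branch predicts global depth
        structure at 1/4 resolution; a fine branch refines it at full
        resolution using the color image as guidance. Illustrative only,
        not the authors' architecture."""
        def __init__(self):
            super().__init__()
            # Coarse branch input: RGB (3) + masked depth (1) + validity mask (1).
            self.coarse = nn.Sequential(
                nn.Conv2d(5, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, 3, padding=1),
            )
            # Fine branch input: the full-resolution inputs (5) plus the
            # upsampled coarse prediction (1).
            self.fine = nn.Sequential(
                nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, rgb, depth, mask):
            # rgb: (B,3,H,W); depth: (B,1,H,W) with holes zeroed;
            # mask: (B,1,H,W), 1 where depth is valid, 0 inside holes.
            x = torch.cat([rgb, depth, mask], dim=1)
            coarse = self.coarse(x)                       # (B,1,H/4,W/4)
            coarse_up = F.interpolate(coarse, size=depth.shape[-2:],
                                      mode='bilinear', align_corners=False)
            refined = self.fine(torch.cat([x, coarse_up], dim=1))
            # Keep measured depth where valid; fill holes with the prediction.
            return mask * depth + (1 - mask) * refined

A training step under these assumptions could cut synthetic holes into clean depth maps (as renders from a dataset like SUNCG would allow) and penalize the prediction only inside the holes:

    net = MultiScaleDepthInpainter()
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    rgb = torch.rand(2, 3, 128, 128)
    depth_gt = torch.rand(2, 1, 128, 128)
    mask = (torch.rand(2, 1, 128, 128) > 0.2).float()  # synthetic holes
    pred = net(rgb, depth_gt * mask, mask)
    loss = F.l1_loss(pred * (1 - mask), depth_gt * (1 - mask))
    opt.zero_grad(); loss.backward(); opt.step()

The coarse-to-fine split mirrors the multi-scale idea in the title: a low-resolution branch can propagate depth across large holes cheaply, while the full-resolution branch restores edges against the color image.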

    Published In

    ICIGP '18: Proceedings of the 2018 International Conference on Image and Graphics Processing
    February 2018
    183 pages
    ISBN:9781450363679
    DOI:10.1145/3191442

    In-Cooperation

    • Wuhan University, China

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. CNN
    2. depth map
    3. inpainting

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICIGP 2018

    Article Metrics

    • Downloads (last 12 months): 6
    • Downloads (last 6 weeks): 1
    Reflects downloads up to 31 Dec 2024

    Cited By

    • (2024) Selfredepth. Journal of Real-Time Image Processing, 21(4). DOI: 10.1007/s11554-024-01491-z. Online publication date: 4-Jul-2024.
    • (2023) Depth Information Precise Completion-GAN: A Precisely Guided Method for Completing Ill Regions in Depth Maps. Remote Sensing, 15(14), 3686. DOI: 10.3390/rs15143686. Online publication date: 24-Jul-2023.
    • (2023) Toward physically realistic vision in teleoperation: A user study with light-field head mounted display and 6-DoF head motion. Journal of the Society for Information Display, 31(12), 663-674. DOI: 10.1002/jsid.1262. Online publication date: 5-Nov-2023.
    • (2022) Depth Map Reconstruction Method in Control Problems for Robots and Mechatronic Systems. Mekhatronika, Avtomatizatsiya, Upravlenie, 23(2), 104-112. DOI: 10.17587/mau.23.104-112. Online publication date: 6-Feb-2022.
    • (2022) A vision system using depth inpainting for virtual content reconstruction in Augmented Reality. Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies VI, 22. DOI: 10.1117/12.2642007. Online publication date: 4-Nov-2022.
    • (2021) Condor: Mobile Golf Swing Tracking via Sensor Fusion using Conditional Generative Adversarial Networks. Proceedings of the 2021 International Conference on Embedded Wireless Systems and Networks, 31-42. DOI: 10.5555/3451271.3451275. Online publication date: 20-Feb-2021.
    • (2021) Depth Inpainting via Vision Transformer. 2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 286-291. DOI: 10.1109/ISMAR-Adjunct54149.2021.00065. Online publication date: Oct-2021.
    • (2020) Depth map artefacts reduction: a review. IET Image Processing, 14(12), 2630-2644. DOI: 10.1049/iet-ipr.2019.1622. Online publication date: 3-Sep-2020.
