Depth Estimation Using Single Camera with Dual Apertures

Abstract

Depth sensing is an active area of research in imaging technology. Here, we use a dual-aperture system to infer depth from a single image based on the principle of depth from defocus (DFD). The dual-aperture camera includes a small all-pass aperture (which passes all light, including infrared) and a larger RGB-pass aperture (which passes visible light only). The IR image captured through the smaller aperture is therefore sharper than the RGB image captured through the larger aperture. Since the difference in blurriness between the two images depends on the distance to the scene, a dual-aperture camera makes it possible to estimate the depth of a scene. Measuring the absolute blur size is difficult, since it is affected by the illuminant's spectral distribution, noise, specular highlights, vignetting, etc. By using a dual-aperture camera, however, the relative blurriness can be measured in a robust way. In this article, a detailed description of extracting depth using a dual-aperture camera is provided, including procedures for fixing each of the artifacts that degrade the quality of DFD-based depth. Experimental results confirm the improved depth extraction achieved by employing the aforementioned schemes.
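The abstract summarizes the DFD principle: the IR channel captured through the small all-pass aperture stays sharper than the RGB channels captured through the large aperture, and the relative blur between the two maps to scene distance. The sketch below is a minimal, hypothetical illustration of that idea, not the chapter's actual pipeline: it assumes a Gaussian blur model and estimates the relative blur for an image patch by re-blurring the sharper IR patch with candidate kernels and picking the width that best matches the RGB patch. The function name, the candidate-sigma range, and the sum-of-squared-differences criterion are illustrative assumptions; converting the recovered blur to metric depth requires camera calibration outside this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_relative_blur(ir_patch, rgb_patch, sigmas=np.linspace(0.0, 5.0, 51)):
    """Brute-force estimate of the relative (defocus) blur between a sharp
    IR patch and the corresponding, more blurred RGB patch.

    Assumes a Gaussian blur model: the RGB patch is modeled as the IR patch
    convolved with a Gaussian of unknown width. The width that best
    re-blurs the IR patch to match the RGB patch serves as the relative
    blur, which a calibrated dual-aperture camera can map to scene depth.
    """
    best_sigma, best_err = 0.0, np.inf
    for sigma in sigmas:
        # Re-blur the sharper IR patch with a candidate Gaussian kernel.
        reblurred = gaussian_filter(ir_patch, sigma) if sigma > 0 else ir_patch
        # Sum-of-squared-differences match against the RGB patch.
        err = float(np.sum((reblurred - rgb_patch) ** 2))
        if err < best_err:
            best_sigma, best_err = sigma, err
    return best_sigma  # relative blur in pixels; depth needs calibration
```

Applied patch by patch over the whole image, such a search yields a raw relative-blur map; the corrections the abstract mentions (illuminant spectral distribution, noise, specular highlights, vignetting, and similar artifacts) are what turn estimates of this kind into a usable depth map.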

Acknowledgments

This work was supported by the Center for Integrated Smart Sensors funded by the Ministry of Science, ICT and Future Planning as the Global Frontier Project.

Author information

Corresponding author

Correspondence to Hyun Sang Park.

Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Park, H.S. et al. (2017). Depth Estimation Using Single Camera with Dual Apertures. In: Yasuura, H., Kyung, C.-M., Liu, Y., Lin, Y.-L. (eds.) Smart Sensors at the IoT Frontier. Springer, Cham. https://doi.org/10.1007/978-3-319-55345-0_7

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-55345-0_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-55344-3

  • Online ISBN: 978-3-319-55345-0

  • eBook Packages: Engineering (R0)
