Real-time 2D to 3D video conversion

  • Original Research Paper
  • Published:

Journal of Real-Time Image Processing

Abstract

We present a real-time implementation of 2D to 3D video conversion that operates on compressed video. In our method, the compressed 2D video is analyzed by extracting its motion vectors. From the motion vector maps, a depth map is built for each frame, and the frames are segmented to provide object-wise depth ordering. These data are then used to synthesize stereo pairs. 3D video synthesized in this fashion can be viewed on any stereoscopic display. In our implementation, anaglyph projection was selected as the 3D visualization method because it is best suited to standard displays.
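As a rough illustration of how such a pipeline could be assembled, the following Python sketch uses OpenCV's Farneback dense optical flow as a stand-in for the motion vectors that the method reads directly from the compressed stream, maps motion magnitude to an approximate depth value, warps each frame horizontally to synthesize a second view, and composes a red-cyan anaglyph. The object-wise segmentation and depth-ordering step is omitted, and all names and parameter values (the input file, MAX_DISPARITY_PX, blur size) are illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch of a motion-based 2D-to-3D conversion pipeline.
# Assumption: Farneback optical flow substitutes for the motion vectors
# that the paper extracts directly from the compressed bitstream.
import cv2
import numpy as np

MAX_DISPARITY_PX = 12  # hypothetical cap on synthesized parallax


def depth_from_motion(prev_gray, curr_gray):
    """Approximate a per-pixel depth map from frame-to-frame motion."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.hypot(flow[..., 0], flow[..., 1])
    # Larger motion -> assumed closer to the camera (motion-parallax cue).
    depth = magnitude / (magnitude.max() + 1e-6)
    return cv2.GaussianBlur(depth.astype(np.float32), (21, 21), 0)


def synthesize_right_view(frame, depth):
    """Warp the frame horizontally by a disparity proportional to depth."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = xs + depth * MAX_DISPARITY_PX
    return cv2.remap(frame, map_x, ys, cv2.INTER_LINEAR)


def make_anaglyph(left, right):
    """Red channel from the left view, green/blue from the right (red-cyan)."""
    anaglyph = right.copy()          # frames are BGR as read by OpenCV
    anaglyph[:, :, 2] = left[:, :, 2]
    return anaglyph


cap = cv2.VideoCapture("input_2d.mp4")  # hypothetical input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    depth = depth_from_motion(prev_gray, gray)
    right = synthesize_right_view(frame, depth)
    cv2.imshow("anaglyph", make_anaglyph(frame, right))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
    prev_gray = gray
cap.release()
cv2.destroyAllWindows()
```

In the real-time implementation described in the abstract, the motion vectors come essentially for free from the already-compressed stream, which is what makes the approach fast; the optical-flow call above is only a convenient substitute for experimentation on decoded frames.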



Author information

Corresponding author

Correspondence to Ianir Ideses.


About this article

Cite this article

Ideses, I., Yaroslavsky, L.P. & Fishbain, B. Real-time 2D to 3D video conversion. J Real-Time Image Proc 2, 3–9 (2007). https://doi.org/10.1007/s11554-007-0038-9
