
360° Stereo Image Composition With Depth Adaption

Published: 01 September 2024

Abstract

360° images and videos have become an economical and popular way to provide VR experiences using real-world content. However, the manipulation of stereo panoramic content remains less explored. In this article, we focus on the 360° image composition problem and develop a solution that takes an object from a stereo image pair and inserts it at a given 3D position in a target stereo panorama, with well-preserved geometry. Our method uses recovered 3D point clouds to guide the generation of the composited image. More specifically, we observe that inserting objects into equirectangular images with a single one-off operation never produces satisfactory depth perception and generates ghost artifacts when users view the result from different directions. We therefore propose a novel per-view projection method that segments the object in 3D spherical space with the stereo camera pair facing in that direction. A deep depth densification network is proposed to generate depth guidance for the stereo image generation of each view segment according to the desired position and pose of the inserted object. We finally combine the synthesized view segments and blend the objects into the target stereo 360° scene. A user study demonstrates that our method provides good depth perception and removes ghost artifacts. The per-view solution is a potential paradigm for other content manipulation methods for 360° images and videos.
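The per-view idea rests on remapping between 3D view directions and equirectangular pixel coordinates, and on the disparity implied by placing an object at a chosen depth in a stereo panorama. The sketch below is an illustrative assumption, not the authors' implementation: `dir_to_equirect` and `angular_disparity` are hypothetical helpers, and the axis/longitude convention is one common choice among several.

```python
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a 3D direction (x, y, z) to pixel coordinates (u, v) in an
    equirectangular panorama of size width x height.
    Convention assumed here: longitude 0 at +Z, increasing toward +X;
    latitude +pi/2 at +Y (top of the panorama)."""
    r = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x, z)        # in [-pi, pi]
    lat = math.asin(y / r)        # in [-pi/2, pi/2]
    u = (lon / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

def angular_disparity(baseline, depth):
    """Small-angle approximation of the angular disparity (radians)
    between the two eyes for a point at `depth`, given the interocular
    `baseline` of the stereo camera pair."""
    return baseline / depth
```

For example, the forward direction (0, 0, 1) lands at the center of a 1024x512 panorama, and an object inserted at 2 m with a 6.5 cm baseline subtends roughly 0.0325 rad of disparity; a correct composition must reproduce that disparity in each per-view segment, which is why a single equirectangular paste cannot suffice.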


Published In

IEEE Transactions on Visualization and Computer Graphics  Volume 30, Issue 9
Sept. 2024
704 pages

Publisher

IEEE Educational Activities Department

United States


Qualifiers

  • Research-article
