WO2004049735A1 - Stereoscopic video providing method and stereoscopic video display device - Google Patents
- Publication number
- WO2004049735A1 (PCT/JP2003/012177)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- information
- image
- stereoscopic
- data
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/82—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
- H04N9/8205—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/261—Image signal generators with monoscopic-to-stereoscopic image conversion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/08—Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/21—Collision detection, intersection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/804—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
- H04N9/8042—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
Definitions
- the present invention relates to a stereoscopic image providing method and a stereoscopic image display device.
- A stereoscopic video receiving apparatus and a stereoscopic video system have been proposed that generate a stereoscopic video based on a two-dimensional video signal and depth information extracted from that signal (see Japanese Patent Application Laid-Open No. 2000-78611).
- With this, a stereoscopic video having parallax information can be generated from a real-shot two-dimensional video.
- Suppose, however, that a house exists as an object in the two-dimensional video and this video is combined with a video of a rolling ball. If the ball hits the house from the side, the ball must be displayed as rebounding off the house.
- Yet since the depth information defines only the surface position of the object, a collision between the object and the ball cannot be determined, and the ball simply passes in front of or behind the house.

Disclosure of the Invention
- an object of the present invention is to provide a stereoscopic video providing method and a stereoscopic video display device capable of displaying various stereoscopic videos by adding more information about an object or the like.
- A stereoscopic video providing method according to the present invention is characterized in that, when a two-dimensional video is provided as data, stereoscopic information useful for converting the two-dimensional video data into a stereoscopic video, together with thickness information of objects on the two-dimensional video, is provided with the two-dimensional video data as additional information of the two-dimensional video.
- With this, the object can be treated as having thickness even on the stereoscopic video, based on the thickness information of the object on the two-dimensional video. When another video is composited, this can be used to determine a collision with the other video (or with an object on that other video).
- The stereoscopic video providing method of the present invention may also provide, when a two-dimensional video is provided as data, stereoscopic information useful for converting the two-dimensional video data into a stereoscopic video, together with depth information indicating the near-side position of an object on the two-dimensional video and depth information indicating its far-side position, with the two-dimensional video data as additional information of the two-dimensional video.
- With this, the object can be treated as having thickness even on the stereoscopic video, by using the depth information indicating the near-side position of the object on the two-dimensional video and the depth information indicating its far-side position.
- The stereoscopic video providing method of the present invention may also provide, when a two-dimensional video is provided as data, stereoscopic information useful for converting the two-dimensional video data into a stereoscopic video, together with thickness information for each pixel on the two-dimensional video, with the two-dimensional video data as additional information of the two-dimensional video.
- With this, each pixel on the stereoscopic video can be treated as having a certain thickness, based on the per-pixel thickness information of the two-dimensional video. This can be used for collision determination when another video is composited for display.
- The stereoscopic video providing method of the present invention may also provide, when a two-dimensional video is provided as data, stereoscopic information useful for converting the two-dimensional video data into a stereoscopic video, and is characterized in that depth information indicating the near-side position of each pixel on the two-dimensional video and depth information indicating its far-side position are provided with the two-dimensional video data as additional information of the two-dimensional video.
- The information may be provided by any of broadcasting, communication, and recording on a recording medium. Further, at least one piece of shooting information, such as focal length information or angle-of-view information, may be provided together with the two-dimensional video data as additional information of the two-dimensional video.
- The stereoscopic video providing method of the present invention is also a method for providing multi-viewpoint two-dimensional video as data, characterized in that at least one piece of shooting information among viewpoint interval information, information on the angle between adjacent viewpoints and the shooting target, optical-axis crossing position information, focal length information, and angle-of-view information is provided together with the two-dimensional video data as additional information of the two-dimensional video.
- With this, the display device can use the shooting information provided as additional information of the two-dimensional video, for example to select viewpoints according to the position of the shooting target. Also, when the multi-viewpoint video is shot in a ring around the shooting target, it becomes easy to incorporate and handle the stereoscopic video of the shooting target within three-dimensional data.
- The stereoscopic video display device of the present invention comprises: means for generating stereoscopic video data based on two-dimensional video data and stereoscopic information; means for compositing another video with the stereoscopic video; and means for determining a collision between a display object of the stereoscopic video and a display object of the other video, based on thickness information of a pixel or an object on the two-dimensional video that is provided as additional information of the two-dimensional video.
- With this, the object is treated as having thickness even on the stereoscopic video, based on the thickness information of the object on the two-dimensional video; when another video is composited, a collision can be determined and processing according to the determination can be performed.
- The stereoscopic video display device of the present invention also comprises: means for generating stereoscopic video data based on two-dimensional video data and depth information indicating the near-side position of an object on the two-dimensional video; and means for generating thickness information of the object based on the depth information indicating the far-side position of the object and the depth information indicating its near-side position. Further, the stereoscopic video display device of the present invention comprises: means for generating stereoscopic video data based on two-dimensional video data and depth information indicating the near-side position of each pixel on the two-dimensional video; and means for generating thickness information for each pixel based on the depth information indicating the far-side position of each pixel and the depth information indicating its near-side position.
- The stereoscopic video display device of the present invention is also a device that performs stereoscopic display using two videos selected from a multi-viewpoint video, characterized in that the selection of the two videos is performed based on at least one piece of shooting information among viewpoint interval information, information on the angle between adjacent viewpoints and the shooting target, optical-axis crossing position information, focal length information, and angle-of-view information.
- With this, two videos can be selected based on the shooting information provided as additional information of the two-dimensional video.
- FIGS. 1 (a), 1 (b) and 1 (c) are illustrations showing a method for providing a stereoscopic video image according to an embodiment of the present invention.
- FIGS. 2 (a) and 2 (b) are explanatory diagrams exemplifying a transmission format of stereoscopic video.
- Fig. 3 is an explanatory diagram showing the collision determination: Fig. 3(a) shows a video, Fig. 3(b) shows the case where thickness information is present, and Fig. 3(c) shows the case where thickness information is absent.
- Figs. 4(a) and 4(b) are explanatory diagrams showing how multi-viewpoint video is acquired.
- FIG. 5 is an explanatory diagram showing a selection form of two images.
- In the present embodiment, a stereoscopic video is generated from the two-dimensional video and the stereoscopic information (here, depth information), and the thickness information of objects on the two-dimensional video is used to determine a collision with a composited video.
- The system is composed of a transmitting side, configured as a broadcasting station or a server on the Internet, and a receiving side, composed of broadcast receivers and personal computers equipped with a network connection environment.
- Fig. 1(a) shows a real-shot two-dimensional video 100.
- Image analysis is performed on the two-dimensional video 100 and, as shown in Fig. 1(b), a background video 101, a building video 102, and a car video 103 are extracted. These extracted videos are treated as objects (e.g., by edge information).
- depth values are given in pixel units to generate depth maps. Note that a depth value can be given for each object. The depth value may be given automatically (estimated) or may be given manually.
- the thickness information may be provided for each pixel, or may be provided for each object. If the thickness of the object is constant (for example, in the case of a square building photographed from the front), it can be given in object units.
- Alternatively, two depth maps may be given. When one depth map is used as depth information indicating the near-side position and the other as depth information indicating the far-side position, the thickness is derived from the difference between them.
- It is also possible to give depth information indicating the near-side position to the two-dimensional video of one frame and depth information indicating the far-side position to the two-dimensional video of the next frame, alternating between the two frame by frame.
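The thickness derivation described above, per-pixel thickness as the difference between the far-side and near-side depth maps, can be sketched as follows. This is an illustrative sketch, not the patent's code; the function name and depth values are assumptions:

```python
# Sketch: per-pixel thickness from two depth maps, where one map gives each
# pixel's near-side depth and the other its far-side depth. The thickness of
# a pixel is simply the difference between the two.

def thickness_map(near_depths, far_depths):
    """Per-pixel thickness = far-side depth minus near-side depth."""
    return [far - near for near, far in zip(near_depths, far_depths)]

# Illustrative values: a surface whose front face is at depth 50 and back
# face at depth 80 has thickness 30 at those pixels.
near = [50, 50, 30]   # near-side depth map (front surface positions)
far = [80, 80, 40]    # far-side depth map (back surface positions)
print(thickness_map(near, far))  # [30, 30, 10]
```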
- The transmitting side transmits the depth map and the thickness information together with the two-dimensional video data, as additional information of the two-dimensional video. For transmission, it performs data compression and multiplexing.
- FIG. 2A shows an example of a format for inserting the thickness information.
- the attribute of the information is indicated in the “identification part”, and here, it indicates depth information and thickness information.
- “Pixel number” specifies each pixel.
- “Depth information” is the depth value of the pixel with that pixel number. “Thickness information” is the thickness of the pixel with that pixel number.
- Alternatively, the transmitting side may provide a depth map indicating the near-side position and a depth map indicating the far-side position as the additional information of the two-dimensional video, together with its data.
- An example of the format in this case is shown in Fig. 2 (b).
- the attribute of the information is indicated in the “identification part”, which indicates that the information is depth information.
- “Pixel number” specifies each pixel.
- the “first depth information” is a depth value at a position on the near side of the pixel of the pixel number.
- “Second depth information” is a depth value at a depth position of a pixel of the pixel number.
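A minimal sketch of packing and parsing per-pixel records in the Fig. 2(b) layout (pixel number, first depth information = near-side depth, second depth information = far-side depth) follows. The field widths, a 16-bit pixel number and one byte per depth value, are assumptions for illustration; the patent does not fix concrete sizes:

```python
import struct

# Hypothetical binary layout for one record of the Fig. 2(b) format:
# a little-endian 16-bit pixel number, then the near-side and far-side
# depth values as one byte each. Field widths are illustrative assumptions.
RECORD = struct.Struct("<HBB")

def pack_records(records):
    """Serialize (pixel_number, near_depth, far_depth) tuples."""
    return b"".join(RECORD.pack(n, near, far) for n, near, far in records)

def unpack_records(blob):
    """Recover the (pixel_number, near_depth, far_depth) tuples."""
    return [RECORD.unpack_from(blob, off)
            for off in range(0, len(blob), RECORD.size)]

data = pack_records([(0, 50, 80), (1, 30, 40)])
print(unpack_records(data))  # [(0, 50, 80), (1, 30, 40)]
```

The thickness of each pixel can then be recovered on the receiving side as the difference of the two depth values in each record.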
- The receiving side receives the data of the background video 101, the building video 102, and the car video 103, along with the additional information. If these data are multiplexed, demultiplexing is performed. As the decoding process for each data stream, a process based on, for example, MPEG-4 is adopted.
- The display video is formed from the data of the background video 101, the building video 102, and the car video 103, the depth map, and the data of a video to be composited (for example, a 3D video of a ball 105 generated by a computer).
- The receiving-side display device comprises means for receiving data (a modem, a tuner, etc.), a demultiplexer, decoders, a stereoscopic video data generation unit that generates stereoscopic video data based on the two-dimensional video data and the stereoscopic information, and a video compositing processing unit that composites another video with the stereoscopic video based on the data of that other video. Further, in this embodiment, a collision determination unit is provided that determines a collision between a display object on the stereoscopic video and a display object of the other video.
- Assume that the depth value of the background video 101 is 100; the depth value of the building video 102 is 50 and its thickness value is 30; the depth value of the car video 103 is 30 and its thickness value is 10; and the depth value of the composited ball video 105 is 55 and its thickness value is 1.
- Then the ball 105 can be determined to be located at coordinates behind the car video 103, and at coordinates between the front and the back of the building video 102.
- Accordingly, it is determined that the ball 105 and the building video 102 have collided. The result of this determination is given to the computer described above, which generates a 3D video of the ball 105 in which its course is reversed (it bounces back).
- Fig. 3(c) shows the conventional case with depth values only; there, no collision can be determined, and the ball 105 becomes a video that simply passes behind the building video 102.
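The collision determination described above can be sketched as an overlap test on depth intervals, each object occupying the interval from its depth value to its depth value plus its thickness. This is an illustrative simplification, not the patent's exact procedure; in particular, the screen-space overlap test that a full implementation would also need is omitted:

```python
# Sketch: two objects collide in depth when their intervals
# [depth, depth + thickness] overlap. A complete implementation would also
# check that the objects overlap on screen; only the depth test is shown.

def collides(depth_a, thickness_a, depth_b, thickness_b):
    """True if the depth intervals of two objects overlap."""
    return (depth_a <= depth_b + thickness_b
            and depth_b <= depth_a + thickness_a)

building = (50, 30)   # front at depth 50, back at 80
car = (30, 10)        # front at depth 30, back at 40
ball = (55, 1)        # the composited ball

print(collides(*ball, *building))  # True  -> the ball bounces off the building
print(collides(*ball, *car))       # False -> the ball passes behind the car
```

With depth values only (no thickness), the building would be a zero-thickness surface at depth 50 and the ball at depth 55 would never intersect it, which is exactly the conventional pass-behind behavior described above.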
- Fig. 4 (a) shows the situation when a multi-view video (actual image) is acquired.
- The shooting target (object) A is photographed by cameras 1 through 6, giving two-dimensional videos from six viewpoints. When these six-viewpoint two-dimensional videos are transmitted as data, at least one piece of shooting information among information indicating the viewpoint interval (camera interval), information indicating the optical-axis crossing position, focal length information (subject distance), and angle-of-view information is transmitted together with the two-dimensional video data as its additional information.
- Fig. 4 (b) shows another example when acquiring a multi-view video (real shot).
- Cameras 11 through 18 are placed in a ring around the shooting target A, giving a multi-viewpoint two-dimensional video.
- In this case, information on the angle between adjacent viewpoints (cameras) and the shooting target A is acquired instead of information indicating the viewpoint interval.
- a multi-view two-dimensional image can also be obtained by shooting with a single camera while rotating the shooting target A. At this time, the rotation speed may be included in the shooting information.
- With such ring shooting, each point of the shooting target A (each pixel of the displayed video) can be given three-dimensional coordinate values, making it easy to handle the subject A (a real-shot video) by incorporating it into three-dimensional data (that is, easy to place the real-shot video within three-dimensional data).
- In such shooting, the background should be made black (a black curtain is placed behind the target) so that the single object can be extracted.
- In a stereoscopic video display device given the multi-viewpoint two-dimensional video data and the shooting information, stereoscopic display is performed using two videos from the multi-viewpoint video.
- Stereoscopic display methods using two videos include a method in which the two videos are displayed alternately in time and viewed with shutter glasses, and a method in which the two videos are displayed alternately in space and separated by a parallax barrier.
- The stereoscopic video display device can determine the front-back position (nearer or farther) of the display object based on the focal length information (subject distance) in the shooting information. As shown in Fig. 5 (which corresponds to Fig. 4(a)), when the target A is close to the observer E, the videos of cameras 2 and 5 are selected; when A is far from the observer E, the videos of cameras 3 and 4 are selected.
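The viewpoint selection of Fig. 5 can be sketched as follows. The distance threshold and the mapping to camera pairs (cameras 2 and 5 for a near subject, cameras 3 and 4 for a far one, following the description above) are illustrative assumptions rather than values fixed by the patent:

```python
# Sketch: select the two cameras for stereoscopic display from the subject
# distance carried in the shooting information. The threshold value is a
# hypothetical parameter; the patent only states that a nearer subject uses
# the wider pair (2, 5) and a farther subject the narrower pair (3, 4).

def select_cameras(subject_distance, near_pair=(2, 5), far_pair=(3, 4),
                   threshold=100.0):
    """Return the two camera numbers to use for stereoscopic display."""
    return near_pair if subject_distance < threshold else far_pair

print(select_cameras(40.0))   # (2, 5): subject close to the observer
print(select_cameras(250.0))  # (3, 4): subject far from the observer
```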
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Television Systems (AREA)
- Processing Or Creating Images (AREA)
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN038252422A CN1706201B (zh) | 2002-11-25 | 2003-09-24 | 立体视觉用途图像提供方法和立体图像显示设备 |
US10/534,058 US20060132597A1 (en) | 2002-11-25 | 2003-09-24 | Stereoscopic video providing method and stereoscopic video display |
EP03811878A EP1571854A4 (en) | 2002-11-25 | 2003-09-24 | METHOD FOR CREATING STEREOSCOPIC VIDEO IMAGES AND DISPLAYING STEREOSCOPIC VIDEO IMAGES |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002-340245 | 2002-11-25 | ||
JP2002340245A JP4190263B2 (ja) | 2002-11-25 | 2002-11-25 | 立体視用映像提供方法及び立体映像表示装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004049735A1 true WO2004049735A1 (ja) | 2004-06-10 |
Family
ID=32375815
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/012177 WO2004049735A1 (ja) | 2002-11-25 | 2003-09-24 | 立体視用映像提供方法及び立体映像表示装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20060132597A1 (ja) |
EP (1) | EP1571854A4 (ja) |
JP (1) | JP4190263B2 (ja) |
KR (1) | KR100739275B1 (ja) |
CN (1) | CN1706201B (ja) |
WO (1) | WO2004049735A1 (ja) |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100697972B1 (ko) | 2004-11-16 | 2007-03-23 | 한국전자통신연구원 | 입체 방송 서비스를 위한 디지털 방송 송신 장치 및 그 방법 |
US7586489B2 (en) * | 2005-08-01 | 2009-09-08 | Nvidia Corporation | Method of generating surface defined by boundary of three-dimensional point cloud |
JP4645356B2 (ja) * | 2005-08-16 | 2011-03-09 | ソニー株式会社 | 映像表示方法、映像表示方法のプログラム、映像表示方法のプログラムを記録した記録媒体及び映像表示装置 |
CN101375315B (zh) | 2006-01-27 | 2015-03-18 | 图象公司 | 数字重制2d和3d运动画面以呈现提高的视觉质量的方法和系统 |
CA2884702C (en) | 2006-06-23 | 2018-06-05 | Samuel Zhou | Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition |
KR100763753B1 (ko) * | 2006-09-21 | 2007-10-04 | 에스케이 텔레콤주식회사 | 통신망 기반의 3차원 입체방송 서비스 시스템 및 방법 |
EP2084672A1 (en) * | 2006-11-20 | 2009-08-05 | THOMSON Licensing | System and method for compositing 3d images |
KR100905723B1 (ko) * | 2006-12-08 | 2009-07-01 | 한국전자통신연구원 | 비실시간 기반의 디지털 실감방송 송수신 시스템 및 그방법 |
US20080226281A1 (en) * | 2007-03-13 | 2008-09-18 | Real D | Business system for three-dimensional snapshots |
US20080273081A1 (en) * | 2007-03-13 | 2008-11-06 | Lenny Lipton | Business system for two and three dimensional snapshots |
KR101483659B1 (ko) | 2008-07-11 | 2015-01-16 | 삼성디스플레이 주식회사 | 입체영상 표시방법, 이를 수행하기 위한 표시장치 |
JP2010028456A (ja) * | 2008-07-18 | 2010-02-04 | Sony Corp | データ構造、再生装置および方法、並びにプログラム |
KR100945307B1 (ko) * | 2008-08-04 | 2010-03-03 | 에이알비전 (주) | 스테레오스코픽 동영상에서 이미지를 합성하는 방법 및장치 |
JP4457323B2 (ja) * | 2008-10-09 | 2010-04-28 | 健治 吉田 | 遊技ゲーム機 |
KR101574068B1 (ko) * | 2008-12-26 | 2015-12-03 | 삼성전자주식회사 | 영상 처리 방법 및 장치 |
CN101562754B (zh) * | 2009-05-19 | 2011-06-15 | 无锡景象数字技术有限公司 | 一种改善平面图像转3d图像视觉效果的方法 |
CN101562755B (zh) * | 2009-05-19 | 2010-09-01 | 无锡景象数字技术有限公司 | 一种由平面视频制作3d视频的方法 |
JP5521486B2 (ja) * | 2009-06-29 | 2014-06-11 | ソニー株式会社 | 立体画像データ送信装置および立体画像データ送信方法 |
JP5405264B2 (ja) | 2009-10-20 | 2014-02-05 | 任天堂株式会社 | 表示制御プログラム、ライブラリプログラム、情報処理システム、および、表示制御方法 |
JP4754031B2 (ja) | 2009-11-04 | 2011-08-24 | 任天堂株式会社 | 表示制御プログラム、情報処理システム、および立体表示の制御に利用されるプログラム |
EP2355526A3 (en) | 2010-01-14 | 2012-10-31 | Nintendo Co., Ltd. | Computer-readable storage medium having stored therein display control program, display control apparatus, display control system, and display control method |
KR101674953B1 (ko) * | 2010-07-05 | 2016-11-10 | 엘지전자 주식회사 | 이동 단말기 및 이것의 객체 아이콘 디스플레이 방법 |
KR101667583B1 (ko) * | 2010-05-12 | 2016-10-19 | 엘지전자 주식회사 | 이동 단말기 및 이것의 객체 아이콘 디스플레이 방법 |
JP5573349B2 (ja) | 2010-05-17 | 2014-08-20 | パナソニック株式会社 | パノラマ展開画像撮影システムおよび方法 |
US9693039B2 (en) | 2010-05-27 | 2017-06-27 | Nintendo Co., Ltd. | Hand-held electronic device |
JP2012257105A (ja) * | 2011-06-09 | 2012-12-27 | Olympus Corp | 立体画像取得装置 |
CN102306393B (zh) * | 2011-08-02 | 2013-07-17 | 清华大学 | 一种基于轮廓匹配的深度扩散方法及装置 |
KR101256924B1 (ko) * | 2012-05-19 | 2013-04-19 | (주)루쏘코리아 | 3차원영상 제작방법 |
JP2018157398A (ja) * | 2017-03-17 | 2018-10-04 | 株式会社リコー | 情報端末、情報処理装置、情報処理システム、情報処理方法及びプログラム |
JP7492433B2 (ja) | 2020-10-15 | 2024-05-29 | シャープ株式会社 | 画像形成装置 |
CN114827440A (zh) * | 2021-01-29 | 2022-07-29 | 华为技术有限公司 | 基于光场显示的显示模式的转换方法及转换装置 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09289638A (ja) * | 1996-04-23 | 1997-11-04 | Nec Corp | 3次元画像符号化復号方式 |
JP2002095018A (ja) * | 2000-09-12 | 2002-03-29 | Canon Inc | 画像表示制御装置及び画像表示システム、並びに画像データの表示方法 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0749466A (ja) * | 1993-08-04 | 1995-02-21 | Sony Corp | 画像表示方法 |
US6049341A (en) * | 1997-10-20 | 2000-04-11 | Microsoft Corporation | Edge cycle collision detection in graphics environment |
US6518966B1 (en) * | 1998-03-11 | 2003-02-11 | Matsushita Institute Industrial Co., Ltd. | Method and device for collision detection and recording medium recorded with collision detection method |
JP2000078611A (ja) * | 1998-08-31 | 2000-03-14 | Toshiba Corp | 立体映像受信装置及び立体映像システム |
JP2002025188A (ja) * | 2000-07-04 | 2002-01-25 | Hitachi Ltd | 情報記憶装置、信号処理回路 |
-
2002
- 2002-11-25 JP JP2002340245A patent/JP4190263B2/ja not_active Expired - Fee Related
-
2003
- 2003-09-24 EP EP03811878A patent/EP1571854A4/en not_active Withdrawn
- 2003-09-24 KR KR1020057009345A patent/KR100739275B1/ko not_active IP Right Cessation
- 2003-09-24 WO PCT/JP2003/012177 patent/WO2004049735A1/ja active Application Filing
- 2003-09-24 CN CN038252422A patent/CN1706201B/zh not_active Expired - Fee Related
- 2003-09-24 US US10/534,058 patent/US20060132597A1/en not_active Abandoned
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09289638A (ja) * | 1996-04-23 | 1997-11-04 | Nec Corp | 3次元画像符号化復号方式 |
JP2002095018A (ja) * | 2000-09-12 | 2002-03-29 | Canon Inc | 画像表示制御装置及び画像表示システム、並びに画像データの表示方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP1571854A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP1571854A4 (en) | 2011-06-08 |
KR20050086765A (ko) | 2005-08-30 |
CN1706201B (zh) | 2012-02-15 |
EP1571854A1 (en) | 2005-09-07 |
JP2004179702A (ja) | 2004-06-24 |
CN1706201A (zh) | 2005-12-07 |
JP4190263B2 (ja) | 2008-12-03 |
KR100739275B1 (ko) | 2007-07-12 |
US20060132597A1 (en) | 2006-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004049735A1 (ja) | 立体視用映像提供方法及び立体映像表示装置 | |
CN101636747B (zh) | 二维/三维数字信息获取和显示设备 | |
EP1836859B1 (en) | Automatic conversion from monoscopic video to stereoscopic video | |
JP4188968B2 (ja) | 立体視用映像提供方法及び立体映像表示装置 | |
JP2006128818A (ja) | 立体映像・立体音響対応記録プログラム、再生プログラム、記録装置、再生装置及び記録メディア | |
CN102802014B (zh) | 一种多人跟踪功能的裸眼立体显示器 | |
WO2012060564A1 (en) | 3d camera | |
EP3349444B1 (en) | Method for processing media content and technical equipment for the same | |
WO2011070774A1 (ja) | 3d映像処理装置および3d映像処理方法 | |
JP2010181826A (ja) | 立体画像形成装置 | |
JP2006128816A (ja) | 立体映像・立体音響対応記録プログラム、再生プログラム、記録装置、再生装置及び記録メディア | |
EP2668640A1 (en) | Method, apparatus and computer program product for three-dimensional stereo display | |
KR101960577B1 (ko) | 뷰 공간에 관한 스테레오 정보를 송수신하는 방법 | |
CN102938845B (zh) | 基于透视投影的实时虚拟视点生成方法 | |
JP2011182003A (ja) | パノラマカメラ及び360度パノラマ立体映像システム | |
JPH09107561A (ja) | 2つのカメラの自動選択装置および2つのカメラの自動選択方法およびその用途 | |
JP2003085593A (ja) | 対話型映像操作装置及び映像コンテンツの表示方法 | |
KR101046580B1 (ko) | 영상처리장치 및 그 제어방법 | |
Hori et al. | Arbitrary stereoscopic view generation using multiple omnidirectional image sequences | |
KR102094848B1 (ko) | (초)다시점 미디어의 라이브 스트리밍 방법 및 장치 | |
RU2146856C1 (ru) | Система объемного телевидения | |
KR20210111600A (ko) | 다중 화각 카메라를 활용한 전방위 뎁스 영상 생성 기반 이동수단 모니터링 시스템 및 방법 | |
JP2004507947A (ja) | 立体ビデオキャプチャ装置および3次元用ビューファインダを有するデュアル受信機 | |
CN102769763B (zh) | 三维影像摄相机及其相关控制方法 | |
Jung et al. | Evaluation of perceived depth resolution in multi-view threedimensional display using depth image-based rendering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): DE FR GB |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2003811878 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20038252422 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020057009345 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 1020057009345 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2003811878 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2006132597 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10534058 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 10534058 Country of ref document: US |