KR101239149B1 - Apparatus and method for converting multi-view 3D image
- Publication number
- KR101239149B1 (application KR1020110081365A)
- Authority
- KR
- South Korea
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
The present invention relates to a multi-view three-dimensional (3D) image conversion apparatus. The apparatus may include: a display unit that displays a first image and a second image; a sensor unit that detects a user's viewpoint relative to the screen of the display unit; a map generator that receives a current image frame and a previous image frame and generates a depth map for the movement of each of the unit elements constituting the current image frame; and a multi-view image generator that generates the first image by moving each of the unit elements of the current image frame based on the depth map and the user's viewpoint, and generates the second image through tilt control of the current image frame based on the user's viewpoint. The first image and the second image together compose a 3D image.
Description
The present invention relates to an image conversion system, and more particularly, to an apparatus and method for converting a 2D image into a multiview 3D image according to a user's viewpoint in real time.
Rapidly developing 3D imaging technology is moving beyond simply delivering stereoscopic images to users and toward providing images as realistic as actual scenes: it is evolving from transmitting 3D information to delivering a 3D experience. Unlike single-view 3D imaging, multi-view 3D image conversion technology provides stereoscopic images with varying binocular disparity, as in the real world, according to changes in the user's viewpoint.
Unlike 2D images, 3D images contain far more information, which makes them expensive to produce. Multi-view 3D imaging requires even more information than single-view 3D imaging, and the production of 3D content, including 3D broadcasting, has not taken off because of this difficulty and the variety of competing standards.
For 3D display devices and related industries to flourish and find use in many settings, it is more competitive to generate not only single-view 3D images but also multi-view 3D images.
There is therefore a need to convert two-dimensional images into three-dimensional images and to vary the stereoscopic effect of the resulting 3D image according to the user's viewpoint.
An object of the present invention is to provide a multi-view 3D image converting apparatus and method capable of converting a 2D image into a multi-view 3D image according to a user's viewpoint.
The multi-view 3D image conversion apparatus of the present invention includes: a display unit that displays a first image and a second image; a sensor unit that detects the user's viewpoint relative to the screen of the display unit; a map generator that receives the current image frame and the previous image frame and generates a depth map for the movement of each of the unit elements constituting the current image frame; and a multi-view image generator that generates the first image by moving each of the unit elements of the current image frame based on the depth map and the user's viewpoint, and generates the second image by controlling the tilt of the current image frame based on the user's viewpoint. The first image and the second image are images for composing a 3D image.
In this embodiment, the map generator includes: a preprocessing unit that uses the previous image frame to select one base map from among a plurality of base maps in which perspective information is set, and one brightness correction table from among a plurality of brightness correction tables in which brightness values are set; a full image processor that generates a full depth map setting the perspective of the current image frame according to the selected base map; a local image processor that generates a local depth map by correcting the brightness of the current image frame with the selected brightness correction table; and a depth map generator that generates the depth map by computing the full depth map and the local depth map.
In this embodiment, the multi-view image generator includes: a multi-view information selector that selects first multi-view information and second multi-view information corresponding to the user's viewpoint; a parallax processor that applies multi-view processing to the depth map according to the first multi-view information and generates the first image for reproducing a 3D image by computing the resulting multi-view depth map with the current image frame; and a tilt controller that generates the second image for reproducing a 3D image through a tilt operation on the current image frame using the second multi-view information.
In this embodiment, the first multi-view information includes information on a parameter and a depth direction for multi-viewing the depth map.
In this embodiment, the second multi-view information includes information on a tilt value and a tilt direction for multi-viewing the current image frame.
In this embodiment, the multi-view image generator may further include a first distance controller that moves the first image in either the left or the right direction according to a first distance control signal, and a second distance controller that moves the second image in either the left or the right direction according to a second distance control signal.
In this embodiment, the first distance control signal and the second distance control signal are signals for either increasing or decreasing the distance between the first image and the second image.
In this embodiment, the parallax processor includes: a buffer that stores the current image frame; a depth map converter that receives the depth map and the first multi-view information and multi-views the depth map according to that information; a parallax calculator that generates the first image by computing the current image frame output from the buffer with the multi-view depth map; a 3D image interpolator that interpolates the first image; and a circular buffer that sequentially selects and outputs the interpolated first image.
In this embodiment, the tilt control unit receives the second multi-view information and multi-views the current image frame according to the second multi-view information.
In this embodiment, if the first image is a left image, the second image is a right image, and if the first image is a right image, the second image is a left image.
In this embodiment, the unit element is a pixel.
The multi-view 3D image conversion method according to an embodiment of the present invention includes: detecting the user's viewpoint relative to the display screen; generating a depth map for the movement of each of the unit elements constituting the current image frame using the previous image frame and the current image frame; selecting first multi-view information corresponding to the user's viewpoint; converting the depth map into a multi-view depth map based on the first multi-view information; generating a first image in which each of the unit elements of the current image frame is moved according to the multi-view depth map; selecting second multi-view information corresponding to the user's viewpoint; generating a second image through tilt control of the current image frame based on the second multi-view information; and outputting the first image and the second image under synchronous control.
In this embodiment, the first multi-view information includes information on a parameter and a depth direction for multi-viewing the depth map.
In this embodiment, the second multi-view information includes information on a tilt value and a tilt direction for multi-viewing the current image frame.
In this embodiment, the current image frame and the previous image frame are two-dimensional image frames.
According to the present invention, the multi-view 3D image conversion apparatus can convert a two-dimensional image into a multi-view three-dimensional image by generating left and right images from the two-dimensional image according to the user's viewpoint. Because the apparatus obtains a multi-view 3D image directly from a 2D image, it can also reduce the cost of producing multi-view 3D content.
FIG. 1 is a diagram illustrating a multi-view 3D image conversion apparatus according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a user viewpoint measured by a sensor unit according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a map generator according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating base maps and base map indexes according to an embodiment of the present invention;
FIGS. 5A through 5D illustrate brightness correction using brightness correction tables corresponding to brightness correction table indexes '1', '2', '3', and 'k' according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a local image processor according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a brightness correction unit according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a depth map generator according to an embodiment of the present invention;
FIG. 9 illustrates a full depth map, an area depth map, and a depth map according to an embodiment of the present invention;
FIG. 10 illustrates a depth map generation operation of a depth map generator according to an embodiment of the present invention;
FIG. 11 illustrates a depth map assigned to an image field according to depth map correction according to an embodiment of the present invention;
FIG. 12 illustrates a depth map and a corrected depth map according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating the structure of a multi-view image generator according to an embodiment of the present invention;
FIG. 14 illustrates a depth map and a slope map according to a user's viewpoint according to an embodiment of the present invention;
FIG. 15 is a diagram illustrating a parallax processor according to an embodiment of the present invention;
FIG. 16 illustrates a 3D image generation operation of a parallax processor according to an embodiment of the present invention;
FIG. 17 illustrates a tilt control operation of a tilt controller according to an embodiment of the present invention; and
FIG. 18 schematically illustrates first and second images generated according to an embodiment of the present invention; and
FIG. 19 illustrates operations of the first distance controller and the second distance controller according to an embodiment of the present invention.
The advantages and features of the present invention, and the manner of achieving them, will become apparent from the embodiments described in detail below in conjunction with the accompanying drawings. The present invention is not, however, limited to the embodiments disclosed herein and may be embodied in other forms; the embodiments are provided so that this disclosure fully conveys the technical idea of the present invention to those skilled in the art.
In the drawings, embodiments of the present invention are not limited to the specific forms illustrated, which may be exaggerated for clarity. Throughout the specification, parts denoted by the same reference numerals represent the same components.
The expression "and/or" is used herein to mean any combination including at least one of the items listed before and after it. The expression "connected/coupled" covers both direct connection to another component and indirect connection through intervening components. In this specification, singular forms include the plural unless the context clearly indicates otherwise. Components, steps, operations, and elements described with "comprises" or "comprising" do not exclude the presence or addition of one or more other components, steps, operations, or elements.
The present invention provides a multi-view 3D image conversion apparatus for converting a 2D image into a multiview 3D image according to a user's viewpoint.
FIG. 1 is a diagram illustrating a multi-view 3D image conversion apparatus according to an embodiment of the present invention.
Referring to FIG. 1, the apparatus 100 for converting a multi-view 3D image includes a sensor unit 110, a map generator 120, a multi-view image generator 130, and a display unit 140.
The sensor unit 110 detects the user's viewpoint relative to the screen of the display unit 140.
The map generator 120 receives the current image frame and the previous image frame and generates a depth map for the movement of each of the unit elements constituting the current image frame.
The multi-view image generator 130 generates the first image by moving each of the unit elements of the current image frame based on the depth map and the user's viewpoint, and generates the second image through tilt control of the current image frame based on the user's viewpoint.
Meanwhile, the first image and the second image are each generated to provide the user with a three-dimensional stereoscopic impression of an image frame, and therefore have an association relationship with each other.
The display unit 140 displays the first image and the second image.
Accordingly, the multi-view 3D image conversion apparatus 100 can convert a two-dimensional image into a multi-view three-dimensional image according to the user's viewpoint.
The multi-view 3D image conversion apparatus 100 is described in detail below with reference to the accompanying drawings.
FIG. 2 is a diagram illustrating a user viewpoint measured by a sensor unit according to an exemplary embodiment of the present invention.
Referring to FIG. 2, the sensor unit 110 detects the position of the user's viewpoint relative to the screen 141 and expresses it as a coordinate pair.
The number (0 or 1) in the first coordinate position indicates the distance of the user's viewpoint from the screen 141: 0 denotes the region near the screen 141 relative to the region denoted by 1, and 1 denotes the region farther from the screen relative to the region denoted by 0.
The number in the second coordinate position (-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, or 5) indicates the direction of the user's viewpoint (left or right) relative to the center of the screen 141. A value of 5 means the user's viewpoint is at the far right of the screen 141, and -5 means the far left; 0 indicates that the user's viewpoint is directly in front of the screen 141. Accordingly, the sensor unit 110 can report the user's viewpoint as a (distance, direction) coordinate.
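The coordinate scheme above can be sketched in code. This is a minimal illustration, not the patent's implementation: the function name, the distance threshold, and the angular range are all assumptions; only the output grid (a 0/1 distance flag and a -5 to 5 direction index) comes from the description.

```python
# Hypothetical sketch of the sensor unit's viewpoint grid: a 0/1 distance
# flag plus a direction index from -5 (far left) to 5 (far right).
# near_limit_cm and max_angle_deg are invented thresholds.

def quantize_viewpoint(distance_cm, angle_deg, near_limit_cm=100, max_angle_deg=50):
    """Map a measured user position to a (distance, direction) coordinate."""
    distance = 0 if distance_cm <= near_limit_cm else 1
    direction = round(angle_deg / max_angle_deg * 5)   # scale onto -5..5
    return distance, max(-5, min(5, direction))
```

A user standing close to the screen and slightly right of center would, under these assumed thresholds, map to a coordinate such as (0, 1).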
The sensor unit 110 provides the detected user viewpoint to the multi-view image generator 130.
FIG. 3 is a diagram illustrating a map generator according to an exemplary embodiment of the present invention.
Referring to FIG. 3, the map generator 120 includes a preprocessor 121, an entire image processor 122, a local image processor 123, a depth map generator 124, and a depth map correction unit 125.
The
When the current image frame (n image frame), which is a two-dimensional image, is input based on the multi-view 3D
The
Also, the
The
The
The
The
For example, the
FIG. 4 is a diagram illustrating base maps and base map indexes according to an embodiment of the present invention.
Referring to FIG. 4, the
For example, in an image captured by a camera, a nearby subject has little perspective (distance), while a background such as a landscape, the sea, or the sky has greater perspective (distance) than the subject. If there is a large difference in distance between the upper and lower regions of an image frame, the frame is regarded as a far image; if the difference is small, it is regarded as a near image. When a depth of '0' is assigned to the far region, depth values greater than '0' are assigned as the near region is approached. A base map may therefore set depths differently depending on whether the 2D image is a near image or a far image: in a near image, the depth difference between the background image and the object image is smaller than in a far image, and assuming the upper part of the frame is the far region or background and the lower part is the near region or object, a far image has a greater depth contrast between the object region and the background region than a near image.
Here, the base map index of the near image is set to '1' and the base map index of the far image is set to 'i'.
For example, the base map corresponding to base map index '1' has an even distribution of depth values, with no difference between the background region and the object region. In the base map corresponding to base map index 'i', however, the object region at the bottom has a larger depth value than the background region. As the image changes from a near image to a far image, the base map index may increase sequentially from '1' to 'i'.
In FIG. 4, the figures on the left illustrate perspective and those on the right show the base map in which perspective is set for each base map index; darker areas of a base map are farther away than brighter areas. The distance-based base maps described here are only an example and may be implemented in various other forms.
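The base maps of FIG. 4 can be illustrated with a small sketch. The 0-15 depth range matches the depth values used later in the description, but the gradient shape, the exponent mapping, and the function name are assumptions; the patent defines base maps only as predefined perspective tables.

```python
# Illustrative base map generation: larger depth values toward the bottom
# (near) rows, smaller values toward the top (far) rows. A higher index
# gives a stronger near/far contrast, like the far-image maps in FIG. 4.
# The gamma mapping and 0-15 range are assumptions.

def make_base_map(rows, cols, index, num_indices=8, max_depth=15):
    """Return a rows x cols base map; higher index = stronger far-image contrast."""
    gamma = 1.0 + (index - 1) * (2.0 / (num_indices - 1))  # 1.0 (near) .. 3.0 (far)
    base = []
    for r in range(rows):
        t = r / (rows - 1) if rows > 1 else 0.0   # 0 at top (far) .. 1 at bottom (near)
        depth = round(max_depth * (t ** gamma))
        base.append([depth] * cols)
    return base
```

With index '1' the depth rises evenly from top to bottom; with the highest index the bottom rows dominate, mirroring the far-image contrast described above.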
The
On the other hand, when a two-dimensional image is first input to the multi-view three-dimensional
In addition, the
The
FIGS. 5A to 5D illustrate brightness correction using brightness correction tables corresponding to brightness correction table indices '1', '2', '3', and 'k' according to an embodiment of the present invention.
Area depth maps are generated using the brightness of the image. Analyzing the brightness distribution of an overall bright image, an overall dark image, or an unclear image shows brightness concentrated in a narrow range. Creating an area depth map from such an image degrades the quality of the 3D image and weakens the stereoscopic effect; the more uniform the distribution of brightness values, the better the depth map that can be generated.
The brightness distribution of the entire image can be made uniform by substituting the brightness value of each pixel using a brightness correction table selected according to the characteristics of the image. The indices of the brightness correction tables may be set from '1' to 'k' according to predefined brightness distributions. The input brightness value (or input brightness level) of a table is the normalized brightness value of a pixel extracted from the input image, and the output brightness value (or output brightness level) is the corrected brightness value, normalized to the same range as the input.
The input and output brightness values in a table have the following meaning. The image of the previous frame is subsampled to extract the brightness of each pixel, and the values are normalized to 16 levels (0-15). The frequency of each brightness level is accumulated until the input image frame is complete, producing a distribution of brightness values over levels 0 to 15. Before the next image frame is input, the index of the table whose predefined distribution is most similar to this measured distribution is set. The next input image frame is then corrected by looking up the brightness correction table for that index and substituting each pixel's brightness with the corresponding output value.
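The histogram-and-index selection just described can be sketched as follows. The 16-level normalization and the accumulate-then-match flow come from the text; the subsampling step is omitted, and the similarity measure (squared error between normalized histograms) and all names are assumptions.

```python
# Sketch of the preprocessing step: normalize brightness to 16 levels,
# accumulate a histogram over the frame, then pick the predefined
# distribution (table index) closest to it. Reference distributions are
# invented placeholders.

def brightness_histogram(pixels, max_value=255):
    """Accumulate a 16-bin histogram of brightness values in 0..max_value."""
    hist = [0] * 16
    for p in pixels:
        level = min(15, p * 16 // (max_value + 1))
        hist[level] += 1
    return hist

def select_table_index(hist, reference_hists):
    """Pick the 1-based index of the reference distribution closest to hist."""
    total = sum(hist) or 1
    norm = [h / total for h in hist]
    best, best_err = 1, float("inf")
    for i, ref in enumerate(reference_hists, start=1):
        err = sum((a - b) ** 2 for a, b in zip(norm, ref))
        if err < best_err:
            best, best_err = i, err
    return best
```

An overall dark frame would match a dark reference distribution and so select a table that stretches low levels upward, as in FIG. 5B.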
Referring to FIG. 5A, when the brightness correction table index is '1', the input brightness values of the table are divided into levels '0' to '15'. Input brightness value '0' is converted (or replaced) to output brightness value '0', input '1' to output '1', and so on: input values '0' to '15' map to output values '0' to '15', respectively.
Referring to FIG. 5B, when the index is '2', input values '0' to '15' map to output values '0, 0, 0, 0, 0, 0, 0, 1, 2, 4, 6, 8, 10, 12, 14, 15', respectively.
Referring to FIG. 5C, when the index is '3', input values '0' to '15' map to output values '0, 1, 2, 4, 6, 8, 10, 12, 14, 15, 15, 15, 15, 15, 15, 15', respectively.
Referring to FIG. 5D, when the index is 'k', input values '0' to '15' map to output values '0, 0, 0, 0, 0, 1, 3, 5, 7, 9, 11, 13, 15, 15, 15, 15', respectively.
In each case, the graph on the right shows the frequency of output brightness values as an example.
5A through 5D illustrate brightness correction tables corresponding to indices of '1' through 'k' as an example. Thus, the index of each brightness correction table can be implemented with various values.
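Applying one of these tables is a straightforward 16-entry lookup. The sketch below copies the index-'2' mapping quoted above; the function and table names are assumptions.

```python
# The brightness correction of FIGS. 5A-5D is a per-pixel 16-entry lookup.
# TABLE_2 copies the index-'2' mapping quoted in the text: it stretches a
# dark image's upper levels toward the full 0-15 range.

TABLE_2 = [0, 0, 0, 0, 0, 0, 0, 1, 2, 4, 6, 8, 10, 12, 14, 15]

def correct_brightness(levels, table):
    """Replace each 0-15 input level with the table's output level."""
    return [table[v] for v in levels]
```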
FIG. 6 is a diagram illustrating a local image processor according to an exemplary embodiment of the present invention.
Referring to FIG. 6, the local image processor 123 includes a brightness correction unit 210 and a depth generation unit 220.
The
The
FIG. 7 is a diagram illustrating a brightness correction unit according to an exemplary embodiment of the present invention.
Referring to FIG. 7, the brightness correction unit 210 includes a table selector 211, a brightness value converter 212, a brightness information extraction unit 213, and a correction brightness value generation unit 214.
The
The brightness
The
The corrected
Meanwhile, the brightness correction table uses brightness levels '0' to '15', which differ from the brightness levels of the input image (0 to n). Different levels are used to minimize the memory required to store the brightness correction table and to make the distribution of brightness values easy to check. For example, to convert the '0-15' levels into '0-255', each level of the table must be upsampled by a factor of 16; that is, each step of the table's input and output values is expanded 16-fold.
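The 16x upsampling mentioned above can be sketched as follows; mapping output level 0-15 onto 0-255 by multiplying by 17 is an assumption, since the patent only states that each step is expanded 16-fold.

```python
# Sketch of expanding a 16-level correction table so it can be applied
# directly to 0-255 pixel values. Linear scaling of both axes is an
# assumption.

def upsample_table(table16):
    """Expand a 16-entry 0-15 table into a 256-entry 0-255 table."""
    table256 = []
    for level_in in range(256):
        out16 = table16[level_in // 16]      # which 16-level step this falls in
        table256.append(out16 * 17)          # 0..15 -> 0..255 (15 * 17 = 255)
    return table256
```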
FIG. 8 is a diagram illustrating a depth map generator according to an exemplary embodiment of the present invention.
Referring to FIG. 8, the depth map generator 124 includes a map computing unit 310.
The
FIG. 9 is a diagram illustrating a full depth map, an area depth map, and a depth map according to an embodiment of the present invention.
Referring to FIG. 9, a full depth map, an area depth map, and the depth map generated from them are shown.
Depth is set in each of the pixels in the
For example, a pixel having a depth of '0' first located at the top left of the
FIG. 10 is a diagram illustrating a depth map generation operation of the depth map generator according to an exemplary embodiment of the present invention.
Referring to FIG. 10, the depth map generator 124 generates the depth map by computing the full depth map and the area depth map pixel by pixel.
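The patent does not fix the operation by which the full depth map and the area depth map are combined into one depth map; a per-pixel weighted sum clipped to the 0-15 depth range is one plausible sketch, and the weights and names here are assumptions.

```python
# Hypothetical combination of the global-perspective (full) map and the
# brightness-based (area/local) map into the final depth map. The weighted
# sum and 0-15 clipping are assumptions.

def combine_depth_maps(full_map, local_map, w_full=0.5, w_local=0.5, max_depth=15):
    """Blend two equal-sized depth maps pixel by pixel."""
    combined = []
    for row_f, row_l in zip(full_map, local_map):
        row = [min(max_depth, round(w_full * f + w_local * l))
               for f, l in zip(row_f, row_l)]
        combined.append(row)
    return combined
```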
FIG. 11 is a diagram illustrating a depth map allocated to an image field according to a depth map correction according to an embodiment of the present invention.
Referring to FIG. 11, the
The
For example, the corrected depth values are obtained from the depth information of the previous field as follows. Let D(p) denote the depth at pixel position p of the previous field's odd line, and p-1 the previous position.
Corrected depth of the odd region: D_odd'(p) = (D(p) + D(p-1)) / 2, the average of the previous field's odd-line depth at the current position p and at the previous position p-1.
Corrected depth of the even region: D_even'(p) = (D_odd'(p) + D(p-1)) / 2, the average of the corrected odd-line depth at the current position p in the current field and the previous field's odd-line depth at the previous position p-1.
Depth correction of the odd region smooths the depth map and thereby reduces the influence of noise. Depth correction of the even region, by correcting against the depth values assigned to the odd region, minimizes the overlap of pixel shifts that can occur during 3D parallax processing and so yields clearer image quality.
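The odd/even correction described above can be sketched per pixel row as follows; d_prev stands for the previous field's odd-line depth row, and the handling of the first position (p = 0) is an assumption.

```python
# Sketch of the field-based depth correction: odd lines are smoothed by
# averaging the previous field's depths at p and p-1; even lines average
# the corrected odd depth at p with the previous field's depth at p-1.
# Reusing d_prev[0] at the left boundary is an assumption.

def correct_field_depths(d_prev):
    """Return (odd, even) corrected depth rows from the previous field's row."""
    odd, even = [], []
    for p, d in enumerate(d_prev):
        left = d_prev[p - 1] if p > 0 else d
        d_odd = (d + left) / 2            # smooth the odd-line depth
        odd.append(d_odd)
        even.append((d_odd + left) / 2)   # bias even lines toward the left value
    return odd, even
```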
Through this, the
In addition, the
FIG. 12 illustrates a depth map and a corrected depth map according to an embodiment of the present invention.
Referring to FIG. 12, a depth map and the corresponding corrected depth map are shown.
FIG. 13 is a diagram illustrating a structure of a multi-view image generator according to an embodiment of the present invention.
Referring to FIG. 13, the multi-view image generator 130 includes a multi-view information selector 131, a parallax processor 132, a tilt controller 133, a first distance controller 134, and a second distance controller 135.
The
The
The
For example, the tilt control refers to a control for converting a rectangular image frame into a rhombus image frame. Depth information may be set in each of the pixel lines (fields) constituting the image frame similarly to the image viewed from the front of the user. The image frame may be composed of a plurality of pixel lines.
The
The
The
The
The
The
Each of the
The multi-view three-dimensional
In this case, the multi-view 3D
FIG. 14 is a diagram illustrating a depth map and a slope map according to a user's viewpoint according to an embodiment of the present invention.
Referring to FIG. 14, (a) shows the depth maps of the left image and (b) shows the slope maps of the right image. The first image is generated by the parallax processor 132 using a depth map selected according to the user's viewpoint, and the second image is generated by the tilt controller 133 using a corresponding slope map.
(a) shows the depth maps of the left image, with depths set corresponding to L(-3), L(-2), L(-1), L(0), L(1), L(2), and L(3) according to the user's viewpoint.
(b) shows the tilt maps of the right image, with slopes set corresponding to R(-3), R(-2), R(-1), R(0), R(1), R(2), and R(3) according to the user's viewpoint.
(c) illustrates the first image, generated using the corrected depth map corresponding to the user's viewpoint shown in FIG. 2, and the second image, generated through tilt control. Here, the horizontal axis represents direction to the left or right of the screen 141, and the vertical axis represents distance from the screen 141.
For example, when the user's viewpoint corresponds to coordinates (1, -5), 3D images (a first image and a second image) corresponding to L(-3) and R(-1) may be generated. When the user's viewpoint corresponds to coordinates (0, 2), 3D images corresponding to L(0) and R(2) may be generated. In this example, multi-view images for 22 viewpoint regions are generated according to the user's viewpoint through combinations of the seven left and seven right images spanning left, front, and right. The number of depth maps for the left image, the number of tilt maps for the right image, and the number of viewpoint regions may all be varied.
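Because the text gives the viewpoint-to-image mapping only by example, a lookup table keyed on the (distance, direction) coordinate reproduces the two quoted pairs without guessing the full rule; every other entry, the default value, and the names are placeholders.

```python
# Hypothetical mapping from the 22 viewpoint regions to the seven left/right
# image variants. Only the two entries quoted in the description are real;
# the rest would be filled in the same fashion.

VIEW_TABLE = {
    (1, -5): ("L(-3)", "R(-1)"),   # quoted in the description
    (0, 2): ("L(0)", "R(2)"),      # quoted in the description
}

def select_multiview_info(distance, direction):
    """Return the (first, second) multi-view selections for a viewpoint."""
    return VIEW_TABLE.get((distance, direction), ("L(0)", "R(0)"))
```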
FIG. 15 is a diagram illustrating a parallax processing unit according to an embodiment of the present invention.
Referring to FIG. 15, the parallax processor 132 includes a buffer 510, a depth map converter 520, a parallax operation unit 530, a 3D image interpolator 540, and a circular buffer 550.
The
The
The
In addition, the
The
The
The
FIG. 16 is a diagram illustrating a 3D image generating operation of a parallax processor according to an embodiment of the present invention.
Referring to FIG. 16, the parallax processor 132 generates a three-dimensional image from a two-dimensional image using a multi-view depth map.
In (a), a two-dimensional image is shown.
In (b1)-(b4), multi-view depth maps are shown.
In (c1)-(c4), the three-dimensional images obtained from the two-dimensional image using the respective depth maps are shown.
In (b1), the first area 610 of the multi-view depth map is set to a depth of '0' and the second area 620 to a depth of '4'.
In (b2), the first area 610 is set to a depth of '0' and the second area 620 to a depth of '5'.
In (b3), the first area 610 is set to a depth of '0' and the second area 620 to a depth of '6'.
In (b4), the first area 610 is set to a depth of '0' and the second area 620 to a depth of '7'.
In each case, the corresponding 3D image is generated by shifting the pixels of the second area 620 according to its depth value.
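The pixel-shift operation illustrated in (b1)-(c4) can be sketched for a single pixel row. Shifting each pixel left by its depth value and letting later writes win is a simplification; the interpolator of FIG. 15 would fill the holes this leaves.

```python
# Minimal sketch of the parallax operation: each pixel of the 2D row is
# shifted left by its depth value, so nearer (larger-depth) pixels move
# farther and create binocular disparity. Hole filling and the per-eye
# shift direction are simplifications.

def parallax_shift(row, depth_row):
    """Shift each pixel horizontally by its depth; later writes win."""
    out = row[:]                      # start from the source row (fills holes)
    for x, (pix, d) in enumerate(zip(row, depth_row)):
        nx = x - d                    # shift left by the depth value
        if 0 <= nx < len(out):
            out[nx] = pix
    return out
```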
FIG. 17 is a diagram illustrating a tilt control operation of a tilt controller according to an embodiment of the present invention.
Referring to FIG. 17, the tilt controller 133 generates the second image by tilting the current image frame using the second multi-view information.
In (a), the rectangular current image frame before tilt control is shown.
In (b), the image frame after tilt control, converted into a rhombus shape, is shown.
Therefore, the tilt controller 133 can generate the second image for reproducing a 3D image through a tilt operation on the current image frame.
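The rectangle-to-rhombus conversion can be sketched as a per-row horizontal shift. The slope value and direction stand in for the second multi-view information; zero-padding the exposed edge and the rounding rule are assumptions.

```python
# Sketch of the tilt control of FIG. 17: each pixel line is shifted sideways
# in proportion to its row index, turning the rectangular frame into a
# rhombus. direction +1 tilts right, -1 tilts left.

def tilt_frame(frame, slope, direction=1):
    """Shift row r by round(slope * r) pixels, zero-padding the exposed edge."""
    tilted = []
    for r, row in enumerate(frame):
        shift = round(slope * r) * direction
        if shift >= 0:
            tilted.append([0] * shift + row[:len(row) - shift])
        else:
            tilted.append(row[-shift:] + [0] * (-shift))
    return tilted
```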
FIG. 18 is a diagram schematically illustrating first and second images generated according to an embodiment of the present invention.
Referring to FIG. 18, the first image and the second image generated according to the user's viewpoint are schematically shown.
The subject reproduced through the screen 141 can take various shapes depending on the user's viewpoint: its apparent shape differs between viewing from the front and viewing from the left or right. Accordingly, in the multi-view image generating apparatus of the present invention, the depth (or slope) d(x) of the image takes a larger value the farther the user's viewpoint moves to the left or right of center.
FIG. 19 is a diagram illustrating operations of the first distance controller and the second distance controller according to an exemplary embodiment of the present invention.
Referring to FIG. 19, the first distance controller 134 and the second distance controller 135 control the distance between the first image and the second image.
The first distance controller 134 moves the first image in either the left or the right direction according to the first distance control signal.
In addition, the second distance controller 135 moves the second image in either the left or the right direction according to the second distance control signal.
As a result, the multi-view 3D image conversion apparatus 100 can increase or decrease the distance between the first image and the second image, and thereby adjust the stereoscopic effect of the 3D image.
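The opposing shifts applied by the two distance controllers can be sketched for single pixel rows; whole-pixel shifts with zero padding and the direction convention (first image left, second image right to increase the distance) are assumptions.

```python
# Sketch of the distance control of FIG. 19: the two images are moved in
# opposite horizontal directions to widen (increase=True) or narrow the
# gap between them, strengthening or weakening the perceived depth.

def shift_row(row, offset):
    """Shift a pixel row by offset (positive = right), zero-padding the edge."""
    if offset >= 0:
        return [0] * offset + row[:len(row) - offset]
    return row[-offset:] + [0] * (-offset)

def adjust_distance(first_row, second_row, increase=True, step=1):
    """Move the two images apart (increase) or together by step pixels."""
    sign = 1 if increase else -1
    return shift_row(first_row, -sign * step), shift_row(second_row, sign * step)
```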
The multi-view 3D image conversion apparatus of the present invention can be implemented in the form of a chip (for example, a single chip) and applied to next-generation display devices such as televisions (TVs), monitors, notebooks, netbooks, digital video disc (DVD) players, portable multimedia players (PMPs), mobile phones, tablets, and navigation devices. The present invention may also be applied to the generation of virtual input devices and realistic experiential images, for example hologram displays. In addition, the multi-view 3D image conversion apparatus of the present invention may be embedded in each of these next-generation display devices.
100: multi-view three-dimensional image conversion device
110: sensor unit 120: map generator
130: multi-view image generation unit 140: display unit
121: preprocessor 122: entire image processor
123: Local image processor 124: Depth map generator
125: depth map correction unit 131: multi-view information selection unit
132: parallax processing unit 133: tilt control unit
134: first distance controller 135: second distance controller
210: brightness correction unit 220: depth generation unit
211: table selector 212: brightness value converter
213: brightness information extraction unit 214: correction brightness value generation unit
310: map computing unit
510: buffer 520: depth map converter
530: parallax operation unit 540: 3D image interpolation unit
550: circular buffer
Claims (15)
A sensor unit configured to detect a user's viewpoint based on the screen of the display unit;
A map generator configured to receive a current image frame and a previous image frame and to generate a depth map for the movement of each pixel constituting the current image frame; and
A multi-view image generator configured to generate the first image by moving each pixel of the current image frame based on the depth map and the user's viewpoint, and to generate the second image through tilt control of the current image frame based on the user's viewpoint,
wherein the first image and the second image are images for forming a three-dimensional image.
The map generation unit
A preprocessor configured to select, using the previous image frame, one base map from among a plurality of base maps in which perspective information is set, and one brightness correction table from among a plurality of brightness correction tables in which brightness values are set;
An entire image processor configured to generate a full depth map that sets the perspective of the current image frame according to the selected base map;
A local image processor configured to generate a local depth map by compensating the current image frame with the selected brightness correction table; and
A depth map generator configured to generate the depth map by calculating the full depth map with the local depth map.
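The claim above says only that the full depth map and the local depth map are "calculated" together; how the two are fused is not specified. A minimal sketch, assuming a weighted sum and an 8-bit depth range (both assumptions, not taken from the patent), might look like:

```python
import numpy as np

def combine_depth_maps(global_map, local_map, w_global=0.5):
    """Fuse a global (perspective) depth map with a local (brightness-based)
    depth map by weighted sum. The weighting scheme is an assumption: the
    claim only states that the two maps are calculated together."""
    fused = w_global * global_map + (1.0 - w_global) * local_map
    # Clamp to a valid 8-bit depth range.
    return np.clip(fused, 0, 255).astype(np.uint8)

# Example: a vertical perspective gradient fused with a flat brightness map.
h, w = 4, 4
global_map = np.tile(np.linspace(0, 255, h)[:, None], (1, w))
local_map = np.full((h, w), 128.0)
depth = combine_depth_maps(global_map, local_map)
```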
The multiview image generating unit
A multi-view information selector configured to select first multi-view information and second multi-view information corresponding to the user's viewpoint;
A parallax processor configured to multi-view-process the depth map according to the first multi-view information, and to generate a first image for reproducing a three-dimensional image from the current image frame by calculating the multi-view-processed depth map with the current image frame; and
A tilt controller configured to generate a second image for reproducing a three-dimensional image by performing a tilt operation on the current image frame using the second multi-view information.
Wherein the first multi-view information includes information about a parameter and a depth direction for multi-viewing the depth map, and
the second multi-view information includes information about a tilt value and a tilt direction for multi-viewing the current image frame.
The multiview image generating unit
A first distance controller configured to shift the first image in one of the left and right directions according to a first distance control signal; and
A second distance controller configured to shift the second image in one of the right and left directions according to a second distance control signal,
wherein the first distance control signal and the second distance control signal are signals for one of increasing and decreasing the distance between the first image and the second image.
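The two distance controllers above shift the left and right views in opposite horizontal directions to widen or narrow their separation. A rough sketch follows; zero-filling the vacated edge columns is an assumption, since the patent does not specify edge handling:

```python
import numpy as np

def shift_horizontal(img, dx):
    """Shift an image horizontally by dx pixels (positive = right),
    filling the vacated columns with zeros (black)."""
    out = np.zeros_like(img)
    if dx > 0:
        out[:, dx:] = img[:, :-dx]
    elif dx < 0:
        out[:, :dx] = img[:, -dx:]
    else:
        out[:] = img
    return out

def adjust_separation(left, right, delta):
    """Move the left image left and the right image right by delta pixels,
    increasing their horizontal separation (negative delta decreases it)."""
    return shift_horizontal(left, -delta), shift_horizontal(right, delta)

left = np.arange(16, dtype=np.uint8).reshape(4, 4)
right = left.copy()
new_left, new_right = adjust_separation(left, right, 1)
```

Increasing the separation increases the binocular disparity and thus the perceived depth of the stereoscopic pair.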
The parallax processing unit
A buffer for storing the current image frame;
A depth map converter configured to receive the depth map and the first multi-view information and to multi-view-process the depth map according to the first multi-view information;
A parallax calculator configured to generate the first image by calculating the current image frame output from the buffer with the multi-view-processed depth map;
A 3D image interpolator interpolating the first image; And
And a circular buffer for sequentially selecting and outputting the interpolated first image.
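The parallax calculator's "calculating" of the frame with the multi-view depth map amounts, in essence, to a per-pixel horizontal shift proportional to depth, followed by interpolation of the holes this creates. A simplified single-channel sketch; the linear disparity gain and nearest-neighbour hole fill are assumptions, and the circular buffer stage is omitted:

```python
import numpy as np

def parallax_shift(frame, depth_map, gain=0.1):
    """Generate a new view by shifting each pixel horizontally in
    proportion to its depth value (disparity = gain * depth)."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    for y in range(h):
        for x in range(w):
            dx = int(round(gain * depth_map[y, x]))
            nx = x + dx
            if 0 <= nx < w:
                out[y, nx] = frame[y, x]
    # Crude hole interpolation: copy the left neighbour into empty pixels
    # (a stand-in for the claimed 3D image interpolator).
    for y in range(h):
        for x in range(1, w):
            if out[y, x] == 0 and out[y, x - 1] != 0:
                out[y, x] = out[y, x - 1]
    return out

# Example: a uniform depth of 10 with gain 0.1 shifts every pixel right by 1.
frame = np.array([[1, 2, 3, 4]], dtype=np.uint8)
depth = np.full((1, 4), 10)
view = parallax_shift(frame, depth)
```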
The tilt control unit
Wherein the tilt controller receives the second multi-view information and multi-views the current image frame according to the second multi-view information.
Wherein, when the first image is a left image, the second image is a right image, and when the first image is a right image, the second image is a left image.
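The tilt controller's second view can be sketched as a row-dependent horizontal shear of the current frame, where the per-row shear plays the role of the claimed tilt value and its sign the tilt direction. The shear model itself is an assumption; a true perspective tilt would also rescale rows:

```python
import numpy as np

def tilt_view(frame, tilt=0.5, direction=1):
    """Approximate a tilt operation as a horizontal shear: each row is
    shifted by an offset that grows linearly from top to bottom, so the
    frame appears rotated about a horizontal axis. 'tilt' (pixels per
    row) and 'direction' (+1 right, -1 left) stand in for the claimed
    tilt value and tilt direction."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    for y in range(h):
        dx = direction * int(round(tilt * y))
        for x in range(w):
            nx = x + dx
            if 0 <= nx < w:
                out[y, nx] = frame[y, x]
    return out

# Example: a shear of one pixel per row applied to a 3x3 frame.
frame = np.arange(1, 10, dtype=np.uint8).reshape(3, 3)
tilted = tilt_view(frame, tilt=1.0, direction=1)
```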
Generating a depth map for movement of each pixel constituting the current image frame using the previous image frame and the current image frame;
Selecting first multiview information corresponding to the user's viewpoint and converting the depth map into a multiview depth map based on the first multiview information;
Generating a first image by shifting each pixel of the current image frame using the multi-view depth map;
Selecting second multi-view information corresponding to the user's viewpoint and generating a second image through tilt control of the current image frame based on the second multi-view information; and
Synchronizing and outputting the first image and the second image.
Wherein the first multi-view information includes information about a parameter and a depth direction for multi-viewing the depth map, and
the second multi-view information includes information about a tilt value and a tilt direction for multi-viewing the current image frame.
A multi-view three-dimensional image conversion method, wherein the current image frame and the previous image frame are two-dimensional image frames.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020110081365A KR101239149B1 (en) | 2011-08-16 | 2011-08-16 | Apparatus and method for converting multiview 3d image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020110081365A KR101239149B1 (en) | 2011-08-16 | 2011-08-16 | Apparatus and method for converting multiview 3d image |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20130019295A KR20130019295A (en) | 2013-02-26 |
KR101239149B1 true KR101239149B1 (en) | 2013-03-11 |
Family
ID=47897484
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020110081365A KR101239149B1 (en) | 2011-08-16 | 2011-08-16 | Apparatus and method for converting multiview 3d image |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101239149B1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100503276B1 (en) | 2003-08-12 | 2005-07-22 | 최명렬 | Apparatus for converting 2D image signal into 3D image signal |
KR20100034789A (en) * | 2008-09-25 | 2010-04-02 | 삼성전자주식회사 | Method and apparatus for generating depth map for conversion two dimensional image to three dimensional image |
2011-08-16: KR application KR1020110081365A, patent KR101239149B1/en, status: not active (IP right cessation)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100503276B1 (en) | 2003-08-12 | 2005-07-22 | 최명렬 | Apparatus for converting 2D image signal into 3D image signal |
KR20100034789A (en) * | 2008-09-25 | 2010-04-02 | 삼성전자주식회사 | Method and apparatus for generating depth map for conversion two dimensional image to three dimensional image |
Also Published As
Publication number | Publication date |
---|---|
KR20130019295A (en) | 2013-02-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8866884B2 (en) | Image processing apparatus, image processing method and program | |
JP5556394B2 (en) | Stereoscopic image display system, parallax conversion device, parallax conversion method, and program | |
US9401039B2 (en) | Image processing device, image processing method, program, and integrated circuit | |
JP6147275B2 (en) | Stereoscopic image processing apparatus, stereoscopic image processing method, and program | |
US10115207B2 (en) | Stereoscopic image processing method and apparatus thereof | |
JP5879713B2 (en) | Image processing apparatus, image processing method, and program | |
JP6370708B2 (en) | Generation of a depth map for an input image using an exemplary approximate depth map associated with an exemplary similar image | |
KR101690297B1 (en) | Image converting device and three dimensional image display device including the same | |
WO2012176431A1 (en) | Multi-viewpoint image generation device and multi-viewpoint image generation method | |
JP5068391B2 (en) | Image processing device | |
US20120163701A1 (en) | Image processing device, image processing method, and program | |
US20140333739A1 (en) | 3d image display device and method | |
JP2013005259A (en) | Image processing apparatus, image processing method, and program | |
JP5002702B2 (en) | Parallax image generation device, stereoscopic video display device, and parallax image generation method | |
KR20180030881A (en) | Virtual / augmented reality system with dynamic local resolution | |
US20140079313A1 (en) | Method and apparatus for adjusting image depth | |
JP5627498B2 (en) | Stereo image generating apparatus and method | |
KR101239149B1 (en) | Apparatus and method for converting multiview 3d image | |
JP2014072809A (en) | Image generation apparatus, image generation method, and program for the image generation apparatus | |
WO2013080898A2 (en) | Method for generating image for virtual view of scene | |
KR101165728B1 (en) | Apparatus and method for converting three dimension image | |
KR20120087867A (en) | Method for converting 2 dimensional video image into stereoscopic video | |
KR20120072786A (en) | Method for converting 2 dimensional video image into stereoscopic video | |
KR20130078990A (en) | Apparatus for convergence in 3d photographing apparatus | |
JP2012060246A (en) | Image processor and integrated circuit device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant | ||
LAPS | Lapse due to unpaid annual fee |