CN103916652B - Disparity vector generation method and device - Google Patents
Disparity vector generation method and device
- Publication number
- CN103916652B (publication) · CN201310007164.2A / CN201310007164A (application)
- Authority
- CN
- China
- Prior art keywords
- depth
- image
- viewpoint
- block
- disparity vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 239000013598 vector Substances 0.000 title claims abstract description 341
- 238000000034 method Methods 0.000 title claims abstract description 94
- 230000000007 visual effect Effects 0.000 claims abstract description 66
- 230000000694 effects Effects 0.000 abstract description 4
- 230000006870 function Effects 0.000 description 12
- 238000012545 processing Methods 0.000 description 7
- 230000015572 biosynthetic process Effects 0.000 description 5
- 238000003786 synthesis reaction Methods 0.000 description 5
- 238000006243 chemical reaction Methods 0.000 description 4
- 230000001419 dependent effect Effects 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000000205 computational method Methods 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000006073 displacement reaction Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000002194 synthesizing effect Effects 0.000 description 1
Landscapes
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention discloses a disparity vector generation method and device. The method includes: obtaining a first depth value of a basic depth block according to the depth pixel values of the basic depth block in a first viewpoint depth image; and generating, according to the first depth value, a first disparity vector of a basic image block in a second viewpoint image, where the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image. The invention achieves the effects of reduced computational complexity of disparity vector derivation, a small number of projections and a small storage space.
Description
Technical field
The present invention relates to the field of communications, and in particular to a disparity vector generation method and device.
Background technology
Three-dimensional video (3D video) comprises multiple channels (usually 2 or 3) of texture image sequences (each frame of which is an image representing the color or brightness of the scene) and depth image sequences (each frame of which is an image representing the distance between the scene and the shooting camera). Usually one texture image sequence corresponds to one depth image sequence, which is referred to as the Multi-View Video Plus Depth (MVD) format; sometimes a texture image sequence may have no corresponding depth image sequence, for example only one of two texture image sequences has a corresponding depth image sequence, which is referred to as unpaired MVD. Three-dimensional video generates virtual viewpoint video sequences by means of view synthesis technology. In addition, the resolutions of the texture images of the multiple channels are generally equal, and the resolutions of the depth images of the multiple channels are also generally equal; the resolution of a texture image and of its depth image may be equal, or the resolution of the depth image may be lower than that of the texture image, for example the horizontal and vertical resolutions of the depth image are each half of the corresponding values of the texture image, which is also described as the resolution (total number of pixels) of the texture image being 4 times the resolution of the depth image.
3D video coding can exploit the correlation between viewpoints: for example, the texture images of two viewpoints have a certain similarity (including similarity of pixel values, motion information, etc.), and the depth images of two viewpoints also have a certain similarity. However, parallax exists between viewpoints, i.e. there is a positional offset between corresponding points (correspondences) of the viewpoints, and the corresponding points between two viewpoints are usually indicated by a disparity vector. Referring to Fig. 1, which is a schematic diagram of the relation between a disparity vector and its starting position according to the related art: for example, when the pixel region A whose center coordinate in the viewpoint-1 image is P1(150, 100) (a pixel region being, e.g., a single pixel or a rectangular block of pixels) corresponds to the pixel region B whose center coordinate in the viewpoint-2 image is P2(180, 95), then for A and B the disparity vector DV1 pointing from P1 in the viewpoint-1 image to P2 in the viewpoint-2 image is (30, -5), where 30 is the horizontal component and -5 is the vertical component, i.e. P2 = P1 + DV1 and P1 = P2 - DV1; the vector direction can be briefly described as "pointing from the viewpoint-1 image to the viewpoint-2 image", or "from viewpoint 1 to viewpoint 2". The disparity vector DV2 pointing from P2 in the viewpoint-2 image to P1 in the viewpoint-1 image is (-30, 5), i.e. P1 = P2 + DV2 and P2 = P1 - DV2; accordingly DV2 = -DV1. When the vector direction is not distinguished, DV1 and DV2 are both the disparity vector between viewpoint 1 and viewpoint 2, only with opposite directions. In particular, when the two viewpoint images are in a 1D parallel camera arrangement (i.e. the optical axes are parallel, the focal lengths are equal, the optical centers lie on the same horizontal line and the image resolutions are identical), the vertical component of the disparity vector of any pixel between the two viewpoint images is 0 (i.e. the vertical disparity is 0); in this case only the horizontal component of the disparity vector needs to be indicated. This horizontal disparity, i.e. the horizontal component of the parallax, is also simply called the disparity, and coordinate operations such as P2 = P1 + DV1 above degenerate into scalar operations in the horizontal direction.
A certain geometric relationship exists between parallax and depth. The parallax of a pixel region A between the viewpoint-1 image and the viewpoint-2 image can be converted from the depth value of that pixel region according to the camera parameters of the two viewpoints. For example, when the two viewpoint images are in the so-called parallel camera arrangement, the vertical parallax is 0, and the horizontal parallax value DV of each image region (e.g. one pixel) can be obtained by DV = (f × L/Z) + du, where f is the focal length of the camera corresponding to the viewpoint-1 image, L is the baseline distance between viewpoint 1 and viewpoint 2, Z is the distance from the object point indicated by the depth value D of the image region to the corresponding camera, and du is the horizontal offset between the principal points of the viewpoint-1 and viewpoint-2 images. In this case, image regions with the same depth value have the same horizontal parallax value, i.e. the horizontal parallax value is independent of the position of the image region; a lookup table mapping depth values to horizontal parallax values can therefore be established first, and for each region the parallax corresponding to its position and depth is used to find the corresponding position of that region in the other viewpoint.
It should be noted that the parallax of the same object between the images of two different viewpoints increases as the baseline distance of the two viewpoints increases; therefore, the two viewpoints to which a disparity vector corresponds must at least be specified before the magnitude of the parallax of that disparity vector is well defined.
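As an illustration of the lookup-table approach described above, the following Python sketch maps every 8-bit depth value to a horizontal disparity under the parallel camera arrangement; the depth-to-distance mapping via Znear and Zfar and the numeric camera parameters are assumptions made for the example, not values given in this document.

```python
# A minimal sketch of the depth-to-disparity lookup table, assuming an 8-bit depth value D
# and the common inverse-depth convention 1/Z = (D/255)*(1/Z_near - 1/Z_far) + 1/Z_far;
# the camera parameters below are illustrative only.

def build_disparity_lut(f, L, du, z_near, z_far):
    """Map every 8-bit depth value D to a horizontal disparity DV = f*L/Z + du (in pixels)."""
    lut = []
    for d in range(256):
        inv_z = (d / 255.0) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
        lut.append(f * L * inv_z + du)
    return lut

# Every image region with depth value 128 shares the same horizontal disparity:
lut = build_disparity_lut(f=1000.0, L=0.05, du=0.0, z_near=1.0, z_far=100.0)
print(lut[128])
```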
When the images of the two viewpoints do not satisfy the parallel camera arrangement, the projected position of each image region in the other viewpoint can be obtained from the position and depth (and the camera parameters) of that image region through the more complex three-dimensional projection (3D warping) equations, and the parallax is thus obtained. In this case the horizontal and vertical parallaxes depend on the position of the image region where the parallax is located. The disparity computation under the parallel camera arrangement described above is a commonly used simple special case of this situation.
Some coding tools use the reconstructed depth image of the current coding viewpoint to help improve the coding efficiency of the texture image of the current coding viewpoint, such as the view synthesis prediction based on backward warping (View Synthesis Prediction, VSP) and the depth-based motion vector prediction (Depth-based Motion Vector Prediction, DMVP) in the 3D-ATM platform. Both need to obtain a disparity vector for the current coding texture block from the depth pixel values of the reconstructed depth image of the current coding viewpoint (for example, for a 4x4 block, the depth pixel value of its center point is converted into the disparity vector of this block). Therefore, if the texture image of the current coding viewpoint is coded before the depth image, i.e. the depth image has not yet been reconstructed when the texture image is coded, the above two coding tools cannot obtain the disparity vector and cannot work normally, which degrades the coding performance of the texture image of the current coding viewpoint. For these coding tools to remain able to work when the texture image is coded before the depth image, another disparity vector generation method that does not depend on the depth of the current viewpoint is needed.
In coding, disparity vector generation methods include the following two kinds:
1) If the depth image of the current viewpoint (the viewpoint being encoded/decoded) is available, the depth value of the current-viewpoint depth image corresponding to a target block (typically a texture block) is converted into the disparity vector of the target block. This method has the following disadvantage: if the texture image of the current coding viewpoint is coded before the depth image, i.e. the depth image has not yet been reconstructed when the texture image is coded, this method cannot obtain the disparity vector;
2) If the depth image of the current viewpoint is not available, S. Shimizu et al. proposed in JCT3V-B0103 another disparity vector derivation method: the depth image of the current viewpoint is synthesized by forward warping from the depth image of another (already encoded/decoded) viewpoint and used in place of the depth image of the current coding viewpoint; the disparity vector is then converted from the synthesized depth image. This method has the following disadvantages: all depth pixels are projected, so the number of projections is large and the complexity is very high, and a large data storage space is also needed to store the synthesized depth image; in addition, the synthesized depth image still has to be converted into the disparity vector of the target region.
For the problems in the related art that the disparity vector derivation method has high complexity, occupies a large data storage space and requires image conversion processing, no effective solution has yet been proposed.
The content of the invention
The present invention provides a disparity vector generation method and device, at least to solve the above problems.
According to one aspect of the present invention, a disparity vector generation method is provided, including: obtaining a first depth value of a basic depth block according to the depth pixel values of the basic depth block in a first viewpoint depth image; and generating a first disparity vector of a basic image block in a second viewpoint image according to the first depth value, where the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image.
Preferably, obtaining the first depth value of the basic depth block according to the depth pixel values of the basic depth block in the first viewpoint depth image includes one of the following: taking the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value; taking a depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, where the selected depth value is the maximum, minimum or median of the depth values of the depth pixels at the multiple predetermined positions; or taking the weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
Preferably, generating the first disparity vector of the basic image block in the second viewpoint image according to the first depth value includes: converting the first depth value into a second disparity vector between the first viewpoint image and the second viewpoint image, and obtaining the corresponding position of the basic depth block in the second viewpoint image, where, when the second viewpoint image is the second viewpoint texture image, the first viewpoint image is the first viewpoint texture image, and when the second viewpoint image is the second viewpoint depth image, the first viewpoint image is the first viewpoint depth image; and taking the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, where the basic image block is located at the corresponding position, and the predetermined real number is a constant or a scaling factor whose absolute value is the ratio of the first resolution of the first viewpoint depth image to the second resolution of the second viewpoint image, or the reciprocal of that ratio.
Preferably, after generating the first disparity vector of the basic image block in the second viewpoint image according to the first depth value, the method further includes: generating a third disparity vector of a target image block according to the first disparity vector, where the target image block contains multiple basic image blocks.
Preferably, generating the third disparity vector of the target image block according to the first disparity vector includes: determining the first disparity vectors of the basic image blocks at one or more predetermined positions in the target image block; and taking the value of one first disparity vector selected from all the determined first disparity vectors as the third disparity vector, or taking the weighted average of all the determined first disparity vectors as the third disparity vector.
According to another aspect of the present invention, a disparity vector generating device is provided, including: an acquisition module, configured to obtain a first depth value of a basic depth block according to the depth pixel values of the basic depth block in a first viewpoint depth image; and a first generation module, configured to generate a first disparity vector of a basic image block in a second viewpoint image according to the first depth value, where the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image.
Preferably, the acquisition module includes one of the following: a first setting unit, configured to take the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value; a selection unit, configured to take a depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, where the selected depth value is the maximum, minimum or median of the depth values of the depth pixels at the multiple predetermined positions; or a second setting unit, configured to take the weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
Preferably, the first generation module includes: a converting unit, configured to convert the first depth value into a second disparity vector between the first viewpoint image and the second viewpoint image and to obtain the corresponding position of the basic depth block in the second viewpoint image, where, when the second viewpoint image is the second viewpoint texture image, the first viewpoint image is the first viewpoint texture image, and when the second viewpoint image is the second viewpoint depth image, the first viewpoint image is the first viewpoint depth image; and a third setting unit, configured to take the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, where the basic image block is located at the corresponding position, and the predetermined real number is a constant or a scaling factor whose absolute value is the ratio of the first resolution of the first viewpoint depth image to the second resolution of the second viewpoint image, or the reciprocal of that ratio.
Preferably, the device further includes: a second generation module, configured to generate a third disparity vector of a target image block according to the first disparity vector, where the target image block contains multiple basic image blocks.
Preferably, the second generation module includes: a determining unit, configured to determine the first disparity vectors of the basic image blocks at one or more predetermined positions in the target image block; and a fourth setting unit, configured to take the value of one first disparity vector selected from all the determined first disparity vectors as the third disparity vector, or to take the weighted average of all the determined first disparity vectors as the third disparity vector.
With the present invention, the disparity vector of each basic image block of the current coding viewpoint is obtained from a basic depth block of an already coded viewpoint, and the disparity vector of each target image block is generated from the disparity vectors of the basic image blocks. This solves the problems in the related art that the disparity vector derivation method has high complexity, occupies a large data storage space and requires image conversion processing, thereby achieving the effects of reduced computational complexity of disparity vector derivation, a small number of projections and a small storage space.
Brief description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and form a part of this application. The schematic embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a schematic diagram of the relation between a disparity vector and its starting position according to the related art;
Fig. 2 is a flowchart of a disparity vector generation method according to an embodiment of the present invention;
Fig. 3 is a structural block diagram of a disparity vector generating device according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of a disparity vector generating device according to a preferred embodiment of the present invention;
Fig. 5 is a structural diagram of a disparity vector generating device according to a preferred embodiment of the present invention;
Fig. 6 is a structural diagram of a disparity vector generating device according to another preferred embodiment of the present invention.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that, provided there is no conflict, the embodiments of this application and the features in the embodiments may be combined with one another.
Fig. 2 is a flowchart of a disparity vector generation method according to an embodiment of the present invention. As shown in Fig. 2, the method mainly includes the following steps (step S202 to step S204):
Step S202: obtain a first depth value of a basic depth block according to the depth pixel values of the basic depth block in a first viewpoint depth image;
Step S204: generate a first disparity vector of a basic image block in a second viewpoint image according to the first depth value, where the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image.
In this embodiment, step S202 may be implemented in one of the following ways: (1) take the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value; (2) take a depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, where the selected depth value is the maximum, minimum or median of the depth values of the depth pixels at the multiple predetermined positions; (3) take the weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
In this embodiment, step S204 may be implemented as follows: convert the first depth value into a second disparity vector between the first viewpoint image and the second viewpoint image, and obtain the corresponding position of the basic depth block in the second viewpoint image, where, when the second viewpoint image is the second viewpoint texture image, the first viewpoint image is the first viewpoint texture image, and when the second viewpoint image is the second viewpoint depth image, the first viewpoint image is the first viewpoint depth image; then take the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, where the basic image block is located at the corresponding position, and the predetermined real number is a constant or a scaling factor whose absolute value is the ratio of the first resolution of the first viewpoint depth image to the second resolution of the second viewpoint image, or the reciprocal of that ratio.
In a preferred embodiment of the present invention, after step S204 is performed, a third disparity vector of a target image block may further be generated according to the first disparity vector, where the target image block contains multiple basic image blocks.
Preferably, the third disparity vector of the target image block may be generated according to the first disparity vector as follows: first determine the first disparity vectors of the basic image blocks at one or more predetermined positions in the target image block; then take the value of one first disparity vector selected from all the determined first disparity vectors as the third disparity vector, or take the weighted average of all the determined first disparity vectors as the third disparity vector.
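The following minimal Python sketch illustrates steps S202 and S204; the function names, the choice of predetermined positions (block corners and center) and the caller-supplied depth-to-disparity conversion are assumptions made for illustration, not part of the claimed method.

```python
# A minimal sketch of steps S202/S204, assuming a caller-supplied depth-to-disparity
# conversion (e.g. the lookup table above) and a parallel camera arrangement, so the
# vertical component is 0; block layout and scaling are simplified for illustration.

def first_depth_value(depth_block, mode="center"):
    """Step S202: derive the first depth value from the depth pixels of a basic depth block."""
    h, w = len(depth_block), len(depth_block[0])
    corners = [depth_block[0][0], depth_block[0][w - 1],
               depth_block[h - 1][0], depth_block[h - 1][w - 1]]
    if mode == "center":              # way (1): one predetermined position
        return depth_block[h // 2][w // 2]
    if mode == "max":                 # way (2): maximum (or minimum/median) of several positions
        return max(corners)
    if mode == "mean":                # way (3): weighted average (equal weights here)
        return sum(corners) / len(corners)
    raise ValueError(mode)

def first_disparity_vector(depth_block, depth_to_disparity, scale=1.0):
    """Step S204: convert the first depth value into the first disparity vector."""
    d = first_depth_value(depth_block, mode="max")
    dv2x = depth_to_disparity(d)      # second disparity vector (horizontal component)
    return (scale * dv2x, 0.0)        # first disparity vector = second DV, or second DV * real number
```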
To facilitate understanding of the disparity vector generation method provided by the above embodiment, an example with detailed parameters is given here for illustration.
For example, in practical applications, the disparity vector generation method provided by the above embodiment may be implemented as follows.
The process of obtaining, from a basic depth block of size E × F in the viewpoint-1 depth image (i.e. the above first viewpoint depth image), the disparity vector of a basic image block of size M × N in the viewpoint-2 image (i.e. the above second viewpoint image), where E × F > 1 and M × N > 1, may include the following steps:
1. Obtain the depth value D of the basic depth block from the depth values of X (1 ≤ X ≤ E × F) depth pixels of the basic depth block;
2. Convert the depth value D into the disparity vector DV1 between the viewpoint-1 image and the viewpoint-2 image (i.e. the above second disparity vector), and obtain the corresponding position Pos2 of the basic depth block in the viewpoint-2 image;
3. The disparity vector DV2 of the basic image block (i.e. the above first disparity vector) is DV1, or the product of DV1 and a real number, where the basic image block is located at the corresponding position Pos2 in the viewpoint-2 image, and the real number may be a constant such as -1, 1/2, -1/2, 2 or -2, or a scaling factor whose absolute value is the ratio of the viewpoint-1 depth image resolution to the viewpoint-2 image resolution, or the reciprocal of that ratio.
It should be noted that in this example one of the following relations holds between E, F and M, N:
Relation 1: E = M × S1, F = N × S2, where S1 and S2 are constants, e.g. S1 = S2 = 1, S1 = S2 = 2, S1 = S2 = 1/2, or S1 = 1, S2 = 2;
Relation 2: E and F are obtained by multiplying M and N respectively by the ratio of the viewpoint-1 depth image resolution to the viewpoint-2 image resolution.
The process of obtaining the depth value D of the basic depth block from the depth values of X (1 ≤ X ≤ E × F) depth pixels of the basic depth block may use one of the following methods:
Method 1: take the depth value of the depth pixel at one fixed position in the basic depth block as the depth value D;
Method 2: select one value from the depth values of the depth pixels at multiple fixed positions in the basic depth block as the depth value D, where the selection method includes taking the maximum, minimum or median of the multiple depth values;
Method 3: take the weighted average of the depth values of the depth pixels at multiple fixed positions in the basic depth block as the depth value D.
Of course, in practical applications, after the disparity vector of an M × N basic image block in the viewpoint-2 image has been obtained, the above manner may be repeated to further obtain, from basic depth blocks of size E × F in the viewpoint-1 depth image, the disparity vector of a target image block of size J × K in the viewpoint-2 image (i.e. the above third disparity vector), where the target image block contains Q (Q ≥ 2) basic image blocks of size M × N. Specifically, the following processing may then be performed: repeat steps 1 to 3 for one or more basic depth blocks to determine the disparity vectors of Q1 (1 ≤ Q1 ≤ Q) of the Q basic image blocks contained in the target image block; after the Q1 disparity vectors have been obtained, the disparity vector of the target image block is assigned either one of the Q1 disparity vectors or the weighted average of the Q1 disparity vectors.
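The aggregation into a target image block can be illustrated by the short sketch below; the equal default weights and the "take the first vector" fallback are illustrative choices, not requirements of the method.

```python
# A minimal sketch of deriving the third disparity vector of a target image block from the
# first disparity vectors of the Q1 basic image blocks it contains; equal weights and the
# "take the first vector" fallback are illustrative choices.

def target_block_disparity(basic_dvs, weights=None, use_average=True):
    """basic_dvs: (dvx, dvy) first disparity vectors of the basic image blocks at the
    predetermined positions inside the target image block (Q1 >= 1 entries)."""
    if not use_average:
        return basic_dvs[0]                      # assign one of the Q1 disparity vectors
    if weights is None:
        weights = [1.0 / len(basic_dvs)] * len(basic_dvs)
    dvx = sum(w * dv[0] for w, dv in zip(weights, basic_dvs))
    dvy = sum(w * dv[1] for w, dv in zip(weights, basic_dvs))
    return (dvx, dvy)                            # weighted average of the Q1 disparity vectors

# Example: an 8x8 target block containing four 4x4 basic image blocks.
print(target_block_disparity([(12.0, 0.0), (10.0, 0.0), (14.0, 0.0), (12.0, 0.0)]))
```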
Fig. 3 is a structural block diagram of a disparity vector generating device according to an embodiment of the present invention. The device is used to implement the disparity vector generation method provided by the above embodiment. As shown in Fig. 3, the device includes an acquisition module 10 and a first generation module 20. The acquisition module 10 is configured to obtain a first depth value of a basic depth block according to the depth pixel values of the basic depth block in a first viewpoint depth image; the first generation module 20 is connected to the acquisition module 10 and is configured to generate a first disparity vector of a basic image block in a second viewpoint image according to the first depth value, where the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image.
Fig. 4 is a structural block diagram of a disparity vector generating device according to a preferred embodiment of the present invention. As shown in Fig. 4, in the disparity vector generating device provided by this preferred embodiment, the acquisition module 10 may include one of the following: a first setting unit 12, configured to take the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value; a selection unit 14, configured to take a depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, where the selected depth value is the maximum, minimum or median of the depth values of the depth pixels at the multiple predetermined positions; and a second setting unit 16, configured to take the weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
In this preferred embodiment, the first generation module 20 includes: a converting unit 22, configured to convert the first depth value into a second disparity vector between the first viewpoint image and the second viewpoint image and to obtain the corresponding position of the basic depth block in the second viewpoint image; and a third setting unit 24, connected to the converting unit 22 and configured to take the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, where the basic image block is located at the corresponding position, and the predetermined real number is a constant or a scaling factor whose absolute value is the ratio of the first resolution of the first viewpoint depth image to the second resolution of the second viewpoint image, or the reciprocal of that ratio.
In this preferred embodiment, the disparity vector generating device may further include a second generation module 30, configured to generate a third disparity vector of a target image block according to the first disparity vector, where the target image block contains multiple basic image blocks. Preferably, the second generation module 30 may include: a determining unit 32, configured to determine the first disparity vectors of the basic image blocks at one or more predetermined positions in the target image block; and a fourth setting unit 34, connected to the determining unit 32 and configured to take the value of one first disparity vector selected from all the determined first disparity vectors as the third disparity vector, or to take the weighted average of all the determined first disparity vectors as the third disparity vector.
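For illustration only, the following sketch arranges the modules of Figs. 3 and 4 as Python classes; the class and method names are hypothetical, and the depth-to-disparity conversion is assumed to be supplied externally.

```python
# An illustrative arrangement of the modules of Figs. 3 and 4 as Python classes; the names
# are hypothetical and the depth-to-disparity conversion is assumed to be supplied externally.

class AcquisitionModule:                              # acquisition module 10
    def first_depth_value(self, depth_block):
        h, w = len(depth_block), len(depth_block[0])
        return depth_block[h // 2][w // 2]            # first setting unit 12: one predetermined position

class FirstGenerationModule:                          # first generation module 20
    def __init__(self, depth_to_disparity, scale=1.0):
        self.depth_to_disparity = depth_to_disparity
        self.scale = scale                            # predetermined real number (constant or scaling factor)

    def first_disparity_vector(self, depth_value):
        dv2x = self.depth_to_disparity(depth_value)   # converting unit 22
        return (self.scale * dv2x, 0.0)               # third setting unit 24

class SecondGenerationModule:                         # second generation module 30
    def third_disparity_vector(self, first_dvs):      # fourth setting unit 34: weighted average
        n = len(first_dvs)
        return (sum(dv[0] for dv in first_dvs) / n,
                sum(dv[1] for dv in first_dvs) / n)
```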
The disparity vector generation method and the disparity vector generating device of the above embodiments are described further and in more detail below with reference to Fig. 5, Fig. 6 and preferred embodiments 1 to 15.
Before doing so, some parameters used in the preferred embodiments below are briefly described:
The viewpoint-1 depth image resolution is Wd1 × Hd1 (i.e. Wd1 depth pixels wide and Hd1 depth pixels high); the viewpoint-1 texture image resolution is Wt1 × Ht1 (i.e. Wt1 texture pixels wide and Ht1 texture pixels high); the viewpoint-2 depth image resolution is Wd2 × Hd2 (i.e. Wd2 depth pixels wide and Hd2 depth pixels high); and the viewpoint-2 texture image resolution is Wt2 × Ht2 (i.e. Wt2 texture pixels wide and Ht2 texture pixels high).
In addition, the term "viewpoint-2 image" is defined here to mean either the viewpoint-2 texture image (an image representing the color or brightness of the scene) or the viewpoint-2 depth image (an image representing the distance between the scene and the shooting camera), with resolution W2 × H2. That is, when the viewpoint-2 image refers to the viewpoint-2 texture image, the resolution W2 × H2 is the resolution of the viewpoint-2 texture image, i.e. W2 = Wt2, H2 = Ht2, and a viewpoint-2 image pixel refers to a texture pixel of the viewpoint-2 texture image; when the viewpoint-2 image refers to the viewpoint-2 depth image, the resolution W2 × H2 is the resolution of the viewpoint-2 depth image, i.e. W2 = Wd2, H2 = Hd2, and a viewpoint-2 image pixel refers to a depth pixel of the viewpoint-2 depth image. Generally, the ratio of the texture image resolution to the depth image resolution of a viewpoint is constant (e.g. 1, 2 or 4 times), the texture image resolutions of different viewpoints are identical, and the depth image resolutions of different viewpoints are identical.
The horizontal and vertical components of a disparity vector DV are denoted DVx and DVy respectively; the horizontal and vertical coordinates of a position P(x, y) are denoted Px and Py respectively.
Preferred embodiment 1
This preferred embodiment relates to a disparity vector generation method.
The viewpoint-1 depth image comprises Nd basic depth blocks, each containing E × F depth pixels (i.e. each basic depth block is E pixels wide and F pixels high). For example, the viewpoint-1 depth image contains Wd1/E basic depth blocks in the horizontal direction and Hd1/F basic depth blocks in the vertical direction, i.e. Nd = (Wd1/E) × (Hd1/F) basic depth blocks in total.
The viewpoint-2 image comprises Nt basic image blocks, each containing M × N image pixels. For example, the viewpoint-2 image has W2/M basic image blocks in the horizontal direction and H2/N basic image blocks in the vertical direction, i.e. Nt = (W2/M) × (H2/N) basic image blocks in total. As noted above, the viewpoint-2 image may be the viewpoint-2 texture image or the viewpoint-2 depth image. Each basic image block corresponds to a disparity vector between the viewpoint-1 image and the viewpoint-2 image, specifically the disparity vector pointing from the viewpoint-2 image to the viewpoint-1 image (or alternatively the disparity vector pointing from the viewpoint-1 image to the viewpoint-2 image), referred to as the disparity vector of the basic image block. It should be noted that the viewpoint-1 image refers to the viewpoint-1 texture image or the viewpoint-1 depth image, with the following relation: when the viewpoint-2 image refers to the viewpoint-2 texture image, the viewpoint-1 image refers to the viewpoint-1 texture image; when the viewpoint-2 image refers to the viewpoint-2 depth image, the viewpoint-1 image refers to the viewpoint-1 depth image.
The sizes of the basic depth block and the basic image block can be determined in several ways, one of which may be selected, for example:
Way 1: set both the basic image block size and the basic depth block size to M × N (e.g. 2 × 2, 4 × 4, 8 × 8, 16 × 16, 4 × 2, 8 × 4, 16 × 8, 2 × 4, 4 × 8, 8 × 16, 3 × 5, etc.), i.e. E = M, F = N;
Way 2: set the basic image block size to M × N (e.g. 2 × 2, 4 × 4, 8 × 8, 16 × 16, 4 × 2, 8 × 4, 16 × 8, 2 × 4, 4 × 8, 8 × 16, 3 × 5, etc.) and set the basic depth block size to (M × S1) × (N × S2), where S1 and S2 are constants, e.g. S1 = S2 = 1/2, or S1 = 1, S2 = 1/2, or S1 = S2 = 2, or S1 = 2, S2 = 1, etc.;
Way 3: set the basic image block size to M × N (e.g. 2 × 2, 4 × 4, 8 × 8, 16 × 16, 4 × 2, 8 × 4, 16 × 8, 2 × 4, 4 × 8, 8 × 16, 3 × 5, etc.) and obtain E and F by multiplying M and N respectively by the ratio of the viewpoint-1 depth image resolution to the viewpoint-2 image resolution, i.e. E, F and M, N satisfy E = M × Wd1/W2 and F = N × Hd1/H2. When Wd1/W2 = Hd1/H2 = S, then E = M × S and F = N × S.
It should be noted that in ways 2 and 3 above, the basic depth block size may equivalently be set to E × F first, with M and N then obtained by multiplying E and F by the reciprocals of the coefficients used in ways 2 and 3.
The disparity vectors of Nta (Nta ≤ Nt) of the Nt basic image blocks can be stored as a disparity vector field containing these Nta disparity vectors. In particular, when the vertical components of all disparity vectors are 0, only the disparity vector field formed by the horizontal components of the disparities needs to be stored. In addition, the disparity vector of a basic image block can also be stored in another form, namely as the depth value corresponding to the disparity vector; when the disparity vector of the basic image block needs to be accessed, the depth value of the basic image block is converted into the disparity vector of the basic image block.
The disparity vector of an M × N basic image block in the viewpoint-2 image is obtained from an E × F basic depth block in the viewpoint-1 depth image, where E × F > 1 and M × N > 1, by the following processing:
(1) For any basic depth block, a depth value D is obtained from X (1 ≤ X ≤ E × F) of its depth pixels. There are several methods, one of which may be selected, for example:
Method 1: take the pixel value of the depth pixel at one predetermined position in the basic depth block as the depth value D, e.g. the upper-left, lower-left, upper-right or lower-right corner depth pixel, or the center depth pixel;
Method 2: select one value from the pixel values of the depth pixels at multiple predetermined positions in the basic depth block as the depth value D, the multiple predetermined positions including, for example, two or more of the upper-left corner, upper-right corner, lower-left corner, lower-right corner and center point of the basic depth block, or, as another example, all depth pixel positions in the basic depth block; the selection method is, for example, to take the maximum, minimum or median of the pixel values of the depth pixels at the multiple predetermined positions;
Method 3: take the weighted average of the pixel values of the depth pixels at multiple predetermined positions in the basic depth block as the depth value D, the multiple predetermined positions including, for example, two or more of the upper-left corner, upper-right corner, lower-left corner, lower-right corner and center point of the basic depth block, or, as another example, all depth pixel positions in the basic depth block; the weighted average is computed, for example, as the plain average (i.e. equal weights), or, when three predetermined positions are used, with 1/4, 1/2 and 1/4 as the weights of the three positions.
(2) The depth value D is converted into the disparity vector DV1 between the viewpoint-1 image and the viewpoint-2 image, and the corresponding position Pos2 of the basic depth block in the viewpoint-2 image is obtained, where the disparity vector DV1 points from the corresponding position Pos2 to the position Pos1 of the basic depth block in the viewpoint-1 image (or from the position Pos1 of the basic depth block in the viewpoint-1 image to the corresponding position Pos2). Here, when the viewpoint-2 image is the viewpoint-2 texture image, the viewpoint-1 image refers to the viewpoint-1 texture image; when the viewpoint-2 image is the viewpoint-2 depth image, the viewpoint-1 image refers to the viewpoint-1 depth image.
Converting the depth value D into the disparity vector DV1 of the basic depth block between the viewpoint-1 image and the viewpoint-2 image is a mature, commonly used technique. For example, when the viewpoint-1 image and the viewpoint-2 image are in (or approximately in) a parallel camera arrangement, the vertical component of the disparity vector DV1 is 0 and its horizontal component DV1x can be obtained by the formula DV1x = (f × L/Z) + du, where f is the focal length of the camera corresponding to the viewpoint-1 image, L is the baseline distance between viewpoint 1 and viewpoint 2, Z is the physical distance from the pixel indicated by the depth value D to the corresponding camera, and du is the difference in principal point offset between viewpoint 1 and viewpoint 2. Generally, L is a signed number: its absolute value represents the distance between viewpoint 1 and viewpoint 2, and its sign depends on the direction of the disparity vector and the left-right positional relation of viewpoint 1 and viewpoint 2. For example, when the disparity vector DV1 points from viewpoint 2 to viewpoint 1: if viewpoint 2 is to the left of viewpoint 1, L is negative (and the corresponding value of f × L/Z is negative); if viewpoint 2 is to the right of viewpoint 1, L is positive. When the disparity vector DV1 points from viewpoint 1 to viewpoint 2: if viewpoint 2 is to the left of viewpoint 1, L is positive; if viewpoint 2 is to the right of viewpoint 1, L is negative.
The disparity vector DV1 may point from the corresponding position Pos2 of the basic depth block in the viewpoint-2 image to the corresponding position Pos1 of the basic depth block in the viewpoint-1 image; the vector direction can then be briefly described as "pointing from the viewpoint-2 image to the viewpoint-1 image", or "from viewpoint 2 to viewpoint 1", and Pos2 = Pos1 - DV1 (in particular, for the horizontal component, Pos2x = Pos1x - DV1x). Alternatively, DV1 may point from Pos1 to Pos2; the vector direction can then be briefly described as "pointing from the viewpoint-1 image to the viewpoint-2 image", or "from viewpoint 1 to viewpoint 2", and Pos2 = Pos1 + DV1 (in particular, for the horizontal component, Pos2x = Pos1x + DV1x). The disparity vector DV1 may be of integer-pixel precision or of sub-pixel precision, e.g. 1/2- or 1/4-pixel precision. When the viewpoint-1 image and the viewpoint-2 image are in a non-parallel camera arrangement, the corresponding position Pos2 of Pos1 is obtained by the commonly used three-dimensional projection (3D warping) equations, and DV1 is obtained as DV1 = Pos1 - Pos2 (DV1 pointing from Pos2 to Pos1) or DV1 = Pos2 - Pos1 (DV1 pointing from Pos1 to Pos2). For ease of understanding, reference may be made to Fig. 1.
It should be added that the position of a block can generally be represented by the coordinate, in the image, of a certain pixel of the block (e.g. its center point, its upper-left corner point, a certain point on its vertical center line, or another point agreed upon in advance); combined with the block size, one then knows which region of the image the block occupies. In this embodiment, the position of a block is agreed to be represented by its center point; other conventions may also be used. In addition, for convenience of description, the position of the basic depth block in viewpoint 1 is denoted Pos0.
The corresponding position Pos1 of the basic depth block in the viewpoint-1 image may be determined as follows: when the viewpoint-1 image refers to the viewpoint-1 depth image, the depth image block at Pos1 is the basic depth block itself, i.e. Pos1 = Pos0; when the viewpoint-1 image refers to the viewpoint-1 texture image, the texture image block at Pos1 and the basic depth block correspond to the same spatial region, e.g. when the texture image and the depth image have equal resolution, Pos1 = Pos0 (Pos1x = Pos0x, Pos1y = Pos0y), and when the horizontal and vertical resolutions of the texture image are each 2 times those of the depth image, Pos1 = Pos0 × 2 (Pos1x = Pos0x × 2, Pos1y = Pos0y × 2).
(3) The disparity vector DV2 of the basic image block is the disparity vector DV1, or the product of DV1 and a real number, where the basic image block is located at the corresponding position Pos2, and the real number is, for example, a constant such as -1, 1/2, -1/2, 2 or -2, or a scaling factor whose absolute value is the ratio of the viewpoint-1 depth image resolution to the viewpoint-2 image resolution or its reciprocal.
The basic image block being located at the corresponding position Pos2 means that the basic image block is the image block covered by Pos2 in the viewpoint-2 image, e.g. an image block of M × N image pixels centered at Pos2. In particular, if the viewpoint-2 image has been divided in advance into multiple image blocks of M × N image pixels according to a certain rule, the basic image block is the pre-divided image block of M × N image pixels that covers Pos2; it should be noted that the center point Pos2' of this basic image block may not be equal to Pos2.
Generally, DV2 can also be stored as DV1, as -DV1 (i.e. same magnitude, opposite direction), or as a scaled version of DV1.
Preferred embodiment 2
This preferred embodiment relates to an inter-view prediction image generation method, which is one application of the disparity vector generation method provided by the present invention. First, the disparity vector DV2 of a basic image block in the viewpoint-2 image is obtained by the method of preferred embodiment 1; the position of this basic image block is Pos2', and DV2 points from the viewpoint-2 image to the viewpoint-1 image (or DV2 points from the viewpoint-1 image to the viewpoint-2 image). Then, an image block at a corresponding position Pos1' in the viewpoint-1 image is obtained by Pos1' = Pos2 + DV2 (or Pos1' = Pos2 - DV2); it should be noted that Pos1' may differ from the corresponding position Pos1, in the viewpoint-1 image, of the basic depth block used to generate DV2 in preferred embodiment 1.
The image block at Pos1' is taken as the inter-view prediction image of the basic image block in viewpoint 2. It should be noted that when Pos1' is a sub-pixel position (i.e. when DV2 is of sub-pixel precision), a sub-pixel interpolation filter (such as the sub-pixel interpolation filter in H.264/AVC) can be used to obtain the image pixels at the sub-pixel positions, producing the sub-pixel-precision pixel values of the image block at Pos1'. It should be added that in this embodiment the texture image and depth image of viewpoint 1 are usually reconstructed images, rather than original images.
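A minimal sketch of the inter-view prediction fetch described in this embodiment is given below; it treats block positions as top-left coordinates and uses bilinear interpolation as a stand-in for a normative sub-pixel filter (such as the H.264/AVC one), both of which are simplifications made for illustration.

```python
# A minimal sketch of the inter-view prediction fetch: take the M x N block at
# Pos1' = Pos2 + DV2 from the reconstructed viewpoint-1 texture image. Positions are treated
# as top-left coordinates and bilinear interpolation stands in for a normative sub-pixel
# filter (e.g. the H.264/AVC one); both are simplifications for illustration.
import math

def interview_prediction(ref_image, pos2, dv2, M, N):
    """ref_image: 2D list of reconstructed viewpoint-1 pixels; pos2, dv2: (x, y) tuples."""
    H, W = len(ref_image), len(ref_image[0])
    x0, y0 = pos2[0] + dv2[0], pos2[1] + dv2[1]       # Pos1' (possibly a sub-pixel position)
    pred = [[0.0] * M for _ in range(N)]
    for j in range(N):
        for i in range(M):
            x, y = x0 + i, y0 + j
            xi, yi = math.floor(x), math.floor(y)
            fx, fy = x - xi, y - yi
            xa = min(max(xi, 0), W - 1); xb = min(max(xi + 1, 0), W - 1)
            ya = min(max(yi, 0), H - 1); yb = min(max(yi + 1, 0), H - 1)
            pred[j][i] = ((1 - fx) * (1 - fy) * ref_image[ya][xa] + fx * (1 - fy) * ref_image[ya][xb] +
                          (1 - fx) * fy * ref_image[yb][xa] + fx * fy * ref_image[yb][xb])
    return pred
```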
Preferred embodiment 3
This preferred embodiment relates to a disparity vector generation method. In this preferred embodiment, the texture image resolutions of viewpoint 1 and viewpoint 2 are identical, the depth image resolutions of viewpoint 1 and viewpoint 2 are identical, and the horizontal and vertical resolutions of the viewpoint-1 texture image are each 2 times those of the depth image. The texture images of viewpoint 1 and viewpoint 2 are in a parallel camera arrangement. This embodiment is used to generate the disparity vectors of the viewpoint-2 texture image. It should be added that when this embodiment is applied to video coding, the texture images and depth images are usually reconstructed images.
The viewpoint-2 texture image is divided into M × N blocks (blocks of M × N pixels, M pixels wide and N pixels high), with M = 4, N = 4; each M × N block is a basic image block, referred to in this embodiment, for clarity of description, as a basic texture block. The viewpoint-1 depth image is divided into E × F blocks, with E = 2, F = 2; each E × F block is a basic depth block.
For all basic depth blocks, or a part of them (e.g. all basic depth blocks in a rectangular window containing one or more basic depth blocks), the following processing is performed to obtain the disparity vectors of one or more basic texture blocks in the viewpoint-2 texture image:
(1) For any basic depth block, the maximum of the depth values of its four depth pixels at the upper-left, upper-right, lower-left and lower-right corners is taken as the depth value D of the basic depth block.
(2) The horizontal component DV1x of the disparity vector DV1 of the basic depth block is obtained by the commonly used formula DV1x = (f × L/Z) + du, and the vertical component is 0, where f is the focal length of the camera corresponding to the viewpoint-1 image, L is the baseline distance between viewpoint 1 and viewpoint 2, Z is the physical distance from the pixel indicated by the depth value D to the corresponding camera, and du is the difference in principal point offset between the viewpoint-1 and viewpoint-2 images, whose value is usually 0. The direction of DV1 is from viewpoint 2 to viewpoint 1, and its horizontal component is expressed at 1/4-pixel precision (e.g. the value 5 represents 1.25 pixels, or 5 quarter-pixels). The corresponding position Pos2 of the basic depth block in the viewpoint-2 texture image is obtained by Pos2 = Pos1 - DV1, where Pos1 is the corresponding position of the basic depth block in the viewpoint-1 texture image; the horizontal components of Pos1 and Pos2 are also at 1/4-pixel precision, and the vertical components are at integer-pixel precision (i.e. the value 5 represents 5 pixels). Let the coordinate of the upper-left pixel of the basic depth block in the viewpoint-1 depth image be (x1, y1), where x1 and y1 are at integer-pixel precision; then:
Pos1x = x1 × Sc1 × Fa + offset1, Pos1y = y1 × Sc2 + offset2;
Pos2x = x1 × Sc1 × Fa + offset1 - DV1x, Pos2y = y1 × Sc2 + offset3;
where Sc1 is the ratio of the texture image horizontal resolution to the depth image horizontal resolution, Sc2 is the ratio of the texture image vertical resolution to the depth image vertical resolution, and the "× Fa" operation converts a horizontal coordinate at integer-pixel precision to 1/Fa-pixel precision. In this embodiment, Sc1 = 2, Sc2 = 2 and Fa = 4; offset1 = Fa × E × Sc1/2, and offset2 and offset3 are numbers between 0 and F × Sc2 - 1, e.g. offset2 = offset3 = 0.
(3) The disparity vector DV2 of the basic texture block covering the point Pos2 is assigned the value DV1. Whether a basic texture block covers Pos2 may, for example, be judged as follows: let the coordinate of the upper-left pixel of the basic texture block in the viewpoint-2 texture image be (x2, y2), where x2 and y2 are at integer-pixel precision; if x2/M rounded down (i.e. the integer part of x2 divided by M) equals Pos2x/(Fa × M) rounded down, and y2/N rounded down equals Pos2y/N rounded down, then the basic texture block covers Pos2. It should be noted that when Y is a power of 2, the operation "X/Y rounded down" can also be implemented as "right-shift X by log2(Y) bits".
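The following Python sketch walks through steps (1) to (3) of this embodiment for the whole depth image; the helper depth_to_disparity_quarter_pel, the regular block grid and the assumption that the image dimensions are multiples of the block sizes are illustrative simplifications, not part of the embodiment itself.

```python
# A sketch of steps (1)-(3) of this embodiment (Sc1 = Sc2 = 2, Fa = 4, M = N = 4, E = F = 2),
# assuming depth_to_disparity_quarter_pel(D) returns f*L/Z + du already expressed in
# quarter-pixel units and that the image dimensions are multiples of the block sizes.

def embodiment3_disparity_field(depth_image, depth_to_disparity_quarter_pel,
                                E=2, F=2, M=4, N=4, Sc1=2, Sc2=2, Fa=4):
    Hd, Wd = len(depth_image), len(depth_image[0])
    Wt, Ht = Wd * Sc1, Hd * Sc2                       # viewpoint-2 texture resolution
    dv2_field = [[None] * (Wt // M) for _ in range(Ht // N)]
    offset1, offset3 = Fa * E * Sc1 // 2, 0           # offset2 (for Pos1y) is not needed here
    for y1 in range(0, Hd, F):
        for x1 in range(0, Wd, E):
            # (1) depth value D: maximum of the four corner depth pixels of the basic depth block
            D = max(depth_image[y1][x1], depth_image[y1][x1 + E - 1],
                    depth_image[y1 + F - 1][x1], depth_image[y1 + F - 1][x1 + E - 1])
            # (2) DV1 (horizontal only, quarter-pel) and corresponding position Pos2 = Pos1 - DV1
            dv1x = depth_to_disparity_quarter_pel(D)
            pos2x = x1 * Sc1 * Fa + offset1 - dv1x    # quarter-pel horizontal coordinate
            pos2y = y1 * Sc2 + offset3                # integer-pel vertical coordinate
            # (3) assign DV1 to the 4x4 basic texture block covering Pos2 ("X/Y rounded down")
            bx, by = int(pos2x // (Fa * M)), int(pos2y // N)
            if 0 <= bx < Wt // M and 0 <= by < Ht // N:
                dv2_field[by][bx] = (dv1x, 0)
    return dv2_field
```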
Preferred embodiment 4
This preferred embodiment 4 relates to a disparity vector generation method. In this preferred embodiment, the texture image resolutions of viewpoint 1 and viewpoint 2 are identical, the depth image resolutions of viewpoint 1 and viewpoint 2 are identical, and the texture image resolution of viewpoint 1 is identical to its depth image resolution. The texture images of viewpoint 1 and viewpoint 2 are in a parallel camera arrangement, and the difference in principal point offset between the viewpoint-1 and viewpoint-2 texture images is 0. This preferred embodiment is used to generate the disparity vectors of the viewpoint-2 texture image (i.e. the disparity vectors of the basic image blocks in the viewpoint-2 texture image).
The viewpoint-2 texture image is divided into M × N blocks, e.g. with M = 4, N = 4; each 4 × 4 block is a basic image block, referred to in this embodiment, for clarity of description, as a basic texture block. The viewpoint-1 depth image is divided into E × F blocks, where E and F are determined by the ratio of the viewpoint-1 depth image resolution to the viewpoint-2 texture image resolution: E = M × (Wd1/Wt2) = 4, F = N × (Hd1/Ht2) = 4; each E × F block is a basic depth block.
For all basic depth blocks, or a part of them, the following processing is performed to obtain the disparity vectors of one or more basic texture blocks in the viewpoint-2 texture image:
(1) For any basic depth block, the depth value of the depth pixel at its center point Cen is used as the depth value D of the basic depth block. Let the coordinate of the upper-left pixel of the basic depth block in the viewpoint-1 depth image be (x1, y1), where x1 and y1 are at integer-pixel precision; then the horizontal and vertical coordinates of the center point Cen may be defined as:
Cenx = x1 + E/2; Ceny = y1 + F/2;
or alternatively as:
Cenx = x1 + E/2 - 1; Ceny = y1 + F/2 - 1.
(2) The horizontal component of the disparity vector DV1 of the basic depth block is obtained by the commonly used formula DV1x = f × l/Z = C1 × D + C2, and the vertical component is 0, where the depth value D of a depth pixel and the physical distance Z from that pixel to the corresponding camera are related (for an 8-bit depth value) by
1/Z = (D/255) × (1/Znear - 1/Zfar) + 1/Zfar,
so that C1 = f × l × (1/Znear - 1/Zfar)/255 and C2 = f × l/Zfar. Here f is the focal length of the camera corresponding to the viewpoint-1 image, l is the baseline distance between viewpoint 1 and viewpoint 2, Z is the physical distance from the pixel indicated by the depth value D to the corresponding camera, and Znear and Zfar are the nearest and farthest depth planes respectively. The direction of DV1 is from viewpoint 1 to viewpoint 2, and its horizontal component is expressed at 1/2-pixel precision (e.g. the value 5 represents 2.5 pixels, or 5 half-pixels). The corresponding position of the basic depth block in the viewpoint-2 texture image is obtained by Pos2 = Pos1 + DV1, where Pos1 is the corresponding position of the basic depth block in the viewpoint-1 texture image; the horizontal components of Pos1 and Pos2 are also at 1/2-pixel precision, and the vertical components are at integer-pixel precision. Let the coordinate of the upper-left pixel of the basic depth block in the viewpoint-1 depth image be (x1, y1), where x1 and y1 are at integer-pixel precision; then:
Pos1x = x1 × Fa + offset1, Pos1y = y1 + offset2;
Pos2x = x1 × Fa + offset1 + DV1x, Pos2y = y1 + offset3;
where offset1 = Fa × E/2, and offset2 and offset3 are numbers between 0 and F - 1, e.g. offset2 = offset3 = F/2. In this embodiment Fa = 2, and the "× Fa" operation converts a horizontal coordinate at integer-pixel precision to 1/2-pixel precision.
(3) The disparity vector DV2 of the basic texture block covering the point Pos2 is assigned the value DV1. Whether a basic texture block covers Pos2 (i.e. whether Pos2 falls inside the basic texture block) may, for example, be judged as follows: let the coordinate of the upper-left pixel of the basic texture block in the viewpoint-2 texture image be (x2, y2), where x2 and y2 are at integer-pixel precision; if x2 ≤ Pos2x/Fa < x2 + M and y2 ≤ Pos2y < y2 + N, then the basic texture block covers Pos2. It should be added that in the image coordinate system the horizontal coordinate is usually positive from left to right, and the vertical coordinate is positive from top to bottom.
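A compact sketch of this embodiment is given below; the derivation of C1 and C2 from the 8-bit inverse-depth mapping and the scaling of the result to half-pixel units are assumptions made for the example, not statements of the embodiment itself.

```python
# A sketch of this embodiment (equal texture/depth resolutions, Fa = 2, M = N = E = F = 4),
# assuming an 8-bit depth value and the inverse-depth mapping 1/Z = (D/255)*(1/Z_near - 1/Z_far)
# + 1/Z_far; C1 and C2 are scaled by Fa so that DV1x comes out directly in half-pixel units.

def embodiment4_disparity_field(depth_image, f, l, z_near, z_far,
                                E=4, F=4, M=4, N=4, Fa=2):
    C1 = Fa * f * l * (1.0 / z_near - 1.0 / z_far) / 255.0
    C2 = Fa * f * l / z_far
    Hd, Wd = len(depth_image), len(depth_image[0])    # texture resolution equals depth resolution here
    dv2_field = [[None] * (Wd // M) for _ in range(Hd // N)]
    offset1, offset3 = Fa * E // 2, F // 2
    for y1 in range(0, Hd, F):
        for x1 in range(0, Wd, E):
            D = depth_image[y1 + F // 2][x1 + E // 2] # (1) centre-point depth pixel Cen
            dv1x = C1 * D + C2                        # (2) DV1x in half-pel units; Pos2 = Pos1 + DV1
            pos2x = x1 * Fa + offset1 + dv1x
            pos2y = y1 + offset3
            # (3) basic texture block covering Pos2: x2 <= Pos2x/Fa < x2 + M, y2 <= Pos2y < y2 + N
            bx, by = int((pos2x / Fa) // M), int(pos2y // N)
            if 0 <= bx < Wd // M and 0 <= by < Hd // N:
                dv2_field[by][bx] = (dv1x, 0.0)
    return dv2_field
```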
Preferred embodiment 5
This preferred embodiment relates to a disparity vector generation method. In this embodiment, the depth images of viewpoint 1 and viewpoint 2 have the same resolution, the cameras of viewpoint 1 and viewpoint 2 are arranged in a parallel configuration, and the difference in principal point offset between the depth images of viewpoint 1 and viewpoint 2 is 0. This embodiment is used to generate disparity vectors for the depth image of viewpoint 2.
The depth image of viewpoint 2 is divided in advance into M × N blocks, for example M=8, N=4; each M × N block is a primary image block. The depth image of viewpoint 1 is divided into E × F blocks, for example E=4, F=4; each E × F block is a basic depth block.
For all of the basic depth blocks, the following processing is performed to obtain the disparity vectors of multiple primary image blocks in the depth image of viewpoint 2:
(1) For any basic depth block, the depth value of the depth pixel in its top-left corner is used as the depth value D of the basic depth block.
(2) The horizontal component DV1x of the disparity vector DV1 of the basic depth block is obtained from the commonly used formula DV1x = f × l/Z = C1 × D + C2, and its vertical component is 0. The direction of DV1 is from the image of viewpoint 1 to the image of viewpoint 2, and its horizontal component is expressed at 1/4-pel precision.
The corresponding position Pos2 of the basic depth block on the depth image of viewpoint 2 is obtained as Pos2 = Pos1 + DV1, where Pos1 is the corresponding position of the basic depth block on the depth image of viewpoint 1. The horizontal components of Pos1 and Pos2 are also at 1/4-pel precision, and their vertical components are at integer-pel precision. With the coordinate of the top-left pixel of the basic depth block in the depth image of viewpoint 1 denoted (x1, y1), where x1 and y1 are at integer-pel precision:
Pos1x = x1 × 4 + 8, Pos1y = y1;
Pos2x = x1 × 4 + 8 + DV1x, Pos2y = y1;
where the "× 4" operation converts the integer-pel horizontal coordinate to 1/4-pel precision. Multiplication by an integral power of 2 (e.g. 2, 4, 8) can also be implemented with a shift.
(3) The disparity vector DV2 of the primary image block that covers the point Pos2 is set to −DV1. Whether a primary image block covers Pos2 may be judged, for example, as follows: denote the coordinate of the top-left pixel of the primary image block in the image of viewpoint 2 (in this embodiment, the depth image of viewpoint 2) as (x2, y2), where x2 and y2 are at integer-pel precision; if x2 ≤ Pos2x/4 < x2 + M and y2 ≤ Pos2y < y2 + N, then the primary image block covers Pos2.
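A minimal sketch of this quarter-pel variant follows, assuming precomputed constants C1 and C2, an M × N-aligned tiling of the viewpoint-2 depth image, and the "× 4" coordinate conversion implemented as a left shift; the container used to collect the DV2 values is illustrative.

```python
# Illustrative sketch of Embodiment 5: top-left depth pixel, quarter-pel precision,
# and the covering primary image block receives -DV1 as its disparity vector DV2.

def embodiment5_assign(depth_block, x1, y1, c1, c2, E=4, M=8, N=4, dv2_map=None):
    """dv2_map collects DV2 per primary (M x N) block of the viewpoint-2 depth image,
    keyed by the block's top-left integer-pel coordinate (x2, y2)."""
    if dv2_map is None:
        dv2_map = {}
    d = depth_block[0][0]                       # step (1): top-left depth pixel
    dv1x = int(round((c1 * d + c2) * 4))        # step (2): quarter-pel horizontal disparity
    pos1x = (x1 << 2) + 4 * E // 2              # "x1 * 4 + 8" for E = 4; "x4" done as a shift
    pos2x, pos2y = pos1x + dv1x, y1
    x2 = (pos2x >> 2) // M * M                  # step (3): block covering Pos2, i.e.
    y2 = pos2y // N * N                         # x2 <= Pos2x/4 < x2+M and y2 <= Pos2y < y2+N
    dv2_map[(x2, y2)] = (-dv1x, 0)              # DV2 = -DV1, vertical component 0
    return dv2_map

print(embodiment5_assign([[100] * 4 for _ in range(4)], x1=16, y1=8, c1=0.2, c2=4.0))
```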
Preferred embodiment 6
This preferred embodiment relates to a disparity vector generation method. In this embodiment, the texture images of viewpoint 1 and viewpoint 2 have the same resolution, the depth images of viewpoint 1 and viewpoint 2 have the same resolution, and the horizontal and vertical resolution of the texture image of viewpoint 1 is twice that of its depth image. The cameras of viewpoint 1 and viewpoint 2 are in a non-parallel arrangement. This embodiment is used to generate disparity vectors for the depth image of viewpoint 2.
The depth image of viewpoint 2 is divided into M × N blocks (blocks containing M × N pixels), for example M=3, N=3; each M × N block is a primary image block. The depth image of viewpoint 1 is divided into E × F blocks, E=3, F=3; each E × F block is a basic depth block.
For all of the basic depth blocks, or for a subset of them (for example, one row of basic depth blocks), the following processing is performed to obtain the disparity vectors of one or more primary image blocks in the depth image of viewpoint 2:
(1) For any basic depth block, whose position is Pos1, the average of the depth values of the depth pixels in its top-left and top-right corners is used as the depth value D of the basic depth block.
(2) The corresponding position Pos2 of the basic depth block on the depth image of viewpoint 2 is obtained by a three-dimensional projection (3D warping) formula; the horizontal and vertical components of Pos1 and Pos2 are at integer-pel precision. With the coordinate of the centre pixel of the basic depth block in the depth image of viewpoint 1 denoted (x1, y1), where x1 and y1 are at integer-pel precision:
Pos1x = x1, Pos1y = y1; (the position of the basic depth block is represented by its centre point)
DV1x = Pos2x − Pos1x, DV1y = Pos2y − Pos1y;
where the direction of DV1 is from viewpoint 1 to viewpoint 2.
(3) The disparity vector DV2 of the primary image block centred at Pos2 is set to S5 × DV1, where S5 is a fixed constant, for example S5 = −1 or S5 = 2.
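Because the 3D warping step depends on the full camera parameters of the non-parallel arrangement, which this embodiment does not spell out, the sketch below treats the warping as a supplied callable and only illustrates how D, DV1 and DV2 = S5 × DV1 are formed; the toy warping function at the end is purely illustrative.

```python
# Illustrative sketch of Embodiment 6: the correspondence Pos2 comes from a 3D
# warping function (assumed to be provided; it needs the non-parallel camera
# parameters, which are outside the scope of this sketch).

def embodiment6_dv2(depth_block, center_x, center_y, warp_to_view2, s5=-1):
    """depth_block: E x F basic depth block; (center_x, center_y): its centre pixel
    in the viewpoint-1 depth image; warp_to_view2(x, y, depth) -> (x', y') in the
    viewpoint-2 depth image; returns (Pos2, DV2) for the block centred at Pos2."""
    d = (depth_block[0][0] + depth_block[0][-1]) / 2.0   # step (1): mean of the
                                                         # top-left and top-right pixels
    pos1 = (center_x, center_y)                          # block represented by its centre
    pos2 = warp_to_view2(center_x, center_y, d)          # step (2): 3D warping
    dv1 = (pos2[0] - pos1[0], pos2[1] - pos1[1])
    dv2 = (s5 * dv1[0], s5 * dv1[1])                     # step (3): DV2 = S5 * DV1
    return pos2, dv2

# Toy warping that shifts horizontally in proportion to depth (purely illustrative).
toy_warp = lambda x, y, d: (x + int(round(d / 16.0)), y)
print(embodiment6_dv2([[96, 96, 96]] * 3, center_x=10, center_y=7, warp_to_view2=toy_warp))
```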
Preferred embodiment 7
This preferred embodiment relates to a disparity vector generation method. First, the disparity vector generation method described in Embodiment 1 (or in one of Embodiments 3, 4, 5 and 6) is applied in turn to the basic depth blocks in some region of viewpoint 1 (for example, one or more macroblock rows, a rectangular region of size R × S, or the entire image), obtaining the disparity vectors of one or more primary image blocks in the image of viewpoint 2.
For a target image block of size J × K in viewpoint 2, the disparity vectors of Q1 (1 ≤ Q1 ≤ Q) primary image blocks at predetermined positions among the Q primary image blocks it contains are taken, giving Q1 disparity vectors. The disparity vector DV2' of the target image block is then set to one of these Q1 disparity vectors, or to a weighted average of them (for example, when Q1 = 5, a weighted average with weights 1/8, 1/8, 1/8, 1/8, 1/2).
Methods for setting DV2' to one of the Q1 disparity vectors include, for example, one of the following:
Method 1: use the disparity vector of the primary image block at one predetermined position in the target image block as the disparity vector of the target image block, where the predetermined position is, for example, the top-left corner, bottom-left corner, top-right corner, bottom-right corner or centre point of the target image block;
Method 2: select one disparity vector, as the disparity vector of the target image block, from the disparity vectors of the primary image blocks at multiple predetermined positions in the target image block, where the multiple predetermined positions include, for example, two or more of the top-left corner, top-right corner, bottom-left corner, bottom-right corner and centre point of the target image block, or, as another example, the positions of all primary image blocks in the target image block; the selection method is, for example, to take the maximum, minimum or median value among the disparity vectors of the primary image blocks at the multiple predetermined positions.
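A small sketch of the two ways of combining the Q1 disparity vectors described above (a weighted average, and selection of a maximum, minimum or median); ordering the vectors by their horizontal component for the selection is an assumption of the sketch.

```python
# Illustrative sketch of Embodiment 7: deriving a target block's DV2' from the
# disparity vectors of Q1 primary image blocks at predetermined positions.

def weighted_average_dv(dvs, weights):
    """Weighted average of horizontal/vertical components, e.g. weights
    (1/8, 1/8, 1/8, 1/8, 1/2) when Q1 = 5."""
    x = sum(w * dv[0] for dv, w in zip(dvs, weights))
    y = sum(w * dv[1] for dv, w in zip(dvs, weights))
    return (x, y)

def select_dv(dvs, mode="median"):
    """Method 2: pick one DV among several predetermined positions by its
    horizontal component (maximum, minimum or median)."""
    ordered = sorted(dvs, key=lambda dv: dv[0])
    if mode == "max":
        return ordered[-1]
    if mode == "min":
        return ordered[0]
    return ordered[len(ordered) // 2]          # median

corner_and_center_dvs = [(12, 0), (10, 0), (15, 0), (11, 0), (13, 0)]
print(weighted_average_dv(corner_and_center_dvs, [1/8, 1/8, 1/8, 1/8, 1/2]))
print(select_dv(corner_and_center_dvs, mode="max"))
```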
Preferred embodiment 8
This preferred embodiment relates to a disparity vector generation method. First, the disparity vector generation method described in Embodiment 3 is applied in turn to the basic depth blocks in a rectangular region Reg1 of size R × S in viewpoint 1, obtaining the disparity vectors of one or more primary image blocks in another region Reg2 of the image of viewpoint 2. For a target image block of size J × K within Reg2 in viewpoint 2, among the disparity vectors of the four primary image blocks at the top-left, top-right, bottom-left and bottom-right corners of the target image block, the one whose horizontal component has the largest absolute value is used as the disparity vector DV2' of the target image block.
It should be noted that some primary image blocks in Reg2 may have no disparity vector, for example those located in a dis-occlusion region; their disparity vectors are not generated from the depth of any basic depth block. The disparity vectors of these primary image blocks can be set to a fixed value, to the disparity vector of another primary image block around them, or to a weighted average of the disparity vectors of multiple surrounding primary image blocks.
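The following sketch illustrates the corner-based selection and the dis-occlusion fallback described above; the tuple representation of disparity vectors and the use of None for a missing vector are assumptions of the sketch.

```python
# Illustrative sketch of Embodiment 8: the target block's DV2' is the corner DV
# with the largest |horizontal component|; blocks left without a DV (e.g. in
# dis-occluded regions) fall back to a fixed value or to their neighbours' DVs.

def target_dv_max_abs_horizontal(corner_dvs):
    """corner_dvs: DVs of the primary image blocks at the target block's
    top-left, top-right, bottom-left and bottom-right corners (None if missing)."""
    available = [dv for dv in corner_dvs if dv is not None]
    return max(available, key=lambda dv: abs(dv[0]))

def fill_missing_dv(dv, neighbour_dvs, fixed_dv=(0, 0)):
    """Dis-occlusion handling: keep the DV if present, otherwise average the
    neighbours' DVs (or use a fixed value when no neighbour has one)."""
    if dv is not None:
        return dv
    known = [n for n in neighbour_dvs if n is not None]
    if not known:
        return fixed_dv
    return (sum(n[0] for n in known) / len(known),
            sum(n[1] for n in known) / len(known))

print(target_dv_max_abs_horizontal([(-9, 0), (7, 0), None, (3, 0)]))
print(fill_missing_dv(None, [(-9, 0), (7, 0)]))
```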
Preferred embodiment 9
This preferred embodiment relates to a disparity vector generation method. First, the disparity vector generation method described in Embodiment 4 is applied in turn to the basic depth blocks in a rectangular region Reg1 of size R × S in viewpoint 1, obtaining the disparity vectors of one or more primary image blocks in another region Reg2 (possibly non-rectangular) of the image of viewpoint 2. For a target image block of size J × K within Reg2 in viewpoint 2, among the disparity vectors of the four primary image blocks at the top-left, top-right, bottom-left and bottom-right corners of the target image block, the one whose horizontal component has the median absolute value is used as the disparity vector DV2' of the target image block.
In the above process of deriving disparity vectors for the basic depth blocks in turn, if the primary image blocks in viewpoint 2 corresponding to two horizontally adjacent basic depth blocks are not horizontally adjacent, i.e. N (N being a positive integer) primary image blocks lie between these two primary image blocks, then the disparity vector with the smaller (or larger) absolute value of the two is assigned as the disparity vector of those N primary image blocks.
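A short sketch of this gap-filling rule, assuming one row of primary image blocks is held as a list and missing vectors are marked None.

```python
# Illustrative sketch of the gap rule in Embodiment 9: when two horizontally
# adjacent basic depth blocks map to primary image blocks that are not adjacent
# in viewpoint 2, the blocks lying between them take the DV with the smaller
# (or larger) absolute value of the two.

def fill_gap(dv_row, left_idx, right_idx, use_smaller=True):
    """dv_row: list of DVs (or None) for one row of primary image blocks;
    left_idx/right_idx: indices of the two blocks produced by the two adjacent
    basic depth blocks; everything strictly between them gets filled in place."""
    left_dv, right_dv = dv_row[left_idx], dv_row[right_idx]
    pick = min if use_smaller else max
    chosen = pick(left_dv, right_dv, key=lambda dv: abs(dv[0]))
    for i in range(left_idx + 1, right_idx):
        dv_row[i] = chosen
    return dv_row

row = [(-12, 0), None, None, (4, 0), None]
print(fill_gap(row, left_idx=0, right_idx=3))   # the two gaps take (4, 0)
```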
Preferred embodiment 10
This preferred embodiment relates to a disparity vector generation method. First, the disparity vector generation method described in Preferred Embodiment 5 is applied in turn to the basic depth blocks in a rectangular region Reg1 of size R × S (R ≥ E, S ≥ F) in viewpoint 1, obtaining the disparity vectors of one or more primary image blocks in another region Reg2 of the image of viewpoint 2. For a target image block of size J × K within Reg2 in viewpoint 2, the disparity vector of the primary image block containing the centre point of the target image block is used as the disparity vector DV2' of the target image block. If the primary image block containing the centre point has no disparity vector, DV2' is set to the average of the disparity vectors of the two primary image blocks at the top-left and top-right corners of the target image block.
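A minimal sketch of the centre-point rule and its fallback, with None standing for a missing disparity vector.

```python
# Illustrative sketch of Embodiment 10: use the DV of the primary image block
# containing the target block's centre; if it has none, average the DVs of the
# blocks at the top-left and top-right corners.

def embodiment10_dv(center_dv, top_left_dv, top_right_dv):
    if center_dv is not None:
        return center_dv
    return ((top_left_dv[0] + top_right_dv[0]) / 2.0,
            (top_left_dv[1] + top_right_dv[1]) / 2.0)

print(embodiment10_dv(None, (8, 0), (12, 0)))   # -> (10.0, 0.0)
```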
Preferred embodiment 11
This preferred embodiment relates to a disparity vector generation method. First, for a target image block of size J × K in the image of viewpoint 2, a rectangular region Reg1 of size R × S is determined in the image of viewpoint 1. The disparity vector generation method described in Preferred Embodiment 1 is applied in turn to the basic depth blocks in Reg1, generating the disparity vectors of at least some of the primary image blocks within the target image block. For the target image block, multiple predetermined candidate positions in the target image block (for example the centre point, top-left corner, bottom-right corner, bottom-left corner, top-right corner, etc.) are queried in turn as to whether the disparity vector of the primary image block at that position exists (i.e. whether a disparity vector of that primary image block has been generated by processing the above basic depth blocks in Reg1 with the disparity vector generation method). The first disparity vector found to exist is taken as the disparity vector DV2' of the target image block. When none of the primary image blocks at the candidate positions has a disparity vector, one of the following methods can be used to obtain it:
Method one: visit, in some order (for example a zig-zag scan order), the primary image blocks other than those at the candidate positions, and use the disparity vector of the first visited primary image block that has one as the disparity vector DV2' of the target image block;
Method two: set the disparity vector DV2' of the target image block to a fixed value;
Method three: set the disparity vector DV2' of the target image block to the disparity vector of a primary image block adjacent to the target image block.
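The candidate-position query and the three fallback methods can be sketched as follows, assuming the per-block disparity vectors are held in a dictionary keyed by block position (an assumption of the sketch).

```python
# Illustrative sketch of Embodiment 11: query predetermined candidate positions
# inside the target block in order and take the first existing DV; if none of
# them has one, fall back to a scan, a fixed value or a neighbouring block's DV.

def dv_from_candidates(dv_of_block, candidate_positions, scan_order=None,
                       neighbour_dv=None, fixed_dv=(0, 0)):
    """dv_of_block: mapping from a primary image block position to its DV (or None);
    candidate_positions: e.g. [centre, top-left, bottom-right, bottom-left, top-right]."""
    for pos in candidate_positions:                    # first existing candidate wins
        dv = dv_of_block.get(pos)
        if dv is not None:
            return dv
    for pos in (scan_order or []):                     # method one: e.g. zig-zag scan
        dv = dv_of_block.get(pos)
        if dv is not None:
            return dv
    if neighbour_dv is not None:                       # method three: adjacent block's DV
        return neighbour_dv
    return fixed_dv                                    # method two: fixed value

dvs = {(0, 0): None, (4, 0): None, (8, 0): (6, 0)}
print(dv_from_candidates(dvs, candidate_positions=[(4, 0), (0, 0)],
                         scan_order=[(8, 0)]))
```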
Preferred embodiment 12
This preferred embodiment relates to an inter-view motion information prediction method, which is one application of the disparity vector generation method provided by the present invention. First, the disparity vector DV2' of a target image block (pointing from viewpoint 2 to viewpoint 1) is obtained with the disparity vector generation method of Preferred Embodiment 7. From DV2' and the position Pos2' of the target image block, the corresponding point Pos1' in the image of viewpoint 1 is found, i.e. Pos1' = Pos2' + DV2'. The motion information of the pixel at Pos1', such as its motion vector and reference index, is taken as the motion information prediction for the target image block.
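A minimal sketch of this inter-view motion information prediction, assuming the viewpoint-1 motion field can be looked up per pixel position; the dictionary-based motion field and its field names are assumptions.

```python
# Illustrative sketch of Embodiment 12: use DV2' to locate the corresponding
# point in the viewpoint-1 image and reuse its motion information as the
# prediction for the target block.

def predict_motion_info(pos2, dv2, motion_field_view1):
    """pos2: position of the target block in viewpoint 2; dv2: its disparity
    vector DV2' (pointing from viewpoint 2 to viewpoint 1); motion_field_view1:
    (x, y) -> {'mv': ..., 'ref_idx': ...} for viewpoint-1 pixels."""
    pos1 = (pos2[0] + dv2[0], pos2[1] + dv2[1])     # Pos1' = Pos2' + DV2'
    return motion_field_view1.get(pos1)             # motion vector + reference index

field = {(24, 16): {'mv': (3, -1), 'ref_idx': 0}}
print(predict_motion_info(pos2=(20, 16), dv2=(4, 0), motion_field_view1=field))
```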
Preferred embodiment 13
This preferred embodiment relates to a motion vector candidate list construction method, which is one application of the disparity vector method provided by the present invention. First, the disparity vector DV2' of a target image block (pointing from viewpoint 2 to viewpoint 1) is obtained with the disparity vector generation method of Preferred Embodiment 8. DV2' is then added as a vector to the motion vector candidate list of the target image block, with the reference index corresponding to DV2' indicating the texture image of viewpoint 1.
Preferred embodiment 14
This preferred embodiment relates to a disparity vector generating device. Fig. 5 is a schematic structural diagram of a disparity vector generating device according to a preferred embodiment of the present invention. As shown in Fig. 5, the device includes two units: a block depth generation unit and a disparity vector generation unit. The block depth generation unit is used to produce the depth value of a basic depth block; the disparity vector generation unit is used to produce, from the depth value of the basic depth block, the disparity vector of a primary image block in another viewpoint. The two functional units are described below.
Block depth generation unit: its input includes a basic depth block contained in the depth image of viewpoint 1, and its output includes the depth value of the basic depth block, where the basic depth block contains E × F depth pixels of the depth image of viewpoint 1 (E × F > 1). The function performed by the block depth generation unit, and its embodiments, are identical to obtaining the depth value D of the basic depth block from the depth values of X (1 ≤ X ≤ E × F) depth pixels in the basic depth block, as in the above disparity vector generation method and its embodiments.
Disparity vector generation unit: its input includes the depth value and position of the basic depth block, and its output is the disparity vector of a primary image block in the image of viewpoint 2, where the primary image block contains M × N image pixels of the image of viewpoint 2 (M × N > 1). The function performed by the disparity vector generation unit, and its embodiments, are identical to the following processing in the above disparity vector generation method and its embodiments:
(1) converting the depth value D into the disparity vector DV1 between the image of viewpoint 1 and the image of viewpoint 2, and obtaining the corresponding position Pos2 of the basic depth block in the image of viewpoint 2;
(2) setting the disparity vector DV2 of the primary image block to DV1, or to the product of DV1 and a real number, where the primary image block is located at the corresponding position Pos2, and the real number is, for example, a constant such as −1, 1/2, −1/2, 2 or −2, or a zoom factor whose absolute value is the ratio of the resolution of the depth image of viewpoint 1 to the resolution of the image of viewpoint 2, or the reciprocal of that ratio.
The image of viewpoint 2 may be the texture image of viewpoint 2, in which case the image of viewpoint 1 refers to the texture image of viewpoint 1; the image of viewpoint 2 may also be the depth image of viewpoint 2, in which case the image of viewpoint 1 refers to the depth image of viewpoint 1.
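The two units can be sketched as follows, again assuming the linear depth-to-disparity constants C1 and C2 and using the centre pixel as one of the possible choices for the block depth; all class and method names are illustrative.

```python
# Illustrative sketch of the two-unit device of Embodiment 14: a block depth
# generation unit feeding a disparity vector generation unit.

class BlockDepthGenerationUnit:
    """Input: an E x F basic depth block; output: its depth value D
    (here simply the centre pixel, one of the options in the method)."""
    def depth_value(self, depth_block):
        return depth_block[len(depth_block) // 2][len(depth_block[0]) // 2]

class DisparityVectorGenerationUnit:
    """Input: depth value and position of the basic depth block; output: the DV
    of the primary image block located at the corresponding position Pos2."""
    def __init__(self, c1, c2, scale=1.0):
        self.c1, self.c2, self.scale = c1, c2, scale   # scale: -1, 1/2, 2, ... or a
                                                       # resolution-ratio zoom factor
    def disparity_vector(self, depth_value, pos1):
        dv1 = (self.c1 * depth_value + self.c2, 0.0)
        pos2 = (pos1[0] + dv1[0], pos1[1] + dv1[1])
        dv2 = (self.scale * dv1[0], self.scale * dv1[1])
        return pos2, dv2

depth_unit = BlockDepthGenerationUnit()
dv_unit = DisparityVectorGenerationUnit(c1=0.18, c2=5.0, scale=-1.0)
d = depth_unit.depth_value([[120] * 4 for _ in range(4)])
print(dv_unit.disparity_vector(d, pos1=(32.0, 8.0)))
```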
Preferred embodiment 15
This preferred embodiment relates to a disparity vector generating device. Fig. 6 is a schematic structural diagram of a disparity vector generating device according to another preferred embodiment of the present invention. As shown in Fig. 6, the device includes three units: a block depth generation unit, a disparity vector generation unit and a target block disparity vector calculation unit. The block depth generation unit is used to produce the depth value of a basic depth block; the disparity vector generation unit is used to produce, from the depth value of the basic depth block, the disparity vector of a primary image block in another viewpoint; the target block disparity vector calculation unit calculates the disparity vector of a target region from the disparity vectors of the one or more primary image blocks covered by that target region. The three functional units are described below.
Block depth generation unit: its input includes a basic depth block contained in the depth image of viewpoint 1, and its output includes the depth value of the basic depth block, where the basic depth block contains E × F depth pixels of the depth image of viewpoint 1 (E × F > 1). The function performed by the block depth generation unit, and its embodiments, are identical to obtaining the depth value D of the basic depth block from the depth values of X (1 ≤ X ≤ E × F) depth pixels in the basic depth block, as in the above disparity vector generation method and its embodiments.
Disparity vector generation unit: its input includes the depth value and position of the basic depth block, and its output is the disparity vector of a primary image block in the image of viewpoint 2, where the primary image block contains M × N image pixels of the image of viewpoint 2 (M × N > 1). The function performed by the disparity vector generation unit, and its embodiments, are identical to the following processing in the above disparity vector generation method and its embodiments:
(1) converting the depth value D into the disparity vector DV1 between the image of viewpoint 1 and the image of viewpoint 2, and obtaining the corresponding position Pos2 of the basic depth block in the image of viewpoint 2;
(2) setting the disparity vector DV2 of the primary image block to DV1, or to the product of DV1 and a real number, where the primary image block is located at the corresponding position Pos2, and the real number is, for example, a constant such as −1, 1/2, −1/2, 2 or −2, or a zoom factor whose absolute value is the ratio of the resolution of the depth image of viewpoint 1 to the resolution of the image of viewpoint 2, or the reciprocal of that ratio.
Target block disparity vector calculation unit: its input includes the size and position of a target block contained in the image of viewpoint 2, and the disparity vectors of one or more primary image blocks, where the target block contains J × K image pixels (J × K > 1); its output includes a disparity vector. The function performed by the target block disparity vector calculation unit, and its embodiments, are identical to, in the above disparity vector generation method, obtaining the disparity vectors of Q1 (1 ≤ Q1 ≤ Q) primary image blocks among the Q (Q ≥ 2) primary image blocks contained in the target block, and setting the disparity vector of the target image block to one of these Q1 disparity vectors or to a weighted average of them.
The image of viewpoint 2 may be the texture image of viewpoint 2, in which case the image of viewpoint 1 refers to the texture image of viewpoint 1; the image of viewpoint 2 may also be the depth image of viewpoint 2, in which case the image of viewpoint 1 refers to the depth image of viewpoint 1.
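The additional unit of this embodiment can be sketched as follows; the weighted-average combination shown is one of the options described above, and all names are illustrative.

```python
# Illustrative sketch of the third unit in Embodiment 15: the target block
# disparity vector calculation unit, which turns the DVs of Q1 primary image
# blocks inside the target block into a single DV (here a weighted average).

class TargetBlockDisparityUnit:
    """Input: the DVs of the primary image blocks covered by a J x K target
    block; output: one disparity vector for the target block."""
    def __init__(self, weights=None):
        self.weights = weights                 # None -> plain average

    def target_dv(self, primary_dvs):
        w = self.weights or [1.0 / len(primary_dvs)] * len(primary_dvs)
        return (sum(wi * dv[0] for wi, dv in zip(w, primary_dvs)),
                sum(wi * dv[1] for wi, dv in zip(w, primary_dvs)))

unit = TargetBlockDisparityUnit(weights=[0.125, 0.125, 0.125, 0.125, 0.5])
print(unit.target_dv([(12, 0), (10, 0), (15, 0), (11, 0), (13, 0)]))
```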
The disparity vector generating device can be implemented in various ways, for example:
Method one: implemented with an electronic computer as the hardware, together with a software program whose functions are identical to those of the disparity vector generation method.
Method two: implemented with a single-chip microcomputer as the hardware, together with a software program whose functions are identical to those of the disparity vector generation method.
Method three: implemented with a digital signal processor as the hardware, together with a software program whose functions are identical to those of the disparity vector generation method.
Method four: implemented by designing a circuit whose functions are identical to those of the disparity vector generation method.
Of course, in practical applications, the disparity vector generating device can also be implemented in many other ways, and is not limited to the above four.
The disparity vector generation method and disparity vector generating device provided by the above embodiments do not depend on the reconstructed depth image of the currently coded view, and can produce the disparity vectors between the image of the already coded view (viewpoint 1) and the image of the currently coded view (viewpoint 2) when the texture image of the currently coded view is coded before its depth image, thereby supporting coding tools that depend on the reconstructed depth image of the currently coded view (such as VSP and DMVP). This solves the problems in the related art that the disparity vector generation method has high complexity, occupies a large amount of data storage space and requires more depth-to-disparity conversion processing, thereby achieving the effects of reduced computational complexity, a small number of projection (warping) operations and a small storage space.
As can be seen from the above description, the present invention achieves the following technical effects. (1) Compared with the prior art, the disparity vector derivation method of the present invention does not depend on the reconstructed depth image of the currently coded view, and can produce the disparity vectors between the image of the already coded view (viewpoint 1) and the image of the currently coded view (viewpoint 2) when the texture image of the currently coded view is coded before its depth image, thereby supporting coding tools that depend on the reconstructed depth image of the currently coded view, such as VSP and DMVP. In addition, the disparity vector generation method provided by the present invention can be used to derive the disparity vectors of a depth image region while that region is being coded (because, while the depth image is being coded, its reconstruction is necessarily unavailable). (2) Compared with the method proposed by S. Shimizu et al., the present invention obtains the disparity vector of each primary image block of the currently coded view from the basic depth blocks of the already coded view, and the disparity vector of each target image block is derived from the disparity vectors of primary image blocks, so that a depth-to-disparity conversion is no longer needed and the computational complexity is lower. Furthermore, a primary image block contains multiple depth pixels but corresponds to only one disparity vector, so the number of disparity vectors is much smaller than the number of pixels of the depth image; compared with synthesizing a depth image, producing disparity vectors therefore requires fewer projection operations and less storage space.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from that given herein, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The foregoing are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the scope of its protection.
Claims (8)
1. A disparity vector generation method, characterized by comprising:
obtaining a first depth value of a basic depth block according to depth pixel values of the basic depth block in a first viewpoint depth image; and
generating a first disparity vector of a primary image block in a second viewpoint image according to the first depth value, wherein the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image,
wherein generating the first disparity vector of the primary image block in the second viewpoint image according to the first depth value comprises:
converting the first depth value into a second disparity vector between a first viewpoint image and the second viewpoint image, and obtaining a corresponding position of the basic depth block in the second viewpoint image, wherein, when the second viewpoint image is the second viewpoint texture image, the first viewpoint image is a first viewpoint texture image, and when the second viewpoint image is the second viewpoint depth image, the first viewpoint image is the first viewpoint depth image; and
taking the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, wherein the primary image block is located at the corresponding position, and the predetermined real number comprises one of the following: a constant, or a zoom factor whose absolute value is the ratio of a first resolution of the first viewpoint depth image to a second resolution of the second viewpoint image, or the inverse of that ratio.
2. The method according to claim 1, characterized in that obtaining the first depth value of the basic depth block according to the depth pixel values of the basic depth block in the first viewpoint depth image comprises one of the following:
using the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value;
using a depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, wherein the selected depth value is the maximum, minimum or median of the depth values of the depth pixels at the multiple predetermined positions; or
using a weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
3. The method according to any one of claims 1 to 2, characterized in that, after generating the first disparity vector of the primary image block in the second viewpoint image according to the first depth value, the method further comprises:
generating a third disparity vector of a target image block according to the first disparity vector, wherein the target image block comprises multiple primary image blocks.
4. The method according to claim 3, characterized in that generating the third disparity vector of the target image block according to the first disparity vector comprises:
determining the first disparity vectors of the primary image blocks at one or more predetermined positions in the target image block; and
taking the value of one first disparity vector chosen from all of the determined first disparity vectors as the third disparity vector, or taking a weighted average of all of the determined first disparity vectors as the third disparity vector.
5. A disparity vector generating device, characterized by comprising:
an acquisition module, configured to obtain a first depth value of a basic depth block according to depth pixel values of the basic depth block in a first viewpoint depth image; and
a first generation module, configured to generate a first disparity vector of a primary image block in a second viewpoint image according to the first depth value, wherein the second viewpoint image is a second viewpoint texture image or a second viewpoint depth image,
wherein the first generation module comprises:
a converting unit, configured to convert the first depth value into a second disparity vector between a first viewpoint image and the second viewpoint image, and to obtain a corresponding position of the basic depth block in the second viewpoint image, wherein, when the second viewpoint image is the second viewpoint texture image, the first viewpoint image is a first viewpoint texture image, and when the second viewpoint image is the second viewpoint depth image, the first viewpoint image is the first viewpoint depth image; and
a third setting unit, configured to take the second disparity vector, or the product of the second disparity vector and a predetermined real number, as the first disparity vector, wherein the primary image block is located at the corresponding position, and the predetermined real number comprises one of the following: a constant, or a zoom factor whose absolute value is the ratio of a first resolution of the first viewpoint depth image to a second resolution of the second viewpoint image, or the inverse of that ratio.
6. The device according to claim 5, characterized in that the acquisition module comprises one of the following:
a first setting unit, configured to use the depth value of the depth pixel at one predetermined position in the basic depth block as the first depth value;
a choosing unit, configured to use a depth value selected from the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value, wherein the selected depth value is the maximum, minimum or median of the depth values of the depth pixels at the multiple predetermined positions; or
a second setting unit, configured to use a weighted average of the depth values of the depth pixels at multiple predetermined positions in the basic depth block as the first depth value.
7. The device according to any one of claims 5 to 6, characterized in that the device further comprises:
a second generation module, configured to generate a third disparity vector of a target image block according to the first disparity vector, wherein the target image block comprises multiple primary image blocks.
8. The device according to claim 7, characterized in that the second generation module comprises:
a determining unit, configured to determine the first disparity vectors of the primary image blocks at one or more predetermined positions in the target image block; and
a fourth setting unit, configured to take the value of one first disparity vector chosen from all of the determined first disparity vectors as the third disparity vector, or to take a weighted average of all of the determined first disparity vectors as the third disparity vector.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310007164.2A CN103916652B (en) | 2013-01-09 | 2013-01-09 | Difference vector generation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103916652A CN103916652A (en) | 2014-07-09 |
CN103916652B true CN103916652B (en) | 2018-01-09 |