CN105139355A - Method for enhancing depth images
- Publication number: CN105139355A (application CN201510508960.3A)
- Authority: CN (China)
Abstract
The invention provides a method for enhancing depth images and relates to the field of image processing. The method comprises: step 1, capturing a depth image of an object with a depth camera, and capturing a color image of the same object with a color camera synchronized with the depth camera; step 2, constructing an energy function from the depth image and the color image; step 3, minimizing the energy function by the least squares method to compute the optimal enhanced depth value of each pixel in the depth image; and step 4, outputting the optimal enhanced depth values of the pixels in the depth image. The method computes more accurate depth values for image pixels.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a method for enhancing depth images.
Background art
A depth map records the distance from every scene point to the camera, and obtaining a scene depth map is one of the essential tasks of a computer vision system. Existing depth measurement devices (such as the Microsoft Kinect) are limited in precision, so the captured depth maps usually contain noise and damaged regions. Traditional depth acquisition methods based on stereo geometry have high time complexity and depend on scene texture, which restricts their range of application.
Depth acquisition methods based on hardware sensors do not rely on scene texture and are fast and flexible. The low-priced depth sensors that have come on the market in the last two years have made such methods increasingly practical, and derivative applications keep emerging. However, limited by hardware precision and built-in algorithms, low-priced depth sensors can only produce low-resolution depth maps that contain noise and damaged regions.
To enhance low-quality depth maps, existing methods use a synchronized color image captured from a similar viewpoint as a reference: a view-mapping algorithm aligns the depth map with the color image, and filtering or a global optimization strategy is then used to raise the resolution of the depth map.
Yang et al. proposed in 2007 to first build a cost volume on the low-resolution depth map and then increase its resolution iteratively through cost minimization and a sub-pixel refinement algorithm. Park et al. proposed in 2011 to use the color and edge information extracted from the color image together with a linearly interpolated depth map as references, and to solve for a high-resolution depth map by optimization based on Markov random field theory.
Both methods have obvious defects:
First, they assume that the color image can be perfectly aligned with the depth map, which is inconsistent with actual observations. Because of lens distortion, view mapping, and the noise and measurement errors of the depth sensor itself, the aligned color image and depth map differ in their observations at object edges. Using such a color image as a cue can mislead the depth enhancement algorithm.
Second, the above algorithms assume that the original depth map is completely measured and contains no large damaged regions, which also disagrees with actual observations. Depending on the sensor's measurement principle and the object material, the depth map exhibits corresponding damage. For example, a depth sensor based on infrared measurement cannot reliably measure regions of low infrared reflectivity, so these regions come out damaged in the depth map. In addition, view-mapping algorithms leave cracks along object edges in the depth map.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method for enhancing depth images that computes more accurate depth values for image pixels.
The method comprises:
Step 1: capturing a depth image of an object with a depth camera, and capturing a color image of the object with a color camera synchronized with the depth camera;
Step 2: constructing an energy function from the depth image and the color image;
Step 3: minimizing the energy function by the least squares method to compute the optimal enhanced depth value of each pixel in the depth image;
Step 4: outputting the optimal enhanced depth values of the pixels in the depth image.
The energy function is:
E(D') = E_d(D') + λ_h E_h(D') + λ_s E_s(D')
where E(D') is the energy function; D' denotes the enhanced depth values of the pixels in the depth image; E_d(D') is the data constraint term; E_h(D') is the damaged-region constraint term of the depth image; E_s(D') is the smoothness constraint term; λ_h is the weight of the damaged-region constraint term; and λ_s is the weight of the smoothness constraint term.
The data constraint term is defined by a formula in which x is the index of a pixel in the depth image, D(x) is the existing depth value of pixel x in the depth image, D'(x) is the enhanced depth value of pixel x, and C(x) is the confidence of the existing depth value of pixel x.
The damaged-region constraint term is defined by a formula in which Ω_h is the set of pixels in the damaged region of the depth image, and the initial depth value of pixel x in the damaged region of the depth image is used as the reference.
The smoothness constraint term is defined by a formula in which:
N_1(x) is the first-order neighborhood of pixel x;
x_i is a pixel in the first-order neighborhood of pixel x, and i is the pixel index;
D'(x_i) is the enhanced depth value of pixel x_i in the depth image;
w(x_i, x) is a weight coefficient, equal to the product of the color weight w_I(x_i, x), the edge weight w_E(x_i, x) and the segmentation weight w_S(x_i, x);
Ω_e is the set of pixels in the edge regions of the color image;
S(x) is the label of the segment containing pixel x of the color image, and S(x_i) is the label of the segment containing pixel x_i;
I(x) is the color value of pixel x in the color image, and I(x_i) is the color value of pixel x_i;
σ_I is the first function parameter.
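The three constraint terms are given in the original publication only as formula images, which are not reproduced in this text. As a reading aid, a minimal reconstruction consistent with the surrounding symbol definitions is sketched below, assuming standard quadratic least-squares forms; the symbol D̃(x) for the inpainted initial depth in the damaged region is introduced here and is not the patent's own notation.

```latex
E_d(D') = \sum_{x} C(x)\,\bigl(D'(x) - D(x)\bigr)^{2}
\qquad
E_h(D') = \sum_{x \in \Omega_h} \bigl(D'(x) - \tilde{D}(x)\bigr)^{2}
\qquad
E_s(D') = \sum_{x} \sum_{x_i \in N_1(x)} w(x_i, x)\,\bigl(D'(x) - D'(x_i)\bigr)^{2}
```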
Before step 3, the method further comprises:
performing Canny edge detection on the color image to obtain the set Ω_e of pixels in the edge regions of the color image.
Before step 3, the method further comprises:
segmenting the color image into superpixels with mean-shift (MeanShift) image segmentation to obtain the label S(x) of the segment containing each pixel x of the color image.
Before step 3, the method further comprises:
filling the damaged region Ω_h of the depth image with a depth-map inpainting method to solve for the initial depth value of each pixel x in the damaged region of the depth image.
Before step 3, the method further comprises:
computing, from the depth image and the color image, the confidence C(x) of the existing depth value of each pixel x in the depth image.
The step of computing, from the depth image and the color image, the confidence C(x) of the existing depth value of pixel x in the depth image uses the following definitions:
Ω_h is the set of pixels in the damaged region of the depth image; Ω_d is the set of pixels that do not belong to the damaged region and whose distance to the damaged region does not exceed d, where d is the first distance threshold;
C_f(x) is defined by a formula in which N_r is the set of pixels whose distance to pixel x is at most r, with r the second distance threshold; I(x) and I(x_i) are the color values of pixels x and x_i in the color image; D(x) and D(x_i) are the existing depth values of pixels x and x_i in the depth image; η is the sum of the weights w_f(x_i, x); and σ_I and σ_D are the second and third function parameters;
C_h(x) is defined by a formula in which η' is the sum of the weights w_h(x_i, x), and σ'_I and σ'_D are the fourth and fifth function parameters.
Before step 3, the method further comprises:
spatially registering the pixels of the depth image with the pixels of the color image;
obtaining the correspondence between the depth value of pixel x in the depth image and the color value of the same pixel x in the color image.
The beneficial effects of the above technical solution of the present invention are as follows:
In the present invention, the depth map is optimized using Markov random field theory, yielding a depth map that is more complete, more reliable, and more consistent with the color-image observations. The enhanced depth map can be used as input data for higher-level computer vision and augmented reality tasks.
Brief description of the drawings
Fig. 1 is a flow diagram of the method for enhancing depth images according to an embodiment of the invention;
Fig. 2 is a flow diagram of the method for enhancing depth images according to an embodiment of the invention;
Fig. 3 is a schematic diagram of the principle of the depth image enhancement method of the invention.
Detailed description of the embodiments
To make the technical problem to be solved, the technical solution, and the advantages of the present invention clearer, they are described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, the method for enhancing a depth image comprises:
Step 11: capturing a depth image of an object with a depth camera, and capturing a color image of the object with a color camera synchronized with the depth camera;
Step 12: constructing an energy function from the depth image and the color image;
Step 13: minimizing the energy function by the least squares method to compute the optimal enhanced depth value of each pixel in the depth image;
Step 14: outputting the optimal enhanced depth values of the pixels in the depth image.
The energy function is:
E(D') = E_d(D') + λ_h E_h(D') + λ_s E_s(D')
where E(D') is the energy function; D' denotes the enhanced depth values of the pixels in the depth image; E_d(D') is the data constraint term, which constrains the difference between D' and D; E_h(D') is the damaged-region constraint term of the depth image; E_s(D') is the smoothness constraint term; λ_h is the weight of the damaged-region constraint term; and λ_s is the weight of the smoothness constraint term.
The data constraint term is defined by a formula in which x is the index of a pixel in the depth image, D(x) is the existing depth value of pixel x in the depth image, D'(x) is the enhanced depth value of pixel x, and C(x) is the confidence of the existing depth value of pixel x.
The damaged-region constraint term is defined by a formula in which Ω_h is the set of pixels in the damaged region of the depth image, and the initial depth value of pixel x in the damaged region of the depth image is used as the reference.
The smoothness constraint term is defined by a formula in which:
N_1(x) is the first-order neighborhood of pixel x;
x_i is a pixel in the first-order neighborhood of pixel x, and i is the pixel index;
D'(x_i) is the enhanced depth value of pixel x_i in the depth image;
w(x_i, x) is a weight coefficient, equal to the product of the color weight w_I(x_i, x), the edge weight w_E(x_i, x) and the segmentation weight w_S(x_i, x);
Ω_e is the set of pixels in the edge regions of the color image;
S(x) is the label of the segment containing pixel x of the color image, and S(x_i) is the label of the segment containing pixel x_i;
I(x) is the color value of pixel x in the color image, and I(x_i) is the color value of pixel x_i;
σ_I is the first function parameter, chosen empirically; it can be set to 30.
Before step 13, the method further comprises:
Step 131: performing Canny edge detection on the color image to obtain the set Ω_e of pixels in the edge regions of the color image.
Before step 13, the method further comprises:
Step 132: segmenting the color image into superpixels with mean-shift (MeanShift) image segmentation to obtain the label S(x) of the segment containing each pixel x of the color image.
Before step 13, the method further comprises:
Step 133: filling the damaged region Ω_h of the depth image with a depth-map inpainting method to solve for the initial depth value of each pixel x in the damaged region of the depth image.
Before step 13, the method further comprises:
Step 134: computing, from the depth image and the color image, the confidence C(x) of the existing depth value of each pixel x in the depth image.
Step 134 is computed according to formulas in which:
Ω_h is the set of pixels in the damaged region of the depth image; Ω_d is the set of pixels that do not belong to the damaged region and whose distance to the damaged region does not exceed d, where d is the first distance threshold;
C_f(x) is defined by a formula in which N_r is the set of pixels whose distance to pixel x is at most r, with r the second distance threshold; I(x) and I(x_i) are the color values of pixels x and x_i in the color image; D(x) and D(x_i) are the existing depth values of pixels x and x_i in the depth image; η is the sum of the weights w_f(x_i, x); and σ_I and σ_D are the second and third function parameters, chosen empirically (for example 30 and 10 respectively);
C_h(x) is defined by a formula in which η' is the sum of the weights w_h(x_i, x), and σ'_I and σ'_D are the fourth and fifth function parameters, chosen empirically (for example 30 and 10 respectively).
Before step 13, the method further comprises:
Step 135: spatially registering the pixels of the depth image with the pixels of the color image, and obtaining the correspondence between the depth value of pixel x in the depth image and the color value of the same pixel x in the color image.
The above embodiment discloses a method for enhancing a depth map using the cues contained in a color image captured from a similar viewpoint. The invention takes a color image and a depth map with similar viewpoints, exploits the local correlation between color and depth to estimate the confidence of the depth map, and uses this confidence together with the color, edges and segmentation of the color image as guiding cues. The depth map is then optimized using Markov random field theory, finally yielding a depth map that is more complete, more reliable, and more consistent with the color-image observations. The enhanced depth map can be used as input data for higher-level computer vision and augmented reality tasks.
As shown in Fig. 2, the method for enhancing a depth image comprises:
Step 21: capturing a depth image of an object with a depth camera, and capturing a color image of the object with a color camera synchronized with the depth camera;
Step 22: spatially registering the pixels of the depth image with the pixels of the color image, and obtaining the correspondence between the depth value of pixel x in the depth image and the color value of the same pixel x in the color image;
Step 23: performing Canny edge detection on the color image to obtain the set Ω_e of pixels in the edge regions of the color image;
Step 24: segmenting the color image into superpixels with mean-shift (MeanShift) image segmentation to obtain the label S(x) of the segment containing each pixel x of the color image;
Step 25: computing, from the depth image and the color image, the confidence C(x) of the existing depth value of each pixel x in the depth image;
Step 26: filling the damaged region Ω_h of the depth image with a depth-map inpainting method to solve for the initial depth value of each pixel x in the damaged region;
Step 27: constructing the energy function from the depth image and the color image;
Step 28: minimizing the energy function by the least squares method to compute the optimal enhanced depth value of each pixel in the depth image;
Step 29: outputting the optimal enhanced depth values of the pixels in the depth image.
The energy function is:
E(D') = E_d(D') + λ_h E_h(D') + λ_s E_s(D')
where E(D') is the energy function; D' denotes the enhanced depth values of the pixels in the depth image; E_d(D') is the data constraint term, which constrains the difference between D' and D; E_h(D') is the damaged-region constraint term of the depth image; E_s(D') is the smoothness constraint term; λ_h is the weight of the damaged-region constraint term; and λ_s is the weight of the smoothness constraint term.
The data constraint term is defined by a formula in which x is the index of a pixel in the depth image, D(x) is the existing depth value of pixel x in the depth image, D'(x) is the enhanced depth value of pixel x, and C(x) is the confidence of the existing depth value of pixel x.
The damaged-region constraint term is defined by a formula in which Ω_h is the set of pixels in the damaged region of the depth image, and the initial depth value of pixel x in the damaged region of the depth image is used as the reference.
The smoothness constraint term is defined by a formula in which:
N_1(x) is the first-order neighborhood of pixel x;
x_i is a pixel in the first-order neighborhood of pixel x, and i is the pixel index;
D'(x_i) is the enhanced depth value of pixel x_i in the depth image;
w(x_i, x) is a weight coefficient, equal to the product of the color weight w_I(x_i, x), the edge weight w_E(x_i, x) and the segmentation weight w_S(x_i, x);
Ω_e is the set of pixels in the edge regions of the color image;
S(x) is the label of the segment containing pixel x of the color image, and S(x_i) is the label of the segment containing pixel x_i;
I(x) is the color value of pixel x in the color image, and I(x_i) is the color value of pixel x_i;
σ_I is the first function parameter, chosen empirically; it can be set to 30.
The idea of the present invention is described below. As shown in Fig. 3, the method for enhancing a depth map of a similar viewpoint using the cues contained in a color image, based on color-depth consistency, comprises the following steps:
Step 1: capture a depth map with a depth camera to obtain depth data, and capture a synchronized color image with a color camera registered to the depth camera; this yields impaired depth data together with the corresponding color image.
Step 2: according to the positional relationship between the color camera and the depth camera, pre-process the depth image by mapping the depth data onto the corresponding positions of the color image.
Step 3: perform boundary detection and segmentation on the color image to extract its edge information and region features.
Step 4: after extracting the color-image features, perform color-depth consistency detection on the matched color image and depth data according to these features, and thereby compute the confidence of every depth value in the original depth data.
Step 5: fill the holes in the original depth map with a simple depth-map inpainting method to obtain initial values.
Step 6: from the obtained image features, confidences and related information, build the energy function to be minimized based on Markov random field theory.
Step 7: solve the energy equation to obtain the repaired depth data.
In the above embodiment, the depth map obtained from a low-end depth sensor is enhanced with the color image as a reference. The goal of the enhancement is not only to repair the damaged regions of the depth map, but also to correct the color-depth inconsistencies caused by measurement errors and view transformation, so as to obtain a depth map that is aligned with the color image and consistent with its edge information. To this end, Markov random field theory is adopted, the following energy function E(D') is defined, and the enhancement of the depth map is achieved by minimizing it:
E(D') = E_d(D') + λ_h E_h(D') + λ_s E_s(D')    (1)
where E_d(D') is the data constraint term: it uses every pixel value of the low-quality depth map obtained by the sensor as a constraint, estimates the confidence of each depth pixel by checking color-depth consistency, and sets the strength of the data constraint according to that confidence. E_h(D') is the damaged-region constraint term, used to repair large damaged regions of the depth map: the depth map is first repaired coarsely with a simple digital image inpainting method, and this repair result is used as the constraint. E_s(D') is the smoothness constraint term: it uses the color, edges and segmentation of the color image as constraints, so that the depth is smooth where the color is smooth and the edges of the depth image stay aligned with those of the color image.
The depth map repaired by the present invention is consistent with, and accurately aligned to, the edges of the color image. Comparisons on several public data sets show that the average error of the depth map repaired by the present invention is only 0.32%, far below the average error of 7.52% obtained with a plain digital image inpainting algorithm.
An embodiment of the invention is described below. The method comprises the following steps:
Step 1: capture a depth/color image pair with an ordinary RGBD (red, green, blue, depth) camera (such as a Microsoft Kinect or an Asus Xtion). The color image usually has a higher resolution, while the depth image has low resolution, severe noise and many errors.
Step 2: obtain the intrinsic and extrinsic camera parameters with a camera calibration technique, compute from them the correspondence from the depth map to the color image, and register the depth and color images. The SDK (software development kit) of the Kinect itself provides this function and can be used directly. Note that, because of depth errors, camera calibration errors and similar causes, the correspondence computed in this way usually contains a certain amount of error.
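As an illustration of the registration in step 2 (when the Kinect SDK mapping function mentioned above is not used), the sketch below shows the standard pinhole-camera mapping from depth pixels to color-image coordinates; the function name and the assumption that the depth is metric (millimeters) are illustrative, not taken from the patent.

```python
import numpy as np

def register_depth_to_color(depth, K_d, K_c, R, t):
    """Map every valid depth pixel into the color image.

    depth : (H, W) depth in millimeters (0 = missing)
    K_d, K_c : 3x3 intrinsic matrices of the depth / color camera
    R, t : rotation (3x3) and translation (3,) from the depth to the color camera
    Returns an (H, W, 2) array of color-image coordinates (NaN where depth is missing).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.astype(np.float64)
    valid = z > 0

    # Back-project depth pixels to 3-D points in the depth-camera frame.
    x = (u - K_d[0, 2]) * z / K_d[0, 0]
    y = (v - K_d[1, 2]) * z / K_d[1, 1]
    pts = np.stack([x, y, z], axis=-1)            # (H, W, 3)

    # Transform into the color-camera frame and project with K_c.
    pts_c = pts @ R.T + t
    uc = K_c[0, 0] * pts_c[..., 0] / pts_c[..., 2] + K_c[0, 2]
    vc = K_c[1, 1] * pts_c[..., 1] / pts_c[..., 2] + K_c[1, 2]

    coords = np.stack([uc, vc], axis=-1)
    coords[~valid] = np.nan
    return coords
```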
Step 3: compute the confidence C(x) of each depth pixel according to color-depth consistency:
where Ω_h is the set of pixels in the damaged region of the depth map, and Ω_d is the set of pixels that do not belong to the damaged region but whose distance to it does not exceed d (d can be set to 7). C_f(x) is defined by a formula in which N_r is the set of pixels whose distance to pixel x does not exceed r (r can be set to 7), I(x) and D(x) are the color value and depth value of pixel x, η is the sum of the weights w_f(x_i, x) in formula (4), and σ_I and σ_D can be set to 15 gray levels and 30 millimeters respectively.
C_h(x) in formula (3) is defined by a formula whose symbols have the same meaning as in formulas (5) and (6); η' is the sum of the weights w_h(x_i, x) in formula (7), and σ'_I and σ'_D can be set to 15 gray levels and 15 millimeters respectively, chosen empirically.
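Formulas (3)-(7) appear only as images in the source text. The sketch below illustrates one plausible reading of the confidence computation, assuming Gaussian (bilateral-style) weights on the local color and depth differences, and assuming that C(x) is zero inside Ω_h, equals C_f(x) away from the damaged region, and is attenuated by C_h(x) near it; both the kernel shapes and the way C_f and C_h are combined are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def depth_confidence(color_gray, depth, holes, r=7, d=7,
                     sigma_i=15.0, sigma_d=30.0, sigma_i2=15.0, sigma_d2=15.0):
    """Confidence C(x) of the measured depth, from local color-depth consistency.

    color_gray : (H, W) gray-level image registered to the depth map
    depth      : (H, W) depth in millimeters
    holes      : (H, W) bool mask of the damaged region Omega_h
    """
    color_gray = color_gray.astype(np.float64)
    depth = depth.astype(np.float64)
    H, W = depth.shape
    conf = np.zeros((H, W))
    dist_to_hole = distance_transform_edt(~holes)   # distance of each pixel to Omega_h

    def local_score(y, x, sig_i, sig_d):
        # Compare pixel x with its neighbors in an r-radius window:
        # color-similar neighbors should also be depth-similar.
        y0, y1 = max(0, y - r), min(H, y + r + 1)
        x0, x1 = max(0, x - r), min(W, x + r + 1)
        di = color_gray[y0:y1, x0:x1] - color_gray[y, x]
        dd = depth[y0:y1, x0:x1] - depth[y, x]
        w = np.exp(-di ** 2 / (2.0 * sig_i ** 2))             # color weight (w_f / w_h)
        eta = w.sum() + 1e-9                                   # normalization (eta)
        return (w * np.exp(-dd ** 2 / (2.0 * sig_d ** 2))).sum() / eta

    for y in range(H):
        for x in range(W):
            if holes[y, x]:
                continue                                       # damaged pixels keep confidence 0
            c = local_score(y, x, sigma_i, sigma_d)            # C_f(x)
            if dist_to_hole[y, x] <= d:                        # within d of Omega_h (Omega_d)
                c *= local_score(y, x, sigma_i2, sigma_d2)     # attenuate by C_h(x) (assumed)
            conf[y, x] = c
    return conf
```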
Step 4: for the damaged region Ω_h of the depth map (i.e., the region where the depth camera failed to obtain a correct depth), fill in initial depth values with a simple depth-map inpainting technique.
The simple depth-map inpainting method takes the pixels on the boundary of the damaged region as seed points and repairs the region by iteratively diffusing from the seeds towards the center of the damaged region, in decreasing order of seed priority. The priority of a seed point is defined by a formula in which T(x_s) is the distance from the seed point to the boundary of the damaged region, D(x_s) is the depth value of the seed point, D_max is the maximum depth value of the scene, the Laplacian of the color image is also used, and λ_1 and λ_2 can be set to -10 and 7 respectively.
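The priority formula itself is not reproduced in the text; only its ingredients (the distance T(x_s) to the hole boundary, the seed depth relative to D_max, the Laplacian of the color image, and the weights λ_1 = -10, λ_2 = 7) are given. The sketch below illustrates the priority-ordered diffusion fill with a hypothetical linear combination of those ingredients; the actual combination used in the patent may differ.

```python
import heapq
import numpy as np
from scipy.ndimage import laplace

def fill_depth_holes(depth, color_gray, holes, lam1=-10.0, lam2=7.0):
    """Fill the damaged region by priority-ordered diffusion from its boundary.

    depth      : (H, W) depth map (values inside `holes` are ignored)
    color_gray : (H, W) gray-level image registered to the depth map
    holes      : (H, W) bool mask of the damaged region Omega_h
    """
    depth = depth.astype(np.float64).copy()
    known = ~holes
    d_max = depth[known].max()
    lap = np.abs(laplace(color_gray.astype(np.float64)))   # |Laplacian| of the color image

    H, W = depth.shape
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    def priority(y, x, dist_to_boundary):
        # Hypothetical combination of the ingredients named in the text.
        return dist_to_boundary + lam1 * depth[y, x] / d_max + lam2 * lap[y, x]

    # Seed points: known pixels adjacent to the damaged region (distance 0 to its boundary).
    heap = []
    for y in range(H):
        for x in range(W):
            if known[y, x] and any(0 <= y + dy < H and 0 <= x + dx < W and not known[y + dy, x + dx]
                                   for dy, dx in nbrs):
                heapq.heappush(heap, (-priority(y, x, 0.0), 0.0, y, x))

    # Diffuse inward, always expanding from the highest-priority seed first.
    while heap:
        _, dist, y, x = heapq.heappop(heap)
        for dy, dx in nbrs:
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and not known[ny, nx]:
                depth[ny, nx] = depth[y, x]
                known[ny, nx] = True
                heapq.heappush(heap, (-priority(ny, nx, dist + 1.0), dist + 1.0, ny, nx))
    return depth
```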
Step 5: apply Canny edge detection to the color image to compute the set Ω_e of all image edges; segment the image into superpixels with the mean-shift (MeanShift) image segmentation method, and denote by S(x) the label of the segment containing pixel x.
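Step 5 can be illustrated with OpenCV. Canny edge detection gives Ω_e directly; since OpenCV ships mean-shift filtering rather than a full mean-shift segmentation, the sketch below approximates the segmentation by mean-shift filtering followed by connected-component labeling of the quantized colors. This is an approximation of the MeanShift segmentation named in the text, not necessarily the implementation the patent used.

```python
import cv2
import numpy as np

def edges_and_segments(color_bgr, canny_lo=50, canny_hi=150,
                       spatial_radius=10, color_radius=20):
    """Return the edge mask (Omega_e) and a per-pixel segment label S(x)."""
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi) > 0           # Omega_e as a boolean mask

    # Mean-shift filtering flattens colors inside homogeneous regions ...
    shifted = cv2.pyrMeanShiftFiltering(color_bgr, spatial_radius, color_radius)

    # ... then connected components over the quantized colors give superpixel-like labels.
    quant = (shifted // 16).astype(np.int32)
    flat = quant[..., 0] * 65536 + quant[..., 1] * 256 + quant[..., 2]
    labels = np.zeros(flat.shape, np.int32)
    next_label = 1
    for v in np.unique(flat):
        n, comp = cv2.connectedComponents((flat == v).astype(np.uint8))
        labels[comp > 0] = comp[comp > 0] + next_label
        next_label += n
    return edges, labels
```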
Step 6: construct the Markov energy function. This energy function consists of three parts: the data constraint term E_d(D'), the damaged-region constraint term E_h(D'), and the smoothness constraint term E_s(D').
The data constraint term of the energy function is defined by a formula in which D(x) is the existing depth value of pixel x in the depth map, D'(x) is the enhanced depth value, and C(x) is the confidence of the existing depth value of pixel x.
The damaged-region constraint term of the energy function is defined by a formula in which Ω_h is the set of pixels in the damaged region of the depth map, and the depth value solved by the simple depth-map inpainting method is used as the reference.
The smoothness constraint term of the energy function is defined by a formula in which N_1(x) is the first-order neighborhood of pixel x and the weight coefficient w(x_i, x) is the product of the color weight w_I(x_i, x), the edge weight w_E(x_i, x) and the segmentation weight w_S(x_i, x); these weights are in turn defined by formulas in which Ω_e is the edge set of the color image, S(x) is the segment label of pixel x in the color image, and σ_I can be set to 10 gray levels.
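The three weight formulas are again described only in words (a color weight with parameter σ_I, an edge weight depending on Ω_e, and a segmentation weight comparing S(x) with S(x_i)). A sketch under common assumptions — a Gaussian color weight, and edge/segmentation weights that shrink the smoothing strength across a Canny edge or a segment boundary — is given below; the penalty constants are illustrative and do not come from the patent.

```python
import numpy as np

def smoothness_weight(Ix, Ixi, x_on_edge, xi_on_edge, Sx, Sxi,
                      sigma_i=10.0, edge_penalty=0.1, seg_penalty=0.1):
    """Weight w(x_i, x) = w_I * w_E * w_S for one neighbor pair (assumed forms)."""
    w_color = np.exp(-float(Ix - Ixi) ** 2 / (2.0 * sigma_i ** 2))   # color similarity
    w_edge = edge_penalty if (x_on_edge or xi_on_edge) else 1.0      # weaker smoothing across Canny edges
    w_seg = 1.0 if Sx == Sxi else seg_penalty                        # weaker smoothing across segments
    return w_color * w_edge * w_seg
```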
Step 7: minimize the following energy function by the least squares method, and take the optimal solution D'(x) as the output:
E(D') = E_d(D') + λ_h E_h(D') + λ_s E_s(D')
where λ_h and λ_s are the weights of the damaged-region constraint term and the smoothness constraint term respectively, and can be set to 0.1 and 0.2.
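Under the quadratic forms assumed earlier, every term of E(D') is a sum of residuals that are linear in the unknown D', so the minimizer can be obtained by solving one sparse linear least-squares problem. The sketch below stacks the three groups of residuals and solves them with SciPy; the inputs (confidence, inpainted initial depth, smoothness weights) come from the earlier steps, and the weighting scheme follows the assumptions stated above rather than the patent's exact formulas.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def enhance_depth(depth, depth_init, conf, holes, weights4, lam_h=0.1, lam_s=0.2):
    """Minimize E(D') = E_d + lam_h*E_h + lam_s*E_s by sparse least squares.

    depth      : (H, W) measured depth D
    depth_init : (H, W) inpainted initial depth for the damaged region
    conf       : (H, W) confidence C(x)
    holes      : (H, W) bool mask Omega_h
    weights4   : (H, W, 4) smoothness weights w(x_i, x) to the 4 neighbors (up, down, left, right)
    """
    H, W = depth.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)
    rows, cols, vals, rhs = [], [], [], []

    def add_eq(i_list, c_list, b):
        r = len(rhs)
        rows.extend([r] * len(i_list)); cols.extend(i_list); vals.extend(c_list); rhs.append(b)

    # Data term: sqrt(C(x)) * (D'(x) - D(x)) = 0
    for y in range(H):
        for x in range(W):
            c = np.sqrt(conf[y, x])
            add_eq([idx[y, x]], [c], c * depth[y, x])

    # Damaged-region term: sqrt(lam_h) * (D'(x) - D_init(x)) = 0 for x in Omega_h
    s = np.sqrt(lam_h)
    for y, x in zip(*np.nonzero(holes)):
        add_eq([idx[y, x]], [s], s * depth_init[y, x])

    # Smoothness term: sqrt(lam_s * w) * (D'(x) - D'(x_i)) = 0
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for y in range(H):
        for x in range(W):
            for k, (dy, dx) in enumerate(nbrs):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W:
                    w = np.sqrt(lam_s * weights4[y, x, k])
                    add_eq([idx[y, x], idx[ny, nx]], [w, -w], 0.0)

    A = sp.csr_matrix((vals, (rows, cols)), shape=(len(rhs), n))
    d_prime = lsqr(A, np.asarray(rhs))[0]
    return d_prime.reshape(H, W)
```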
The above describes preferred embodiments of the present invention. It should be pointed out that those skilled in the art can make several improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (9)
1. A method for enhancing a depth image, characterized by comprising:
Step 1: capturing a depth image of an object with a depth camera, and capturing a color image of the object with a color camera synchronized with the depth camera;
Step 2: constructing an energy function from the depth image and the color image;
Step 3: minimizing the energy function by the least squares method to compute the optimal enhanced depth value of each pixel in the depth image;
Step 4: outputting the optimal enhanced depth values of the pixels in the depth image.
2. The method according to claim 1, characterized in that the energy function is:
E(D') = E_d(D') + λ_h E_h(D') + λ_s E_s(D')
where E(D') is the energy function; D' denotes the enhanced depth values of the pixels in the depth image; E_d(D') is the data constraint term; E_h(D') is the damaged-region constraint term of the depth image; E_s(D') is the smoothness constraint term; λ_h is the weight of the damaged-region constraint term; and λ_s is the weight of the smoothness constraint term.
3. The method according to claim 2, characterized in that:
the data constraint term is defined by a formula in which x is the index of a pixel in the depth image, D(x) is the existing depth value of pixel x in the depth image, D'(x) is the enhanced depth value of pixel x, and C(x) is the confidence of the existing depth value of pixel x;
the damaged-region constraint term is defined by a formula in which Ω_h is the set of pixels in the damaged region of the depth image, and the initial depth value of pixel x in the damaged region of the depth image is used as the reference;
the smoothness constraint term is defined by a formula in which:
N_1(x) is the first-order neighborhood of pixel x;
x_i is a pixel in the first-order neighborhood of pixel x, and i is the pixel index;
D'(x_i) is the enhanced depth value of pixel x_i in the depth image;
w(x_i, x) is a weight coefficient, equal to the product of the color weight w_I(x_i, x), the edge weight w_E(x_i, x) and the segmentation weight w_S(x_i, x);
Ω_e is the set of pixels in the edge regions of the color image;
S(x) is the label of the segment containing pixel x of the color image, and S(x_i) is the label of the segment containing pixel x_i;
I(x) is the color value of pixel x in the color image, and I(x_i) is the color value of pixel x_i;
σ_I is the first function parameter.
4. The method according to claim 3, characterized in that, before step 3, the method further comprises:
performing Canny edge detection on the color image to obtain the set Ω_e of pixels in the edge regions of the color image.
5. The method according to claim 3, characterized in that, before step 3, the method further comprises:
segmenting the color image into superpixels with mean-shift (MeanShift) image segmentation to obtain the label S(x) of the segment containing each pixel x of the color image.
6. The method according to claim 3, characterized in that, before step 3, the method further comprises:
filling the damaged region Ω_h of the depth image with a depth-map inpainting method to solve for the initial depth value of each pixel x in the damaged region of the depth image.
7. The method according to claim 3, characterized in that, before step 3, the method further comprises:
computing, from the depth image and the color image, the confidence C(x) of the existing depth value of each pixel x in the depth image.
8. The method according to claim 7, characterized in that the step of computing, from the depth image and the color image, the confidence C(x) of the existing depth value of pixel x in the depth image uses the following definitions:
Ω_h is the set of pixels in the damaged region of the depth image; Ω_d is the set of pixels that do not belong to the damaged region and whose distance to the damaged region does not exceed d, where d is the first distance threshold;
C_f(x) is defined by a formula in which N_r is the set of pixels whose distance to pixel x is at most r, with r the second distance threshold; I(x) and I(x_i) are the color values of pixels x and x_i in the color image; D(x) and D(x_i) are the existing depth values of pixels x and x_i in the depth image; η is the sum of the weights w_f(x_i, x); and σ_I and σ_D are the second and third function parameters;
C_h(x) is defined by a formula in which η' is the sum of the weights w_h(x_i, x), and σ'_I and σ'_D are the fourth and fifth function parameters.
9. The method according to claim 8, characterized in that, before step 3, the method further comprises:
spatially registering the pixels of the depth image with the pixels of the color image;
obtaining the correspondence between the depth value of pixel x in the depth image and the color value of the same pixel x in the color image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510508960.3A CN105139355A (en) | 2015-08-18 | 2015-08-18 | Method for enhancing depth images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510508960.3A CN105139355A (en) | 2015-08-18 | 2015-08-18 | Method for enhancing depth images |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105139355A true CN105139355A (en) | 2015-12-09 |
Family
ID=54724688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510508960.3A Pending CN105139355A (en) | 2015-08-18 | 2015-08-18 | Method for enhancing depth images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105139355A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105741265A (en) * | 2016-01-21 | 2016-07-06 | 中国科学院深圳先进技术研究院 | Depth image processing method and depth image processing device |
CN106780383A (en) * | 2016-12-13 | 2017-05-31 | 长春理工大学 | The depth image enhancement method of TOF camera |
CN106886988A (en) * | 2015-12-11 | 2017-06-23 | 中国科学院深圳先进技术研究院 | A kind of linear goal detection method and system based on unmanned aerial vehicle remote sensing |
CN106998460A (en) * | 2017-05-16 | 2017-08-01 | 合肥工业大学 | A kind of hole-filling algorithm based on depth transition and depth total variational |
WO2017143550A1 (en) * | 2016-02-25 | 2017-08-31 | SZ DJI Technology Co., Ltd. | Imaging system and method |
CN108234858A (en) * | 2017-05-19 | 2018-06-29 | 深圳市商汤科技有限公司 | Image virtualization processing method, device, storage medium and electronic equipment |
CN108629756A (en) * | 2018-04-28 | 2018-10-09 | 东北大学 | A kind of Kinect v2 depth images Null Spot restorative procedure |
WO2020083307A1 (en) * | 2018-10-24 | 2020-04-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method, apparatus, and storage medium for obtaining depth image |
CN111340824A (en) * | 2020-02-26 | 2020-06-26 | 青海民族大学 | Image feature segmentation method based on data mining |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103295229A (en) * | 2013-05-13 | 2013-09-11 | 清华大学深圳研究生院 | Global stereo matching method for video depth information recovery |
CN103391446A (en) * | 2013-06-24 | 2013-11-13 | 南京大学 | Depth image optimizing method based on natural scene statistics |
WO2014044569A1 (en) * | 2012-09-18 | 2014-03-27 | Iee International Electronics & Engineering S.A. | Depth image enhancement method |
CN104680496A (en) * | 2015-03-17 | 2015-06-03 | 山东大学 | Kinect deep image remediation method based on colorful image segmentation |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014044569A1 (en) * | 2012-09-18 | 2014-03-27 | Iee International Electronics & Engineering S.A. | Depth image enhancement method |
CN103295229A (en) * | 2013-05-13 | 2013-09-11 | 清华大学深圳研究生院 | Global stereo matching method for video depth information recovery |
CN103391446A (en) * | 2013-06-24 | 2013-11-13 | 南京大学 | Depth image optimizing method based on natural scene statistics |
CN104680496A (en) * | 2015-03-17 | 2015-06-03 | 山东大学 | Kinect deep image remediation method based on colorful image segmentation |
Non-Patent Citations (2)
Title |
---|
YANKE WANG et al.: "Depth Map enhancement based on color and depth consistency", The Visual Computer *
WANG Yanke: "Research on geometric consistency problems in augmented reality", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106886988B (en) * | 2015-12-11 | 2020-07-24 | 中国科学院深圳先进技术研究院 | Linear target detection method and system based on unmanned aerial vehicle remote sensing |
CN106886988A (en) * | 2015-12-11 | 2017-06-23 | 中国科学院深圳先进技术研究院 | A kind of linear goal detection method and system based on unmanned aerial vehicle remote sensing |
CN105741265A (en) * | 2016-01-21 | 2016-07-06 | 中国科学院深圳先进技术研究院 | Depth image processing method and depth image processing device |
CN105741265B (en) * | 2016-01-21 | 2019-03-01 | 中国科学院深圳先进技术研究院 | The processing method and processing device of depth image |
WO2017143550A1 (en) * | 2016-02-25 | 2017-08-31 | SZ DJI Technology Co., Ltd. | Imaging system and method |
US11044452B2 (en) | 2016-02-25 | 2021-06-22 | SZ DJI Technology Co., Ltd. | Imaging system and method |
CN106780383B (en) * | 2016-12-13 | 2019-05-24 | 长春理工大学 | The depth image enhancement method of TOF camera |
CN106780383A (en) * | 2016-12-13 | 2017-05-31 | 长春理工大学 | The depth image enhancement method of TOF camera |
CN106998460A (en) * | 2017-05-16 | 2017-08-01 | 合肥工业大学 | A kind of hole-filling algorithm based on depth transition and depth total variational |
CN108234858B (en) * | 2017-05-19 | 2020-05-01 | 深圳市商汤科技有限公司 | Image blurring processing method and device, storage medium and electronic equipment |
CN108234858A (en) * | 2017-05-19 | 2018-06-29 | 深圳市商汤科技有限公司 | Image virtualization processing method, device, storage medium and electronic equipment |
WO2018210318A1 (en) * | 2017-05-19 | 2018-11-22 | 深圳市商汤科技有限公司 | Blurring method and apparatus for image, storage medium, and electronic device |
US10970821B2 (en) | 2017-05-19 | 2021-04-06 | Shenzhen Sensetime Technology Co., Ltd | Image blurring methods and apparatuses, storage media, and electronic devices |
CN108629756A (en) * | 2018-04-28 | 2018-10-09 | 东北大学 | A kind of Kinect v2 depth images Null Spot restorative procedure |
CN108629756B (en) * | 2018-04-28 | 2021-06-25 | 东北大学 | Kinectv2 depth image invalid point repairing method |
WO2020083307A1 (en) * | 2018-10-24 | 2020-04-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method, apparatus, and storage medium for obtaining depth image |
US11042966B2 (en) | 2018-10-24 | 2021-06-22 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method, electronic device, and storage medium for obtaining depth image |
CN111340824A (en) * | 2020-02-26 | 2020-06-26 | 青海民族大学 | Image feature segmentation method based on data mining |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105139355A (en) | Method for enhancing depth images | |
CN102609941A (en) | Three-dimensional registering method based on ToF (Time-of-Flight) depth camera | |
CN107909640B (en) | Face relighting method and device based on deep learning | |
CN104680496B (en) | A kind of Kinect depth map restorative procedures based on color images | |
CN102073874B (en) | Geometric constraint-attached spaceflight three-line-array charged coupled device (CCD) camera multi-image stereo matching method | |
CN101137003B (en) | Gray associated analysis based sub-pixel fringe extracting method | |
CN103440653A (en) | Binocular vision stereo matching method | |
CN105528785A (en) | Binocular visual image stereo matching method | |
CN105654547B (en) | Three-dimensional rebuilding method | |
CN104616284A (en) | Pixel-level alignment algorithm for color images to depth images of color depth camera | |
CN104156957B (en) | Stable and high-efficiency high-resolution stereo matching method | |
CN103868460A (en) | Parallax optimization algorithm-based binocular stereo vision automatic measurement method | |
CN106485690A (en) | Cloud data based on a feature and the autoregistration fusion method of optical image | |
CN102938142A (en) | Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect | |
CN107545586B (en) | Depth obtaining method and system based on light field polar line plane image local part | |
CN101996407A (en) | Colour calibration method for multiple cameras | |
CN104079800A (en) | Shaking preventing method for video image in video surveillance | |
CN105303616A (en) | Embossment modeling method based on single photograph | |
CN107680140A (en) | A kind of depth image high-resolution reconstruction method based on Kinect cameras | |
CN103281513B (en) | Pedestrian recognition method in the supervisory control system of a kind of zero lap territory | |
CN105139401A (en) | Depth credibility assessment method for depth map | |
CN104200453A (en) | Parallax image correcting method based on image segmentation and credibility | |
CN102930551A (en) | Camera intrinsic parameters determined by utilizing projected coordinate and epipolar line of centres of circles | |
Zou et al. | A method of stereo vision matching based on OpenCV | |
WO2021213650A1 (en) | Device and method for depth estimation using color images |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20151209
| RJ01 | Rejection of invention patent application after publication |