
CN103747248A - Detection and processing method for boundary inconsistency of depth and color videos - Google Patents

Detection and processing method for boundary inconsistency of depth and color videos

Info

Publication number
CN103747248A
CN103747248A
Authority
CN
China
Prior art keywords
pixel
inconsistent
border
video
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410039677.6A
Other languages
Chinese (zh)
Other versions
CN103747248B (en)
Inventor
朱策
雷建军
李帅
高艳博
王勇
李贞贞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201410039677.6A priority Critical patent/CN103747248B/en
Publication of CN103747248A publication Critical patent/CN103747248A/en
Application granted granted Critical
Publication of CN103747248B publication Critical patent/CN103747248B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention belongs to the field of 3D (three-dimensional) video coding and relates to a method, based on color video boundaries, for detecting and processing the boundary inconsistency of the depth video in color-video-assisted depth video coding. The method comprises the concrete steps of determining the region where boundary inconsistency occurs between the depth video and the color video, detecting the boundary-inconsistent pixels between the depth video and the color video, and processing the inconsistent boundary pixels before entropy coding. Based on the structural similarity between the color video and the depth video, the method detects and processes the boundary inconsistency between them during depth video prediction, so that the rate-distortion performance of coding and the virtual view synthesis quality at the decoder are improved while the coding cost is effectively reduced.

Description

Detection and processing method for boundary inconsistency of depth and color videos
Technical field
The invention belongs to the field of 3D video coding and relates to a method, based on color video boundaries, for detecting and processing the boundary inconsistency of the depth video in color-video-assisted depth video coding.
Background technology
Multi-view plus depth video can use depth-based view synthesis to generate a virtual view at an arbitrary position, which is similar to placing a virtual camera at that position and capturing an image. This format consists of the color videos and depth videos of multiple viewpoints: the color video records the color information of the scene, while the depth video records its depth information. The depth video represents the geometry of the scene and the geometric relationship between viewpoints. Combined with the camera configuration parameters, the video captured at a camera position can be mapped to the virtual view position to form a virtual view image.
Existing depth map acquisition methods fall into three categories: stereo matching based on multi-view video, capture with a depth camera, and depth generation from a single color video. Because depth cameras are expensive and the depth maps obtained from a single color video are of relatively poor quality, the most commonly used method at present is the stereo matching algorithm based on multi-view video. However, owing to the lack of matching features, this algorithm produces poor depth maps in smooth texture regions, and at object edges, occlusion prevents the boundaries of the generated depth map from fully agreeing with the boundaries of the color image, so boundary inconsistency exists between the depth map and the color image.
In color-video-assisted depth video coding, the boundary inconsistency between the color video and the depth video causes large prediction residuals at the inconsistent positions after prediction, which severely reduces coding efficiency. Meanwhile, because the depth video is used for view synthesis at the decoder rather than displayed directly, its quality should be judged by the quality of the generated virtual view video. Since view synthesis uses the depth video to project the color information of the reference viewpoint to the virtual view position, the boundary inconsistency between the depth video and the color video seriously degrades the virtual view synthesis quality; the inconsistency therefore needs to be processed before virtual views are generated.
Some virtual view synthesis algorithms that address the boundary inconsistency between color and depth videos already exist. For example, the boundary noise removal algorithm proposed by Cheon Lee et al. detects the object boundary regions in the synthesized virtual view video and removes suspicious noise, and the method proposed by Yin Zhao et al. processes the boundary inconsistency between the color and depth videos before view synthesis by suppressing the mapping of suspicious transition regions.
In depth video coding based on the color video, the pixels at boundary-inconsistent positions between the color video and the depth video produce large prediction residuals after prediction and, after the DCT, a large number of high-frequency components in the frequency domain, which severely reduces coding efficiency. To reduce the coding problems caused by these pixels, improve coding efficiency, and improve the quality of the virtual view video generated at the decoder, the invention provides a method for detecting and processing depth video boundary inconsistency based on color video boundaries. In the block-level predictive coding process, the method uses the depth prediction values of the current block, obtained from the structural similarity between the color video and the depth video, to detect and process the boundary-inconsistent pixels of the current depth block.
Summary of the invention
The object of the present invention is to provide a detection and processing method for boundary inconsistency between depth and color videos, so as to improve the coding efficiency and the virtual view synthesis quality of multi-view plus depth video.
The object of the present invention is achieved as follows:
S1. Determine the region where boundary inconsistency occurs between the depth video and the color video: the boundary-inconsistent region Ω between the depth video and the color video is the region obtained by expanding the detected object boundary in the depth map by H pixels, where 2 ≤ H ≤ 3;
S2. Detect the boundary-inconsistent pixels between the depth video and the color video, comprising:
S21. Find the possible boundary-inconsistent pixels inside the region Ω defined in S1: compute the absolute difference D between a pixel O_i in Ω and a candidate depth value P used for coding prediction, i.e. D = |O_i - P|; when D < T, O_i is a possible inconsistent pixel and detection proceeds to the next step; when D ≥ T, the pixel belongs to a new object that does not appear in the neighborhood and detection stops for this pixel, where P belongs to the set of coded depth values;
S22. Detect the boundary-inconsistent pixels between the depth video and the color video, comprising:
S221. Search for and record the maximum absolute residual outside Ω within the block containing the current region to be detected; this maximum is denoted the maximum residual amplitude R_max of the current block;
S222. Judge whether the current pixel is a boundary-inconsistent pixel: when the absolute residual of the current pixel is much larger than the maximum residual amplitude of the block containing the pixel, the original value of the pixel is erroneous and the pixel is a boundary-inconsistent pixel:
if R_i - R_max > T_1, the current pixel is a boundary-inconsistent pixel;
if R_i - R_max ≤ T_1, the current pixel is not a boundary-inconsistent pixel;
where T_1 is a threshold, T_1 > 0, and R_i is the absolute residual of the current pixel;
S3. Process the boundary-inconsistent pixels found in S2 before entropy coding, comprising:
S31. Determine the neighborhood to which each boundary-inconsistent pixel found in S2 belongs: when the predicted value of the pixel is closer to the pixel value of one of the two side neighborhoods, the pixel is judged to belong to that neighborhood. Specifically, compute the absolute differences D_L and D_R between the predicted value of the boundary-inconsistent pixel and the original values of the neighborhood pixels on the two sides of the boundary, where the neighborhood pixels lie outside the boundary-inconsistent region, D_L is the absolute difference on the left side of the inconsistent pixel and D_R is the absolute difference on its right side;
S32. When D_L < D_R, assign O_i defined in S2 to the left neighborhood: set the pixel value of O_i to the original value of the left neighborhood pixel outside the boundary-inconsistent region, and set the residual of O_i to the difference between this original value and the predicted value of O_i;
when D_L > D_R, assign O_i to the right neighborhood: set the pixel value of O_i to the original value of the right neighborhood pixel outside the boundary-inconsistent region, and set the residual of O_i to the difference between this original value and the predicted value of O_i.
Further, H = 3 in S1.
Further, the number of depth values represented by P in S2 is greater than or equal to 1.
Further, 10 ≤ T_1 ≤ 20 in S222.
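For implementation purposes, the parameter ranges above can be gathered into a small configuration object. The following Python sketch is purely illustrative: the class name DetectionConfig is an assumption, and the default value of T is a placeholder, since the text states preferred ranges only for H and T_1.

    from dataclasses import dataclass

    @dataclass
    class DetectionConfig:
        """Illustrative parameter set for steps S1-S3 (names and the default T are assumptions)."""
        H: int = 3        # boundary expansion width in pixels, 2 <= H <= 3 (H = 3 preferred)
        T1: float = 15.0  # residual threshold of S222, 10 <= T1 <= 20
        T: float = 10.0   # depth-difference threshold of S21 (not fixed numerically in the text)

        def __post_init__(self) -> None:
            if not 2 <= self.H <= 3:
                raise ValueError("H should lie in [2, 3]")
            if not 10 <= self.T1 <= 20:
                raise ValueError("T1 should lie in [10, 20]")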
The beneficial effects of the invention are as follows: during depth video prediction, which is carried out on the basis of the structural similarity between the color video and the depth video, the invention uses the boundaries of the color video to detect and process the boundary inconsistency of the depth video. The invention improves the rate-distortion performance of coding and the virtual view synthesis quality at the decoder, and at the same time effectively reduces the coding cost.
Brief description of the drawings
Fig. 1 is a schematic diagram of the detection of boundary-inconsistent pixels between the depth video and the color video.
Fig. 2 is a schematic diagram of the processing of boundary-inconsistent pixels between the depth video and the color video.
Fig. 3 is a flow chart of the present invention.
In the figures, 1 denotes E_D, 2 denotes Ω, 3 the large-residual pixel, 4 the current pixel, 5 the coded residual block, and 6 the possible non-boundary-inconsistent regions B_1 and B_2 in the neighborhood of the large-residual pixel, collectively referred to as region N_R.
Embodiment
The specific embodiment of the present invention is described below with reference to the accompanying drawings:
As shown in Figure 3:
S1. Determine the region where boundary inconsistency occurs between the depth video and the color video:
Use a boundary detection operator such as the Canny operator (a multi-stage edge detection operator) to detect the boundaries of the depth map; the detected boundary is denoted E_D. Because the boundary inconsistency between the depth video and the color video mainly appears within a neighborhood of 2 to 3 pixels around the object boundaries of the depth video, errors more than 3 pixels away from the boundary are no longer treated as boundary inconsistency. The boundary-inconsistent region between the depth video and the color video is therefore the region obtained by expanding the detected object boundary in the depth map by H pixels, where 2 ≤ H ≤ 3. Accordingly, E_D is dilated with a 3 × 3 rectangular structuring element to obtain the 3-pixel extended region centered on E_D; this region is the boundary-inconsistent region between the depth video and the color video and is denoted Ω.
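As an illustration of step S1, the following Python/OpenCV sketch detects the depth-map boundary with the Canny operator and dilates it with a 3 × 3 rectangle to obtain Ω. It assumes an 8-bit depth map; the Canny thresholds (50, 150) and the function name are illustrative choices, not values prescribed by the patent.

    import cv2
    import numpy as np

    def boundary_inconsistent_region(depth_map: np.ndarray, H: int = 3) -> np.ndarray:
        """Step S1: detect the depth-map boundary E_D and expand it by H pixels to obtain Omega."""
        edges = cv2.Canny(depth_map.astype(np.uint8), 50, 150)  # E_D (thresholds are illustrative)
        kernel = np.ones((3, 3), np.uint8)                      # 3 x 3 rectangular structuring element
        omega = cv2.dilate(edges, kernel, iterations=H)         # H dilations expand E_D by H pixels
        return omega > 0                                        # boolean mask of the region Omega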
S2. Detect the boundary-inconsistent pixels between the depth video and the color video, comprising:
S21. Find the possible boundary-inconsistent pixels in the region defined in S1: compute the absolute difference D between a pixel O_i in the boundary-inconsistent region Ω and a depth value in the prediction information P, i.e. D = |O_i - P|, where the prediction information P may contain several depth values. When D < T, O_i is judged to be a pixel on which the boundary-inconsistency detection is carried out. Here P takes the values of the already-coded pixels immediately above and to the left of the current block.
S22. Detect the boundary-inconsistent pixels between the depth video and the color video, comprising:
S221. Search for and record the maximum of the absolute residuals R_i outside the boundary-inconsistent region within the block containing the current region to be detected; this maximum is denoted the maximum residual amplitude R_max of the current block;
S222. Judge whether the current pixel is a boundary-inconsistent pixel between the depth video and the color video: if R_i - R_max > T_1, with T_1 > 0, the current pixel is a boundary-inconsistent pixel; if R_i - R_max ≤ T_1, the current pixel is not a boundary-inconsistent pixel, where T_1 is a threshold and R_i is the absolute residual of the current pixel;
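A numpy sketch of steps S21 through S222 for a single coded block is given below. The interpretation of D as the minimum distance between O_i and the candidate set P, the default thresholds, and all array and parameter names are assumptions made for illustration only.

    import numpy as np

    def detect_inconsistent_pixels(orig_block, pred_block, omega_mask, neighbor_depths,
                                   T=10.0, T1=15.0):
        """Steps S21-S222 for one block: return a boolean mask of boundary-inconsistent pixels.

        orig_block      -- original depth values of the current block
        pred_block      -- predicted depth values of the current block
        omega_mask      -- boolean mask of the region Omega restricted to this block
        neighbor_depths -- coded depth values immediately above/left of the block (the set P)
        """
        residual = np.abs(orig_block.astype(np.float64) - pred_block.astype(np.float64))  # R_i

        # S221: maximum absolute residual of the block outside Omega
        outside = ~omega_mask
        R_max = residual[outside].max() if outside.any() else 0.0

        # S21: a pixel of Omega is a candidate only if its original value is close to some
        # already-coded depth value in P; otherwise it belongs to a new object and is skipped.
        P = np.asarray(neighbor_depths, dtype=np.float64).ravel()
        D = np.abs(orig_block[..., None].astype(np.float64) - P).min(axis=-1)
        candidate = omega_mask & (D < T)

        # S222: a candidate whose residual exceeds R_max by more than T1 is boundary-inconsistent.
        return candidate & (residual - R_max > T1)

In an encoder this mask would be computed block by block during prediction, before the residual is transformed and entropy coded.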
S3. Process the boundary-inconsistent pixels found in S2 before entropy coding, comprising:
S31. Determine the neighborhood to which each boundary-inconsistent pixel found in S2 belongs: compute the absolute differences D_L and D_R between the predicted value of the boundary-inconsistent pixel and the original values of the neighborhood pixels on the two sides of the boundary, where the neighborhood pixels lie outside the boundary-inconsistent region, D_L is the absolute difference on the left side of the inconsistent pixel and D_R is the absolute difference on its right side;
S32. When D_L < D_R, assign O_i defined in S2 to the left neighborhood: set the pixel value of O_i to the original value of the left neighborhood pixel outside the boundary-inconsistent region, and set the residual of O_i to the difference between this original value and the predicted value of O_i.
When D_L > D_R, assign O_i to the right neighborhood: set the pixel value of O_i to the original value of the right neighborhood pixel outside the boundary-inconsistent region, and set the residual of O_i to the difference between this original value and the predicted value of O_i. In coding, the residual of O_i is thus rewritten as the difference between the substituted original value and the predicted value of O_i.
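The processing of S31 and S32 can be sketched as follows. The row-wise search for the nearest pixels outside Ω on either side, the tie-breaking toward the right neighborhood, and all names are illustrative assumptions rather than details fixed by the patent.

    import numpy as np

    def process_inconsistent_pixels(orig_block, pred_block, omega_mask, inconsistent_mask):
        """Steps S31-S32 for one coded block, sketched along image rows.

        For every detected pixel, the nearest original pixels to its left and right that lie
        outside Omega serve as the two candidate neighborhoods; the pixel is reassigned to
        the closer one and its residual is recomputed before entropy coding.
        Returns the modified original block and the modified residual block.
        """
        orig = orig_block.astype(np.float64).copy()
        pred = pred_block.astype(np.float64)
        rows, cols = orig.shape

        for r, c in zip(*np.nonzero(inconsistent_mask)):
            # nearest column to the left/right of c that lies outside Omega
            left = next((j for j in range(c - 1, -1, -1) if not omega_mask[r, j]), None)
            right = next((j for j in range(c + 1, cols) if not omega_mask[r, j]), None)
            if left is None or right is None:
                continue  # no usable neighborhood on one side; leave the pixel untouched

            D_L = abs(pred[r, c] - orig_block[r, left])   # S31: distance to the left neighborhood
            D_R = abs(pred[r, c] - orig_block[r, right])  #      distance to the right neighborhood

            source = left if D_L < D_R else right         # S32: pick the closer neighborhood
            orig[r, c] = orig_block[r, source]            # replace the pixel's original value

        residual = orig - pred                            # residual passed on to entropy coding
        return orig, residual

Because both the substituted original value and the prediction are available at the encoder, the recomputed residual (new original minus prediction) is what enters entropy coding, which is the substitution described in S32.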

Claims (4)

1. A detection and processing method for boundary inconsistency between depth and color videos, characterized by comprising the following steps:
S1. Determine the region where boundary inconsistency occurs between the depth video and the color video: the boundary-inconsistent region Ω between the depth video and the color video is the region obtained by expanding the detected object boundary in the depth map by H pixels, where 2 ≤ H ≤ 3;
S2. Detect the boundary-inconsistent pixels between the depth video and the color video, comprising:
S21. Find the possible boundary-inconsistent pixels inside the region Ω defined in S1: compute the absolute difference D between a pixel O_i in Ω and a candidate depth value P used for coding prediction, i.e. D = |O_i - P|; when D < T, O_i is a possible inconsistent pixel and detection proceeds to the next step; when D ≥ T, the pixel belongs to a new object that does not appear in the neighborhood and detection stops for this pixel, where P belongs to the set of coded depth values;
S22. Detect the boundary-inconsistent pixels between the depth video and the color video, comprising:
S221. Search for and record the maximum absolute residual outside Ω within the block containing the current region to be detected; this maximum is denoted the maximum residual amplitude R_max of the current block;
S222. Judge whether the current pixel is a boundary-inconsistent pixel: when the absolute residual of the current pixel is much larger than the maximum residual amplitude of the block containing the pixel, the original value of the pixel is erroneous and the pixel is a boundary-inconsistent pixel:
if R_i - R_max > T_1, the current pixel is a boundary-inconsistent pixel;
if R_i - R_max ≤ T_1, the current pixel is not a boundary-inconsistent pixel;
where T_1 is a threshold, T_1 > 0, and R_i is the absolute residual of the current pixel;
S3. Process the boundary-inconsistent pixels found in S2 before entropy coding, comprising:
S31. Determine the neighborhood to which each boundary-inconsistent pixel found in S2 belongs: when the predicted value of the pixel is closer to the pixel value of one of the two side neighborhoods, the pixel is judged to belong to that neighborhood; specifically, compute the absolute differences D_L and D_R between the predicted value of the boundary-inconsistent pixel and the original values of the neighborhood pixels on the two sides of the boundary, where the neighborhood pixels lie outside the boundary-inconsistent region, D_L is the absolute difference on the left side of the inconsistent pixel and D_R is the absolute difference on its right side;
S32. When D_L < D_R, assign O_i defined in S2 to the left neighborhood: set the pixel value of O_i to the original value of the left neighborhood pixel outside the boundary-inconsistent region, and set the residual of O_i to the difference between this original value and the predicted value of O_i;
when D_L > D_R, assign O_i to the right neighborhood: set the pixel value of O_i to the original value of the right neighborhood pixel outside the boundary-inconsistent region, and set the residual of O_i to the difference between this original value and the predicted value of O_i.
2. The detection and processing method for boundary inconsistency between depth and color videos according to claim 1, characterized in that H = 3 in S1.
3. The detection and processing method for boundary inconsistency between depth and color videos according to claim 1, characterized in that the number of depth values represented by P in S2 is greater than or equal to 1.
4. The detection and processing method for boundary inconsistency between depth and color videos according to claim 1, characterized in that 10 ≤ T_1 ≤ 20 in S222.
CN201410039677.6A 2014-01-27 2014-01-27 Detection and processing method for boundary inconsistency of depth and color videos Active CN103747248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410039677.6A CN103747248B (en) 2014-01-27 2014-01-27 Detection and processing method for boundary inconsistency of depth and color videos

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410039677.6A CN103747248B (en) 2014-01-27 2014-01-27 Detection and processing method for boundary inconsistency of depth and color videos

Publications (2)

Publication Number Publication Date
CN103747248A true CN103747248A (en) 2014-04-23
CN103747248B CN103747248B (en) 2016-01-20

Family

ID=50504232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410039677.6A Active CN103747248B (en) 2014-01-27 2014-01-27 Detection and processing method for boundary inconsistency of depth and color videos

Country Status (1)

Country Link
CN (1) CN103747248B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805841A (en) * 2018-06-12 2018-11-13 西安交通大学 A kind of depth map recovery and View Synthesis optimization method based on cromogram guiding
CN109120942A (en) * 2018-09-27 2019-01-01 合肥工业大学 The coding circuit of depth image intra prediction based on pipelined architecture and its coding method
CN110648343A (en) * 2019-09-05 2020-01-03 电子科技大学 Image edge detection method based on six-order spline scale function

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7903111B2 (en) * 2005-01-08 2011-03-08 Samsung Electronics Co., Ltd. Depth image-based modeling method and apparatus

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7903111B2 (en) * 2005-01-08 2011-03-08 Samsung Electronics Co., Ltd. Depth image-based modeling method and apparatus

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHEON LEE AND YO-SUNG HO: "Boundary filtering on synthesized views of 3D video", 《IEEE, 2008 SECOND INTERNATIONAL CONFERENCE ON FUTURE GENERATION COMMUNICATION AND NETWORKING SYMPOSIA, 2008》 *
YIN ZHAO ET AL.: "Boundary artifact reduction in view synthesis of 3D video: from perspective of texture-depth alignment", 《IEEE TRANSACTIONS ON BROADCASTING, 2011》 *
YIN ZHAO ET AL.: "Depth no-synthesis-error model for view synthesis in 3-D video", 《IEEE TRANSACTIONS ON IMAGE PROCESSING, 2011》 *
YIN ZHAO ET AL.: "Suppressing texture-depth misalignment for boundary noise removal in view synthesis", 《IEEE, 28TH PICTURE CODING SYMPOSIUM, PCS2010》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805841A (en) * 2018-06-12 2018-11-13 西安交通大学 A kind of depth map recovery and View Synthesis optimization method based on cromogram guiding
CN108805841B (en) * 2018-06-12 2021-01-19 西安交通大学 Depth map recovery and viewpoint synthesis optimization method based on color map guide
CN109120942A (en) * 2018-09-27 2019-01-01 合肥工业大学 The coding circuit of depth image intra prediction based on pipelined architecture and its coding method
CN109120942B (en) * 2018-09-27 2020-08-07 合肥工业大学 Coding circuit and coding method for depth image intra-frame prediction based on pipeline architecture
CN110648343A (en) * 2019-09-05 2020-01-03 电子科技大学 Image edge detection method based on six-order spline scale function
CN110648343B (en) * 2019-09-05 2022-09-23 电子科技大学 Image edge detection method based on six-order spline scale function

Also Published As

Publication number Publication date
CN103747248B (en) 2016-01-20

Similar Documents

Publication Publication Date Title
CN101873500B (en) Interframe prediction encoding method, interframe prediction decoding method and equipment
CN102271254B (en) Depth image preprocessing method
US20160065931A1 (en) Method and Apparatus for Computing a Synthesized Picture
US20110150321A1 (en) Method and apparatus for editing depth image
CN109587503B (en) 3D-HEVC depth map intra-frame coding mode fast decision method based on edge detection
CN105430415B (en) Fast encoding method in a kind of 3D HEVC deep video frames
CN105120290B (en) A kind of deep video fast encoding method
WO2014063373A1 (en) Methods for extracting depth map, judging video scenario switching and optimizing edge of depth map
CN104602028B (en) A kind of three-dimensional video-frequency B frames entire frame loss error concealing method
CN101653006A (en) Method and apparatus for encoding and decoding based on inter prediction
CN103996174A (en) Method for performing hole repair on Kinect depth images
Li et al. Pixel-based inter prediction in coded texture assisted depth coding
Milani et al. Efficient depth map compression exploiting segmented color data
CN102595145A (en) Method for error concealment of whole frame loss of stereoscopic video
CN105141940B (en) A kind of subregional 3D method for video coding
CN103747248B (en) The inconsistent detection of the degree of depth and color video border and processing method
CN106131553B (en) A kind of video steganalysis method based on motion vector residual error correlation
CN104506871A (en) Three-dimensional (3D) video fast coding method based on high efficiency video coding (HEVC)
CN104093034B (en) A kind of H.264 video flowing adaptive hypermedia system method of similarity constraint human face region
US20140184739A1 (en) Foreground extraction method for stereo video
da Silva et al. Fast mode selection algorithm based on texture analysis for 3D-HEVC intra prediction
CN111066322B (en) Intra-prediction for video coding via perspective information
CN109922349B (en) Stereo video right viewpoint B frame error concealment method based on disparity vector extrapolation
CN103997653A (en) Depth video encoding method based on edges and oriented toward virtual visual rendering
CN104486633B (en) Video error hides method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant