WO2015057037A1 - Video decoding apparatus and method for decoding multi-view video
- Publication number: WO2015057037A1 (PCT application PCT/KR2014/009859)
- Authority: WIPO (PCT)
- Prior art keywords: disparity vector, block, disparity, vector, current block
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/597—Predictive coding specially adapted for multi-view video sequence encoding
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/182—Adaptive coding characterised by the coding unit, the unit being a pixel
- H04N19/513—Predictive coding involving temporal prediction; motion estimation or motion compensation; processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding; H04N19/52—by predictive encoding
- H04N19/61—Transform coding in combination with predictive coding
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Definitions
- the present invention relates to video coding, and more particularly, to coding of 3D video images.
- High-efficiency image compression technology can be used to effectively transmit, store, and reproduce high-resolution, high-quality video information.
- 3D video can provide realism and immersion using a plurality of view channels.
- 3D video can be used in a variety of areas such as free viewpoint video (FVV), free viewpoint TV (FTV), 3DTV, public safety, and home entertainment.
- 3D video using multiple views has a high correlation between views having the same picture order count (POC). Since a multi-view image captures the same scene at the same time with several adjacent cameras, that is, from multiple views, the different views contain almost the same information except for parallax and slight lighting differences, and the correlation between them is therefore high.
- Accordingly, the decoding target block of the current view may be predicted or decoded with reference to a block of another view.
- An object of the present invention is to provide a method and apparatus for deriving a more effective disparity vector using information of neighboring blocks and depth information.
- An object of the present invention is to define a storage unit of a disparity vector that can reduce complexity and improve coding efficiency.
- the present invention provides a method and apparatus for improving coding efficiency by efficiently predicting a motion vector using a disparity vector.
- An embodiment of the present invention is a video decoding apparatus for decoding multi-view video, comprising: an entropy decoding unit for entropy-decoding a bitstream to derive video information necessary for decoding a current block; a memory for storing pictures referred to for decoding the current block; and a prediction unit for deriving a first disparity vector based on a neighboring block of the current block in the same view using the video information, deriving a second disparity vector using the first disparity vector and a reference view depth, deriving a third disparity vector using the difference between the first disparity vector and the second disparity vector, and deriving a prediction sample of the current block using any one of the first disparity vector, the second disparity vector, and the third disparity vector. The apparatus may further include a filter for applying filtering to the current picture reconstructed using the prediction sample.
- Another embodiment of the present invention is a video decoding method for decoding multi-view video, comprising: entropy-decoding a bitstream to derive video information necessary for decoding a current block; deriving a first disparity vector based on a neighboring block of the current block in the same view using the video information; deriving a second disparity vector using the first disparity vector and a reference view depth; deriving a third disparity vector using the difference between the first disparity vector and the second disparity vector; deriving a prediction sample of the current block using any one of the first disparity vector, the second disparity vector, and the third disparity vector; and applying filtering to the current picture reconstructed using the prediction sample.
- According to the present invention, the disparity vector can be effectively derived using information of neighboring blocks and depth information.
- According to the present invention, it is possible to newly define a storage unit for the disparity vector, reducing the complexity of the coding process and increasing coding efficiency.
- According to the present invention, the coding efficiency of multi-view video can be improved by efficiently predicting the motion vector using the disparity vector.
- FIG. 1 is a diagram schematically illustrating a process of encoding and decoding 3D video.
- FIG. 2 is a diagram schematically illustrating a configuration of a video encoding apparatus.
- FIG. 3 is a diagram schematically illustrating a configuration of a video decoding apparatus.
- FIG. 4 is a diagram schematically illustrating inter view coding.
- FIG. 5 schematically illustrates a multi-view coding method using a depth map.
- FIG. 6 is a diagram schematically illustrating a DV-MCP block.
- FIG. 7 is a diagram schematically illustrating an example of neighboring blocks of a current block.
- FIG. 8 is a diagram schematically illustrating correcting a disparity vector derived from a neighboring block using a depth.
- FIG. 9 is a diagram schematically illustrating a method of searching for NBDV using a DV-MCP block.
- FIG. 10 is a diagram schematically illustrating an example of neighboring blocks that may be used for motion vector prediction.
- FIG. 11 is a diagram schematically illustrating a temporal neighboring block of a current block.
- FIG. 12 is a diagram schematically illustrating an example of configuring an MVP list according to the present invention.
- FIG. 13 is a diagram schematically illustrating another example of configuring an MVP list according to the present invention.
- FIG. 14 is a diagram schematically illustrating another example of configuring an MVP list according to the present invention.
- FIG. 15 is a diagram schematically illustrating another example of configuring an MVP list according to the present invention.
- FIG. 16 is a diagram schematically illustrating another example of a method of deriving an MVP list according to the present invention.
- FIG. 17 is a diagram schematically illustrating another example of configuring an MVP list according to the present invention.
- FIG. 18 is a diagram schematically illustrating an operation of a decoding apparatus according to the present invention.
- a pixel or a pel may mean a minimum unit constituting one image.
- the term 'sample' may be used as a term indicating a value of a specific pixel.
- the sample generally indicates the value of the pixel, but may indicate only the pixel value of the Luma component or only the pixel value of the Chroma component.
- a unit may mean a basic unit of image processing or a specific position of an image.
- the unit may be used interchangeably with terms such as 'block' or 'area' as the case may be.
- an M ⁇ N block may represent a set of samples or transform coefficients composed of M columns and N rows.
- FIG. 1 is a diagram schematically illustrating a process of encoding and decoding 3D video.
- the 3D video encoder may encode a video picture, a depth map, and a camera parameter to output a bitstream.
- the depth map may be composed of distance information (depth information) between a camera and a subject with respect to pixels of a corresponding video picture (texture picture).
- the depth map may be an image in which depth information is normalized according to bit depth.
- the depth map may be composed of depth information recorded without chrominance (color difference) representation.
- disparity information indicating the correlation between views may be derived from depth information of the depth map using camera parameters.
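As a hedged illustration of this depth-to-disparity conversion, the sketch below applies the standard two-camera relation disparity = f · B / Z to an 8-bit normalized depth sample; the parameter names (focal_length, baseline, z_near, z_far) and the linear 1/Z normalization are assumptions for the sketch, not values taken from the patent.

```python
def depth_to_disparity(v, focal_length, baseline, z_near, z_far, bit_depth=8):
    """Map a depth sample v (0..2^bit_depth - 1) to a horizontal disparity."""
    v_max = (1 << bit_depth) - 1
    # Undo the normalization: v linearly samples 1/Z between 1/z_far and 1/z_near.
    inv_z = (v / v_max) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return focal_length * baseline * inv_z  # disparity = f * B / Z
```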
- a bitstream including a general color image, that is, a video picture (texture picture), together with a depth map and camera information may be transmitted to a decoder through a network or a storage medium.
- the decoder side can receive the bitstream and reconstruct the video.
- the 3D video decoder may decode the video picture and the depth map and the camera parameters from the bitstream. Based on the decoded video picture, the depth map and the camera parameters, the views required for the multi view display can be synthesized. In this case, when the display used is a stereo display, a 3D image may be displayed using two pictures from the reconstructed multi views.
- the stereo video decoder can reconstruct, from the bitstream, the two pictures to be incident on the left eye and the right eye, respectively.
- a stereoscopic image may be displayed by using a view difference or disparity between a left image incident to the left eye and a right image incident to the right eye.
- when the multi-view display is used together with the stereo video decoder, different views may be generated based on the two reconstructed pictures to display multiple views.
- the 2D image may be restored and the image may be output to the 2D display.
- the decoder may output one of the reconstructed images to the 2D display when using a 3D video decoder or a stereo video decoder.
- view synthesis may be performed at the decoder side or may be performed at the display side.
- the decoder and the display may be one device or separate devices.
- the 3D video decoder, the stereo video decoder, and the 2D video decoder are described as separate decoders.
- one decoding apparatus may perform 3D video decoding, stereo video decoding, and 2D video decoding.
- the 3D video decoding apparatus may perform 3D video decoding
- the stereo video decoding apparatus may perform stereo video decoding
- the 2D video decoding apparatus may perform 2D video decoding.
- the multi view display may output 2D video or output stereo video.
- the video encoding apparatus 200 may include a picture splitter 205, a predictor 210, a subtractor 215, a transformer 220, a quantizer 225, a reorderer 230, an entropy encoding unit 235, an inverse quantization unit 240, an inverse transform unit 245, an adder 250, a filter unit 255, and a memory 260.
- the picture dividing unit 205 may divide the input picture into at least one processing unit block.
- the processing unit block may be a coding unit block, a prediction unit block, or a transform unit block.
- the coding unit block may be split from the largest coding unit block along a quad-tree structure, and serves as a unit block of coding.
- the prediction unit block is a block partitioned from the coding unit block and may be a unit block of sample prediction. In this case, the prediction unit block may be divided into sub blocks.
- the transform unit block may be divided from the coding unit block along a quad tree structure, and may be a unit block for deriving a transform coefficient or a unit block for deriving a residual signal from the transform coefficient.
- Hereinafter, a coding unit block is called a coding block or a coding unit (CU), a prediction unit block is called a prediction block or a prediction unit (PU), and a transform unit block is called a transform block or a transform unit (TU).
- a prediction block or prediction unit may mean a specific area in the form of a block within a picture or may mean an array of prediction samples.
- a transform block or a transform unit may mean a specific area in a block form within a picture, or may mean an array of transform coefficients or residual samples.
- the prediction unit 210 may perform a prediction on a block to be processed (hereinafter, referred to as a current block) and generate a prediction block including prediction samples of the current block.
- the unit of prediction performed by the prediction unit 210 may be a coding block, a transform block, or a prediction block.
- the prediction unit 210 may determine whether intra prediction or inter prediction is applied to the current block.
- In the case of intra prediction, the prediction unit 210 may derive a prediction sample for the current block based on neighboring block pixels in the picture to which the current block belongs (hereinafter, the current picture). The prediction unit 210 may (i) derive a prediction sample based on the average or interpolation of neighboring reference samples of the current block, or (ii) derive a prediction sample based on a reference sample present in a specific direction with respect to the prediction target pixel among the neighboring samples of the current block. For convenience of explanation, the case of (i) is referred to as a non-directional mode and the case of (ii) as a directional mode. The prediction unit 210 may determine the prediction mode applied to the current block using the prediction mode applied to a neighboring block.
- the prediction unit 210 may derive a prediction sample for the current block based on the samples specified by the motion vector on the reference picture.
- the predictor 210 may derive a prediction sample for the current block by applying any one of a skip mode, a merge mode, and an MVP mode.
- the prediction unit 210 may use the motion information of the neighboring block as the motion information of the current block.
- In the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
- In the MVP mode, the motion vector of a neighboring block may be used as a motion vector predictor (MVP) to derive the motion vector of the current block.
- the neighboring block includes a spatial neighboring block present in the current picture and a temporal neighboring block present in the collocated picture.
- the motion information includes a motion vector and a reference picture.
- when motion information of a temporal neighboring block is used in the skip mode and the merge mode, the topmost picture on the reference picture list may be used as the reference picture.
- the prediction unit 210 may perform inter view prediction.
- the predictor 210 may construct a reference picture list including pictures of other views. For inter-view prediction, the predictor 210 may derive a disparity vector. Unlike a motion vector, which specifies a block corresponding to the current block in another picture of the current view, the disparity vector specifies a block corresponding to the current block in another view of the same access unit (AU) as the current picture.
- the prediction unit 210 may specify a depth block in a depth view based on the disparity vector, configure the merge list, and perform inter-view motion prediction, residual prediction, illumination compensation (IC), view synthesis, and the like.
- the disparity vector for the current block can be derived from the depth value using the camera parameter or from the motion vector or disparity vector of the neighboring block in the current or other view.
- the prediction unit 210 may add, to the merge candidate list, an inter-view merge candidate (IvMC) corresponding to temporal motion information of the reference view, an inter-view disparity vector candidate (IvDC) corresponding to the disparity vector, a shifted IvMC derived by shifting the disparity vector, a texture merge candidate (T) derived from the corresponding texture when the current block is a block on the depth map, a disparity derived merge candidate (D) derived from the texture merge candidate using disparity, and a view synthesis prediction merge candidate (VSP) derived based on view synthesis.
- the number of candidates included in the merge candidate list applied to the dependent view may be limited to a predetermined value.
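A minimal sketch of how such a bounded merge candidate list for the dependent view might be assembled, assuming the candidates named above (IvMC, IvDC, shifted IvMC, T, D, VSP) have already been derived and that unavailable candidates are passed as None; the limit of six and the insertion order are illustrative assumptions, not values fixed by the patent text.

```python
def build_dependent_view_merge_list(ivmc, ivdc, shifted_ivmc, t_cand, d_cand,
                                    vsp_cand, base_candidates, max_num=6):
    """Append the inter-view candidates, then the base candidates, skipping
    unavailable (None) entries, until max_num candidates are collected."""
    merge_list = []
    for cand in [ivmc, ivdc, shifted_ivmc, t_cand, d_cand, vsp_cand,
                 *base_candidates]:
        if cand is not None and len(merge_list) < max_num:
            merge_list.append(cand)
    return merge_list
```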
- the prediction unit 210 may apply inter-view motion vector prediction to predict the motion vector of the current block based on the disparity vector.
- the prediction unit 210 may derive the disparity vector based on the conversion of the maximum depth value in the corresponding depth block.
- a block including the reference sample may be used as the reference block.
- the prediction unit 210 may use the motion vector of the reference block as a candidate motion parameter or motion vector predictor candidate of the current block, and may use the disparity vector as a candidate disparity vector for disparity-compensated prediction (DCP).
- the subtraction unit 215 generates a residual sample which is a difference between the original sample and the prediction sample.
- residual samples may not be generated as described above.
- the transform unit 220 generates a transform coefficient by transforming the residual sample in units of transform blocks.
- the quantization unit 225 may quantize the transform coefficients to generate quantized transform coefficients.
- the reordering unit 230 rearranges the quantized transform coefficients.
- the reordering unit 230 may reorder the quantized transform coefficients in the form of a block into a one-dimensional vector form by scanning the coefficients.
- the entropy encoding unit 235 may perform entropy encoding on the quantized transform coefficients.
- Entropy encoding may include, for example, encoding methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC).
- the entropy encoding unit 235 may encode information necessary for video reconstruction other than the quantized transform coefficients (eg, a value of a syntax element) together or separately.
- Entropy-encoded information may be transmitted or stored in units of NAL units in the form of a bitstream.
- the dequantization unit 240 inversely quantizes the quantized transform coefficients to generate transform coefficients.
- the inverse transform unit 245 inverse transforms the transform coefficients to generate residual samples.
- the adder 250 reconstructs the picture by combining the residual sample and the predictive sample.
- the residual sample and the predictive sample may be added in units of blocks to generate a reconstructed block.
- Although the adder 250 has been described as a separate component, the adder 250 may be a part of the predictor 210.
- the filter unit 255 may apply a deblocking filter and/or an offset to the reconstructed picture. Through the deblocking filtering and/or offset, artifacts at block boundaries in the reconstructed picture or distortion in the quantization process can be corrected.
- the offset may be applied on a sample basis or may be applied after the process of deblocking filtering is completed.
- the memory 260 may store information necessary for reconstructed pictures or encoding / decoding.
- the memory 260 may store pictures used for inter prediction / inter-view prediction.
- pictures used for inter prediction / inter-view prediction may be designated by a reference picture set or a reference picture list.
- Although one encoding apparatus has been described as encoding the independent view and the dependent view, this is for convenience of description; a separate encoding apparatus may be configured for each view, or a separate internal module (for example, a prediction unit for each view) may be configured.
- the video decoding apparatus 300 includes an entropy decoding unit 310, a reordering unit 320, an inverse quantization unit 330, an inverse transform unit 340, a predictor 350, an adder 360, a filter unit 370, and a memory 380.
- the video decoding apparatus 300 may reconstruct the video in response to a process in which the video information is processed in the video encoding apparatus.
- the video decoding apparatus 300 may perform video decoding using a processing unit applied in the video encoding apparatus.
- the processing unit block of video decoding may be a coding unit block, a prediction unit block, or a transform unit block.
- the coding unit block may be split from the largest coding unit block along a quad-tree structure, and serves as a unit block of decoding.
- the prediction unit block is a block partitioned from the coding unit block and may be a unit block of sample prediction. In this case, the prediction unit block may be divided into sub blocks.
- the transform unit block may be divided from the coding unit block along a quad tree structure, and may be a unit block for deriving a transform coefficient or a unit block for deriving a residual signal from the transform coefficient.
- the entropy decoding unit 310 may parse the bitstream and output information necessary for video reconstruction or picture reconstruction. For example, the entropy decoding unit 310 may decode the information in the bitstream based on exponential Golomb, CAVLC, CABAC, or the like, and output syntax element values required for video reconstruction and quantized values of transform coefficients related to the residual.
- the bitstream may be input for each view.
- information about each view may be multiplexed in the bitstream.
- the entropy decoding unit 310 may de-multiplex the bitstream and parse for each view.
- the reordering unit 320 may rearrange the quantized transform coefficients in the form of a two-dimensional block.
- the reordering unit 320 may perform reordering in response to coefficient scanning performed by the encoding apparatus.
- the inverse quantization unit 330 may dequantize the quantized transform coefficients based on the (inverse) quantization parameter and output the transform coefficients.
- information for deriving a quantization parameter may be signaled from the encoding apparatus.
- the inverse transform unit 340 may inverse-transform the transform coefficients to derive residual samples.
- the prediction unit 350 may perform prediction on the current block and generate a prediction block including prediction samples for the current block.
- the unit of prediction performed by the prediction unit 350 may be a coding block, a transform block, or a prediction block.
- the prediction unit 350 may determine whether to apply intra prediction or inter prediction.
- a unit for determining which of intra prediction and inter prediction is to be applied and a unit for generating a prediction sample may be different.
- the unit for generating the prediction sample in inter prediction and intra prediction may also be different.
- the prediction unit 350 may derive the prediction sample for the current block based on the neighboring block pixels in the current picture.
- the prediction unit 350 may derive the prediction sample for the current block by applying the directional mode or the non-directional mode based on the peripheral reference samples of the current block.
- the prediction mode to be applied to the current block may be determined using the intra prediction mode of the neighboring block.
- the prediction unit 350 may derive the prediction sample for the current block based on the samples specified by the motion vector on the reference picture.
- the prediction unit 350 may derive a prediction sample for the current block by applying any one of a skip mode, a merge mode, and an MVP mode.
- the motion information of the neighboring block may be used as the motion information of the current block.
- the neighboring block may include a spatial neighboring block and a temporal neighboring block.
- the predictor 350 may construct a merge candidate list using motion information of available neighboring blocks, and use information indicated by the merge index on the merge candidate list as a motion vector of the current block.
- the merge index may be signaled from the encoding device.
- the motion information includes a motion vector and a reference picture. When motion information of a temporal neighboring block is used in the skip mode and the merge mode, the topmost picture on the reference picture list may be used as the reference picture.
- In the skip mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
- In the MVP mode, the motion vector of the current block may be derived using the motion vector of a neighboring block as a motion vector predictor (MVP).
- the neighboring block may include a spatial neighboring block and a temporal neighboring block.
- the prediction unit 350 may perform inter view prediction.
- the prediction unit 350 may configure a reference picture list including pictures of other views.
- the predictor 350 may derive a disparity vector.
- the prediction unit 350 may specify a depth block in a depth view based on the disparity vector, configure the merge list, and perform inter-view motion prediction, residual prediction, illumination compensation (IC), view synthesis, and the like.
- the disparity vector for the current block can be derived from the depth value using the camera parameter or from the motion vector or disparity vector of the neighboring block in the current or other view.
- Camera parameters may be signaled from the encoding device.
- the prediction unit 350 may add, to the merge candidate list, an IvMC corresponding to temporal motion information of the reference view, an IvDC corresponding to the disparity vector, a shifted IvMC derived by shifting the disparity vector, a texture merge candidate (T) derived from the corresponding texture when the current block is a block on the depth map, a disparity derived merge candidate (D) derived from the texture merge candidate using disparity, and a view synthesis prediction merge candidate (VSP) derived based on view synthesis.
- the number of candidates included in the merge candidate list applied to the dependent view may be limited to a predetermined value.
- the prediction unit 350 may apply inter-view motion vector prediction to predict the motion vector of the current block based on the disparity vector.
- the prediction unit 350 may use a block in the reference view specified by the disparity vector as the reference block.
- the prediction unit 350 may use the motion vector of the reference block as a candidate motion parameter or motion vector predictor candidate of the current block, and use the disparity vector as a candidate disparity vector for DCP.
- the adder 360 may reconstruct the current block or the current picture by adding the residual sample and the predictive sample.
- the adder 360 may reconstruct the current picture by adding the residual sample and the predictive sample in block units. Since the residual is not transmitted when the skip mode is applied, the prediction sample may be a reconstruction sample.
- Although the adder 360 has been described as a separate component, the adder 360 may be a part of the predictor 350.
- the filter unit 370 may apply deblocking filtering and / or offset to the reconstructed picture.
- the offset may be adaptively applied as an offset in a sample unit.
- the memory 380 may store information necessary for picture reconstruction or decoding.
- the memory 380 may store pictures used for inter prediction / inter-view prediction.
- pictures used for inter prediction / inter-view prediction may be designated by a reference picture set or a reference picture list.
- the reconstructed picture can be used as a reference picture.
- the memory 380 may output the reconstructed picture in the output order.
- the output unit may display a plurality of different views.
- each decoding apparatus may operate for each view, or an operation unit (e.g., a prediction unit) corresponding to each view may be provided in one decoding apparatus.
- the encoding apparatus and the decoding apparatus may improve the efficiency of video coding for the current view using the coded data of another view belonging to the same access unit (AU) as the current picture.
- pictures having the same POC may be referred to as one AU.
- the POC corresponds to the display order of the pictures.
- the encoding apparatus and the decoding apparatus may code views in units of AUs, and may code pictures in units of views. Coding proceeds between views according to a predetermined order.
- the first coded view may be referred to as a base view or an independent view.
- a view that can be coded by referencing another view after the independent view is coded can be called a dependent view.
- another view referred to in coding (encoding / decoding) of the current view may be referred to as a reference view.
- FIG. 4 is a diagram schematically illustrating inter view coding.
- coding is performed in units of AU, where V0 is an independent view and V1 is a dependent view.
- inter-picture prediction that refers to another picture 430 of the same view using the motion vector 440 may be referred to as motion-compensated prediction (MCP).
- inter-picture prediction that refers to the picture 420 of another view in the same access unit, that is, with the same POC, using the disparity vector 450 may be referred to as disparity-compensated prediction (DCP).
- a depth map may be used in addition to a method of using pictures of other views.
- FIG. 5 schematically illustrates a multi-view coding method using a depth map.
- a block (current block) 505 of the current picture 500 in the current view may be coded (encoded / decoded) using the depth map 510.
- the depth value d of the position (x, y) of the sample 520 in the depth map 510, corresponding to the position (x, y) of the sample 515 in the current block 505, may be converted into the disparity vector 525.
- the depth value d can be derived based on the distance between the sample (pixel) and the camera.
- the encoding apparatus and the decoding apparatus may add the disparity vector 525 to the position (x, y) of the sample 530 to determine the position of the reference sample 535 in the picture 540 of the reference view.
- Since the disparity vector may have only an x-axis component, the value of the disparity vector may be (disp, 0), and the position (xr, y) of the reference sample may be determined as (x + disp, y).
- the encoding apparatus and the decoding apparatus may use the motion parameter of the reference block 545 including the reference pixel 535 as a candidate motion parameter of the current block. For example, if the reference picture 550 in the reference view is the reference picture for the reference block 545, the motion vector 555 of the reference block 545 may be derived into the motion vector 560 of the current block 505. In this case, the picture 565 is a reference picture in the current view.
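A small sketch of locating the reference sample with an x-only disparity vector as described above, i.e. DV = (disp, 0) mapping (x, y) to (x + disp, y); the clipping to the picture bounds is an assumption, mirroring the boundary clipping the text mentions later for depth-map projection.

```python
def reference_sample_position(x, y, disp, pic_width, pic_height):
    """Return (xr, y) = (x + disp, y), clipped to the reference picture."""
    xr = min(max(x + disp, 0), pic_width - 1)
    yr = min(max(y, 0), pic_height - 1)
    return xr, yr
```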
- the disparity vector may be used to refer to information of another view.
- When a neighboring block of the current block is a DCP coded block, the disparity vector of the DCP coded block may be used as the disparity vector to be applied to the current block. The disparity vector derived from the neighboring block, that is, the disparity vector of the DCP coded block, may be used for inter-view motion prediction (IVMP) and inter-view residual prediction (IVRP) of the current block in the motion vector prediction (MVP) or advanced motion vector prediction (AMVP) mode, the merge mode, or the SKIP mode.
- Among the MCP coded blocks, a block whose motion vector is predicted by the IVMP method is called a DV-MCP block.
- FIG. 6 is a diagram schematically illustrating a DV-MCP block.
- FIG. 6 illustrates a case of inter-predicting the current block 620 in the current picture 610 of the current view.
- the motion vector MV1 of the neighboring block 630 used for inter prediction of the current block 620 is derived from the corresponding block 650 of the reference picture 640 in the base view.
- the corresponding block is specified by the disparity vector DV 660.
- the motion vector MV1 of the neighboring block 630 may be set to or derived from the motion vector MV2 of the corresponding block 650.
- the POCs of the reference picture 640 and the current picture 610 in the base view may be the same.
- the neighboring block 630 to which the motion vector MV1 predicted from the motion vector MV2 of the corresponding block 650 in another view is applied may be referred to as a DV-MCP block.
- the encoding apparatus and the decoding apparatus may store the information of the disparity vector used for the motion vector prediction of the DV-MCP block and use it in the process of deriving the disparity vector of the neighboring block.
- FIG. 7 is a diagram schematically illustrating an example of neighboring blocks of a current block.
- the neighboring blocks of FIG. 7 are blocks that are already decoded at the time of decoding the current block and are accessible.
- the neighboring blocks of the current block 710 include the spatial neighboring blocks A0, A1, B0, B1, B2 and the temporal neighboring blocks col-CTR (col-center) and col-RB (col-right bottom).
- the spatial neighboring blocks are each specified based on the position of the current block 710.
- temporal neighboring blocks may be specified based on a position 720 corresponding to the current block in a collocated picture, which is one of the reference pictures.
- The coding block including the pixel located at the center of the region 720 corresponding to the current block in the collocated picture designated at the time of decoding the current picture or the current slice becomes col-CTR.
- When the position of the bottom-right pixel of the region 720 is (x, y), the coding block including the pixel at the (x + 1, y + 1) position becomes col-RB.
- In the following, col-CTR may also be expressed as CTR and col-RB as BR.
- the collocated picture may be one temporal reference picture, selected for temporal disparity vector derivation, among the reference pictures included in the reference picture list of the current picture or the current slice.
- the collocated picture may be indicated to the decoder through the slice header. For example, information indicating which picture is to be used as the collocated picture may be signaled in the slice header.
- As one method (method 1), the encoding apparatus and/or the decoding apparatus searches the temporal and spatial neighboring blocks in a predetermined order, and when a searched block is a DCP block, returns the disparity vector of the DCP block and terminates the disparity vector derivation process.
- Whether a neighboring block is a DCP block may be determined as follows. Suppose the prediction mode of the neighboring block is inter prediction or skip mode, and let the POC and view ID of its L0 or L1 reference picture be neighbor_ref_pocX and neighbor_ref_vidX, respectively (X is 0 or 1). If the POC of the reference picture (neighbor_ref_pocX) is the same as the POC of the current picture and the view ID of the reference picture (neighbor_ref_vidX) is different from the view ID of the current picture, the neighboring block may be determined to be a DCP block.
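A minimal sketch of this DCP-block test, assuming a simple container for the neighboring block's reference-picture POCs and view IDs (neighbor_ref_pocX, neighbor_ref_vidX); the NeighborInfo structure is an illustrative assumption, not an interface defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class NeighborInfo:
    is_inter_or_skip: bool
    ref_poc: dict = field(default_factory=dict)  # X (0 or 1) -> neighbor_ref_pocX
    ref_vid: dict = field(default_factory=dict)  # X (0 or 1) -> neighbor_ref_vidX

def is_dcp_block(nb, cur_poc, cur_vid):
    """DCP: a reference picture with the same POC but a different view ID."""
    if not nb.is_inter_or_skip:
        return False
    return any(nb.ref_poc.get(x) == cur_poc and nb.ref_vid.get(x) != cur_vid
               for x in (0, 1))
```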
- the encoding apparatus and / or the decoding apparatus searches for the temporal neighboring blocks and the spatial neighboring blocks in a predetermined order and checks whether they are DV-MCP blocks. If the searched neighboring block is a DV-MCP block, the encoding device and / or the decoding device returns a disparity vector stored in the DV-MCP block and ends the disparity vector derivation process.
- the order of searching for neighboring blocks in (1-1) and (1-2) described above may be variously set.
- For example, the neighboring blocks may be searched in the order A1, B1, A0, B0, B2, col-CTR, col-RB, or in the order A0, A1, B0, B1, B2, col-CTR, col-RB.
- the encoding apparatus and/or the decoding apparatus may use only predetermined neighboring blocks without searching all the neighboring blocks. For example, among the neighboring blocks of FIG. 7, only the upper block B1 and the left block A1 of the current block may be used as the spatial neighboring blocks, and only the block col-CTR located at the center of the region 720 corresponding to the current block in the collocated picture may be used as the temporal neighboring block.
- the encoding apparatus and the decoding apparatus may search the neighboring blocks and derive the disparity vector in a predetermined order. For example, the temporal neighboring blocks may be searched first and then the spatial neighboring blocks. For the spatial neighboring blocks, the left block may be searched first and then the upper block. Alternatively, the encoding apparatus and the decoding apparatus may derive the disparity vector according to a search order set in advance.
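A hedged sketch of the first-hit search just described: scan the neighbors in one of the fixed orders listed above, return the disparity vector of the first DCP block found, otherwise fall back to the first stored DV-MCP disparity vector, and finally to zero disparity; the block attribute names (is_dcp, dv, is_dv_mcp, stored_dv) are assumptions for the sketch.

```python
SEARCH_ORDER = ["A1", "B1", "A0", "B0", "B2", "col-CTR", "col-RB"]

def nbdv(neighbors):
    """neighbors: dict mapping position name -> block object or None."""
    for pos in SEARCH_ORDER:                    # pass 1: DCP blocks
        blk = neighbors.get(pos)
        if blk is not None and blk.is_dcp:
            return blk.dv                       # terminate on the first hit
    for pos in SEARCH_ORDER:                    # pass 2: DV-MCP blocks
        blk = neighbors.get(pos)
        if blk is not None and blk.is_dv_mcp:
            return blk.stored_dv
    return (0, 0)                               # zero-disparity fallback
```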
- As another method (method 2), the encoding apparatus and/or the decoding apparatus finds the DCP blocks among the neighboring blocks. If more than one DCP block is found, the disparity vector having the largest absolute value of the horizontal component among the disparity vectors of the DCP blocks is returned, and the disparity vector derivation process ends. For example, suppose that the DCP blocks found among the neighboring blocks and the disparity vector of each DCP block have the relationship shown in Table 1.
- In Table 1, the second row of each column is initialized to X (false) and stores the value O (true) if the neighboring block is a DCP block, and the third row records the disparity vector (i.e., the motion vector used for DCP) of that block.
- Once the information of the disparity vector is obtained, the next block is not searched. For example, if the col-CTR block of the two temporal neighboring blocks col-CTR and col-RB is a DCP block and the information of the disparity vector is obtained from the col-CTR block, the additional search, that is, the search of the col-RB block, does not proceed.
- In the example of Table 1, the disparity vector of the B0 block, which has the largest absolute value of the horizontal component, is set as the disparity vector of the current block, and the derivation process of the disparity vector ends.
- If no DCP block is found, the encoding apparatus and/or the decoding apparatus may search for DV-MCP blocks. If more than one DV-MCP block is found, the encoding apparatus and/or the decoding apparatus returns the disparity vector having the largest absolute value of the horizontal component among the disparity vectors stored in the DV-MCP blocks, and the disparity vector derivation process is terminated.
- Table 2 shows an example of collecting DV-MCP block information from the neighboring blocks. In Table 2, the largest absolute value of the horizontal component of the disparity vector is 10, for B2. Accordingly, the encoding apparatus and/or the decoding apparatus may derive the disparity vector of the B2 block as the disparity vector of the current block and terminate the disparity vector derivation process.
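A minimal sketch of the selection rule used in the Table 1 and Table 2 examples: among the disparity vectors collected from the DCP (or DV-MCP) neighbors, return the one whose horizontal component has the largest absolute value.

```python
def select_max_horizontal_dv(dv_candidates):
    """dv_candidates: list of (dv_x, dv_y) tuples collected from neighbors."""
    if not dv_candidates:
        return None
    return max(dv_candidates, key=lambda dv: abs(dv[0]))
```

For example, select_max_horizontal_dv([(7, 0), (-10, 0), (3, 0)]) returns (-10, 0), since 10 is the largest absolute horizontal component.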
- The aforementioned method 1 and method 2 may be combined to derive the disparity vector of the current block from the neighboring blocks.
- the encoding apparatus and / or the decoding apparatus finds DCP blocks only among spatial neighboring blocks. When a plurality of DCP blocks are found, the disparity vector having the largest absolute value of the horizontal component among the disparity vectors of each DCP block is returned and the disparity vector derivation process is terminated.
- the encoding device and / or the decoding device searches the temporal neighboring blocks in a predetermined order to find the DCP block. If the DCP block is found, the disparity vector for the DCP is returned and the disparity vector derivation process ends.
- the encoding apparatus and / or the decoding apparatus finds DV-MCP blocks among spatial neighboring blocks. When a plurality of DV-MCP blocks are found, the disparity vector having the largest absolute value of the horizontal component among the disparity vectors stored in each DV-MCP block is returned and the disparity vector derivation process is terminated.
- the encoding device and / or the decoding device searches the temporal neighboring blocks in a predetermined order to find the DV-MCP block. If the DV-MCP block is found, the disparity vector stored in the DV-MCP block is returned, and the disparity vector derivation process ends.
- In another example, the aforementioned method 1 and method 2 may be combined in a different order to derive the disparity vector of the current block from the neighboring blocks.
- In this case, the encoding apparatus and/or the decoding apparatus first finds DCP blocks among the temporal neighboring blocks. When a plurality of DCP blocks are found, the encoding apparatus and/or the decoding apparatus sets the disparity vector having the largest absolute value of the horizontal component among the disparity vectors of the DCP blocks as the disparity vector of the current block, and may terminate the disparity vector derivation process.
- the encoding device and / or the decoding device searches for the spatial neighboring blocks in a predetermined order to find the DCP block.
- the encoding apparatus and / or the decoding apparatus outputs the disparity vector of the found DCP block as the disparity vector of the current block and terminates the disparity vector derivation process.
- the encoding apparatus and / or the decoding apparatus finds DV-MCP blocks among spatial neighboring blocks. When a plurality of DV-MCP blocks are found, the disparity vector having the largest absolute value of the horizontal component among the disparity vectors stored in each DV-MCP block is returned and the disparity vector derivation process is terminated.
- the encoding apparatus and/or the decoding apparatus may use only predetermined neighboring blocks without searching all the neighboring blocks. For example, among the neighboring blocks of FIG. 7, only the upper block B1 and the left block A1 of the current block may be used as the spatial neighboring blocks, and only the block col-CTR located at the center of the region 720 corresponding to the current block in the collocated picture may be used as the temporal neighboring block.
- The process of deriving the disparity vector from the neighboring blocks as described above may be referred to simply as NBDV (disparity vector from neighboring blocks).
- the encoding device and / or the decoding device may give higher priority to the information of the DCP block than the information of the DV-MCP block.
- The disparity vector derived through the above derivation process is a disparity vector selected by referring to the neighboring blocks of the current CU (block).
- Since this is based on the assumption that the neighboring block is similar to the current block, if the similarity between the block from which the disparity is derived and the current block (e.g., the current coding block) falls, the accuracy of the disparity may also be lowered.
- If no disparity is obtained by searching the neighboring blocks, the encoding apparatus and/or the decoding apparatus may use zero disparity.
- Since the method of deriving disparity from the neighboring block uses the disparity derived from the neighboring block as the disparity for the current block, there may be a difference between the disparity used in the prediction process and the actual disparity.
- the disparity may be corrected using a depth map previously decoded in the neighboring view.
- When coding the texture of a dependent view, the depth map of the view to be referred to (e.g., the base view) is accessible. Since the disparity can be calculated from the pixel values of the depth map and the camera parameters, the disparity of the current block derived from the neighboring block can be corrected by using the depth map of the referenced view in the process of deriving the disparity (disparity vector) for coding the texture of the dependent view.
- FIG. 8 is a diagram schematically illustrating correcting a disparity vector derived from a neighboring block using a depth.
- A method of correcting the disparity vector of the current block proceeds in the following steps (i) to (v).
- (i) The encoding apparatus and/or the decoding apparatus may derive the disparity of the current block 810 in the current texture picture T1 800 from the neighboring blocks.
- (ii) The encoding apparatus and/or the decoding apparatus may use the disparity derived in (i) to specify a location on the neighboring view corresponding to the location on the current view. That is, the encoding apparatus and/or the decoding apparatus may project the sample position on the current view onto the depth map D0 830 of the neighboring view using the disparity 820. If the mapped position lies outside the depth map, the encoding apparatus and/or the decoding apparatus may clip the mapped position to the boundary of the depth map. In this case, the depth map D0 830 is already coded, and the encoding apparatus and/or the decoding apparatus may use the depth information of the depth map D0 830.
- (iii) The encoding apparatus and/or the decoding apparatus may regard the depth block at the corresponding location on the reference view 840 as a virtual depth block 850 of the block to be coded (the current block 810).
- (iv) The encoding apparatus and/or the decoding apparatus finds the pixel having the largest value among the four corner pixel values of the virtual depth block 850.
- (v) The encoding apparatus and/or the decoding apparatus converts the pixel value found in (iv) into a disparity. That is, the encoding apparatus and/or the decoding apparatus may convert the largest of the four corner pixel values of the virtual depth block 850 into a disparity. In this case, the encoding apparatus and/or the decoding apparatus may derive the disparity vector using a depth lookup table.
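A minimal sketch of steps (iv) and (v), assuming the virtual depth block is available as a 2D array and that depth_to_disparity stands in for the depth-lookup-table conversion mentioned above (the earlier conversion sketch could serve that role).

```python
def refine_disparity(virtual_depth_block, depth_to_disparity):
    """virtual_depth_block: 2D list of depth samples at the projected position."""
    h, w = len(virtual_depth_block), len(virtual_depth_block[0])
    corners = [virtual_depth_block[0][0],     virtual_depth_block[0][w - 1],
               virtual_depth_block[h - 1][0], virtual_depth_block[h - 1][w - 1]]
    return depth_to_disparity(max(corners))  # largest corner depth -> disparity
```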
- The refinement, using the virtual depth, of the disparity vector derived from the neighboring block may be referred to simply as DoNBDV (depth oriented neighboring block based disparity vector).
- the disparity vector obtained through the DoNBDV process may be obtained using the disparity derived in the NBDV process.
- In this way, the current block (e.g., CU) may have two or more pieces of disparity vector information through NBDV and DoNBDV.
- Depending on the image, the disparity vector of the current block may be well represented by the disparity vector derived from the neighboring block, that is, NBDV, or the disparity vector corrected by the depth value, that is, DoNBDV, may better express the disparity vector of the current block. Therefore, in the 3D video compression/reconstruction process, encoding efficiency may be increased by selectively using one of the plurality of disparity vectors held for each CU.
- The disparity vector information of a CU unit may be used for inter-view prediction techniques such as inter-view motion parameter prediction (IVMC), advanced residual prediction (ARP), and view synthesized prediction (VSP).
- the encoding apparatus and/or the decoding apparatus may fixedly use NBDV or DoNBDV, or may selectively use, of the two, the information of the disparity vector having the higher coding efficiency.
- For example, the encoding apparatus and/or the decoding apparatus may use the disparity vector of the larger magnitude, NBDV or DoNBDV, as the disparity vector of the current block without additional information (e.g., a flag indicating which disparity vector to use).
- Alternatively, the encoding apparatus and/or the decoding apparatus may use the disparity vector of the smaller magnitude, NBDV or DoNBDV, as the disparity vector of the current block without additional information (e.g., a flag indicating which disparity vector to use).
- the encoding apparatus and / or the decoding apparatus may selectively use NBDV or DoNBDV according to the positional relationship between the reference view and the current view.
- Alternatively, the encoding apparatus may directly signal, using flag information, which disparity vector was used for encoding the current block.
- the encoding apparatus may transmit a flag indicating whether to use NBDV or DoNBDV for decoding the current block.
- the decoding apparatus may decode the current block by using the disparity vector indicated by the flag.
- In this way, disparity vectors having different characteristics can be obtained, so that the disparity vector of the current block can be effectively predicted.
- the disparity vector of the current block may be derived more accurately in consideration of both NBDV and DoNBDV.
- an enhanced disparity vector can be derived from the disparity vector.
- the derivation process of the enhanced disparity vector is as follows.
- (1) The encoding apparatus and/or the decoding apparatus may derive the disparity vector DV_NBDV from the neighboring blocks of the current block (e.g., CU) through the NBDV process.
- (2) The encoding apparatus and/or the decoding apparatus may derive the disparity vector DV_DoNBDV of the current block through the DoNBDV process using the depth value.
- (3) Unlike the NBDV process described above, the encoding apparatus and/or the decoding apparatus does not terminate the process even when the first disparity vector is derived in the NBDV process, but derives disparity vectors from all available blocks among all the candidate blocks, forming a candidate group DV_NBDV(x).
- (4) The encoding apparatus and/or the decoding apparatus may calculate the absolute difference between each member of the disparity vector candidate group DV_NBDV(x) derived through the NBDV process and DV_DoNBDV.
- (5) The encoding apparatus and/or the decoding apparatus may set, as the new disparity vector DV_NBDV_NEW, the disparity vector having the smallest absolute difference from DV_DoNBDV among the candidate group DV_NBDV(x), as in Equation 1: DV_NBDV_NEW = argmin over x of |DV_NBDV(x) - DV_DoNBDV|.
- The DV_NBDV_NEW obtained by the method of (1) to (5) may be used by the encoding apparatus and/or the decoding apparatus in place of DV_DoNBDV.
- Alternatively, the encoding apparatus and/or the decoding apparatus may use DV_NBDV_NEW as DV_DoNBDV.
- If DV_NBDV_NEW and DV_NBDV are different, the encoding apparatus and/or the decoding apparatus may select one of the following (1) and (2) and use it as the disparity vector or motion vector predictor applied to generate the prediction sample of the current block: (1) one selected from DV_NBDV_NEW and DV_DoNBDV; (2) DV_NBDV.
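A minimal sketch of Equation 1 as reconstructed above: among the horizontal disparities DV_NBDV(x) gathered from all available candidate blocks, pick the one with the smallest absolute difference from DV_DoNBDV.

```python
def derive_dv_nbdv_new(dv_nbdv_candidates, dv_donbdv):
    """dv_nbdv_candidates: horizontal disparities from all available blocks."""
    return min(dv_nbdv_candidates, key=lambda dv: abs(dv - dv_donbdv))
```

For example, derive_dv_nbdv_new([3, -10, 8], 7) returns 8, the candidate closest to DV_DoNBDV = 7.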
- the memory capacity of the encoding apparatus and / or the decoding apparatus may be considered in the process of deriving the disparity vector from the neighboring block.
- For example, the disparity vector of a block coded by DV-MCP (i.e., IvMC) in the vicinity of the current block may be used as a candidate disparity vector for the current block.
- FIG. 9 is a diagram schematically illustrating a method of searching for NBDV using a DV-MCP block.
- the motion vector 930 of the neighboring block 920 may be derived from the corresponding block 940 of the base view referred to by the current block C 910.
- the motion vector 930 of the neighboring block 920 may be set to be the same as the motion vector mv1 950 of the corresponding block 940.
- the motion vector mv1 950 of the corresponding block 940 may specify the reference block 970 in the picture at another time t1.
- Corresponding block 940 may be specified by disparity vector dv1 960.
- the encoding device and / or the decoding device may use dv1 960 as a candidate for deriving the NBDV of the current block C 910.
- the encoding apparatus and / or the decoding apparatus should store disparity vector information (eg, information about dv1 of FIG. 9) of a block coded (encoded / decoded) by DV-MCP during the encoding / decoding process.
- Conventionally, the disparity vector information of the DV-MCP block is stored in CU units. Since the size of the minimum CU is 8x8 pixels, the information of the disparity vector must be storable for every 8x8 block, which is a problem to be considered in hardware implementation: the memory must be sized so that a disparity vector can be stored for every 8x8 block.
- If the disparity vector of the DV-MCP block is instead stored in units of 16x16 pixels, the memory size for storing the disparity vector information of the DV-MCP block can be reduced to 1/4.
- in this case, the disparity vector value of a DV-MCP block is extended to its periphery, which may have the effect that the NBDV value propagates further than in the conventional method.
- the present invention is not limited to storing the disparity vector in units of 16x16 pixels.
- the disparity vector may also be stored in units larger than 16x16 pixels (e.g., 32x32 pixels or 64x64 pixels), in which case the required memory size can be further reduced.
- in addition, the encoding apparatus and/or the decoding apparatus may store the disparity vector in non-square units such as 32x16 pixels or 16x8 pixels instead of square units.
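A minimal sketch of such a storage scheme, assuming a hypothetical grid keyed by storage-unit coordinates (the unit size is a parameter, so square and non-square units are both covered):

```python
class DvStore:
    """Stores one disparity vector per unit_w x unit_h pixel unit."""
    def __init__(self, unit_w=16, unit_h=16):
        self.unit_w, self.unit_h = unit_w, unit_h
        self.grid = {}

    def put(self, x, y, dv):
        # All pixels inside one unit share a single stored vector, so a
        # 16x16 unit needs 1/4 of the memory of an 8x8-granularity store.
        self.grid[(x // self.unit_w, y // self.unit_h)] = dv

    def get(self, x, y):
        return self.grid.get((x // self.unit_w, y // self.unit_h))
```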
- reducing the amount of transmitted information can be considered as a way to increase coding efficiency.
- the encoding apparatus may use a method of transmitting only a motion vector difference between the motion vector of the current block and the motion vector prediction value.
- the decoding apparatus obtains a motion vector prediction value for the current block (the current processing unit, e.g., a CU or PU) using the motion information of other decoded units (e.g., CUs or PUs), and obtains the motion vector value for the current block using the transmitted difference value.
- FIG. 10 is a diagram schematically illustrating an example of neighboring blocks that may be used for motion vector prediction.
- FIG. 10 illustrates a specific example of neighboring blocks of the current block when coding the current view.
- the positions of the spatial neighboring blocks A0, A1, B0, B1, and B2 of the current block 1000 may be specified based on the location of the current block 1000.
- the spatial neighboring blocks may consist of the lower-left block A0 of the current block, the left block A1, the upper-right block B0, the upper block B1, and the upper-left block B2.
- the temporal neighboring block T of the current block 1000 may be specified.
- FIG. 11 is a diagram schematically illustrating a temporal neighboring block of a current block.
- the temporal neighboring block T of the current block illustrated in FIG. 10 may be specified based on the position 1010 of the current block within the collocated picture designated for decoding the current picture or the current slice.
- based on the position 1010 of the current block, a center block CTR and a bottom-right block RB may be specified. If the bottom-right block RB is available, RB is used as the temporal neighboring block of the current block; if RB is not available, the center block CTR may be used as the temporal neighboring block of the current block.
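A minimal sketch of this availability rule (hypothetical arguments; `None` marks an unavailable block):

```python
def temporal_neighbor(rb, ctr):
    # RB is preferred; CTR is the fallback when RB is unavailable.
    return rb if rb is not None else ctr
```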
- the neighboring blocks of the current block illustrated in FIGS. 10 and 11 are blocks already decoded at the time of decoding the current block, and correspond to neighboring blocks of the current block described with reference to FIG. 7.
- the encoding apparatus and the decoding apparatus may construct a motion vector predictor list (MVP list) using the motion vectors of the neighboring blocks of the current block (e.g., a coding block or a prediction block).
- the encoding apparatus may transmit an index indicating the motion vector prediction value on the MVP list with the best coding efficiency, together with the motion vector difference value to be applied when the prediction value indicated by that index is used.
- the decoding apparatus may receive the index value indicating which motion vector prediction value on the MVP list to use, together with the motion vector difference value.
- the decoding apparatus may reconstruct the motion vector to be used for the current block by using the motion vector prediction value indicated by the index and the difference value. For example, the decoding apparatus may derive the motion vector of the current block based on the motion vector prediction value plus the difference value.
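As a minimal sketch of this reconstruction (hypothetical names; motion vectors as (x, y) tuples):

```python
def reconstruct_mv(mvp_list, mvp_idx, mvd):
    # MV = predictor selected by the signaled index + transmitted difference
    mvp = mvp_list[mvp_idx]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```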
- the encoding device and/or the decoding device may use the disparity vector of a reference view that has already been decoded.
- for example, the encoding apparatus and/or the decoding apparatus may add, to the MVP list, a motion vector prediction value that uses the motion vector of the corresponding block specified by the disparity vector in the reference view.
- the encoding apparatus and/or the decoding apparatus may generate the motion vector prediction value in the following manner according to the coding mode of the corresponding block.
- (1-i) when the reference picture of the current picture is a temporal reference picture, i.e., a picture of the same view as the current picture, the encoding apparatus and/or the decoding apparatus may add the motion vector of the corresponding block to the MVP list as a motion vector predictor (MVP) candidate of the current block.
- when POC1, the POC value of the reference picture referred to by the corresponding block, differs from POC2, the POC of the reference picture referenced by the current block, scaling of the motion vector of the corresponding block may additionally be performed. With POC0 denoting the POC of the current picture, the motion vector MV of the corresponding block may be scaled to MV * ((POC2 - POC0) / (POC1 - POC0)), as sketched below.
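A minimal floating-point sketch of this scaling (real codecs use clipped fixed-point arithmetic, and POC1 != POC0 is assumed so the temporal distance is nonzero):

```python
def scale_mv(mv, poc0, poc1, poc2):
    # Rescale the corresponding block's MV from temporal distance
    # (POC1 - POC0) to the current block's distance (POC2 - POC0).
    s = (poc2 - poc0) / (poc1 - poc0)
    return (round(mv[0] * s), round(mv[1] * s))
```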
- (1-ii) in other cases, a motion vector prediction value may be generated as follows.
- (1-ii-1) the encoding device and/or the decoding device may use a zero vector, i.e., (0, 0), as the motion vector predictor.
- alternatively, the encoding apparatus and/or the decoding apparatus may find an MCP-coded block among the neighboring blocks of the corresponding block and use its motion vector as the motion vector prediction value.
- the encoding apparatus and/or the decoding apparatus may find the MCP-coded block by searching the neighboring blocks of the corresponding block in various orders. For example, the encoding device and/or the decoding device may search the neighboring blocks in the order A1, B1, A0, B0, B2, col-CTR, col-RB, or in the order A0, A1, B0, B1, B2, col-CTR, col-RB.
- alternatively, the encoding device and/or the decoding device may use the disparity value predicted through the disparity vector derivation process. For example, since the vertical component of the disparity vector is zero, the encoding device and/or the decoding device may add (disp, 0), where disp is the disparity value predicted through the disparity vector derivation process, to the MVP list as a motion vector predictor candidate.
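A minimal sketch combining these alternatives (the ordering of the fallbacks and the dictionary shape are assumptions; the text presents them as options rather than a fixed sequence):

```python
SEARCH_ORDER = ("A1", "B1", "A0", "B0", "B2", "col-CTR", "col-RB")

def fallback_mvp(corr_neighbors, disp=None):
    """corr_neighbors: dict mapping name -> {"is_mcp": bool, "mv": (x, y)}."""
    for name in SEARCH_ORDER:
        blk = corr_neighbors.get(name)
        if blk and blk["is_mcp"]:
            return blk["mv"]      # MCP-coded neighbor of the corresponding block
    if disp is not None:
        return (disp, 0)          # vertical disparity component taken as zero
    return (0, 0)                 # zero-vector fallback (1-ii-1)
```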
- since the index (refIdx) value of the reference picture is decoded before the index value of the MVP list, it can be determined whether the reference picture is a reference picture of the same view or a reference picture of another view. Using this, an efficient motion vector prediction value can be derived.
- in AMVP (Advanced Motion Vector Prediction), the encoding apparatus and/or the decoding apparatus may find the corresponding block of the neighboring view using the disparity value derived from the neighboring blocks, as described above.
- the encoding device and/or the decoding device may check whether the corresponding block is coded with MCP.
- if it is, the encoding apparatus and/or the decoding apparatus may add the motion vector of the corresponding block to the MVP list as one candidate for the motion vector prediction value of the current block.
- in addition, the encoding device and/or the decoding device may add (disp, 0), where disp is the disparity value derived from the neighboring blocks, to the MVP list as a motion vector prediction value candidate.
- the MVP list may be configured in various ways.
- hereinafter, embodiments of constructing an MVP list when the current block can refer to pictures of another view will be described in detail.
- in the following, the neighboring blocks are the neighboring blocks shown in FIGS. 10 and 11.
- a motion vector predictor for the current block may mean a disparity vector. Accordingly, the encoding apparatus and/or the decoding apparatus may refine the disparity vector found from the neighboring block by using a base view depth value. That is, the encoding device and/or the decoding device may refine the disparity vector derived by the NBDV process through the DoNBDV process.
- for example, the encoding apparatus and/or the decoding apparatus refines the disparity vector derived by the NBDV process through the DoNBDV process.
- suppose the disparity vector derived through the NBDV process comes from neighboring blocks A0 and B0; the derived disparity vector is then modified through DoNBDV.
- in this way, the encoding apparatus and/or the decoding apparatus may refine a motion vector predictor candidate (AMVP candidate) included in the MVP list through the DoNBDV process and use it as a final AMVP candidate.
- the motion vector prediction value for the current block means a disparity vector. Therefore, when the MVP list does not yet contain the maximum number of motion vector prediction value candidates for the current block, the encoding apparatus and/or the decoding apparatus may update a disparity vector already added to the MVP list through the DoNBDV process and add the updated vector to the MVP list.
- alternatively, the encoding apparatus and/or the decoding apparatus may apply the DoNBDV process to the zero vector and add the result to the MVP list.
- FIG. 13 is a diagram schematically illustrating another example of configuring an MVP list according to the present invention.
- referring to FIG. 13, the encoding device and/or the decoding device constructs an MVP list with the available disparity vector(s) derived by the NBDV process; when a further candidate should be added to the MVP list, a new candidate may be derived by applying the DoNBDV process to an already added candidate.
- here, the case in which a disparity vector available from the neighboring block A0 is derived through the NBDV process is described as an example.
- when the maximum number of candidates in the MVP list is 2, the encoding device and/or decoding apparatus may derive a refined candidate disparity vector A0' by applying the DoNBDV process to the disparity vector A0 derived from block A0.
- the same method may be applied when only the two spatial neighboring blocks A1 and B1 are used. If the neighboring block B1 is not available, the encoding device and/or the decoding device may configure the final AMVP candidates (the MVP list) with the disparity vector A1 derived from block A1 and the disparity vector A1' derived by applying DoNBDV to A1.
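A minimal sketch of this list construction, assuming at most two candidates and a hypothetical `donbdv_refine` function standing in for the DoNBDV process:

```python
MAX_CANDIDATES = 2

def build_mvp_list(nbdv_candidates, donbdv_refine):
    # Fill with the available NBDV-derived disparity vectors first (e.g.,
    # from A0 or A1); if the list is short, refine an existing entry with
    # DoNBDV to create a new candidate (A0 -> A0', A1 -> A1').
    mvp = [dv for dv in nbdv_candidates if dv is not None][:MAX_CANDIDATES]
    for dv in list(mvp):
        if len(mvp) >= MAX_CANDIDATES:
            break
        refined = donbdv_refine(dv)
        if refined not in mvp:
            mvp.append(refined)
    return mvp
```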
- the motion vector prediction value for the current block may mean a disparity vector. Therefore, the encoding apparatus and/or the decoding apparatus may configure the MVP list with the disparity vector obtained through the NBDV process and the disparity vector updated through the DoNBDV process. In this case, the disparity vector derived from the neighboring block may be one already derived in a previous encoding or decoding process, or one derived during the encoding/decoding of the current block.
- in other words, the final AMVP candidates to be used for the current block, that is, the MVP list, are composed of the NBDV and the DoNBDV derived from the neighboring blocks, not of the motion vectors of the neighboring blocks.
- the motion vector prediction value for the current block may mean a disparity vector. Therefore, when the MVP list does not yet contain the maximum number of motion vector prediction value candidates for the current block, the encoding apparatus and/or the decoding apparatus may add the disparity vector derived through the NBDV process to the MVP list. In addition, the encoding apparatus and/or the decoding apparatus may update that disparity vector through the DoNBDV process and add the updated vector to the MVP list.
- FIG. 15 is a diagram schematically illustrating another example of configuring an MVP list according to the present invention.
- here, the case in which an MVP list includes two final motion vector predictor candidates is described as an example.
- referring to FIG. 15, the encoding device and/or decoding device may add, to the MVP list, the disparity vector derived through the NBDV process on the neighboring blocks or the disparity vector derived through the DoNBDV process.
- in addition, the encoding apparatus and/or the decoding apparatus may add the disparity vector updated through the DoNBDV process to the MVP list.
- the encoding device and/or the decoding device may leave an empty position of the list unfilled when there is no disparity for the current block. That is, when there is a disparity vector derived from a neighboring block through the NBDV process, the encoding device and/or the decoding device may fill the final AMVP candidates with the disparity vector derived through NBDV or the disparity vector updated through DoNBDV.
- FIG. 16 is a diagram schematically illustrating another example of a method of deriving an MVP list according to the present invention.
- here, the case in which the MVP list consists of two motion vector prediction value candidates is described as an example.
- referring to FIG. 16, when filling the MVP list, the encoding device and/or the decoding device does not add a plain zero vector; instead, it refines the zero vector through the DoNBDV process and adds the result to the MVP list.
- in this case, the zero vector may be treated as a zero disparity vector.
- that is, the encoding apparatus and/or the decoding apparatus may update the zero disparity vector through the DoNBDV process and add it to the MVP list.
- FIG. 17 is a diagram schematically illustrating another example of configuring an MVP list according to the present invention.
- the motion vector prediction value for the current block means a disparity vector.
- when there is no available motion vector predictor (mvp) candidate, the encoding device and/or the decoding device may use a zero vector added as a zero disparity vector.
- in other words, the encoding apparatus and/or the decoding apparatus may replace the zero vector with the disparity vector updated through the DoNBDV process.
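A minimal sketch of this zero-vector handling, again with a hypothetical `donbdv_refine` standing in for the DoNBDV process:

```python
def finalize_mvp_list(candidates, donbdv_refine, max_n=2):
    # Instead of padding with plain zero vectors, treat (0, 0) as a zero
    # disparity vector and refine it through DoNBDV before insertion.
    mvp = list(candidates)
    while len(mvp) < max_n:
        mvp.append(donbdv_refine((0, 0)))
    return mvp
```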
- the decoding apparatus may entropy decode a bitstream (S1810).
- the decoding device may parse the bitstream and output video information necessary for decoding the current block.
- the video information may include information specifying a neighboring block (for example, information indicating a col picture), information indicating a reference view, information indicating whether a depth is used for derivation of a disparity vector (i.e., indicating whether DoNBDV is applied), and the like.
- the decoding apparatus may derive a disparity vector based on the video information (S1820).
- specifically, the decoding apparatus may use the video information to derive a first disparity vector based on the neighboring blocks of the current block within the same view, derive a second disparity vector using the first disparity vector and a reference view depth, and derive a third disparity vector using the difference between the first disparity vector and the second disparity vector.
- the first disparity vector may correspond to the disparity vector DV_NBDV derived by the NBDV process, the second disparity vector may correspond to the disparity vector DV_DoNBDV derived by the DoNBDV process, and the third disparity vector may correspond to the disparity vector DV_NBDV_NEW derived based on the difference between DV_NBDV and DV_DoNBDV.
- the decoding apparatus may derive the predictive sample of the current block by using the derived disparity vector (S1830).
- the decoding apparatus may derive the prediction sample of the current block by using any one of the first disparity vector, the second disparity vector, and the third disparity vector.
- the decoding apparatus may construct an MVP candidate list using motion vectors including a first disparity vector, a second disparity vector, and a third disparity vector.
- the decoding apparatus may derive the motion vector of the current block based on the sum of the motion vector predictor selected from the MVP candidate list and the transmitted motion vector difference value.
- the motion vector difference value may be calculated by the encoding device as the difference between the motion vector of the current block and the motion vector prediction value, and transmitted to the decoding device together with information indicating the reference picture.
- the decoding apparatus may derive the predictive sample of the current block based on the sample values specified by the motion vector in the reference picture.
- the construction method of the MVP candidate list and the derivation method of the prediction sample are as described above.
- the decoding apparatus may apply filtering to the reconstructed picture using the predictive sample (S1840).
- the decoding apparatus may derive the reconstructed sample of the current block by adding the residual sample to the predictive sample.
- the residual sample may be entropy-encoded by the encoding device and transmitted to the decoding device, and the decoding apparatus may derive the residual sample value through entropy decoding.
- the decoding apparatus may apply deblocking filtering or SAO to the reconstructed picture. Whether filtering is applied may be signaled from the encoding device to the decoding device.
- S1820 to S1840 correspond to operations that also run in the decoding loop of the encoding apparatus, and may be applied to the encoding apparatus in the same manner.
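A high-level sketch of the flow S1810 to S1840; every method on the `dec` object is a hypothetical helper used only to show the ordering of the steps:

```python
def decode_block(dec, bitstream):
    info = dec.entropy_decode(bitstream)        # S1810: parse video information
    dv = dec.derive_disparity_vector(info)      # S1820: DV_NBDV / DV_DoNBDV / DV_NBDV_NEW
    pred = dec.derive_prediction_samples(dv)    # S1830: prediction samples of the current block
    recon = dec.add_residual(pred, info)        # prediction + entropy-decoded residual
    return dec.apply_in_loop_filters(recon)     # S1840: deblocking / SAO, if signaled
```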
Claims (15)
- 1. A video decoding apparatus for decoding multi-view video, the apparatus comprising:
an entropy decoding unit that entropy-decodes a bitstream to derive video information required for decoding a current block;
a memory that stores pictures referenced for decoding the current block;
a prediction unit that derives a first disparity vector based on a neighboring block of the current block within the same view using the video information, derives a second disparity vector using the first disparity vector and a reference view depth, derives a third disparity vector using a difference between the first disparity vector and the second disparity vector, and derives a prediction sample of the current block using any one of the first disparity vector, the second disparity vector, and the third disparity vector; and
a filtering unit that applies filtering to a current picture reconstructed using the prediction sample for the current block.
- 2. The video decoding apparatus of claim 1, wherein the prediction unit sets, as the third disparity vector, the first disparity vector that is most similar to the second disparity vector among the first disparity vectors of the neighboring blocks of the current block.
- 3. The video decoding apparatus of claim 2, wherein the first disparity vector most similar to the second disparity vector is the first disparity vector whose magnitude differs least from the magnitude of the second disparity vector.
- 4. The video decoding apparatus of claim 1, wherein the prediction unit derives first disparity vectors from the neighboring blocks of the current block to construct a disparity vector candidate group, and sets, as the third disparity vector, the disparity vector whose absolute value differs least from the absolute value of the second disparity vector among the disparity vector candidate group.
- 5. The video decoding apparatus of claim 1, wherein the prediction unit uses either the second disparity vector or the third disparity vector.
- 6. The video decoding apparatus of claim 5, wherein, when the first disparity vector and the third disparity vector are different, the prediction unit selects either the disparity vector chosen from the second disparity vector and the third disparity vector, or the first disparity vector, and uses the selected vector for prediction of the current block.
- 7. The video decoding apparatus of claim 1, wherein the memory stores the derived disparity vector in units of 16x16-pixel blocks, 32x32-pixel blocks, or 64x64-pixel blocks.
- 8. The video decoding apparatus of claim 1, wherein the memory stores the derived disparity vector in units of non-square blocks.
- 9. The video decoding apparatus of claim 1, wherein the prediction unit predicts a motion vector for the current block and, when the candidate predictors available for the motion vector prediction are insufficient, constructs a motion vector predictor list using a zero disparity vector.
- 10. A video decoding method for decoding multi-view video, the method comprising:
entropy-decoding a bitstream to derive video information required for decoding a current block;
deriving a first disparity vector based on a neighboring block of the current block within the same view using the video information, deriving a second disparity vector using the first disparity vector and a reference view depth, and deriving a third disparity vector using a difference between the first disparity vector and the second disparity vector;
deriving a prediction sample of the current block using any one of the first disparity vector, the second disparity vector, and the third disparity vector; and
applying filtering to a current picture reconstructed using the prediction sample.
- 11. The video decoding method of claim 10, wherein the first disparity vector that is most similar to the second disparity vector among the first disparity vectors of the neighboring blocks of the current block is set as the third disparity vector.
- 12. The video decoding method of claim 11, wherein the first disparity vector most similar to the second disparity vector is the first disparity vector whose magnitude differs least from the magnitude of the second disparity vector.
- 13. The video decoding method of claim 10, wherein deriving the disparity vectors comprises: deriving first disparity vectors from the neighboring blocks of the current block to construct a disparity vector candidate group; and setting, as the third disparity vector, the disparity vector whose absolute value differs least from the absolute value of the second disparity vector among the disparity vector candidate group.
- 14. The video decoding method of claim 10, wherein deriving the prediction sample comprises, when the first disparity vector and the third disparity vector are different, selecting either the disparity vector chosen from the second disparity vector and the third disparity vector, or the first disparity vector, to derive the prediction sample of the current block.
- 15. The video decoding method of claim 10, wherein the derived disparity vector is stored in units of 16x16-pixel blocks, 32x32-pixel blocks, or 64x64-pixel blocks.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201480057173.5A CN105637874B (zh) | 2013-10-18 | 2014-10-20 | Video decoding apparatus and method for decoding multi-view video
KR1020167009245A KR20160072105A (ko) | 2013-10-18 | 2014-10-20 | Video decoding apparatus and method for decoding multi-view video
EP14853773.1A EP3059966B1 (en) | 2013-10-18 | 2014-10-20 | Video decoding apparatus and method for decoding multi-view video |
US15/028,649 US10063887B2 (en) | 2013-10-18 | 2014-10-20 | Video decoding apparatus and method for decoding multi-view video |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361892447P | 2013-10-18 | 2013-10-18 | |
US201361892462P | 2013-10-18 | 2013-10-18 | |
US201361892450P | 2013-10-18 | 2013-10-18 | |
US61/892,447 | 2013-10-18 | ||
US61/892,462 | 2013-10-18 | ||
US61/892,450 | 2013-10-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015057037A1 (ko) | 2015-04-23 |
Family
ID=52828405
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2014/009859 WO2015057037A1 (ko) | 2013-10-18 | 2014-10-20 | Video decoding apparatus and method for decoding multi-view video
Country Status (5)
Country | Link |
---|---|
US (1) | US10063887B2 (ko) |
EP (1) | EP3059966B1 (ko) |
KR (1) | KR20160072105A (ko) |
CN (1) | CN105637874B (ko) |
WO (1) | WO2015057037A1 (ko) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015005753A1 (ko) | 2013-07-12 | 2015-01-15 | Samsung Electronics Co., Ltd. | Inter-layer video decoding method and apparatus using a depth-based disparity vector, and inter-layer video encoding method and apparatus using a depth-based disparity vector |
US10178384B2 (en) * | 2013-12-19 | 2019-01-08 | Sharp Kabushiki Kaisha | Image decoding device, image coding device, and residual prediction device |
WO2016143972A1 (ko) * | 2015-03-11 | 2016-09-15 | LG Electronics Inc. | Method and apparatus for encoding/decoding a video signal |
WO2016204373A1 (ko) * | 2015-06-18 | 2016-12-22 | LG Electronics Inc. | Adaptive filtering method and apparatus based on image characteristics in an image coding system |
EP3313079B1 (en) * | 2015-06-18 | 2021-09-01 | LG Electronics Inc. | Image filtering method in image coding system |
EP3596926A1 (en) | 2017-03-17 | 2020-01-22 | Vid Scale, Inc. | Predictive coding for 360-degree video based on geometry padding |
CN110868613B (zh) * | 2018-08-28 | 2021-10-01 | Huawei Technologies Co., Ltd. | History-candidate-list-based image encoding method, image decoding method, and apparatus |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101523436A (zh) | 2006-10-02 | 2009-09-02 | Koninklijke Philips Electronics N.V. | Method and filter for recovering disparity in a video stream |
JP5422168B2 (ja) * | 2008-09-29 | 2014-02-19 | Hitachi, Ltd. | Moving picture encoding method and moving picture decoding method |
US20120075436A1 (en) * | 2010-09-24 | 2012-03-29 | Qualcomm Incorporated | Coding stereo video data |
US9047681B2 (en) * | 2011-07-07 | 2015-06-02 | Samsung Electronics Co., Ltd. | Depth image conversion apparatus and method |
EP2752001A4 (en) * | 2011-08-30 | 2015-04-15 | Nokia Corp | APPARATUS, METHOD AND COMPUTER PROGRAM FOR VIDEO ENCODING AND DECODING |
CN104041041B (zh) * | 2011-11-04 | 2017-09-01 | 谷歌技术控股有限责任公司 | 用于非均匀运动向量栅格的运动向量缩放 |
US20130177084A1 (en) * | 2012-01-10 | 2013-07-11 | Qualcomm Incorporated | Motion vector scaling in video coding |
WO2014008817A1 (en) * | 2012-07-09 | 2014-01-16 | Mediatek Inc. | Method and apparatus of inter-view sub-partition prediction in 3d video coding |
US10334259B2 (en) * | 2012-12-07 | 2019-06-25 | Qualcomm Incorporated | Advanced residual prediction in scalable and multi-view video coding |
US9350970B2 (en) * | 2012-12-14 | 2016-05-24 | Qualcomm Incorporated | Disparity vector derivation |
US9521389B2 (en) * | 2013-03-06 | 2016-12-13 | Qualcomm Incorporated | Derived disparity vector in 3D video coding |
US9948915B2 (en) * | 2013-07-24 | 2018-04-17 | Qualcomm Incorporated | Sub-PU motion prediction for texture and depth coding |
- 2014-10-20 WO PCT/KR2014/009859 patent/WO2015057037A1/ko active Application Filing
- 2014-10-20 US US15/028,649 patent/US10063887B2/en active Active
- 2014-10-20 CN CN201480057173.5A patent/CN105637874B/zh not_active Expired - Fee Related
- 2014-10-20 KR KR1020167009245A patent/KR20160072105A/ko not_active Application Discontinuation
- 2014-10-20 EP EP14853773.1A patent/EP3059966B1/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130016172A (ko) * | 2010-12-14 | 2013-02-14 | Soo Mi Oh | Method for decoding video coded by inter prediction |
WO2012086963A1 (ko) * | 2010-12-22 | 2012-06-28 | LG Electronics Inc. | Method for encoding and decoding video, and apparatus using same |
WO2013032073A1 (ko) * | 2011-08-29 | 2013-03-07 | Ibex PT Holdings Co., Ltd. | Method of generating a prediction block in AMVP mode |
WO2013055148A2 (ko) * | 2011-10-12 | 2013-04-18 | LG Electronics Inc. | Video encoding method and decoding method |
WO2013133587A1 (ko) * | 2012-03-07 | 2013-09-12 | LG Electronics Inc. | Video signal processing method and apparatus |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016182255A1 (ko) * | 2015-05-11 | 2016-11-17 | Samsung Electronics Co., Ltd. | Electronic device and page merging method therefor |
US10817179B2 (en) | 2015-05-11 | 2020-10-27 | Samsung Electronics Co., Ltd. | Electronic device and page merging method therefor |
CN113196756A (zh) * | 2018-12-31 | 2021-07-30 | 腾讯美国有限责任公司 | 视频编解码的方法和装置 |
CN113196756B (zh) * | 2018-12-31 | 2024-02-27 | 腾讯美国有限责任公司 | 视频编解码的方法和装置 |
Also Published As
Publication number | Publication date |
---|---|
US10063887B2 (en) | 2018-08-28 |
EP3059966B1 (en) | 2021-01-13 |
CN105637874B (zh) | 2018-12-07 |
CN105637874A (zh) | 2016-06-01 |
US20160255369A1 (en) | 2016-09-01 |
KR20160072105A (ko) | 2016-06-22 |
EP3059966A1 (en) | 2016-08-24 |
EP3059966A4 (en) | 2017-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102254599B1 (ko) | View synthesis prediction method in multi-view video coding, and merge candidate list construction method using the same | |
KR102135997B1 (ko) | Residual coding for depth intra prediction modes | |
WO2015057037A1 (ko) | Video decoding apparatus and method for decoding multi-view video | |
JP6542206B2 (ja) | Video decoding method and apparatus for decoding multi-view video | |
US10659814B2 (en) | Depth picture coding method and device in video coding | |
US10587894B2 (en) | Method and device for encoding/decoding 3D video | |
US20170310993A1 (en) | Movement information compression method and device for 3d video coding | |
US20170310994A1 (en) | 3d video coding method and device | |
US20160255371A1 (en) | Method and apparatus for coding/decoding 3d video | |
US10419779B2 (en) | Method and device for processing camera parameter in 3D video coding | |
KR102343817B1 (ko) | Method and apparatus for decoding multi-view video | |
US20160255368A1 (en) | Method and apparatus for coding/decoding video comprising multi-view | |
US10397611B2 (en) | Method and device for encoding/decoding 3D video | |
WO2015141977A1 (ko) | 3D video encoding/decoding method and apparatus |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14853773; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 20167009245; Country of ref document: KR; Kind code of ref document: A |
| | WWE | Wipo information: entry into national phase | Ref document number: 15028649; Country of ref document: US |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | REEP | Request for entry into the european phase | Ref document number: 2014853773; Country of ref document: EP |
| | WWE | Wipo information: entry into national phase | Ref document number: 2014853773; Country of ref document: EP |