WO2015057038A1 - Method and apparatus for decoding multi-view video
- Publication number
- WO2015057038A1 (PCT/KR2014/009860; KR2014009860W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- current block
- prediction
- picture
- sub
- Prior art date
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/51—Motion estimation or motion compensation (under H04N19/503—predictive coding involving temporal prediction)
- H04N19/52—Processing of motion vectors by encoding by predictive encoding (under H04N19/513 and H04N19/517—processing of motion vectors)
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding (under H04N19/90—coding techniques not provided for in groups H04N19/10-H04N19/85)
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present invention relates to video coding, and more particularly, to coding of 3D video images.
- High-efficiency image compression technology can be used to effectively transmit, store, and reproduce high-resolution, high-quality video information.
- 3D video can provide realism and immersion using a plurality of view channels.
- 3D video can be used in a variety of areas such as free viewpoint video (FVV), free viewpoint TV (FTV), 3DTV, surveillance, and home entertainment.
- 3D video using multiple views has a high correlation between views of the same picture order count (POC). Since a multi-view image captures the same scene at the same time with several adjacent cameras, that is, from multiple views, the different views contain almost the same information except for parallax and slight lighting differences, so the correlation between the views is high.
- the decoding target block of the current view may be predicted or decoded with reference to the block of another view.
- An object of the present invention is to provide a method and apparatus for restoring information of a current view based on a picture of another view.
- An object of the present invention is to provide a method and apparatus for inheriting motion information of a texture view into motion information of a current block in a depth view.
- An object of the present invention is to provide a method and apparatus for deriving motion information of a texture view in sub-block units and using the motion information of a current block of a depth view.
- An object of the present invention is to provide a method and apparatus for deriving a prediction sample of a current block by deriving motion information of a texture view in a prediction block unit or a sub prediction block unit.
- An embodiment of the present invention is a video decoding apparatus for decoding multi-view video, comprising: an entropy decoding unit that entropy-decodes a bitstream and outputs video information necessary for decoding a current block in a depth picture; a memory that stores pictures referenced for decoding the current block; and a predictor that derives a prediction sample for the current block using motion information of a texture picture in the same view as the motion information for the current block, wherein the predictor determines whether to derive the motion information of the texture picture in units of sub-blocks of the current block, and derives the motion information for the current block based on the determination.
- Another embodiment of the present invention is a video decoding method for decoding multi-view video, comprising: entropy-decoding a bitstream to derive video information required for decoding a current block in a depth picture; determining, based on the video information, whether to derive motion information for the current block from a texture picture in units of sub-blocks; deriving the motion information for the current block from the texture picture according to the determination; and deriving a prediction sample for the current block using the motion vector.
- the depth view can be efficiently coded by inheriting the motion information of the texture view as the motion information of the current block in the depth view.
- the motion information of the texture view may be derived in sub-block units and used as motion information of the current block of the depth view.
- the motion information of the texture view may be derived in the prediction block unit or the sub-prediction block unit as necessary, and may be used as the motion information of the current block of the depth view.
- FIG. 1 is a diagram schematically illustrating a process of encoding and decoding 3D video.
- FIG. 2 is a diagram schematically illustrating a configuration of a video encoding apparatus.
- FIG. 3 is a diagram schematically illustrating a configuration of a video decoding apparatus.
- FIG. 4 is a diagram schematically illustrating inter view coding.
- FIG. 5 schematically illustrates a multi-view coding method using a depth map.
- FIG. 6 is a diagram schematically illustrating a DV-MCP block.
- FIG. 7 is a diagram schematically illustrating an example of neighboring blocks of a current block.
- FIG. 8 is a diagram schematically illustrating a method of deriving information from a texture picture.
- FIG. 9 is a diagram schematically illustrating a process of deriving a motion vector of a texture picture through MVI.
- FIG. 10 is a diagram schematically illustrating a method of deriving a motion vector by applying MVI on a sub-block basis.
- FIG. 11 is a flowchart schematically illustrating an operation of a decoding apparatus according to the present invention.
- a pixel or a pel may mean a minimum unit constituting one image.
- the term 'sample' may be used as a term indicating a value of a specific pixel.
- the sample generally indicates the value of the pixel, but may indicate only the pixel value of the Luma component or only the pixel value of the Chroma component.
- a unit may mean a basic unit of image processing or a specific position of an image.
- the unit may be used interchangeably with terms such as 'block' or 'area' as the case may be.
- an M×N block may represent a set of samples or transform coefficients composed of M columns and N rows.
- FIG. 1 is a diagram schematically illustrating a process of encoding and decoding 3D video.
- the 3D video encoder may encode a video picture, a depth map, and a camera parameter to output a bitstream.
- the depth map may be composed of distance information (depth information) between a camera and a subject with respect to pixels of a corresponding video picture (texture picture).
- the depth map may be an image in which depth information is normalized according to bit depth.
- the depth map may be composed of recorded depth information without color difference representation.
- disparity information indicating the correlation between views may be derived from depth information of the depth map using camera parameters.
- a bitstream including the depth map and camera information together with the video picture (texture picture), that is, a general color image, may be transmitted to a decoder through a network or a storage medium.
- the decoder side can receive the bitstream and reconstruct the video.
- the 3D video decoder may decode the video picture and the depth map and the camera parameters from the bitstream. Based on the decoded video picture, the depth map and the camera parameters, the views required for the multi view display can be synthesized. In this case, when the display used is a stereo display, a 3D image may be displayed using two pictures from the reconstructed multi views.
- the stereo video decoder can reconstruct, from the bitstream, the two pictures to be incident on the left eye and the right eye, respectively.
- a stereoscopic image may be displayed by using a view difference or disparity between a left image incident to the left eye and a right image incident to the right eye.
- When the multi-view display is used together with the stereo video decoder, different views may be generated based on the two reconstructed pictures to display the multi view.
- the 2D image may be restored and the image may be output to the 2D display.
- the decoder may output one of the reconstructed images to the 2D display when using a 3D video decoder or a stereo video decoder.
- view synthesis may be performed at the decoder side or may be performed at the display side.
- the decoder and the display may be one device or separate devices.
- the 3D video decoder, the stereo video decoder, and the 2D video decoder are described as separate decoders.
- one decoding apparatus may perform 3D video decoding, stereo video decoding, and 2D video decoding.
- the 3D video decoding apparatus may perform 3D video decoding
- the stereo video decoding apparatus may perform stereo video decoding
- the 2D video decoding apparatus may perform 2D video decoding.
- the multi view display may output 2D video or output stereo video.
- the video encoding apparatus 200 includes a picture splitter 205, a predictor 210, a subtractor 215, a transformer 220, a quantizer 225, a reorderer 230, an entropy encoding unit 235, an inverse quantization unit 240, an inverse transform unit 245, an adder 250, a filter unit 255, and a memory 260.
- the picture dividing unit 205 may divide the input picture into at least one processing unit block.
- the processing unit block may be a coding unit block, a prediction unit block, or a transform unit block.
- the coding unit block may be divided along the quad tree structure from the largest coding unit block as a unit block of coding.
- the prediction unit block is a block partitioned from the coding unit block and may be a unit block of sample prediction. In this case, the prediction unit block may be divided into sub blocks.
- the transform unit block may be divided from the coding unit block along a quad tree structure, and may be a unit block for deriving a transform coefficient or a unit block for deriving a residual signal from the transform coefficient.
- a coding unit block is called a coding block or a coding unit (CU)
- a prediction unit block is called a prediction block or a prediction unit (PU)
- a transform unit block is called a transform block or a transform unit (TU)
- a prediction block or prediction unit may mean a specific area in the form of a block within a picture or may mean an array of prediction samples.
- a transform block or a transform unit may mean a specific area in a block form within a picture, or may mean an array of transform coefficients or residual samples.
- the prediction unit 210 may perform a prediction on a block to be processed (hereinafter, referred to as a current block) and generate a prediction block including prediction samples of the current block.
- the unit of prediction performed by the prediction unit 210 may be a coding block, a transform block, or a prediction block.
- the prediction unit 210 may determine whether intra prediction or inter prediction is applied to the current block.
- the prediction unit 210 may derive a prediction sample for the current block based on neighboring block pixels in the picture to which the current block belongs (hereinafter, referred to as the current picture). In this case, the prediction unit 210 may (i) derive a prediction sample based on the average or interpolation of neighboring reference samples of the current block, or (ii) derive a prediction sample based on a reference sample located in a specific direction from the prediction target pixel among the neighboring reference samples of the current block. For convenience of explanation, the case of (i) is referred to as a non-directional mode and the case of (ii) is referred to as a directional mode. The prediction unit 210 may determine the prediction mode applied to the current block by using the prediction mode applied to a neighboring block.
- the prediction unit 210 may derive a prediction sample for the current block based on the samples specified by the motion vector on the reference picture.
- the predictor 210 may derive a prediction sample for the current block by applying any one of a skip mode, a merge mode, and an MVP mode.
- the prediction unit 210 may use the motion information of the neighboring block as the motion information of the current block.
- In the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
- In the MVP mode, the motion vector of a neighboring block may be used as a motion vector predictor (MVP) to derive the motion vector of the current block.
- the neighboring block includes a spatial neighboring block present in the current picture and a temporal neighboring block present in the collocated picture.
- the motion information includes a motion vector and a reference picture.
- When motion information of a temporal neighboring block is used in the skip mode or the merge mode, the topmost picture on the reference picture list may be used as the reference picture.
- the prediction unit 210 may perform inter view prediction.
- the predictor 210 may construct a reference picture list by including pictures of other views. For inter view prediction, the predictor 210 may derive a disparity vector. Unlike a motion vector that specifies a block corresponding to the current block in another picture in the current view, the disparity vector may specify a block corresponding to the current block in another view of the same access unit (AU) as the current picture.
- the prediction unit 210 may specify a depth block in a depth view based on the disparity vector, and may perform merge list construction, inter-view motion prediction, residual prediction, illumination compensation (IC), view synthesis, and the like.
- the disparity vector for the current block can be derived from the depth value using the camera parameter or from the motion vector or disparity vector of the neighboring block in the current or other view.
- In constructing the merge candidate list, the prediction unit 210 may add to the list an inter-view merge candidate (IvMC) corresponding to temporal motion information of the reference view, an inter-view disparity vector candidate (IvDC) corresponding to the disparity vector, a shifted IvMC derived by shifting the disparity vector, a texture merge candidate (T) derived from the corresponding texture when the current block is a block on the depth map, a disparity-derived merge candidate (D) derived from the texture merge candidate using a disparity, and a view synthesis prediction merge candidate (VSP) derived based on view synthesis.
- the number of candidates included in the merge candidate list applied to the dependent view may be limited to a predetermined value.
- the prediction unit 210 may apply inter-view motion vector prediction to predict the motion vector of the current block based on the disparity vector.
- the prediction unit 210 may derive the disparity vector based on the conversion of the maximum depth value in the corresponding depth block.
- When a reference sample position in the reference view is specified by the disparity vector, the block including the reference sample may be used as the reference block.
- the prediction unit 210 may use the motion vector of the reference block as a candidate motion parameter or motion vector predictor candidate of the current block, and may use the disparity vector as a candidate disparity vector for disparity-compensated prediction (DCP).
- the subtraction unit 215 generates a residual sample which is a difference between the original sample and the prediction sample.
- When the skip mode is applied, residual samples may not be generated as described above.
- the transform unit 220 generates a transform coefficient by transforming the residual sample in units of transform blocks.
- the quantization unit 225 may quantize the transform coefficients to generate quantized transform coefficients.
- the reordering unit 230 rearranges the quantized transform coefficients.
- the reordering unit 230 may reorder the quantized transform coefficients in the form of a block into a one-dimensional vector form by scanning the coefficients.
- the entropy encoding unit 235 may perform entropy encoding on the quantized transform coefficients.
- Entropy encoding may include, for example, encoding methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC).
- the entropy encoding unit 235 may encode information necessary for video reconstruction other than the quantized transform coefficients (eg, a value of a syntax element) together or separately.
- Entropy-encoded information may be transmitted or stored in units of NAL units in the form of a bitstream.
- the dequantization unit 240 inversely quantizes the quantized transform coefficients to generate transform coefficients.
- the inverse transform unit 245 inverse transforms the transform coefficients to generate residual samples.
- the adder 250 reconstructs the picture by combining the residual sample and the predictive sample.
- the residual sample and the predictive sample may be added in units of blocks to generate a reconstructed block.
- Although the adder 250 has been described as a separate component, the adder 250 may be part of the predictor 210.
- the filter unit 255 may apply a deblocking filter and/or an offset to the reconstructed picture. Through the deblocking filtering and/or offset, artifacts at the block boundaries in the reconstructed picture or distortion in the quantization process can be corrected.
- the offset may be applied on a sample basis or may be applied after the process of deblocking filtering is completed.
- the memory 260 may store information necessary for reconstructed pictures or encoding / decoding.
- the memory 260 may store pictures used for inter prediction / inter-view prediction.
- pictures used for inter prediction / inter-view prediction may be designated by a reference picture set or a reference picture list.
- Although one encoding apparatus has been described as encoding both the independent view and the dependent view, this is for convenience of description; a separate encoding apparatus may be configured for each view, or a separate internal module (for example, a prediction unit for each view) may be configured.
- the video decoding apparatus 300 includes an entropy decoding unit 310, a reordering unit 320, an inverse quantization unit 330, an inverse transform unit 340, a predictor 350, an adder 360, a filter unit 370, and a memory 380.
- the video decoding apparatus 300 may reconstruct the video in response to a process in which the video information is processed in the video encoding apparatus.
- the video decoding apparatus 300 may perform video decoding using a processing unit applied in the video encoding apparatus.
- the processing unit block of video decoding may be a coding unit block, a prediction unit block, or a transform unit block.
- the coding unit block may be divided along the quad tree structure from the largest coding unit block as a unit block of decoding.
- the prediction unit block is a block partitioned from the coding unit block and may be a unit block of sample prediction. In this case, the prediction unit block may be divided into sub blocks.
- the transform unit block may be divided from the coding unit block along a quad tree structure, and may be a unit block for deriving a transform coefficient or a unit block for deriving a residual signal from the transform coefficient.
- the entropy decoding unit 310 may parse the bitstream and output information necessary for video reconstruction or picture reconstruction. For example, the entropy decoding unit 310 may decode the information in the bitstream based on exponential Golomb, CAVLC, CABAC, or the like, and output syntax element values required for video reconstruction, quantized values of transform coefficients related to the residual, and the like.
- the bitstream may be input for each view.
- information about each view may be multiplexed in the bitstream.
- the entropy decoding unit 310 may de-multiplex the bitstream and parse for each view.
- the reordering unit 320 may rearrange the quantized transform coefficients in the form of a two-dimensional block.
- the reordering unit 320 may perform reordering in response to coefficient scanning performed by the encoding apparatus.
- the inverse quantization unit 330 may dequantize the quantized transform coefficients based on the (inverse) quantization parameter and output the transform coefficients.
- information for deriving a quantization parameter may be signaled from the encoding apparatus.
- the inverse transform unit 340 may inverse-transform the transform coefficients to derive residual samples.
- the prediction unit 350 may perform prediction on the current block and generate a prediction block including prediction samples for the current block.
- the unit of prediction performed by the prediction unit 350 may be a coding block, a transform block, or a prediction block.
- the prediction unit 350 may determine whether to apply intra prediction or inter prediction.
- a unit for determining which of intra prediction and inter prediction is to be applied and a unit for generating a prediction sample may be different.
- the unit for generating the prediction sample in inter prediction and intra prediction may also be different.
- the prediction unit 350 may derive the prediction sample for the current block based on the neighboring block pixels in the current picture.
- the prediction unit 350 may derive the prediction sample for the current block by applying the directional mode or the non-directional mode based on the neighboring reference samples of the current block.
- the prediction mode to be applied to the current block may be determined using the intra prediction mode of the neighboring block.
- the prediction unit 350 may derive the prediction sample for the current block based on the samples specified by the motion vector on the reference picture.
- the prediction unit 350 may derive a prediction sample for the current block by applying any one of a skip mode, a merge mode, and an MVP mode.
- the motion information of the neighboring block may be used as the motion information of the current block.
- the neighboring block may include a spatial neighboring block and a temporal neighboring block.
- the predictor 350 may construct a merge candidate list using motion information of available neighboring blocks, and use information indicated by the merge index on the merge candidate list as a motion vector of the current block.
- the merge index may be signaled from the encoding device.
- the motion information includes a motion vector and a reference picture. When motion information of a temporal neighboring block is used in the skip mode or the merge mode, the topmost picture on the reference picture list may be used as the reference picture.
- When the skip mode is applied, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
- the motion vector of the current block may be derived using the motion vector of the neighboring block as a motion vector predictor (MVP).
- the neighboring block may include a spatial neighboring block and a temporal neighboring block.
- the prediction unit 350 may perform inter view prediction.
- the prediction unit 350 may configure a reference picture list including pictures of other views.
- For inter-view prediction, the predictor 350 may derive a disparity vector.
- the prediction unit 350 may specify a depth block in a depth view based on the disparity vector, and may perform merge list construction, inter-view motion prediction, residual prediction, illumination compensation (IC), view synthesis, and the like.
- the disparity vector for the current block can be derived from the depth value using the camera parameter or from the motion vector or disparity vector of the neighboring block in the current or other view.
- Camera parameters may be signaled from the encoding device.
- In constructing the merge candidate list for a dependent view, the prediction unit 350 may add to the merge candidate list an IvMC corresponding to temporal motion information of the reference view, an IvDC corresponding to the disparity vector, a shifted IvMC derived by shifting the disparity vector, a texture merge candidate (T) derived from the corresponding texture when the current block is a block on a depth map, a disparity-derived merge candidate (D) derived from the texture merge candidate using a disparity, and a view synthesis prediction merge candidate (VSP) derived based on view synthesis.
- the number of candidates included in the merge candidate list applied to the dependent view may be limited to a predetermined value.
- the prediction unit 350 may apply inter-view motion vector prediction to predict the motion vector of the current block based on the disparity vector.
- the prediction unit 350 may use a block in the reference view specified by the disparity vector as the reference block.
- the prediction unit 350 may use the motion vector of the reference block as a candidate motion parameter or motion vector predictor candidate of the current block, and use the disparity vector as a candidate disparity vector for DCP.
- the adder 360 may reconstruct the current block or the current picture by adding the residual sample and the predictive sample.
- the adder 360 may reconstruct the current picture by adding the residual sample and the predictive sample in block units. Since the residual is not transmitted when the skip mode is applied, the prediction sample may be a reconstruction sample.
- Although the adder 360 has been described as a separate component, the adder 360 may be part of the predictor 350.
- the filter unit 370 may apply deblocking filtering and / or offset to the reconstructed picture.
- the offset may be adaptively applied as an offset in a sample unit.
- the memory 380 may store information necessary for reconstructed pictures or decoding.
- the memory 380 may store pictures used for inter prediction / inter-view prediction.
- pictures used for inter prediction / inter-view prediction may be designated by a reference picture set or a reference picture list.
- the reconstructed picture can be used as a reference picture.
- the memory 380 may output the reconstructed picture in the output order.
- the output unit may display a plurality of different views.
- each decoding apparatus may operate for each view, and an operation unit (eg, a prediction unit) corresponding to each view may be provided in one decoding apparatus.
- the encoding apparatus and the decoding apparatus may improve the efficiency of video coding for the current view using the coded data of another view belonging to the same access unit (AU) as the current picture.
- pictures having the same POC may be referred to as one AU.
- the POC corresponds to the display order of the pictures.
- the encoding apparatus and the decoding apparatus may code views in units of AUs, and may code pictures in units of views. Coding proceeds between views according to a predetermined order.
- the first coded view may be referred to as a base view or an independent view.
- a view that can be coded by referencing another view after the independent view is coded can be called a dependent view.
- another view referred to in coding (encoding / decoding) of the current view may be referred to as a reference view.
- FIG. 4 is a diagram schematically illustrating inter view coding.
- In FIG. 4, coding is performed in units of AUs, where V0 is an independent view and V1 is a dependent view.
- inter-picture prediction that refers to another picture 430 of the same view using a motion vector may be referred to as motion-compensated prediction (MCP).
- Inter-picture prediction that uses a disparity vector to refer to the picture 420 in another view of the same AU, that is, with the same POC, may be referred to as disparity-compensated prediction (DCP).
- a depth map may be used in addition to a method of using pictures of other views.
- FIG. 5 schematically illustrates a multi-view coding method using a depth map.
- a block (current block) 505 of the current picture 500 in the current view may be coded (encoded / decoded) using the depth map 510.
- the depth value d at the position (x, y) of the sample 520 in the depth map 510, corresponding to the position (x, y) of the sample 515 in the current block 505, may be converted into the disparity vector 525.
- the depth value d can be derived based on the distance between the sample (pixel) and the camera.
- the encoding apparatus and the decoding apparatus may add the disparity vector 525 to the position (x, y) of the sample 530 to determine the position of the reference sample 535 in the picture 540 in the reference view.
- the disparity vector may have only x-axis components. Accordingly, the value of the disparity vector may be (disp, 0), and the position (xr, y) of the reference sample 540 may be determined as (x + disp, y).
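To make the conversion concrete, the following is a minimal sketch of deriving a disparity vector from a depth sample; the inverse-depth normalization and the camera-parameter names (focal_length, baseline, z_near, z_far) are illustrative assumptions, not the patent's normative derivation:

```python
def depth_to_disparity(d, bit_depth=8, focal_length=1.0, baseline=1.0,
                       z_near=1.0, z_far=100.0):
    """Convert a normalized depth sample d into a horizontal disparity.

    d lies in [0, 2^bit_depth - 1]. It is first mapped back to a physical
    depth Z between z_near and z_far, then converted to disp = f * B / Z.
    """
    d_max = (1 << bit_depth) - 1
    inv_z = (d / d_max) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    disp = focal_length * baseline * inv_z
    return (disp, 0.0)  # the disparity vector has only an x-axis component

# Reference sample position in the reference view for a sample at (x, y):
x, y = 32, 16
disp, _ = depth_to_disparity(d=128)
x_ref, y_ref = x + disp, y  # (x + disp, y), as described above
```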
- the encoding apparatus and the decoding apparatus may use the motion parameter of the reference block 545 including the reference pixel 535 as a candidate motion parameter of the current block. For example, if the reference picture 550 in the reference view is the reference picture for the reference block 545, the motion vector 555 of the reference block 545 may be derived as the motion vector 560 of the current block 505. At this time, the picture 565 is a reference picture in the current view.
- the disparity vector of the DCP coding block may be used as a disparity vector to be applied to the current block.
- the disparity vector derived from the neighboring block, that is, the disparity vector of the DCP-coded block, may also be used for inter-view motion prediction (IVMP) and inter-view residual prediction (IVRP), for example in motion vector prediction (MVP) mode, advanced motion vector prediction (AMVP) mode, merge mode, or SKIP mode.
- the block in which the motion vector is predicted by the IVMP method among the MCP coded blocks is called a DV-MCP block.
- FIG. 6 is a diagram schematically illustrating a DV-MCP block.
- FIG. 6 illustrates a case of inter-predicting the current block 620 in the current picture 610 of the current view.
- the motion vector MV1 of the neighboring block 630 used for inter prediction of the current block 620 is derived from the corresponding block 650 of the reference picture 640 in the base view.
- the corresponding block is specified by the disparity vector DV 660.
- the motion vector MV1 of the neighboring block 630 may be set to or derived from the motion vector MV2 of the corresponding block 650.
- the POCs of the reference picture 640 and the current picture 610 in the base view may be the same.
- the neighboring block 630 to which the motion vector MV1 predicted from the motion vector MV2 of the corresponding block 650 in another view is applied may be referred to as a DV-MCP block.
- the encoding apparatus and the decoding apparatus may store the information of the disparity vector used for the motion vector prediction of the DV-MCP block and use it in the process of deriving the disparity vector of the neighboring block.
- FIG. 7 is a diagram schematically illustrating an example of neighboring blocks of a current block.
- the neighboring blocks of FIG. 7 are blocks that are already decoded at the time of decoding the current block and are accessible.
- the neighboring blocks of the current block 710 include the spatial neighboring blocks A0, A1, B0, B1, and B2 and the temporal neighboring blocks col-CTR (col-center) and col-RB (col-right bottom).
- the spatial neighboring blocks are each specified based on the position of the current block 710.
- temporal neighboring blocks may be specified based on a position 720 corresponding to the current block in a collocated picture, which is one of the reference pictures.
- a coding block including the pixel located at the center of the position 720 corresponding to the current block in the collocated picture designated at the time of decoding the current picture or the current slice becomes col-CTR.
- the coding block including the pixel at position (x + 1, y + 1), where (x, y) is the bottom-right sample position of the current block, becomes col-RB.
- col-CTR may also be expressed as CTR, and col-RB as BR.
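As a small illustration of the two temporal candidate positions described above, the sketch below computes the sample positions whose containing coding blocks become col-CTR and col-RB (the helper and its inputs are hypothetical):

```python
def temporal_neighbor_positions(x, y, width, height):
    """Return sample positions used to select col-CTR and col-RB.

    (x, y) is the top-left sample of the current block. col-CTR uses the
    center sample; col-RB uses the sample just below-right of the
    bottom-right corner, i.e. (x_br + 1, y_br + 1).
    """
    ctr = (x + width // 2, y + height // 2)
    br = (x + width, y + height)  # x_br + 1 = x + width, y_br + 1 = y + height
    return ctr, br

ctr_pos, rb_pos = temporal_neighbor_positions(x=64, y=32, width=16, height=16)
# col-CTR is the coding block of the collocated picture containing ctr_pos;
# col-RB is the one containing rb_pos.
```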
- the collocated picture may be one temporal reference picture, selected from the reference picture list of the current picture or the current slice, for temporal disparity vector derivation.
- the collocated picture may be made known to the decoder through the slice header. For example, information indicating which picture is to be used as the collocated picture may be signaled in the slice header.
- Derivation of prediction samples is performed in units of a prediction block (e.g., PU) or a sub-prediction block (e.g., sub-PU).
- If the current prediction block is in a texture picture and at least one inter-view reference picture is present for the current slice, the prediction units of the encoding apparatus and the decoding apparatus may specify a block corresponding to the current block based on the disparity vector, and may derive a prediction sample at the PU level or the sub-PU level using the corresponding block.
- In inter-view prediction of a dependent view, the prediction units of the encoding apparatus and the decoding apparatus may construct the merge candidate list in the same manner as in the base view, and may then add to it the inter-view merge candidate (IvMC) derived from the motion vector of the corresponding block in the reference view, the inter-view disparity vector candidate (IvDC), a shifted IvMC, a shifted IvDC, and a view synthesis prediction (VSP) merge candidate derived based on depth.
- merge candidates constituting the merge candidate list will be briefly described.
- Available motion vectors are derived from the spatial neighboring blocks in the same way as for the merge candidate list used in the base view.
- the spatial neighboring blocks of the current block become blocks A0, A1, B0, B1, and B2 around the current block 710 illustrated in FIG. 7.
- the information of the corresponding block in the reference view different from the current view may be used as the merge candidate of the current block.
- the corresponding block may be specified by the disparity vector.
- the disparity vector may be derived from a disparity vector or a motion vector of a neighboring block to which DCP or MCP is applied, or a value modified from the derived motion vector using a depth map may be used as the disparity vector.
- the disparity vector derived from the neighboring block is called disparity vector from neighboring blocks (NBDV), and the disparity vector derived from the NBDV using the depth value is referred to as depth oriented NBDV (DoNBDV).
- the prediction unit of the encoding apparatus and the decoding apparatus may use the motion vector used when the reference block specified by the disparity vector in the reference view performs temporal motion compensation as the inter-view merge candidate (IvMC). That is, the motion vector of the block to which the MCP is applied in the reference view may be used as the motion vector candidate of the current block.
- NBDV or DoNBDV derived based on the neighboring block of the current block may be used as the disparity vector used to specify the reference block, or a value derived based on the depth map may be used.
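The NBDV/DoNBDV derivation above can be summarized as follows; this is a simplified sketch in which the neighbor-scan order, the block records, and the depth-lookup helper are stand-ins for the actual 3D-HEVC process:

```python
def derive_nbdv(spatial_neighbors, temporal_neighbors):
    """Scan neighboring blocks and return the first available disparity vector.

    A DCP-coded neighbor contributes its disparity vector directly; a DV-MCP
    neighbor contributes the disparity vector that was stored when its motion
    vector was predicted inter-view.
    """
    for blk in list(temporal_neighbors) + list(spatial_neighbors):
        if blk.get("dcp_dv") is not None:    # DCP-coded neighbor
            return blk["dcp_dv"]
    for blk in spatial_neighbors:
        if blk.get("dvmcp_dv") is not None:  # DV-MCP neighbor
            return blk["dvmcp_dv"]
    return (0, 0)  # fall back to a zero disparity vector

def refine_to_donbdv(nbdv, depth_block, depth_to_disp):
    """DoNBDV: convert the maximum depth value of the depth block located by
    the NBDV into a refined disparity (x component only)."""
    d_max = max(max(row) for row in depth_block)
    return (depth_to_disp(d_max), 0)
```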
- IvMC may be derived at the PU level or at the sub-PU level.
- the prediction units of the encoding apparatus and the decoding apparatus may use the disparity vector of the corresponding block in the reference view as the inter-view disparity vector candidate (IvDC).
- the prediction units of the encoding apparatus and the decoding apparatus may shift the disparity vector by a specific value and then derive the motion vector of the corresponding block specified by the shifted disparity vector as the shifted IvMC (IvMCShift).
- the prediction unit may shift the disparity vector using the height and width of the current prediction block. For example, when the height and width of the current block are nPbH and nPbW, respectively, the predictor may derive IvMCShift by shifting the disparity vector by nPbW * 2 + 2 in the x-axis direction and by nPbH * 2 + 2 in the y-axis direction.
- the prediction unit may add IvMCShift as a merge candidate of the current block when IvMC and IvMCShift are not the same.
- the prediction units of the encoding apparatus and the decoding apparatus may shift the disparity vector by a specific value, and then add the shifted disparity vector (shifted IvDC, IvDCShift) to the merge candidate of the current block.
- the prediction unit may use, as the IvDCShift, the disparity vector obtained by moving the IvDC a predetermined distance (for example, 4) along the x-axis.
- the prediction unit may derive the IvDCShift in consideration of the case where view synthesis prediction is applied. For example, when the view synthesis prediction can be performed, the prediction unit may set the y component of IvDCShift to zero.
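The two shifted candidates can be sketched as below; the offsets follow the text above, while the function and variable names are illustrative:

```python
def shifted_ivmc_dv(dv, n_pb_w, n_pb_h):
    """Shift the disparity vector used to locate the block whose motion
    vector becomes IvMCShift: by nPbW*2 + 2 in x and nPbH*2 + 2 in y."""
    return (dv[0] + n_pb_w * 2 + 2, dv[1] + n_pb_h * 2 + 2)

def shifted_ivdc(ivdc, vsp_available):
    """IvDCShift: move IvDC by a fixed offset (e.g., 4) along the x-axis;
    when view synthesis prediction can be applied, set the y component to 0."""
    y = 0 if vsp_available else ivdc[1]
    return (ivdc[0] + 4, y)
```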
- In addition to adding merge candidates based on inter-view prediction to the merge candidate list, the prediction unit may also derive a candidate based on the information of the depth map.
- the prediction unit may apply a motion parameter inheritance method (MPI) that uses motion information from the video signal based on the similarity between the video signal and the depth signal.
- different motion vectors may be inherited from the texture for each sub-PU divided from one depth PU.
- the prediction unit may add, as merge candidates, the candidate T using the motion vector inherited from the texture and the depth candidate D derived based on T. When D is used, the prediction samples may be set to a depth value derived from the corresponding disparity vector.
- the prediction units of the encoding apparatus and the decoding apparatus may add a merge candidate derived by view synthesis prediction (VSP).
- the prediction unit may add the disparity vector of the neighboring block as a merge candidate for the current block, and use the disparity vector to derive depth information of the current block based on the depth value of the corresponding block specified on the depth map.
- the prediction unit of the encoding apparatus and the decoding apparatus may use the merge candidates to form a merge candidate list as follows.
- the merge candidates are positioned in the following order on the merge candidate list.
- the prediction unit adds T and D to the merge candidate list as MPI candidates.
- the prediction unit determines whether T is available, and adds T when it is available.
- the predictor determines whether D is available and adds D after T if it is available.
- the prediction unit inserts IvMC at the position next to D of the merge candidate list when IvMC is available and T is not available or when T and IvMC are different.
- the prediction unit adds A1 to the merge candidate list when A1 is available.
- the prediction unit may check whether the added A1 is identical to a merge candidate already in the list.
- the merge candidate N already added may be T when depth is used, and may be IvMC when depth is not used. If A1 is equal to N, the prediction unit may exclude A1 from the merge candidates.
- the prediction unit adds B1 to the merge candidate list when B1 is available.
- the prediction unit may exclude B1 from the merge candidates when B1 is identical to a previously added candidate.
- the prediction unit may add B0 to the merge candidate list when B0 is available.
- the prediction unit may add the IvDC to the merge candidate list when the IvDC is available.
- the prediction unit may add the IvDC to the merge candidate list when (i) A1 is not available or A1 and the IvDC are different, (ii) B1 is not available or B1 and the IvDC are different, and (iii) the number of merge candidates added so far does not exceed the maximum number of candidates in the merge candidate list.
- the prediction unit may add the VSP candidate to the merge candidate list if the disparity vector derived by view synthesis prediction (hereinafter referred to as VSP) is available and the number of merge candidates added so far does not exceed the maximum number of candidates in the merge candidate list.
- the prediction unit may add the VSP to the merge candidate list on the condition that additional coding methods such as illumination compensation (IC), advanced residual prediction (ARP), and the like are not used to increase coding efficiency.
- the prediction unit may add A0 to the merge candidate list if A0 is available and the number of merge candidates added so far does not exceed the maximum number of candidates in the merge candidate list.
- the prediction unit may add B2 to the merge candidate list if B2 is available and the number of merge candidates added so far does not exceed the maximum number of candidates in the merge candidate list.
- the prediction unit may add IvMCShift to the merge candidate list if (i) IvMCShift is available and the number of merge candidates added so far does not exceed the maximum number of candidates in the merge candidate list, and (ii) IvMC is not available or IvMC and IvMCShift are not the same.
- the prediction unit may add IvDCShift to the merge candidate list if IvDCShift is available and the number of merge candidates added so far does not exceed the maximum number of candidates in the merge candidate list.
- When MPI is applied, candidates T and D may be used; otherwise, T and D may not be used. The sketch below summarizes this construction order.
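This is a condensed sketch of the list construction; the maximum candidate count, the simple equality-based pruning, and the candidate dictionary are simplifications of the actual availability and pruning rules:

```python
def build_merge_list(cand, max_cands=6):
    """Assemble the merge candidate list in the order described above.

    `cand` maps candidate names (T, D, IvMC, A1, B1, B0, IvDC, VSP, A0, B2,
    IvMCShift, IvDCShift) to motion info, or None when unavailable.
    """
    lst = []

    def add(name, pruned_against=()):
        c = cand.get(name)
        if c is None or len(lst) >= max_cands:
            return
        for other in pruned_against:
            if cand.get(other) is not None and c == cand[other]:
                return  # pruned: identical to an earlier candidate
        lst.append(c)

    add("T")  # MPI texture candidate (depth coding only)
    add("D")  # disparity-derived depth candidate
    if cand.get("IvMC") is not None and (
            cand.get("T") is None or cand["T"] != cand["IvMC"]):
        add("IvMC")
    n = "T" if cand.get("T") is not None else "IvMC"  # pruning reference N
    add("A1", pruned_against=(n,))
    add("B1", pruned_against=(n, "A1"))
    add("B0")
    add("IvDC", pruned_against=("A1", "B1"))
    add("VSP")  # skipped when IC/ARP are in use (not modeled here)
    add("A0")
    add("B2")
    add("IvMCShift", pruned_against=("IvMC",))
    add("IvDCShift")
    return lst
```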
- the prediction units of the encoding apparatus and the decoding apparatus may specify a motion vector in units of sub-blocks (sub-prediction blocks) of the prediction block when IvMC is used, or when VSP is applied and the block is not partitioned based on depth.
- inter-layer (inter-view) prediction may be performed on a sub-prediction block basis.
- the prediction unit may induce a disparity vector in units of sub-prediction blocks.
- When a motion vector is derived in units of sub-prediction blocks, the motion vector may be derived for each sub-block within the current prediction block.
- Since a motion vector can be specified in units of sub-prediction blocks, information specifying the size of the sub-prediction block may be transmitted at the extension level of the video parameter set.
- the sub-prediction block size when MPI is applied may be signaled separately.
- the depth map may be coded by referring to coding information regarding texture pictures of the same time, for example, the same picture order count (POC).
- Since the depth picture is captured at the same time as the texture picture, or is generated from depth information about the texture picture of the same time, the depth picture and the texture picture of the same time are highly correlated.
- block partition information, motion information, or the like of an already coded texture picture may be used when coding a depth picture. This is called motion parameter inheritance (MPI), as described above.
- FIG. 8 is a diagram schematically illustrating a method of deriving information from a texture picture.
- a block 850 in the texture picture 840 corresponding to the sub block 830 of the prediction block 820 in the current picture 810 is specified.
- the current picture 810 may be a depth picture.
- Motion information of the prediction block 850′ that includes the center 860 of the corresponding texture block 850 may be used as the motion information of the sub-block 830.
- This inheritance of motion information from the texture may be called motion vector inheritance (MVI).
- FIG. 9 is a diagram schematically illustrating a process of deriving a motion vector of a texture picture through MVI.
- a motion vector may be inherited from the corresponding block C′ 940, located at the same position in the texture picture 930 as the current block C 920 in the depth picture 910.
- the prediction unit of the encoding apparatus and the decoding apparatus may derive the motion vector Mv 950 at the center of the corresponding block C ′ 940 and use it as the motion vector Mv 960 for the current block 920.
- When the texture block 940 at the same position as the current block C 920 is a block to which intra prediction is applied, the prediction unit does not obtain a motion vector from the texture block.
- a motion vector may be obtained in sub-block units from a texture picture corresponding to the current block to increase the accuracy of prediction for the current block.
- the prediction unit of the encoding apparatus and the decoding apparatus may divide the corresponding block in the texture picture into sub-blocks having a predetermined size, and may obtain motion information in the divided sub-block unit and apply the current block to the current block in the depth picture.
- the corresponding block may be a prediction block
- the sub block may be a sub prediction block or a sub PU.
- FIG. 10 is a diagram schematically illustrating a method of deriving a motion vector by applying MVI on a sub-block basis.
- the motion vector of the current block C 1020 in the depth picture 1010 may be inherited from the corresponding block C ′ 1040 in the texture picture 1030.
- the succession of the motion vector may be performed in units of sub blocks in the current block C 1020 and the corresponding block C ′ 1040.
- the sub blocks C1 to C4 and C'1 to C'4 become sub prediction blocks.
- the size of the sub-prediction block (sub-PU) may be set to N×M (where N and M are integers greater than 0).
- the prediction units of the encoding apparatus and the decoding apparatus may obtain a motion vector based on a sub-block specified by its correspondence with the current block C 1020, which is a depth block, regardless of the block division information in the original texture picture. For example, the prediction unit may obtain a motion vector according to the sizes of the sub-blocks C′1 to C′4 of the corresponding block C′ 1040.
- the position for obtaining the motion information in the sub PU may be the center of the sub PU.
- Alternatively, the position from which the motion information is obtained in the sub-PU may be the top-left position of the sub-prediction block.
- Each sub-prediction block may be specified by its top-left position.
- When no motion vector is available for a sub-PU, the prediction unit may replace the motion vector of that sub-PU with a neighboring motion vector value.
- the neighboring motion vector may be the motion vector of the sub-PU to the left of or above the corresponding sub-PU.
- Alternatively, the prediction units of the encoding apparatus and the decoding apparatus may set a predefined substitute motion vector as the motion vector of the corresponding sub-PU.
- the alternative motion vector may be a motion vector of a block indicated by NBDV or DoNBDV.
- the prediction unit may set the motion vector derived immediately before as the replacement motion vector and continuously update the replacement motion vector.
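A minimal sketch of the sub-PU motion vector inheritance described above follows; the motion-field accessor, the default 8×8 sub-PU size, and the substitute-vector policy are illustrative assumptions:

```python
def inherit_sub_pu_motion(texture_mv_field, pu_x, pu_y, pu_w, pu_h,
                          sub_w=8, sub_h=8, default_mv=None):
    """Inherit motion vectors from the co-located texture picture per sub-PU.

    texture_mv_field(x, y) returns the motion vector stored at sample
    position (x, y) of the texture picture, or None where the texture is
    intra-coded. Sampling uses the center of each sub-PU.
    """
    mvs = {}
    last_mv = default_mv  # e.g., an NBDV/DoNBDV-based substitute
    for sy in range(pu_y, pu_y + pu_h, sub_h):
        for sx in range(pu_x, pu_x + pu_w, sub_w):
            mv = texture_mv_field(sx + sub_w // 2, sy + sub_h // 2)
            if mv is None:       # no motion vector: use the substitute
                mv = last_mv
            else:
                last_mv = mv     # keep updating the substitute
            mvs[(sx, sy)] = mv
    return mvs
```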
- When inheriting the motion vector from the texture block corresponding to the depth block to be coded (encoded/decoded), the prediction unit may decide whether to inherit it in units of prediction blocks (i.e., PU units) or in units of sub-prediction blocks (i.e., sub-PU units).
- information indicating whether to inherit the motion vector in units of PUs or the motion vector in units of sub-PUs may be transmitted from the encoding apparatus to the decoding apparatus.
- an indication of whether to inherit the motion vector in units of PUs or the motion vector in units of sub-PUs may be signaled using a flag.
- the decoding apparatus may determine whether to inherit the motion vector in units of PUs or the motion vector in units of sub-PUs based on the received information.
- For example, the prediction unit may use the motion vector Mv1 of C′1 for the sub-block C1, the motion vector Mv2 of C′2 corresponding to C2 for the sub-block C2, the motion vector Mv3 of C′3 corresponding to C3 for the sub-block C3, and the motion vector Mv4 of C′4 corresponding to C4 for the sub-block C4.
- FIG. 11 is a flowchart schematically illustrating an operation of a decoding apparatus according to the present invention.
- the decoding apparatus entropy decodes the bitstream and outputs video information necessary for decoding the current block (S1110).
- The video information includes, for decoding the current block, information necessary to inverse-transform/dequantize the residual, information necessary to generate a prediction sample, information necessary to apply filtering to the reconstructed picture, and the like.
- the video information may include information indicating whether to inherit the motion information from the texture picture.
- When the current block is a prediction block in the depth view, that is, a PU in the depth view, the video information may include information indicating whether to derive the motion information of the current block in units of sub-prediction blocks (sub-PUs).
- the information indicating whether to derive the motion vector in the sub-prediction block unit may indicate whether the motion vector is derived in the prediction block unit or in the sub-prediction block unit by indicating the size of the block from which the motion information is derived.
- the video information may be transmitted at the level of the video parameter set or the extension level of the video parameter set as needed.
- the decoding apparatus may determine a unit for deriving a motion vector for the current block based on the video information (S1120).
- the video information may include motion information derivation unit information indicating whether to derive a motion vector on a sub-prediction block basis.
- the decoding apparatus may determine whether to derive motion information in units of prediction blocks or units of sub blocks (sub prediction blocks) based on the motion information derivation unit information.
- the decoding apparatus may derive a motion vector for the current block based on the determination (S1130).
- When it is determined that motion information (e.g., a motion vector) is to be derived in units of prediction blocks, the decoding apparatus may set the motion vector of the corresponding block in the texture picture as the motion vector of the current block in the depth picture.
- When it is determined that motion information is to be derived in units of sub-blocks, motion information (e.g., a motion vector) of the current block in the depth picture may be derived from the sub-blocks of the corresponding block in the texture picture.
- the decoding apparatus may set a motion vector of a subblock of a corresponding block in a texture picture as a motion vector of a subblock of a current block in a depth picture.
- the decoding apparatus may derive the prediction sample for the current block by using the motion vector (S1140).
- the decoding apparatus may derive the prediction sample in the sub block unit by using the motion vector. For example, the decoding apparatus may use the samples of the region indicated by the motion vector specified on a sub-block basis as a prediction sample for the sub-block (eg, sub-prediction block) of the current block (eg, prediction block).
- the decoding apparatus may derive the prediction sample in the prediction block unit by using the motion vector. For example, the decoding apparatus may use the samples of the region indicated by the motion vector specified in the prediction block unit on the reference picture as the prediction samples of the current block (prediction block).
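Steps S1120 through S1140 can be outlined as below; the flag name, the callable signatures, and the dictionary-based regions are illustrative stand-ins for the signaled syntax and the actual sample buffers:

```python
def decode_depth_pu(video_info, texture_pu_mv, texture_sub_pu_mvs,
                    predict, residual):
    """Outline of FIG. 11 after entropy decoding (S1110) has produced
    video_info. predict(mv, region) returns prediction samples for a region;
    residual maps regions to residual samples.
    """
    # S1120: determine the motion derivation unit from the signaled info
    use_sub_pu = video_info.get("sub_pu_mvi_flag", False)

    # S1130 + S1140: derive motion per PU or per sub-PU, then predict
    if use_sub_pu:
        pred = {region: predict(mv, region)
                for region, mv in texture_sub_pu_mvs.items()}
    else:
        pred = {"pu": predict(texture_pu_mv, "pu")}

    # Reconstruction: prediction + residual (in-loop filtering may be skipped)
    return {r: p + residual.get(r, 0) for r, p in pred.items()}
```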
- the decoding apparatus may derive the reconstructed sample by adding the predictive sample and the residual sample.
- the decoding apparatus may omit in-loop filtering or the like in order to reduce the complexity.
- Although decoding has been described here, the steps from S1120 onward (determining whether to derive the motion vector for a block of the depth map in units of sub-blocks, deriving the motion vector in units of the current block or of its sub-blocks, deriving the prediction sample using the motion vector, and deriving the reconstructed sample) may be performed in the same manner in the encoding apparatus.
- the encoding apparatus may determine whether to derive the motion vector on a sub-block basis in consideration of the cost of coding, and then entropy-encode the related information and transmit the related information to the decoding apparatus.
Claims (14)
- 1. A video decoding apparatus for decoding multi-view video, the apparatus comprising:
an entropy decoding unit that entropy-decodes a bitstream and outputs video information necessary for decoding a current block in a depth picture;
a memory that stores pictures referenced for decoding the current block; and
a prediction unit that derives a prediction sample for the current block using motion information of a texture picture in the same view as the motion information for the current block,
wherein the prediction unit determines whether to derive the motion information of the texture picture in units of sub-blocks of the current block, and derives the motion information for the current block based on the determination.
- 2. The apparatus of claim 1, wherein the motion information is derived from a corresponding block of the current block in the texture picture, the texture picture is a picture of the same time as the depth picture, and the corresponding block is a block at the same position within the texture picture as the current block.
- 3. The apparatus of claim 1, wherein the video information includes indication information indicating whether motion information is to be derived in units of sub-blocks, and
the prediction unit determines, based on the indication information, whether to derive the motion information of the texture picture in units of sub-blocks.
- 4. The apparatus of claim 1, wherein the current block is a prediction block.
- 5. The apparatus of claim 1, wherein, when it is determined that the motion information of the texture picture is to be derived in units of sub-blocks, the prediction unit derives a motion vector in units of sub-blocks of the block corresponding to the current block in the texture picture.
- 6. The apparatus of claim 5, wherein a sub-block of the corresponding block corresponds to a sub-block of the current block, and the prediction unit sets a motion vector derived from the sub-block of the corresponding block as the motion vector for the sub-block of the current block.
- 7. The apparatus of claim 5, wherein a sub-block of the corresponding block corresponds to a sub-block of the current block, and the prediction unit derives prediction samples for each sub-block of the current block based on the motion vector derived for each sub-block of the corresponding block.
- 8. A video decoding method for decoding multi-view video, the method comprising:
entropy-decoding a bitstream to derive video information necessary for decoding a current block in a depth picture;
determining, based on the video information, whether to derive motion information for the current block from a texture picture in units of sub-blocks;
deriving the motion information for the current block from the texture picture according to the determination; and
deriving a prediction sample for the current block using the motion vector.
- 9. The method of claim 8, wherein the motion information is derived from a corresponding block of the current block in the texture picture, the texture picture is a picture of the same time as the depth picture, and the corresponding block is a block at the same position within the texture picture as the current block.
- 10. The method of claim 8, wherein the video information includes indication information indicating whether motion information is to be derived in units of sub-blocks, and
the determining determines, based on the indication information, whether to derive the motion information of the texture picture in units of sub-blocks.
- 11. The method of claim 8, wherein the current block is a prediction block.
- 12. The method of claim 8, wherein, when it is determined that the motion information of the texture picture is to be derived in units of sub-blocks,
the deriving of the motion information derives a motion vector in units of sub-blocks of the block corresponding to the current block in the texture picture.
- 13. The method of claim 12, wherein a sub-block of the corresponding block corresponds to a sub-block of the current block, and
the deriving of the motion information sets a motion vector derived from the sub-block of the corresponding block as the motion vector for the sub-block of the current block.
- 14. The method of claim 12, wherein a sub-block of the corresponding block corresponds to a sub-block of the current block, and
the deriving of the prediction sample derives prediction samples for each sub-block of the current block based on the motion vector derived for each sub-block of the corresponding block.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP14854693.0A EP3059968A4 (en) | 2013-10-18 | 2014-10-20 | Method and apparatus for decoding multi-view video |
KR1020167009104A KR102343817B1 (ko) | 2013-10-18 | 2014-10-20 | Method and apparatus for decoding multi-view video |
CN201480057222.5A CN105637875A (zh) | 2013-10-18 | 2014-10-20 | Method and apparatus for decoding multi-view video |
JP2016524091A JP6571646B2 (ja) | 2013-10-18 | 2014-10-20 | Method and apparatus for decoding multi-view video |
US15/028,905 US10045048B2 (en) | 2013-10-18 | 2014-10-20 | Method and apparatus for decoding multi-view video |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361892448P | 2013-10-18 | 2013-10-18 | |
US61/892,448 | 2013-10-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015057038A1 (ko) | 2015-04-23 |
Family
ID=52828406
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2014/009860 WO2015057038A1 (ko) | 2013-10-18 | 2014-10-20 | Method and apparatus for decoding multi-view video |
Country Status (6)
Country | Link |
---|---|
US (1) | US10045048B2 (ko) |
EP (1) | EP3059968A4 (ko) |
JP (1) | JP6571646B2 (ko) |
KR (1) | KR102343817B1 (ko) |
CN (1) | CN105637875A (ko) |
WO (1) | WO2015057038A1 (ko) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015192781A1 (en) * | 2014-06-20 | 2015-12-23 | Mediatek Singapore Pte. Ltd. | Method of sub-pu syntax signaling and illumination compensation for 3d and multi-view video coding |
CN109996075B (zh) * | 2017-12-29 | 2022-07-12 | Huawei Technologies Co., Ltd. | Image decoding method and decoder |
CN109151436B (zh) * | 2018-09-30 | 2021-02-02 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Data processing method and apparatus, electronic device, and storage medium |
CN109257609B (zh) * | 2018-09-30 | 2021-04-23 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Data processing method and apparatus, electronic device, and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2752001A4 (en) * | 2011-08-30 | 2015-04-15 | Nokia Corp | APPARATUS, METHOD AND COMPUTER PROGRAM FOR VIDEO ENCODING AND DECODING |
US9485503B2 (en) * | 2011-11-18 | 2016-11-01 | Qualcomm Incorporated | Inside view motion prediction among texture and depth view components |
WO2013107931A1 (en) * | 2012-01-19 | 2013-07-25 | Nokia Corporation | An apparatus, a method and a computer program for video coding and decoding |
US9998726B2 (en) * | 2012-06-20 | 2018-06-12 | Nokia Technologies Oy | Apparatus, a method and a computer program for video coding and decoding |
WO2014089727A1 (en) * | 2012-12-14 | 2014-06-19 | Qualcomm Incorporated | Inside view motion prediction among texture and depth view components with asymmetric spatial resolution |
US9544601B2 (en) * | 2013-10-15 | 2017-01-10 | Qualcomm Incorporated | Wedgelet pattern extension for depth intra coding |
2014
- 2014-10-20 WO PCT/KR2014/009860 patent/WO2015057038A1/ko active Application Filing
- 2014-10-20 EP EP14854693.0A patent/EP3059968A4/en not_active Withdrawn
- 2014-10-20 KR KR1020167009104A patent/KR102343817B1/ko active IP Right Grant
- 2014-10-20 US US15/028,905 patent/US10045048B2/en active Active
- 2014-10-20 CN CN201480057222.5A patent/CN105637875A/zh active Pending
- 2014-10-20 JP JP2016524091A patent/JP6571646B2/ja active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20050066400A (ko) * | 2003-12-26 | 2005-06-30 | Electronics and Telecommunications Research Institute | Apparatus and method for tracking 3D objects using multi-view images and depth information |
WO2010043773A1 (en) * | 2008-10-17 | 2010-04-22 | Nokia Corporation | Sharing of motion vector in 3d video coding |
WO2013115025A1 (ja) * | 2012-01-31 | 2013-08-08 | Sony Corporation | Encoding device and encoding method, and decoding device and decoding method |
Non-Patent Citations (2)
Title |
---|
ISMAEL DARIBO ET AL.: "ARBITRARILY SHAPED SUB-BLOCK MOTION PREDICTION IN TEXTURE MAP COMPRESSION USING DEPTH INFORMATION", 2012 PICTURE CODING SYMPOSIUM, 7 May 2012 (2012-05-07), XP032449843, Retrieved from the Internet <URL:http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6213301> * |
JIN YOUNG LEE ET AL.: "A FAST AND EFFICIENT MULTI-VIEW DEPTH IMAGE CODING METHOD BASED ON TEMPORAL AND INTER-VIEW CORRELATIONS OF TEXTURE IMAGES", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 21, no. 12, December 2011 (2011-12-01), XP011390666, Retrieved from the Internet <URL:http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5766721> * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019009504A1 * | 2017-07-07 | 2019-01-10 | Samsung Electronics Co., Ltd. | Apparatus and method for encoding motion vector determined using adaptive motion vector resolution, and apparatus and method for decoding motion vector |
KR20200004418A (ko) | 2017-07-07 | 2020-01-13 | Samsung Electronics Co., Ltd. | Apparatus and method for encoding motion vector determined using adaptive motion vector resolution, and apparatus and method for decoding motion vector |
KR20210006027A (ko) | 2017-07-07 | 2021-01-15 | Samsung Electronics Co., Ltd. | Apparatus and method for encoding motion vector determined using adaptive motion vector resolution, and apparatus and method for decoding motion vector |
KR102206084B1 (ko) * | 2017-07-07 | 2021-01-21 | Samsung Electronics Co., Ltd. | Apparatus and method for encoding motion vector determined using adaptive motion vector resolution, and apparatus and method for decoding motion vector |
KR102302671B1 (ko) * | 2017-07-07 | 2021-09-15 | Samsung Electronics Co., Ltd. | Apparatus and method for encoding motion vector determined using adaptive motion vector resolution, and apparatus and method for decoding motion vector |
US11303920B2 (en) | 2017-07-07 | 2022-04-12 | Samsung Electronics Co., Ltd. | Apparatus and method for encoding motion vector determined using adaptive motion vector resolution, and apparatus and method for decoding motion vector |
US11991383B2 (en) | 2017-07-07 | 2024-05-21 | Samsung Electronics Co., Ltd. | Apparatus and method for encoding motion vector determined using adaptive motion vector resolution, and apparatus and method for decoding motion vector |
US11432003B2 (en) | 2017-09-28 | 2022-08-30 | Samsung Electronics Co., Ltd. | Encoding method and apparatus therefor, and decoding method and apparatus therefor |
Also Published As
Publication number | Publication date |
---|---|
EP3059968A1 (en) | 2016-08-24 |
JP2016537871A (ja) | 2016-12-01 |
US20160261888A1 (en) | 2016-09-08 |
CN105637875A (zh) | 2016-06-01 |
JP6571646B2 (ja) | 2019-09-04 |
KR20160072101A (ko) | 2016-06-22 |
KR102343817B1 (ko) | 2021-12-27 |
US10045048B2 (en) | 2018-08-07 |
EP3059968A4 (en) | 2017-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102254599B1 (ko) | View synthesis prediction method and merge candidate list construction method using the same in multi-view video coding | |
KR102269506B1 (ko) | Video decoding method and apparatus for decoding multi-view video | |
US10659814B2 (en) | Depth picture coding method and device in video coding | |
US10063887B2 (en) | Video decoding apparatus and method for decoding multi-view video | |
US20170310993A1 (en) | Movement information compression method and device for 3d video coding | |
US10587894B2 (en) | Method and device for encoding/decoding 3D video | |
KR20150004289A (ko) | Method for encoding and decoding video comprising a plurality of layers | |
US20170310994A1 (en) | 3d video coding method and device | |
KR102343817B1 (ko) | Method and apparatus for decoding multi-view video | |
KR20140046385A (ko) | Video data decoding method and video data decoding apparatus | |
US20160255368A1 (en) | Method and apparatus for coding/decoding video comprising multi-view | |
US10397611B2 (en) | Method and device for encoding/decoding 3D video | |
WO2015141977A1 (ko) | 3D video encoding/decoding method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14854693; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 20167009104; Country of ref document: KR; Kind code of ref document: A |
WWE | Wipo information: entry into national phase | Ref document number: 15028905; Country of ref document: US |
ENP | Entry into the national phase | Ref document number: 2016524091; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
REEP | Request for entry into the european phase | Ref document number: 2014854693; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 2014854693; Country of ref document: EP |