WO2012070827A2 - Method for encoding and decoding images and apparatus using the same
- Publication number
- WO2012070827A2 (PCT/KR2011/008898)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- merge
- current block
- information
- mode
- Prior art date
Classifications
- H04N19/70—syntax aspects related to video coding, e.g. related to compression standards
- H04N19/52—processing of motion vectors by predictive encoding
- H04N19/517—processing of motion vectors by encoding
- H04N19/51—motion estimation or motion compensation
- H04N19/503—predictive coding involving temporal prediction
- H04N19/50—predictive coding
- H04N19/103—selection of coding mode or of prediction mode
- H04N19/109—selection of coding mode among a plurality of temporal predictive coding modes
- H04N19/176—adaptive coding where the coding unit is an image region that is a block, e.g. a macroblock
- H04N19/61—transform coding in combination with predictive coding
- H04N19/625—transform coding using the discrete cosine transform [DCT]
- H04N19/91—entropy coding, e.g. variable length coding [VLC] or arithmetic coding
- H04N19/82—filtering operations for video compression involving filtering within a prediction loop
Definitions
- the present invention relates to image processing, and more particularly, to an inter prediction method and apparatus.
- the demand for high resolution and high quality images such as high definition (HD) images and ultra high definition (UHD) images is increasing in various fields.
- as the resolution and quality of image data increase, the amount of information or the bit rate to be transmitted increases relative to existing image data. Therefore, when image data are transmitted over a medium such as a conventional wired/wireless broadband line or stored on a conventional storage medium, the transmission cost and the storage cost increase. High-efficiency image compression techniques can be used to solve these problems.
- Image compression technology includes an inter prediction technique for predicting pixel values included in a current picture from pictures before and/or after the current picture, and an intra prediction technique for predicting pixel values included in a current picture by using pixel information within the current picture.
- An object of the present invention is to provide an image encoding method and apparatus for improving image compression efficiency.
- Another object of the present invention is to provide an image decoding method and apparatus capable of increasing image compression efficiency.
- Another technical problem of the present invention is to provide a method and apparatus for transmitting image information that can increase image compression efficiency.
- Another technical problem of the present invention is to provide an inter prediction method and apparatus for improving image compression efficiency.
- An embodiment of the present invention is a video information transmission method.
- the method includes performing inter prediction on a current block, encoding mode information about the inter prediction of the current block, and transmitting the encoded mode information, wherein the mode information includes residual flag information indicating whether a residual signal exists for the current block and merge flag information indicating whether a merge mode is applied to the current block.
- another embodiment includes receiving mode information regarding inter prediction of a current block, decoding the received mode information, and performing inter prediction on the current block based on the decoded mode information.
- the mode information includes residual flag information indicating whether a residual signal exists for the current block and merge flag information indicating whether a merge mode is applied to the current block.
- the inter prediction performing step may include selecting, using the decoded mode information, a block used for deriving motion information of the current block from among a plurality of candidate blocks constituting a candidate block list, and deriving the motion information of the current block by using the selected block, wherein the candidate block list may be configured identically regardless of whether a residual signal exists for the current block.
- the candidate block list may be composed of a left neighboring block adjacent to the current block, an upper neighboring block adjacent to the current block, an upper-right corner block of the current block, an upper-left corner block, a lower-left corner block, and a co-located block with respect to the current block.
- alternatively, the candidate block list may be composed of the block located at the lowermost end among the neighboring blocks adjacent to the left side of the current block, the block located at the rightmost side among the neighboring blocks adjacent to the top of the current block, and the corner blocks and the co-located block of the current block.
- the derived motion information may be one of L0 motion information, L1 motion information, and Bi motion information.
- the residual flag information can be decoded before the merge flag information.
- still another embodiment includes receiving mode information regarding inter prediction of a current block, decoding the received mode information, performing inter prediction on the current block based on the decoded mode information to generate a prediction block, and generating a reconstruction block by using the generated prediction block, wherein the mode information includes residual flag information indicating whether a residual signal exists for the current block and merge flag information indicating whether the prediction mode for the current block is a merge mode.
- the inter prediction performing step may include selecting, using the decoded mode information, a block used for deriving motion information of the current block from among a plurality of candidate blocks constituting a candidate block list, and deriving the motion information of the current block by using the selected block, wherein the candidate block list may be configured identically regardless of whether a residual signal exists for the current block.
- the candidate block list may be composed of a left neighboring block adjacent to the current block, an upper neighboring block adjacent to the current block, an upper-right corner block of the current block, an upper-left corner block, a lower-left corner block, and a co-located block with respect to the current block.
- alternatively, the candidate block list may be composed of the block located at the lowermost end among the neighboring blocks adjacent to the left side of the current block, the block located at the rightmost side among the neighboring blocks adjacent to the top of the current block, and the corner blocks and the co-located block of the current block.
- the residual flag information may be decoded before the merge flag information.
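As an illustrative sketch (not the normative bitstream syntax), the parsing order described above, in which the residual flag may be decoded before the merge flag, can be modeled as follows; the flag layout and function name are hypothetical:

```python
def parse_mode_info(bits):
    # Decode the residual flag first, then the merge flag, in the order
    # described above. The bit layout here is a hypothetical example.
    it = iter(bits)
    residual_flag = next(it)  # decoded first: does a residual signal exist?
    merge_flag = next(it)     # decoded next: is merge mode applied?
    return {"residual": bool(residual_flag), "merge": bool(merge_flag)}

info = parse_mode_info([0, 1])  # no residual + merge applied: skip-like case
```

Decoding the residual flag first lets the decoder know early whether any residual data follows the mode information.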
- According to the image encoding method, the image decoding method, the image information transmission method, and the inter prediction method of the present invention, image compression efficiency can be improved.
- FIG. 1 is a block diagram schematically illustrating an image encoding apparatus according to an embodiment of the present invention.
- FIG. 2 is a conceptual diagram schematically illustrating a prediction unit according to an embodiment of the present invention.
- FIG. 3 is a block diagram schematically illustrating an image decoding apparatus according to an embodiment of the present invention.
- FIG. 4 is a conceptual diagram schematically illustrating a prediction unit of an image decoding apparatus according to an embodiment of the present invention.
- FIG. 5 is a flowchart schematically illustrating an embodiment of an inter prediction method in a merge mode.
- FIG. 6 is a conceptual diagram schematically illustrating an embodiment of merge candidates included in a merge candidate list.
- FIG. 7 is a conceptual diagram schematically illustrating another embodiment of merge candidates included in a merge candidate list.
- FIG. 8 is a conceptual diagram schematically illustrating still another embodiment of merge candidates included in a merge candidate list.
- FIG. 9 is a conceptual diagram schematically illustrating an embodiment of a method of transmitting merge information in an encoder.
- FIG. 10 is a conceptual diagram schematically illustrating an embodiment of an inter prediction method in a decoder.
- FIG. 11 is a flowchart schematically illustrating an embodiment of an inter prediction method in an integrated mode.
- FIG. 12 is a conceptual diagram schematically illustrating an embodiment of a method for transmitting unified mode information in an encoder.
- FIG. 13 is a conceptual diagram schematically illustrating another embodiment of an inter prediction method in a decoder.
- the components in the drawings described herein are shown independently for convenience of description of their different characteristic functions in the image encoding/decoding apparatus; this does not mean that each component is implemented as separate hardware or separate software.
- two or more of each configuration may be combined to form one configuration, or one configuration may be divided into a plurality of configurations.
- Embodiments in which each configuration is integrated and / or separated are also included in the scope of the present invention without departing from the spirit of the present invention.
- the components may not be essential components for performing essential functions in the present invention, but may be optional components for improving performance.
- the present invention can be implemented with only the components essential for realizing its substance, excluding components used merely for improving performance, and a structure including only such essential components, excluding the optional performance-improving components, is also within the scope of the present invention.
- the image encoding apparatus 100 may include a picture splitter 105, a predictor 110, a transformer 115, a quantizer 120, a realigner 125, an entropy encoder 130, an inverse quantization unit 135, an inverse transform unit 140, a filter unit 145, and a memory 150.
- the picture dividing unit 105 may divide the input picture into at least one processing unit.
- the processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU).
- the predictor 110 may include an inter predictor that performs inter prediction and an intra predictor that performs intra prediction.
- the prediction unit 110 may generate a prediction block by performing prediction on the processing unit of the picture in the picture division unit 105.
- the processing unit of the picture in the prediction unit 110 may be a coding unit, a transformation unit, or a prediction unit.
- the processing unit in which prediction is performed may differ from the processing unit in which the prediction method and its details are determined.
- for example, the prediction method and the prediction mode may be determined in units of prediction units, while prediction itself may be performed in units of transform units.
- a residual value (residual block) between the generated prediction block and the original block may be input to the converter 115.
- prediction mode information and motion vector information used for prediction may be encoded by the entropy encoder 130 along with the residual value and transmitted to the decoder.
- the transformer 115 performs a transform on the residual block in transform units and generates transform coefficients.
- the processing unit in the transformer 115 may be a transform unit and may have a quad tree structure. In this case, the size of the transform unit may be determined within a range of predetermined maximum and minimum sizes.
- the transform unit 115 may convert the residual block using a discrete cosine transform (DCT) and / or a discrete sine transform (DST).
- the quantizer 120 may generate quantization coefficients by quantizing the residual values transformed by the converter 115.
- the value calculated by the quantization unit 120 may be provided to the inverse quantization unit 135 and the reordering unit 125.
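The quantization step described above can be illustrated with a toy example; the uniform step-size scaling here is a simplification for illustration, not the codec's exact quantization formula:

```python
def quantize(coeffs, step):
    # Divide transform coefficients by a step size and round; the discarded
    # precision is what makes quantization lossy.
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    # Inverse quantization restores only an approximation of the input.
    return [level * step for level in levels]

coeffs = [83.0, -14.0, 6.0, 1.0]
levels = quantize(coeffs, step=8)     # [10, -2, 1, 0]
approx = dequantize(levels, step=8)   # [80, -16, 8, 0], close to but not equal to coeffs
```

The dequantized values feed both the reconstruction loop (via the inverse quantizer) and, after reordering, the entropy encoder.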
- the reordering unit 125 rearranges the quantization coefficients provided from the quantization unit 120. By rearranging the quantization coefficients, the efficiency of encoding in the entropy encoder 130 may be increased.
- the reordering unit 125 may rearrange the quantization coefficients in the form of a two-dimensional block into a one-dimensional vector form through a coefficient scanning method.
- the reordering unit 125 may increase the entropy coding efficiency of the entropy encoder 130 by changing the order of coefficient scanning based on probabilistic statistics of coefficients transmitted from the quantization unit.
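The coefficient scanning described above can be sketched as follows; the up-right diagonal order shown is one illustrative choice, and the actual scan order used by the realigner may differ:

```python
def diagonal_scan(block):
    # Rearrange a 2-D coefficient block into a 1-D vector along diagonals,
    # so low-frequency (typically non-zero) coefficients come first.
    n = len(block)
    out = []
    for s in range(2 * n - 1):          # each s indexes one anti-diagonal
        for y in range(n):
            x = s - y
            if 0 <= x < n:
                out.append(block[y][x])
    return out

coeffs = [[9, 4, 1, 0],
          [3, 2, 0, 0],
          [1, 0, 0, 0],
          [0, 0, 0, 0]]
vector = diagonal_scan(coeffs)  # non-zero values cluster at the front
```

Front-loading the non-zero coefficients produces long trailing runs of zeros, which the entropy encoder can represent compactly.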
- the entropy encoder 130 may perform entropy encoding on the quantized coefficients rearranged by the reordering unit 125.
- the entropy encoder 130 may encode various information received from the reordering unit 125 and the prediction unit 110, such as quantization coefficient information of the coding unit, block type information, prediction mode information, division unit information, prediction unit information, transmission unit information, motion vector information, reference picture information, interpolation information of a block, and filtering information.
- Entropy encoding may use encoding methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC).
- the entropy encoder 130 may store a table for performing entropy coding, such as a variable length coding (VLC) table, and may perform entropy encoding using the stored VLC table.
- alternatively, the entropy encoder 130 may binarize a symbol into bins and then perform arithmetic encoding on the bins according to their occurrence probabilities to generate a bitstream.
- in entropy encoding, a low-value index and a correspondingly short codeword are assigned to a symbol with a high probability of occurrence, while a high-value index and a correspondingly long codeword are assigned to a symbol with a low probability of occurrence. Accordingly, the bit amount for the symbols to be encoded may be reduced, and image compression performance may be improved by entropy encoding.
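The codeword-assignment principle above can be illustrated with order-0 exponential-Golomb codes, one of the coding methods named in this description; the symbols and their frequencies are hypothetical:

```python
def exp_golomb(n: int) -> str:
    # Order-0 exp-Golomb codeword for a non-negative index: small indices
    # get short codewords, large indices get long ones.
    m = n + 1
    bits = bin(m)[2:]
    return "0" * (len(bits) - 1) + bits

# Hypothetical symbol frequencies: the more frequent a symbol, the lower
# the index it is assigned, and hence the shorter its codeword.
freq = {"A": 60, "B": 25, "C": 10, "D": 5}
table = {sym: exp_golomb(i)
         for i, (sym, _) in enumerate(sorted(freq.items(), key=lambda kv: -kv[1]))}
# table == {"A": "1", "B": "010", "C": "011", "D": "00100"}
```

The most probable symbol costs a single bit, so the average bit amount per symbol drops below a fixed-length encoding.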
- the inverse quantization unit 135 may inverse quantize the quantized values in the quantization unit 120, and the inverse transformer 140 may inversely transform the inverse quantized values in the inverse quantization unit 135.
- the residual value generated by the inverse quantizer 135 and the inverse transformer 140 may be combined with the prediction block predicted by the predictor 110 to generate a reconstructed block.
- the filter unit 145 may apply a deblocking filter and / or an adaptive loop filter (ALF) to the reconstructed picture.
- the deblocking filter may remove block distortion generated at the boundary between blocks in the reconstructed picture.
- the adaptive loop filter may perform filtering based on a value obtained by comparing the reconstructed image with the original image after the blocks have been filtered through the deblocking filter. ALF may be applied only in a high-efficiency configuration.
- the filter unit 145 may not apply filtering to the reconstructed block used for inter prediction.
- the memory 150 may store the reconstructed block or the picture calculated by the filter unit 145.
- the reconstructed block or picture stored in the memory 150 may be provided to the predictor 110 that performs inter prediction.
- a coding unit is a unit in which coding / decoding of a picture is performed and may be divided with a depth based on a quad tree structure.
- the coding unit may have various sizes, such as 64x64, 32x32, 16x16, and 8x8.
- the encoder may transmit information about a largest coding unit (LCU) and a minimum coding unit (SCU) to the decoder.
- Information (depth information) regarding the number of splittable times together with information about the maximum coding unit and / or the minimum coding unit may be transmitted to the decoder.
- Information on whether the coding unit is split based on the quad tree structure may be transmitted from the encoder to the decoder through flag information such as a split flag.
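The quad-tree splitting described above can be sketched as follows; the split decision and function names are illustrative, not the bitstream syntax:

```python
def split_cu(x, y, size, min_size, split_decision, leaves):
    # Each coding unit either splits into four equal sub-units (when the
    # split flag is set and the minimum size has not been reached) or
    # becomes a leaf coding unit.
    if size > min_size and split_decision(x, y, size):
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                split_cu(x + dx, y + dy, half, min_size, split_decision, leaves)
    else:
        leaves.append((x, y, size))

leaves = []
# Hypothetical decision: split only the 64x64 root coding unit once.
split_cu(0, 0, 64, 8, lambda x, y, s: s == 64, leaves)
# leaves now holds four 32x32 coding units
```

The depth of this recursion is what the transmitted depth information bounds, and each recursive branch corresponds to one split flag in the bitstream.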
- One coding unit may be divided into a plurality of prediction units.
- a prediction mode may be determined in units of prediction units, and prediction may be performed in units of prediction units.
- a prediction mode may be determined in units of prediction units, and intra prediction may be performed in units of transform units.
- the predictor 200 may include an inter predictor 210 and an intra predictor 220.
- the inter prediction unit 210 may generate a prediction block by performing prediction based on information of at least one picture of a previous picture or a subsequent picture of the current picture.
- the intra predictor 220 may generate a prediction block by performing prediction based on pixel information in the current picture.
- the inter prediction unit 210 may select a reference picture for the prediction unit and select a reference block having the same size as the prediction unit in integer-pixel sample units. The inter prediction unit 210 may then generate a prediction block that is most similar to the current prediction unit in sub-integer sample units, such as 1/2-pixel and 1/4-pixel sample units, so that the residual signal is minimized and the magnitude of the motion vector to be encoded is also minimized.
- the motion vector may be expressed in units of integer pixels or less, for example, in units of 1/4 pixels for luma pixels and in units of 1/8 pixels for chroma pixels.
- Information about the index and the motion vector of the reference picture selected by the inter prediction unit 210 may be encoded and transmitted to the decoder.
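The sub-pel motion vector units above can be illustrated as follows; the derivation of the chroma displacement shown assumes 4:2:0 subsampling for the sake of the example:

```python
# A luma motion vector stored in quarter-pel units; the same stored value
# interpreted in eighth-pel units gives the chroma displacement (half the
# luma displacement, matching 4:2:0 chroma resolution).
mv_luma_qpel = (5, -2)                            # stored integer values
luma_px = tuple(c / 4 for c in mv_luma_qpel)      # (1.25, -0.5) pixels
chroma_px = tuple(c / 8 for c in mv_luma_qpel)    # (0.625, -0.25) pixels
```

Storing vectors in fractional units is what makes the sub-pel interpolation of the reference block addressable by integer syntax elements.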
- the image decoder 300 may include an entropy decoder 310, a reordering unit 315, an inverse quantizer 320, an inverse transformer 325, a predictor 330, a filter 335, and a memory 340.
- the input bitstream may be decoded according to the procedure by which the image information was processed in the image encoder.
- the entropy decoding unit 310 may perform entropy decoding on the input bitstream, and the entropy decoding method is similar to the entropy encoding method described above.
- when variable length coding (VLC) is used to perform entropy encoding in the image encoder, the entropy decoder 310 may perform entropy decoding by implementing the same VLC table as used in the encoder. When CABAC is used to perform entropy encoding in the image encoder, the entropy decoder 310 may correspondingly perform entropy decoding using CABAC.
- as in the encoder, a low-value index and a correspondingly short codeword correspond to a symbol with a high probability of occurrence, and a high-value index and a correspondingly long codeword correspond to a symbol with a low probability of occurrence, which reduces the bit amount for the encoded symbols and improves image compression performance.
- among the information decoded by the entropy decoder 310, information for generating a prediction block may be provided to the predictor 330, and the residual values on which entropy decoding has been performed may be input to the reordering unit 315.
- the reordering unit 315 may reorder the bitstream entropy-decoded by the entropy decoding unit 310 based on the reordering method used in the image encoder.
- the reordering unit 315 may reorder the coefficients expressed in the form of a one-dimensional vector by restoring the coefficients in the form of a two-dimensional block.
- the reordering unit 315 may perform reordering by receiving information related to the coefficient scanning performed by the encoder and reverse-scanning based on the scanning order used by the encoder.
- the inverse quantization unit 320 may perform inverse quantization based on the quantization parameter provided by the encoder and the coefficient values of the rearranged block.
- the inverse transform unit 325 may perform inverse DCT and/or inverse DST, corresponding to the DCT and/or DST performed by the transform unit of the encoder, on the result of the quantization performed by the image encoder.
- the inverse transform may be performed based on a transmission unit determined by the encoder or a division unit of an image.
- the DCT and/or DST may be selectively performed according to a plurality of pieces of information, such as the prediction method, the size of the current block, and the prediction direction, and the inverse transformer 325 of the decoder may perform the inverse transform based on the transform information used by the transformer of the encoder.
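The transform/inverse-transform pairing above can be illustrated with a one-dimensional orthonormal DCT-II and its inverse (DCT-III); real codecs use integer two-dimensional approximations, so this is only a sketch of the principle:

```python
import math

def dct(x):
    # Orthonormal 1-D DCT-II: the encoder-side transform in this sketch.
    N = len(x)
    return [(math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)) *
            sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def idct(X):
    # Matching inverse (1-D DCT-III), mirroring the encoder's transform.
    N = len(X)
    return [sum((math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)) * X[k] *
                math.cos(math.pi * (n + 0.5) * k / N) for k in range(N))
            for n in range(N)]

x = [52.0, 55.0, 61.0, 66.0]
rec = idct(dct(x))  # reconstructs x up to floating-point error
```

Because the inverse exactly mirrors the forward transform, any loss in the coding chain comes from quantization, not from the transform pair itself.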
- the prediction unit 330 may generate the prediction block based on the prediction block generation related information provided by the entropy decoding unit 310 and the previously decoded block and / or picture information provided by the memory 340.
- the reconstruction block may be generated using the prediction block generated by the predictor 330 and the residual block provided by the inverse transform unit 325.
- the reconstructed block and / or picture may be provided to the filter unit 335.
- the filter unit 335 may apply deblocking filtering, sample adaptive offset (SAO), and / or adaptive loop filtering (ALF) to the reconstructed block and / or picture.
- the memory 340 may store the reconstructed picture or block to use as a reference picture or reference block, and may provide the reconstructed picture to the output unit.
- FIG. 4 is a conceptual diagram schematically illustrating a prediction unit of an image decoding apparatus according to an embodiment of the present invention.
- the predictor 400 may include an intra predictor 410 and an inter predictor 420.
- the intra prediction unit 410 may generate a prediction block based on pixel information in the current picture when the prediction mode for the corresponding prediction unit is an intra prediction mode (intra prediction mode).
- the inter prediction unit 420 may perform inter prediction on the current prediction unit based on information included in at least one of a previous picture or a subsequent picture of the current picture containing the current prediction unit, by using information necessary for inter prediction of the current prediction unit provided by the image encoder, e.g., information about a motion vector and a reference picture index.
- the motion information may be derived in response to the skip flag, the merge flag, and the like of the coding unit received from the encoder.
- a "picture” or a “picture” can represent the same meaning as a “picture” according to the configuration or expression of the invention, the “picture” may be described as a “picture” or a “picture”.
- likewise, inter prediction and inter-picture prediction have the same meaning, and intra prediction and intra-picture prediction have the same meaning.
- prediction modes such as a merge mode, a direct mode, and / or a skip mode may be used to reduce the amount of transmission information according to the prediction.
- in the merge mode, the current block can be merged with another block adjacent to it vertically or horizontally.
- here, merging means that in inter prediction of the current block, the motion information of the current block is obtained from the motion information of an adjacent block.
- a block adjacent to the current block is called a neighboring block of the current block.
- the merge related information of the current block may include information indicating whether the prediction mode for the current block is the merge mode and information indicating which neighboring block, among the neighboring blocks adjacent to the current block, the current block is merged with.
- information indicating whether a prediction mode for a current block is a merge mode is called a merge flag
- information indicating which neighboring block among adjacent neighboring blocks is merged is called a merge index.
- the merge flag may be represented by merge_flag and the merge index may be represented by merge_index.
- Inter prediction in the merge mode may be performed, for example, in units of CUs, and in this case, the merge may be referred to as a CU merge.
- inter prediction in merge mode may be performed in units of PUs, and in this case, merge may be referred to as PU merge.
- the skip mode is a prediction mode in which transmission of a residual signal, which is a difference between a prediction block and a current block, is omitted.
- in the skip mode, the values of the residual signal between the prediction block and the current block may be zero. Therefore, the encoder may not transmit the residual signal to the decoder, and the decoder may generate the prediction block using only the motion information, without any residual signal.
- in the skip mode, the encoder may transmit motion information to the decoder. In this case, the motion information may be transmitted by designating one of the neighboring blocks adjacent to the current block, so that the motion vector information of that block is used for the current block.
- the direct mode is a prediction mode in which motion information is derived using a block in which encoding / decoding is completed among neighboring blocks adjacent to the current block.
- the encoder may not transmit the motion information itself to the decoder.
- FIG. 5 is a flowchart schematically illustrating an embodiment of an inter prediction method in a merge mode.
- the embodiment of FIG. 5 may be applied to an encoder and a decoder.
- the embodiment of FIG. 5 will be described based on the decoder for convenience.
- the decoder may select a merge candidate used for deriving motion information of the current block among merge candidates constituting the merge candidate list (S510).
- the decoder may select a merge candidate indicated by the merge index transmitted from the encoder as a merge candidate used for deriving motion information of the current block.
- Embodiments of the merge candidates included in the merge candidate list will be described later in the embodiments of FIGS. 6 to 8.
- the decoder may derive motion information of the current block by using the selected merge candidate (S520). For example, the decoder may use the motion information of the selected merge candidate as the motion information of the current block.
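A minimal sketch of step S520 under the stated rule (the candidate's motion information is reused as-is); the dictionary fields are illustrative, not the codec's actual data structures:

```python
# Sketch: in merge mode the current block copies the selected candidate's
# motion information (motion vector + reference picture index) unchanged.
# Field names "mv" and "ref_idx" are illustrative assumptions.

def derive_motion_info(merge_candidates, merge_index):
    return dict(merge_candidates[merge_index])   # copy, so the candidate is untouched

candidates = [
    {"mv": (3, -1), "ref_idx": 0},   # e.g. left neighboring block A
    {"mv": (0, 2),  "ref_idx": 1},   # e.g. top neighboring block B
]
motion = derive_motion_info(candidates, 1)       # merge index transmitted as 1
```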
- Two reference picture lists may be used for inter prediction, and the two reference picture lists may be referred to as reference picture list 0 and reference picture list 1, respectively.
- Inter prediction using a reference picture selected from the reference picture list 0 is called L0 prediction
- L0 prediction is mainly used for forward prediction
- Inter prediction using the reference picture selected from the reference picture list 1 is called L1 prediction
- L1 prediction is mainly used for backward prediction.
- inter prediction using two reference pictures selected from reference picture list 0 and reference picture list 1 is called bi prediction.
- the motion information used for L0 prediction may be referred to as L0 motion information
- the motion information used for L1 prediction may be referred to as L1 motion information
- the motion information used for bi prediction may be referred to as Bi motion information.
- the motion information of the selected merge candidate block may be L0 motion information, L1 motion information, or Bi motion information. Therefore, L0 motion information, L1 motion information, or Bi motion information of the merge candidate block may be used as the motion information of the current block.
- the decoder may generate a prediction block for the current block by using the derived motion information (S530).
- FIG. 6 is a conceptual diagram schematically illustrating an embodiment of merge candidates included in a merge candidate list.
- the motion information of the current block may be derived using the motion information of any one of the candidate blocks included in the merge candidate list.
- motion information of any one of the candidate blocks included in the merge candidate list may be used as the motion information of the current block.
- the residual signal may be transmitted together with the motion information, or the residual signal may not be transmitted when the pixel value of the prediction block is used as the pixel value of the current block.
- the left neighboring block A of the current block and the top neighboring block B of the current block may be used as merge candidates.
- in this case, the left neighboring block of the current block may be the topmost block among the blocks adjacent to the left of the current block, and the top neighboring block of the current block may be the leftmost block among the blocks adjacent to the top of the current block.
- the lower left corner block C and / or the upper right corner block D may be used as merge candidates included in the merge candidate list.
- the same position block col may be used as a merge candidate included in the merge candidate list.
- the co-located block (col) refers to the block at the same position as the current block among the blocks in a reference picture.
- FIG. 7 is a conceptual diagram schematically illustrating another embodiment of merge candidates included in a merge candidate list.
- the left neighboring block A of the current block and/or the top neighboring block B of the current block may be used as a merge candidate.
- in this case, the left neighboring block of the current block may be the topmost block among the blocks adjacent to the left of the current block, and the top neighboring block of the current block may be the leftmost block among the blocks adjacent to the top of the current block.
- the lower left corner block C-1, the upper right corner block C, and / or the upper left corner block C-2 may be used as merge candidates included in the merge candidate list.
- the same position block D may be used as a merge candidate included in the merge candidate list.
- a block B-1 selected from blocks adjacent to the top of the current block may be included as a merge candidate.
- here, the selected block may be an available block among the neighboring blocks adjacent to the top of the current block that has the same reference picture index as the current block.
- a block A-1 selected from blocks adjacent to the left of the current block may be included as a merge candidate.
- here, the selected block may be a valid block among the neighboring blocks adjacent to the left side of the current block that has the same reference picture index as the current block.
- FIG. 8 is a conceptual diagram schematically illustrating still another embodiment of merge candidates included in a merge candidate list.
- the merge candidate list may include a lower left corner block A0, an upper right corner block B0, and/or an upper left corner block B2 as merge candidates.
- the merge candidate list may include a left neighboring block A1 of the current block and/or a top neighboring block B1 of the current block as a merge candidate.
- in this case, the left neighboring block A1 may be the lowermost block among the blocks adjacent to the left side of the current block, and the top neighboring block B1 may be the rightmost block among the blocks adjacent to the top of the current block.
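The candidate positions of FIG. 8 can be sketched in pixel coordinates. The exact sample positions below are an assumption for illustration (real codecs define them on a sub-block grid), with (x, y) the top-left pixel of a w-by-h current block:

```python
# Illustrative pixel positions of the FIG. 8 spatial candidates relative to a
# w x h current block whose top-left pixel is (x, y). These coordinates are
# an assumption for illustration, not a normative derivation.

def spatial_candidate_positions(x, y, w, h):
    return {
        "A1": (x - 1, y + h - 1),  # lowermost block left of the current block
        "B1": (x + w - 1, y - 1),  # rightmost block above the current block
        "B0": (x + w, y - 1),      # upper right corner block
        "A0": (x - 1, y + h),      # lower left corner block
        "B2": (x - 1, y - 1),      # upper left corner block
    }

positions = spatial_candidate_positions(16, 16, 8, 8)
```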
- the same position block col may be included as a merge candidate in the merge candidate list.
- the method of selecting merge candidates included in the merge candidate list may be variously extended.
- the encoder and the decoder may configure merge candidate lists by selecting merge candidates according to the embodiments of FIGS. 6 to 8. In this case, when merge candidates are selected, the encoder and the decoder may exclude duplicate candidates and form a merge candidate list to reduce redundancy.
- the maximum number of merge candidates constituting the merge candidate list may be limited to a predetermined number or less.
- for example, assume that the maximum number of merge candidates is 4 and that merge candidates are added and/or inserted into the merge candidate list in the order {A, B, C, C-1, D, ...}.
- in this case, if the A, B, C, C-1, and D blocks are all valid, only the A, B, C, and C-1 blocks may be determined as merge candidates included in the merge candidate list. If the A, B, C-1, and D blocks are valid and the C block is invalid, only the A, B, C-1, and D blocks may be determined as merge candidates included in the merge candidate list.
- as another example, assume that the maximum number of merge candidates is 5 and that merge candidates are added and/or inserted into the merge candidate list in the order {A0, A1, B0, B1, B2, col}. In this case, if the A0, A1, B0, B1, B2, and col blocks are all valid, only the A0, A1, B0, B1, and B2 blocks may be determined as merge candidates included in the merge candidate list.
- as still another example, assume that the maximum number of spatial merge candidates selected within the current picture is limited to 4, and that the co-located block (col) selected within the reference picture can always be used as a merge candidate.
- also assume that the spatial merge candidates are added and/or inserted into the merge candidate list in the order A1, B1, B0, A0, B2.
- in this case, if the B1, B0, A0, and B2 blocks among the spatial merge candidates are valid and the A1 block is invalid, only the B1, B0, A0, and B2 blocks may be determined as spatial merge candidates included in the merge candidate list.
- therefore, the B1, B0, A0, and B2 blocks together with the co-located block col may be determined as merge candidates included in the merge candidate list.
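The list-construction rules above (fixed insertion order, validity check, duplicate removal, maximum list size) can be sketched as follows; the candidate names and motion-info tuples are placeholders:

```python
# Sketch of merge candidate list construction: candidates are added in a fixed
# order, invalid ones are skipped, duplicate motion information is excluded,
# and the list is capped at the maximum number of merge candidates.

def build_merge_list(ordered_candidates, available, max_candidates):
    merge_list = []
    for name, motion in ordered_candidates:
        if not available.get(name, False):
            continue                                  # skip invalid candidates
        if any(motion == m for _, m in merge_list):
            continue                                  # skip duplicate motion info
        merge_list.append((name, motion))
        if len(merge_list) == max_candidates:
            break                                     # list is full
    return [name for name, _ in merge_list]

order = [("A", (1, 0)), ("B", (0, 1)), ("C", (2, 2)), ("C-1", (3, 3)), ("D", (4, 4))]
# All five valid, maximum 4: only A, B, C, C-1 enter the list.
all_valid = build_merge_list(order, {n: True for n, _ in order}, 4)
# C invalid: A, B, C-1, D enter the list instead.
c_invalid = build_merge_list(order, {"A": True, "B": True, "C": False, "C-1": True, "D": True}, 4)
```

This reproduces the first worked example in the text: with {A, B, C, C-1, D} all valid and a cap of 4, the list is {A, B, C, C-1}; with C invalid it becomes {A, B, C-1, D}.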
- the merge information may include a merge flag, a merge index, and / or residual information.
- the encoder may transmit the generated merge information to the decoder.
- the encoder may generate merge information (S910).
- the merge information may include a merge flag.
- the merge flag may indicate whether the prediction mode for the current block is the merge mode.
- the merge flag may be represented by merge_flag.
- for example, the encoder may assign 1 to merge_flag if the prediction mode for the current block is the merge mode, and 0 to merge_flag otherwise.
- the merge information may also include a merge index.
- the merge index may indicate with which of the adjacent neighboring blocks the current block is merged.
- the merge index may be represented by merge_index. If the merge flag indicates that the prediction mode for the current block is not the merge mode, merge index information for the current block may not be generated.
- for example, when the merge candidate list consists of two merge candidates, the merge index may indicate one of the two merge candidates, and thus the merge index may be used as a flag having two values.
- in this case, the merge index may have only the values 0 and 1.
- however, when the number of merge candidates is greater than two, flag information that can take only two values cannot indicate with which of the merge candidate blocks the current block is merged. Therefore, in this case, the method of using the merge index may vary as described below.
- the number of merge candidates indicated by the merge index may be set differently according to the number of merge candidates constituting the merge candidate list.
- the number of merge candidates indicated by the merge index may be three. In this case, one of 0, 1, and 2 may be allocated to the merge index.
- the merge index may indicate a merge candidate used for deriving motion information of the current block among three merge candidates using the assigned value.
- the number of merge candidates indicated by the merge index may be two.
- one of 0 and 1 may be assigned to the merge index, and the merge index may indicate a merge candidate used for deriving motion information of the current block among two merge candidates using the assigned value.
- when the maximum number of merge candidates constituting the merge candidate list is limited to a predetermined number, the number of merge candidates indicated by the merge index may be set to that maximum number.
- the number of merge candidates indicated by the merge index may be four. In this case, one of 0, 1, 2, and 3 may be allocated to the merge index.
- the merge index may indicate a merge candidate used for deriving motion information of the current block among four merge candidates using the assigned value.
- the number of merge candidates indicated by the merge index may be set differently according to the number of valid merge candidates. For example, when the maximum number of merge candidates is limited to four and the number of valid merge candidates is two, the number of merge candidates that the merge index can indicate may be two.
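One consequence of adapting the index range to the number of valid candidates is that fewer bits are needed. A sketch assuming a fixed-length code (real codecs typically use truncated unary codes, so this is only an approximation):

```python
import math

# Sketch: bits needed for a fixed-length merge index as a function of the
# number of candidates the index can point to. An approximation only; this
# is not the entropy coding actually used by any particular standard.

def merge_index_bits(num_candidates):
    if num_candidates <= 1:
        return 0                      # a single candidate needs no index
    return math.ceil(math.log2(num_candidates))

bits_for_two = merge_index_bits(2)    # the index acts like a 1-bit flag
bits_for_four = merge_index_bits(4)
```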
- the merge information may also include residual information.
- the residual information may indicate whether a non-zero transform coefficient is included for each of the luma (Y) and chroma (U, V) component blocks in the merge mode.
- the residual information may be represented by a coded block flag (cbf).
- the residual information for the luma component may be represented by cbf_luma
- the residual information for the chroma component may be represented by cbf_chromaU and cbf_chromaV, respectively.
- overall residual information on a block on which inter prediction is performed in merge mode may be represented by, for example, merge_cbf.
- hereinafter, the overall residual information on a block on which inter prediction is performed in the merge mode is called merge residual information.
- in one embodiment, the merge residual information may be decoded before the merge flag at the decoder side.
- in this case, the encoder must generate and transmit the merge residual information regardless of the merge flag value.
- as a result, merge residual information may be generated and transmitted even when the prediction mode of the current block is not the merge mode, which may waste transmission bits.
- in another embodiment, the encoder may generate merge residual information (e.g., merge_cbf) only when the prediction mode of the current block is the merge mode, and transmit it to the decoder.
- in this case, the decoder may first decode the merge flag and then decode the merge residual information only when the prediction mode of the current block is the merge mode. Thus, unnecessary overhead can be reduced.
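The second scheme can be sketched as a parse-order rule; the read callback stands in for real entropy decoding and is an assumption:

```python
# Sketch of the merge-flag-first parse order: merge_cbf is read only when
# merge_flag is 1, so blocks outside merge mode carry no merge residual bits.

def decode_flag_first(read):
    merge_flag = read("merge_flag")
    merge_cbf = read("merge_cbf") if merge_flag else None
    return merge_flag, merge_cbf

# Toy "bitstream" for a block that is not coded in merge mode:
bitstream = {"merge_flag": 0, "merge_cbf": 1}
flag, cbf = decode_flag_first(lambda name: bitstream[name])
# merge_cbf is never consulted, so no merge residual bits are spent
```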
- the encoder may encode the generated merge information (S920). If the merge information is encoded, the encoder may transmit the encoded merge information to the decoder (S930).
- merge information including a merge flag, a merge index, and the like may be transmitted in units of coding units.
- merge information including a merge flag and a merge index may be transmitted in units of prediction units.
- the encoder may not transmit the merge residual information to the decoder.
- the decoder may receive merge information and perform inter prediction on the current block by using the received merge information.
- the decoder may receive merge information from an encoder (S1010).
- the merge information may include a merge flag, a merge index, and residual information.
- the merge flag may indicate whether the prediction mode for the current block is the merge mode, and the merge index may indicate with which of the adjacent neighboring blocks the current block is merged.
- the number of merge candidates indicated by the merge index may be set differently according to the number of merge candidates constituting the merge candidate list.
- when the maximum number of merge candidates constituting the merge candidate list is limited to a predetermined number or less, the number of merge candidates indicated by the merge index may be set to that maximum number.
- the residual information may indicate whether a non-zero transform coefficient is included for each of the luma (Y) and chroma (U, V) component blocks in the merge mode.
- the residual information may be represented by a coded block flag (cbf).
- Overall residual information on the block on which inter prediction is performed in the merge mode may be referred to as merge residual information. Specific embodiments of the merge residual information are the same as described above with reference to the embodiment of FIG. 9.
- the decoder may decode the received merge information (S1020).
- the encoder may generate merge residual information and transmit the merge residual information to the decoder regardless of the merge flag value.
- the decoder may decode the merge residual information before the merge flag.
- the encoder may generate merge residual information only when the prediction mode of the current block is the merge mode and transmit the merge residual information to the decoder.
- the decoder may first decode the merge flag and then decode the merge residual information only when the prediction mode of the current block is the merge mode. In this case, when the prediction mode of the current block is not the merge mode, merge residual information is not transmitted, and thus the amount of bits transmitted from the encoder to the decoder may be reduced.
- the decoder may perform inter prediction using the decoded merge information (S1030).
- the decoder may perform inter prediction in the merge mode.
- the decoder may select a merge candidate used for deriving motion information of the current block among merge candidates constituting the merge candidate list using the merge index.
- the decoder may derive the motion information of the current block from the selected merge candidate, and generate the prediction block using the derived motion information.
- the decoder may omit the decoding process for the residual signal.
- the merge mode, the skip mode, and/or the direct mode may be combined and/or integrated with each other as necessary.
- the merge mode described above is similar to the direct mode in that motion information of the current block is derived from a neighboring block adjacent to the current block, and in that a residual signal can be transmitted from the encoder to the decoder. Therefore, a method of integrating the merge mode and the direct mode into one may be considered.
- a mode in which the merge mode and the direct mode are integrated into one is called an integrated merge / direct mode.
- a method of integrating a skip mode and a merge mode may be considered.
- the same method as that used in the merge mode may be used to obtain motion information of the current block.
- the same neighboring blocks may be used as candidate blocks for deriving motion information in the skip mode and the merge mode.
- the motion information of the merge candidate block indicated by the merge index among the merge candidates included in the merge candidate list may be used as the motion information of the current block as it is.
- the skip mode in this case may also be called a merge skip mode.
- hereinafter, a mode in which the merge mode and the direct mode are integrated, a mode in which the merge mode and the skip mode are integrated, and a mode in which the merge mode, the skip mode, and the direct mode are all integrated are collectively referred to as a unified mode.
- candidate blocks used in the unified mode are called unified mode candidates.
- a list composed of unified mode candidates is called a unified mode candidate list.
- FIG. 11 is a flowchart schematically illustrating an embodiment of an inter prediction method in an integrated mode.
- the embodiment of FIG. 11 may be applied to an encoder and a decoder.
- the embodiment of FIG. 11 will be described based on the decoder for convenience.
- the decoder may select a unified mode candidate used for deriving motion information of a current block among unified mode candidates constituting the unified mode candidate list (S1110).
- the decoder may select a merge candidate indicated by the merge index transmitted from the encoder as a candidate block used for deriving motion information of the current block.
- Embodiments of the unified mode candidates included in the unified mode candidate list may be the same as those of the merge candidates shown in FIGS. 6 to 8.
- the decoder may derive motion information of the current block by using the selected unified mode candidate (S1120).
- when the integrated merge/direct mode is used, there may be an integrated merge/direct mode in which a residual signal is transmitted and an integrated merge/direct mode in which a residual signal is not transmitted.
- information on whether the residual signal is transmitted may be transmitted from the encoder to the decoder through a separate flag.
- the flag may be represented as residual_skip_flag or skip_flag.
- in the integrated merge/direct mode in which the residual signal is not transmitted, L0 motion information, L1 motion information, and/or Bi motion information may be derived as the motion information of the current block. That is, in the integrated merge/direct mode in which the residual signal is not transmitted, L0 prediction, L1 prediction, and/or bi prediction may be performed.
- in this case, the encoder and the decoder may determine the prediction type and/or prediction direction applied to the current block according to the type of motion information (e.g., L0 motion information, L1 motion information, Bi motion information) of the selected unified mode candidate.
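A sketch of that determination, assuming a simplified motion-information record in which an absent L0 or L1 field means the corresponding reference list is unused:

```python
# Sketch: derive the prediction direction from which motion fields the
# selected unified mode candidate carries. The record layout is an
# assumption for illustration.

def prediction_type(motion_info):
    has_l0 = motion_info.get("l0") is not None
    has_l1 = motion_info.get("l1") is not None
    if has_l0 and has_l1:
        return "bi"                   # Bi motion information -> bi prediction
    return "L0" if has_l0 else "L1"

uni = prediction_type({"l0": (1, 2)})                 # L0 motion information only
both = prediction_type({"l0": (1, 2), "l1": (0, 0)})  # Bi motion information
```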
- the encoder may transmit information about which prediction among L0 prediction, L1 prediction, and pair prediction is performed to the decoder.
- the decoder may determine the prediction type and/or prediction direction applied to the current block by using the information transmitted from the encoder.
- in the skip mode, the residual signal is not transmitted. In this case, only bi prediction may always be performed on a block to which the skip mode is applied.
- the encoder may generate a prediction block for the current block by using the derived motion information (S1130).
- FIG. 12 is a conceptual diagram schematically illustrating an embodiment of a method for transmitting unified mode information in an encoder.
- the encoder may generate integrated mode information (S1210).
- the unified mode information may include information indicating whether the prediction mode for the current block corresponds to the unified mode.
- hereinafter, the information indicating whether the prediction mode for the current block corresponds to the unified mode is called a unified mode flag.
- the unified mode information may also include residual information.
- as described above, when the integrated merge/direct mode is used, there may be an integrated merge/direct mode in which a residual signal is transmitted and an integrated merge/direct mode in which a residual signal is not transmitted.
- in this case, the residual information may correspond to information on whether the residual signal is transmitted.
- for example, whether a residual signal is transmitted may be indicated by a predetermined flag, which may be represented by residual_skip_flag or skip_flag.
- when residual_skip_flag or skip_flag is 1, the luma component block and the chroma component block may not include non-zero transform coefficients. That is, there may be no residual signal transmitted from the encoder to the decoder for the luma component and the chroma component.
- the residual information on the luma component may be represented by cbf_luma
- the residual information on the chroma component may be represented by cbf_chromaU and cbf_chromaV, respectively. If residual_skip_flag or skip_flag is 1, the encoder may not transmit the residual signal to the decoder.
- the decoding process of the residual signal may be omitted in the decoder.
- in this case, the decoder may estimate and/or assume that there is no residual signal for the current block, and the residual value for the luma component (e.g., cbf_luma) and the residual values for the chroma components (e.g., cbf_chromaU and cbf_chromaV) may all be derived as zero.
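The inference rule can be sketched directly; the flag and field names follow the text, but the dictionary shape is an assumption:

```python
# Sketch: when residual_skip_flag (or skip_flag) is 1, no cbf syntax is
# parsed; all coded block flags are derived as 0 instead.

def derive_cbf(residual_skip_flag, parsed_cbf=None):
    if residual_skip_flag:
        return {"cbf_luma": 0, "cbf_chromaU": 0, "cbf_chromaV": 0}
    return parsed_cbf                 # otherwise use the values parsed from the stream

skipped = derive_cbf(1)
parsed = derive_cbf(0, {"cbf_luma": 1, "cbf_chromaU": 0, "cbf_chromaV": 1})
```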
- in one embodiment, the residual information may be decoded before the unified mode flag at the decoder side.
- in this case, since the encoder must generate and transmit the residual information regardless of the value of the unified mode flag, transmission bits may be wasted.
- the encoder may generate residual information only when the prediction mode of the current block corresponds to the integrated mode and transmit the residual information to the decoder.
- the decoder may first decode the unified mode flag and then decode the unified mode residual information only when the prediction mode of the current block corresponds to the unified mode.
- as another embodiment, a method of deriving the residual information of the current block by referring to information of the neighboring blocks adjacent to the current block may be used. For example, when the residual_skip_flag values of the unified mode candidate blocks constituting the unified mode candidate list are all 1, or when the residual_skip_flag values of a plurality of the unified mode candidate blocks are 1, the residual_skip_flag of the current block may also be derived as 1. When the residual information of the current block is derived by referring to information of neighboring blocks, the encoder may not generate and transmit residual information for the current block.
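The neighbor-based derivation can be sketched as a threshold rule; the threshold parameter generalizes the "all are 1" / "a plurality are 1" conditions and is a hypothetical choice, not specified by the text:

```python
# Sketch: derive the current block's residual_skip_flag from the flags of the
# unified mode candidate blocks. `threshold` models the "all are 1" /
# "a plurality are 1" conditions and is a hypothetical parameter.

def infer_residual_skip_flag(neighbor_flags, threshold):
    return 1 if sum(neighbor_flags) >= threshold else 0

all_one = infer_residual_skip_flag([1, 1, 1, 1, 1], threshold=5)   # every candidate skips
few_one = infer_residual_skip_flag([0, 0, 1, 0, 0], threshold=3)   # too few candidates skip
```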
- the encoder may generate information about whether the prediction mode of the current block is the skip mode.
- the residual information included in the unified mode information may correspond to information about whether the prediction mode of the current block is the skip mode.
- in this case, residual_skip_flag or skip_flag may correspond to the information on whether the prediction mode of the current block is the skip mode.
- when residual_skip_flag or skip_flag is 1, it may be inferred that there is no residual signal transmitted from the encoder to the decoder, and the decoder may omit the decoding process for the residual signal.
- the above-described embodiments for the integrated merge/direct mode may also be applied when the merge skip mode is used and/or when the merge mode, the skip mode, and the direct mode are all used together, as needed.
- the encoder may encode the generated unified mode information (S1220). If the unified mode information is encoded, the encoder may transmit the encoded unified mode information to the decoder (S1230).
- the decoder may receive the unified mode information and perform inter prediction on the current block by using the received unified mode information.
- the decoder may receive unified mode information from an encoder (S1310).
- the unified mode information may include an unified mode flag, residual information, and the like.
- the unified mode flag may indicate whether the prediction mode for the current block corresponds to the unified mode.
- as described above, in the integrated merge/direct mode there may be an integrated merge/direct mode in which the residual signal is transmitted and an integrated merge/direct mode in which the residual signal is not transmitted.
- the residual information may correspond to information on whether the residual signal is transmitted.
- the encoder may generate information about whether the prediction mode of the current block is the skip mode.
- the residual information included in the unified mode information may correspond to information about whether the prediction mode of the current block is the skip mode.
- the decoder may decode the received unified mode information (S1320).
- the decoder may decode the flag information, and may determine, based on the decoded flag information, whether the mode is an integrated merge/direct mode in which the residual signal is transmitted or an integrated merge/direct mode in which the residual signal is not transmitted.
- the integrated merge/direct mode in which the residual exists and the integrated merge/direct mode in which the residual does not exist may be treated the same, except for the decoding process that depends on the existence of the residual.
- for example, the unified mode candidates constituting the unified mode candidate list in the integrated merge/direct mode in which the residual does not exist may be the same as in the integrated merge/direct mode in which the residual exists.
- the decoder can use the same unified mode candidate for motion derivation regardless of the presence of residual.
- the embodiment of the unified mode candidates included in the unified mode candidate list may be the same as the embodiment of the merge candidates shown in FIGS. 6 to 8.
- in this case, the decoder may omit the process of decoding the residual signal. For example, when residual_skip_flag or skip_flag is 1, the decoder may skip the decoding process of the residual signal by inferring and/or deeming that the residual signal does not exist. When residual_skip_flag or skip_flag is 1, the decoder may also derive both the residual value for the luma component (e.g., cbf_luma) and the residual values for the chroma components (e.g., cbf_chromaU, cbf_chromaV) as zero, and may then omit the decoding of the residual signal.
- the encoder may generate residual information and transmit the residual information regardless of the value of the unified mode flag.
- the decoder may decode the residual information before the unified mode flag.
- the encoder may generate residual information only when the prediction mode of the current block corresponds to the integrated mode and transmit the residual information to the decoder.
- the decoder may first decode the unified mode flag and then decode the unified mode residual information only when the prediction mode of the current block corresponds to the unified mode.
- the decoder may derive the residual information of the current block by referring to the information of the neighboring block. If the residual information of the current block can be derived by referring to the information of the neighboring block, the encoder may not generate and transmit the residual information of the current block. Therefore, the amount of information transmitted from the encoder to the decoder can be reduced.
- An embodiment in which residual information of the current block is derived by referring to information of a neighboring block is as described with reference to FIG. 12.
- when the merge mode, the skip mode, and the direct mode are all used together, for example, the information on whether the prediction mode of the current block is the skip mode may be represented by residual_skip_flag or skip_flag.
- when residual_skip_flag or skip_flag is 1, the decoder may infer that there is no residual signal transmitted from the encoder, and the decoding process for the residual signal may be omitted.
- the above-described embodiments for the integrated merge/direct mode may also be applied when the merge skip mode is used and/or when the merge mode, the skip mode, and the direct mode are all used together, as needed.
- the decoder may perform inter prediction using the decoded unified mode information (S1330).
- FIGS. 11 to 13 described above are presented in terms of the unified mode, but may be applied to prediction modes other than the unified mode as needed; for example, they may also be applied to the merge mode.
Claims (14)
- 1. A method for transmitting image information, the method comprising: performing inter prediction on a current block; encoding mode information on the inter prediction of the current block; and transmitting the encoded mode information, wherein the mode information includes residual flag information indicating whether a residual signal exists for the current block and merge flag information indicating whether a merge mode is applied to the current block.
- 2. An inter prediction method comprising: receiving mode information on inter prediction of a current block; decoding the received mode information; and performing inter prediction on the current block based on the decoded mode information, wherein the mode information includes residual flag information indicating whether a residual signal exists for the current block and merge flag information indicating whether a prediction mode for the current block is a merge mode.
- 3. The inter prediction method of claim 2, wherein the performing of the inter prediction further comprises: selecting, using the decoded mode information, a block to be used for deriving motion information of the current block from among a plurality of candidate blocks constituting a candidate block list; and deriving the motion information of the current block using the selected block, wherein the candidate block list is constructed identically regardless of whether a residual signal exists for the current block.
- 4. The inter prediction method of claim 3, wherein the candidate block list consists of a left neighboring block adjacent to the current block, a top neighboring block adjacent to the current block, a top-right corner block, a top-left corner block, and a bottom-left corner block of the current block, and a co-located block for the current block.
- 5. The inter prediction method of claim 4, wherein the candidate block list consists of the lowermost block among the neighboring blocks adjacent to the left of the current block, the rightmost block among the neighboring blocks adjacent to the top of the current block, the top-right corner block, the top-left corner block, and the bottom-left corner block of the current block, and the co-located block for the current block.
- 6. The inter prediction method of claim 3, wherein the derived motion information is one of L0 motion information, L1 motion information, and Bi motion information.
- 7. The inter prediction method of claim 2, wherein, when no residual signal exists for the current block, the decoding of the mode information derives a residual value for a luma component and a residual value for a chroma component as 0.
- 8. The inter prediction method of claim 2, wherein the decoding of the mode information decodes the residual flag information before the merge flag information.
- 9. The inter prediction method of claim 2, wherein the decoding of the mode information decodes the merge flag information before the residual flag information, and decodes the residual flag information only when the decoded merge flag information indicates that the prediction mode for the current block is the merge mode.
- 10. An image decoding method comprising: receiving mode information on inter prediction of a current block; decoding the received mode information; performing inter prediction on the current block based on the decoded mode information to generate a prediction block; and generating a reconstructed block using the generated prediction block, wherein the mode information includes residual flag information indicating whether a residual signal exists for the current block and merge flag information indicating whether a prediction mode for the current block is a merge mode.
- 11. The image decoding method of claim 10, wherein the performing of the inter prediction further comprises: selecting, using the decoded mode information, a block to be used for deriving motion information of the current block from among a plurality of candidate blocks constituting a candidate block list; and deriving the motion information of the current block using the selected block, wherein the candidate block list is constructed identically regardless of whether a residual signal exists for the current block.
- 12. The image decoding method of claim 11, wherein the candidate block list consists of a left neighboring block adjacent to the current block, a top neighboring block adjacent to the current block, a top-right corner block, a top-left corner block, and a bottom-left corner block of the current block, and a co-located block for the current block.
- 13. The image decoding method of claim 12, wherein the candidate block list consists of the lowermost block among the neighboring blocks adjacent to the left of the current block, the rightmost block among the neighboring blocks adjacent to the top of the current block, the top-right corner block, the top-left corner block, and the bottom-left corner block of the current block, and the co-located block for the current block.
- 14. The image decoding method of claim 10, wherein the decoding of the mode information decodes the residual flag information before the merge flag information.
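The candidate-list invariant in the claims (the list of candidate blocks is built the same way whether or not a residual signal exists) can be sketched as follows. This is a hedged illustration, not the patented method itself: the dictionary keys are hypothetical labels for the neighboring-block positions the claims enumerate.

```python
def build_candidate_list(block, has_residual):
    """Build the merge candidate block list for `block`.

    Per the claims, the list is constructed identically regardless of
    whether a residual signal exists, so `has_residual` is deliberately
    unused. Positions follow the corner/neighbor blocks the claims name.
    """
    return [
        block["left"],         # left neighboring block
        block["top"],          # top neighboring block
        block["top_right"],    # top-right corner block
        block["top_left"],     # top-left corner block
        block["bottom_left"],  # bottom-left corner block
        block["co_located"],   # co-located block in the reference picture
    ]


# Hypothetical position labels for one current block.
blk = {"left": "A1", "top": "B1", "top_right": "B0",
       "top_left": "B2", "bottom_left": "A0", "co_located": "COL"}

# The same list results with and without a residual signal.
assert build_candidate_list(blk, True) == build_candidate_list(blk, False)
print(build_candidate_list(blk, True))  # → ['A1', 'B1', 'B0', 'B2', 'A0', 'COL']
```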
Priority Applications (13)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020137000268A KR101503611B1 (ko) | 2010-11-23 | 2011-11-22 | 영상 부호화 및 복호화 방법과 이를 이용한 장치 |
EP11842773.1A EP2645720A4 (en) | 2010-11-23 | 2011-11-22 | METHOD FOR ENCODING AND DECODING IMAGES, AND DEVICE USING THE SAME |
US13/814,745 US9172956B2 (en) | 2010-11-23 | 2011-11-22 | Encoding and decoding images using inter-prediction |
KR1020147007387A KR101425772B1 (ko) | 2010-11-23 | 2011-11-22 | 영상 부호화 및 복호화 방법과 이를 이용한 장치 |
CN201180041402.0A CN103081475B (zh) | 2010-11-23 | 2011-11-22 | 编码和解码图像的方法及使用该方法的设备 |
US14/848,485 US9369729B2 (en) | 2010-11-23 | 2015-09-09 | Method for encoding and decoding images, and device using same |
US15/148,689 US9621911B2 (en) | 2010-11-23 | 2016-05-06 | Method for encoding and decoding images, and device using same |
US15/442,257 US9800888B2 (en) | 2010-11-23 | 2017-02-24 | Method for encoding and decoding images, and device using same |
US15/721,135 US10148975B2 (en) | 2010-11-23 | 2017-09-29 | Method for encoding and decoding images, and device using same |
US16/176,715 US10440381B2 (en) | 2010-11-23 | 2018-10-31 | Method for encoding and decoding images, and device using same |
US16/552,680 US10757436B2 (en) | 2010-11-23 | 2019-08-27 | Method for encoding and decoding images, and device using same |
US16/930,127 US11234013B2 (en) | 2010-11-23 | 2020-07-15 | Method for encoding and decoding images, and device using same |
US17/551,546 US11627332B2 (en) | 2010-11-23 | 2021-12-15 | Method for encoding and decoding images, and device using same |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US41630210P | 2010-11-23 | 2010-11-23 | |
US61/416,302 | 2010-11-23 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/814,745 A-371-Of-International US9172956B2 (en) | 2010-11-23 | 2011-11-22 | Encoding and decoding images using inter-prediction |
US14/848,485 Continuation US9369729B2 (en) | 2010-11-23 | 2015-09-09 | Method for encoding and decoding images, and device using same |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2012070827A2 true WO2012070827A2 (ko) | 2012-05-31 |
WO2012070827A3 WO2012070827A3 (ko) | 2012-07-19 |
Family
ID=46146270
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2011/008898 WO2012070827A2 (ko) | 2010-11-23 | 2011-11-22 | 영상 부호화 및 복호화 방법과 이를 이용한 장치 |
Country Status (5)
Country | Link |
---|---|
US (9) | US9172956B2 (ko) |
EP (1) | EP2645720A4 (ko) |
KR (2) | KR101425772B1 (ko) |
CN (6) | CN105847831B (ko) |
WO (1) | WO2012070827A2 (ko) |
Families Citing this family (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012209911A (ja) * | 2010-12-20 | 2012-10-25 | Sony Corp | 画像処理装置および方法 |
KR101888515B1 (ko) * | 2011-01-13 | 2018-08-14 | 캐논 가부시끼가이샤 | 화상 부호화장치, 화상 부호화방법, 화상복호장치, 화상복호방법 및 기억매체 |
JP5982734B2 (ja) | 2011-03-11 | 2016-08-31 | ソニー株式会社 | 画像処理装置および方法 |
CN106791834B (zh) * | 2011-06-14 | 2020-07-10 | 三星电子株式会社 | 对图像进行解码的方法和设备 |
KR20130004173A (ko) | 2011-07-01 | 2013-01-09 | 한국항공대학교산학협력단 | 비디오 부호화 방법 및 복호화 방법과 이를 이용한 장치 |
WO2013005966A2 (ko) * | 2011-07-01 | 2013-01-10 | 한국전자통신연구원 | 비디오 부호화 방법 및 복호화 방법과 이를 이용한 장치 |
CN104378637B (zh) * | 2011-10-18 | 2017-11-21 | 株式会社Kt | 视频信号解码方法 |
KR102134367B1 (ko) * | 2012-09-10 | 2020-07-15 | 선 페이턴트 트러스트 | 화상 부호화 방법, 화상 복호화 방법, 화상 부호화 장치, 화상 복호화 장치, 및 화상 부호화 복호화 장치 |
EP3013052A4 (en) | 2013-07-12 | 2017-02-01 | Samsung Electronics Co., Ltd. | Method and apparatus for inter-layer encoding and method and apparatus for inter-layer decoding video using residual prediction |
EP3058726A1 (en) * | 2013-10-16 | 2016-08-24 | Huawei Technologies Co., Ltd. | A method for determining a corner video part of a partition of a video coding block |
KR102329126B1 (ko) | 2014-03-14 | 2021-11-19 | 삼성전자주식회사 | 인터 레이어 비디오의 복호화 및 부호화를 위한 머지 후보 리스트 구성 방법 및 장치 |
US10432928B2 (en) | 2014-03-21 | 2019-10-01 | Qualcomm Incorporated | Using a current picture as a reference for video coding |
US20150271491A1 (en) * | 2014-03-24 | 2015-09-24 | Ati Technologies Ulc | Enhanced intra prediction mode selection for use in video transcoding |
US9860548B2 (en) * | 2014-05-23 | 2018-01-02 | Hfi Innovation Inc. | Method and apparatus for palette table prediction and signaling |
US10412387B2 (en) | 2014-08-22 | 2019-09-10 | Qualcomm Incorporated | Unified intra-block copy and inter-prediction |
KR101895429B1 (ko) * | 2014-10-07 | 2018-09-05 | 삼성전자주식회사 | 뷰 병합 예측을 이용하여 영상을 부호화 또는 복호화 하는 방법 및 그 장치 |
US9918105B2 (en) * | 2014-10-07 | 2018-03-13 | Qualcomm Incorporated | Intra BC and inter unification |
WO2016178485A1 (ko) * | 2015-05-05 | 2016-11-10 | 엘지전자 주식회사 | 영상 코딩 시스템에서 코딩 유닛 처리 방법 및 장치 |
KR20170019544A (ko) | 2015-08-11 | 2017-02-22 | 삼성디스플레이 주식회사 | 곡면 액정표시장치 및 이의 제조방법 |
CN108965871B (zh) | 2015-09-29 | 2023-11-10 | 华为技术有限公司 | 图像预测的方法及装置 |
KR20180021941A (ko) * | 2016-08-22 | 2018-03-06 | 광운대학교 산학협력단 | 부호화 유닛들의 병합을 사용하는 비디오 부호화 방법 및 장치, 그리고 비디오 복호화 방법 및 장치 |
US10694202B2 (en) * | 2016-12-01 | 2020-06-23 | Qualcomm Incorporated | Indication of bilateral filter usage in video coding |
CN117395395A (zh) * | 2017-03-22 | 2024-01-12 | 韩国电子通信研究院 | 使用参考块的预测方法和装置 |
CN107426573B (zh) * | 2017-08-08 | 2020-11-06 | 鄂尔多斯应用技术学院 | 基于运动同质性的自适应快速预测单元划分方法及装置 |
US20200221077A1 (en) * | 2017-09-05 | 2020-07-09 | Lg Electronics Inc. | Inter prediction mode-based image processing method and apparatus therefor |
EP3700208A4 (en) | 2017-10-18 | 2021-04-07 | Electronics and Telecommunications Research Institute | IMAGE ENCODING / DECODING METHOD AND DEVICE AND RECORDING MEDIUM WITH BITSTREAM STORED ON IT |
CN115022631A (zh) * | 2018-01-05 | 2022-09-06 | Sk电信有限公司 | 对视频进行编码或解码的方法和非暂时性计算机可读介质 |
US20190313107A1 (en) * | 2018-03-15 | 2019-10-10 | University-Industry Cooperation Group Of Kyung Hee University | Image encoding/decoding method and apparatus |
CN118200551A (zh) | 2018-03-19 | 2024-06-14 | 株式会社Kt | 对图像进行解码或编码的方法以及非暂态计算机可读介质 |
WO2020060312A1 (ko) * | 2018-09-20 | 2020-03-26 | 엘지전자 주식회사 | 영상 신호를 처리하기 위한 방법 및 장치 |
WO2020114509A1 (zh) * | 2018-12-07 | 2020-06-11 | 华为技术有限公司 | 视频图像解码、编码方法及装置 |
CN111294601A (zh) | 2018-12-07 | 2020-06-16 | 华为技术有限公司 | 视频图像解码、编码方法及装置 |
US11234007B2 (en) | 2019-01-05 | 2022-01-25 | Tencent America LLC | Method and apparatus for video coding |
EP3954117A4 (en) * | 2019-05-25 | 2022-06-08 | Beijing Bytedance Network Technology Co., Ltd. | ENCODING OF BLOCK VECTORS FOR INTRABLOCKCOPY ENCODED BLOCKS |
CN113497936A (zh) * | 2020-04-08 | 2021-10-12 | Oppo广东移动通信有限公司 | 编码方法、解码方法、编码器、解码器以及存储介质 |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE19951341B4 (de) | 1999-10-25 | 2012-02-16 | Robert Bosch Gmbh | Verfahren zur bewegungskompensierenden Prädiktion von Bewegtbildern sowie Einrichtung hierzu |
CN1310519C (zh) * | 2001-09-18 | 2007-04-11 | 皇家飞利浦电子股份有限公司 | 视频编码和解码方法以及相应信号 |
US7003035B2 (en) | 2002-01-25 | 2006-02-21 | Microsoft Corporation | Video coding methods and apparatuses |
US7589788B1 (en) * | 2003-02-28 | 2009-09-15 | Intel Corporation | Method and apparatus for video motion compensation, reduction and color formatting |
KR100612015B1 (ko) | 2004-07-22 | 2006-08-11 | 삼성전자주식회사 | 컨텍스트 적응형 이진 산술 부호화 방법 및 그 장치 |
US7961783B2 (en) * | 2005-07-07 | 2011-06-14 | Mediatek Incorporation | Methods and systems for rate control in video encoder |
JP2007180808A (ja) * | 2005-12-27 | 2007-07-12 | Toshiba Corp | 映像符号化装置、映像復号化装置、及び映像符号化方法 |
WO2007114368A1 (ja) | 2006-03-30 | 2007-10-11 | Kabushiki Kaisha Toshiba | 画像符号化装置及び方法並びに画像復号化装置及び方法 |
KR100934673B1 (ko) | 2006-03-30 | 2009-12-31 | 엘지전자 주식회사 | 비디오 신호를 디코딩/인코딩하기 위한 방법 및 장치 |
WO2007116551A1 (ja) * | 2006-03-30 | 2007-10-18 | Kabushiki Kaisha Toshiba | 画像符号化装置及び画像符号化方法並びに画像復号化装置及び画像復号化方法 |
DK2011234T3 (da) | 2006-04-27 | 2011-03-14 | Dolby Lab Licensing Corp | Audioforstærkningskontrol anvendende specifik-lydstyrke-baseret auditiv hændelsesdetektering |
US9319708B2 (en) * | 2006-06-16 | 2016-04-19 | Via Technologies, Inc. | Systems and methods of improved motion estimation using a graphics processing unit |
WO2008023968A1 (en) | 2006-08-25 | 2008-02-28 | Lg Electronics Inc | A method and apparatus for decoding/encoding a video signal |
KR20080022055A (ko) * | 2006-09-05 | 2008-03-10 | 엘지전자 주식회사 | 비디오 신호 디코딩 방법 및, 비디오 신호 디코딩 장치 |
US7655986B2 (en) | 2006-12-21 | 2010-02-02 | Intel Corporation | Systems and methods for reducing contact to gate shorts |
CN101227601B (zh) * | 2007-01-15 | 2011-09-14 | 飞思卡尔半导体公司 | 在视频再现中进行几何变换的方法和设备 |
CN101617537A (zh) * | 2007-01-17 | 2009-12-30 | Lg电子株式会社 | 用于处理视频信号的方法和装置 |
JP4788649B2 (ja) * | 2007-04-27 | 2011-10-05 | 株式会社日立製作所 | 動画像記録方法及びその装置 |
US8605786B2 (en) * | 2007-09-04 | 2013-12-10 | The Regents Of The University Of California | Hierarchical motion vector processing method, software and devices |
KR102037328B1 (ko) | 2007-10-16 | 2019-10-28 | 엘지전자 주식회사 | 비디오 신호 처리 방법 및 장치 |
US20090154567A1 (en) * | 2007-12-13 | 2009-06-18 | Shaw-Min Lei | In-loop fidelity enhancement for video compression |
JP5014195B2 (ja) * | 2008-02-25 | 2012-08-29 | キヤノン株式会社 | 撮像装置及びその制御方法及びプログラム |
EP2152009A1 (en) * | 2008-08-06 | 2010-02-10 | Thomson Licensing | Method for predicting a lost or damaged block of an enhanced spatial layer frame and SVC-decoder adapted therefore |
DK2559240T3 (da) * | 2010-04-13 | 2019-10-07 | Ge Video Compression Llc | Interplansprædiktion |
CN101835044B (zh) * | 2010-04-23 | 2012-04-11 | 南京邮电大学 | 一种频率域分布式视频编码中的分类组合方法 |
RS57809B1 (sr) * | 2010-07-09 | 2018-12-31 | Samsung Electronics Co Ltd | Metod za dekodiranje video zapisa korišćenjem objedinjavanja blokova |
EP4322530A1 (en) * | 2010-09-02 | 2024-02-14 | LG Electronics Inc. | Inter prediction method and device |
CN107525859B (zh) | 2017-07-26 | 2020-04-10 | 中国人民解放军第二军医大学 | 一种筛选保健品中非法添加化合物衍生物快速检测条件的方法 |
2011
- 2011-11-22 EP EP11842773.1A patent/EP2645720A4/en not_active Ceased
- 2011-11-22 CN CN201610184661.3A patent/CN105847831B/zh active Active
- 2011-11-22 US US13/814,745 patent/US9172956B2/en active Active
- 2011-11-22 CN CN201180041402.0A patent/CN103081475B/zh active Active
- 2011-11-22 CN CN201610181815.3A patent/CN105791856B/zh active Active
- 2011-11-22 KR KR1020147007387A patent/KR101425772B1/ko active IP Right Grant
- 2011-11-22 KR KR1020137000268A patent/KR101503611B1/ko active IP Right Grant
- 2011-11-22 CN CN201610183427.9A patent/CN105847830B/zh active Active
- 2011-11-22 CN CN201610183319.1A patent/CN105847829B/zh active Active
- 2011-11-22 WO PCT/KR2011/008898 patent/WO2012070827A2/ko active Application Filing
- 2011-11-22 CN CN201610181831.2A patent/CN105812817B/zh active Active
2015
- 2015-09-09 US US14/848,485 patent/US9369729B2/en active Active
2016
- 2016-05-06 US US15/148,689 patent/US9621911B2/en active Active
2017
- 2017-02-24 US US15/442,257 patent/US9800888B2/en active Active
- 2017-09-29 US US15/721,135 patent/US10148975B2/en active Active
2018
- 2018-10-31 US US16/176,715 patent/US10440381B2/en active Active
2019
- 2019-08-27 US US16/552,680 patent/US10757436B2/en active Active
2020
- 2020-07-15 US US16/930,127 patent/US11234013B2/en active Active
2021
- 2021-12-15 US US17/551,546 patent/US11627332B2/en active Active
Non-Patent Citations (1)
Title |
---|
None |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104737538A (zh) * | 2012-09-14 | 2015-06-24 | 高通股份有限公司 | 执行量化以促进解块滤波 |
CN110545422A (zh) * | 2012-10-12 | 2019-12-06 | 韩国电子通信研究院 | 图像编码/解码方法和使用其的装置 |
CN110545421A (zh) * | 2012-10-12 | 2019-12-06 | 韩国电子通信研究院 | 图像编码/解码方法和使用其的装置 |
CN110545420A (zh) * | 2012-10-12 | 2019-12-06 | 韩国电子通信研究院 | 图像编码/解码方法和使用其的装置 |
CN110545422B (zh) * | 2012-10-12 | 2022-11-22 | 韩国电子通信研究院 | 图像编码/解码方法和使用其的装置 |
CN110545421B (zh) * | 2012-10-12 | 2022-11-22 | 韩国电子通信研究院 | 图像编码/解码方法和使用其的装置 |
US11743491B2 (en) | 2012-10-12 | 2023-08-29 | Electronics And Telecommunications Research Institute | Image encoding/decoding method and device using same |
CN109417617A (zh) * | 2016-06-22 | 2019-03-01 | 韩国电子通信研究院 | 帧内预测方法和装置 |
CN109417617B (zh) * | 2016-06-22 | 2023-10-20 | Lx 半导体科技有限公司 | 帧内预测方法和装置 |
WO2020186882A1 (zh) * | 2019-03-18 | 2020-09-24 | 华为技术有限公司 | 基于三角预测单元模式的处理方法及装置 |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2012070827A2 (ko) | 영상 부호화 및 복호화 방법과 이를 이용한 장치 | |
US11924441B2 (en) | Method and device for intra prediction | |
WO2012173315A1 (ko) | 인트라 예측 모드 부호화/복호화 방법 및 장치 | |
KR102485470B1 (ko) | 변환에 기반한 영상 코딩 방법 및 그 장치 | |
WO2013036071A2 (ko) | 인터 예측 방법 및 그 장치 | |
WO2012138032A1 (ko) | 영상 정보 부호화 방법 및 복호화 방법 | |
WO2012091519A1 (ko) | 영상 정보 부호화 방법 및 복호화 방법과 이를 이용한 장치 | |
KR102546142B1 (ko) | 비디오 코딩 시스템에서 블록 구조 도출 방법 및 장치 | |
WO2012128453A1 (ko) | 영상 부호화/복호화 방법 및 장치 | |
WO2013055040A1 (ko) | 인트라 예측 모드에 따른 변환 및 역변환 방법과 이를 이용한 부호화 및 복호화 장치 | |
WO2012086963A1 (ko) | 영상 부호화 및 복호화 방법과 이를 이용한 장치 | |
KR20120100839A (ko) | 엔트로피 부호화/복호화 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180041402.0 Country of ref document: CN |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11842773 Country of ref document: EP Kind code of ref document: A2 |
ENP | Entry into the national phase |
Ref document number: 20137000268 Country of ref document: KR Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 13814745 Country of ref document: US |
WWE | Wipo information: entry into national phase |
Ref document number: 2011842773 Country of ref document: EP |
NENP | Non-entry into the national phase |
Ref country code: DE |