WO2012018197A2 - Intra prediction decoding apparatus - Google Patents
Intra prediction decoding apparatus
- Publication number
- WO2012018197A2 (PCT/KR2011/005590)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mode
- prediction
- intra prediction
- unit
- block
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/12—Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
- H04N19/122—Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
Definitions
- the present invention relates to an intra prediction decoding apparatus, and more particularly, to an apparatus for generating a reconstruction block by reconstructing an intra prediction mode and adaptively decoding a prediction block and a residual block accordingly.
- a picture is divided into macro blocks to encode an image.
- Each macro block is encoded using inter prediction or intra prediction.
- intra prediction does not refer to a reference picture in order to encode a block of a current picture, but encodes using a pixel value spatially adjacent to the current block to be encoded.
- an intra prediction mode with less distortion compared to the original macro block is selected using adjacent pixel values.
- the prediction value for the current block to be encoded is calculated using the selected intra prediction mode and the adjacent pixel values, the difference between the prediction value and the pixel values of the original current block is obtained, and the difference is then encoded through transform coding, quantization, and entropy coding.
- the prediction mode is also encoded.
- Intra prediction modes are classified into 4×4 intra prediction modes of the luminance component, 8×8 intra prediction modes, 16×16 intra prediction modes, and intra prediction modes of the chrominance components.
- In the 16×16 intra prediction mode, there are a total of four modes: a vertical mode, a horizontal mode, a direct current (DC) mode, and a plane mode.
- In the 4×4 intra prediction mode, there are a total of nine modes: a vertical mode, a horizontal mode, a direct current (DC) mode, a diagonal down-left mode, a diagonal down-right mode, a vertical-right mode, a vertical-left mode, a horizontal-up mode, and a horizontal-down mode.
- the prediction mode numbers indexed to each mode are numbers determined according to the frequency with which each mode is used.
- Vertical mode (mode number 0) is statistically the mode most frequently used to perform intra prediction on a target block, and the mode with mode number 8 is the least used.
- Accordingly, the current block is encoded using the 13 modes, that is, the 4×4 intra prediction modes and the 16×16 intra prediction modes, and a bitstream of the current block is generated according to the optimal mode among them.
- An object of the present invention is to provide an intra prediction decoding apparatus for effectively reconstructing an encoded image having high compression efficiency by generating or reconstructing a predictive block close to an original image.
- An intra prediction decoding apparatus according to the present invention includes an entropy decoder for reconstructing quantized residual coefficients, intra prediction information, and size information of a prediction unit from a received bit stream;
- a prediction mode decoder for reconstructing the intra prediction mode of the current prediction unit based on the intra prediction information received from the entropy decoder and the size information of the current prediction unit;
- a residual signal decoder for reconstructing a residual signal according to the intra prediction mode received from the prediction mode decoder;
- a reference pixel generator for generating the reference pixels that are not available for the current prediction unit and adaptively filtering the reference pixels based on the intra prediction mode of the current prediction unit received from the prediction mode decoder;
- a prediction block generator for generating a prediction block using the reference pixels corresponding to the intra prediction mode received from the prediction mode decoder;
- a prediction block post processor for adaptively filtering the prediction block generated by the prediction block generator according to the intra prediction mode received from the prediction mode decoder;
- and an image reconstructor for receiving the prediction block in units of prediction units from the prediction block generator or the prediction block post processor according to the intra prediction mode received from the prediction mode decoder, and generating a reconstructed image using the reconstructed residual block received from the residual signal decoder.
- the intra prediction decoding apparatus according to the present invention generates reference pixels and adaptively filters them, thereby generating a prediction block close to the original image.
- in addition, a prediction block close to the original image may be reconstructed by generating or modifying the prediction block using pixels that are not used to generate the prediction block, so that the image compression ratio can be increased.
- FIG. 1 is a block diagram illustrating a video encoding apparatus according to the present invention.
- FIG. 2 is a diagram illustrating a configuration of an intra prediction unit according to the present invention.
- FIG. 3 is a diagram illustrating a directional intra prediction mode according to the present invention.
- FIG. 4 is a diagram illustrating an intra prediction mode encoding process of a current prediction unit performed by a prediction mode encoder according to the present invention.
- FIG. 5 is a block diagram illustrating an intra prediction decoding apparatus according to the present invention.
- each picture consists of a plurality of slices, and each slice consists of a plurality of coding units.
- in an HD-class or higher resolution image, there are many areas where the image is relatively flat, so encoding with coding units of various sizes can increase the image compression rate.
- the coding unit according to the present invention may be divided hierarchically using depth information into a quad tree structure.
- the largest coding unit is referred to as a largest coding unit (LCU), and the smallest coding unit is referred to as a smallest coding unit (SCU).
- Information related to the largest coding unit (LCU) and the minimum coding unit (SCU) may be included in a sequence parameter set (SPS) and transmitted.
- the maximum coding unit consists of one or more coding units.
- the largest coding unit has the form of a recursive coding tree to include the coding structure and the partition structure of the coding unit. Therefore, if the largest coding unit is not divided into four lower coding units, the coding tree may be composed of one coding unit and information indicating that the coding tree is not divided. However, if the largest coding unit is divided into four lower coding units, the coding tree may be composed of information indicating splitting and four lower coding trees. Similarly, each sub-coding tree has the same structure as the coding tree of the largest coding unit. However, it is not divided below the minimum coding unit.
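- A minimal sketch of this recursive coding-tree traversal is shown below; the function and parameter names (read_split_flag, handle_cu, scu_size) are illustrative assumptions, not terms of the invention.

```python
# Minimal sketch of the recursive coding-tree structure described above.
# `read_split_flag` stands in for a bitstream accessor that returns whether a
# coding unit at (x, y) of the given size is split into four lower coding units.
def parse_coding_tree(x, y, size, scu_size, read_split_flag, handle_cu):
    if size > scu_size and read_split_flag(x, y, size):
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                parse_coding_tree(x + dx, y + dy, half, scu_size,
                                  read_split_flag, handle_cu)
    else:
        # Leaf of the coding tree: one coding unit to be intra/inter predicted.
        handle_cu(x, y, size)

# Example: a 64x64 LCU split once into four 32x32 coding units (SCU size 8).
flags = {(0, 0, 64): True}
read_flag = lambda x, y, s: flags.get((x, y, s), False)
parse_coding_tree(0, 0, 64, 8, read_flag, lambda x, y, s: print("CU", x, y, s))
```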
- each coding unit in the coding tree is subjected to intra prediction or inter prediction in units of coding units or sub partitions.
- a unit for performing the intra prediction or the inter prediction is called a prediction unit.
- the prediction unit for intra prediction may be 2N ⁇ 2N or N ⁇ N.
- Prediction units for inter prediction may be 2Nx2N, 2NxN, Nx2N, NxN.
- 2N means the length and width of the coding unit.
- the prediction unit for intra prediction may not be square.
- a square coding unit may be split into four hN×2N prediction units or four 2N×hN prediction units to perform intra prediction.
- this increases compression efficiency by reducing the distance between the reference pixels used for intra prediction and the pixels of the prediction block.
- This intra prediction method is called short distance intra prediction (SDIP).
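- The sketch below enumerates, under the partition shapes listed above, the candidate intra prediction unit partitions of one 2N×2N coding unit, including the non-square SDIP shapes; the helper name and return format are assumptions for illustration.

```python
# Hypothetical helper listing intra prediction-unit partitions for a 2Nx2N
# coding unit, including the short distance intra prediction (SDIP) shapes.
def intra_pu_partitions(cu_size):
    n = cu_size // 2
    hn = n // 2                      # hN = N/2, so four hNx2N partitions tile the CU
    parts = {
        "2Nx2N": [(cu_size, cu_size)],
        "NxN":   [(n, n)] * 4,
    }
    if hn >= 1:                      # SDIP shapes only when hN is at least one pixel
        parts["hNx2N"] = [(hn, cu_size)] * 4   # four tall, narrow partitions
        parts["2NxhN"] = [(cu_size, hn)] * 4   # four short, wide partitions
    return parts                     # mapping: partition mode -> (width, height) list

print(intra_pu_partitions(16))
```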
- the coding unit includes the prediction mode of the prediction unit in the coding unit and the size information (partmode) of the prediction unit.
- joint coding may be performed by combining the prediction mode information and the size information of the prediction unit.
- each coding unit includes a joint coded prediction type (pred_type).
- the coding unit includes the additional information and the residual signal necessary to generate the predictive block of each prediction unit in the coding unit.
- the side information is defined for each prediction unit in the coding unit.
- in the case of intra prediction, the side information includes encoded intra prediction information.
- in the case of inter prediction, the side information includes encoded motion information.
- the motion information includes a motion vector and a reference picture index.
- the residual signal is included for each coding unit.
- the residual signal includes one transform tree, one luminance residual signal carrier and two chrominance residual signal carriers.
- the residual signal carrier includes encoded residual information of one or more transform units.
- the maximum transform unit is less than or equal to the coding unit size.
- a transform unit may have the size of the maximum transform unit or a smaller transform unit size.
- the transform tree includes information indicating the partition structure of the transform units for the residual signal included in the coding unit.
- the transform tree also includes information indicating whether or not the residual signal in each transform unit is zero.
- the residual signal carrying unit carries encoded residual information of a transform unit corresponding to information representing a partition structure in the transform tree in units of coding units.
- intra or inter prediction may be more effective for residual signal compression by dividing the image signal unevenly in a specific direction according to the shape of the boundary portion of the image.
- the simplest adaptive mode is to split the coding unit into two blocks using straight lines to extract the statistical dependence of the local topography of the prediction region.
- that is, the boundary portion of the image is matched to a straight line, and the block is divided along it.
- in this case, the dividing direction may be defined in four directions: horizontal, vertical, upward diagonal, and downward diagonal. It may also be limited to the horizontal and vertical directions only.
- the number of divisible positions may be 3, 5, 7, and so on. The number of divisible positions may vary depending on the size of the block; for example, the number of possible split positions may be increased for a large coding unit.
- in inter prediction, when one coding unit is divided into two prediction units for more adaptive prediction, motion estimation and motion compensation should be performed for each prediction unit.
- two prediction blocks may be added to generate a prediction block having a size of one coding unit.
- pixels located at the division boundaries may be filtered.
- one of the prediction blocks may be generated by smoothing the overlapping boundary portions.
- FIG. 1 is a block diagram illustrating a video encoding apparatus according to the present invention.
- the video encoding apparatus 100 may include a picture splitter 110, a transformer 120, a quantizer 130, a scanning unit 131, an entropy encoder 140, an intra predictor, an inter predictor, an inverse quantizer 135, an inverse transformer 125, a post processor, a picture storage 180, a subtractor 190, and an adder.
- the picture dividing unit 110 analyzes the input video signal, divides each largest coding unit of the picture into coding units of a predetermined size, and determines the prediction mode and the prediction unit size for each coding unit.
- the picture splitter 110 transmits the prediction unit to be encoded to the intra predictor 150 or the inter predictor 160 according to the prediction mode.
- the picture dividing unit 110 sends the prediction unit to be encoded to the subtracting unit 190.
- the transformer 120 transforms the residual block, which is the residual signal between the original block of the input prediction unit and the prediction block generated by the intra predictor 150 or the inter predictor 160.
- the residual block is composed of a coding unit.
- the residual block composed of the coding units is divided into optimal transform units and transformed.
- the transformation matrix may be adaptively determined according to the prediction mode (intra or inter) and the intra prediction mode.
- the transform unit may be transformed by two (horizontal and vertical) one-dimensional transform matrices. In the case of inter prediction, for example, one predetermined transform matrix is applied.
- in the case of intra prediction, the transform matrices may be selected according to the intra prediction mode; for example, a DCT-based integer matrix may be applied in the vertical direction and a DST-based integer matrix in the horizontal direction,
- or a KLT-based integer matrix may be applied in the vertical direction and a DCT-based integer matrix in the horizontal direction.
- a transform matrix may be adaptively determined depending on the size of the transform unit.
- the quantization unit 130 determines the quantization step size for quantizing the coefficients of the residual block transformed by the transform matrix for each coding unit.
- the quantization step size is determined for each coding unit of a predetermined size or more.
- the predetermined size may be 8x8 or 16x16, and the coefficients of the transform block are quantized using a quantization matrix determined according to the determined quantization step size and the prediction mode.
- the quantization unit 130 uses the quantization step size of the coding unit adjacent to the current coding unit as the quantization step size predictor of the current coding unit.
- the quantization unit 130 searches in order of the left coding unit, the upper coding unit, and the upper left coding unit of the current coding unit, and determines the quantization step size predictor of the current coding unit by using the quantization step sizes of at least one valid coding unit.
- the difference between the quantization step size of the current coding unit and the quantization step size predictor is transmitted to the entropy encoder 140.
- candidates may be the coding units adjacent to the current coding unit and the coding unit immediately before the coding order within the maximum coding unit.
- the order may be reversed and the upper left coding unit may be omitted.
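- A sketch of this quantization step-size prediction is given below; taking the first valid neighbor in (left, upper, upper-left) order is one reading of the text, and the fallback value is an assumption for illustration.

```python
# Sketch of the quantization step-size prediction described above.
def predict_qp_step(left, upper, upper_left):
    """Each argument is a neighbor's quantization step size, or None if invalid."""
    for candidate in (left, upper, upper_left):   # search order from the text
        if candidate is not None:
            return candidate
    return 0                                      # assumed fallback when no neighbor is valid

def encode_qp_step(current_step, left, upper, upper_left):
    predictor = predict_qp_step(left, upper, upper_left)
    return current_step - predictor               # this difference is entropy-coded

print(encode_qp_step(26, None, 24, 22))           # -> 2 (predictor from the upper CU)
```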
- the quantized transform block is provided to the inverse quantization unit 135 and the scanning unit 131.
- the scanning unit 131 scans the coefficients of the quantized transform block and converts them into one-dimensional quantization coefficients.
- the coefficient scanning method is determined according to the prediction mode and the intra prediction mode. In addition, the coefficient scanning scheme may be determined differently according to the size of the transform unit.
- the scanning unit 131 determines, according to the size of the current transform unit, whether to divide the quantized coefficient block into a plurality of subsets. When the size of the transform unit is larger than a first reference size, the quantized coefficient block is divided into a plurality of subsets.
- the first reference size is 4x4 or 8x8.
- the scanning unit 131 determines a scan pattern to be applied to the quantized coefficient block.
- a scan pattern predetermined according to the intra prediction mode may be applied.
- the scan pattern may vary depending on the directional intra prediction mode, and a zigzag scan is applied for the non-directional modes.
- the non-directional mode may be a DC mode or a planar mode.
- the quantization coefficients are scanned in the reverse direction of the scan order.
- the same scan pattern is applied to the quantized coefficients in each subset.
- the plurality of subsets consists of one main subset and at least one residual subset.
- the main subset is located on the upper left side containing the DC coefficients, and the remaining subset covers an area other than the main subset.
- a zigzag scan is applied as the scan pattern between the subsets.
- the subsets are preferably scanned in the forward direction from the main subset to the remaining subsets, but the reverse direction is also possible. The scan pattern between the subsets may also be set in the same manner as the scan pattern of the quantized coefficients within a subset; in this case, the scan pattern between the subsets is determined according to the intra prediction mode.
- the encoder sends information to the decoder that can indicate the location of the last non-zero quantization coefficient in the transform unit. Information that can indicate the location of the last non-zero quantization coefficient in each subset is also sent to the decoder. The information may be information indicating the position of the last non-zero quantization coefficient in each subset.
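- The following sketch illustrates the subset-based scanning described above, assuming 4×4 subsets, a zigzag pattern both within and between subsets, and reverse-order output; the exact zigzag definition is an illustrative assumption.

```python
import numpy as np

def zigzag_order(n):
    """(row, col) pairs of an n x n block in a zigzag-like order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))

def scan_coefficients(block, subset_size=4):
    """Split the quantized block into subsets, scan each subset with the same
    pattern, visit the subsets themselves in zigzag order (main subset first),
    and output the coefficients in reverse order."""
    n = block.shape[0]
    inner, outer = zigzag_order(subset_size), zigzag_order(n // subset_size)
    out = []
    for sr, sc in outer:
        sub = block[sr*subset_size:(sr+1)*subset_size,
                    sc*subset_size:(sc+1)*subset_size]
        out.extend(int(sub[r, c]) for r, c in inner)
    return out[::-1]

coeffs = scan_coefficients(np.arange(64).reshape(8, 8))
print(len(coeffs), coeffs[-1])   # 64 coefficients; the DC coefficient comes last
```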
- the inverse quantization unit 135 inversely quantizes the quantized coefficients.
- the inverse transform unit 125 restores the inverse quantized transform coefficients to the residual block of the spatial domain.
- the adder combines the residual block reconstructed by the inverse transform unit and the prediction block from the intra predictor 150 or the inter predictor 160 to generate a reconstructed block.
- the post-processing unit 160 performs a deblocking filtering process for removing blocking artifacts occurring in the reconstructed picture, an adaptive offset application process for compensating the difference from the original image on a per-pixel basis, and an adaptive loop filter process for compensating the difference from the original image on a per-coding-unit basis.
- the deblocking filtering process is preferably applied to the boundary of the prediction unit and the transform unit having a size of a predetermined size or more.
- the size may be 8x8.
- the deblocking filtering process includes determining the boundary to be filtered, determining the boundary filtering strength to be applied to the boundary, determining whether to apply the deblocking filter, and, when it is determined to apply the deblocking filter, selecting the filter to be applied to the boundary.
- whether the deblocking filter is applied is determined by i) whether the boundary filtering strength is greater than 0, and ii) whether a value indicating the degree of change of the pixel values at the boundary of the two blocks (P block and Q block) adjacent to the boundary to be filtered is smaller than a first reference value determined by the quantization parameter.
- the number of the filters is preferably at least two.
- a filter that performs relatively weak filtering is selected.
- the second reference value is determined by the quantization parameter and the boundary filtering intensity.
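- The on/off decision above can be sketched as follows; the concrete pixel-change measure used here (second differences on each side of the boundary) is an assumption chosen for illustration, not the exact measure of the invention.

```python
# Sketch of the deblocking on/off decision described above.
def apply_deblocking(bs, p, q, beta):
    """bs: boundary filtering strength; p, q: three pixels on each side of the
    boundary (index 0 nearest the edge); beta: first reference value from the QP."""
    if bs <= 0:
        return False
    # Assumed change measure: second differences on the P-block and Q-block sides.
    d = abs(p[2] - 2 * p[1] + p[0]) + abs(q[2] - 2 * q[1] + q[0])
    return d < beta

print(apply_deblocking(2, [100, 101, 103], [120, 119, 118], beta=12))   # True
```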
- the adaptive offset application process reduces the distortion between the original pixels and the pixels of the image to which the deblocking filter has been applied.
- the picture or slice may be divided into a plurality of offset regions, and an offset mode may be determined for each offset region.
- the offset mode may include four edge offset modes, two band offset modes, and an offset free mode. According to each offset mode, pixels in each offset region are classified into a predetermined number of classes, and an offset corresponding to the classified classes is added.
- the class of the current pixel is determined by comparing pixel values of the current pixel and two or more adjacent pixels.
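- A sketch of such a pixel classification for an edge offset mode follows; the five-category rule below is an illustrative assumption rather than the invention's exact rule.

```python
# Sketch: classify a pixel by comparing it with two neighbors along the edge
# direction, then add the offset assigned to its class.
def edge_offset_class(left, cur, right):
    sign = lambda d: (d > 0) - (d < 0)
    s = sign(cur - left) + sign(cur - right)
    # -2: local minimum, -1: concave edge, 0: none, +1: convex edge, +2: local maximum
    return {-2: 1, -1: 2, 0: 0, 1: 3, 2: 4}[s]

def apply_edge_offsets(row, offsets):
    """offsets: mapping from class to offset value; class 0 receives no offset."""
    out = list(row)
    for i in range(1, len(row) - 1):
        out[i] += offsets.get(edge_offset_class(row[i-1], row[i], row[i+1]), 0)
    return out

print(apply_edge_offsets([10, 8, 10, 12, 12], {1: 2, 4: -1}))
```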
- the adaptive loop filter process may perform filtering based on a value obtained by comparing the reconstructed image and the original image that have undergone the deblocking filtering process or the adaptive offset application process.
- the adaptive loop filter (ALF) is determined based on one Laplacian activity value computed on a 4x4 block basis.
- the determined ALF may be applied to all pixels included in a 4x4 or 8x8 block.
- Whether to apply the adaptive loop filter may be determined for each coding unit.
- the size and coefficients of the loop filter to be applied according to each coding unit may vary.
- Information indicating whether the adaptive loop filter is applied to each coding unit, filter coefficient information, and filter type information may be included in each slice header and transmitted to the decoder. In the case of a chrominance signal, it may be determined whether to apply an adaptive loop filter on a picture basis.
- for the chrominance signal, the shape of the loop filter may be rectangular, unlike that for luminance.
- the picture storage unit 180 receives the post-processed image data from the post processor 160 and restores and stores the image in picture units.
- the picture may be an image in a frame unit or an image in a field unit.
- the picture storage unit 180 includes a buffer (not shown) that can store a plurality of pictures.
- the inter prediction unit 150 performs motion estimation using at least one reference picture stored in the picture storage unit 180, and determines a reference picture index and a motion vector representing the reference picture.
- then, a prediction block corresponding to the prediction unit to be encoded is output from the reference picture used for motion estimation, among the plurality of reference pictures stored in the picture storage unit 180, according to the determined reference picture index and motion vector.
- the intra prediction unit 140 performs intra prediction encoding by using the reconstructed pixel values in the picture in which the current prediction unit is included.
- the intra prediction unit 140 receives the current prediction unit to be predictively encoded, selects one of a preset number of intra prediction modes according to the size of the current block, and performs intra prediction.
- the intra predictor adaptively filters the reference pixel to generate an intra prediction block. If the reference pixel is invalid, the reference pixels of the invalid position may be generated using the valid reference pixels.
- the entropy encoder 130 entropy encodes the quantization coefficients quantized by the quantizer 130, intra prediction information received from the intra predictor 140, motion information received from the inter predictor 150, and the like.
- FIG. 2 is a diagram illustrating a configuration of an intra predictor 140 according to the present invention.
- the intra prediction unit 140 may include a prediction unit receiver 141, a reference pixel generator 142, a prediction block generator 143, a prediction block post processor 144, a prediction mode determiner 145, and a prediction mode encoder 146.
- the prediction unit receiver 141 receives the prediction unit input from the picture division unit 110.
- the prediction unit receiver 141 transmits the received size information of the prediction unit to the prediction mode determiner 145 and the reference pixel generator 142, and transmits the prediction unit to the reference pixel generator 142 and the prediction block generator 143.
- here, L is the width of the current prediction unit and M is the height of the current prediction unit.
- if no reference pixels are available, the reference pixels may be generated with a constant value.
- alternatively, a reference pixel may be generated by copying the value of the nearest available pixel.
- the value of the generated reference pixels may be a rounded value of an average value of the reference pixel p and the reference pixel q.
- the generated values of the reference pixels may be generated using a change in the difference between the pixel values of the reference pixel p and the reference pixel q.
- the reference pixel may be generated through linear interpolation according to the generated pixel position or the weighted average of the two reference pixels.
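- A sketch of this reference-pixel generation is shown below: copy the nearest available pixel when only one side is available, and interpolate between the two surrounding available pixels otherwise; representing unavailable pixels with None and the constant value 128 are assumptions for illustration.

```python
# Sketch of generating unavailable reference pixels as described above.
def fill_reference_pixels(ref):
    orig = list(ref)                     # None marks an unavailable reference pixel
    n = len(orig)
    if all(v is None for v in orig):
        return [128] * n                 # nothing available: use a constant value
    out = list(orig)
    for i, v in enumerate(orig):
        if v is None:
            left = next((j for j in range(i - 1, -1, -1) if orig[j] is not None), None)
            right = next((j for j in range(i + 1, n) if orig[j] is not None), None)
            if left is None:
                out[i] = orig[right]     # copy the nearest available pixel
            elif right is None:
                out[i] = orig[left]
            else:                        # weighted average (linear interpolation)
                w = (i - left) / (right - left)
                out[i] = round(orig[left] * (1 - w) + orig[right] * w)
    return out

print(fill_reference_pixels([None, None, 40, None, None, 60, None]))
```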
- when the reference pixels span two upper prediction units, the difference between the two boundary pixels located at the boundary of the two upper prediction units is likely to be greater than the difference between adjacent pixels within each upper prediction unit. This results from the error caused by the quantization parameter. Such error is likely to occur in directional intra prediction modes in which a prediction pixel is generated using two adjacent reference pixels.
- the modes (mode numbers 3, 6, and 9) having a directionality of 45 ° based on the horizontal or vertical direction of FIG. 3 are most affected.
- the vertical or horizontal modes (mode numbers 0 and 1) are less affected since one pixel is used for pixel generation of the prediction block.
- therefore, a smoothing filter is applied to the reference pixels for the directional intra prediction modes having a directionality of 45° (mode numbers 3, 6, and 9), while for the vertical and horizontal intra prediction modes (mode numbers 0 and 1) no filter is applied to the reference pixels.
- the filter is not applied to the reference pixels in the DC mode among the non-directional modes either. For these modes, whether to apply the filter may be determined regardless of the size of the current prediction unit.
- for the remaining directional intra prediction modes, that is, the modes having a directionality between the 45° modes and the vertical or horizontal modes, the filter may be adaptively applied to the reference pixels according to the size of the prediction unit.
- in this case, even when the filter is applied to the reference pixels in a first directional mode, the filter may or may not be applied to the reference pixels in a second directional mode.
- the change in the difference value between the pixels in the large size prediction unit is more likely to be gentler than the change in the difference value between the pixels in the small size prediction unit. Therefore, as the size of the prediction unit increases, the reference pixel can be filtered for more directional modes, and a stronger filter can be applied. In contrast, the filter may not be applied when the size of the prediction unit becomes smaller than a specific size.
- for example, a first filter may be applied to a prediction unit having a size smaller than or equal to a first size, and a second filter, which is a stronger filter than the first filter, may be applied to a prediction unit larger than the first size.
- the first size may vary depending on directional prediction modes.
- alternatively, no filter may be applied to a prediction unit having a size smaller than or equal to a second size,
- the first filter may be applied to a prediction unit having a size larger than the second size and smaller than or equal to a third size,
- and the second filter may be applied to a prediction unit having a size larger than the third size.
- the second size and the third size may vary according to directional prediction modes.
- the first filter may be [1, 2, 1], which is a 3-tap filter, or [1, 2, 4, 2, 1], which is a 5-tap filter.
- the second filter may use a strong filter having a greater smoothing effect than the first filter.
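- The sketch below applies the [1, 2, 1] 3-tap filter mentioned above to the reference pixels, with a mode- and size-dependent on/off rule; the size threshold and the exact set of modes are illustrative assumptions.

```python
# Sketch of the adaptive reference-pixel smoothing described above.
def smooth_reference_pixels(ref, mode, pu_size,
                            always_filtered=frozenset({3, 6, 9}), min_size=8):
    # Vertical (0), horizontal (1) and DC modes: never filtered.
    if mode in (0, 1) or mode == "DC":
        return list(ref)
    # Other directional modes: filtered only for sufficiently large prediction units.
    if mode not in always_filtered and pu_size < min_size:
        return list(ref)
    out = list(ref)
    for i in range(1, len(ref) - 1):     # end pixels are left unfiltered
        out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) >> 2
    return out

print(smooth_reference_pixels([10, 20, 60, 20, 10], mode=6, pu_size=16))
```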
- the prediction block generator 143 generates a prediction block by using corresponding reference pixels in the intra prediction mode.
- corresponding reference pixels differ according to the intra prediction mode.
- for example, when the intra prediction mode is the vertical mode, the prediction block is generated using the upper reference pixels.
- Non-directional intra prediction modes are the DC mode and the planar mode.
- in the planar mode, a prediction pixel of the prediction block is generated using a corner reference pixel, a left reference pixel, and an upper reference pixel.
- in the intra prediction modes to the right of the vertical mode (mode number 0) of FIG. 3, if the prediction block is generated using only the upper reference pixels, the difference between the pixels in the lower-left region of the generated prediction block and the corresponding pixels of the original prediction unit is likely to increase. However, in at least some of these modes, the difference can be reduced by generating the prediction block using both the upper reference pixels and the left reference pixels. This effect is greatest in the intra prediction mode with mode number 6. Similarly, the same applies to the intra prediction modes below the horizontal mode (mode number 1) of FIG. 3, where the effect is greatest in the intra prediction mode with mode number 9.
- one upper interpolation reference pixel and one left interpolation reference pixel may be used to generate a prediction pixel.
- the prediction pixel may be generated by a linear interpolation method of the one upper interpolation reference pixel and one left interpolation reference pixel, or may use a rounded average value.
- accordingly, a prediction block may be generated using both the left reference pixels and the upper reference pixels. That is, when the intra prediction mode is mode number 6, or one of a predetermined number (for example, four) of directional modes adjacent to mode number 6, the prediction block may be generated using the left reference pixels and the upper reference pixels.
- the method may not be applied in an intra prediction mode having a mode number larger than a predetermined mode number (eg, 9 or 17) in order to reduce complexity.
- in addition, the method may be applied only to prediction units having a size of a predetermined size or more (e.g., 8x8 or 16x16).
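- Two of the generation rules described above are sketched below: plain vertical prediction from the upper reference pixels, and a variant that combines one upper and one left reference pixel by a rounded average; using top[x] and left[y] directly (rather than interpolated reference pixels) is a simplification assumed for illustration.

```python
import numpy as np

def predict_vertical(top, size):
    """Vertical mode (mode number 0): copy the upper reference pixels downwards."""
    return np.tile(np.asarray(top[:size]), (size, 1))

def predict_blended(top, left, size):
    """Each prediction pixel is the rounded average of one upper and one left
    reference pixel, one simple variant mentioned in the text."""
    pred = np.zeros((size, size), dtype=np.int32)
    for y in range(size):
        for x in range(size):
            pred[y, x] = (top[x] + left[y] + 1) >> 1
    return pred

top, left = [50, 52, 54, 56], [50, 60, 70, 80]
print(predict_vertical(top, 4))
print(predict_blended(top, left, 4))
```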
- the predictive block postprocessor 144 adaptively filters the predictive block generated by the predictive block generator 143. In order to reduce the difference in pixel values between the reference pixel and the pixels in the prediction block adjacent to the reference pixel, some or all of the pixels in the prediction block and the adjacent prediction block are adaptively filtered according to the intra prediction mode.
- in some modes, the pixels of the prediction block adjacent to the reference pixels are generated directly from those reference pixels, and thus no filter is applied.
- in the DC mode, on the other hand, a filter is applied because the average value of the reference pixels is used.
- different types of filters may be used according to the size of the prediction unit (the size of the prediction block).
- the filter used for the large prediction unit may use the same filter as that used for the small prediction unit or a strong filter having a large smoothing effect.
- when the prediction block is generated using only the upper reference pixels, the difference between the pixels of the generated prediction block and the corresponding pixels of the original prediction unit is likely to grow larger toward the lower-left area. In particular, in the intra prediction mode having the mode number 6, the difference value becomes larger.
- similarly, in certain modes the difference between the pixels of the prediction block and the corresponding pixels of the original prediction unit increases toward the bottom, while in the horizontal mode (mode number 1) the difference between the pixels of the prediction block and the corresponding pixels of the original prediction unit increases toward the right.
- some pixels in the prediction block may be adaptively filtered according to the directional intra prediction mode.
- these pixels of the prediction block are filtered using the reference pixels of the prediction unit that were not used for generating the prediction block.
- the area to be filtered may be set differently according to the directional intra prediction mode.
- the area to be filtered may be the same or wider.
- the filter may not be applied in the intra prediction mode having a mode number greater than a predetermined mode number (for example, 9 or 17) to reduce the complexity.
- some pixels of the prediction block may be adaptively filtered according to the size of the prediction unit. As the size of the prediction unit increases, the pixel ratio to be filtered may be maintained or increased.
- depending on the size of the prediction unit, the prediction block may not be filtered.
- for a prediction unit of 32x32 or more, all eight boundary pixels may be filtered.
- the filter intensity to be applied to the pixels of the prediction block may vary according to the size of the prediction unit. As the size of the prediction unit increases, the filter strength may be maintained or increased.
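- As an illustration of this post-processing, the sketch below filters the first row and first column of a DC-mode prediction block with the adjacent reference pixels; the 3:1 weighting is an assumption chosen for illustration.

```python
import numpy as np

# Sketch of prediction-block post-processing for the DC mode: the prediction
# pixels adjacent to the reference pixels are blended with those reference pixels.
def filter_dc_prediction(pred, top, left):
    out = pred.astype(np.int32).copy()
    size = out.shape[0]
    for x in range(size):                # first row: blend with upper reference pixels
        out[0, x] = (top[x] + 3 * out[0, x] + 2) >> 2
    for y in range(size):                # first column: blend with left reference pixels
        out[y, 0] = (left[y] + 3 * out[y, 0] + 2) >> 2
    return out

dc_block = np.full((4, 4), 55)
print(filter_dc_prediction(dc_block, top=[50, 52, 54, 56], left=[50, 60, 70, 80]))
```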
- the prediction mode determiner 145 determines the intra prediction mode of the current prediction unit by using the reference pixels.
- the prediction mode determiner 145 may determine, as the intra prediction mode of the current prediction unit, the intra prediction mode for which the amount of coding of the residual block, generated using the prediction block or the post-processed prediction block for each intra prediction mode, is minimal.
- the prediction mode encoder 146 encodes the intra prediction mode of the current prediction unit by using the intra prediction modes of the prediction unit adjacent to the current prediction unit.
- FIG. 4 is a diagram illustrating an intra prediction mode encoding process of a current prediction unit performed by the prediction mode encoder 146 according to the present invention.
- an intra prediction mode candidate of a current prediction unit is searched for (S110).
- the intra prediction mode candidate may be the top and left modes of the current prediction unit.
- the corner intra prediction mode may be added, and a new mode may be added according to the upper and left intra prediction modes.
- when there are a plurality of upper prediction units of the current prediction unit, the intra prediction mode of the first valid prediction unit found while scanning in a predetermined direction (for example, from right to left) is set as the upper intra prediction mode. Likewise, when there are a plurality of left prediction units of the current prediction unit, the intra prediction mode of the first valid prediction unit found while scanning in a predetermined direction (for example, downward) may be set as the left intra prediction mode. Alternatively, the smallest mode number among the mode numbers of the plurality of valid prediction units may be set as the upper intra prediction mode.
- the corner intra prediction mode may be a prediction mode of the prediction unit adjacent to the upper right side or the upper left side of the current prediction unit.
- alternatively, the corner intra prediction mode may be the first valid intra prediction mode found while scanning, in a predetermined order (for example, upper-right, lower-left, upper-left), the intra prediction modes of the prediction units adjacent to the upper-right, lower-left, and upper-left corners of the current prediction unit.
- two (upper-right, upper-left) or three (upper-right, upper-left, lower-left) corner intra prediction modes may be added as candidates for the intra prediction mode of the current prediction unit, or no corner candidate may exist.
- the intra prediction mode of the valid intra prediction mode candidate is changed (S130).
- specifically, the intra prediction mode value of the valid intra prediction mode candidate is converted into one of a predetermined number of intra prediction mode values.
- the predetermined number may vary depending on the size of the current prediction unit. For example, if the size of the current prediction unit is 4x4, the mode is mapped to one of nine modes (mode numbers 0 to 8) or 18 modes; if the size of the current prediction unit is 64x64, it is mapped to one of three modes (mode numbers 0 to 2).
- an intra prediction candidate list of the current prediction unit is constructed (S140).
- the candidate list may be ordered by mode number. Alternatively, the candidates may be ordered by frequency of use and, in the case of equal frequency, by mode number. If a plurality of intra prediction candidates have the same mode, all but one are deleted from the list.
- a mode change value for changing the intra prediction mode of the current prediction unit is obtained (S170).
- the mode change value is the number of intra prediction mode candidates in the intra prediction candidate list whose intra prediction mode value is not greater than the intra prediction mode value of the current prediction unit.
- the intra prediction mode of the current prediction unit is changed using the mode change value (S180).
- the intra prediction mode of the changed current prediction unit is determined.
- the intra prediction mode of the changed current prediction unit is transmitted to the entropy encoder 140.
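- The sketch below walks through steps S110 to S180 at a high level: collect the neighboring candidate modes, remove duplicates, and change the current mode by the mode change value; subtracting the change value, and signalling an index when the current mode equals a candidate, are assumptions since the text elides those details.

```python
# Sketch of the intra prediction mode encoding steps described above.
def encode_intra_mode(current_mode, candidate_modes):
    # S110/S140: keep valid candidates, drop duplicates, order by mode number.
    candidates = sorted({m for m in candidate_modes if m is not None})
    if current_mode in candidates:
        # Assumed handling of the elided case where the current mode is a candidate:
        # signal its index in the candidate list.
        return ("candidate_index", candidates.index(current_mode))
    # S170: number of candidates whose mode value does not exceed the current mode.
    change = sum(1 for m in candidates if m <= current_mode)
    # S180: change the current mode using the mode change value (assumed: subtraction).
    return ("changed_mode", current_mode - change)

# Example: upper mode 2, left mode 5, no corner candidate, current mode 7.
print(encode_intra_mode(7, [2, 5, None]))   # -> ('changed_mode', 5)
```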
- FIG. 5 is a block diagram illustrating an intra prediction decoding apparatus 200 according to the present invention.
- the intra prediction decoding apparatus 200 includes an entropy decoder 210, a residual signal decoder 220, a prediction mode decoder 230, a reference pixel generator 240, a prediction block generator 250, a prediction block post processor 260, and an image reconstructor 270.
- the entropy decoder 210 extracts the quantized residual coefficients from the received bitstream, and transmits the quantized residual coefficients and the size information of the transform unit to the residual signal decoder 220. In addition, the entropy decoder 210 transmits the intra prediction information and the size information of the prediction unit to be decoded from the received bit stream to the prediction mode decoder 230.
- the residual signal decoder 220 converts the received quantized residual coefficient into a dequantized block of a two-dimensional array.
- One of the plurality of scanning patterns is selected for the conversion.
- the scanning pattern of the transform block is determined based on at least one of a prediction mode and an intra prediction mode.
- the inverse scanning operation is the reverse of the operation of the scanning unit 131 of FIG. 1. That is, if the size of the current transform unit to be decoded is larger than the first reference size, inverse scanning is performed for each of the plurality of subsets based on the inverse scanning pattern, and an inverse quantization block having the size of the transform unit is generated using the plurality of inverse-scanned subsets. On the other hand, if the size of the current transform unit to be decoded is not larger than the first reference size, inverse scanning is performed on the whole block based on the inverse scanning pattern to generate an inverse quantization block having the size of the transform unit.
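- A sketch of this inverse scanning is given below; it mirrors the encoder-side sketch (4×4 subsets, zigzag order inside and between subsets, reverse coefficient order), all of which are illustrative assumptions.

```python
import numpy as np

def zigzag_order(n):
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))

def inverse_scan(coeffs, block_size, subset_size=4):
    coeffs = list(coeffs)[::-1]              # coefficients arrive in reverse order
    block = np.zeros((block_size, block_size), dtype=np.int32)
    if block_size <= subset_size:            # small transform unit: no subsets
        for value, (r, c) in zip(coeffs, zigzag_order(block_size)):
            block[r, c] = value
        return block
    inner, outer = zigzag_order(subset_size), zigzag_order(block_size // subset_size)
    idx = 0
    for sr, sc in outer:                     # subsets are visited in zigzag order
        for r, c in inner:
            block[sr * subset_size + r, sc * subset_size + c] = coeffs[idx]
            idx += 1
    return block

print(inverse_scan(list(range(16)), 4))      # 4x4 transform unit: a single scan
```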
- the prediction mode decoder 230 restores the intra prediction mode of the current prediction unit based on the intra prediction information received from the entropy decoder 210 and the size information of the current prediction unit.
- the received intra prediction information is recovered through the inverse process of FIG. 4.
- the reference pixel generator 240 generates the reference pixels that are not available for the current prediction unit, and adaptively filters the reference pixels based on the intra prediction mode of the current prediction unit received from the prediction mode decoder 230.
- the method of generating the reference pixel and the method of filtering the reference pixel are the same as the method of generating the reference pixel and the method of filtering the reference pixel of the reference pixel generator 142 of the intra predictor 140 of FIG. 2.
- here, L is the width of the current prediction unit and M is the height of the current prediction unit.
- a reference pixel is generated if the reference pixels required to generate the prediction block according to the intra prediction mode are not available or are insufficient.
- if no reference pixels are available, the reference pixels may be generated with a constant value.
- alternatively, a reference pixel may be generated by copying the value of the nearest available pixel.
- the value of the generated reference pixels may be a rounded value of an average value of the reference pixel p and the reference pixel q.
- the generated values of the reference pixels may be generated using a change in the difference between the pixel values of the reference pixel p and the reference pixel q.
- the reference pixel may be generated through linear interpolation according to the generated pixel position or the weighted average of the two reference pixels.
- when the reference pixels span two upper prediction units, the difference between the two boundary pixels located at the boundary of the two upper prediction units is likely to be greater than the difference between adjacent pixels within each upper prediction unit. This results from the error caused by the quantization parameter. Such error is likely to occur in directional intra prediction modes in which a prediction pixel is generated using two adjacent reference pixels.
- the modes (mode numbers 3, 6, and 9) having a directionality of 45 ° based on the horizontal or vertical direction of FIG. 3 are most affected.
- the vertical or horizontal modes (mode numbers 0 and 1) are less affected since one pixel is used for pixel generation of the prediction block.
- therefore, a smoothing filter is applied to the reference pixels for the directional intra prediction modes having a directionality of 45° (mode numbers 3, 6, and 9), while for the vertical and horizontal intra prediction modes (mode numbers 0 and 1) no filter is applied to the reference pixels.
- the filter is not applied to the reference pixels in the DC mode among the non-directional modes either. For these modes, whether to apply the filter may be determined regardless of the size of the current prediction unit.
- a smoothing filter may be adaptively applied to the reference pixels for the directional modes having a directionality between the 45° intra prediction modes (mode numbers 3, 6, and 9) and the vertical or horizontal intra prediction modes.
- the change in the difference value between the pixels in the large size prediction unit is more likely to be gentler than the change in the difference value between the pixels in the small size prediction unit. Therefore, as the size of the prediction unit increases, the reference pixel can be filtered for more directional modes, and a stronger filter can be applied. In contrast, the filter may not be applied when the size of the prediction unit becomes smaller than a specific size.
- for example, a first filter may be applied to a prediction unit having a size smaller than or equal to a first size, and a second filter, which is a stronger filter than the first filter, may be applied to a prediction unit larger than the first size.
- the first size may vary depending on directional prediction modes.
- alternatively, no filter may be applied to a prediction unit having a size smaller than or equal to a second size,
- the first filter may be applied to a prediction unit having a size larger than the second size and smaller than or equal to a third size,
- and the second filter may be applied to a prediction unit having a size larger than the third size.
- the second size and the third size may vary according to directional prediction modes.
- the first filter may be [1, 2, 1], which is a 3-tap filter, or [1, 2, 4, 2, 1], which is a 5-tap filter.
- the second filter may use a strong filter having a greater smoothing effect than the first filter.
- the prediction block generator 250 generates the prediction block according to the intra prediction mode of the current prediction unit received from the prediction mode decoder 230.
- the prediction block generation method is the same as that of the prediction block generator 143 of the intra prediction unit 140 of FIG. 2.
- corresponding reference pixels vary according to the intra prediction mode.
- for example, when the intra prediction mode is the vertical mode, the prediction block is generated using the upper reference pixels.
- Non-directional intra prediction modes are the DC mode and the planar mode.
- in the planar mode, a prediction pixel of the prediction block is generated using a corner reference pixel, a left reference pixel, and an upper reference pixel.
- in the intra prediction modes to the right of the vertical mode (mode number 0) of FIG. 3, if the prediction block is generated using only the upper reference pixels, the difference between the pixels in the lower-left region of the generated prediction block and the corresponding pixels of the original prediction unit is likely to increase. However, in at least some of these modes, the difference can be reduced by generating the prediction block using both the upper reference pixels and the left reference pixels. This effect is greatest in the intra prediction mode with mode number 6. Similarly, the same applies to the intra prediction modes below the horizontal mode (mode number 1) of FIG. 3, where the effect is greatest in the intra prediction mode with mode number 9.
- one upper interpolation reference pixel and one left interpolation reference pixel may be used to generate a prediction pixel.
- the prediction pixel may be generated by a linear interpolation method of the one upper interpolation reference pixel and one left interpolation reference pixel, or may use a rounded average value.
- a prediction block may be generated using the left reference pixels and the upper reference pixels.
- the method may not be applied in an intra prediction mode having a mode number larger than a predetermined mode number (eg, 9 or 17) in order to reduce complexity.
- in addition, the method may be applied only to prediction units having a size of a predetermined size or more (e.g., 8x8 or 16x16).
- the prediction block post processor 260 adaptively filters the prediction blocks generated by the prediction block generator 250 according to the intra prediction mode of the current prediction unit received from the prediction mode decoder 230.
- the predictive block post processor may be integrated into the predictive block generator 250.
- the filtering method of the prediction block is the same as the prediction block filtering method of the prediction block post processor 144 of the intra prediction unit 140 of FIG. 2.
- some or all of the pixels in the prediction block and the adjacent prediction block are adaptively filtered according to the intra prediction mode.
- in some modes, the pixels of the prediction block adjacent to the reference pixels are generated directly from those reference pixels, and thus no filter is applied.
- in the DC mode, on the other hand, a filter is applied because the average value of the reference pixels is used.
- different types of filters may be used according to the size of the prediction unit (the size of the prediction block).
- the filter used for the large prediction unit may use the same filter as that used for the small prediction unit or a strong filter having a large smoothing effect.
- when the prediction block is generated using only the upper reference pixels, the difference between the pixels of the generated prediction block and the corresponding pixels of the original prediction unit tends to increase toward the lower-left region. In particular, in the intra prediction mode having mode number 6, the difference value becomes larger.
- the difference between the pixels of the prediction block and the corresponding pixels of the original prediction unit increases toward the bottom.
- in the horizontal mode (mode number 1), the difference between the pixels of the prediction block and the corresponding pixels of the original prediction unit increases toward the right.
- some pixels in the prediction block may be adaptively filtered according to the directional intra prediction mode.
- these pixels are filtered using reference pixels of the prediction unit that are not used to generate the prediction block.
- the area to be filtered may be set differently according to the directional intra prediction mode.
- the area to be filtered may be the same or wider.
- the filter may not be applied in the intra prediction mode having a mode number greater than a predetermined mode number (for example, 9 or 17) to reduce the complexity.
- some pixels of the prediction block may be adaptively filtered according to the size of the prediction unit. As the size of the prediction unit increases, the pixel ratio to be filtered may be maintained or increased.
- the prediction block may not be filtered.
- for a prediction unit of 32x32 or larger, all eight boundary pixels may be filtered (see the size-dependent sketch after this list).
- the filter intensity to be applied to the pixels of the prediction block may vary according to the size of the prediction unit. As the size of the prediction unit increases, the filter strength may be maintained or increased.
- the image reconstruction unit 270 receives the prediction block in units of prediction units from the prediction block generator 250 or the prediction block filtering unit 260 according to the intra prediction mode reconstructed by the prediction mode decoder 230.
- the image reconstructor 270 receives the residual block reconstructed by the residual signal decoder 220 in units of transform units.
- the image reconstructor 270 generates a reconstructed image by adding the received prediction block and the residual block (a reconstruction sketch follows this list).
- the reconstructed image may be reconstructed in units of coding units.
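The snippets above describe the processing informally; the short sketches below are illustrative only and are not taken from the disclosed apparatus. First, a minimal sketch of smoothing a line of reference pixels with the 3-tap [1, 2, 1] or 5-tap [1, 2, 4, 2, 1] filter; the function name, the rounding, and the decision to leave the end pixels unfiltered are assumptions.

```python
def smooth_reference_pixels(ref, taps=(1, 2, 1)):
    """Apply a symmetric smoothing filter such as [1, 2, 1] or [1, 2, 4, 2, 1]
    to a 1-D list of reference pixels. End pixels are left unfiltered."""
    half = len(taps) // 2
    total = sum(taps)
    out = list(ref)
    for i in range(half, len(ref) - half):
        acc = sum(t * ref[i + k - half] for k, t in enumerate(taps))
        out[i] = (acc + total // 2) // total  # integer rounding, as is usual for pixel filters
    return out

# Example: the 3-tap filter softens the step between two flat regions
print(smooth_reference_pixels([10, 10, 80, 80, 80]))                   # -> [10, 28, 63, 80, 80]
print(smooth_reference_pixels([10, 10, 80, 80, 80], (1, 2, 4, 2, 1)))  # 5-tap, stronger smoothing
```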
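Next, a hedged sketch of forming a prediction pixel from one upper and one left interpolation reference pixel, either by linear interpolation or by a rounded average; the 6-bit weights and the function signature are assumptions, not the weights of the described apparatus.

```python
def combine_reference_pixels(up_ref, left_ref, wx, wy, use_average=False):
    """Combine one upper and one left (interpolated) reference pixel into a
    prediction pixel, either by weighted linear interpolation or by a rounded
    average. The 6-bit weights are an illustrative choice."""
    if use_average:
        return (up_ref + left_ref + 1) >> 1            # rounded average
    return (wx * up_ref + wy * left_ref + 32) >> 6     # linear interpolation with wx + wy == 64

# A near-vertical mode might weight the upper reference pixel more heavily
print(combine_reference_pixels(100, 60, wx=48, wy=16))                  # -> 90
print(combine_reference_pixels(100, 60, wx=0, wy=0, use_average=True))  # -> 80
```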
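For the DC-mode case, where the prediction block is built from the average of the reference pixels and its boundary is then filtered, a possible boundary-smoothing step could look as follows; the corner and [1, 3] weightings are assumptions for illustration.

```python
def filter_dc_block_boundary(pred, above, left):
    """Smooth the top row and left column of a DC-predicted block using the
    adjacent reference pixels. The weightings are illustrative assumptions;
    `pred` is modified in place."""
    n = len(pred)
    pred[0][0] = (above[0] + left[0] + 2 * pred[0][0] + 2) >> 2   # corner pixel
    for x in range(1, n):                                          # top row
        pred[0][x] = (above[x] + 3 * pred[0][x] + 2) >> 2
    for y in range(1, n):                                          # left column
        pred[y][0] = (left[y] + 3 * pred[y][0] + 2) >> 2
    return pred

# Usage: a 4x4 DC block whose predicted value is 70
block = [[70] * 4 for _ in range(4)]
filter_dc_block_boundary(block, above=[80, 82, 84, 86], left=[60, 58, 56, 54])
print(block[0])  # top row pulled slightly toward the upper reference pixels -> [70, 73, 74, 74]
```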
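The size-dependent behaviour (no filtering for small prediction units, a maintained or increasing filtered ratio and strength as the prediction unit grows, and eight boundary pixels for 32x32 or larger) could be captured by a small parameter function such as this sketch; every threshold other than the 32x32 / eight-pixel case is an assumption.

```python
def boundary_filter_params(pu_size):
    """Pick how many boundary pixels to filter and which filter to use so that
    the filtered ratio and filter strength do not decrease with the size of the
    prediction unit. All thresholds except the 32x32 / eight-pixel case are assumed."""
    if pu_size < 8:
        return 0, None                      # assume small prediction units are not filtered
    if pu_size < 32:
        return pu_size // 4, (1, 2, 1)      # e.g. 2 pixels for 8x8, 4 pixels for 16x16
    return 8, (1, 2, 4, 2, 1)               # 32x32 or larger: eight boundary pixels, stronger filter

for size in (4, 8, 16, 32, 64):
    print(size, boundary_filter_params(size))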
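Finally, the reconstruction step, in which the prediction block and the reconstructed residual block are added, amounts to a clipped per-sample addition; the 8-bit sample depth in this sketch is an assumption.

```python
def reconstruct_block(pred, resid, bit_depth=8):
    """Add the decoded residual block to the prediction block and clip each
    sample to the valid range (the 8-bit depth here is an assumption)."""
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), max_val) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, resid)]

# Usage: a 2x2 prediction block plus its residual, with clipping at both ends
print(reconstruct_block([[120, 130], [140, 150]], [[-5, 7], [200, -160]]))
# -> [[115, 137], [255, 0]]
```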
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Discrete Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Television Signal Processing For Recording (AREA)
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
Abstract
Description
Claims (7)
- An intra prediction decoding apparatus, comprising: an entropy decoding unit which restores quantized residual coefficients, intra prediction information, and size information of a prediction unit from a received bit stream; a prediction mode decoding unit which restores an intra prediction mode of a current prediction unit based on the intra prediction information and the size information of the current prediction unit received from the entropy decoding unit; a residual signal decoding unit which restores a residual signal according to the intra prediction mode received from the prediction mode decoding unit; a reference pixel generating unit which generates unavailable reference pixels of the current prediction unit and adaptively filters the reference pixels based on the intra prediction mode of the current prediction unit received from the prediction mode decoding unit; a prediction block generating unit which generates a prediction block using reference pixels corresponding to the intra prediction mode received from the prediction mode decoding unit; a prediction block post-processing unit which adaptively filters the prediction block generated by the prediction block generating unit according to the intra prediction mode received from the prediction mode decoding unit; and an image reconstruction unit which receives the prediction block, in units of prediction units, from the prediction block generating unit or the prediction block filtering unit according to the intra prediction mode received from the prediction mode decoding unit, and generates a reconstructed image using the reconstructed residual block received from the residual signal decoding unit.
- The intra prediction decoding apparatus of claim 1, wherein, for intra prediction modes having a directionality between the intra prediction modes having a 45° directionality with respect to the horizontal mode or the vertical mode (mode numbers 3, 6 and 9) and the horizontal mode or the vertical mode, the reference pixel generating unit adaptively filters the reference pixels according to the size of the prediction unit.
- The intra prediction decoding apparatus of claim 1, wherein the reference pixel generating unit does not apply a filter to reference pixels of a prediction unit smaller than a predetermined size.
- The intra prediction decoding apparatus of claim 1, wherein, among a first directional mode and second directional modes existing between the horizontal mode or the vertical mode and the direction of the intra prediction mode having a 45° directionality (mode number 3, 6 or 9), when the first directional mode has a directionality closer to the intra prediction mode having the 45° directionality than the second directional mode, the reference pixel generating unit also applies a filter to the reference pixels of the first directional mode if a filter is applied to the reference pixels of the second directional mode.
- The intra prediction decoding apparatus of claim 1, wherein, when the intra prediction mode is the planar mode, the prediction block generating unit generates a reference pixel of the prediction block using a corner reference pixel, a left reference pixel, and an upper reference pixel.
- The intra prediction decoding apparatus of claim 1, wherein, when the intra prediction mode is the vertical mode, the reference pixel generating unit does not filter the reference pixels, and the prediction block post-processing unit uses reference pixels not used for generating the prediction block in order to filter some pixels within the prediction block.
- The intra prediction decoding apparatus of claim 1, wherein, when the intra prediction mode is the intra prediction mode having a 45° directionality with respect to the vertical mode (mode number 6) or one of a predetermined number of intra prediction modes having a directionality adjacent to that mode, the reference pixel generating unit generates the prediction block using the upper reference pixels and the left reference pixels.
Priority Applications (44)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SI201130847A SI2600613T1 (sl) | 2010-07-31 | 2011-07-29 | Intrapredikcijska dekodirna naprava |
ES11814797.4T ES2575381T3 (es) | 2010-07-31 | 2011-07-29 | Dispositivo de decodificación de intra-predicción |
EP16161943.2A EP3059961B1 (en) | 2010-07-31 | 2011-07-29 | Apparatus for encoding an image |
EP16161928.3A EP3059954B1 (en) | 2010-07-31 | 2011-07-29 | Apparatus for decoding an image |
EP16160964.9A EP3051815B1 (en) | 2010-07-31 | 2011-07-29 | Apparatus for decoding an image |
PL16161933T PL3059957T3 (pl) | 2010-07-31 | 2011-07-29 | Urządzenie do kodowania obrazu |
MX2015001380A MX336416B (es) | 2010-07-31 | 2011-07-29 | Aparato para decodificar una imagen en movimiento. |
PL16161927T PL3059953T3 (pl) | 2010-07-31 | 2011-07-29 | Urządzenie do kodowania obrazu |
PL16161937T PL3059959T3 (pl) | 2010-07-31 | 2011-07-29 | Urządzenie do kodowania obrazu |
EP16161936.6A EP3059958B1 (en) | 2010-07-31 | 2011-07-29 | Apparatus for decoding an image |
MX2015001382A MX336418B (es) | 2010-07-31 | 2011-07-29 | Aparato para decodificar una imagen en movimiento. |
EP16161933.3A EP3059957B1 (en) | 2010-07-31 | 2011-07-29 | Apparatus for encoding an image |
DK11814797.4T DK2600613T3 (en) | 2010-07-31 | 2011-07-29 | Device for intraprædiktionsdekodning |
EP16161929.1A EP3059955B1 (en) | 2010-07-31 | 2011-07-29 | Apparatus for decoding an image |
MX2015001383A MX336419B (es) | 2010-07-31 | 2011-07-29 | Aparato para decodificar una imagen en movimiento. |
JP2013523088A JP5997152B2 (ja) | 2010-07-31 | 2011-07-29 | イントラ予測復号化装置 |
MX2015001381A MX336417B (es) | 2010-07-31 | 2011-07-29 | Aparato para decodificar una imagen en movimiento. |
PL16161943T PL3059961T3 (pl) | 2010-07-31 | 2011-07-29 | Urządzenie do kodowania obrazu |
MX2013001232A MX2013001232A (es) | 2010-07-31 | 2011-07-29 | Dispositivo de decodificacion de intraprediccion. |
PL16161928T PL3059954T3 (pl) | 2010-07-31 | 2011-07-29 | Urządzenie do dekodowania obrazu |
EP11814797.4A EP2600613B1 (en) | 2010-07-31 | 2011-07-29 | Intra-prediction decoding device |
PL16161929T PL3059955T3 (pl) | 2010-07-31 | 2011-07-29 | Urządzenie do dekodowania obrazu |
CN201180042188.0A CN103081474B (zh) | 2010-07-31 | 2011-07-29 | 用于对运动图片进行解码的装置 |
RS20160460A RS55052B1 (sr) | 2010-07-31 | 2011-07-29 | Intraprediktivni dekodirajući uređaj |
PL16160964T PL3051815T3 (pl) | 2010-07-31 | 2011-07-29 | Urządzenie do dekodowania obrazu |
EP16161927.5A EP3059953B1 (en) | 2010-07-31 | 2011-07-29 | Apparatus for encoding an image |
EP16161937.4A EP3059959B1 (en) | 2010-07-31 | 2011-07-29 | Apparatus for encoding an image |
PL16161936T PL3059958T3 (pl) | 2010-07-31 | 2011-07-29 | Urządzenie do dekodowania obrazu |
US13/624,852 US9307246B2 (en) | 2010-07-31 | 2012-09-21 | Apparatus for decoding moving picture |
US14/929,685 US9491468B2 (en) | 2010-07-31 | 2015-11-02 | Apparatus for encoding an image |
US14/929,516 US9609325B2 (en) | 2010-07-31 | 2015-11-02 | Apparatus for decoding an image |
US14/929,567 US9451263B2 (en) | 2010-07-31 | 2015-11-02 | Intra prediction apparatus |
US14/929,534 US9615094B2 (en) | 2010-07-31 | 2015-11-02 | Apparatus for decoding an image |
US14/929,668 US9451264B2 (en) | 2010-07-31 | 2015-11-02 | Apparatus for encoding an image |
US14/929,589 US9467702B2 (en) | 2010-07-31 | 2015-11-02 | Apparatus for encoding an image |
US14/929,602 US9445099B2 (en) | 2010-07-31 | 2015-11-02 | Apparatus for encoding an image |
US14/929,544 US9438919B2 (en) | 2010-07-31 | 2015-11-02 | Intra prediction apparatus |
US14/929,643 US9462281B2 (en) | 2010-07-31 | 2015-11-02 | Apparatus for encoding an image |
SM201600168T SMT201600168B (it) | 2010-07-31 | 2016-06-10 | Dispositivo di decodifica di intra-predizione |
HRP20160911TT HRP20160911T1 (hr) | 2010-07-31 | 2016-07-20 | Uređaj za intrapredikcijsko dekodiranje |
CY20161100743T CY1117856T1 (el) | 2010-07-31 | 2016-07-27 | Συσκευη αποκωδικοποιησης ενδο-προβλεψης |
CY20191100254T CY1121337T1 (el) | 2010-07-31 | 2019-02-28 | Συσκευη για την αποκωδικοποιηση μιας εικονας |
CY20191100498T CY1122995T1 (el) | 2010-07-31 | 2019-05-09 | Συσκευη για την κωδικοποιηση μιας εικονας |
US17/343,127 USRE49565E1 (en) | 2010-07-31 | 2021-06-09 | Apparatus for encoding an image |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2010-0074460 | 2010-07-31 | ||
KR20100074460 | 2010-07-31 | ||
KR1020110063288A KR20120012385A (ko) | 2010-07-31 | 2011-06-28 | 인트라 예측 부호화 장치 |
KR10-2011-0063288 | 2011-06-28 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/624,852 Continuation US9307246B2 (en) | 2010-07-31 | 2012-09-21 | Apparatus for decoding moving picture |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2012018197A2 true WO2012018197A2 (ko) | 2012-02-09 |
WO2012018197A3 WO2012018197A3 (ko) | 2012-04-12 |
Family
ID=45559905
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2011/005590 WO2012018197A2 (ko) | 2010-07-31 | 2011-07-29 | 인트라 예측 복호화 장치 |
Country Status (19)
Country | Link |
---|---|
US (11) | US9307246B2 (ko) |
EP (11) | EP3059961B1 (ko) |
JP (10) | JP5997152B2 (ko) |
KR (13) | KR20120012385A (ko) |
CN (13) | CN106067978B (ko) |
CY (5) | CY1117856T1 (ko) |
DK (5) | DK2600613T3 (ko) |
ES (11) | ES2691195T3 (ko) |
HR (5) | HRP20160911T1 (ko) |
HU (11) | HUE044403T2 (ko) |
LT (4) | LT3051815T (ko) |
MX (5) | MX336417B (ko) |
PL (11) | PL3059960T3 (ko) |
PT (5) | PT3059953T (ko) |
RS (5) | RS58564B1 (ko) |
SI (5) | SI2600613T1 (ko) |
SM (1) | SMT201600168B (ko) |
TR (7) | TR201809830T4 (ko) |
WO (1) | WO2012018197A2 (ko) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016123118A (ja) * | 2011-04-01 | 2016-07-07 | アイベックス・ピイティ・ホールディングス・カンパニー・リミテッド | イントラ予測モードにおける映像復号化方法 |
US10212425B2 (en) | 2012-06-08 | 2019-02-19 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
EP3429203A3 (en) * | 2012-10-01 | 2019-04-17 | GE Video Compression, LLC | Scalable video coding using subblock-based coding of transform coefficient blocks in the enhancement layer |
US10542290B2 (en) | 2012-06-27 | 2020-01-21 | Sun Patent Trust | Image decoding method and image decoding apparatus for sample adaptive offset information |
US10645415B2 (en) | 2011-04-25 | 2020-05-05 | Lg Electronics Inc. | Intra-prediction method, and encoder and decoder using same |
EP3843393A1 (en) * | 2011-11-04 | 2021-06-30 | Innotive Ltd | Method of deriving an intra prediction mode |
JP2022000950A (ja) * | 2011-11-04 | 2022-01-04 | イノティヴ リミテッド | 映像符号化方法、映像復号化方法、コンピュータ可読記憶媒体 |
JP7585533B2 (ja) | 2011-11-04 | 2024-11-18 | ゲンスクエア エルエルシー | 画像符号化装置 |
Families Citing this family (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110113561A (ko) * | 2010-04-09 | 2011-10-17 | 한국전자통신연구원 | 적응적인 필터를 이용한 인트라 예측 부호화/복호화 방법 및 그 장치 |
SG10201503180SA (en) * | 2010-04-23 | 2015-06-29 | M&K Holdings Inc | Apparatus For Encoding A Moving Picture |
EP2624563A4 (en) | 2010-09-30 | 2016-03-16 | Mitsubishi Electric Corp | MOTION VIDEO-CODING DEVICE, MOTION VIDEO-CODING DEVICE, MOTION VIDEO-CODING METHOD AND MOTION VIDEO-CODING METHOD |
US9008175B2 (en) | 2010-10-01 | 2015-04-14 | Qualcomm Incorporated | Intra smoothing filter for video coding |
KR20120140181A (ko) | 2011-06-20 | 2012-12-28 | 한국전자통신연구원 | 화면내 예측 블록 경계 필터링을 이용한 부호화/복호화 방법 및 그 장치 |
CN107257480B (zh) | 2011-08-29 | 2020-05-29 | 苗太平洋控股有限公司 | 以amvp模式对图像编码的方法 |
JP2013093792A (ja) * | 2011-10-27 | 2013-05-16 | Sony Corp | 画像処理装置および方法 |
WO2015001700A1 (ja) * | 2013-07-01 | 2015-01-08 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 画像符号化方法、及び、画像符号化装置 |
JP6253406B2 (ja) * | 2013-12-27 | 2017-12-27 | キヤノン株式会社 | 画像符号化装置、撮像装置、画像符号化方法、及びプログラム |
WO2017030418A1 (ko) * | 2015-08-19 | 2017-02-23 | 엘지전자(주) | 다중 그래프 기반 모델에 따라 최적화된 변환을 이용하여 비디오 신호를 인코딩/ 디코딩하는 방법 및 장치 |
EP3343926A4 (en) * | 2015-08-28 | 2019-01-30 | KT Corporation | METHOD AND DEVICE FOR PROCESSING A VIDEO SIGNAL |
EP3306930A4 (en) * | 2015-09-10 | 2018-05-02 | Samsung Electronics Co., Ltd. | Encoding device, decoding device, and encoding and decoding method thereof |
ES2844525B1 (es) * | 2015-09-11 | 2022-07-05 | Kt Corp | Metodo para decodificar un video |
CN106550238B (zh) * | 2015-09-16 | 2019-09-13 | 福州瑞芯微电子股份有限公司 | 一种图像处理方法及系统 |
KR102345475B1 (ko) * | 2016-01-05 | 2022-01-03 | 한국전자통신연구원 | 잔차 신호에 대한 예측 방법 및 장치 |
US10785499B2 (en) * | 2016-02-02 | 2020-09-22 | Lg Electronics Inc. | Method and apparatus for processing video signal on basis of combination of pixel recursive coding and transform coding |
US20190037217A1 (en) * | 2016-02-16 | 2019-01-31 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus, and decoding method and apparatus therefor |
KR102684942B1 (ko) * | 2016-03-17 | 2024-07-12 | 세종대학교산학협력단 | 인트라 예측 기반의 비디오 신호 처리 방법 및 장치 |
KR102346713B1 (ko) | 2016-04-12 | 2022-01-03 | 세종대학교산학협력단 | 인트라 예측 기반의 비디오 신호 처리 방법 및 장치 |
KR102601732B1 (ko) | 2016-05-31 | 2023-11-14 | 삼성디스플레이 주식회사 | 영상 부호화 방법 및 영상 복호화 방법 |
ES2908214T3 (es) | 2016-06-24 | 2022-04-28 | Kt Corp | Filtración adaptativa de muestras de referencia para intra predicción usando líneas de píxeles distantes |
US11405620B2 (en) * | 2016-08-01 | 2022-08-02 | Electronics And Telecommunications Research Institute | Image encoding/decoding method and apparatus with sub-block intra prediction |
CN114286091B (zh) * | 2016-09-05 | 2024-06-04 | 罗斯德尔动力有限责任公司 | 图像编码和解码方法、比特流存储介质及数据传输方法 |
KR20180029905A (ko) * | 2016-09-13 | 2018-03-21 | 한국전자통신연구원 | 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체 |
US10652575B2 (en) * | 2016-09-15 | 2020-05-12 | Qualcomm Incorporated | Linear model chroma intra prediction for video coding |
US20180091812A1 (en) * | 2016-09-23 | 2018-03-29 | Apple Inc. | Video compression system providing selection of deblocking filters parameters based on bit-depth of video data |
JP6953523B2 (ja) | 2016-10-14 | 2021-10-27 | インダストリー アカデミー コーオペレイション ファウンデーション オブ セジョン ユニバーシティ | 画像の符号化/復号化方法及び装置 |
US10694202B2 (en) * | 2016-12-01 | 2020-06-23 | Qualcomm Incorporated | Indication of bilateral filter usage in video coding |
KR102371266B1 (ko) | 2016-12-23 | 2022-03-07 | 후아웨이 테크놀러지 컴퍼니 리미티드 | 미리 결정된 방향성 인트라 예측 모드들의 세트로부터 방향성 인트라 예측 모드를 제거하기 위한 인트라 예측 장치 |
WO2018124853A1 (ko) * | 2017-01-02 | 2018-07-05 | 한양대학교 산학협력단 | 참조 화소에 대하여 적응적 필터링을 수행하기 위한 화면 내 예측 방법 및 장치 |
KR102719084B1 (ko) | 2017-01-02 | 2024-10-16 | 한양대학교 산학협력단 | 참조 화소에 대하여 적응적 필터링을 수행하기 위한 화면 내 예측 방법 및 장치 |
US10638126B2 (en) * | 2017-05-05 | 2020-04-28 | Qualcomm Incorporated | Intra reference filter for video coding |
CN117354514A (zh) | 2017-07-28 | 2024-01-05 | 韩国电子通信研究院 | 图像编码方法和图像解码方法以及计算机可读记录介质 |
CN116634154A (zh) * | 2017-09-08 | 2023-08-22 | 株式会社Kt | 视频信号处理方法及装置 |
JP7236800B2 (ja) * | 2017-10-10 | 2023-03-10 | 株式会社デンソー | 車両洗浄システム |
KR102402539B1 (ko) * | 2017-11-28 | 2022-05-27 | 한국전자통신연구원 | 양방향 인트라 예측 방법 및 장치 |
EP3496401A1 (en) * | 2017-12-05 | 2019-06-12 | Thomson Licensing | Method and apparatus for video encoding and decoding based on block shape |
EP3744093A4 (en) * | 2018-01-25 | 2022-01-26 | LG Electronics Inc. | VIDEO DECODER AND RELATED CONTROL METHOD |
US10735757B2 (en) | 2018-01-30 | 2020-08-04 | Lg Electronics Inc. | Video decoder and controlling method thereof |
WO2019203559A1 (ko) * | 2018-04-17 | 2019-10-24 | 엘지전자 주식회사 | 영상 코딩 시스템에서 리그레션 모델 기반 필터링을 사용하는 영상 디코딩 방법 및 장치 |
SG11202009345QA (en) * | 2018-05-10 | 2020-10-29 | Samsung Electronics Co Ltd | Video encoding method and apparatus, and video decoding method and apparatus |
CN112954349B (zh) * | 2018-06-21 | 2024-08-23 | 株式会社Kt | 用于处理视频信号的方法和设备 |
TW202247652A (zh) * | 2018-06-29 | 2022-12-01 | 弗勞恩霍夫爾協會 | 擴充參考圖像內預測技術 |
EP4152748A1 (en) * | 2018-09-02 | 2023-03-22 | LG Electronics, Inc. | Method and apparatus for processing image signal |
EP3888368A4 (en) * | 2018-11-27 | 2022-03-02 | OP Solutions, LLC | ADAPTIVE TIME REFERENCE IMAGE CROSS-REFERENCE FILTER NOT AVAILABLE TO RELATED APPLICATIONS |
US11102513B2 (en) | 2018-12-06 | 2021-08-24 | Tencent America LLC | One-level transform split and adaptive sub-block transform |
US11356699B2 (en) * | 2019-01-11 | 2022-06-07 | Hfi Innovation Inc. | Method and apparatus of sub-block deblocking in video coding |
CN111263156B (zh) * | 2019-02-20 | 2022-03-25 | 北京达佳互联信息技术有限公司 | 视频解码方法、视频编码方法及装置 |
US11190777B2 (en) * | 2019-06-30 | 2021-11-30 | Tencent America LLC | Method and apparatus for video coding |
CN112859327B (zh) * | 2019-11-27 | 2022-10-18 | 成都理想境界科技有限公司 | 一种图像输出控制方法及光纤扫描成像系统 |
WO2023055199A1 (ko) * | 2021-09-30 | 2023-04-06 | 한국전자통신연구원 | 영상 부호화/복호화를 위한 방법, 장치 및 기록 매체 |
Family Cites Families (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE69933858T2 (de) * | 1998-12-10 | 2007-05-24 | Matsushita Electric Industrial Co., Ltd., Kadoma | Arithmetische filtervorrichtung |
DE10158658A1 (de) * | 2001-11-30 | 2003-06-12 | Bosch Gmbh Robert | Verfahren zur gerichteten Prädiktion eines Bildblockes |
US7289562B2 (en) * | 2003-08-01 | 2007-10-30 | Polycom, Inc. | Adaptive filter to improve H-264 video quality |
US7782954B2 (en) * | 2003-09-07 | 2010-08-24 | Microsoft Corporation | Scan patterns for progressive video content |
KR101014660B1 (ko) * | 2003-10-24 | 2011-02-16 | 삼성전자주식회사 | 인트라 예측 방법 및 장치 |
JP2006100871A (ja) * | 2004-09-28 | 2006-04-13 | Sony Corp | 符号化装置、符号化方法、符号化方法のプログラム及び符号化方法のプログラムを記録した記録媒体 |
KR100679035B1 (ko) * | 2005-01-04 | 2007-02-06 | 삼성전자주식회사 | 인트라 bl 모드를 고려한 디블록 필터링 방법, 및 상기방법을 이용하는 다 계층 비디오 인코더/디코더 |
JP4231020B2 (ja) | 2005-03-29 | 2009-02-25 | 日本電信電話株式会社 | イントラ予測モード選択方法,画像符号化装置,画像符号化プログラムおよびそのプログラムを記録したコンピュータ読み取り可能な記録媒体 |
JP4722125B2 (ja) | 2005-04-01 | 2011-07-13 | パナソニック株式会社 | 画像復号化装置及び画像復号化方法 |
KR100750128B1 (ko) * | 2005-09-06 | 2007-08-21 | 삼성전자주식회사 | 영상의 인트라 예측 부호화, 복호화 방법 및 장치 |
KR100727990B1 (ko) * | 2005-10-01 | 2007-06-13 | 삼성전자주식회사 | 영상의 인트라 예측 부호화 방법 및 그 방법을 사용하는부호화 장치 |
EP2733952A1 (en) * | 2005-10-21 | 2014-05-21 | Electronics and Telecommunications Research Institute | Method for encoding moving picture using adaptive scanning |
KR100750145B1 (ko) * | 2005-12-12 | 2007-08-21 | 삼성전자주식회사 | 영상의 인트라 예측 부호화, 복호화 방법 및 장치 |
JP4542064B2 (ja) | 2006-05-11 | 2010-09-08 | 日本電信電話株式会社 | 階層間予測方法,装置,プログラムおよびその記録媒体 |
BRPI0621935A2 (pt) * | 2006-07-28 | 2016-09-13 | Toshiba Kk Toshiba Corp | método e aparelho para codificar e decodificar imagem |
BRPI0717639A2 (pt) * | 2006-10-30 | 2013-11-12 | Nippon Telegraph & Telephone | Método de geração de informações de referência preditas, métodos de codificação de decodificação de vídeo, aparelhos destinados aos mesmos, programas destinados aos mesmos, e mídias de armazenamento que armazenam os programas |
KR100856392B1 (ko) | 2006-11-08 | 2008-09-04 | 한국전자통신연구원 | 현재 영상의 복원영역을 참조하는 동영상 부호화/복호화장치 및 그 방법 |
US8326064B2 (en) * | 2007-01-22 | 2012-12-04 | Nec Corporation | Image re-encoding method to decode image data which is orthogonally transformed per first block and encoded by a first encoding method |
WO2008130367A1 (en) * | 2007-04-19 | 2008-10-30 | Thomson Licensing | Adaptive reference picture data generation for intra prediction |
KR101378338B1 (ko) * | 2007-06-14 | 2014-03-28 | 삼성전자주식회사 | 영상 복구를 이용한 인트라 예측 부호화, 복호화 방법 및장치 |
JPWO2009001793A1 (ja) * | 2007-06-26 | 2010-08-26 | 株式会社東芝 | 画像符号化と画像復号化の方法及び装置 |
JP5437807B2 (ja) | 2007-09-18 | 2014-03-12 | 富士通株式会社 | 動画像符号化装置および動画像復号装置 |
US8204327B2 (en) * | 2007-10-01 | 2012-06-19 | Cisco Technology, Inc. | Context adaptive hybrid variable length coding |
KR101228020B1 (ko) * | 2007-12-05 | 2013-01-30 | 삼성전자주식회사 | 사이드 매칭을 이용한 영상의 부호화 방법 및 장치, 그복호화 방법 및 장치 |
WO2009088340A1 (en) * | 2008-01-08 | 2009-07-16 | Telefonaktiebolaget Lm Ericsson (Publ) | Adaptive filtering |
US20090245351A1 (en) * | 2008-03-28 | 2009-10-01 | Kabushiki Kaisha Toshiba | Moving picture decoding apparatus and moving picture decoding method |
CN100592796C (zh) * | 2008-04-15 | 2010-02-24 | 中国科学院计算技术研究所 | 一种视频编码器及其帧内预测模式选择方法 |
WO2009136743A2 (ko) | 2008-05-07 | 2009-11-12 | Lg전자 | 비디오 신호의 디코딩 방법 및 장치 |
JP2009284298A (ja) * | 2008-05-23 | 2009-12-03 | Hitachi Ltd | 動画像符号化装置、動画像復号化装置、動画像符号化方法及び動画像復号化方法 |
JP2009302776A (ja) * | 2008-06-11 | 2009-12-24 | Canon Inc | 画像符号化装置、その制御方法、及びコンピュータプログラム |
US9055146B2 (en) * | 2008-06-17 | 2015-06-09 | International Business Machines Corporation | Social network based call management |
US9479786B2 (en) * | 2008-09-26 | 2016-10-25 | Dolby Laboratories Licensing Corporation | Complexity allocation for video and image coding applications |
KR101590500B1 (ko) | 2008-10-23 | 2016-02-01 | 에스케이텔레콤 주식회사 | 동영상 부호화/복호화 장치, 이를 위한 인트라 예측 방향에기반한 디블록킹 필터링 장치 및 필터링 방법, 및 기록 매체 |
WO2010063881A1 (en) * | 2008-12-03 | 2010-06-10 | Nokia Corporation | Flexible interpolation filter structures for video coding |
KR101538704B1 (ko) * | 2009-01-28 | 2015-07-28 | 삼성전자주식회사 | 보간 필터를 적응적으로 사용하여 영상을 부호화 및 복호화하는 방법 및 장치 |
CN101674475B (zh) * | 2009-05-12 | 2011-06-22 | 北京合讯数通科技有限公司 | 一种h.264/svc的自适应层间纹理预测方法 |
KR101062111B1 (ko) | 2009-07-06 | 2011-09-02 | 안양과학대학 산학협력단 | 가로등용 엘이디 등기구 |
CN101715135B (zh) * | 2009-09-30 | 2013-01-09 | 武汉大学 | 基于匹配模板的自适应帧内预测滤波编码方法 |
KR101464538B1 (ko) | 2009-10-01 | 2014-11-25 | 삼성전자주식회사 | 영상의 부호화 방법 및 장치, 그 복호화 방법 및 장치 |
CN101854549B (zh) * | 2010-05-28 | 2012-05-02 | 浙江大学 | 基于空域预测的视频和图像编解码方法和装置 |
KR20110054244A (ko) | 2009-11-17 | 2011-05-25 | 삼성전자주식회사 | 미디언 필터를 이용한 깊이영상 부호화의 인트라 예측 장치 및 방법 |
KR20110113561A (ko) * | 2010-04-09 | 2011-10-17 | 한국전자통신연구원 | 적응적인 필터를 이용한 인트라 예측 부호화/복호화 방법 및 그 장치 |
KR101547041B1 (ko) * | 2011-01-12 | 2015-08-24 | 미쓰비시덴키 가부시키가이샤 | 화상 부호화 장치, 화상 복호 장치, 화상 부호화 방법 및 화상 복호 방법 |
-
2011
- 2011-06-28 KR KR1020110063288A patent/KR20120012385A/ko unknown
- 2011-07-25 KR KR1020110073415A patent/KR101339903B1/ko active IP Right Review Request
- 2011-07-29 ES ES16161943.2T patent/ES2691195T3/es active Active
- 2011-07-29 MX MX2015001381A patent/MX336417B/es unknown
- 2011-07-29 CN CN201610630184.9A patent/CN106067978B/zh not_active Expired - Fee Related
- 2011-07-29 RS RS20190435A patent/RS58564B1/sr unknown
- 2011-07-29 EP EP16161943.2A patent/EP3059961B1/en active Active
- 2011-07-29 TR TR2018/09830T patent/TR201809830T4/tr unknown
- 2011-07-29 HU HUE16160964 patent/HUE044403T2/hu unknown
- 2011-07-29 CN CN201610630597.7A patent/CN106231314B/zh not_active Expired - Fee Related
- 2011-07-29 ES ES16160964T patent/ES2718937T3/es active Active
- 2011-07-29 TR TR2018/09836T patent/TR201809836T4/tr unknown
- 2011-07-29 TR TR2018/16492T patent/TR201816492T4/tr unknown
- 2011-07-29 HU HUE16161928A patent/HUE041736T2/hu unknown
- 2011-07-29 PL PL16161940T patent/PL3059960T3/pl unknown
- 2011-07-29 EP EP16160964.9A patent/EP3051815B1/en active Active
- 2011-07-29 CN CN201610630521.4A patent/CN106231313B/zh not_active Expired - Fee Related
- 2011-07-29 RS RS20180742A patent/RS57368B1/sr unknown
- 2011-07-29 PT PT16161927T patent/PT3059953T/pt unknown
- 2011-07-29 CN CN201180042188.0A patent/CN103081474B/zh active Active
- 2011-07-29 ES ES16161929T patent/ES2696525T3/es active Active
- 2011-07-29 HU HUE16161929A patent/HUE041737T2/hu unknown
- 2011-07-29 HU HUE16161937A patent/HUE041267T2/hu unknown
- 2011-07-29 ES ES11814797.4T patent/ES2575381T3/es active Active
- 2011-07-29 TR TR2018/15017T patent/TR201815017T4/tr unknown
- 2011-07-29 HU HUE16161936A patent/HUE041476T2/hu unknown
- 2011-07-29 HU HUE16161943A patent/HUE041266T2/hu unknown
- 2011-07-29 CN CN201610630190.4A patent/CN106210723B/zh not_active Expired - Fee Related
- 2011-07-29 ES ES16161927T patent/ES2720581T3/es active Active
- 2011-07-29 PL PL16161929T patent/PL3059955T3/pl unknown
- 2011-07-29 SI SI201130847A patent/SI2600613T1/sl unknown
- 2011-07-29 CN CN201610630496.XA patent/CN106303532B/zh not_active Expired - Fee Related
- 2011-07-29 SI SI201131505T patent/SI3059956T1/en unknown
- 2011-07-29 EP EP16161933.3A patent/EP3059957B1/en active Active
- 2011-07-29 ES ES16161933.3T patent/ES2694041T3/es active Active
- 2011-07-29 TR TR2018/16486T patent/TR201816486T4/tr unknown
- 2011-07-29 LT LTEP16160964.9T patent/LT3051815T/lt unknown
- 2011-07-29 ES ES16161940.8T patent/ES2685600T3/es active Active
- 2011-07-29 LT LTEP16161940.8T patent/LT3059960T/lt unknown
- 2011-07-29 PL PL16161943T patent/PL3059961T3/pl unknown
- 2011-07-29 RS RS20160460A patent/RS55052B1/sr unknown
- 2011-07-29 PL PL16161933T patent/PL3059957T3/pl unknown
- 2011-07-29 PL PL11814797.4T patent/PL2600613T3/pl unknown
- 2011-07-29 CN CN201510038304.1A patent/CN104602009B/zh not_active Expired - Fee Related
- 2011-07-29 ES ES16161928T patent/ES2700623T3/es active Active
- 2011-07-29 PT PT16161930T patent/PT3059956T/pt unknown
- 2011-07-29 PT PT118147974T patent/PT2600613E/pt unknown
- 2011-07-29 CN CN201610630484.7A patent/CN106067980B/zh active Active
- 2011-07-29 MX MX2015001380A patent/MX336416B/es unknown
- 2011-07-29 ES ES16161937.4T patent/ES2692842T3/es active Active
- 2011-07-29 CN CN201610630511.0A patent/CN106231312B/zh not_active Expired - Fee Related
- 2011-07-29 CN CN201510055880.7A patent/CN104602011B/zh not_active Expired - Fee Related
- 2011-07-29 ES ES16161936T patent/ES2700624T3/es active Active
- 2011-07-29 HU HUE11814797A patent/HUE030354T2/en unknown
- 2011-07-29 LT LTEP16161930.9T patent/LT3059956T/lt unknown
- 2011-07-29 RS RS20180741A patent/RS57381B1/sr unknown
- 2011-07-29 CN CN201610630183.4A patent/CN106067977B/zh active Active
- 2011-07-29 LT LTEP16161927.5T patent/LT3059953T/lt unknown
- 2011-07-29 MX MX2013001232A patent/MX2013001232A/es active IP Right Grant
- 2011-07-29 CN CN201610630500.2A patent/CN106067981B/zh active Active
- 2011-07-29 EP EP16161936.6A patent/EP3059958B1/en active Active
- 2011-07-29 SI SI201131506T patent/SI3059960T1/en unknown
- 2011-07-29 MX MX2015001382A patent/MX336418B/es unknown
- 2011-07-29 CN CN201610630188.7A patent/CN106067979B/zh active Active
- 2011-07-29 PT PT16160964T patent/PT3051815T/pt unknown
- 2011-07-29 WO PCT/KR2011/005590 patent/WO2012018197A2/ko active Application Filing
- 2011-07-29 PL PL16161927T patent/PL3059953T3/pl unknown
- 2011-07-29 PL PL16160964T patent/PL3051815T3/pl unknown
- 2011-07-29 EP EP11814797.4A patent/EP2600613B1/en active Active
- 2011-07-29 PL PL16161928T patent/PL3059954T3/pl unknown
- 2011-07-29 EP EP16161927.5A patent/EP3059953B1/en active Active
- 2011-07-29 PL PL16161937T patent/PL3059959T3/pl unknown
- 2011-07-29 DK DK11814797.4T patent/DK2600613T3/en active
- 2011-07-29 JP JP2013523088A patent/JP5997152B2/ja not_active Expired - Fee Related
- 2011-07-29 PT PT16161940T patent/PT3059960T/pt unknown
- 2011-07-29 DK DK16161940.8T patent/DK3059960T3/en active
- 2011-07-29 MX MX2015001383A patent/MX336419B/es unknown
- 2011-07-29 SI SI201131687T patent/SI3059953T1/sl unknown
- 2011-07-29 EP EP16161929.1A patent/EP3059955B1/en active Active
- 2011-07-29 EP EP16161937.4A patent/EP3059959B1/en active Active
- 2011-07-29 EP EP16161928.3A patent/EP3059954B1/en active Active
- 2011-07-29 SI SI201131682T patent/SI3051815T1/sl unknown
- 2011-07-29 TR TR2018/16011T patent/TR201816011T4/tr unknown
- 2011-07-29 DK DK16160964.9T patent/DK3051815T3/en active
- 2011-07-29 RS RS20190246A patent/RS58388B1/sr unknown
- 2011-07-29 TR TR2018/15529T patent/TR201815529T4/tr unknown
- 2011-07-29 ES ES16161930.9T patent/ES2685577T3/es active Active
- 2011-07-29 PL PL16161936T patent/PL3059958T3/pl unknown
- 2011-07-29 EP EP16161930.9A patent/EP3059956B1/en active Active
- 2011-07-29 HU HUE16161933A patent/HUE041269T2/hu unknown
- 2011-07-29 HU HUE16161927 patent/HUE044305T2/hu unknown
- 2011-07-29 PL PL16161930T patent/PL3059956T3/pl unknown
- 2011-07-29 HU HUE16161930A patent/HUE039985T2/hu unknown
- 2011-07-29 DK DK16161927.5T patent/DK3059953T3/en active
- 2011-07-29 DK DK16161930.9T patent/DK3059956T3/en active
- 2011-07-29 EP EP16161940.8A patent/EP3059960B1/en active Active
- 2011-07-29 HU HUE16161940A patent/HUE040206T2/hu unknown
-
2012
- 2012-09-21 US US13/624,852 patent/US9307246B2/en active Active
-
2013
- 2013-02-04 KR KR1020130012404A patent/KR102322188B1/ko active IP Right Grant
- 2013-08-02 KR KR1020130092202A patent/KR20130098255A/ko active Search and Examination
-
2014
- 2014-04-17 KR KR1020140045822A patent/KR102164752B1/ko active IP Right Grant
- 2014-04-17 KR KR1020140045825A patent/KR20140057511A/ko not_active Application Discontinuation
- 2014-04-17 KR KR1020140045824A patent/KR20140057510A/ko active Search and Examination
-
2015
- 2015-11-02 US US14/929,589 patent/US9467702B2/en active Active
- 2015-11-02 US US14/929,567 patent/US9451263B2/en active Active
- 2015-11-02 US US14/929,534 patent/US9615094B2/en not_active Ceased
- 2015-11-02 US US14/929,685 patent/US9491468B2/en active Active
- 2015-11-02 US US14/929,643 patent/US9462281B2/en active Active
- 2015-11-02 US US14/929,602 patent/US9445099B2/en active Active
- 2015-11-02 US US14/929,516 patent/US9609325B2/en active Active
- 2015-11-02 US US14/929,668 patent/US9451264B2/en active Active
- 2015-11-02 US US14/929,544 patent/US9438919B2/en active Active
-
2016
- 2016-06-10 SM SM201600168T patent/SMT201600168B/it unknown
- 2016-07-20 HR HRP20160911TT patent/HRP20160911T1/hr unknown
- 2016-07-27 CY CY20161100743T patent/CY1117856T1/el unknown
- 2016-08-25 JP JP2016164927A patent/JP6163595B2/ja not_active Expired - Fee Related
- 2016-08-25 JP JP2016164930A patent/JP6163598B2/ja active Active
- 2016-08-25 JP JP2016164924A patent/JP6158997B2/ja active Active
- 2016-08-25 JP JP2016164923A patent/JP6158996B2/ja not_active Expired - Fee Related
- 2016-08-25 JP JP2016164929A patent/JP6163597B2/ja active Active
- 2016-08-25 JP JP2016164925A patent/JP6158998B2/ja active Active
- 2016-08-25 JP JP2016164926A patent/JP6152461B2/ja not_active Expired - Fee Related
- 2016-08-25 JP JP2016164928A patent/JP6163596B2/ja not_active Expired - Fee Related
- 2016-08-25 JP JP2016164922A patent/JP6158995B2/ja not_active Expired - Fee Related
-
2018
- 2018-07-30 CY CY20181100784T patent/CY1120472T1/el unknown
- 2018-07-30 CY CY20181100785T patent/CY1120479T1/el unknown
- 2018-09-13 HR HRP20181471TT patent/HRP20181471T1/hr unknown
- 2018-09-13 HR HRP20181470TT patent/HRP20181470T1/hr unknown
-
2019
- 2019-02-28 CY CY20191100254T patent/CY1121337T1/el unknown
- 2019-03-20 HR HRP20190543TT patent/HRP20190543T1/hr unknown
- 2019-04-25 HR HRP20190773TT patent/HRP20190773T1/hr unknown
- 2019-05-09 CY CY20191100498T patent/CY1122995T1/el unknown
-
2020
- 2020-04-27 KR KR1020200050920A patent/KR20200050451A/ko not_active Application Discontinuation
-
2021
- 2021-06-09 US US17/343,127 patent/USRE49565E1/en active Active
- 2021-07-23 KR KR1020210097241A patent/KR20210096029A/ko not_active Application Discontinuation
- 2021-10-22 KR KR1020210141998A patent/KR20210131950A/ko not_active Application Discontinuation
- 2021-10-22 KR KR1020210141997A patent/KR20210131949A/ko not_active Application Discontinuation
- 2021-10-22 KR KR1020210141999A patent/KR20210131951A/ko not_active Application Discontinuation
- 2021-10-22 KR KR1020210142000A patent/KR20210131952A/ko not_active Application Discontinuation
Non-Patent Citations (1)
Title |
---|
None |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017085592A (ja) * | 2011-04-01 | 2017-05-18 | アイベックス・ピイティ・ホールディングス・カンパニー・リミテッド | イントラ予測モードにおける映像復号化方法 |
JP2018057030A (ja) * | 2011-04-01 | 2018-04-05 | アイベックス・ピイティ・ホールディングス・カンパニー・リミテッド | イントラ予測モードにおける映像復号化方法 |
JP2018057029A (ja) * | 2011-04-01 | 2018-04-05 | アイベックス・ピイティ・ホールディングス・カンパニー・リミテッド | イントラ予測モードにおける映像復号化方法 |
JP2019033516A (ja) * | 2011-04-01 | 2019-02-28 | アイベックス・ピイティ・ホールディングス・カンパニー・リミテッド | イントラ予測モードにおける映像復号化方法 |
JP2016123118A (ja) * | 2011-04-01 | 2016-07-07 | アイベックス・ピイティ・ホールディングス・カンパニー・リミテッド | イントラ予測モードにおける映像復号化方法 |
US10645415B2 (en) | 2011-04-25 | 2020-05-05 | Lg Electronics Inc. | Intra-prediction method, and encoder and decoder using same |
US11910010B2 (en) | 2011-04-25 | 2024-02-20 | Lg Electronics Inc. | Intra-prediction method, and encoder and decoder using same |
US11006146B2 (en) | 2011-04-25 | 2021-05-11 | Lg Electronics Inc. | Intra-prediction method, and encoder and decoder using same |
JP2023029552A (ja) * | 2011-11-04 | 2023-03-03 | ゲンスクエア エルエルシー | 残差ブロック生成方法 |
EP3843393A1 (en) * | 2011-11-04 | 2021-06-30 | Innotive Ltd | Method of deriving an intra prediction mode |
JP7585533B2 (ja) | 2011-11-04 | 2024-11-18 | ゲンスクエア エルエルシー | 画像符号化装置 |
JP7500786B2 (ja) | 2011-11-04 | 2024-06-17 | ゲンスクエア エルエルシー | 残差ブロック生成方法 |
JP7445793B2 (ja) | 2011-11-04 | 2024-03-07 | ゲンスクエア エルエルシー | 画像符号化方法 |
JP2023036986A (ja) * | 2011-11-04 | 2023-03-14 | ゲンスクエア エルエルシー | 画像符号化方法 |
JP7210664B2 (ja) | 2011-11-04 | 2023-01-23 | ゲンスクエア エルエルシー | 映像符号化方法、映像復号化方法 |
JP2022000950A (ja) * | 2011-11-04 | 2022-01-04 | イノティヴ リミテッド | 映像符号化方法、映像復号化方法、コンピュータ可読記憶媒体 |
US11375195B2 (en) | 2012-06-08 | 2022-06-28 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
US10812800B2 (en) | 2012-06-08 | 2020-10-20 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
US10212425B2 (en) | 2012-06-08 | 2019-02-19 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
US11849116B2 (en) | 2012-06-08 | 2023-12-19 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
US10542290B2 (en) | 2012-06-27 | 2020-01-21 | Sun Patent Trust | Image decoding method and image decoding apparatus for sample adaptive offset information |
US11477467B2 (en) | 2012-10-01 | 2022-10-18 | Ge Video Compression, Llc | Scalable video coding using derivation of subblock subdivision for prediction from base layer |
US11575921B2 (en) | 2012-10-01 | 2023-02-07 | Ge Video Compression, Llc | Scalable video coding using inter-layer prediction of spatial intra prediction parameters |
US11589062B2 (en) | 2012-10-01 | 2023-02-21 | Ge Video Compression, Llc | Scalable video coding using subblock-based coding of transform coefficient blocks in the enhancement layer |
EP3429203A3 (en) * | 2012-10-01 | 2019-04-17 | GE Video Compression, LLC | Scalable video coding using subblock-based coding of transform coefficient blocks in the enhancement layer |
US10694183B2 (en) | 2012-10-01 | 2020-06-23 | Ge Video Compression, Llc | Scalable video coding using derivation of subblock subdivision for prediction from base layer |
US11134255B2 (en) | 2012-10-01 | 2021-09-28 | Ge Video Compression, Llc | Scalable video coding using inter-layer prediction contribution to enhancement layer prediction |
US10477210B2 (en) | 2012-10-01 | 2019-11-12 | Ge Video Compression, Llc | Scalable video coding using inter-layer prediction contribution to enhancement layer prediction |
US10694182B2 (en) | 2012-10-01 | 2020-06-23 | Ge Video Compression, Llc | Scalable video coding using base-layer hints for enhancement layer motion parameters |
US12010334B2 (en) | 2012-10-01 | 2024-06-11 | Ge Video Compression, Llc | Scalable video coding using base-layer hints for enhancement layer motion parameters |
US10687059B2 (en) | 2012-10-01 | 2020-06-16 | Ge Video Compression, Llc | Scalable video coding using subblock-based coding of transform coefficient blocks in the enhancement layer |
US10681348B2 (en) | 2012-10-01 | 2020-06-09 | Ge Video Compression, Llc | Scalable video coding using inter-layer prediction of spatial intra prediction parameters |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2012018197A2 (ko) | 인트라 예측 복호화 장치 | |
WO2012018198A2 (ko) | 예측 블록 생성 장치 | |
WO2018174402A1 (ko) | 영상 코딩 시스템에서 변환 방법 및 그 장치 | |
WO2012023762A2 (ko) | 인트라 예측 복호화 방법 | |
WO2013062197A1 (ko) | 영상 복호화 장치 | |
TWI608726B (zh) | 影像編碼設備 | |
WO2011099792A2 (ko) | 비디오 신호의 처리 방법 및 장치 | |
WO2017069419A1 (ko) | 비디오 코딩 시스템에서 인트라 예측 방법 및 장치 | |
WO2017052000A1 (ko) | 영상 코딩 시스템에서 움직임 벡터 정제 기반 인터 예측 방법 및 장치 | |
WO2012002785A2 (ko) | 화면내 예측 부호화를 위한 영상 부호화/복호화 장치 및 방법 | |
WO2013062192A1 (ko) | 인트라 예측 정보 부호화 방법 및 장치 | |
WO2012081879A1 (ko) | 인터 예측 부호화된 동영상 복호화 방법 | |
WO2011145819A2 (ko) | 영상 부호화/복호화 장치 및 방법 | |
WO2013025065A2 (ko) | 정밀한 단위의 필터 선택을 적용한 영상 부호화/복호화 장치 및 방법 | |
WO2012144876A2 (ko) | 인루프 필터링을 적용한 예측 방법을 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2013062198A1 (ko) | 영상 복호화 장치 | |
WO2013062195A1 (ko) | 인트라 예측 모드 복호화 방법 및 장치 | |
WO2011126285A2 (ko) | 부호화 모드에 대한 정보를 부호화, 복호화하는 방법 및 장치 | |
WO2020185004A1 (ko) | 예측 유닛을 서브 유닛들로 분할하여 예측하는 인트라 예측 방법 및 장치 | |
WO2019112071A1 (ko) | 영상 코딩 시스템에서 크로마 성분의 효율적 변환에 기반한 영상 디코딩 방법 및 장치 | |
WO2013157820A1 (ko) | 고속 에지 검출을 이용하는 비디오 부호화 방법 및 장치, 그 비디오 복호화 방법 및 장치 | |
WO2013062194A1 (ko) | 복원 블록을 생성하는 방법 및 장치 | |
WO2022114742A1 (ko) | 비디오 부호화 및 복호화를 위한 장치 및 방법 | |
KR20180111378A (ko) | 병렬 처리를 위한 움직임 정보를 처리하는 영상 처리 방법, 그를 이용한 영상 복호화, 부호화 방법 및 그 장치 | |
KR20130098254A (ko) | 예측 블록 생성 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180042188.0 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11814797 Country of ref document: EP Kind code of ref document: A2 |
|
ENP | Entry into the national phase |
Ref document number: 2013523088 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2013/001232 Country of ref document: MX |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011814797 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: P-2016/0460 Country of ref document: RS |