This application claims priority to Korean Patent Application No. 10-2010-0064009, filed with the Korean Intellectual Property Office (KIPO) on July 2, 2010, the entire disclosure of which is incorporated herein by reference.
Detailed description of the invention
The present invention may be modified in various ways, and the present invention may have various embodiments. Specific embodiments are described in detail with reference to the accompanying drawings.
However, the present invention is not limited to the specific embodiments, and it should be understood that the present invention includes all modifications, equivalents, and substitutions falling within the spirit and scope of the invention.
The terms "first" and "second" may be used to describe various components, but the components are not limited thereto. These terms are used only to distinguish one component from another. For example, a first component may also be named a second component, and similarly, a second component may be named a first component. The term "and/or" includes any combination of a plurality of associated listed items or any one of the plurality of associated listed items.
When a component is "connected" or "coupled" to another component, it may be directly connected or coupled to the other component, or intervening components may be present. In contrast, when a component is "directly connected" or "directly coupled" to another component, no intervening component is present.
The terms used herein are provided to describe the embodiments and are not intended to limit the present invention. A singular term includes the plural unless clearly stated otherwise. As used herein, the terms "include" or "have" are used to indicate the presence of features, numbers, steps, operations, components, parts, or combinations thereof, and do not exclude the presence or the possibility of adding one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Unless defined otherwise, all terms used herein, including technical or scientific terms, have the same meanings as those commonly understood by a person of ordinary skill in the art. Terms such as those defined in commonly used dictionaries should be interpreted as having meanings consistent with their meanings in the context of the relevant art, and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Hereinafter, preferred embodiments of the present invention are described in more detail with reference to the accompanying drawings. For convenience of description, the same reference numerals are used throughout the drawings and the description to denote the same components, and repeated descriptions thereof are omitted.
According to example embodiments of the present invention, encoding and decoding including inter/intra prediction, transform, quantization, and entropy coding may be performed using an extended macroblock of size 32x32 pixels or more, which is applicable to high-resolution images having HD (High Definition) resolution or higher, and encoding and decoding may be performed using the recursive coding unit (CU) structure described below.
Fig. 1 is a conceptual diagram illustrating a recursive coding unit structure according to an embodiment of the present invention.
Referring to Fig. 1, each coding unit CU has a square shape and may have a variable size of 2N x 2N (unit: pixels). Inter prediction, intra prediction, transform, quantization, and entropy coding may be performed on a per-coding-unit basis. The coding unit CU may include a largest coding unit (LCU) and a smallest coding unit (SCU). The size of the largest coding unit LCU or the smallest coding unit SCU may be represented by a power of 2 that is 8 or more.
According to an example embodiment, the coding unit CU may have a recursive tree structure. Fig. 1 illustrates an example in which the largest coding unit LCU (or CU0) has a size 2N0 of 128 (N0 = 64) and the maximum level or level depth is 5. The recursive structure may be represented by a series of flags. For example, when the flag value of a coding unit CUk whose level or level depth is k is 0, coding of the coding unit CUk is performed at the current level or level depth. When the flag value is 1, the coding unit CUk is split into four independent coding units CUk+1, each having level or level depth k+1 and size Nk+1 x Nk+1. In this case, the coding unit CUk+1 may be processed recursively until its level or level depth reaches the maximum allowable level or level depth. When the level or level depth of the coding unit CUk+1 is equal to the maximum allowable level or level depth (for example, 4), no further splitting is allowed.
The sizes of the largest coding unit LCU and the smallest coding unit SCU may be included in a sequence parameter set (SPS). The sequence parameter set SPS may include the maximum allowable level or level depth of the largest coding unit LCU. For example, in the example illustrated in Fig. 2, when the maximum allowable level or level depth is 5 and one side of the largest coding unit LCU has a size of 128 pixels, five coding unit sizes, namely 128x128 (LCU), 64x64, 32x32, 16x16, and 8x8 (SCU), are allowed. That is, given the size of the largest coding unit LCU and the maximum allowable level or level depth, the allowable sizes of the coding units may be determined.
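For illustration only, the relationship among the LCU size, the maximum allowable level depth, the per-CU split flags, and the resulting allowable CU sizes can be sketched in Python as follows; the function and parameter names are hypothetical conveniences and are not part of the syntax described above.

```python
def allowed_cu_sizes(lcu_size, max_level_depth, scu_size=8):
    """Enumerate the coding-unit sizes permitted by an LCU size and a maximum level depth."""
    sizes, size = [], lcu_size
    for _ in range(max_level_depth):
        if size < scu_size:
            break
        sizes.append(size)
        size //= 2
    return sizes


def walk_cu_quadtree(size, depth, max_depth, split_flag, on_leaf):
    """Recursively split a CU while its split flag is 1; code a leaf CU when the flag is 0
    or the maximum allowable level depth is reached."""
    if depth == max_depth - 1 or split_flag(size, depth) == 0:
        on_leaf(size, depth)            # prediction, transform, quantization, entropy coding
        return
    for _ in range(4):                  # four CUk+1, each of half the width and height
        walk_cu_quadtree(size // 2, depth + 1, max_depth, split_flag, on_leaf)


# Example: an LCU of 128x128 with maximum level depth 5 allows 128, 64, 32, 16 and 8.
print(allowed_cu_sizes(128, 5))         # [128, 64, 32, 16, 8]
```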
Using the recursive coding unit described above may provide the following advantages.
First, sizes larger than the existing 16x16 macroblock can be supported. If an image region of interest is homogeneous, the largest coding unit LCU can represent the region of interest with a smaller number of symbols than when a plurality of small blocks is used.
Second, compared with using a macroblock of fixed size, a largest coding unit LCU of arbitrary size can be supported, so that the codec can be readily optimized for various contents, applications, and devices. That is, by appropriately selecting the size of the largest coding unit LCU and the maximum level or level depth, the hierarchical block structure can be optimized for the target application.
Third, a single unit type, the coding unit, can be used regardless of whether a macroblock, a sub-macroblock, or an extended macroblock is used, so that the multi-level hierarchical structure can be represented simply by using the size of the largest coding unit LCU, the maximum level (or maximum level depth), and a series of flags. When used together with a size-independent syntax representation, one common-size syntax item for the remaining coding tools is sufficient, and such consistency can simplify the actual parsing process. The maximum level value (or maximum level depth value) may be an arbitrary value, and may be larger than the value allowed in the existing H.264/AVC coding scheme. All syntax elements may be specified in a consistent manner that is independent of the size of the coding unit CU by using the size-independent syntax representation. The splitting process of a coding unit may be indicated recursively, and the syntax elements of the leaf coding unit (the last coding unit at the level) may be defined to the same size regardless of the size of the coding unit. The above representation is highly effective in reducing parsing complexity, and the clarity of the representation may be further improved when a large level or level depth is allowed.
If the hierarchical splitting process is completed, inter prediction or intra prediction may be performed on the leaf node of the coding unit hierarchy without further splitting. This leaf coding unit is used as a prediction unit (PU), which is a basic unit of inter prediction or intra prediction.
For inter prediction or intra prediction, partitioning is performed on the leaf coding unit. That is, partitioning is performed on the prediction unit PU. Here, the prediction unit PU is the basic unit of inter prediction or intra prediction, and may be an existing macroblock unit or sub-macroblock unit, or an extended macroblock unit or coding unit of size 32x32 pixels or more.
All information related to prediction (a motion vector, a difference between motion vectors, and the like) may be transmitted to the decoder on a per-prediction-unit basis, the prediction unit being the basic unit of inter prediction.
For inter prediction or intra prediction, the partitioning may include asymmetric partitioning, geometric partitioning in an arbitrary shape other than a square, and partitioning along an edge direction, which are now described in further detail.
Figs. 2 to 5 are conceptual diagrams illustrating asymmetric partitioning according to an embodiment.
When the prediction unit PU for inter prediction or intra prediction has a size of M x M (M is a natural number; the unit of size is pixels), asymmetric partitioning is performed along the horizontal or vertical direction of the coding unit. Figs. 2 to 5 illustrate examples in which the size of the prediction unit PU is 64x64, 32x32, 16x16, and 8x8 pixels, respectively. Figs. 2 and 3 illustrate asymmetric partitioning in which the size of the prediction unit PU is larger than the 16x16-pixel macroblock size.
Referring to Fig. 2, in the case of a size of 64x64, asymmetric partitioning is performed in the horizontal direction to split the prediction unit into a partition P11a of size 64x16 and a partition P21a of size 64x48, or into a partition P12a of size 64x48 and a partition P22a of size 64x16. Alternatively, asymmetric partitioning is performed in the vertical direction to split the prediction unit into a partition P13a of size 16x64 and a partition P23a of size 48x64, or into a partition P14a of size 48x64 and a partition P24a of size 16x64.
Referring to Fig. 3, in the case of a size of 32x32, the prediction unit may be subjected to asymmetric partitioning in the horizontal direction so as to be split into a partition P11b of size 32x8 and a partition P21b of size 32x24, or into a partition P12b of size 32x24 and a partition P22b of size 32x8. Alternatively, the prediction unit may be subjected to asymmetric partitioning in the vertical direction so as to be split into a partition P13b of size 8x32 and a partition P23b of size 24x32, or into a partition P14b of size 24x32 and a partition P24b of size 8x32.
Referring to Fig. 4, in the case of a size of 16x16, the prediction unit PU may be subjected to asymmetric partitioning in the horizontal direction so as to be split into a partition P11c of size 16x4 and a partition P21c of size 16x12, or (although not shown in the drawings) into an upper partition of size 16x12 and a lower partition of size 16x4. Further, although not shown in the drawings, the prediction unit PU may be subjected to asymmetric partitioning in the vertical direction so as to be split into a left partition of size 4x16 and a right partition of size 12x16, or into a left partition of size 12x16 and a right partition of size 4x16.
Referring to Fig. 5, in the case of a size of 8x8, the prediction unit PU may be subjected to asymmetric partitioning in the horizontal direction so as to be split into a partition P11d of size 8x2 and a partition P21d of size 8x6, or (although not shown in the drawings) into an upper partition of size 8x6 and a lower partition of size 8x2. Further, although not shown in the drawings, the prediction unit PU may be subjected to asymmetric partitioning in the vertical direction so as to be split into a left partition of size 2x8 and a right partition of size 6x8, or into a left partition of size 6x8 and a right partition of size 2x8.
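For illustration only, the asymmetric partition geometries listed for Figs. 2 to 5 can be enumerated as follows; the 1:3 split ratio is inferred from the partition sizes given above, and the function name is a hypothetical convenience.

```python
def asymmetric_partitions(m):
    """Width x height pairs for the four horizontal and four vertical asymmetric splits of an
    M x M prediction unit, following the 1/4 : 3/4 ratio of Figs. 2 to 5 (e.g., 64x16 + 64x48)."""
    q = m // 4                               # short side of the small partition
    return [
        ((m, q), (m, m - q)),                # horizontal: small upper, large lower (P11/P21)
        ((m, m - q), (m, q)),                # horizontal: large upper, small lower (P12/P22)
        ((q, m), (m - q, m)),                # vertical: small left, large right    (P13/P23)
        ((m - q, m), (q, m)),                # vertical: large left, small right    (P14/P24)
    ]


for size in (64, 32, 16, 8):
    print(size, asymmetric_partitions(size))
```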
Fig. 6 is a conceptual diagram illustrating an intra-prediction encoding method using an asymmetric pixel block according to an embodiment of the present invention.
Figs. 7 to 9 are conceptual diagrams illustrating intra-prediction encoding methods using asymmetric pixel blocks according to other embodiments of the present invention. Figs. 6 to 9 illustrate examples of intra prediction when the asymmetric partitioning described with reference to Figs. 2 to 5 is used. However, the present invention is not limited thereto. The intra-prediction encoding methods shown in Figs. 6 to 9 are also applicable to the various types of asymmetric partitioning shown in Figs. 2 to 5.
Fig. 6 is a diagram for describing prediction modes for performing intra prediction on the partition P11d of size 8x2, which is obtained by asymmetrically partitioning the prediction unit PU of size 8x8 in the horizontal direction.
Referring to Fig. 6, pixel values in the partition P11d of size 8x2 are predicted using pixel values of previously encoded blocks along prediction directions including the vertical direction (prediction mode 0), the horizontal direction (prediction mode 1), average-value prediction (prediction mode 2), the diagonal down-right direction (prediction mode 3), and the diagonal down-left direction (prediction mode 4).
For example, in the case of prediction mode 0, the pixel values in the partition P11d of size 8x2 are predicted using the pixel values located in the vertical direction in the previously encoded upper block.
In the case of prediction mode 1, the pixel values in the partition P11d of size 8x2 are predicted using the pixel values located in the horizontal direction in the previously encoded left block.
In the case of prediction mode 2, the pixel values in the partition P11d of size 8x2 are predicted using the average of the pixels in the previously encoded left and upper blocks.
In the case of prediction mode 3, the pixel values in the partition P11d of size 8x2 are predicted using the pixel values placed in the diagonal down-right direction in the previously encoded upper block. In prediction mode 3, when the pixels in the upper block of the partition P11d are insufficient, two pixels in the upper-right block may be used to compensate.
In the case of prediction mode 4, the pixel values in the partition P11d of size 8x2 are predicted using the pixel values placed in the diagonal down-left direction in the previously encoded upper-left block.
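For illustration only, the following sketch applies the five modes of Fig. 6 to an 8x2 partition. The use of two upper-right pixels by mode 3 and of the upper-left block by mode 4 follows the description above, while the exact boundary handling, index derivation, and rounding are assumptions.

```python
import numpy as np

def intra_predict_8x2(top, left, top_left, mode):
    """Predict an 8x2 partition (2 rows x 8 columns) from previously encoded neighbors.
    top      : 10 reconstructed pixels above the partition (8 directly above plus 2 from the
               upper-right block, used when mode 3 runs out of upper pixels)
    left     : 2 reconstructed pixels immediately to the left of the two rows
    top_left : 2 reconstructed pixels from the upper-left block, nearest first (used by mode 4)"""
    pred = np.zeros((2, 8), dtype=np.int32)
    for y in range(2):
        for x in range(8):
            if mode == 0:                               # vertical (upper block)
                pred[y, x] = top[x]
            elif mode == 1:                             # horizontal (left block)
                pred[y, x] = left[y]
            elif mode == 2:                             # average of upper and left pixels
                pred[y, x] = (int(np.sum(top[:8])) + int(np.sum(left))) // 10
            elif mode == 3:                             # diagonal mode that may need the two
                pred[y, x] = top[x + y + 1]             # extra upper-right pixels
            elif mode == 4:                             # diagonal mode using the upper-left block
                idx = x - y - 1
                pred[y, x] = top[idx] if idx >= 0 else top_left[min(-idx - 1, 1)]
    return pred


# Example with flat neighbor values.
top = np.full(10, 120, dtype=np.int32)
left = np.full(2, 100, dtype=np.int32)
top_left = np.full(2, 110, dtype=np.int32)
print(intra_predict_8x2(top, left, top_left, mode=2))
```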
Fig. 7 illustrates prediction modes for performing intra prediction on the partition P21d of size 8x6, which is obtained by asymmetrically partitioning the prediction unit PU of size 8x8 in the horizontal direction.
Referring to Fig. 7, pixel values in the partition P21d of size 8x6 are predicted using pixel values of previously encoded blocks along prediction directions including the vertical direction (prediction mode 0), the horizontal direction (prediction mode 1), average-value prediction (prediction mode 2), the diagonal down-right direction (prediction mode 3), and the diagonal down-left direction (prediction mode 4).
For example, in the case of prediction mode 0, the pixel values in the partition P21d of size 8x6 are predicted using the pixel values located in the vertical direction in the previously encoded upper block.
In the case of prediction mode 1, the pixel values in the partition P21d of size 8x6 are predicted using the pixel values located in the horizontal direction in the previously encoded left block.
In the case of prediction mode 2, the pixel values in the partition P21d of size 8x6 are predicted using the average of the pixels in the previously encoded left and upper blocks.
In the case of prediction mode 3, the pixel values in the partition P21d of size 8x6 are predicted using the pixel values placed in the diagonal down-right direction in the previously encoded upper block. In prediction mode 3, when the pixels in the upper block of the partition P21d are insufficient, two pixels in the upper-right block may be used to compensate.
In the case of prediction mode 4, the pixel values in the partition P21d of size 8x6 are predicted using the pixel values placed in the diagonal down-left direction in the previously encoded upper-left block.
Fig. 8 illustrates prediction modes for performing intra prediction on the partition P11c of size 16x4, which is obtained by asymmetrically partitioning the prediction unit PU of size 16x16 in the horizontal direction.
Referring to Fig. 8, pixel values in the partition P11c of size 16x4 are predicted using pixel values of previously encoded blocks along prediction directions including the vertical direction (prediction mode 0), the horizontal direction (prediction mode 1), average-value prediction (prediction mode 2), the diagonal down-right direction (prediction mode 3), and the diagonal down-left direction (prediction mode 4).
For example, in the case of prediction mode 0, the pixel values in the partition P11c of size 16x4 are predicted using the pixel values located in the vertical direction in the previously encoded upper block.
In the case of prediction mode 1, the pixel values in the partition P11c of size 16x4 are predicted using the pixel values located in the horizontal direction in the previously encoded left block.
In the case of prediction mode 2, the pixel values in the partition P11c of size 16x4 are predicted using the average of the pixels in the previously encoded left and upper blocks.
In the case of prediction mode 3, the pixel values in the partition P11c of size 16x4 are predicted using the pixel values placed in the diagonal down-right direction in the previously encoded upper block. In prediction mode 3, when the pixels in the upper block of the partition P11c are insufficient, four pixels in the upper-right block may be used to compensate.
In the case of prediction mode 4, the pixel values in the partition P11c of size 16x4 are predicted using the pixel values placed in the diagonal down-left direction in the previously encoded upper-left block.
Fig. 9 illustrates prediction modes for performing intra prediction on the partition P11b of size 32x8, which is obtained by asymmetrically partitioning the prediction unit PU of size 32x32 in the horizontal direction.
Referring to Fig. 9, pixel values in the partition P11b of size 32x8 are predicted using pixel values of previously encoded blocks along prediction directions including the vertical direction (prediction mode 0), the horizontal direction (prediction mode 1), average-value prediction (prediction mode 2), the diagonal down-right direction (prediction mode 3), and the diagonal down-left direction (prediction mode 4).
For example, in the case of prediction mode 0, the pixel values in the partition P11b of size 32x8 are predicted using the pixel values located in the vertical direction in the previously encoded upper block.
In the case of prediction mode 1, the pixel values in the partition P11b of size 32x8 are predicted using the pixel values located in the horizontal direction in the previously encoded left block.
In the case of prediction mode 2, the pixel values in the partition P11b of size 32x8 are predicted using the average of the pixels in the previously encoded left and upper blocks.
In the case of prediction mode 3, the pixel values in the partition P11b of size 32x8 are predicted using the pixel values placed in the diagonal down-right direction in the previously encoded upper block. In prediction mode 3, when the pixels in the upper block of the partition P11b are insufficient, eight pixels in the upper-right block may be used to compensate.
In the case of prediction mode 4, the pixel values in the partition P11b of size 32x8 are predicted using the pixel values placed in the diagonal down-left direction in the previously encoded upper-left block.
Figs. 6 to 9 illustrate examples in which a predetermined number of prediction modes is used for prediction units of each size of the asymmetrically partitioned blocks, and prediction modes along other directions (not shown) may also be used for each prediction unit. For example, intra prediction may be performed along lines formed at a predetermined equal angular interval (for example, 22.5 degrees or 11.25 degrees) in all directions over the entire 360-degree range, using the pixel values in the previously encoded upper-left block. Alternatively, an arbitrary angle may be specified in advance by the encoder so that intra prediction is performed along the line defined by the specified angle. For example, to specify the angle, a slope having dx along the horizontal direction and dy along the vertical direction may be defined, and the information about dx and dy may be transmitted from the encoder to the decoder. Predetermined angle information may also be transmitted from the encoder to the decoder.
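For illustration only, a (dx, dy) slope signaled by the encoder could drive angular prediction roughly as follows; the nearest-neighbor sampling (rather than sub-pixel interpolation) and the fallback to the left reference column are assumptions, not part of the description.

```python
import numpy as np

def angular_predict(block_w, block_h, top, left, dx, dy):
    """Predict a block along the line direction defined by a horizontal slope dx and a
    vertical slope dy, using reconstructed pixels above (top) and to the left (left)."""
    pred = np.zeros((block_h, block_w), dtype=np.int32)
    for y in range(block_h):
        for x in range(block_w):
            if dy != 0:
                ref = x - (y + 1) * dx / dy          # project onto the row of upper neighbors
                pos = int(round(ref))
                if 0 <= pos < len(top):
                    pred[y, x] = top[pos]
                    continue
            ref = y - (x + 1) * dy / dx if dx != 0 else y
            pos = min(max(int(round(ref)), 0), len(left) - 1)   # fall back to the left column
            pred[y, x] = left[pos]
    return pred


# Example: a shallow direction expressed as dx = 2, dy = 5 for an 8x8 block.
top = np.arange(16, dtype=np.int32) + 100
left = np.arange(8, dtype=np.int32) + 50
print(angular_predict(8, 8, top, left, dx=2, dy=5))
```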
Fig. 10 is a conceptual diagram illustrating an intra-prediction encoding method based on planar prediction according to another embodiment of the present invention.
In the case where an extended macroblock of size 16x16 pixels or more is used to encode a high-resolution image having HD or higher resolution, or where the size of the prediction unit is increased to 8x8 pixels or more, applying the existing intra prediction modes to the rightmost and lowermost pixel values of the prediction unit causes distortion in the prediction, making it difficult to generate a smooth prediction image.
In this case, a separate planar mode may be defined. When the planar mode flag is activated, as shown in Fig. 10, in order to obtain the predicted pixel value of the rightmost and lowermost pixel 1010 of the prediction unit, linear interpolation may be performed using the values of the pixels 1001 and 1003 and/or the values of internal pixels of the prediction block. The pixel 1001 is located in the previously encoded upper block of the prediction unit and corresponds to the rightmost and lowermost pixel 1010 in the vertical direction. The pixel 1003 is located in the previously encoded left block of the prediction unit and corresponds to the rightmost and lowermost pixel 1010 in the horizontal direction. An internal pixel is a pixel within the prediction block that corresponds to the rightmost and lowermost pixel 1010 in the horizontal and vertical directions.
That is, when the planar mode flag is activated, as shown in Fig. 10, in order to obtain the predicted pixel value of the rightmost and lowermost pixel of the prediction unit (for example, pixel 1010), linear interpolation may be performed using the values of the pixels 1001 and 1003. The pixel 1001 is located in the previously encoded upper block of the prediction unit and corresponds to the rightmost and lowermost pixel 1010 in the vertical direction. The pixel 1003 is located in the previously encoded left block of the prediction unit and corresponds to the rightmost and lowermost pixel 1010 in the horizontal direction.
Alternatively, when the planar mode flag is activated, as shown in Fig. 10, in order to obtain the predicted pixel value of the rightmost and lowermost pixel of the prediction unit (for example, pixel 1010), linear interpolation may be performed using the values of the pixels 1001 and 1003 and/or the values of internal pixels. The pixel 1001 is located in the previously encoded upper block of the prediction unit and corresponds to the rightmost and lowermost pixel 1010 in the vertical direction, and the pixel 1003 is located in the previously encoded left block of the prediction unit and corresponds to the rightmost and lowermost pixel 1010 in the horizontal direction. An internal pixel is located within the prediction block and corresponds to the rightmost and lowermost pixel 1010 in the horizontal and vertical directions. When the planar mode flag is activated, the value of the rightmost and lowermost pixel 1010 of the prediction unit may be transmitted from the encoder to the decoder. Here, in the case where the current prediction unit is formed of an 8x8 prediction block as shown in Fig. 10, the corresponding pixel values in the vertical and/or horizontal directions in the previously encoded left block and upper block (1001, 1003) indicate the pixel values of pixels located in the left block and the upper block among the previously encoded blocks adjacent to the prediction block; the corresponding pixel value in the horizontal direction of the rightmost and lowermost pixel 1010 of the prediction unit indicates the value of the pixel 1003, and the corresponding pixel value in the vertical direction of the rightmost and lowermost pixel 1010 of the prediction unit indicates the value of the pixel 1001; and the corresponding internal predicted pixel value in the horizontal direction within the prediction block indicates the value of at least one pixel arranged in the horizontal direction between the pixel 1003 and the rightmost and lowermost pixel 1010, while the corresponding internal predicted pixel value in the vertical direction within the prediction block indicates the value of at least one pixel arranged in the vertical direction between the pixel 1001 and the rightmost and lowermost pixel 1010.
Further, when the planar prediction mode flag is activated, the predicted pixel values of the internal pixels of the prediction unit may be obtained by performing bilinear interpolation using the pixel values of the corresponding pixels in the vertical and/or horizontal directions in the previously encoded left block and upper block and/or the corresponding internal boundary predicted pixel values in the vertical and/or horizontal directions within the prediction unit (for example, the corresponding internal boundary predicted pixel value in the vertical and/or horizontal direction indicates the value of at least one pixel placed in the horizontal direction between the pixel 1003 and the rightmost and lowermost pixel 1010, or the value of at least one pixel arranged in the vertical direction between the pixel 1001 and the rightmost and lowermost pixel 1010). Here, the predicted pixel value of an internal pixel of the prediction unit indicates the predicted pixel value of an internal pixel arranged along the horizontal direction within the prediction block (since an 8x8 block is shown in Fig. 10, there are eight horizontal lines, and the predicted pixel values of the internal pixels arranged in the horizontal direction within the prediction block indicate the predicted pixel values of the eight internal pixels arranged along each of the eight horizontal lines) or the predicted pixel value of an internal pixel arranged along the vertical direction within the prediction block (since an 8x8 block is shown in Fig. 10, there are eight vertical lines, and the predicted pixel values of the internal pixels arranged in the vertical direction within the prediction block indicate the predicted pixel values of the eight internal pixels arranged along each of the eight vertical lines).
In Fig. 10, in the case of obtaining the predicted pixel values of the internal pixels of the prediction unit, the corresponding pixel values in the vertical and/or horizontal directions in the previously encoded left block and upper block indicate the pixel values of the pixels in the left block and the upper block of the previously encoded blocks adjacent to the prediction block. In the case where the current prediction unit is formed of an 8x8 prediction block as shown in Fig. 10, the corresponding pixel values in the horizontal direction of the eight pixels of the rightmost line of the prediction unit (that is, the eight pixels from the top to the bottom) indicate the pixel values of the pixels arranged, in the previously encoded left block adjacent to the prediction block in the horizontal direction, at the same positions as the corresponding pixels of the rightmost line of the prediction unit, and the corresponding pixel values in the vertical direction of the eight pixels of the lowermost line of the prediction unit (that is, the eight pixels from the leftmost side to the rightmost side) indicate the pixel values of the pixels arranged, in the vertical direction, at the same positions as the corresponding pixels of the lowermost line of the prediction unit.
Further, in Fig. 10, in the case of obtaining the predicted pixel values of the internal pixels of the prediction unit, the internal boundary predicted pixel values of the corresponding pixels in the vertical and/or horizontal directions of the prediction unit indicate the pixel values (predicted pixel values) of the pixels placed on the lowermost line or the rightmost line of the prediction block. In the case where the current prediction unit is formed of an 8x8 prediction block as shown in Fig. 10, for example, the internal boundary predicted pixel value of the pixel corresponding to the seventh pixel from the right among the eight pixels on the fifth horizontal line from the top of the prediction unit may be the pixel value (or predicted pixel value) of the rightmost pixel among the eight pixels on the fifth horizontal line from the top of the prediction unit. In this case, the predicted pixel value of the seventh pixel from the right among the eight pixels on the fifth horizontal line from the top of the prediction unit may be obtained by performing bidirectional interpolation using the pixel value (or predicted pixel value) of the rightmost pixel among the eight pixels on the fifth horizontal line from the top of the prediction unit, and the pixel value of the previously encoded pixel, among the pixels in the previously encoded left block adjacent to the prediction block, aligned in the horizontal direction with the position of that seventh pixel from the right.
Further, in Fig. 10, in the case of obtaining the predicted pixel values of the internal pixels of the prediction unit, the internal boundary predicted pixel values of the corresponding pixels in the vertical and/or horizontal directions of the prediction unit are determined similarly. For example, when the current prediction unit is formed of an 8x8 prediction block as shown in Fig. 10, the internal boundary predicted pixel value of the pixel corresponding to the seventh pixel from the top in the vertical direction among the eight pixels on the fifth vertical line from the leftmost side of the prediction unit may be the pixel value of the lowermost pixel among the eight pixels on that fifth vertical line. In this case, the predicted pixel value of the seventh pixel from the top among the eight pixels on the fifth vertical line from the leftmost side of the prediction unit may be obtained by performing bidirectional interpolation using the pixel value (or predicted pixel value) of the lowermost pixel among the eight pixels on that fifth vertical line, and the pixel value (or predicted pixel value) of the previously encoded pixel, among the pixels in the previously encoded upper block adjacent to the prediction block, arranged in the vertical direction at the same position as that seventh pixel from the top.
Meanwhile, when the planar prediction mode flag is activated, the pixel value of the rightmost and lowermost pixel of the prediction unit may be transmitted from the encoder to the decoder. Further, the pixel values of the pixels on the rightmost line of Fig. 10 may be obtained by performing linear interpolation using the rightmost and uppermost pixel 1001 transmitted from the encoder and the rightmost and lowermost pixel 1010. The pixel values of the pixels on the lowermost line of Fig. 10 may be obtained by performing linear interpolation using the leftmost and lowermost pixel 1003 transmitted from the encoder and the rightmost and lowermost pixel 1010.
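For illustration only, the planar prediction of Fig. 10 can be sketched as follows, assuming the value of the rightmost and lowermost pixel is signaled by the encoder, the rightmost and lowermost lines are filled by linear interpolation, and the interior is filled by averaging a horizontal and a vertical linear interpolation; the array layout and the equal-weight averaging are assumptions.

```python
import numpy as np

def planar_predict(top, left, bottom_right, n=8):
    """Planar prediction for an n x n prediction unit.
    top[x]  : reconstructed pixel directly above column x (top[n-1] plays the role of pixel 1001)
    left[y] : reconstructed pixel directly left of row y  (left[n-1] plays the role of pixel 1003)
    bottom_right : value of the rightmost and lowermost pixel (1010), transmitted by the encoder."""
    pred = np.zeros((n, n), dtype=np.float64)
    pred[n - 1, n - 1] = bottom_right
    for y in range(n - 1):                       # rightmost column: interpolate 1001 <-> 1010
        w = (y + 1) / n
        pred[y, n - 1] = (1 - w) * top[n - 1] + w * bottom_right
    for x in range(n - 1):                       # lowermost row: interpolate 1003 <-> 1010
        w = (x + 1) / n
        pred[n - 1, x] = (1 - w) * left[n - 1] + w * bottom_right
    for y in range(n - 1):                       # interior: average of a horizontal and a
        for x in range(n - 1):                   # vertical linear interpolation
            h = ((n - 1 - x) * left[y] + (x + 1) * pred[y, n - 1]) / n
            v = ((n - 1 - y) * top[x] + (y + 1) * pred[n - 1, x]) / n
            pred[y, x] = (h + v) / 2
    return np.rint(pred).astype(np.int32)


# Example with flat neighbors and a signaled bottom-right value.
top = np.full(8, 120.0)
left = np.full(8, 100.0)
print(planar_predict(top, left, bottom_right=80, n=8))
```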
Fig. 11 is a conceptual diagram illustrating an intra-prediction encoding method based on planar prediction according to another embodiment of the present invention.
When the planar prediction mode flag is activated, as shown in Fig. 11, a reference prediction unit corresponding to the current prediction unit, which has a first size (for example, 8x8 pixels in Fig. 11), is determined in the N-1th picture temporally preceding the Nth picture, i.e., the current picture to be encoded. To obtain the predicted pixel value of the rightmost and lowermost pixel of the current prediction unit, not only the corresponding pixel values in the vertical and horizontal directions in the previously encoded left and upper blocks 213 adjacent to the current prediction unit, but also the corresponding pixel values in the vertical and horizontal directions in the previously encoded left and upper blocks 233 adjacent to the corresponding prediction unit of the N-1th picture, are all used to calculate their average or to perform linear interpolation.
Alternatively, to obtain the predicted pixel value of the rightmost and lowermost pixel of the current prediction unit, the corresponding pixel values in the vertical and horizontal directions within the current prediction unit of the Nth picture, the corresponding pixel values in the vertical and horizontal directions in the previously encoded left and upper blocks 213 adjacent to the current prediction unit, and the corresponding pixel values in the vertical and horizontal directions in the previously encoded left and upper blocks 233 adjacent to the corresponding prediction unit of the N-1th picture are all used to calculate their average or to perform linear interpolation.
Further, to obtain the predicted pixel value of the rightmost and lowermost pixel of the current prediction unit, the internal pixel values corresponding in the vertical and horizontal directions within the corresponding prediction unit of the N-1th picture, the corresponding pixel values in the vertical and horizontal directions within the current prediction unit of the Nth picture, the corresponding pixel values in the vertical and horizontal directions in the previously encoded left and upper blocks 213 adjacent to the current prediction unit, and the corresponding pixel values in the vertical and horizontal directions in the previously encoded left and upper blocks 233 adjacent to the corresponding prediction unit of the N-1th picture are used to calculate their average or to perform linear interpolation.
Furthermore, when the planar prediction mode flag is activated, the predicted pixel values of the internal pixels of the prediction unit of the Nth picture may be obtained by performing bidirectional interpolation using the corresponding internal boundary pixel values in the vertical and horizontal directions in the corresponding prediction unit of the N-1th picture, the corresponding pixel values in the vertical and horizontal directions in the previously encoded left and upper blocks of the corresponding prediction unit of the N-1th picture, the corresponding internal boundary pixel values in the vertical and horizontal directions in the current prediction unit of the Nth picture, and/or the corresponding pixel values in the vertical and horizontal directions in the previously encoded left and upper blocks of the current prediction unit of the Nth picture.
Although Fig. 11 illustrates an example in which intra prediction is performed using the current prediction unit of the Nth picture and the corresponding prediction unit of the N-1th picture, the present invention is not limited thereto. For example, intra prediction may also be performed using the current prediction unit of the Nth picture and the corresponding prediction unit of the N+1th picture, using the current prediction unit of the Nth picture and the corresponding prediction units of the N-1th and N+1th pictures, or using the current prediction unit of the Nth picture and the corresponding prediction units of the N-2th, N-1th, N+1th, and N+2th pictures.
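For illustration only, one way to combine the spatial neighbors of the current prediction unit (blocks 213) with those of the co-located prediction unit of the N-1th picture (blocks 233), as described for Fig. 11, is the equal-weight average below; the choice of reference picture and the weights are assumptions.

```python
def predict_bottom_right(top_curr, left_curr, top_ref, left_ref, n=8):
    """Estimate the rightmost and lowermost pixel of an n x n prediction unit by averaging the
    vertically corresponding pixel above and the horizontally corresponding pixel to the left,
    taken from both the current picture (blocks 213) and the co-located prediction unit of the
    N-1th picture (blocks 233)."""
    curr = (top_curr[n - 1] + left_curr[n - 1]) / 2.0   # spatial neighbors in picture N
    ref = (top_ref[n - 1] + left_ref[n - 1]) / 2.0      # spatial neighbors in picture N-1
    return round((curr + ref) / 2.0)


# Example with simple scalar neighbor lists.
print(predict_bottom_right([120] * 8, [100] * 8, [118] * 8, [102] * 8))
```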
The current prediction unit having the second size may be a square of 8x8, 16x16, or 32x32 pixels, or may have one of the asymmetric shapes shown in Figs. 2 to 5. In the case where the current prediction unit has an asymmetric shape as shown in Figs. 2 to 5, the embodiments described with reference to Figs. 10 and 11 may be applied to perform the prediction.
That is, the intra-prediction encoding method based on planar prediction shown in Figs. 10 and 11 can be applied to intra encoding/decoding of pixel blocks, both for prediction units having a symmetric shape such as a rectangle or a square and for prediction units having an asymmetric shape or an arbitrary geometric shape.
Fig. 12 is a conceptual diagram illustrating geometric partitioning according to another embodiment of the present invention.
Fig. 12 illustrates an example in which the prediction unit PU is subjected to geometric partitioning so that the resulting partitions have shapes other than a square.
Referring to Fig. 12, for the prediction unit, a geometric boundary line L between partitions may be defined as follows. The prediction unit PU is divided into four quadrants by the x axis and the y axis passing through the center O of the prediction unit PU. A perpendicular is drawn from the center O to the boundary line L. Then, any possible boundary line L lying in any direction can be specified by the distance ρ from the center O to the boundary line L and the counterclockwise angle θ from the x axis to the perpendicular.
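For illustration only, the (ρ, θ) parameterization above induces a partition membership test such as the following; the coordinate conventions and the treatment of pixels lying exactly on the boundary line are assumptions.

```python
import math

def geometric_partition_mask(width, height, rho, theta_deg):
    """Return a width x height mask: 1 if the pixel lies on the far side of the boundary
    line L specified by distance rho and counterclockwise angle theta from the x axis
    (both measured from the prediction-unit center O), else 0."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    theta = math.radians(theta_deg)
    nx, ny = math.cos(theta), math.sin(theta)        # unit normal of the boundary line
    mask = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # signed distance of the pixel from line L (image y grows downward, hence the sign flip)
            d = (x - cx) * nx - (y - cy) * ny - rho
            mask[y][x] = 1 if d > 0 else 0
    return mask


# Example: split a 16x16 prediction unit with rho = 2 pixels and theta = 30 degrees.
m = geometric_partition_mask(16, 16, rho=2.0, theta_deg=30.0)
```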
For inter or intra prediction, the prediction unit PU may also be divided into four quadrants with respect to its center. The second quadrant, i.e., the upper-left portion of the prediction unit PU, may be split into one partition, with the remaining L-shaped region split into another partition. As used herein, a "portion" of the prediction unit PU corresponding to one partition or to several partitions is also referred to as a "block". Alternatively, the third quadrant, i.e., the lower-left portion of the prediction unit PU, may be split into one partition, with the remaining quadrants split into another partition. Alternatively, the first quadrant, i.e., the upper-right portion of the prediction unit PU, may be split into one partition, with the remaining quadrants split into another partition. Likewise, the lower-right portion of the prediction unit PU, corresponding to the fourth quadrant, may be split into one partition, with the remaining quadrants split into another partition.
As described above, the prediction unit may be partitioned so that a partition has an L shape; see the sketch after this paragraph. Accordingly, when a moving object is present in an edge block (for example, the upper-left, upper-right, lower-right, or lower-left block) at the time of partitioning, encoding can be performed more effectively than when the prediction unit PU is divided into four blocks. Depending on which edge block the moving object is located in, the corresponding partition among the four may be selected and used.
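For illustration only, the quadrant-plus-L-shape partitioning can be expressed as a simple mask; the quadrant numbering follows the description above, and the function name is a hypothetical convenience.

```python
def quadrant_L_partition(n, quadrant):
    """Split an n x n prediction unit into one quadrant and the remaining L-shaped region.
    quadrant: 1 = upper-right, 2 = upper-left, 3 = lower-left, 4 = lower-right.
    Returns a mask with 1 inside the selected quadrant and 0 in the L-shaped remainder."""
    half = n // 2
    mask = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            top, left_side = y < half, x < half
            in_quadrant = {1: top and not left_side,
                           2: top and left_side,
                           3: not top and left_side,
                           4: not top and not left_side}[quadrant]
            mask[y][x] = 1 if in_quadrant else 0
    return mask


# Example: upper-left quadrant vs. L-shaped remainder of a 16x16 prediction unit.
m = quadrant_L_partition(16, quadrant=2)
```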
The size of the block used for motion estimation may vary. In addition, according to an exemplary embodiment, when asymmetric partitioning or geometric partitioning is applied, the shape of the block is not limited to the existing square, but may also be another geometric shape such as a rectangle or another asymmetric shape, an "L" shape, or a triangle, as shown in Figs. 2 to 9.
In addition, in the case of the geometric block partitioning described above, including the block partitioning described with reference to Fig. 10, the prediction modes applied in Figs. 6 to 9 may be transformed and used so that intra prediction is performed on the geometric blocks.
Fig. 13 is a block diagram illustrating the configuration of an image encoding apparatus that performs intra-prediction encoding according to an embodiment of the present invention.
Referring to Fig. 13, the image encoding apparatus includes an encoder 630. The encoder 630 includes an inter prediction unit 632, an intra prediction unit 635, a subtractor 637, a transform unit 639, a quantization unit 641, an entropy encoding unit 643, an inverse quantization unit 645, an inverse transform unit 647, an adder 649, and a frame buffer 651. The inter prediction unit 632 includes a motion prediction unit 631 and a motion compensation unit 633.
The encoder 630 performs encoding on an input image. The input image is used on a per-prediction-unit (PU) basis for inter prediction in the inter prediction unit 632 or intra prediction in the intra prediction unit 635.
The size of the prediction unit to be applied to inter prediction or intra prediction may be determined according to the temporal frequency characteristics of frames (or pictures) stored in a buffer (not shown) included in the encoder after the input image is stored in the buffer. For example, a prediction unit determining unit 610 analyzes the temporal frequency characteristics of the (n-1)th frame (or picture) and the nth frame (or picture), and, if the analyzed temporal frequency characteristic value is less than a preset first threshold, determines the size of the prediction unit to be 64x64 pixels. If the analyzed temporal frequency characteristic value is equal to or greater than the preset first threshold and less than a second threshold, the size of the prediction unit is determined to be 32x32 pixels, and if the analyzed temporal frequency characteristic value is equal to or greater than the preset second threshold, the size of the prediction unit is determined to be 16x16 pixels or less. Here, the first threshold corresponds to a temporal frequency characteristic value for which the change between frames (or pictures) is smaller than for the second threshold.
The size of the prediction unit to be applied to inter prediction or intra prediction may also be determined according to the spatial frequency characteristics of frames (or pictures) stored in a buffer (not shown) included in the encoder after the input image is stored in the buffer. For example, when the input frame (or picture) has high uniformity or homogeneity, the size of the prediction unit may be set large, for example, to 32x32 pixels or more, and when the input frame (or picture) has low uniformity or homogeneity (that is, when the spatial frequency is high), the size of the prediction unit may be set small, for example, to 16x16 pixels or less.
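For illustration only, the threshold rule above might be realized as follows; the use of the mean absolute frame difference as the temporal frequency characteristic value and the particular threshold values are assumptions.

```python
import numpy as np

def select_pu_size(prev_frame, curr_frame, t1=2.0, t2=8.0):
    """Choose a prediction-unit size from the temporal frequency characteristic of two frames.
    Here the characteristic is taken as the mean absolute difference between frames;
    t1 < t2 play the role of the preset first and second thresholds."""
    change = float(np.mean(np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))))
    if change < t1:
        return 64          # little temporal change: 64x64 prediction units
    if change < t2:
        return 32          # moderate change: 32x32
    return 16              # strong change: 16x16 or smaller


prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.full((64, 64), 3, dtype=np.uint8)
print(select_pu_size(prev, curr))    # mean absolute difference 3.0 -> 32
```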
Although not shown in Fig. 13, the operation of determining the size of the prediction unit may be performed by an encoding controller (not shown) receiving the input image or by a separate prediction unit determining unit (not shown) receiving the input image. For example, the size of the prediction unit may be 16x16, 32x32, or 64x64 pixels.
As described above, prediction unit information including the size of the prediction unit determined for inter or intra prediction is provided to the entropy encoding unit 643, and the encoder 630 operates on the prediction unit having the determined size. In particular, in the case where encoding and decoding are performed using an extended macroblock and the size of the extended macroblock, the prediction unit information may include information about the size of the macroblock or the extended macroblock. Here, the size of the extended macroblock refers to 32x32 pixels or more, including, for example, 32x32, 64x64, or 128x128 pixels. In the case where the above-described recursive coding unit CU is used to perform encoding and decoding, the prediction unit information may include, instead of information about the size of the macroblock, information about the size of the largest coding unit LCU used for inter or intra prediction, that is, the size of the prediction unit; furthermore, the prediction unit information may include the size of the largest coding unit LCU, the size of the smallest coding unit SCU, the maximum allowable level or level depth, and flag information.
The encoder 630 performs encoding on the prediction unit having the determined size.
The inter prediction unit 632 divides the prediction unit currently to be encoded by the above-described asymmetric partitioning or geometric partitioning, and performs motion estimation on a per-partition basis to generate a motion vector.
The motion prediction unit 631 divides the provided current prediction unit by the various partitioning methods and, for each partitioned block, searches at least one reference picture located before and/or after the picture currently being encoded (the reference picture having been encoded and stored in the frame buffer 651) for a region similar to the partitioned block currently being encoded, thereby generating a motion vector on a per-block basis. The size of the block used for motion estimation may vary, and according to an embodiment, when asymmetric partitioning or geometric partitioning is applied, the shape of the block is not limited to the existing square, but may also be another geometric shape such as a rectangle or another asymmetric shape, an "L" shape, or a triangle, as shown in Figs. 2 to 9.
The motion compensation unit 633 generates a prediction block (or a predicted prediction unit) by performing motion compensation using the motion vector generated by the motion prediction unit 631 and the reference picture.
The inter prediction unit 632 performs block merging on the blocks and obtains a motion parameter for each merged block. The obtained motion parameters are transmitted to the decoder.
The intra prediction unit 635 may perform intra-prediction encoding using the pixel correlation between blocks. According to the various embodiments described with reference to Figs. 22 to 27, the intra prediction unit 635 performs intra prediction that obtains the prediction block of the current prediction unit by predicting pixel values from previously encoded pixel values in the block of the current frame (or picture).
The subtractor 637 performs subtraction between the prediction block (or predicted prediction unit) provided from the motion compensation unit 633 and the current block (or current prediction unit) to generate a residual, and the transform unit 639 and the quantization unit 641 respectively perform a DCT (Discrete Cosine Transform) and quantization on the residual. Here, the transform unit 639 may perform the transform based on the prediction unit size information provided from the prediction unit determining unit 610. For example, the transform unit 639 may perform the transform to a size of 32x32 or 64x64 pixels. Alternatively, the transform unit 639 may perform the transform based on a separate transform unit (TU), independently of the prediction unit size information provided from the prediction unit determining unit 610. For example, the transform unit TU size may range from a minimum of 4x4 pixels to a maximum of 64x64 pixels. Alternatively, the maximum size of the transform unit TU may be more than 64x64 pixels, for example, 128x128 pixels. The transform unit size information may be included in the transform unit information and transmitted to the decoder.
The entropy encoding unit 643 performs entropy encoding on header information including the quantized DCT coefficients, the motion vector, the determined prediction unit information, the partition information, and the transform unit information, thereby generating a bitstream.
The inverse quantization unit 645 and the inverse transform unit 647 respectively perform inverse quantization and inverse transform on the data quantized by the quantization unit 641. The adder 649 adds the inverse-transformed data to the predicted prediction unit provided from the motion compensation unit 633 to reconstruct the image, and provides the reconstructed image to the frame buffer 651, so that the frame buffer 651 stores the reconstructed image.
Fig. 14 is a flowchart illustrating an image encoding method applying intra-prediction encoding according to an embodiment of the present invention.
Referring to Fig. 14, when an image is input to the encoding apparatus (step S1401), the prediction unit for inter or intra prediction is divided for the input image by the above-described asymmetric or geometric partitioning method (step S1403).
When the intra prediction mode is activated, the intra prediction methods described with reference to Figs. 6 to 11 are applied to the partitioned asymmetric block or geometric block, so that intra prediction is performed (step S1405).
Alternatively, when the inter prediction mode is activated, a prediction block (or predicted prediction unit) is generated by, for each partitioned block, searching at least one reference picture located before and/or after the picture currently being encoded (the reference picture having been encoded and stored in the frame buffer 651) for a region similar to the partitioned block currently being encoded, thereby generating a motion vector on a per-block basis, and then performing motion compensation using the generated motion vector and the picture.
Next, the encoding apparatus obtains the difference between the current prediction unit and the (intra-predicted or inter-predicted) predicted prediction unit to generate a residual, and then performs transform and quantization on the generated residual (step S1407). Thereafter, the encoding apparatus entropy-encodes the header information including the quantized DCT coefficients and the motion parameters and generates a bitstream (step S1409).
Fig. 15 is a block diagram illustrating the configuration of an image decoding apparatus according to an embodiment of the present invention.
Referring to Fig. 15, the decoding apparatus includes an entropy decoding unit 731, an inverse quantization unit 733, an inverse transform unit 735, a motion compensation unit 737, an intra prediction unit 739, a frame buffer 741, and an adder 743.
The entropy decoding unit 731 receives a compressed bitstream and performs entropy decoding on the compressed bitstream, thereby generating quantized coefficients. The inverse quantization unit 733 and the inverse transform unit 735 respectively perform inverse quantization and inverse transform on the quantized coefficients to reconstruct the residual.
The header information decoded by the entropy decoding unit 731 may include prediction unit size information, which may include, for example, an extended macroblock size of 16x16, 32x32, 64x64, or 128x128 pixels. In addition, the decoded header information includes motion parameters for motion compensation and prediction. The motion parameters may include a motion parameter transmitted for each block merged by the block merging method according to an embodiment. The decoded header information also includes a flag indicating whether the planar mode is activated and prediction mode information for each prediction unit having the above-described asymmetric shape.
The motion compensation unit 737 performs motion compensation, using the motion parameters, on a prediction unit having the same size as the prediction unit used for encoding, based on the header information decoded from the bitstream by the entropy decoding unit 731, thereby generating a predicted prediction unit. The motion compensation unit 737 performs motion compensation using the motion parameter transmitted for each block merged by the block merging method according to an embodiment, thereby generating the predicted prediction unit.
The intra prediction unit 739 performs intra-prediction encoding using the pixel correlation between blocks. The intra prediction unit 739 may obtain the predicted pixel values of the current prediction unit by the intra-prediction encoding methods described with reference to Figs. 6 to 11.
The adder 743 adds the residual provided from the inverse transform unit 735 to the predicted prediction unit provided from the motion compensation unit 737 or the intra prediction unit 739 to reconstruct the image, and provides the reconstructed image to the frame buffer 741, so that the frame buffer 741 stores the reconstructed image.
Fig. 16 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
Referring to Fig. 16, the decoding apparatus receives a bitstream from the encoding apparatus (step S1601).
Thereafter, the decoding apparatus performs entropy decoding on the received bitstream (step S1603). The data decoded by entropy decoding includes the residual, which indicates the difference between the current prediction unit and the predicted prediction unit. The header information decoded by entropy decoding may include prediction unit information, motion parameters for motion compensation and prediction, a flag indicating whether the planar prediction mode is activated, and prediction mode information for each prediction unit of an asymmetric type. The prediction unit information may include prediction unit size information.
Here, in the case where encoding and decoding are performed using the above-described recursive coding unit CU, instead of using an extended macroblock and the size of the extended macroblock, the prediction unit PU information may include the sizes of the largest coding unit LCU and the smallest coding unit SCU, the maximum allowable level or level depth, and flag information.
A decoding controller (not shown) may receive, from the encoding apparatus, the size of the prediction unit PU applied in the encoding apparatus, and may perform the motion compensation decoding, intra-prediction decoding, inverse transform, or inverse quantization described herein according to the size of the prediction unit PU applied in the encoding apparatus.
The decoding apparatus performs inverse quantization and inverse transform on the entropy-decoded residual (step S1605). The inverse transform may be performed based on the prediction unit size (for example, 32x32 or 64x64 pixels).
The decoding apparatus may generate the predicted prediction unit by applying an inter prediction or intra prediction method to prediction units having various shapes, such as the asymmetric or geometric shapes described with reference to Figs. 6 to 11 (step S1607).
The decoder adds the inverse-quantized and inverse-transformed residual to the prediction unit predicted by inter or intra prediction, thereby reconstructing the image (step S1609).
Although embodiments of the present invention have been described, it will be understood by those of ordinary skill in the art that various modifications may be made to the present invention without departing from the scope of the present invention defined by the appended claims.