
CN1535027A - Inframe prediction method used for video frequency coding - Google Patents

Inframe prediction method used for video frequency coding

Info

Publication number
CN1535027A
CN1535027A CNA2004100006663A CN200410000666A
Authority
CN
China
Prior art keywords
prediction
current block
pixel
block
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2004100006663A
Other languages
Chinese (zh)
Other versions
CN100536573C (en)
Inventor
孔德慧
张楠
尹宝才
王雁来
孙艳丰
岳文颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN 200410000666 priority Critical patent/CN100536573C/en
Publication of CN1535027A publication Critical patent/CN1535027A/en
Application granted granted Critical
Publication of CN100536573C publication Critical patent/CN100536573C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

An intra-frame prediction method for video coding that improves coding quality is disclosed. The original video stream captured by a camera is used as input, fed into a computer through a video capture card, and then processed by the computer using the JVT video coding technique. An operation rule is defined that computes the DC prediction mode from samples of already decoded pixels in adjacent blocks. Multiple prediction modes can be recombined and reordered.

Description

Intra-frame prediction method for video coding
Technical Field
The invention relates to the technical field of computer digital video coding and aims to provide a video coding system; its specific subject is intra-frame prediction technology.
Background
To transmit and store images within today's limited transmission bandwidth and storage media, images must be compression-coded. In moving-image compression coding, the coding algorithm is divided into intra-frame coding and inter-frame coding. The first image in a video sequence, or the first image after a scene change, is coded with intra-frame coding, and the other images are coded with inter-frame coding. In the prior art, intra coding uses spatial prediction to exploit the spatial statistical correlation in the source signal, while inter coding uses block-based inter prediction to exploit the temporal statistical correlation. During coding, a prediction mode is specified for each basic processing block of the image: if inter prediction is adopted, the motion vector of the current block is calculated according to the corresponding algorithm to obtain the prediction value of the current block; otherwise intra prediction is adopted, and the prediction is formed from the adjacent reconstructed pixels of the current block according to the corresponding intra prediction technique. The prediction residual is then transformed to remove the spatial correlation within the transform block and quantized; finally, the quantized transform coefficient information is encoded using the variable-length coding or arithmetic coding of the existing JVT technique.
At present, a video coding standard proposed by an audio/video standardization organization JVT formed by combining ITU-T and ISO/IEC JTC1 is a very popular coding standard at home and abroad, and is widely applied to the fields of television image compression, multimedia communication, multimedia computers, image databases, communication and the like. In JVT a macroblock consists of a 16 x 16 block of luminance samples and two corresponding blocks of chrominance samples, which are used as basic processing units for the video codec process.
In the intra prediction technique of the version 5.0 video coding standard provided by the JVT standard, the prediction of luminance or chrominance samples employs a prediction structure based on p × q blocks, where p denotes the number of columns of a block and q denotes the number of rows of the block. The pixel samples of the current block are predicted, according to certain prediction modes and their calculation rules, from the pixel samples already reconstructed above, above right, above left and below left of the p × q block (fig. 3), where i = 0, 1, ..., 2q-1 denotes pixel row coordinates, j = 0, 1, ..., 2p-1 denotes pixel column coordinates, t_i represents the pixel sample in the i-th row of the column to the left of the current block, s_j represents the pixel sample in the j-th column of the row above the current block, and a_ij represents the pixel sample in row i, column j of the current block; wherein,
in processing luminance samples, the JVT standard defines that when the blocks are 4 × 4, 4 × 8, 8 × 4 or 8 × 8 blocks, i.e. p is 4 or 8 and q is 4 or 8, 9 prediction modes are used; these prediction modes and their order are:
mode 0: vertical prediction (vertical prediction)
Mode 1: horizontal prediction (horizontal prediction)
Mode 2: DC prediction (DC prediction)
Mode 3: 45 degree direction prediction (diagonal down/left prediction)
Mode 4: 135 degree direction prediction (diagonal down/right prediction)
Mode 5: 112.5 degree direction prediction (vertical-right prediction)
Mode 6: 157.5 degree direction prediction (horizontal-down prediction)
Mode 7: 67.5 degree direction prediction (vertical-left prediction)
Mode 8: 22.5 degree direction prediction (horizontal-up prediction)
Wherein, except for the DC prediction mode, the remaining 8 prediction modes are called directional prediction modes, the numerals in fig. 4 designate the directions of the respective directional prediction modes, and 2, which is not labeled, denotes the DC prediction mode. The DC prediction mode is defined as:
i. If s_j (j = 0, 1, 2, ..., p-1) and t_i (i = 0, 1, 2, ..., q-1) are available, then all prediction samples ã_ij are equal to (Σ_{j=0..p-1} s_j + Σ_{i=0..q-1} t_i + (p+q)/2) / (p+q);
ii. If t_i (i = 0, 1, 2, ..., q-1) is not available and s_j (j = 0, 1, 2, ..., p-1) is available, then all prediction samples ã_ij are equal to (Σ_{j=0..p-1} s_j + p/2) / p;
iii. If s_j (j = 0, 1, 2, ..., p-1) is not available and t_i (i = 0, 1, 2, ..., q-1) is available, then all prediction samples ã_ij are equal to (Σ_{i=0..q-1} t_i + q/2) / q;
iv. If neither s_j (j = 0, 1, 2, ..., p-1) nor t_i (i = 0, 1, 2, ..., q-1) is available, then all prediction samples ã_ij are equal to 128, where i = 0, 1, 2, ..., q-1 denotes pixel row coordinates and j = 0, 1, 2, ..., p-1 denotes pixel column coordinates;
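For illustration, a minimal Python sketch of the JVT DC rule above follows; the function name and array layout are chosen for this example and are not part of the standard.

```python
import numpy as np

def jvt_dc_predict(s, t, p, q):
    """DC prediction for a p x q block following the JVT rule above.

    s: the p reconstructed samples of the row above the block, or None if unavailable.
    t: the q reconstructed samples of the column left of the block, or None if unavailable.
    Returns a q x p array in which every prediction sample holds the same DC value.
    """
    if s is not None and t is not None:
        dc = (sum(s) + sum(t) + (p + q) // 2) // (p + q)
    elif s is not None:          # only the row above is available
        dc = (sum(s) + p // 2) // p
    elif t is not None:          # only the column to the left is available
        dc = (sum(t) + q // 2) // q
    else:                        # no neighbouring samples reconstructed yet
        dc = 128
    return np.full((q, p), dc, dtype=np.int32)
```

For example, jvt_dc_predict(s=[100] * 8, t=None, p=8, q=8) fills the whole 8 × 8 block with the value 100.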
the JVT standard also defines that when processing luminance samples, 4 prediction modes are used when p q 16, which are the sum sequence:
mode 0: vertical prediction (vertical prediction)
Mode 1: horizontal prediction (horizontal prediction)
Mode 2: DC prediction (DC prediction)
Mode 3: plate prediction (plane prediction)
The DC prediction mode definition therein is consistent with the prediction mode definitions of luminance blocks of 4 × 4 blocks, 4 × 8 blocks, 8 × 4 blocks and 8 × 8 blocks.
When processing chroma samples, the JVT standard defines the 4 prediction modes and their order for an 8 × 8 block as:
mode 0: DC prediction (DC prediction)
Mode 1: horizontal prediction (horizontal prediction)
Mode 2: vertical prediction (vertical prediction)
Mode 3: plate prediction (plane prediction)
The DC prediction mode definition therein is consistent with the prediction mode definitions of luminance blocks of 4 × 4 blocks, 4 × 8 blocks, 8 × 4 blocks and 8 × 8 blocks.
The JVT standard has an elaborate prediction structure, but its prediction accuracy in the DC prediction mode is not high enough, and it uses many prediction modes for the samples; for example, luminance prediction uses 9 prediction modes for 4 × 4, 4 × 8, 8 × 4 and 8 × 8 blocks, which makes the whole algorithm highly complex.
Disclosure of Invention
The invention aims to overcome the defect of inaccurate prediction of a DC prediction mode, reduce the computational complexity of an intra-frame prediction algorithm in the coding process and provide an intra-frame prediction method for video coding.
The system block diagram of the invention is shown in fig. 1. In this intra-frame prediction method for video coding, an original video sequence is obtained from a video camera as input, converted into video sequence data by a video capture card, and fed into a computer; the video coding technology provided by JVT is adopted, and the processing and computation are carried out by the computer. The method comprises the following steps: the computer system receives the original video stream processed by the capture card, reads an image of the received video sequence, and divides the pixel samples of the image into 16 × 16 macroblocks from left to right and from top to bottom. Each macroblock read from computer memory is sent to the intra-frame prediction module. During coding, the prediction mode of the basic processing block in the image is specified: if inter-frame prediction is adopted, the motion vector of the current block is calculated according to the corresponding algorithm and the prediction value is obtained; otherwise intra-frame prediction is adopted, and the adjacent reconstructed pixels of the current block are used for prediction according to the corresponding intra-frame prediction technique, that is, sample prediction is performed either according to the JVT prediction modes together with the intra-frame prediction method provided by the invention, or according to the simplified prediction modes and the prediction mode calculation method provided by the invention. The prediction residual is then transformed according to the JVT standard method to remove the spatial correlation within the transform block, and quantized; the quantized transform coefficient information is then encoded using the variable-length coding or arithmetic coding of the existing JVT technique until the image is completely coded, and the image coding bit stream is finally output. The next picture in the received sequence is read, and so on until all pictures are encoded; the flow is shown in fig. 6.
In the intra prediction technique of the version 5.0 video coding standard provided by the JVT standard, the prediction of luminance or chrominance samples employs a prediction structure based on p × q blocks, where p denotes the number of columns of a block and q denotes the number of rows of the block. The pixel samples of the current block are predicted, according to certain prediction modes and their calculation rules, from the pixel samples already reconstructed above, above right, above left and below left of the p × q block, where i = 0, 1, ..., 2q-1 denotes pixel row coordinates, j = 0, 1, ..., 2p-1 denotes pixel column coordinates, t_i represents the pixel sample in the i-th row of the column to the left of the current block, s_j represents the pixel sample in the j-th column of the row above the current block, and a_ij represents the pixel sample in row i, column j of the current block; wherein,
1) In processing luminance samples, the JVT standard defines that when the blocks are 4 × 4, 4 × 8, 8 × 4 or 8 × 8, i.e. p is 4 or 8 and q is 4 or 8, 9 prediction modes are used; these prediction modes and their order are:
mode 0: vertical prediction
Mode 1: horizontal prediction
Mode 2: DC prediction
Mode 3: 45 degree direction prediction
Mode 4: 135 degree direction prediction
Mode 5: 112.5 degree Direction prediction
Mode 6: 157.5 degree direction prediction
Mode 7: 67.5 degree direction prediction
Mode 8: 22.5 degree direction prediction
Wherein, except for the DC prediction mode, the remaining 8 prediction modes are called directional prediction modes;
2) When processing luminance samples, the JVT standard also defines that 4 prediction modes are used when p = q = 16; these prediction modes and their order are:
mode 0: vertical prediction
Mode 1: horizontal prediction
Mode 2: DC prediction
Mode 3: plate prediction
3) In processing chroma samples, the JVT standard defines the 4 prediction modes and their order for an 8 × 8 block as:
mode 0: DC prediction
Mode 1: horizontal prediction
Mode 2: vertical prediction
Mode 3: plate prediction
The invention is characterized in that, after macroblock data is read from computer memory and enters the intra prediction module, the prediction of pixel samples in each 16 × 16 macroblock selected for intra prediction consists of the following steps, performed in sequence:
(1) taking a 16 × 16 macroblock as a current prediction macroblock;
(2) dividing the macroblock into p × q blocks in order from left to right and top to bottom, p representing the number of columns of a block, which may equal 4, 8, or 16, and q representing the number of rows of a block, which may equal 4, 8, or 16;
(3) taking a p × q block as a current block;
(4) predicting a pixel luminance or chrominance sample value of the current block p × q;
(5) taking the next p × q block as the current block, and repeating the processes from the step (3) to the step (5) until the macro block is predicted completely;
when predicting the pixel brightness or chroma sample value of the current block p × q, the DC prediction mode method is mainly characterized in that:
the current block is calculated for its DC prediction mode using samples of already decoded pixels in neighboring blocks (U, L, UR, UL, DL), wherein the symbol C is defined to represent the current block, the symbol U to represent an upper block adjacent to the current block, the symbol L to represent a left block adjacent to the current block, the symbol UL to represent an upper left block adjacent to the current block, the symbol UR to represent an upper right block adjacent to the current block, and the symbol DL to represent a lower left block adjacent to the current block;
1) when the upper, upper right, upper left and lower left blocks adjacent to the current block can be used, defining that all pixel predicted values of the current block in the DC prediction mode can be obtained by using a method similar to 8 prediction directions of the JVT standard, but the filtering method is different from the filtering method of the 8 prediction directions;
2) when the upper block adjacent to the current block is available, defining that all pixel predicted values of the current block in the DC prediction mode can be obtained by a method similar to 8 prediction directions of the JVT standard, but different from a filtering method of the 8 prediction directions;
3) when a left block adjacent to the current block is available, defining that all pixel predicted values of the current block in the DC prediction mode can be obtained by using a method similar to 8 prediction directions of the JVT standard, but the filtering method is different from the filtering method of the 8 prediction directions;
4) when the upper and left blocks adjacent to the current block are not available, defining the predicted value of all pixels of the current block to be 128 in the DC prediction mode.
The present invention is further characterized in that, after entering the intra prediction module, the DC prediction mode can be defined by the following method:
1) when the upper, upper right, upper left and lower left blocks adjacent to the current block are all available, the predicted values of all pixels of the current block in the DC prediction mode are defined to be obtained by a bidirectional prediction method, see DC0 in fig. 5;
2) when the upper block adjacent to the current block is available, all pixel prediction values of the current block in the DC prediction mode can be obtained by an approximately vertical direction prediction method, see DC1 in fig. 5; although this method is consistent with the direction of vertical prediction, the adjacent pixels selected in the computation and the filtering method are different;
3) when the left block adjacent to the current block is available, the predicted values of all pixels of the current block in the DC prediction mode can be obtained by an approximately horizontal direction prediction method, see DC2 in fig. 5; although this method is consistent with the direction of horizontal prediction, the adjacent pixels selected in the computation and the filtering method are different;
4) when neither the upper block nor the left block adjacent to the current block is available, the predicted value of all pixels of the current block in the DC prediction mode is defined as 128, which is the same as the existing JVT standard.
The intra prediction method for video coding according to the present invention is further characterized in that, after entering the intra prediction module, the DC prediction mode may specifically define values by using the following method:
(1) First, the reconstructed adjacent pixels t_i, s_j and f of the current block are low-pass filtered at the corresponding points according to the JVT method and placed into an array; this array is denoted EP, and its m-th array variable is denoted EP_m, where i = 0, 1, ..., 2q-1 denotes pixel row coordinates, j = 0, 1, ..., 2p-1 denotes pixel column coordinates, p × q denotes the block size, p denotes the number of columns of the block, which may equal 4, 8, or 16, q denotes the number of rows of the block, which may equal 4, 8, or 16, t_i represents the pixel sample in the i-th row of the column to the left of the current block, s_j represents the pixel sample in the j-th column of the row above the current block, and a_ij represents the pixel sample in row i, column j of the current block; m represents the subscript variable of the array EP;
in the following calculations, the symbol ">>" represents a bitwise right-shift operation;
the EP is derived from the following algorithm:
A. If the current macroblock has reconstructed pixels adjacent to its upper edge, i.e. s_j is available, where j = 0, 1, 2, ..., p-1, then
a) EP(j+1) = s(j); j = 0, ..., p-1
b) If the current macroblock has reconstructed pixels adjacent to its upper-right edge, i.e. s_j is available, where
j = p, p+1, p+2, ..., 2p-1, then
EP(1+j+p) = s(p+j); j = 0, ..., p-1
EP(1+j+p) = s(p+p-1); j = p, ..., q-1
otherwise
EP(1+j+p) = EP(p); j = 0, ..., p-1
c) EP(1+j+p) = EP(p+j); j = p, ..., q+1
d) EP(0) = s0
B. If the current macroblock has reconstructed pixels adjacent to its left edge, i.e. t_i is available, where i = 0, 1, 2, ..., q-1, then
a) EP(-1-i) = t(i); i = 0, ..., q-1
b) If the current macroblock has reconstructed pixels adjacent to its lower-left edge, i.e. t_i is available, where i = q, q+1, q+2, ..., 2q-1, then
EP(-1-i-q) = t(q+i); i = 0, ..., q-1
EP(-1-i-q) = t(q+q-1); i = q, ..., p-1
otherwise
EP(-1-i-q) = EP(-q); i = 0, ..., q-1
c) EP(-1-i-q) = EP(-i-q); i = p, p+1
d) EP(0) = t0
C. If s_j is available and t_i is available, where i = 0, 1, 2, ..., q-1 and j = 0, 1, 2, ..., p-1, then
EP(0) = f;
D. Define the variable last_pix equal to EP(-(p+q));
setting i equal to -(p+q), where i represents a counter, the following steps are performed:
a) let the variable new_pix equal (last_pix + (EP(i) << 1) + EP(i+1) + 2) >> 2;
b) let the variable last_pix equal EP(i);
c) let the array variable EP(i) with index i equal new_pix;
d) increase i by 1 and go to a), until i is greater than (p+q);
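As a concrete illustration, here is a rough Python sketch of the EP construction described above for the general p × q case. The dict-based signed indexing, the availability tests based on the lengths of s and t, and the boundary padding via .get() are my reading of the rules, not a reference implementation; the extra padding steps for rectangular blocks are omitted for brevity.

```python
def build_ep(s, t, f, p, q):
    """Sketch of the EP reference array described above (signed indices in a dict).

    s: top-row neighbours s_0..s_{2p-1} (None if the upper edge is unavailable)
    t: left-column neighbours t_0..t_{2q-1} (None if the left edge is unavailable)
    f: reconstructed top-left corner pixel (may be None)
    """
    ep = {}
    if s is not None:                              # A. upper neighbours available
        for j in range(p):
            ep[j + 1] = s[j]
        if len(s) >= 2 * p:                        # b. upper-right neighbours available (assumed test)
            for j in range(p):
                ep[1 + j + p] = s[p + j]
        else:
            for j in range(p):
                ep[1 + j + p] = ep[p]              # pad with the last top sample
        ep[0] = s[0]
    if t is not None:                              # B. left neighbours available
        for i in range(q):
            ep[-1 - i] = t[i]
        if len(t) >= 2 * q:                        # b. lower-left neighbours available (assumed test)
            for i in range(q):
                ep[-1 - i - q] = t[q + i]
        else:
            for i in range(q):
                ep[-1 - i - q] = ep[-q]            # pad with the last left sample
        ep[0] = t[0]
    if s is not None and t is not None:            # C. corner pixel
        ep[0] = f
    # D. low-pass filtering pass over the whole array; missing entries default to 128
    last_pix = ep.get(-(p + q), 128)
    for i in range(-(p + q), p + q + 1):
        cur = ep.get(i, 128)
        new_pix = (last_pix + (cur << 1) + ep.get(i + 1, cur) + 2) >> 2
        last_pix = cur                             # keep the unfiltered value for the next step
        ep[i] = new_pix
    return ep
```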
(2) Operation rule of the DC prediction mode
i. If s_j and t_i are both available, then all prediction samples ã_ij are equal to (EP_i + EP_j) >> 1, see DC0 in fig. 5, where i = 0, 1, 2, ..., q-1 denotes pixel row coordinates and j = 0, 1, 2, ..., p-1 denotes pixel column coordinates;
ii. If t_i is not available and s_j is available, then all prediction samples ã_ij are equal to EP_j, see DC1 in fig. 5, where i = 0, 1, 2, ..., q-1 denotes pixel row coordinates and j = 0, 1, 2, ..., p-1 denotes pixel column coordinates;
iii. If s_j is not available and t_i is available, then all prediction samples ã_ij are equal to EP_i, see DC2 in fig. 5, where i = 0, 1, 2, ..., q-1 denotes pixel row coordinates and j = 0, 1, 2, ..., p-1 denotes pixel column coordinates;
iv. If neither s_j nor t_i is available, then all prediction samples ã_ij are equal to 128, where i = 0, 1, 2, ..., q-1 denotes pixel row coordinates and j = 0, 1, 2, ..., p-1 denotes pixel column coordinates.
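Continuing the earlier sketch, the operation rule might look like the following; mapping EP_j to ep[j+1] (top neighbours) and EP_i to ep[-1-i] (left neighbours) follows the EP construction above and is my interpretation of the text, not a reference implementation.

```python
import numpy as np

def dc_predict(ep, p, q, top_available, left_available):
    """Sketch of the proposed DC operation rule using the EP array built above.

    top_available / left_available indicate whether s_j and t_i exist.
    """
    pred = np.empty((q, p), dtype=np.int32)
    for i in range(q):
        for j in range(p):
            if top_available and left_available:     # case i: bidirectional (DC0)
                pred[i, j] = (ep[-1 - i] + ep[j + 1]) >> 1
            elif top_available:                      # case ii: approximately vertical (DC1)
                pred[i, j] = ep[j + 1]
            elif left_available:                     # case iii: approximately horizontal (DC2)
                pred[i, j] = ep[-1 - i]
            else:                                    # case iv: nothing available
                pred[i, j] = 128
    return pred
```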
The intra prediction method for video coding according to the present invention is further characterized in that, for hardware implementation, the structure of the 9 luminance prediction modes is very complex when the p × q block is 4 × 4, 4 × 8, 8 × 4 or 8 × 8, and it is desirable to compress the image with a simpler set of prediction modes while ensuring that compression performance does not degrade. The invention therefore provides that a subset of the 9 prediction modes may be selected and re-ordered according to the coding requirements; for example, an intra luminance sample prediction method based on 5 prediction modes may be adopted, namely the DC prediction proposed by the present invention together with the vertical, horizontal, 45-degree direction and 135-degree direction prediction modes used in JVT:
mode 0: vertical prediction (vertical prediction)
Mode 1: horizontal prediction (horizontal prediction)
Mode 2: DC prediction (DC prediction, the DC prediction mode proposed by the present invention)
Mode 3: 45 degree direction prediction (diagonal down/left prediction)
Mode 4: 135 degree direction prediction (diagonal down/right prediction)
Compared with the 9 prediction modes adopted by the original JVT, the simplified prediction structure omits the prediction calculations in 4 directions, greatly reducing the computational complexity.
Likewise, for 16 × 16 luma blocks and 8 × 8 chroma blocks, only the DC prediction mode proposed by the present invention and one or two prediction modes selected from the vertical, horizontal, and plane prediction modes in JVT may be used.
Compared with the intra-frame prediction method of the JVT standard, the intra-frame prediction method for video coding has the advantages that the DC prediction mode enables prediction to be more accurate, and the coding quality of images is improved; the simplified prediction mode greatly reduces the complexity of calculation under the condition of ensuring that the image coding performance is not reduced.
Drawings
FIG. 1 is a block diagram of a system;
FIG. 2 is a diagram of the locations of a current block and its neighboring blocks;
FIG. 3 is a block diagram of a prediction structure for p × q block samples;
FIG. 4 is a diagram of the 8 directional prediction modes for p × q blocks of luminance samples;
FIG. 5 is a schematic diagram of DC prediction mode;
FIG. 6 is a system flow diagram;
FIG. 7 is a diagram of the prediction structure of luminance samples for the 8 × 8 block;
FIG. 8 is a graph of sample signal-to-noise ratio and bit rate for luminance samples under 9 prediction modes as defined by the present invention and the JVT standard;
FIG. 9 is a graph of sample signal-to-noise ratio and bit rate for luminance samples in 5 prediction modes defined by the present invention and in 9 prediction modes defined by the JVT standard;
FIG. 10 is a graph of the signal-to-noise ratio and bit rate of the first chroma sample U under the 2 prediction modes of DC prediction and plane prediction as defined by the present invention and under the 4 prediction modes defined by the JVT standard;
FIG. 11 is a graph of the signal-to-noise ratio and bit rate of the second chroma sample V under the 2 prediction modes of DC prediction and plane prediction as defined by the present invention and under the 4 prediction modes defined by the JVT standard;
Detailed Description
According to the technical scheme of the invention, as shown in fig. 1 and fig. 6, an original video sequence is obtained from a video camera as input, converted into a video data stream by a video capture card, and fed into a computer; using the video coding technology provided by JVT, intra-frame prediction based on 8 × 8 blocks is performed on the luminance samples of the images in the sequence. The specific steps are as follows:
1. reading an image in the sequence;
2. dividing an image into macroblocks in a size of 16 × 16;
3. taking a 16 x 16 macro block as a current prediction macro block;
4. dividing the macro block into 8 x 8 blocks from left to right and from top to bottom;
5. taking an 8 x 8 block as a current block;
6. predicting the pixel luminance samples of the current 8 × 8 block;
the positions of the already coded pixel luminance samples around the current 8 × 8 block are shown in fig. 7, and the 9 prediction modes are defined in the following order:
mode 0: vertical prediction (vertical prediction)
Mode 1: horizontal prediction (horizontal prediction)
Mode 2: DC prediction (DC prediction)
Mode 3: 45 degree direction prediction (diagonal down/left prediction)
Mode 4: 135 degree direction prediction (diagonal down/right prediction)
Mode 5: 112.5 degree direction prediction (vertical-right prediction)
Mode 6: 157.5 degree direction prediction (horizontal-down prediction)
Mode 7: 67.5 degree direction prediction (vertical-left prediction)
Mode 8: 22.5 degree direction prediction (horizontal-up prediction)
According to the intra prediction method for video coding proposed by the present invention, the prediction modes of the block in 9 modes are defined as follows:
1. First, the reconstructed adjacent pixels t_i, s_j and f of the current block are low-pass filtered at the corresponding points according to the JVT method and placed into an array; this array is denoted EP, and its m-th array variable is denoted EP_m, where i = 0, 1, ..., 2q-1 denotes pixel row coordinates, j = 0, 1, ..., 2p-1 denotes pixel column coordinates, the block size is p × q, p represents the number of columns of the block, which may equal 4, 8, or 16, q represents the number of rows of the block, which may equal 4, 8, or 16, t_i represents the pixel sample in the i-th row of the column to the left of the current block, s_j represents the pixel sample in the j-th column of the row above the current block, and a_ij represents the pixel sample in row i, column j of the current block; m represents the subscript variable of the array EP;
in the following calculations, the symbol ">>" represents a bitwise right-shift operation.
The EP is derived from the following algorithm:
A. If the current macroblock has reconstructed pixels adjacent to its upper edge, i.e. s_j (j = 0, 1, 2, ..., 7) is available,
then
a) EP(j+1) = s(j); j = 0, ..., 7
b) If the current macroblock has reconstructed pixels adjacent to its upper-right edge, i.e. s_j (j = 8, 9, ..., 15) is
available, then
EP(1+j+p) = s(p+j); j = 0, ..., 7
otherwise
EP(1+j+p) = EP(p); j = 0, ..., 7
c) EP(1+j+p) = EP(p+j); j = 8, 9
d) EP(0) = s0
B. If the current macroblock has reconstructed pixels adjacent to its left edge, i.e. t_i (i = 0, 1, 2, ..., 7) is available,
then
a) EP(-1-i) = t(i); i = 0, ..., 7
b) If the current macroblock has reconstructed pixels adjacent to its lower-left edge, i.e. t_i (i = 8, 9, ..., 15)
is available, then
EP(-1-i-q) = t(q+i); i = 0, ..., 7
otherwise
EP(-1-i-q) = EP(-q); i = 0, ..., 7
c) EP(-1-i-q) = EP(-i-q); i = 8, 9
d) EP(0) = t0
C. If s_j (j = 0, 1, 2, ..., 7) is available and t_i (i = 0, 1, 2, ..., 7) is available, then
EP(0) = f;
D. Define the variable last_pix equal to EP(-16);
setting i equal to -16, where i represents a counter, the following steps are performed:
a) let the variable new_pix equal (last_pix + (EP(i) << 1) + EP(i+1) + 2) >> 2;
b) let the variable last_pix equal EP(i);
c) let the array variable EP(i) with index i equal new_pix;
d) increase i by 1 and go to a), until i is greater than 16;
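In terms of the earlier build_ep and dc_predict sketches (hypothetical helpers, not JVT reference code), this 8 × 8 case corresponds to the following call; the neighbour values are purely illustrative.

```python
# Hypothetical neighbour data for one 8 x 8 luma block; values are illustrative only.
s = [100, 101, 102, 103, 104, 105, 106, 107,   # s_0..s_7  (row above the block)
     108, 109, 110, 111, 112, 113, 114, 115]   # s_8..s_15 (top-right extension)
t = [ 98,  97,  96,  95,  94,  93,  92,  91,   # t_0..t_7  (column left of the block)
      90,  89,  88,  87,  86,  85,  84,  83]   # t_8..t_15 (bottom-left extension)
f = 99                                          # reconstructed top-left corner pixel

ep = build_ep(s, t, f, p=8, q=8)                # EP(-16)..EP(16) after low-pass filtering
pred = dc_predict(ep, p=8, q=8, top_available=True, left_available=True)
```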
2. calculating the predicted value in each mode
a. Mode 0: vertical prediction
The requirement for using this mode is that s_j (j = 0, 1, 2, ..., 7) is available; the prediction samples ã_0ij are generated as follows:
ã_000 = ã_010 = ã_020 = ... = ã_070 = s_0;
ã_001 = ã_011 = ã_021 = ... = ã_071 = s_1;
ã_002 = ã_012 = ã_022 = ... = ã_072 = s_2;
...
ã_007 = ã_017 = ã_027 = ... = ã_077 = s_7;
b. Mode 1: horizontal prediction
The requirement for using this mode is that t_i (i = 0, 1, 2, ..., 7) is available; the prediction samples ã_1ij are generated as follows:
ã_100 = ã_101 = ã_102 = ... = ã_107 = t_0;
ã_110 = ã_111 = ã_112 = ... = ã_117 = t_1;
ã_120 = ã_121 = ã_122 = ... = ã_127 = t_2;
...
ã_170 = ã_171 = ã_172 = ... = ã_177 = t_7;
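For reference, a compact Python sketch of these two copy-based modes follows; the helper names are illustrative, not JVT code.

```python
import numpy as np

def vertical_predict(s, p=8, q=8):
    """Mode 0: every row of the block repeats the top-row neighbours s_0..s_{p-1}."""
    return np.tile(np.asarray(s[:p], dtype=np.int32), (q, 1))

def horizontal_predict(t, p=8, q=8):
    """Mode 1: every column of the block repeats the left-column neighbours t_0..t_{q-1}."""
    return np.tile(np.asarray(t[:q], dtype=np.int32).reshape(q, 1), (1, p))
```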
c. Mode 2: DC prediction
i. If s_j (j = 0, 1, 2, ..., 7) and t_i (i = 0, 1, 2, ..., 7) are available, then all prediction samples ã_2ij are equal to (EP_i + EP_j) >> 1;
ii. If t_i (i = 0, 1, 2, ..., 7) is not available and s_j (j = 0, 1, 2, ..., 7) is available, then all prediction samples ã_2ij are equal to EP_j;
iii. If s_j (j = 0, 1, 2, ..., 7) is not available and t_i (i = 0, 1, 2, ..., 7) is available, then all prediction samples ã_2ij are equal to EP_i;
iv. If neither s_j (j = 0, 1, 2, ..., 7) nor t_i (i = 0, 1, 2, ..., 7) is available, then all prediction samples ã_2ij are equal to 128, where i = 0, 1, 2, ..., 7 denotes pixel row coordinates and j = 0, 1, 2, ..., 7 denotes pixel column coordinates;
the other prediction modes and their operation rules are the same as the JVT standard.
7. Determining an optimal prediction mode for a current block
a. Define k to represent the current prediction mode, with initial value 0;
b. Obtain the prediction residual Δ_k under prediction mode k from the following formula:
Δ_k = a_ij - ã_kij
where a_ij is the original pixel luminance sample and ã_kij is the predicted pixel luminance sample in mode k; k denotes the prediction mode number; i = 0, 1, 2, ..., q-1 denotes pixel row coordinates; j = 0, 1, 2, ..., p-1 denotes pixel column coordinates;
c. Perform DCT transformation (DCT refers to the discrete cosine transform), quantization and entropy coding on the prediction residual of each pixel using the coding method in JVT, and calculate the number of coding bits of the current block in the current mode; after the DCT transformation and quantization of the prediction residual of each pixel, perform inverse quantization and inverse DCT transformation and add the predicted value; the luminance sample of each pixel in the reconstructed block is recorded as â_kij, where k denotes the prediction mode number; i = 0, 1, 2, ..., q-1 denotes pixel row coordinates; j = 0, 1, 2, ..., p-1 denotes pixel column coordinates;
d. Calculate the rate-distortion cost of the block in the current prediction mode using the method in JVT, recorded as rdcost:
distortion = Σ_{i,j} (a_ij - â_kij)²
rdcost = distortion + lambda × rate
where distortion is the sum of squared differences between the original luminance samples of all pixels of the current block and their reconstructed values, lambda is a constant, and rate is the number of bits used to encode the current block in the current mode;
e. Increase k by 1 and repeat steps b, c, d and e until all prediction modes of the block have been evaluated;
f. Compare the rdcost of each mode and select the mode with the minimum rdcost as the optimal prediction mode of the current block; a sketch of this selection is given below;
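The following Python sketch illustrates this rate-distortion selection; the candidate list, the lambda value and the bit counting are placeholders rather than the JVT reference implementation, and the blocks are assumed to be numpy arrays.

```python
import numpy as np

def rd_cost(original, reconstructed, rate_bits, lam):
    """rdcost = distortion + lambda * rate, where distortion is the sum of squared
    differences between the original and reconstructed samples of the block."""
    distortion = int(np.sum((original.astype(np.int64) - reconstructed.astype(np.int64)) ** 2))
    return distortion + lam * rate_bits

def select_best_mode(original, candidates, lam=10.0):
    """candidates: list of (mode_index, reconstructed_block, rate_bits) tuples
    produced by coding the block in each prediction mode; lam is a placeholder constant."""
    best_mode, best_cost = None, float("inf")
    for mode, recon, rate_bits in candidates:
        cost = rd_cost(original, recon, rate_bits, lam)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode, best_cost
```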
8. Take the predicted value in the optimal prediction mode as the final predicted value of the block, recorded as ã_ij, where i = 0, 1, 2, ..., q-1 denotes pixel row coordinates and j = 0, 1, 2, ..., p-1 denotes pixel column coordinates; take the reconstructed value in the optimal prediction mode as the final reconstructed value of the block, recorded as â_ij, where i = 0, 1, 2, ..., q-1 denotes pixel row coordinates and j = 0, 1, 2, ..., p-1 denotes pixel column coordinates;
9. taking the next 8 x 8 block as the current block, and repeating the processes of steps 6 to 9 until the macro block is completely encoded;
10. and taking the next macro block as the current prediction macro block, and repeating the processes of the steps 3 to 10 until the whole image is coded.
11. Take the next image and repeat steps 2 to 11 until the coding of the whole sequence is completed.
If the simplified prediction mode is selected, only the prediction values of the prediction modes selected as required in the step 6 need to be calculated each time, for example, only the prediction values of the first 5 prediction modes can be calculated.
Example results
1. Using the 9-mode prediction structure with the improved DC prediction mode, 10-frame all-intra prediction tests were carried out on a 1280 × 720 high-definition video sequence with 8 × 8 blocks as the basic processing blocks. The luminance sample signal-to-noise ratio (denoted PSNRY) and the bit rate (denoted Bitrate) were compared with those of the intra prediction technique in the existing JVT standard at different quantization values (table below), and the corresponding curves are plotted in FIG. 8.
High-definition video sequence test results: (frame rate: 30 Hz, 10 frames, 1280 × 720)
qp=29 qp=32 qp=37 qp=43 Gain
JVT PSNRY 40.74 39.07 36.44 33.65
Bitrate 49975.51 39797.21 26065.39 16003.66
The invention PSNRY 40.74 39.07 36.45 33.66
Bitrate 48273.29 38174.86 24659.59 15053.78 0.317872
As can be seen from the figure, the curves obtained with the 9 prediction modes including the improved DC prediction mode proposed by the present invention lie above the curves obtained with the JVT intra prediction method, which shows that the invention improves image compression performance without any increase in complexity.
2. Using the simplified prediction structure proposed by the invention with only the first 5 prediction modes (keeping the original order), 10-frame all-intra tests were carried out on a 1280 × 720 high-definition video sequence. The sample signal-to-noise ratio and bit rate were compared with those of the intra prediction technique in the existing JVT standard at different quantization values (table below), and the sample signal-to-noise ratio versus bit rate curves are plotted in FIG. 9.
High definition video sequence test results: (frame rate: 30Hz, 10 frames, 1280 x 720)
qp=29 qp=32 qp=37 qp=43 Gain
JVT PSNRY 40.74 39.07 36.44 33.65
Bitrate 49975.51 39797.21 26065.39 16003.66
The invention PSNRY 40.71 39.05 36.42 33.62
Bitrate 49494.07 39244.87 25611.6 15811.37 0.070442
It can be seen from the figure that the curve obtained by the simplified prediction mode method proposed by the present invention substantially coincides with the curve obtained by the JVT intra prediction method, which shows that the present invention can maintain the compression performance of the image well under the condition of reducing a great deal of complexity.
3. Using the simplified prediction structure proposed by the invention with 2 prediction modes for the two chrominance samples, DC prediction (mode 0) and plane prediction (mode 1), 10-frame all-intra tests were carried out on a 1280 × 720 high-definition video sequence. The sample signal-to-noise ratios and bit rates were compared with those of the intra prediction technique in the existing JVT standard at different quantization values (table below), and the chrominance sample signal-to-noise ratio (denoted PSNRU and PSNRV) versus bit rate curves are plotted in FIG. 10 and FIG. 11.
High definition video sequence test results: (frame rate: 30Hz, 10 frames, 1280 x 720)
QP=27 QP=30 QP=35 QP=40 Gain
JVT PSNRU 43.73 42.51 40.39 38.42
PSNRV 45.02 43.81 41.73 39.75
Bitrate 10852.48 8886.85 6255.15 4354.17
The invention PSNRU 43.65 42.44 40.34 38.36
PSNRV 44.95 43.74 41.69 39.71 -0.09104
Bitrate 10898.41 8916.32 6299.61 4372.89 -0.08189
It can be seen from the figure that the curve obtained by the simplified prediction mode method proposed by the present invention substantially coincides with the curve obtained by the JVT intra prediction method, which shows that the present invention can maintain the compression performance of the image well under the condition of reducing a great deal of complexity.

Claims (4)

1. An intra-frame prediction method for video coding, in which the video coding obtains an original video stream from a video camera as input, the original video stream is converted into a video data stream by a video capture card and enters a computer, the video coding technique provided by JVT is adopted, and the computer performs the processing and computation; the method steps are: the computer system receives the original video stream processed by the capture card, then reads out an image of the received video sequence, and divides the pixel samples of the image into 16 × 16 macroblocks from left to right and from top to bottom; the macroblock read from the computer memory is sent to the intra-frame prediction module; during coding, the prediction mode of the basic processing block in the image is specified: if inter-frame prediction is adopted, the motion vector of the current block is calculated according to the corresponding algorithm and the prediction value is obtained; otherwise intra-frame prediction is adopted, and the adjacent reconstructed pixels of the current block are used for prediction according to the corresponding intra-frame prediction technique; the prediction residual is then transformed according to the JVT standard method to remove the spatial correlation in the transform block, and then quantized; then the quantized transform coefficient information is coded using the variable-length coding or arithmetic coding of the JVT technique until the image coding is completed, and the image coding bit stream is finally output; the next image in the received sequence is read, and the steps are repeated until all the images are coded;
in the intra prediction technique of the version 5.0 video coding standard provided by the JVT standard, the prediction of luminance or chrominance samples uses a prediction structure based on p × q blocks, where p denotes the number of columns of a block and q denotes the number of rows of the block; the pixel samples of the current block are predicted, according to certain prediction modes and their calculation rules, from the pixel samples that have been reconstructed above, above right, above left and below left of the p × q block, where i = 0, 1, …, 2q-1 denotes pixel row coordinates, j = 0, 1, …, 2p-1 denotes pixel column coordinates, t_i represents the pixel sample in the i-th row of the column to the left of the current block, s_j represents the pixel sample in the j-th column of the row above the current block, and a_ij represents the pixel sample in row i, column j of the current block; wherein,
1) In processing luminance samples, the JVT standard defines that when the blocks are 4 × 4, 4 × 8, 8 × 4 or 8 × 8, i.e. p is 4 or 8 and q is 4 or 8, 9 prediction modes are used; these prediction modes and their order are:
mode 0: vertical prediction
Mode 1: horizontal prediction
Mode 2: DC prediction
Mode 3: 45 degree direction prediction
Mode 4: 135 degree direction prediction
Mode 5: 112.5 degree Direction prediction
Mode 6: 157.5 degree direction prediction
Mode 7: 67.5 degree direction prediction
Mode 8: 22.5 degree direction prediction
Wherein, except for the DC prediction mode, the remaining 8 prediction modes are called directional prediction modes;
2) When processing luminance samples, the JVT standard also defines that 4 prediction modes are used when p = q = 16; these prediction modes and their order are:
mode 0: vertical prediction
Mode 1: horizontal prediction
Mode 2: DC prediction
Mode 3: plate prediction
3) In processing chroma samples, the JVT standard defines the 4 prediction modes and their order for an 8 × 8 block as:
mode 0: DC prediction
Mode 1: horizontal prediction
Mode 2: vertical prediction
Mode 3: plate prediction
The invention is characterized in that, after macroblock data is read from computer memory and enters the intra prediction module, the prediction of pixel samples in each 16 × 16 macroblock selected for intra prediction consists of the following steps, performed in sequence:
(1) taking a 16 × 16 macroblock as a current prediction macroblock;
(2) dividing the macroblock into p × q blocks in order from left to right and top to bottom, p representing the number of columns of a block, which may equal 4, 8, or 16, and q representing the number of rows of a block, which may equal 4, 8, or 16;
(3) taking a p × q block as a current block;
(4) predicting a pixel luma or chroma sample value of the current block p × q;
(5) taking the next p × q block as the current block, and repeating the processes from (3) to (5) until the macro block is predicted;
when predicting the pixel brightness or chroma sample value of the current block p × q, the DC prediction mode method is mainly characterized in that:
the current block is calculated for its DC prediction mode using samples of already decoded pixels in neighboring blocks (U, L, UR, UL, DL), wherein the symbol C is defined to represent the current block, the symbol U to represent an upper block adjacent to the current block, the symbol L to represent a left block adjacent to the current block, the symbol UL to represent an upper left block adjacent to the current block, the symbol UR to represent an upper right block adjacent to the current block, and the symbol DL to represent a lower left block adjacent to the current block;
1) when the upper, upper right, upper left and lower left blocks adjacent to the current block can be used, defining that all pixel predicted values of the current block in the DC prediction mode can be obtained by using a method similar to 8 prediction directions of the JVT standard, but the filtering method is different from the filtering method of the 8 prediction directions;
2) when the upper block adjacent to the current block is available, defining that all pixel predicted values of the current block in the DC prediction mode can be obtained by a method similar to 8 prediction directions of the JVT standard, but different from a filtering method of the 8 prediction directions;
3) when a left block adjacent to the current block is available, defining that all pixel predicted values of the current block in the DC prediction mode can be obtained by using a method similar to 8 prediction directions of the JVT standard, but the filtering method is different from the filtering method of the 8 prediction directions;
4) when the upper and left blocks adjacent to the current block are not available, defining the predicted value of all pixels of the current block to be 128 in the DC prediction mode.
2. The method of claim 1, wherein the DC prediction mode is defined as follows after entering an intra prediction module:
1) when the upper, upper right, upper left and lower left blocks adjacent to the current block can be used, all pixel predicted values of the current block can be obtained by a bidirectional prediction method under the DC prediction mode;
2) when the upper block adjacent to the current block is available, all pixel predicted values of the current block in the DC prediction mode are defined to be obtained by an approximately vertical direction prediction method; although this method is consistent with the vertical prediction direction, the adjacent pixels selected in the computation and the filtering method are different;
3) when the left block adjacent to the current block is available, all pixel predicted values of the current block in the DC prediction mode are defined to be obtained by an approximately horizontal direction prediction method; although this method is consistent with the horizontal prediction direction, the adjacent pixels selected in the computation and the filtering method are different;
4) and when the upper and left blocks adjacent to the current block are not available, defining the predicted value of all pixels of the current block to be 128 in the DC prediction mode.
3. The method according to claim 1 or 2, wherein the DC prediction mode, after entering the intra prediction module, specifically adopts the following values:
(1) First, the reconstructed adjacent pixels t_i, s_j and f of the current block are low-pass filtered at the corresponding points according to the JVT method and placed into an array; this array is denoted EP, and its m-th array variable is denoted EP_m, where i = 0, 1, …, 2q-1 denotes pixel row coordinates, j = 0, 1, …, 2p-1 denotes pixel column coordinates, p × q denotes the block size, p denotes the number of columns of the block, which may equal 4, 8, or 16, q denotes the number of rows of the block, which may equal 4, 8, or 16, t_i represents the pixel sample in the i-th row of the column to the left of the current block, s_j represents the pixel sample in the j-th column of the row above the current block, and a_ij represents the pixel sample in row i, column j of the current block; m represents the subscript variable of the array EP;
in the following calculations, the symbol ">>" represents a bitwise right-shift operation;
the EP is derived from the following algorithm:
A. If the current macroblock has reconstructed pixels adjacent to its upper edge, i.e. s_j is available, where j = 0, 1, 2, …, p-1, then
a) EP(j+1) = s(j); j = 0, …, p-1
b) If the current macroblock has reconstructed pixels adjacent to its upper-right edge, i.e. s_j is available, where
j = p, p+1, p+2, …, 2p-1, then
EP(1+j+p) = s(p+j); j = 0, …, p-1
EP(1+j+p) = s(p+p-1); j = p, …, q-1
otherwise
EP(1+j+p) = EP(p); j = 0, …, p-1
c) EP(1+j+p) = EP(p+j); j = p, …, q+1
d) EP(0) = s0
B. If the current macroblock has reconstructed pixels adjacent to its left edge, i.e. t_i is available, where i = 0, 1, 2, …, q-1, then
a) EP(-1-i) = t(i); i = 0, …, q-1
b) If the current macroblock has reconstructed pixels adjacent to its lower-left edge, i.e. t_i is available, where
i = q, q+1, q+2, …, 2q-1, then
EP(-1-i-q) = t(q+i); i = 0, …, q-1
EP(-1-i-q) = t(q+q-1); i = q, …, p-1
otherwise
EP(-1-i-q) = EP(-q); i = 0, …, q-1
c) EP(-1-i-q) = EP(-i-q); i = p, p+1
d) EP(0) = t0
C. If s_j is available and t_i is available, where i = 0, 1, 2, …, q-1 and j = 0, 1, 2, …, p-1, then
EP(0) = f;
D. Define the variable last_pix equal to EP(-(p+q));
setting i equal to -(p+q), where i represents a counter, the following steps are performed:
a) let the variable new_pix equal (last_pix + (EP(i) << 1) + EP(i+1) + 2) >> 2;
b) let the variable last_pix equal EP(i);
c) let the array variable EP(i) with index i equal new_pix;
d) increase i by 1 and go to a), until i is greater than (p+q);
(2) Operation rule of the DC prediction mode:
i. If s_j and t_i are both available, then all prediction samples ã_ij are equal to (EP_i + EP_j) >> 1, where i = 0, 1, 2, …, q-1 denotes pixel row coordinates and j = 0, 1, 2, …, p-1 denotes pixel column coordinates;
ii. If t_i is not available and s_j is available, then all prediction samples ã_ij are equal to EP_j, where i = 0, 1, 2, …, q-1 denotes pixel row coordinates and j = 0, 1, 2, …, p-1 denotes pixel column coordinates;
iii. If s_j is not available and t_i is available, then all prediction samples ã_ij are equal to EP_i, where i = 0, 1, 2, …, q-1 denotes pixel row coordinates and j = 0, 1, 2, …, p-1 denotes pixel column coordinates;
iv. If neither s_j nor t_i is available, then all prediction samples ã_ij are equal to 128, where i = 0, 1, 2, …, q-1 denotes pixel row coordinates and j = 0, 1, 2, …, p-1 denotes pixel column coordinates.
4. The method of claim 1, wherein a part of the 9 prediction modes can be selected and re-ordered according to coding requirements; for example, a prediction method of intra luminance sample value based on 5 prediction modes, i.e. the DC prediction proposed by the present invention, and the vertical prediction, horizontal prediction, 45-degree direction prediction, 135-degree direction prediction modes adopted in JVT can be adopted;
likewise, for 16 × 16 luminance blocks and 8 × 8 chrominance blocks, it is also possible to use only the DC prediction proposed by the present invention together with one or two of the vertical prediction, horizontal prediction, and plane prediction modes used in JVT.
CN 200410000666 2004-01-16 2004-01-16 Inframe prediction method used for video frequency coding Expired - Fee Related CN100536573C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200410000666 CN100536573C (en) 2004-01-16 2004-01-16 Inframe prediction method used for video frequency coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200410000666 CN100536573C (en) 2004-01-16 2004-01-16 Inframe prediction method used for video frequency coding

Publications (2)

Publication Number Publication Date
CN1535027A true CN1535027A (en) 2004-10-06
CN100536573C CN100536573C (en) 2009-09-02

Family

ID=34305379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200410000666 Expired - Fee Related CN100536573C (en) 2004-01-16 2004-01-16 Inframe prediction method used for video frequency coding

Country Status (1)

Country Link
CN (1) CN100536573C (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1129385A (en) * 1995-02-13 1996-08-21 大宇电子株式会社 Method and apparatus for encoding a video signal using pixel-by-pixel motion prediction
CN1204753C (en) * 2003-05-19 2005-06-01 北京工业大学 Interframe predicting method based on adjacent pixel prediction

Cited By (88)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100393137C (en) * 2004-06-17 2008-06-04 佳能株式会社 Moving image coding apparatus
CN100359953C (en) * 2004-09-08 2008-01-02 华为技术有限公司 Image chroma prediction based on code in frame
US7933334B2 (en) 2004-10-26 2011-04-26 Nec Corporation Image encoder and method thereof, computer program of image encoder, and mobile terminal
CN101710991B (en) * 2004-11-04 2015-06-24 汤姆森特许公司 Fast intra mode prediction for a video encoder
CN100461867C (en) * 2004-12-02 2009-02-11 中国科学院计算技术研究所 Inage predicting encoding method in frame
CN101133648B (en) * 2005-01-13 2014-11-12 高通股份有限公司 Mode selection techniques for intra-prediction video encoding
CN100426868C (en) * 2005-01-25 2008-10-15 中国科学院计算技术研究所 Frame image brightness predictive coding method
CN100531348C (en) * 2005-02-04 2009-08-19 索尼株式会社 Encoding apparatus and method, decoding apparatus and method, image processing system and method
US8761251B2 (en) 2005-04-13 2014-06-24 Thomson Licensing Luma-chroma coding with one common or three distinct spatial predictors
US8718134B2 (en) 2005-04-13 2014-05-06 Thomson Licensing Luma and chroma decoding using a common predictor
CN101160972B (en) * 2005-04-13 2010-05-19 汤姆逊许可公司 Luma and chroma decoding using a common predictor
CN101189875B (en) * 2005-04-13 2010-11-17 汤姆逊许可公司 Luma and chroma decoding using a common predictor
US8767826B2 (en) 2005-04-13 2014-07-01 Thomson Licensing Luma and chroma encoding using a common predictor
US10123046B2 (en) 2005-04-13 2018-11-06 Thomson Licensing Method and apparatus for video decoding
US8750376B2 (en) 2005-04-13 2014-06-10 Thomson Licensing Luma and chroma decoding using a common predictor
US8724699B2 (en) 2005-04-13 2014-05-13 Thomson Licensing Luma and chroma encoding using a common predictor
CN1852443B (en) * 2005-04-22 2011-09-14 索尼英国有限公司 Data processing device
CN100397906C (en) * 2005-08-24 2008-06-25 天津大学 Fast frame-mode selection of video-frequency information
CN100442857C (en) * 2005-10-12 2008-12-10 华为技术有限公司 Method of enhanced layer in-frame predicting method and encoding and decoding apparatus
CN101300849B (en) * 2005-11-01 2011-07-06 三叉微系统(远东)有限公司 Data processing system
CN101385356B (en) * 2006-02-17 2011-01-19 汤姆森许可贸易公司 Process for coding images using intra prediction mode
CN101056412B (en) * 2006-04-13 2012-10-17 三星电子株式会社 Apparatus and method for spatial prediction of image data, apparatus and method for encoding and decoding image data using the same
CN100515082C (en) * 2006-05-23 2009-07-15 中国科学院声学研究所 Method for reducing video decoding complexity via decoding quality
CN101502124B (en) * 2006-07-28 2011-02-23 株式会社东芝 Image encoding and decoding method and apparatus
CN101529916B (en) * 2006-10-31 2012-07-18 汤姆森许可贸易公司 Video encoding with intra encoding selection
CN101193302B (en) * 2006-12-01 2010-09-29 三星电子株式会社 Illumination compensation method and apparatus and video encoding and decoding method and apparatus
CN101822052B (en) * 2007-08-09 2012-05-23 国立大学法人大阪大学 Video stream processing device, its control method
CN101115207B (en) * 2007-08-30 2010-07-21 上海交通大学 Method and device for implementing interframe forecast based on relativity between future positions
CN101822062B (en) * 2007-10-15 2013-02-06 日本电信电话株式会社 Image encoding device and decoding device, image encoding method and decoding method
CN100596202C (en) * 2008-05-30 2010-03-24 四川虹微技术有限公司 Fast mode selection method in frame
CN101605255B (en) * 2008-06-12 2011-05-04 华为技术有限公司 Method and device for encoding and decoding video
CN101677406B (en) * 2008-09-19 2011-04-20 华为技术有限公司 Method and apparatus for video encoding and decoding
WO2010031352A1 (en) * 2008-09-19 2010-03-25 华为技术有限公司 Video coding/decoding method and apparatus
CN101389029B (en) * 2008-10-21 2012-01-11 北京中星微电子有限公司 Method and apparatus for video image encoding and retrieval
CN104822065A (en) * 2009-01-22 2015-08-05 株式会社Ntt都科摩 Device, method and program for image prediction encoding, device, method and program for image prediction decoding, and encoding/decoding system and method
CN104822065B (en) * 2009-01-22 2018-04-10 株式会社Ntt都科摩 Image prediction/decoding device, method and coder/decoder system and method
CN101945270B (en) * 2009-07-06 2013-06-19 联发科技(新加坡)私人有限公司 Video coder, method for internal prediction and video data compression
CN104702948A (en) * 2009-08-17 2015-06-10 三星电子株式会社 Method and apparatus for encoding video, and method and apparatus for decoding video
CN104702948B (en) * 2009-08-17 2018-07-20 三星电子株式会社 Method and apparatus to Video coding and to the decoded method and apparatus of video
CN104902283A (en) * 2010-04-09 2015-09-09 韩国电子通信研究院 Method for encoding videos
CN104902283B (en) * 2010-04-09 2018-12-14 韩国电子通信研究院 Video encoding/decoding method
CN103238333A (en) * 2010-11-29 2013-08-07 Sk电信有限公司 Method and apparatus for encoding/decoding images to minimize redundancy of intra-rediction mode
CN103238333B (en) * 2010-11-29 2016-08-31 Sk电信有限公司 Carry out encoding/decoding image so that the method and apparatus that minimizes of the redundancy of intra prediction mode
US11677961B2 (en) 2010-12-08 2023-06-13 Lg Electronics Inc. Intra prediction method and encoding apparatus and decoding apparatus using same
US10785487B2 (en) 2010-12-08 2020-09-22 Lg Electronics Inc. Intra prediction in image processing
CN107197257A (en) * 2010-12-08 2017-09-22 Lg 电子株式会社 Interior prediction method and the encoding apparatus and decoding apparatus using this method
CN103339943A (en) * 2010-12-08 2013-10-02 Lg电子株式会社 Intra prediction method and encoding apparatus and decoding apparatus using same
US9832472B2 (en) 2010-12-08 2017-11-28 Lg Electronics, Inc. Intra prediction in image processing
US10812808B2 (en) 2010-12-08 2020-10-20 Lg Electronics Inc. Intra prediction method and encoding apparatus and decoding apparatus using same
CN103339943B (en) * 2010-12-08 2017-06-13 Lg电子株式会社 Interior prediction method and the encoding apparatus and decoding apparatus using the method
US11102491B2 (en) 2010-12-08 2021-08-24 Lg Electronics Inc. Intra prediction in image processing
CN107197257B (en) * 2010-12-08 2020-09-08 Lg 电子株式会社 Intra prediction method performed by encoding apparatus and decoding apparatus
US10469844B2 (en) 2010-12-08 2019-11-05 Lg Electronics Inc. Intra prediction in image processing
CN103299637A (en) * 2011-01-12 2013-09-11 三菱电机株式会社 Dynamic image encoding device, dynamic image decoding device, dynamic image encoding method, and dynamic image decoding method
CN102611885A (en) * 2011-01-20 2012-07-25 华为技术有限公司 Encoding and decoding method and device
WO2012097746A1 (en) * 2011-01-20 2012-07-26 华为技术有限公司 Coding-decoding method and device
CN102695061B (en) * 2011-03-20 2015-01-21 华为技术有限公司 Method and apparatus for determining weight factors, and method and apparatus for predicting intra-frame weighting
CN102695061A (en) * 2011-03-20 2012-09-26 华为技术有限公司 Method and apparatus for determining weight factors, and method and apparatus for predicting intra-frame weighting
WO2012126340A1 (en) * 2011-03-20 2012-09-27 华为技术有限公司 Method and device for determining weight factor, and method and device for intra-frame weighted prediction
US9843808B2 (en) 2011-05-20 2017-12-12 Kt Corporation Method and apparatus for intra prediction within display screen
US10158862B2 (en) 2011-05-20 2018-12-18 Kt Corporation Method and apparatus for intra prediction within display screen
CN103703773B (en) * 2011-05-20 2017-11-07 株式会社Kt The method and apparatus that infra-frame prediction is carried out in display screen
US9584815B2 (en) 2011-05-20 2017-02-28 Kt Corporation Method and apparatus for intra prediction within display screen
CN103703773A (en) * 2011-05-20 2014-04-02 株式会社Kt Method and apparatus for intra prediction within display screen
US9756341B2 (en) 2011-05-20 2017-09-05 Kt Corporation Method and apparatus for intra prediction within display screen
US9749639B2 (en) 2011-05-20 2017-08-29 Kt Corporation Method and apparatus for intra prediction within display screen
US9749640B2 (en) 2011-05-20 2017-08-29 Kt Corporation Method and apparatus for intra prediction within display screen
US10045043B2 (en) 2011-06-28 2018-08-07 Samsung Electronics Co., Ltd. Method and apparatus for image encoding and decoding using intra prediction
US10075730B2 (en) 2011-06-28 2018-09-11 Samsung Electronics Co., Ltd. Method and apparatus for image encoding and decoding using intra prediction
US10085037B2 (en) 2011-06-28 2018-09-25 Samsung Electronics Co., Ltd. Method and apparatus for image encoding and decoding using intra prediction
US10045042B2 (en) 2011-06-28 2018-08-07 Samsung Electronics Co., Ltd. Method and apparatus for image encoding and decoding using intra prediction
CN104954805A (en) * 2011-06-28 2015-09-30 三星电子株式会社 Method and apparatus for image encoding and decoding using intra prediction
US9788006B2 (en) 2011-06-28 2017-10-10 Samsung Electronics Co., Ltd. Method and apparatus for image encoding and decoding using intra prediction
CN104954805B (en) * 2011-06-28 2019-01-04 三星电子株式会社 Method and apparatus for using intra prediction to carry out image coding and decoding
US9813727B2 (en) 2011-06-28 2017-11-07 Samsung Electronics Co., Ltd. Method and apparatus for image encoding and decoding using intra prediction
US10506250B2 (en) 2011-06-28 2019-12-10 Samsung Electronics Co., Ltd. Method and apparatus for image encoding and decoding using intra prediction
CN103096051B (en) * 2011-11-04 2017-04-12 华为技术有限公司 Image block signal component sampling point intra-frame decoding method and device thereof
US9674529B2 (en) 2011-11-04 2017-06-06 Huawei Technologies Co., Ltd. Intra-frame decoding method and apparatus for signal component sampling point of image block
CN103096051A (en) * 2011-11-04 2013-05-08 华为技术有限公司 Image block signal component sampling point intra-frame decoding method and device thereof
CN104378644A (en) * 2013-08-16 2015-02-25 上海天荷电子信息有限公司 Fixed-width variable-length pixel sample value string matching strengthened image compression method and device
CN106231303B (en) * 2016-07-22 2020-06-12 上海交通大学 Method for controlling complexity by using prediction mode in HEVC (high efficiency video coding)
CN106231303A (en) * 2016-07-22 2016-12-14 上海交通大学 A kind of HEVC coding uses the method that predictive mode carries out complexity control
US11350088B2 (en) 2019-03-12 2022-05-31 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Intra prediction method and apparatus, and computer-readable storage medium
US11843724B2 (en) 2019-03-12 2023-12-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Intra prediction method and apparatus, and computer-readable storage medium
WO2021027928A1 (en) * 2019-08-14 2021-02-18 Beijing Bytedance Network Technology Co., Ltd. Weighting factors for prediction sample filtering in intra mode
US11533477B2 (en) 2019-08-14 2022-12-20 Beijing Bytedance Network Technology Co., Ltd. Weighting factors for prediction sample filtering in intra mode
US11659202B2 (en) 2019-08-14 2023-05-23 Beijing Bytedance Network Technology Co., Ltd Position-dependent intra prediction sample filtering
US12096026B2 (en) 2019-08-14 2024-09-17 Beijing Bytedance Network Technology Co., Ltd. Position-dependent intra prediction sample filtering

Also Published As

Publication number Publication date
CN100536573C (en) 2009-09-02

Similar Documents

Publication Publication Date Title
CN1535027A (en) Inframe prediction method used for video frequency coding
CN1225126C (en) Space predicting method and apparatus for video encoding
CN1265649C (en) Moving picture signal coding method, decoding method, coding apparatus, and decoding apparatus
CN1214647C (en) Method for encoding images, and image coder
CN1703096A (en) Prediction encoder/decoder, prediction encoding/decoding method, and recording medium
CN1076932C (en) Method and apparatus for coding video signal, and method and apparatus for decoding video signal
CN1578477A (en) Video encoding/decoding apparatus and method for color image
CN1187988C (en) Motion compensating apparatus, moving image coding apparatus and method
CN1254113C (en) Image encoding device, image encoding method, image decoding device, image decoding method, and communication device
CN1275194C (en) Image processing method and its device
CN1956546A (en) Image coding apparatus
CN1705375A (en) Method of forecasting encoder/decoder and forecasting coding/decoding
CN1203679C (en) Method and device used for automatic data converting coding video frequency image data
CN1694537A (en) Adaptive de-blocking filtering apparatus and method for MPEG video decoder
CN1658673A (en) Video compression coding-decoding method
CN1593065A (en) Video encoding and decoding of foreground and background wherein picture is divided into slice
CN1910933A (en) Image information encoding device and image information encoding method
CN1950832A (en) Bitplane coding and decoding for AC prediction status and macroblock field/frame coding type information
CN1535024A (en) Video encoding device, method and program and video decoding device, method and program
CN1537384A (en) Method for sub-pixel valve interpolation
CN1638484A (en) Image encoding apparatus, image encoding method, image encoding program, image decoding apparatus, image decoding method, and image decoding program
CN1910931A (en) Video encoding method and device, video decoding method and device, program thereof, and recording medium containing the program
CN1197359C (en) Device and method for video signal with lowered resolution ratio of strengthened decode
CN101039421A (en) Method and apparatus for realizing quantization in coding/decoding process
CN1455600A (en) Interframe predicting method based on adjacent pixel prediction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090902

Termination date: 20130116