
CN101325714A - Method and apparatus for processing transformation data, method and apparatus for encoding and decoding - Google Patents


Info

Publication number
CN101325714A
CN101325714A (application CN200810087919A; granted as CN101325714B)
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN 200810087919
Other languages
Chinese (zh)
Other versions
CN101325714B (en)
Inventor
何芸
武燕楠
郑萧桢
郑建铧
Current Assignee
Tsinghua University
Huawei Technologies Co Ltd
Original Assignee
Tsinghua University
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Tsinghua University, Huawei Technologies Co Ltd filed Critical Tsinghua University
Priority to CN 200810087919 priority Critical patent/CN101325714B/en
Priority to PCT/CN2008/071255 priority patent/WO2008151570A1/en
Publication of CN101325714A publication Critical patent/CN101325714A/en
Application granted granted Critical
Publication of CN101325714B publication Critical patent/CN101325714B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the invention disclose a method for processing transform data, comprising: estimating the value ranges of image data after each of two transforms, according to the preset transform matrices the two transforms require; estimating the characteristic difference of the two value ranges from the two transformed value ranges; and applying the first of the two transforms to the data to be transformed, then compensating the first-transformed data according to the estimated value-range difference. Embodiments of the invention further disclose an apparatus for processing transform data, and encoding and decoding methods and apparatuses. Applying these embodiments, the value range of transformed data can be adjusted so that the same image data has the same value range after different transforms; the influence of each transform on the data is then truly reflected when the adaptive block transform technique is applied, allowing the more effective transform to be selected and the coding efficiency to be improved.

Description

Transform data processing method and apparatus, and encoding and decoding method and apparatus
Technical Field
The present invention relates to video compression coding technologies, and in particular, to a method and an apparatus for processing transform data, and a method, an apparatus and a system for encoding and decoding.
Background
To reduce the amount of data when transmitting or storing video, the video data generally must be compression-encoded. In the field of video compression coding, the transform is a key technology: it concentrates the information of an image, or of the image content in a region, into a specific area, so that a compression algorithm can encode the content more effectively. The transformed data is then quantized, entropy-coded, and so on, to form the compressed video data.
Video codec standards such as MPEG-2, H.264, and AVS all use transform techniques. In these standards, an image, or a region of an image, is divided into small blocks called sub-blocks, and the transform is performed in units of sub-blocks. Typically a sub-block is 4x4 or 8x8, where 4 and 8 are measured in image pixels.
A video file consists of many video images; a single image usually contains rich content, and different parts of it have different characteristics. Therefore, if all the images in a video segment, or a single image, are divided into sub-blocks of one fixed size (e.g., 8x8) and then transformed, the result is not necessarily optimal: the content of every sub-block cannot be effectively concentrated into a specific area after the transform. The adaptive block transform technique was proposed to address this. Its principle is: a region is partitioned into sub-blocks of different sizes, and a different transform is applied to each size (for example, an image is divided into 4x4 and 8x8 blocks, a 4x4 transform is used for 4x4 blocks, and an 8x8 transform for 8x8 blocks); after the information of each block has been concentrated into a specific area, a criterion is used to judge which transform is more effective, and the better transform result is stored. When decoding an image encoded this way, the decoding end obtains the transform size (such as 4x4 or 8x8) from the corresponding information in the bitstream, and then applies the corresponding inverse transform (4x4 or 8x8) to the region to recover the original video data.
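The adaptive selection described above can be sketched in a few lines. This is only an illustrative sketch, not the mode decision of any real codec: the `dct_matrix`, `transform_cost`, and `pick_transform` helpers are hypothetical names, and counting significant coefficients is merely a stand-in for a real rate-distortion criterion.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal type-II DCT matrix of size n x n."""
    k = np.arange(n)
    T = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    T *= np.sqrt(2.0 / n)
    T[0] /= np.sqrt(2)  # DC row normalization
    return T

def transform_cost(block, T):
    """Proxy for coding cost: coefficients that are not negligible."""
    coeffs = T @ block @ T.T
    return np.count_nonzero(np.abs(coeffs) > 1.0)

def pick_transform(region):
    """Code an 8x8 region as one 8x8 block or as four 4x4 blocks,
    keeping whichever concentrates the information better."""
    T8, T4 = dct_matrix(8), dct_matrix(4)
    cost8 = transform_cost(region, T8)
    cost4 = sum(transform_cost(region[i:i + 4, j:j + 4], T4)
                for i in (0, 4) for j in (0, 4))
    return '8x8' if cost8 <= cost4 else '4x4'
```

For a flat region the single 8x8 transform concentrates everything into one DC coefficient, so the sketch prefers it over four 4x4 blocks with one DC each.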
Specifically, the adaptive block transform technique in H.264 defines a set of 4x4 transform matrices and 8x8 transform matrices and, according to the characteristics of the 4x4 and 8x8 transforms, a matched quantization table for each at the codec end; the result of data passed through the 4x4 transform and quantization is compared with the result of the 8x8 transform and quantization to determine the better transform mode. Because the 4x4 and 8x8 transforms in H.264 have similar transform characteristics, their matched quantization tables ensure that the value range of a 4x4 block after 4x4 transform and quantization is essentially consistent with the value range of an 8x8 block after 8x8 transform and quantization. The adaptive block transform technique in H.264 can therefore effectively improve coding efficiency.
However, in video compression coding different transforms may need to be combined for various purposes: for example, the 4x4 transform matrix may be DCT-based while the 8x8 transform is wavelet-based. Two such sets of transform matrices are unlikely to share many transform characteristics, so the value ranges of the same data after the two transforms are inconsistent. Since quantization damages data information, and the value range of the same data changes by different amounts under the different transforms, the loss the transformed data suffers in quantization is inconsistent whether the same or different quantization tables are used. In this case no single criterion can determine the better transform mode, and the data encoding efficiency cannot be effectively improved.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a method and an apparatus for adjusting the value range of encoded data, so that the value ranges of the data after different transforms are substantially consistent. Correspondingly, an encoding method, a decoding method, apparatuses and a system are also provided: the adjustment parameters are written into the bitstream at the encoding end, and the decoding end adjusts the received data according to the received adjustment parameters.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
a method of transform data processing, comprising:
estimating the numerical value ranges of the image data after two kinds of transformation according to transformation matrixes required by two kinds of preset transformation;
estimating the feature difference value of the two transformed numerical ranges according to the two transformed numerical ranges;
receiving data to be transformed, applying a first of the two transformations to the data, and compensating the first transformed data according to the estimated value range characteristic difference.
Here the first transform is one of the two transforms; the other may correspondingly be called the second transform. The two transforms may also be referred to as the first and second transforms, or as transform A and transform B.
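The three steps of the method can be sketched as follows. This is a minimal sketch under simplifying assumptions: `value_range_bits` and `compensate` are hypothetical helper names, and bounding the range growth of T·C·T' by the largest absolute row sum of T (applied once along rows and once along columns) is one simple estimate; the embodiments below develop the estimate in more detail.

```python
import numpy as np

def value_range_bits(T):
    """Crude upper bound, in bits, on how much T @ C @ T.T can grow the
    value range of C. The transform is applied once along rows and once
    along columns, so the bound is twice log2 of the largest absolute
    row sum of T."""
    d = np.abs(T).sum(axis=1).max()
    return 2.0 * np.log2(d)

def compensate(data_after_first, T_first, T_second):
    """Scale the first transform's output so that its estimated value
    range matches the second transform's (the method's third step)."""
    diff = value_range_bits(T_second) - value_range_bits(T_first)
    return data_after_first * (2.0 ** diff)
```

For instance, if the second transform grows the range by two more bits than the first, the first transform's coefficients are scaled up by a factor of four before the comparison.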
A transformed data processing apparatus includes a first numerical range estimation unit, a second numerical range estimation unit, a numerical range difference unit, and a transformation compensation unit,
the first numerical range estimation unit is used for estimating a numerical range of the image data after first transformation according to a transformation matrix required by the first transformation in two preset transformations, and providing the numerical range difference unit with the estimated numerical range;
the second numerical range estimation unit is used for estimating a numerical range of the image data after second transformation according to a transformation matrix required by the second transformation in the two preset transformations, and providing the numerical range difference unit with the estimated numerical range;
the numerical range difference unit is used for estimating the characteristic difference of the two transformed numerical ranges according to the numerical ranges respectively subjected to the first transformation and the second transformation, and providing the characteristic difference to the transformation compensation unit;
and the transformation compensation unit is used for receiving the data to be transformed, applying the first transformation of the two transformations to the data, and compensating the data after the first transformation according to the estimated value range characteristic difference.
An encoding method, comprising:
receiving data to be transformed;
performing first transformation on the data to be transformed to obtain first transformed data;
performing second transformation on the data to be transformed to obtain second transformed data;
determining an adjusting parameter according to the second transformed parameter, the first transformed data and the second transformed data, and adjusting the first transformed data according to the adjusting parameter and the second transformed parameter;
and writing the adjustment parameters into an encoding code stream.
A decoding method, comprising:
receiving a code stream, and decoding the code stream to obtain first converted data and an adjustment parameter;
and adjusting the data after the first transformation according to the adjusting parameters and the parameters of the second transformation.
An encoding apparatus comprising:
a data receiving unit for receiving data to be transformed;
the conversion unit is used for carrying out first conversion on the data to be converted received by the receiving unit to obtain first converted data; performing second transformation on the data to be transformed to obtain second transformed data;
a first adjusting unit which determines an adjusting parameter according to a second transformed parameter, the first transformed data and the second transformed data, and adjusts the first transformed data according to the adjusting parameter and the second transformed parameter;
and the writing unit is used for writing the adjusting parameters into the coding code stream.
A decoding apparatus, comprising:
a code stream receiving unit for receiving the code stream;
the decoding unit is used for decoding the code stream to obtain first converted data and adjustment parameters;
and the second adjusting unit is used for adjusting the data after the first transformation according to the adjusting parameters and the parameters of the second transformation.
A codec system comprising: an encoding device and a decoding device;
the encoding apparatus includes:
a data receiving unit for receiving data to be transformed;
the conversion unit is used for carrying out first conversion on the data to be converted received by the receiving unit to obtain first converted data; performing second transformation on the data to be transformed to obtain second transformed data;
a first adjusting unit which determines an adjusting parameter according to a second transformed parameter, the first transformed data and the second transformed data, and adjusts the first transformed data according to the adjusting parameter and the second transformed parameter;
and the writing unit is used for writing the adjusting parameters into the coding code stream.
The decoding apparatus includes:
a code stream receiving unit for receiving the code stream;
the decoding unit is used for decoding the code stream to obtain first converted data and adjustment parameters;
and the second adjusting unit is used for adjusting the data after the first transformation according to the adjusting parameters and the parameters of the second transformation.
An encoding method, comprising:
receiving data to be transformed and encoded parameter information;
performing first transformation on the data to be transformed to obtain first transformed data;
determining an adjustment parameter according to the encoded parameter information and a second transformed parameter, and adjusting the first transformed data according to the adjustment parameter and the second transformed parameter;
and writing the adjustment parameters into an encoding code stream.
An encoding apparatus comprising:
a third receiving unit for receiving data to be transformed and encoded parameter information;
a third conversion unit, configured to perform first conversion on the data to be converted received by the third receiving unit to obtain first converted data;
a third adjusting unit, configured to determine an adjustment parameter according to the encoded parameter information and a second transformed parameter received by a third receiving unit, and adjust the first transformed data obtained by the third transforming unit according to the adjustment parameter and the second transformed parameter;
and the third writing unit is used for writing the adjusting parameters obtained by the third adjusting unit into the coding code stream.
According to the technical scheme above, in the embodiments of the invention the value ranges of video data after two transforms are estimated from the transform matrices the two preset transforms require. The difference between the value ranges of the video data after the two transforms and their corresponding quantization is then estimated from the two transformed value ranges and the quantization points corresponding to the two transforms. Finally, after one of the transforms is applied to the data to be transformed, the estimated value-range difference is used to compensate the first-transformed data. Because the compensation is performed according to the relationship between the two transformed-and-quantized value ranges, the compensated value range is consistent with that obtained by the other transform and its quantization. The influence of each transform on the data is thus truly reflected when the adaptive block transform technique is applied, the more effective transform can be selected, and the coding efficiency is further improved. The encoding and decoding methods and apparatuses ensure that the data range after the first transform is consistent with that after the second transform, and at the same time write the adjustment parameters into the bitstream, so that the decoding end can process the received data accordingly.
Drawings
FIG. 1 is a general flow chart of a transformation data processing method according to an embodiment of the present invention;
FIG. 2 is a general block diagram of a transform data processing apparatus according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for transforming data according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a detailed structure of a transform data processing apparatus according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for processing transformed data according to a second embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method for processing transformed data according to a third embodiment of the present invention;
FIG. 7 is a diagram of a detailed structure of a transform data processing apparatus according to a third embodiment of the present invention;
FIG. 8 is a flowchart illustrating a method for processing transformed data according to a fourth embodiment of the present invention;
FIG. 9 is a diagram illustrating a detailed structure of a transform data processing apparatus according to a fourth embodiment of the present invention;
FIG. 10 is a schematic flowchart of an encoding method according to a fifth embodiment of the present invention;
FIG. 11 is a flowchart illustrating a decoding method according to a sixth embodiment of the present invention;
FIG. 12 is a schematic structural diagram of an encoding apparatus according to a seventh embodiment of the present invention;
FIG. 13 is a schematic structural diagram of a decoding apparatus according to an eighth embodiment of the present invention;
FIG. 14 is a flowchart illustrating an encoding method according to a ninth embodiment of the present invention;
FIG. 15 is a schematic structural diagram of an encoding apparatus according to a tenth embodiment of the present invention.
Detailed Description
To make the objects, technical means and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
In the embodiments of the invention, the change in the value range of data under two different transforms is first analyzed; the data after one of the transforms is then compensated according to how the value range changes after the different transforms and quantization. This ensures that the value range of the same data to be encoded is essentially consistent after either transform and its quantization, thereby improving coding efficiency.
Fig. 1 is a general flowchart of a transformation data processing method according to an embodiment of the present invention. As shown in fig. 1, the method includes:
step 101, estimating the numerical value ranges of the image data after two kinds of transformation according to the transformation matrixes required by two kinds of preset transformation.
And 102, estimating the characteristic difference value of the two transformed numerical value ranges according to the two transformed numerical value ranges.
Depending on the compensation mode, the characteristic difference between the two transformed value ranges takes different values: it may be the difference between the value ranges of the image data after the two transforms and their corresponding quantization, or the difference between the value ranges after the two transforms alone. When the characteristic difference is the difference after the two transforms and corresponding quantization, it is calculated using the quantization steps corresponding to the two transforms.
Step 103, one of the two transformations is applied to the data to be transformed, and the transformed data is compensated according to the difference value of the estimated numerical range.
Fig. 2 is a general configuration diagram of a transform data processing apparatus according to an embodiment of the present invention. As shown in fig. 2, the apparatus includes a first numerical range estimation unit, a second numerical range estimation unit, a numerical range difference unit, and a transform compensation unit.
The first numerical range estimation unit is used for estimating the numerical range of the image data after the first transformation according to a transformation matrix required by the first transformation in two preset transformations and providing the numerical range difference unit with the estimated numerical range.
And the second numerical range estimation unit is used for estimating the numerical range of the image data after the second transformation according to a transformation matrix required by the second transformation in the two preset transformations and providing the numerical range difference unit with the estimated numerical range.
And the numerical range difference unit is used for estimating the characteristic difference of the two transformed numerical ranges according to the numerical ranges respectively subjected to the first transformation and the second transformation and providing the characteristic difference to the transformation compensation unit.
And the transformation compensation unit is used for applying a first transformation to the data to be transformed and compensating the data after the first transformation according to the estimated value range characteristic difference.
In the above method and apparatus, one way to compensate the transformed data is: determine a suitable adjusting factor from the relationship between the value ranges of the two kinds of transformed data, multiply the data obtained by one transform by that factor, and then quantize at the corresponding quantization point, so that the value range after this transform and quantization is consistent with that after the other transform and quantization.
Alternatively, the way of compensating the transformed data may be: according to the relation between the numerical ranges of the two kinds of transformed data, the quantization point corresponding to one kind of transformation is adjusted, and then the data obtained by the transformation is quantized according to the adjusted quantization point, so that the numerical range of the data obtained by the transformation and quantization can be consistent with the numerical range of the data obtained by the other kind of transformation and quantization.
Still alternatively, the method of compensating the transformed data may be: and according to the relation between the numerical ranges of the two kinds of transformed data, reestablishing a quantization table for one kind of transformation, and quantizing the data obtained by the transformation according to the reestablished quantization table, so that the numerical range of the data obtained by the transformation and quantization can be consistent with the numerical range of the data obtained by the other kind of transformation and quantization.
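The first two compensation modes apply the same correction in two numerically equivalent places, which a toy sketch makes concrete. The function names and the specific numbers are hypothetical; the integer quantization form `(X * QTAB[s]) >> shift[s]` follows the quantization expression used in embodiment one below.

```python
def quantize(x, qstep, shift):
    """Quantization as used in embodiment one: (X * QTAB[s]) >> shift[s]."""
    return (x * qstep) >> shift

def mode1(coeff, factor, qstep, shift):
    """Mode 1: multiply the transform's output by the adjusting factor,
    then quantize with the transform's own step."""
    return quantize(coeff * factor, qstep, shift)

def mode2(coeff, factor, qstep, shift):
    """Mode 2: fold the same correction into the quantization step
    (equivalently, into a re-made quantization table, as in mode 3)."""
    return quantize(coeff, qstep * factor, shift)
```

Because multiplication commutes inside the quantizer, both modes yield the same quantized value; mode 3 simply precomputes mode 2's adjusted steps into a new table.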
The following describes the embodiments of the present invention in the three different processing modes by different examples. The first to second embodiments are directed to the first processing method, the third embodiment is directed to the second processing method, and the fourth embodiment is directed to the third processing method.
The first embodiment is as follows:
in this embodiment, assume there is a coded data block C and two transforms, transform A and transform B. The scale of transform A is n x n and the scale of transform B is m x m. Quantization point QP_s is used when encoding data block C; at quantization point QP_s, the quantization steps looked up for transform A and transform B are QTAB1[s] and QTAB2[s] respectively. Here, when quantizing data, each coding region (for example, data block C) is assigned one quantization point, the corresponding quantization step is looked up from that point, and quantization is performed. The value range of the data obtained by transform A and its corresponding quantization is adjusted to be consistent with the value range obtained by transform B and its corresponding quantization. In this embodiment, the characteristic difference between the value ranges of the two transforms is the difference between the value ranges of the image data after each transform and its corresponding quantization.
Fig. 3 is a flowchart illustrating a method for processing transformed data according to an embodiment of the present invention. As shown in fig. 3, the method includes:
step 301, estimating the value ranges of the image data after two transformations according to the transformation matrix required by the two transformations.
In this step, coding block C is divided into sub-blocks according to the scale of transform A and of transform B, and the corresponding transform is applied to each sub-block. The transformed value range of each pixel in the divided sub-blocks is estimated from the transform matrix corresponding to each transform. The value range is estimated in the same way for either transform; the following takes the value range of each pixel in a transform-A sub-block as the example.
Assuming the size of coding block C is M x N, T is the transform matrix corresponding to transform A, and T' is the transpose of T, applying transform A to coding block C is specifically: compute

S ⊗ [T·C·T']

where S is the scaling factor matrix that normalizes the data of [T·C·T'], · denotes matrix multiplication, and ⊗ denotes multiplication of the corresponding elements of two matrices. The increment of the value range of [T·C·T'] can be calculated from the matrix multiplication.
First, estimate the increment of the value range of [T·C·T'] relative to C. Let d_j be the sum of the absolute values of the coefficients in the jth column of T'; multiplying the ith row of C by this column changes the value range of the data by at most a factor of d_j relative to the value range of the data in that row of C. Since both computers and hardware processors store information in binary, the value range is expressed in binary form, and the maximum value-range increment contributed by this multiplication is log2(d_j). In this way the maximum value-range increment of each element of [T·C·T'] relative to C can be calculated.
Next, estimate the increment of S ⊗ [T·C·T'] relative to [T·C·T']: if the absolute value of element (k, h) of S is S_(k,h), then the value-range increment at point (k, h) relative to [T·C·T'] is log2(S_(k,h)).
Adding the two estimated increments gives the maximum value-range increment of each point of S ⊗ [T·C·T'] relative to C. Then, from the value range of coding block C, the maximum value range of each point of each sub-block after the transform can be obtained.
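The per-coefficient increment log2(d_j) can be computed directly from the transform matrix. A small sketch, with `range_increment_bits` as a hypothetical helper name; the column-wise absolute sums correspond to the d_j of the columns of T' described above.

```python
import numpy as np

def range_increment_bits(Tt):
    """Maximum value-range increment, in bits, contributed by multiplying
    a row of data by each column j of the transpose matrix T':
    log2 of that column's absolute coefficient sum d_j."""
    d = np.abs(Tt).sum(axis=0)  # column-wise absolute sums d_j
    return np.log2(d)
```

For a 2x2 Hadamard-like matrix with entries ±1, each column sums to 2 in absolute value, so every output coefficient can grow by at most one bit.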
The estimate above gives the maximum value range of coding block C after the transform. In actual coding, however, the data in block C obey some mathematical distribution and rarely reach the maximum of C's value range, so the transformed value range will probably not reach the estimated maximum either. It is therefore preferable to also take the mathematical distribution of coding block C into account when estimating the value range.
Specifically, assume the values in block C obey a certain mathematical distribution; applying transform A to that distribution, the region in which the value range of the transformed data of C is most likely to fall can be calculated by the method above. For example, let the mathematical distribution model of the data in an M x N data block C be P(x, y). The most probable value range of each point in the ith row of the data block is f(P(x, i)) (0 <= x < M), where f(·) is a mapping obtained from the distribution model P whose meaning is the most probable value range of point (x, y) (0 <= x < M, 0 <= y < N); that is, the most probable value of point (x, y) during encoding is 2^f(P(x,y)). Denote 2^f(P(x,i)) by v_i. Meanwhile, let the coefficients of the jth column of T' be T'(j, y) (0 <= y < N); then the value range most likely reached by the data of the ith row of C after it is multiplied by the jth column of T' is at most

log2( Σ_{y=0}^{N-1} |T'(j, y)| · v_y )

from which the value range in which each point of the transformed data block S ⊗ [T·C·T'] is most likely to fall can be calculated.
After the numerical range is estimated in the above manner, the numerical range of each point data of the subblock in the coding block C after being transformed by the transform a can be obtained:
A11   A12   ...   A1(n-1)   A1n
A21   A22   ...   A2(n-1)   A2n
 .
 .
An1   An2   ...   An(n-1)   Ann
and the value range of each point of the sub-block in coding block C after transform B is:
B11   B12   B13   ...   B1(m-2)   B1(m-1)   B1m
B21   B22   B23   ...   B2(m-2)   B2(m-1)   B2m
B31   B32   B33   ...   B3(m-2)   B3(m-1)   B3m
 .
 .
Bm1   Bm2   Bm3   ...   Bm(m-2)   Bm(m-1)   Bmm
step 302, for the two transformations, the average value ranges after transformation are calculated respectively.
In this step, the average value range is computed in the same way for both transforms. Taking transform A as the example, the transformed average value range may be computed as

Avr_A = (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} A_ij,

where A_ij is the value range of the pixel data in row i, column j of the sub-block after transform A, obtained in step 301. Similarly, the average value range after transform B is

Avr_B = (1/m²) Σ_{i=1}^{m} Σ_{j=1}^{m} B_ij,

where B_ij is the value range of the pixel data in row i, column j of the sub-block after transform B.
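The averaging in step 302 is a plain mean over the per-coefficient ranges. A one-function sketch, with `average_range` as a hypothetical helper name:

```python
import numpy as np

def average_range(ranges):
    """Average value range Avr of a transformed sub-block: the mean of the
    per-coefficient value ranges A_ij (in bits), as in step 302."""
    return float(ranges.sum() / ranges.size)
```

The same function serves both transforms; only the n x n or m x m array of per-coefficient ranges changes.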
Step 303, calculating the difference between the value ranges after the two transforms and their corresponding quantizations, based on the transformed average value ranges obtained in step 302.
In this step, the value range after each of the two transforms and corresponding quantization is first calculated, again taking transform A and its corresponding quantization as the illustration. As previously described, quantization point QP_s is used for coding block C; corresponding to quantization point s, transform A looks up a quantization step QTAB1[s]. Using Avr_A, calculated in step 302, to represent the average value range of each point of the data block obtained after transform A, with the range expressed in binary form, the average value obtained after transform A of the coded data is 2^{Avr_A}.
The quantization process of data in a video codec can be expressed as (X × QTAB[s]) >> shift[s]   (1), where X is the value to be quantized, s is the quantization point, QTAB[s] is the quantization step looked up in the quantization table according to quantization point s, and shift[s] is the shift corresponding to quantization point s. Following the above, the average value obtained after quantization with step QTAB1[s] is (QTAB1[s] · 2^{Avr_A}) >> shift[s]; expressed in binary form, the value range of this data is (Avr_A + log2 QTAB1[s]) − shift[s]. Similarly, after transform B and quantization with step QTAB2[s], the resulting data has a value range of (Avr_B + log2 QTAB2[s]) − shift[s].
Next, the difference between the value ranges after the two transforms and corresponding quantization is calculated. The specific method is as follows: calculate the multiple relationship between the two transformed-and-quantized values, i.e. ((QTAB2[s] × 2^{Avr_B}) >> shift[s]) / ((QTAB1[s] × 2^{Avr_A}) >> shift[s]); converting this multiple relationship into a difference between value ranges in binary representation gives D' = log2 of that ratio, that is, D' = (Avr_B + log2 QTAB2[s]) − (Avr_A + log2 QTAB1[s]).
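The quantization formula (1) and the range difference D' can be sketched as follows; the table values (QTAB1[s] = 8, QTAB2[s] = 4) and the ranges are hypothetical:

```python
# Hedged sketch of formula (1) and the range difference D'. The table
# values (QTAB1[s] = 8, QTAB2[s] = 4) and ranges are assumptions.
import math

def quantize(x, qtab_s, shift_s):
    """Formula (1): (X * QTAB[s]) >> shift[s]."""
    return (x * qtab_s) >> shift_s

def range_difference(avr_a, avr_b, qtab1_s, qtab2_s):
    """D' = (Avr_B + log2 QTAB2[s]) - (Avr_A + log2 QTAB1[s])."""
    return (avr_b + math.log2(qtab2_s)) - (avr_a + math.log2(qtab1_s))

print(quantize(100, 8, 4))             # (100 * 8) >> 4 = 50
print(range_difference(14, 16, 8, 4))  # (16 + 2) - (14 + 3) = 1.0
```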
Step 304, determining the adjustment factor of transform a based on the difference in step 303.
In this step, the adjustment factor of transform A determined according to the value range difference obtained in step 303 is q = 2^{D'}.
The data required in the above calculation steps are all a-priori data, i.e., data available before encoding, so these steps can be completed before encoding and their results stored, avoiding computation during the encoding process.
In actual encoding, because the encoded data differ, the adjustment factor of transform A may need adjusting; it can therefore be adjusted appropriately according to the characteristics of the encoded data. The encoded data characteristic may be the actual average value range of the data block after transform A. The adjustment factor of transform A is then written into the coded code stream; it may be written in the sequence header, picture header, slice header, or macroblock header of the code stream. The adjustment factor of transform A is used in the same way as described above and is not repeated here.
Step 305, applying transform a to coding block C to obtain transformed data, and multiplying the obtained data by the adjustment factor of transform a.
In this step, transform A is applied to coding block C; the specific process is the same as in the conventional manner and is not repeated here. The transformed data are then multiplied by the adjustment factor from step 304, that is, for each item of transformed data: C_ij' = C_ij · 2^{D'}, where C_ij is the transformed pixel data value of the ith row and jth column, and C_ij' is the adjusted value of the ith-row, jth-column pixel data.
Step 306, quantizing each C_ij' using the quantization step looked up for transform A.
Each C_ij' adjusted after transform A is quantized in the existing manner. Since each C_ij was adjusted by the adjustment factor in step 305 to obtain C_ij', and the adjustment factor is determined from the difference between the value ranges of the two transforms and corresponding quantizations, the value range obtained in this step by quantizing each C_ij' is consistent with the value range of the data obtained from coding block C after transform B and corresponding quantization.
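Steps 305 and 306 together can be sketched as below; D', the quantization step QTAB1[s] = 8, and shift[s] = 5 are illustrative assumptions:

```python
# Hedged sketch of steps 305-306: scale transform-A output by q = 2^D',
# then quantize with transform A's step. All numbers are assumptions.
D_PRIME = 1
QTAB1_S = 8
SHIFT_S = 5

def compensate_and_quantize(coeffs):
    q = 2 ** D_PRIME  # adjustment factor from step 304
    return [[(c * q * QTAB1_S) >> SHIFT_S for c in row] for row in coeffs]

block = [[40, 12], [8, 4]]  # assumed transform-A output
print(compensate_and_quantize(block))  # [[20, 6], [4, 2]]
```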
The specific derivation is as follows: since the shift during quantization has the same effect on transform A and transform B, its effect is not considered below.
As previously mentioned, the average value range of coding block C after applying transform A is Avr_A, so the value obtained after transform A of the coded data is 2^{Avr_A}. Multiplying by the adjustment factor gives 2^{Avr_A} · 2^{D'} = 2^{Avr_A + D'} = 2^{Avr_A + (Avr_B + log2 QTAB2[s]) − (Avr_A + log2 QTAB1[s])} = 2^{Avr_B + log2 QTAB2[s] − log2 QTAB1[s]}, whose value range expressed in binary is Avr_B + log2 QTAB2[s] − log2 QTAB1[s]. Quantizing with step QTAB1[s] then yields 2^{Avr_B + log2 QTAB2[s] − log2 QTAB1[s]} · QTAB1[s] = 2^{Avr_B + log2 QTAB2[s]}, whose value range expressed in binary is Avr_B + log2 QTAB2[s]. Meanwhile, the average value range of coding block C after applying transform B is Avr_B, so the value obtained after transform B of the coded data is 2^{Avr_B}; quantizing with step QTAB2[s] yields 2^{Avr_B} · QTAB2[s] = 2^{Avr_B + log2 QTAB2[s]}, whose value range expressed in binary is also Avr_B + log2 QTAB2[s]. It can be seen that the same coding block C, after the operations of steps 305 and 306, has the same value range as the data obtained after transform B and corresponding quantization; the purpose of making the final value ranges of transform A and transform B consistent is thus achieved.
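The equality derived here can be checked numerically; power-of-two ranges and quantization steps are assumed so that every quantity is exact:

```python
# Numeric check of the range-consistency derivation, under assumed
# power-of-two ranges and quantization steps.
import math

AVR_A, AVR_B = 14, 16
QTAB1, QTAB2 = 8, 4
D_PRIME = int((AVR_B + math.log2(QTAB2)) - (AVR_A + math.log2(QTAB1)))

path_a = (2 ** AVR_A) * (2 ** D_PRIME) * QTAB1  # transform A, compensated
path_b = (2 ** AVR_B) * QTAB2                   # transform B
print(path_a == path_b)  # True: both equal 2^(Avr_B + log2 QTAB2[s])
```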
So far, the flow of the method for processing the transformed data provided by this embodiment is finished.
The transform data processing at the encoding side is performed according to the above method, and the better of transform A and transform B is selected. If the finally selected transform is transform A, the data sent to the decoding end are the data after transform A, corresponding quantization, entropy coding, and so on. The decoding end then inverse-quantizes the data according to the quantization point corresponding to transform A, divides the inverse-quantization result by the adjustment factor of transform A, and performs the inverse transform, thereby reconstructing the encoding-side data. The adjustment factor of transform A can be obtained at the decoding end by following steps 301 to 304. If the decoding end is to avoid division, dividing the decoded data by the adjustment factor can be implemented with a multiplication and a shift.
If the adjustment factor of transform A was already written into the code stream during encoding, the decoding end obtains it from the position in the code stream that contains it, such as the sequence header, picture header, slice header, or macroblock header. The adjustment factor of transform A is used in the same way as described above and is not repeated here.
This embodiment also provides a specific implementation of the transform data processing apparatus, which can be used to implement the method shown in fig. 3. Fig. 4 is a specific structural diagram of the transform data processing apparatus of this embodiment. As shown in fig. 4, the apparatus includes a first numerical range estimation unit, a second numerical range estimation unit, a numerical range difference unit, and a transform compensation unit. The transform compensation unit comprises an adjustment factor determining subunit, a transform subunit, a post-transform processing subunit, and a quantization subunit.
And the first numerical range estimation unit is used for estimating the numerical range of the image data after the first transformation according to a transformation matrix required by the first transformation in the two preset transformations and providing the numerical range difference unit with the estimated numerical range.
And the second numerical range estimation unit is used for estimating the numerical range of the image data after the second transformation according to a transformation matrix required by the second transformation in the two preset transformations and providing the numerical range difference unit with the estimated numerical range.
And the numerical range difference unit is used for estimating the difference between the value ranges of the image data after the first transform and corresponding quantization and after the second transform and corresponding quantization, according to the estimated value ranges after the two transforms and the quantization points corresponding to the first and second transforms, and providing the difference to the transform compensation unit.
In the transform compensation unit, the adjustment factor determining subunit is configured to determine the adjustment factor according to the value range difference provided by the numerical range difference unit and provide it to the post-transform processing subunit. The transform subunit is used for applying the first transform to the data to be transformed and providing the transform result to the post-transform processing subunit. The post-transform processing subunit multiplies the data after the first transform by the determined adjustment factor and provides the result to the quantization subunit. The quantization subunit quantizes the result multiplied by the adjustment factor according to the quantization point corresponding to the image data.
If the encoding end writes the adjustment factor into the code stream, an adjustment factor writing unit is correspondingly added and used for writing the adjustment factor into the code stream.
In this embodiment, the data after transform A are first multiplied by the adjustment factor and then quantized. As can be seen from the quantization process expressed by formula (1), quantization first multiplies by the quantization step and then right-shifts the product, i.e., discards the low-order information. The low-order bits of the product of the adjustment-factor-scaled transform-A data and the quantization step QTAB1[s] may not be zero and may still carry part of the information, which is lost in the shift, causing a loss of data precision. To reduce this precision loss during quantization, an evolved processing mode is therefore proposed on the basis of this embodiment: the value range of the data after transform A is changed by enlarging the adjustment factor of transform A and increasing the shift.
Specifically, in the flow shown in fig. 3, the operations of steps 301 to 303 are unchanged; in step 304, the determined adjustment factor of transform A is changed to q' = 2^{D'} × 2^n, where the value of n is determined by the hardware storage range of the encoding end: the larger n is the better, provided that the data after transform A multiplied by the adjustment factor q', and the data of subsequent operations, do not exceed the hardware storage range. This concentrates the information of the transform-A data in the high-order bits as much as possible.
Next, in step 305, the transformed data is multiplied by the above-mentioned adjustment factor q'.
Finally, when the data are quantized in step 306, n bits are added to the number of shift bits corresponding to the quantization point; that is, after the quantization operation, the result is right-shifted by an extra n bits. In this way, the factor of 2^n by which the value range was expanded by the modified adjustment factor of step 304 above is compensated back.
Briefly, the operations of the modified steps 305 to 306 are: (2^{D'} × 2^n × X × QTAB[s]) >> (shift[s] + n), where X represents the transformed data. Intuitively, compared with the operation (2^{D'} × X × QTAB[s]) >> shift[s] of the original steps 305 to 306, the result is the same; but when implemented on a hardware platform, the former suffers less loss of accuracy than the latter. This is because, in the modified step 305, after multiplication by the adjustment factor q', the data X are enlarged by 2^{D'} × 2^n, so the useful information is amplified and concentrated more in the high-order bits than after the original step 305; the useful information lost in the shift involved in the quantization of the modified step 306 is therefore relatively reduced. Data accuracy is thus improved more effectively.
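The precision benefit is easiest to see when D' is fractional, so that 2^{D'} must be rounded before it can be used in integer arithmetic; a sketch with assumed parameters:

```python
# Hedged sketch of why the evolved ordering loses less precision when
# D' is fractional. D', n, QTAB[s], and shift[s] are all assumptions.
D_PRIME = 1.3
QTAB_S, SHIFT_S = 5, 6
N = 8  # chosen so intermediates stay within the hardware word size

def quantize_plain(x):
    q = round(2 ** D_PRIME)               # 2^1.3 ~ 2.46 rounds to 2
    return (q * x * QTAB_S) >> SHIFT_S

def quantize_evolved(x):
    q = round((2 ** D_PRIME) * (2 ** N))  # ~630: fraction of 2^D' kept
    return (q * x * QTAB_S) >> (SHIFT_S + N)

exact = (2 ** D_PRIME) * 1000 * QTAB_S / (2 ** SHIFT_S)
print(quantize_plain(1000), quantize_evolved(1000), round(exact))
# 156 192 192 -- the evolved result matches the exact value here
```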
After the encoding end adopts the evolved method of the first embodiment, when the decoding end processes the data handled by transform A, the corresponding inverse operations are also performed: during inverse quantization the number of shift bits is reduced by n, and after inverse quantization the result is divided by the adjustment factor q' of transform A before the inverse transform, so that the encoding-side data are reconstructed more accurately.
Of course, corresponding to the above evolved mode, the present invention also provides a corresponding evolved apparatus structure, which is similar to the structure shown in fig. 4, except that the transform compensation unit includes an adjustment-coefficient-and-shift-offset determining subunit, a transform subunit, a post-transform processing subunit, and a quantization subunit. The structure and function of the first numerical range estimation unit, the second numerical range estimation unit, and the numerical range difference unit are the same as in the apparatus shown in fig. 4.
In the transform compensation unit of the evolved apparatus, the adjustment-coefficient-and-shift-offset determining subunit is used for determining the adjustment coefficient and the shift offset used during quantization according to the estimated value-range difference, and providing them to the post-transform processing subunit and the quantization subunit respectively. The transform subunit is used for applying the first transform to the data to be transformed and providing the transform result to the post-transform processing subunit. The post-transform processing subunit multiplies the data after the first transform by the determined adjustment coefficient and provides the result to the quantization subunit. The quantization subunit quantizes the result multiplied by the adjustment coefficient according to the shift offset and the quantization step corresponding to the first transform.
In the above embodiment, the difference between two transforms and the corresponding quantized value ranges is calculated using the average value range of the two transforms. In order to make the value range estimation more accurate, the method of the second embodiment may also be adopted.
Embodiment two:
in this embodiment, similar to the first embodiment, the difference value of the numerical ranges of the two kinds of transformations is a difference value of the numerical ranges of the image data after the two kinds of transformations and the corresponding quantization are respectively performed, but the specific calculation method of the difference value is different from that in the first embodiment.
Fig. 5 is a specific flowchart of a method for processing transformed data according to this embodiment. As shown in fig. 5, the method includes:
step 501, estimating the numerical value ranges of the image data after two kinds of transformation according to transformation matrixes required by two kinds of preset transformation.
The estimation method of this step is the same as step 301 in the first embodiment, and is not described here again.
Step 502, calculate the average value range after transformation B.
The average value range after transform B is calculated in this step in the same way as in the first embodiment, specifically Avr_B = (1/m²) · Σ_{i=1..m} Σ_{j=1..m} B_ij, where B_ij is the numerical range of the pixel data in the ith row and jth column of the sub-block after transform B.
Step 503, calculating the difference between the two transformed and corresponding quantized value ranges according to the value ranges of the data after transformation a obtained in step 501 and the average value range after transformation B obtained in step 502.
In this step, the principle of calculating the value-range difference is the same as in the first embodiment, except that in this embodiment the difference is calculated from the numerical range of each datum after transform A and the average numerical range after transform B. Specifically, after coding block C is divided into sub-blocks according to the size of transform A, the difference between the value range of the pixel data in the ith row and jth column of a sub-block after transform A and the value range after transform B is: d'_ij = (Avr_B + log2 QTAB2[s]) − (A_ij + log2 QTAB1[s]) (1 ≤ i ≤ n, 1 ≤ j ≤ n), where A_ij is the numerical range of the ith-row, jth-column pixel data of the sub-block after transform A.
Step 504, determining the adjustment factor of transformation a according to the value range difference obtained in step 503.
In this step, the adjustment factor is determined in the same manner as in the first embodiment, but per position: for the pixel data in the ith row and jth column of the sub-block, the adjustment factor is q_ij = 2^{d'_ij}.
Thus, the obtained adjustment factor varies with the pixel position.
Step 505, applying transform A to coding block C to obtain transformed data, and multiplying the obtained data by the corresponding adjustment factors of transform A.
In this step, when multiplying by the adjustment factor, the transform result of the pixel data in the ith row and jth column is multiplied by the adjustment factor q_ij, i.e. C_ij' = C_ij · 2^{d'_ij}; the adjustment of the transform result is thus more accurate.
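The per-position adjustment factors q_ij of this embodiment can be sketched as follows; the value ranges A_ij, Avr_B, and the quantization steps are illustrative assumptions:

```python
# Hedged sketch of the per-position factors q_ij = 2^{d'_ij}. The value
# ranges A_ij, Avr_B, and the quantization steps are assumptions.
import math

QTAB1_S, QTAB2_S = 8, 4
AVR_B = 16

def adjustment_factors(ranges_a):
    """q_ij = 2 ** ((Avr_B + log2 QTAB2[s]) - (A_ij + log2 QTAB1[s]))."""
    return [
        [2 ** ((AVR_B + math.log2(QTAB2_S)) - (a + math.log2(QTAB1_S)))
         for a in row]
        for row in ranges_a
    ]

q = adjustment_factors([[16, 15], [15, 14]])
print(q)  # [[0.5, 1.0], [1.0, 2.0]]
```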
Step 506, quantizing each C_ij' using the quantization step looked up for transform A.
So far, the compensation of the data obtained by transform A is completed; the compensation principle is the same as the derivation in the first embodiment and is not repeated here. The difference is that during compensation the adjustment factor is calculated per pixel-data position, so the compensation result is more accurate, the value ranges of the data obtained after the two transforms and corresponding quantization agree more closely, the better transform can be selected more effectively, and coding efficiency is further improved.
In actual encoding, because the encoded data differ, the per-position adjustment factors of transform A may need adjusting; they can be tuned appropriately according to the characteristics of the encoded data. The encoded data characteristic may be the value at each pixel position of the block after transform A. The adjustment factors of transform A are then written into the coded code stream; they may be written in the sequence header, picture header, slice header, or macroblock header of the code stream. The adjustment factors of transform A are used in the same way as described above and are not repeated here.
The method in this embodiment may also be implemented by using the apparatus shown in fig. 4 in the first embodiment.
In view of the problem of compensation accuracy, this embodiment also provides an evolved compensation method, specifically: an adjustment factor f(i,j) and a coefficient offset s(i,j) are determined from the difference between the value ranges of the two transforms and corresponding quantization. During compensation, the transformed data are first multiplied by f(i,j), and the coefficient offset s(i,j) is added to the product; the goal is to make the relationship f(i,j) × A_ij + s(i,j) = Avr_B (1 ≤ i ≤ n, 1 ≤ j ≤ n) hold, thereby ensuring the accuracy of compensation. Finally, the offset result is quantized according to the quantization step corresponding to transform A. Correspondingly, at the decoding side, f(i,j) and s(i,j) are set in correspondence with the encoding side; after inverse-quantizing the received data to be inverse-quantized, s(i,j) is subtracted, the result is divided by f(i,j), and the inverse transform is then performed to accurately reconstruct the image data of the encoding end.
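One hypothetical way to choose f(i,j) and s(i,j) satisfying the stated relation is to fix the multiplier and solve for the offset; the choice f = 1 and the numbers below are assumptions, since the text only fixes the relation itself:

```python
# Hedged sketch of one way to satisfy f(i,j) * A_ij + s(i,j) = Avr_B:
# fix the multiplier f and solve for the offset. All values assumed.
AVR_B = 16

def offsets_for(ranges_a, f=1.0):
    """s(i,j) = Avr_B - f * A_ij for a fixed multiplier f."""
    return [[AVR_B - f * a for a in row] for row in ranges_a]

print(offsets_for([[16, 15], [15, 14]]))  # [[0.0, 1.0], [1.0, 2.0]]
```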
For the above evolved compensation mode, the present invention also provides a corresponding evolved apparatus structure, which is similar to the structure shown in fig. 4, except that the transform compensation unit includes an adjustment-factor-and-coefficient-offset determining subunit, a transform subunit, a post-transform processing subunit, and a quantization subunit. The other units, namely the first numerical range estimation unit, the second numerical range estimation unit, and the numerical range difference unit, are the same in structure and function as in the apparatus shown in fig. 4.
In the transform compensation unit, the adjustment-factor-and-coefficient-offset determining subunit is used for determining the adjustment factors and coefficient offsets according to the value-range difference provided by the numerical range difference unit and providing them to the post-transform processing subunit. The transform subunit is used for applying the first transform to the data to be transformed and providing the transform result to the post-transform processing subunit. The post-transform processing subunit multiplies the data after the first transform by the determined adjustment factors, adds the offsets, and provides the result to the quantization subunit. The quantization subunit quantizes the result, multiplied by the adjustment factors and offset by the coefficient offsets, according to the quantization step corresponding to the first transform.
Similar to the first embodiment, the adjustment factors of transform A may be pre-calculated at the decoding end; when transform A is selected as the better transform mode, the decoding end performs inverse quantization, divides the inverse-quantization result by the respective corresponding adjustment factors of transform A, and then performs the inverse transform, thereby reconstructing the data of the encoding end.
If the adjustment factors of transform A were already written into the code stream during encoding, the decoding end obtains them from the position in the code stream that contains them, such as the sequence header, picture header, slice header, or macroblock header. The adjustment factors of transform A are used in the same way as described above and are not repeated here.
Embodiment three:
in this embodiment, the compensation method for the data obtained by transform A differs from the first two embodiments: compensation is performed by adjusting the quantization point. In addition, as in the previous embodiments, the value-range feature difference of the two transforms in this embodiment is the difference of the value ranges of the image data after the two transforms and corresponding quantization are respectively performed. Specifically, fig. 6 is a specific flowchart of the method for processing transformed data provided by the third embodiment. As shown in fig. 6, the method includes:
Steps 601 to 603, estimating the value ranges of the image data after the two transforms according to the transform matrices required by the two preset transforms; calculating, for each of the two transforms, the average value range after transformation; and calculating the difference between the value ranges after the two transforms and corresponding quantization.
The operations in steps 601 to 603 are the same as those in steps 301 to 303 in the first embodiment, and are not described here again.
In step 604, the quantization point offset is determined according to the difference in step 603.
In this embodiment, the quantization point corresponding to transform a is adjusted to compensate the data obtained from transform a, so that the data can obtain a value range consistent with transform B, and a better transform mode is selected.
The quantization point corresponding to transform a may be a preset quantization point or a quantization point obtained from transform B.
In a video standard, the quantization steps corresponding to adjacent quantization points generally form a fixed multiple relationship. Specifically, for every k quantization points the corresponding quantization step is halved, so the multiple relationship between the quantization steps of adjacent quantization points is (1/2)^{1/k}, and the multiple relationship between the quantization steps of quantization points spaced i apart is (1/2)^{i/k}.
Since the quantization step is proportional to the value obtained by quantization using the quantization step, the method for adjusting the quantization point offset can also achieve the purpose of adjusting the data value range.
From the value-range difference D' of transforms A and B obtained in step 603, the quantization point offset required to compensate this difference can be obtained. Specifically, the numerical multiple corresponding to the value-range difference D' is 2^{D'}. Subtracting an offset ΔQP from the quantization point multiplies the corresponding quantization step by [(1/2)^{1/k}]^{−ΔQP} = (2^{1/k})^{ΔQP}; setting this equal to 2^{D'} gives the quantization point offset ΔQP = [k × D'], where [·] is the rounding operator. Since the size of the quantization step affects the actual coding effect of the data, ΔQP may be further adjusted according to experimental data on the basis of the above formula.
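The offset computation can be sketched as below; k = 6 (step halving every 6 quantization points) is an assumption borrowed from common codec designs, not a value stated in the text:

```python
# Hedged sketch of the offset Delta-QP = [k * D']. k = 6 is an assumed
# value; the rounding follows the [.] operator described above.
K = 6

def qp_offset(d_prime):
    """Round k * D' to the nearest integer quantization-point offset."""
    return round(K * d_prime)

print(qp_offset(1.0), qp_offset(-0.5), qp_offset(1.3))  # 6 -3 8
```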
Of course, other formulas may be used for calculating the quantization point offset amount as long as the difference value of the numerical range obtained in step 603 can be reflected.
Step 605, applying transform A to coding block C to obtain transformed data, subtracting the determined quantization point offset from the quantization point corresponding to transform A, and quantizing with the quantization step looked up using the adjusted quantization point.
As can be seen from the derivation of ΔQP = |k × D'| in step 604, after the quantization point is adjusted, the multiple relationship between the adjusted quantization step and the original quantization step is 2^D', i.e. the adjusted quantization step becomes 2^D' · QTAB1[s], where QTAB1[s] is the original quantization step. The result of quantization with this modified quantization step is then 2^(Avr_A) · QTAB1[s] · 2^D' = 2^(Avr_B + log2 QTAB2[s]), whose value range expressed in binary is Avr_B + log2 QTAB2[s]. Obviously, this numerical range is the same as that of the data obtained after transform B and the corresponding quantization, achieving the purpose of making the final numerical ranges of transform A and transform B consistent.
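The identity used above can be checked numerically; the concrete ranges and steps below are hypothetical values chosen only to exercise the formula:

```python
import math

avr_a, avr_b = 10.0, 11.5      # assumed average value ranges, in bits
qtab1_s, qtab2_s = 32.0, 40.0  # assumed quantization steps QTAB1[s], QTAB2[s]

# Range difference after transform and corresponding quantization (step 603).
d_prime = (avr_b + math.log2(qtab2_s)) - (avr_a + math.log2(qtab1_s))

lhs = 2 ** avr_a * qtab1_s * 2 ** d_prime   # quantize with the adjusted step
rhs = 2 ** (avr_b + math.log2(qtab2_s))     # transform B + its quantization
assert math.isclose(lhs, rhs)               # binary value ranges coincide
```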
So far, the flow of the method for processing the transformed data provided by this embodiment is finished.
The transform data processing on the encoding side is performed according to the above method, and the better of transform A and transform B is selected. If the finally selected transform mode is transform A, the data sent to the decoding end is the data after transform A and the corresponding quantization, entropy coding, and so on. In this case, the decoding end needs to calculate in advance the quantization point offset corresponding to transform A according to steps 601 to 604 and compensate for it; specifically, ΔQP is added to the quantization point originally corresponding to transform A, and the data is then inverse-quantized and inverse-transformed using the quantization step corresponding to the changed quantization point, so that the encoding-end data is reconstructed accurately.
In the decoding-end step, when the quantization point offset is compensated, ΔQP may instead be subtracted from the quantization point originally corresponding to transform A, and the quantization step corresponding to the changed quantization point used to inverse-quantize and inverse-transform the data. Whether ΔQP is added or subtracted on the basis of the quantization point originally corresponding to transform A is determined by the characteristics of the quantization table; this is a preset step whose aim is to accurately recover, at the decoding end, data that was processed by transform A at the encoding end.
In actual encoding, the quantization offset of transform A may need to vary with the encoded data, so it may be adjusted appropriately according to the characteristics of the encoded data; such a characteristic may be the actual average value range of the data block after transform A. The quantization offset of transform A is then written into the coded code stream. That is, when encoding a sequence, picture, slice, or macroblock, the quantization offset subtracted from the quantization point corresponding to transform A may depend on the value range of the data after transform A, and the subtracted offset is written into the coded code stream, in the sequence header, picture header, slice header, or macroblock header. If the decoding end knows that the encoding-end code stream contains quantization point offset information, it obtains the quantization offset from the sequence header, picture header, slice header, or macroblock header of the code stream and subtracts it on the basis of the quantization point originally corresponding to transform A. Here, the quantization point corresponding to transform A at the encoding end and the quantization point originally corresponding to transform A at the decoding end may be preset quantization points, or may be the same quantization point as that of transform B.
This embodiment also provides another specific implementation of the transform data processing apparatus, which can be used to implement the method shown in fig. 6. Fig. 7 is a specific configuration diagram of the transform data processing apparatus according to this embodiment. As shown in fig. 7, the apparatus includes a first numerical range estimation unit, a second numerical range estimation unit, a numerical range difference unit, and a transform compensation unit. The transform compensation unit comprises a quantization point modification subunit, a transform subunit, and a quantization subunit.
And the first numerical range estimation unit is used for estimating the numerical range of the image data after the first transformation according to a transformation matrix required by the first transformation in the two preset transformations and providing the numerical range difference unit with the estimated numerical range.
And the second numerical range estimation unit is used for estimating the numerical range of the image data after the second transformation according to a transformation matrix required by the second transformation in the two preset transformations and providing the numerical range difference unit with the estimated numerical range.
And the numerical range difference unit is used for estimating the difference of the image data respectively subjected to the first transformation and the second transformation and the corresponding quantized numerical range according to the numerical range respectively subjected to the first transformation and the second transformation and the corresponding quantized points of the first transformation and the second transformation, and providing the difference to the transformation compensation unit.
In the transformation compensation unit, the quantization point correction subunit is configured to determine a quantization point offset according to the difference value of the numerical range provided by the numerical range difference value unit, subtract the quantization point offset from the quantization point corresponding to the first transformation, and provide the adjusted result to the quantization subunit. And a transformation subunit, configured to apply a first transformation to the data to be transformed, and provide the transformed result to the quantization subunit. And the quantization subunit is used for quantizing the transformed result according to the adjusted quantization point provided by the quantization point modification subunit.
In the implementation of this embodiment, the difference of the transformed A, B value ranges is calculated from the average value range. In fact, the difference may also be calculated as in the second embodiment, that is, a value range difference is calculated for each pixel in the sub-block after transform A. Accordingly, when the quantization point corresponding to transform A is adjusted in step 604, the adjustment is also made for each pixel in the sub-block after transform A, specifically ΔQP_(i,j) = |k × Δd'(i,j)|. Then, in step 605, each pixel is quantized according to the quantization step corresponding to its adjusted quantization point. In this way the compensation result is more accurate, the numerical ranges of the data obtained after the two transforms and the corresponding quantization agree more closely, the better transform can be selected more effectively, and the coding efficiency is further improved.
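A per-pixel version of the offset might look like the sketch below (illustrative only; the nested list stands in for the sub-block of per-coefficient range differences Δd'(i,j)):

```python
def per_pixel_offsets(delta_d, k):
    # dQP_(i,j) = |k * dd'(i,j)|, one offset per coefficient of the sub-block.
    return [[round(k * d) for d in row] for row in delta_d]

offsets = per_pixel_offsets([[0.5, 1.0], [0.0, 2.0]], 6)  # [[3, 6], [0, 12]]
```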
Correspondingly, after the coding end selects the transformation A as a better transformation mode, the decoding end obtains the quantization point offset corresponding to each pixel data after the transformation A in advance according to the method, adds the offset to the quantization point corresponding to each data to be inversely quantized, and then carries out inverse quantization and inverse transformation by using the quantization step corresponding to the changed quantization point so as to accurately reconstruct the coding end data.
When the quantization point offset needs to be written into the code stream, the encoding end and the decoding end may, following the above method, write the offset into and parse it from the code stream and process the quantization point originally corresponding to transform A accordingly.
In the above three embodiments, the transformed data are compensated in two different ways: one is to multiply the transformed data by an adjustment factor, the other is to change the quantization point corresponding to the transform. In fact, the two approaches can be combined to compensate the transformed data. For example, the transformed data is multiplied by an adjustment factor f and the quantization point is shifted by s, the aim being to satisfy the relationship f × Avr_A + s = Avr_B. Since Avr_A and Avr_B are a priori data, f and s can be preset values, which are not unique; the specific values of f and s may be chosen with reference to the magnitude relationship between Avr_A and Avr_B and the codec implementation conditions. If f and s need to be adjusted according to the image coded data, they can be written into the sequence header, picture header, slice header, macroblock header, or other places in the coded code stream. If the coded code stream contains f and s, the decoding end parses the corresponding positions of the code stream, such as the sequence header, picture header, slice header, or macroblock header, to obtain f and s.
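For a given choice of f, the matching shift s follows directly from the relationship f × Avr_A + s = Avr_B; a minimal sketch (all values hypothetical, not from the original disclosure):

```python
def quant_point_shift(avr_a: float, avr_b: float, f: float) -> float:
    # s chosen so that f * Avr_A + s = Avr_B holds exactly.
    return avr_b - f * avr_a

s1 = quant_point_shift(10.0, 12.0, 1.0)  # pure quantization-point shift: s = 2.0
s2 = quant_point_shift(10.0, 12.0, 1.2)  # the factor alone suffices: s = 0.0
```

This shows why f and s are not unique: any f determines a compensating s, so the pair can be picked to suit the codec implementation.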
Of course, each pixel datum after sub-block transformation may instead be multiplied by a corresponding adjustment factor f_(i,j) with the quantization point shifted by s_(i,j), the aim being to satisfy the relationship f_(i,j) × A_(i,j) + s_(i,j) = Avr_B (1 ≤ i ≤ n, 1 ≤ j ≤ n), where f_(i,j) and s_(i,j) may be chosen with reference to the magnitude relationship between Avr_A and Avr_B and the codec implementation conditions. If f_(i,j) and s_(i,j) need to be adjusted according to the image encoding data, they can be written into the sequence header, picture header, slice header, macroblock header, or other places in the coded code stream.
If the data obtained by transform A is compensated in the above manner, the same f_(i,j) and s_(i,j) can be preset at the decoding end. After receiving the coded data corresponding to transform A, and before inverse quantization, the quantization point corresponding to transform A is adjusted by adding s_(i,j) to the original quantization point, while the data to be inverse-quantized is divided by the corresponding adjustment factor f_(i,j); the data divided by the adjustment factor is then inverse-quantized using the quantization step corresponding to the changed quantization point, so that the encoding-end data is accurately reconstructed.
If the coded code stream contains f_(i,j) and s_(i,j), the decoding end parses the corresponding positions of the code stream, such as the sequence header, picture header, slice header, or macroblock header, to obtain f_(i,j) and s_(i,j).
Corresponding to the method combining the two compensation modes, the invention also provides a corresponding apparatus structure. The apparatus is similar to the structure shown in fig. 7, except that the transform compensation unit includes an adjustment factor and quantization point modification subunit, a transform subunit, a post-transform processing subunit, and a quantization subunit. The first numerical range estimation unit, second numerical range estimation unit, and numerical range difference unit are the same in structure and function as in the apparatus shown in fig. 7.
In the transform compensation unit, an adjustment factor and quantization point correction subunit is configured to determine an adjustment factor and a quantization point offset according to the difference value of the numerical range provided by the numerical range difference value unit, subtract the quantization point offset from the quantization point corresponding to the first transform, provide the adjusted quantization point to the quantization subunit, and provide the determined adjustment factor to the post-transform processing subunit. And the transformation subunit is used for applying a first transformation to the data to be transformed and providing the transformation result to the post-transformation processing subunit. And a transform post-processing subunit for multiplying the data after the first transform by the determined adjustment factor and providing the result to the quantization subunit. And the quantization subunit quantizes the result multiplied by the adjustment factor according to the quantization step corresponding to the adjusted quantization point.
In the above three embodiments, transform A and transform B may use different quantization tables, that is, the quantization steps found by quantization point lookup differ; however, for the case where the quantization tables are the same, the above manners cause a greater loss of data precision.
For the case of identical quantization tables, the present invention provides a fourth implementation manner, in which a set of quantization tables is redesigned for one of the transforms so that the numerical ranges of the two transforms' correspondingly quantized data are consistent.
Example four:
Different from the preceding three embodiments, in this embodiment the numerical range difference used is the difference between the numerical ranges of the image data after the two transformations themselves.
Fig. 8 is a flowchart illustrating a method for processing transformed data according to a fourth embodiment of the present invention. As shown in fig. 8, the method includes:
801-802, estimating numerical value ranges of image data after two kinds of transformation according to transformation matrixes required by two kinds of preset transformation; calculating the average value range after transformation respectively aiming at the two transformations;
The operations in steps 801 to 802 are the same as those in steps 301 to 302 of the first embodiment, except that they are performed for different quantization points. That is, for each quantization point, the average value ranges of the macroblock after the two transformations are calculated respectively: Avr_A(n) and Avr_B(n), where n is a quantization point index.
Step 803, calculate the difference between the two transformed ranges.
Calculate the two transformed value range differences for the different quantization points, specifically Δd(n) = Avr_B(n) − Avr_A(n).
In fact, as mentioned above, since video coding and decoding theory typically uses uniform quantization — every several quantization points, the quantization step is reduced to half of its original value — the values of Δd(n) have a certain regularity. Based on this, a simplification can be made here. Specifically, assuming that the quantization step is halved every m quantization points, only m consecutive Δd(n) values need be calculated; for example, only Δd(1), Δd(2), …, Δd(m) are calculated in this step, and the following assumes these m difference values have been calculated.
And step 804, setting quantization step sizes and shift digits corresponding to the quantization points, and forming a quantization table.
In this step, the quantization step sizes are set according to the differences calculated in step 803. Specifically, corresponding to the m difference values calculated in step 803, the quantization steps corresponding to the 1st to m-th quantization points are set to 2^Δd(1) × R, …, 2^Δd(m) × R, where R is a constant coefficient whose purpose is to make the quantization step an integer and to improve the calculation accuracy (a larger value naturally retains more information), although it has an upper limit in view of the hardware storage range. According to the uniform quantization principle, the quantization steps of the other quantization points can be deduced from those corresponding to the 1st to m-th quantization points.
Of course, when setting the quantization steps, all of them may be set directly, the quantization step corresponding to the n-th quantization point being 2^Δd(n) × R.
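Steps 803–804 can be sketched as below; the deltas, R, and table length are hypothetical values, and the halving every m points follows the uniform quantization principle stated above:

```python
def build_quant_table(deltas, R, total_points):
    # deltas holds dd(1)..dd(m); entry n is 2**dd(n) * R, with the step
    # halved every m quantization points (uniform quantization).
    m = len(deltas)
    return [2 ** deltas[n % m] * R / 2 ** (n // m)
            for n in range(total_points)]

qtab = build_quant_table([0.0, 0.5, 1.0], 64, 6)
# qtab[3] is half of qtab[0], one period (m = 3) later.
```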
The number of shift bits is set to a fixed value t. When t is larger, the coding effect is poorer but the compression rate is higher; when t is smaller, the coding effect is better but the compression rate is lower. Therefore, the specific value of t must weigh coding effect against compression rate to find a good balance point.
The quantization table is formed according to the set shift bit number and quantization step size, and the process is the same as the conventional implementation mode, and is not described herein again.
Step 805, receiving the data to be transformed, applying transform a to obtain transformed data, and quantizing the transformed data according to the quantization table.
After the above processing, the numerical range of the encoding-end data after transform A and quantization by this quantization table is basically consistent with the numerical range of the data after transform B and the corresponding quantization.
The principle by which this embodiment achieves the object of the invention is the same as in the first embodiment. Specifically, since in the quantization process the quantization step is proportional to the quantized data, quantizing the data obtained from transform A according to this quantization step is essentially equivalent to multiplying the data obtained from transform A by an adjustment factor of 2^Δd(n), as in the first embodiment. Therefore, this embodiment likewise adjusts the value range to be consistent with that after transform B and the corresponding quantization.
Correspondingly, an inverse quantization table matching the encoding-end quantization table is designed at the decoding end. In the inverse quantization table, considering that under the encoding end's uniform quantization the quantization step of the n-th quantization point is twice that of the (n+m)-th quantization point, this multiple relationship is compensated by a shift in the inverse quantization table; that is, in inverse quantization, the number of shift bits of the n-th quantization point equals the number of shift bits of the (n+m)-th quantization point plus 1. The number of shift bits of the 1st to m-th inverse quantization points can be set to h, and accordingly that of the (m+1)-th to 2m-th inverse quantization points is h − 1. The value of h is also set in consideration of the hardware storage range, on the same principle as for R above.
Next, according to the above setting of the shift bit numbers, the inverse quantization step IQ(n) of the inverse quantization table is calculated, where n is a quantization point. IQ(n) must satisfy the relationship 2^Δd(n) × R × IQ(n) = 2^(h − ⌊n/m⌋), where ⌊n/m⌋ reflects the variation in the number of shift bits, i.e., every m quantization points the number of shift bits changes by 1. From this formula, IQ(n) = 2^(h − ⌊n/m⌋) / (2^Δd(n) · R), where h is a constant: the larger the better, but likewise limited by the hardware storage range.
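The decoder-side table can be sketched as follows (hypothetical parameters; here Δd(n) incorporates the halving every m points, so that the product of the encoder step and IQ(n) is the pure power of two 2^(h − ⌊n/m⌋), as in the relation above):

```python
def build_dequant_table(deltas, R, total_points, h):
    # IQ(n) = 2**(h - n//m) / (2**dd(n) * R), where dd(n) decreases by 1
    # every m points to mirror the encoder's halving quantization step.
    m = len(deltas)
    table = []
    for n in range(total_points):
        dd_n = deltas[n % m] - n // m
        table.append(2 ** (h - n // m) / (2 ** dd_n * R))
    return table

iq = build_dequant_table([0.0, 0.5, 1.0], 64, 6, 16)
# For n = 0: encoder step Q(0) = 64, IQ(0) = 2**16 / 64 = 1024,
# so Q(0) * IQ(0) = 2**16, recoverable by a pure shift of h bits.
```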
The inverse quantization table is established in the above manner, so that the decoding end can restore the reconstructed data from the transform-A data by inverse quantization with the established table followed by the inverse transform.
This embodiment also provides a specific implementation of the transform data processing apparatus, which can be used to implement the method shown in fig. 8. Fig. 9 is a specific configuration diagram of the transform data processing apparatus according to this embodiment. As shown in fig. 9, the apparatus includes a first numerical range estimation unit, a second numerical range estimation unit, a numerical range difference unit, and a transform compensation unit. The transform compensation unit comprises a quantization table establishing subunit, a transform subunit, and a quantization subunit.
And the first numerical range estimation unit is used for estimating the numerical range of the image data after the first transformation according to a transformation matrix required by the first transformation in the two preset transformations and providing the numerical range difference unit with the estimated numerical range.
And the second numerical range estimation unit is used for estimating the numerical range of the image data after the second transformation according to a transformation matrix required by the second transformation in the two preset transformations and providing the numerical range difference unit with the estimated numerical range.
And the numerical range difference unit is used for estimating the difference of the two transformed numerical ranges according to the numerical ranges respectively subjected to the first transformation and the second transformation, and providing the difference serving as a numerical range characteristic difference to the transformation compensation unit.
In the transformation compensation unit, a quantization table establishing subunit is configured to establish a corresponding quantization table for the first transformation according to the value range characteristic difference, and provide the quantization table to the quantization subunit. A transform subunit for applying a first transform to the data to be transformed and providing the transform result to the quantization subunit. And the quantization subunit quantizes the result after the first transformation according to the quantization table corresponding to the first transformation.
In the above embodiments of the present invention, the compensation of the transform-A data is taken as the example to describe specific implementations; in fact, the transform-B data may also be compensated in the same way, simply by exchanging the parameters corresponding to transform A and transform B in the corresponding formulas, which is not described again here.
The above method for processing transform data can be used in an encoding method. Fig. 10 is a schematic flowchart of an encoding method provided by the fifth embodiment of the present invention; the encoding method includes the following steps:
a1, receiving data to be transformed;
the data to be transformed comprises image block residuals obtained through prediction.
A2, carrying out first transformation on the data to be transformed to obtain first transformed data; performing second transformation on the data to be transformed to obtain second transformed data;
the first transformation is here a transformation of 4x4 size;
the second transformation is here a transformation of 8x8 size;
and A3, determining an adjustment parameter of the first transform according to the first transformed data and the second transformed data, and adjusting the first transformed data according to the adjustment parameter and the parameter of the second transform.
The first transformed data and the second transformed data here include the numerical range after the first transformation and the numerical range after the second transformation, respectively; for convenience of description, they are referred to simply as the first transformed data and the second transformed data.
The adjustment parameter may include an adjustment factor determined from the first transformed data and the second transformed data; adjusting the first transformed data according to the adjustment parameter and the parameter of the second transform may then include: multiplying the first transformed data by the adjustment factor; and quantizing the data multiplied by the adjustment factor according to the corresponding quantization step.
The adjusting parameter may include a quantization offset determined according to first transformed data and second transformed data, the second transformed parameter being a quantization point of the second transform, and the adjusting the first transformed data according to the adjusting parameter and the second transformed parameter includes: subtracting the quantization offset from the quantization point corresponding to the second transform to serve as the quantization point of the first transform; and quantizing the data after the first transformation by using the adjusted quantization point of the first transformation.
The adjustment parameters may include a quantization offset and an adjustment factor determined from the first transformed data and the second transformed data; if the parameter of the second transform is a quantization point corresponding to the second transform, the adjusting the data after the first transform according to the adjustment parameter and the parameter of the second transform includes: multiplying the data after the first transformation by the adjusting factor, and subtracting the quantization offset from the quantization point corresponding to the second transformation to be used as the quantization point of the first transformation; and quantizing the data multiplied by the adjustment factor by using the quantization point of the first transformation.
The adjustment parameters may further include an adjustment coefficient and a quantization shift offset determined from the first transformed data and the second transformed data; the parameter of the second transform comprises a shift number of quantization points of the second transform data; adjusting the first transformed data according to the adjustment parameter and the second transformed parameter comprises: multiplying the data after the first transformation by the adjusting coefficient, and quantizing the data multiplied by the adjusting coefficient by using the shift digit of the quantization point corresponding to the first transformation; the shift digit of the quantization point corresponding to the first transformation is the sum of the shift digit of the quantization point corresponding to the second transformation and the quantization shift offset.
The adjustment parameter may include an adjustment factor and a coefficient offset determined according to the first transformed data and the second transformed data, and then adjusting the first transformed data according to the adjustment parameter and the second transformed parameter includes: multiplying the first transformed data by the adjustment factor, adding the coefficient offset, and quantizing.
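The first variant of step A3 (adjustment factor followed by quantization) can be sketched as below; the coefficients, factor, and quantization step are hypothetical values, not from the original disclosure:

```python
def encode_adjust(coeffs, f, qstep):
    # Multiply the first-transform data by the adjustment factor f,
    # then quantize with the corresponding quantization step.
    return [round(c * f / qstep) for c in coeffs]

levels = encode_adjust([100, -50, 25], 1.5, 16)  # [9, -5, 2]
```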
And A4, writing the adjusting parameters into the coding code stream.
And writing the adjustment parameters into a sequence header or an image header or a strip header or a macro block header in the coded code stream for a decoding end to use.
The numerical range of the processed data after the first transformation is approximately the same as the numerical range of the processed data after the second transformation, so that the influence of the transformation on the data can be more effectively reflected during encoding, the transformation with better effect is selected, and the encoding efficiency is further improved.
Fig. 11 is a schematic flowchart of a decoding method according to a sixth embodiment of the present invention, where the method includes:
b1, receiving a code stream, and decoding the code stream to obtain first converted data and adjustment parameters;
if the current decoded image block is judged to use the first transform, the first transformed data is the image-block residual data, obtained after entropy decoding, that is to be processed by the first inverse transform;
the first transform is here an inverse transform of size 4x 4;
the second transform is here an inverse transform of size 8x 8;
the adjustment parameters correspond to adjustment parameters written into the code stream during encoding.
And B2, adjusting the data after the first transformation according to the adjusting parameters and the parameters after the second transformation.
This step can be carried out in any of the following ways:
(1) Determining an adjustment factor for the first transformed data according to the adjustment parameter and the parameter of the second transformation; performing inverse quantization on the first transformed data according to the quantization step corresponding to the first transformation, and performing inverse transformation on the quotient of the inverse quantization result and the adjustment factor.
(2) If the received adjustment parameter is a quantization offset and the parameter of the second transformation is the quantization point corresponding to the second transformation, the quantization point of the first transformation is the quantization point corresponding to the second transformation minus the quantization offset. Adjusting the first transformed data according to the quantization offset and the quantization point corresponding to the second transformation comprises: performing inverse quantization on the first transformed data using the quantization point corresponding to the first transformation, and performing inverse transformation on the inverse quantization result.
(3) If the adjustment parameter includes a quantization offset and an adjustment factor for the first transformed data, and the parameter of the second transformation is the quantization point corresponding to the second transformation, the quantization point of the first transformation is the quantization point of the second transformation minus the quantization offset. Adjusting the first transformed data according to the adjustment parameter and the parameter of the second transformation comprises: performing inverse quantization on the first transformed data using the quantization point corresponding to the first transformation, and performing inverse transformation on the quotient of the inverse quantization result and the adjustment factor.
(4) If the adjustment parameters comprise an adjustment coefficient for the first transformation and a quantization shift offset for the first transformed data, and the parameter of the second transformation comprises the shift bit number of the quantization point of the second transformation, adjusting the first transformed data according to the adjustment parameters and the parameter of the second transformation comprises: subtracting the quantization shift offset from the shift bit number of the quantization point corresponding to the second transformation to obtain the shift bit number of the quantization point corresponding to the first transformation; performing inverse quantization on the received first transformed data using that shift bit number; and performing inverse transformation on the quotient of the inverse quantization result and the adjustment coefficient.
(5) Determining an adjustment factor and a coefficient offset for the first transformed data according to the adjustment parameter and the parameter of the second transformation. Adjusting the first transformed data according to the adjustment parameter and the parameter of the second transformation comprises: performing inverse quantization on the received data according to the quantization step corresponding to the first transformation, subtracting the coefficient offset from the inverse quantization result, and performing inverse transformation on the quotient of the subtraction result and the adjustment factor.
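As a concrete illustration, method (2) above can be sketched as follows. This is a minimal sketch, not the patent's implementation: the function names and the dequantization step model (step doubling every 6 quantization points, as is common in block-transform codecs) are assumptions made for illustration.

```python
# Hedged sketch of decoder-side adjustment method (2). The dequantization
# step model and all names below are illustrative assumptions.

def dequant_step(qp):
    # Assumed step model: the step doubles every 6 quantization points.
    return 2 ** (qp / 6.0)

def adjust_method_2(first_transformed, qp_second, qp_offset):
    # Quantization point of the first transform = quantization point of the
    # second transform minus the signalled quantization offset.
    qp_first = qp_second - qp_offset
    step = dequant_step(qp_first)
    # Inverse quantization of the first transformed data; the inverse
    # transform would then be applied to this result.
    return [c * step for c in first_transformed]

coeffs = adjust_method_2([3, -1, 0, 2], qp_second=30, qp_offset=6)
# coeffs == [48.0, -16.0, 0.0, 32.0]
```

The other four methods differ only in whether the adjustment factor, coefficient offset, or shift bit number is applied before or after this inverse quantization step.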
Correspondingly to the encoding end, the adjustment parameter is obtained from the sequence header, picture header, slice header, or macroblock header in the code stream.
Fig. 12 is a schematic structural diagram of an encoding apparatus according to a seventh embodiment of the present invention. The encoding apparatus includes:
a data receiving unit, configured to receive data to be transformed;
a transformation unit, configured to perform a first transformation on the data to be transformed received by the data receiving unit to obtain first transformed data, and to perform a second transformation on the data to be transformed to obtain second transformed data;
a first adjusting unit, configured to determine an adjustment parameter according to a parameter of the second transformation, the first transformed data and the second transformed data, and to adjust the first transformed data according to the adjustment parameter and the parameter of the second transformation; and
a writing unit, configured to write the adjustment parameter into the encoded code stream.
Fig. 13 is a schematic structural diagram of a decoding apparatus according to an eighth embodiment of the present invention. The decoding apparatus includes:
a code stream receiving unit, configured to receive the code stream;
a decoding unit, configured to decode the code stream to obtain first transformed data and an adjustment parameter; and
a second adjusting unit, configured to adjust the first transformed data according to the adjustment parameter and a parameter of the second transformation.
The above encoding apparatus and decoding apparatus can be used together in one system, which is not described again here.
Embodiment nine: encoding method
Embodiment nine provides an encoding method that can adaptively obtain a quantization offset (the adjustment parameter described above) at the encoding end. As shown in fig. 14, the method includes:
C1, receiving data to be transformed and encoded parameter information;
C2, performing a first transformation on the data to be transformed to obtain first transformed data;
C3, determining an adjustment parameter according to the encoded parameter information and a parameter of the second transformation, and adjusting the first transformed data according to the adjustment parameter and the parameter of the second transformation;
C4, writing the adjustment parameter into the encoded code stream.
As described in detail below.
Let the scale of transform A be 4x4 and the scale of transform B be 8x8.
In this embodiment, the data to be transformed and the encoded parameter information are received first. A first transformation is performed on the data to be transformed to obtain first transformed data, and an adjustment parameter is determined according to the encoded parameter information and the parameter of the second transformation. Here, the parameter of the second transformation is the quantization point corresponding to the second transformation, and the adjustment parameter is a quantization offset.
The encoded parameter information includes one or more of the following: the number of intra-prediction image blocks using the first transform; the number of intra-prediction image blocks using the second transform; the number of all intra-prediction image blocks; the number of intra-prediction image blocks using the first transform in a P frame or a B frame; the number of intra-prediction image blocks using the first transform in P frames and B frames; the number of intra-prediction image blocks using the second transform in a P frame or a B frame; the number of intra-prediction image blocks using the second transform in P frames and B frames; the number of image blocks using the skip mode or the direct mode in a P frame; the number of image blocks using the skip mode and the direct mode in P frames; the number of image blocks using the skip mode or the direct mode in a B frame; the number of image blocks using the skip mode and the direct mode in B frames; the number of image blocks using an inter-prediction mode in a P frame; and the number of image blocks using an inter-prediction mode in a B frame. The skip mode is a common technique in video coding and decoding: the motion vector of the current image block is derived from information of already encoded or decoded image blocks, and the image block carries no coded residual. The direct mode likewise derives the motion vector of the current image block from information of already encoded or decoded image blocks, but the image block does contain a coded residual.
The number of intra-prediction image blocks using the first transform is recorded as img->intra4x4num; the number of all intra-prediction image blocks is recorded as img->intra_num; the number of image blocks using the skip mode and/or the direct mode in P frames is recorded as img->pskip_num; the number of intra-prediction image blocks used in P frames is recorded as img->p_intra; the number of image blocks using an inter-prediction mode in P frames is recorded as img->pnum; the quantization point corresponding to the second transform is recorded as img->QP; and the quantization offset is recorded as img->QP_shift.
In this embodiment, three tables are stored at the encoding end for determining the quantization offset. They are represented as the arrays QPshift_table[0], QPshift_table[1] and QPshift_table[2], each containing 64 elements, as follows:
QPshift_table[0]={0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,6,6,6,6,7,8,8,8,8,8,8,8,8,9,9,9,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10}
QPshift_table[1]={0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6}
QPshift_table[2]={0,1,1,1,2,2,3,3,3,4,4,4,5,5,6,6,7,8,8,9,9,10,10,10,10,10,11,11,11,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,13,13,13,13,13,13,13,13,13,13,13,13}
When the adaptive block transform technique is used, if the current coded picture is an I frame and is the first picture of the current sequence or current group of pictures, the values of img->intra4x4num, img->intra_num, img->pskip_num, img->p_intra and img->pnum are set to 0, and QPshift_table[0] is used to determine img->QP_shift as follows: img->QP_shift = QPshift_table[0][img->QP]. A group of pictures here comprises several I frames, P frames, B frames, or a combination of these types.
If the current coded picture is an I frame and is not the first picture of the current sequence or current group of pictures, but no picture in the current sequence or current group of pictures has yet been coded as a P frame, two parameters are set, denoted as a first determined value v1 and a second determined value v2; v1 and v2 are calculated as follows:
v1 = img->intra4x4num*5.0/img->intra_num/(img->QP_shift+12)
v2 = 0.2
where the symbol '*' represents multiplication and the symbol '/' represents division.
After v1 and v2 are obtained, the quantization offset img->QP_shift is obtained according to the following logical judgment:
(1) if v1 is less than 0.3 and v2 is less than 0.4, or v1 is less than 0.4 and v2 is less than 0.15, or v1 is less than 0.5 and v2 is less than 0.1, the value of img->QP_shift is obtained by looking up the table QPshift_table[1] with img->QP as the index;
if condition (1) is not satisfied, the following judgment is made:
(2) if v1 is greater than 0.5, or v2 is greater than 0.5, or v1*v2 is greater than 0.2, the value of img->QP_shift is obtained by looking up the table QPshift_table[2] with img->QP as the index;
if neither condition (1) nor condition (2) is met, the value of img->QP_shift is obtained by looking up the table QPshift_table[0] with img->QP as the index.
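The table-selection logic above can be sketched as follows. The table contents here are shortened stand-ins for the 64-element QPshift_table arrays given earlier; only the branching on v1 and v2 follows the text, and the function name is illustrative.

```python
# Sketch of the quantization-offset table selection. Tables are truncated
# stand-ins for the 64-element arrays; the function name is an assumption.

QPshift_table = [
    [0, 1, 1, 1, 1, 2, 2, 2],  # stands in for QPshift_table[0]
    [0, 1, 1, 1, 1, 2, 2, 2],  # stands in for QPshift_table[1]
    [0, 1, 1, 1, 2, 2, 3, 3],  # stands in for QPshift_table[2]
]

def select_qp_shift(v1, v2, qp):
    # Condition (1): small v1/v2 combinations -> look up table 1.
    if (v1 < 0.3 and v2 < 0.4) or (v1 < 0.4 and v2 < 0.15) or (v1 < 0.5 and v2 < 0.1):
        return QPshift_table[1][qp]
    # Condition (2): large v1, v2, or their product -> look up table 2.
    if v1 > 0.5 or v2 > 0.5 or v1 * v2 > 0.2:
        return QPshift_table[2][qp]
    # Neither condition met -> default table 0.
    return QPshift_table[0][qp]
```

With the real 64-element tables, qp ranges over the full quantization-point index instead of 0..7.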
If the current coded picture is not the first picture of the current sequence or current group of pictures, and the current sequence or current group of pictures already contains pictures coded as P frames, two parameters are set, denoted as a first determined value v1 and a second determined value v2; v1 and v2 are calculated as follows:
v1 = img->intra4x4num*5.0/img->intra_num/(img->QP_shift+12)
v2 = img->pskip_num*1.0/img->pnum
where the symbol '*' represents multiplication and the symbol '/' represents division.
After v1 and v2 are obtained, the quantization offset img->QP_shift is obtained according to the following logical judgment:
(1) if v1 is less than 0.3 and v2 is less than 0.4, or v1 is less than 0.4 and v2 is less than 0.15, or v1 is less than 0.5 and v2 is less than 0.1, the value of img->QP_shift is obtained by looking up the table QPshift_table[1] with img->QP as the index;
if condition (1) is not satisfied, the following judgment is made:
(2) if v1 is greater than 0.5, or v2 is greater than 0.5, or v1*v2 is greater than 0.2, the value of img->QP_shift is obtained by looking up the table QPshift_table[2] with img->QP as the index;
if neither condition (1) nor condition (2) is met, the value of img->QP_shift is obtained by looking up the table QPshift_table[0] with img->QP as the index.
After the quantization offset img->QP_shift is obtained, a weight coefficient lambda corresponding to the first transformation in the rate-distortion optimization model is obtained according to a table pre-stored at the encoding end. The table is denoted QP_lambda_table_4x4 and, in this embodiment, includes 16 elements. The lambda used by intra-coded blocks of the first transform is recorded as img->lambda4x4I, and the lambda used by inter-prediction mode blocks is recorded as img->lambda4x4p; img->lambda4x4I and img->lambda4x4p are calculated as:
img->lambda4x4I = QP_lambda_table_4x4[img->QP_shift]
img->lambda4x4p = img->lambda4x4I*0.9
The representation of QP_lambda_table_4x4 is as follows:
QP_lambda_table_4x4[16]={1.0,1.1,1.3,1.5,1.8,2.1,2.6,3.1,3.5,4.0,4.6,5.1,5.7,6.3,100,100}
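A minimal sketch of this lambda derivation, using the table contents given above (the function name is an assumption made for illustration):

```python
# Sketch of the rate-distortion lambda derivation for the first transform.
# Table contents are taken from the text; the function name is illustrative.

QP_lambda_table_4x4 = [1.0, 1.1, 1.3, 1.5, 1.8, 2.1, 2.6, 3.1,
                       3.5, 4.0, 4.6, 5.1, 5.7, 6.3, 100, 100]

def lambdas_for_first_transform(qp_shift):
    lambda4x4I = QP_lambda_table_4x4[qp_shift]  # intra-coded blocks
    lambda4x4p = lambda4x4I * 0.9               # inter-prediction mode blocks
    return lambda4x4I, lambda4x4p
```

For example, a quantization offset of 5 yields lambda4x4I = 2.1 and lambda4x4p = 2.1 * 0.9.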
if the current picture to be coded is a P frame, img- > QP _ shift is set to 5 and img- > lambda4x4P is set to 1.6.
If the current image to be coded is a B frame, img- > QP _ shift is set to 5.
After img->QP_shift is obtained by the above method, the quantization point of the first transform is calculated from the parameter of the second transform (namely the quantization point of the second transform): the quantization point of the second transform minus the quantization offset is used as the quantization point of the first transform. The first transformed data is then quantized with this calculated quantization point, and the quantization offset img->QP_shift is written into the encoded code stream.
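The final step can be sketched as follows. The quantize() step model (step doubling every 6 quantization points) is an assumption for illustration and is not taken from the patent; only the subtraction of the offset follows the text.

```python
# Sketch of the final encoder step: derive the first transform's quantization
# point, quantize, and signal the offset. quantize() is an assumed model.

def first_transform_qp(qp_second, qp_shift):
    # Quantization point of the second transform minus the quantization offset.
    return qp_second - qp_shift

def quantize(coeff, qp):
    step = 2 ** (qp / 6.0)  # assumed quantization step model
    return int(coeff / step)

qp_first = first_transform_qp(qp_second=32, qp_shift=8)  # 24
level = quantize(100, qp_first)                          # int(100 / 16) == 6
# img->QP_shift would then be written into the code stream for the decoder.
```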
Correspondingly, a tenth embodiment provides an encoding apparatus that can be used to execute the encoding method of embodiment nine. As shown in fig. 15, the encoding apparatus includes:
a third receiving unit, configured to receive data to be transformed and encoded parameter information;
a third transformation unit, configured to perform a first transformation on the data to be transformed received by the third receiving unit to obtain first transformed data;
a third adjusting unit, configured to determine an adjustment parameter according to the encoded parameter information received by the third receiving unit and a parameter of the second transformation, and to adjust the first transformed data obtained by the third transformation unit according to the adjustment parameter and the parameter of the second transformation; and
a third writing unit, configured to write the adjustment parameter obtained by the third adjusting unit into the encoded code stream.
The third receiving unit in this embodiment receives the data to be transformed and the encoded parameter information. The third transformation unit performs the first transformation on the data to be transformed to obtain the first transformed data, and the third adjusting unit determines the adjustment parameter according to the encoded parameter information and the parameter of the second transformation. Here, the parameter of the second transformation is the quantization point corresponding to the second transformation, and the adjustment parameter is a quantization offset.
The encoded parameter information includes one or more of the following: the number of intra-prediction image blocks using the first transform; the number of intra-prediction image blocks using the second transform; the number of all intra-prediction image blocks; the number of intra-prediction image blocks using the first transform in a P frame or a B frame; the number of intra-prediction image blocks using the first transform in P frames and B frames; the number of intra-prediction image blocks using the second transform in a P frame or a B frame; the number of intra-prediction image blocks using the second transform in P frames and B frames; the number of image blocks using the skip mode or the direct mode in a P frame; the number of image blocks using the skip mode and the direct mode in P frames; the number of image blocks using the skip mode or the direct mode in a B frame; the number of image blocks using the skip mode and the direct mode in B frames; the number of image blocks using an inter-prediction mode in a P frame; and the number of image blocks using an inter-prediction mode in a B frame. The skip mode is a common technique in video coding and decoding: the motion vector of the current image block is derived from information of already encoded or decoded image blocks, and the image block carries no coded residual. The direct mode likewise derives the motion vector of the current image block from information of already encoded or decoded image blocks, but the image block does contain a coded residual.
The third adjusting unit calculates the quantization offset from the encoded parameter information and the quantization point of the second transform, calculates the quantization point of the first transform from the quantization offset and the quantization point of the second transform, and quantizes the first transformed data using the quantization point of the first transform (that is, it adjusts the first transformed data obtained by the third transformation unit according to the adjustment parameter and the parameter of the second transformation). The specific calculation steps are as described in embodiment nine.
The third writing unit writes the quantization offset into the encoded code stream, so that the quantization offset can be obtained and used during the decoding operation.
Embodiments nine and ten obtain the adjustment parameter (quantization offset) from the encoded parameter information and the parameter of the second transform, giving the encoding end better flexibility. Because the encoding end writes the calculated adjustment parameter into the encoded code stream, the decoding end only needs to read the adjustment parameter from the code stream. The method of these embodiments thus takes coding performance into account without imposing any extra burden on the decoding end.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (56)

1. A method of processing transformed data, the method comprising:
estimating the numerical value ranges of the image data after two kinds of transformation according to the transformation matrices required by two preset transformations;
estimating the feature difference value of the two transformed numerical ranges according to the two transformed numerical ranges;
receiving data to be transformed, applying a first of the two transformations to the data, and compensating the first transformed data according to the estimated value range characteristic difference.
2. The method of claim 1, wherein the difference between the two transforms is further estimated based on their respective quantization step sizes found by the quantization point lookup corresponding to the image data; the feature difference value of the numerical range of the two kinds of transformation is the difference value of the numerical range of the image data after the two kinds of transformation and corresponding quantization are respectively carried out.
3. The method of claim 2, wherein the compensating the first transformed data based on the estimated value range feature difference is:
determining an adjustment factor according to the estimation value range characteristic difference, and multiplying the data after the first transformation by the adjustment factor;
and quantizing the data multiplied by the adjustment factor according to the corresponding quantization step.
4. The method of claim 3, wherein determining the adjustment factor q based on the estimated value range feature difference value is: q = 2^D′, where D′ is the difference of the value ranges of the image data after the two transformations and corresponding quantization.
5. The method of claim 2, further comprising: the decoding end determines an adjusting factor according to the value range characteristic difference in advance, performs inverse quantization on the received data according to the quantization step corresponding to the first transformation when decoding the data processed by the first transformation, and performs inverse transformation on the quotient of the inverse quantization result and the adjusting factor.
6. The method of claim 2, wherein the processing the transformed data based on the estimated value range feature difference value is:
determining the quantization point offset according to the estimated value range characteristic difference value, and subtracting the quantization point offset from the quantization point corresponding to the transformation;
and quantizing the first transformed data by using the adjusted quantization point.
7. The method according to claim 6, wherein the quantization point offset is determined as ΔQP = |k × D′|, where D′ is the difference of the value ranges of the image data after the two transformations and corresponding quantization, and k is the quantization point offset corresponding to a halving of the quantization step size.
8. The method of claim 6, further comprising: and the decoding end determines the quantization point offset in advance according to the numerical range characteristic difference, adds the quantization point offset to the corresponding quantization point when decoding the data after the first transformation, performs inverse quantization on the received data by using the adjusted quantization point, and performs inverse transformation on the inverse quantization result.
9. The method of claim 2, wherein the compensating the first transformed data based on the estimated value range feature difference is:
determining an adjustment factor and a quantization point offset according to the estimated numerical range characteristic difference;
multiplying the data after the first transformation by the determined adjustment factor, and reducing the quantization point corresponding to the first transformation by the quantization point offset;
and quantizing the data multiplied by the adjusting factor by using the adjusted quantization point.
10. The method of claim 9, further comprising: the decoding end determines an adjusting factor and a quantization point offset in advance according to the value range characteristic difference, adds the quantization point offset to a corresponding quantization point when decoding the data after the first transformation, performs inverse quantization on the received data by using the adjusted quantization point, and performs inverse transformation on the quotient of the inverse quantization result and the adjusting factor.
11. The method of claim 2, wherein the compensating the first transformed data based on the estimated value range feature difference is:
determining an adjusting coefficient and a displacement offset during quantization according to the estimated numerical range characteristic difference;
the first transformed data is multiplied by the adjustment coefficient and the number of shifted bits is increased by the shift offset when quantized.
12. The method of claim 11, wherein the adjustment coefficient is q′ = 2^D′ × 2^n and the shift offset is n, where n is a positive integer.
13. The method of claim 11, further comprising: the decoding end determines an adjusting coefficient and a shift offset in quantization in advance according to the value range characteristic difference, subtracts the shift offset from the shift bit number of the corresponding quantization point when decoding the data after the first transformation, performs inverse quantization on the received data by using the adjusted quantization point, and performs inverse transformation on the quotient of the inverse quantization result and the adjusting coefficient.
14. The method of claim 2, wherein the compensating the first transformed data based on the estimated value range feature difference is:
determining an adjustment factor and a coefficient offset according to the estimated numerical range characteristic difference;
multiplying the first transformed data by a determined adjustment factor, plus the coefficient offset;
and quantizing the data multiplied by the adjustment factor and added with the coefficient offset by using the quantization step corresponding to the first transformation.
15. The method according to any of claims 2 to 14, wherein the difference between the estimated image data after two transformations and corresponding quantized value ranges, respectively, is:
for both transformations, the mean value ranges after transformation are calculated separately: Avr_A = (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} Aij and Avr_B = (1/m²) Σ_{i=1}^{m} Σ_{j=1}^{m} Bij, wherein Aij is the numerical range of the pixel data in the ith row and jth column, after the first transformation, of the sub-block obtained by dividing the image data according to the scale of the first transformation; Bij is the numerical range of the pixel data in the ith row and jth column, after the second of the two transformations, of the sub-block obtained by dividing the image data according to the scale of the second transformation; the scale of the first transformation is n×n and the scale of the second transformation is m×m;
calculating the difference D′ of the value ranges of the image data after the two transformations and corresponding quantization, according to the mean value ranges after the two transformations and the corresponding quantization steps: D′ = (Avr_B + log2 QTAB2[n]) − (Avr_A + log2 QTAB1[n]), wherein QTAB1[n] and QTAB2[n] are the quantization steps found from the quantization point lookup tables corresponding to the first and second transformations, respectively.
16. The method according to any of claims 2 to 14, wherein estimating the difference of the value ranges of the image data after the two transformations and corresponding quantization is:
calculating the mean value range after the second of the two transformations: Avr_B = Σ_{i=1}^{m} Σ_{j=1}^{m} Bij;
according to the mean value range after the second transformation, the value ranges of the pixel data of the sub-blocks obtained by dividing according to the scale of the first transformation, and the quantization points corresponding to the two transformations, calculating the difference d′(i,j) between the value range, after the first transformation and corresponding quantization, of the pixel data in the ith row and jth column of the sub-block and the mean value range after the second transformation and corresponding quantization: d′(i,j) = (Avr_B + log2 QTAB2[n]) − (Aij + log2 QTAB1[n]), wherein Aij is the numerical range of the pixel data in the ith row and jth column of the sub-block after the first transformation, and QTAB1[n] and QTAB2[n] are the quantization steps found from the quantization point lookup tables corresponding to the first and second transformations, respectively.
17. The method of claim 1,
the operation of estimating the value range is performed separately for at least k successive quantization points, wherein every k quantization points the quantization step is reduced to half of the original;
the difference between the two transformed numerical range features is: for the at least k consecutive quantization points, the image data is respectively subjected to the difference Δ d(s) of two transformed numerical ranges, wherein s is a quantization point index;
after estimating the difference value of the numerical range characteristics of the two kinds of transformation and before receiving data to be transformed, establishing a quantization table corresponding to the first kind of transformation further according to the difference value of the numerical range characteristics of the two kinds of transformation;
the compensating the first transformed data according to the estimated value range characteristic difference is: and quantizing the data after the first transformation by utilizing the established quantization table corresponding to the first transformation.
18. The method of claim 17, wherein the quantization table corresponding to the first transform is: the quantization step corresponding to the quantization point s is set to 2^Δd(s) × R, where R is a constant coefficient value.
19. The method of claim 17, further comprising: the decoding end establishes a first transformed inverse quantization table in advance according to the numerical range characteristic difference, and when the data after the first transformation is decoded, inverse quantization is carried out on the received data according to the first transformed inverse quantization table, and inverse transformation is carried out on the inverse quantization result.
20. The method of claim 19, wherein the creating the inverse quantization table for the first transform is:
setting the inverse quantization step corresponding to the quantization point s to IQ(s) = 2^(h − |s/m|) / (2^Δd(s) · R), wherein m is the quantization point offset corresponding to a halving of the quantization step size, and h and R are constant coefficient values;
for every m quantization points, the corresponding shift bit number is increased by 1.
21. The method according to any one of claims 17 to 20, wherein the difference Δ d(s) of the two transformed value ranges of the image data for different quantization points is estimated as:
for the current quantization point s, for both transformations, calculating the mean value ranges after transformation: Avr_A = (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} Aij and Avr_B = (1/m²) Σ_{i=1}^{m} Σ_{j=1}^{m} Bij, wherein Aij is the numerical range of the pixel data in the ith row and jth column after the first transformation corresponding to the quantization point s, Bij is the numerical range of the pixel data in the ith row and jth column after the second of the two transformations corresponding to the quantization point s, the scale of the first transformation is n×n, and the scale of the second transformation is m×m;
the difference Δd(s) of the value ranges of the image data after the two transformations is Avr_B − Avr_A.
22. A transformed data processing apparatus comprising a first numerical range estimation unit, a second numerical range estimation unit, a numerical range difference unit, and a transform compensation unit,
the first numerical range estimation unit is used for estimating a numerical range of the image data after first transformation according to a transformation matrix required by the first transformation in two preset transformations, and providing the numerical range difference unit with the estimated numerical range;
the second numerical range estimation unit is used for estimating a numerical range of the image data after second transformation according to a transformation matrix required by the second transformation in the two preset transformations, and providing the numerical range difference unit with the estimated numerical range;
the numerical range difference unit is used for estimating the characteristic difference of the two transformed numerical ranges according to the numerical ranges respectively subjected to the first transformation and the second transformation, and providing the characteristic difference to the transformation compensation unit;
and the transformation compensation unit is used for receiving the data to be transformed, applying the first transformation of the two transformations to the data, and compensating the data after the first transformation according to the estimated value range characteristic difference.
23. The apparatus of claim 22, wherein the value range difference unit is further configured to estimate the value range characteristic difference of the two transforms according to the quantization step sizes of the two transforms looked up from the quantization point corresponding to the image data, wherein the value range characteristic difference of the two transforms is the difference of the image data after the two transforms and their corresponding quantization.
24. The apparatus of claim 23, wherein the transform compensation unit comprises an adjustment factor determination subunit, a transform subunit, a post-transform processing subunit, and a quantization subunit,
the adjustment factor determining subunit is configured to determine an adjustment factor according to the difference value of the numerical range provided by the numerical range difference value unit, and provide the adjustment factor to the post-conversion processing subunit;
the transformation subunit is used for applying a first transformation to the data to be transformed and providing a transformation result to the post-transformation processing subunit;
the post-transformation processing subunit is used for multiplying the data after the first transformation by the determined adjustment factor and providing the result to the quantization subunit;
and the quantization subunit quantizes the result multiplied by the adjustment factor according to the quantization step corresponding to the first transformation.
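A minimal sketch of the compensation path in claim 24, scale by the adjustment factor and then quantize; the quantization-step value and the rounding rule are assumptions for illustration, not specified by the claim:

```python
def compensate_and_quantize(coeffs, adjustment_factor, q_step):
    """Multiply first-transform coefficients by the adjustment factor,
    then quantize with the quantization step of the first transform."""
    return [round(c * adjustment_factor / q_step) for c in coeffs]

print(compensate_and_quantize([100, -40, 8], 0.5, 10))  # -> [5, -2, 0]
```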
25. The apparatus of claim 23, wherein the transform compensation unit comprises a quantization point modification sub-unit, a transform sub-unit, and a quantization sub-unit,
the quantization point correction subunit is configured to determine a quantization point offset according to the difference value of the numerical range provided by the numerical range difference value unit, subtract the quantization point offset from the quantization point corresponding to the first transform, and provide the adjusted result to the quantization subunit;
the transformation subunit is configured to apply a first transformation to the data to be transformed, and provide a transformed result to the quantization subunit;
and the quantization subunit is configured to quantize the transformed result according to the adjusted quantization point provided by the quantization point modification subunit.
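The quantization-point correction of claim 25 can be sketched as follows; the H.264-style mapping from quantization point to step size (step doubling every 6 points) is an assumption here, as the claims do not fix the mapping:

```python
def quantize_with_qp_offset(coeffs, qp_first_transform, qp_offset):
    """Subtract the quantization point offset from the first transform's
    quantization point, then quantize with the resulting step size."""
    qp = qp_first_transform - qp_offset   # adjusted quantization point
    step = 2 ** (qp / 6)                  # assumed QP-to-step rule
    return [round(c / step) for c in coeffs]

print(quantize_with_qp_offset([16, -8], 12, 6))  # -> [8, -4]
```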
26. The apparatus of claim 23, wherein the transform compensation unit comprises an adjustment factor and quantization point modification subunit, a transform subunit, a post-transform processing subunit, and a quantization subunit,
the adjustment factor and quantization point correction subunit is configured to determine an adjustment factor and a quantization point offset according to the difference value of the numerical range provided by the numerical range difference value unit, subtract the quantization point offset from the quantization point corresponding to the first transform, provide the adjusted quantization point to the quantization subunit, and provide the determined adjustment factor to the post-transform processing subunit;
the transformation subunit is used for applying a first transformation to the data to be transformed and providing a transformation result to the post-transformation processing subunit;
the post-transformation processing subunit is used for multiplying the data after the first transformation by the determined adjustment factor and providing the result to the quantization subunit;
and the quantization subunit quantizes the result multiplied by the adjustment factor according to the quantization step corresponding to the adjusted quantization point.
27. The apparatus of claim 23, wherein the transform compensation unit comprises an adjustment coefficient and shift offset sub-unit, a transform sub-unit, a post-transform processing sub-unit, and a quantization sub-unit,
the adjustment coefficient and shift offset subunit is configured to determine an adjustment coefficient and a shift offset used during quantization according to the estimated value range characteristic difference, and to provide the adjustment coefficient and the shift offset to the post-transform processing subunit and the quantization subunit, respectively;
the transformation subunit is used for applying a first transformation to the data to be transformed and providing a transformation result to the post-transformation processing subunit;
the post-transformation processing subunit is used for multiplying the data after the first transformation by the determined adjusting coefficient and providing the result to the quantization subunit;
and the quantization subunit quantizes the result multiplied by the adjustment factor according to the shift offset and the quantization step corresponding to the first transformation.
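An integer-arithmetic sketch of claim 27, in which quantization is realized as a right shift whose bit count is adjusted by the shift offset; the integer adjustment coefficient and the base shift count are illustrative assumptions:

```python
def quantize_with_shift(coeffs, adj_coeff, base_shift, shift_offset):
    """Scale by the integer adjustment coefficient, then quantize by
    right-shifting (base_shift + shift_offset) bits."""
    shift = base_shift + shift_offset
    return [(c * adj_coeff) >> shift for c in coeffs]

print(quantize_with_shift([1024, 256], 3, 4, 1))  # -> [96, 24]
```

Note that Python's `>>` floors toward negative infinity, so a real implementation would need an explicit rounding convention for negative coefficients.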
28. The apparatus of claim 23, wherein the transform compensation unit comprises an adjustment factor and coefficient offset sub-unit, a transform sub-unit, a post-transform processing sub-unit, and a quantization sub-unit,
the adjustment factor and coefficient offset subunit is configured to determine an adjustment factor and a coefficient offset according to the numerical range difference value provided by the numerical range difference unit, and to provide them to the post-transform processing subunit;
the transformation subunit is used for applying a first transformation to the data to be transformed and providing a transformation result to the post-transformation processing subunit;
the post-transform processing subunit is configured to multiply the data after the first transformation by the determined adjustment factor, add the coefficient offset, and provide the result to the quantization subunit;
and the quantization subunit quantizes the result which is multiplied by the adjustment factor and added with the coefficient offset according to the quantization step corresponding to the first transformation.
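A minimal sketch of claim 28's path, scale, add the coefficient offset, then quantize; the concrete quantization step and rounding rule are assumptions:

```python
def adjust_with_coefficient_offset(coeffs, adjustment_factor, coeff_offset, q_step):
    """Multiply first-transform coefficients by the adjustment factor,
    add the coefficient offset, then quantize."""
    return [round((c * adjustment_factor + coeff_offset) / q_step) for c in coeffs]

print(adjust_with_coefficient_offset([100, 20], 0.5, 10, 10))  # -> [6, 2]
```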
29. The apparatus of claim 22, wherein the value range difference unit estimates the two transformed value range feature differences as the difference between two transformed value ranges of the image data;
the transformation compensation unit comprises a quantization table establishing subunit, a transformation subunit and a quantization subunit;
the quantization table establishing subunit is configured to establish a corresponding quantization table for the first transformation according to the numerical range characteristic difference, and provide the quantization table for the quantization subunit;
the transformation subunit is used for applying a first transformation to the data to be transformed and providing the transformation result to the quantization subunit;
and the quantization subunit quantizes the result after the first transformation according to the quantization table corresponding to the first transformation.
30. A method of encoding, comprising:
receiving data to be transformed;
performing first transformation on the data to be transformed to obtain first transformed data;
performing second transformation on the data to be transformed to obtain second transformed data;
determining an adjusting parameter according to the second transformed parameter, the first transformed data and the second transformed data, and adjusting the first transformed data according to the adjusting parameter and the second transformed parameter;
and writing the adjustment parameters into an encoding code stream.
31. The method of claim 30, wherein the adjustment parameter comprises an adjustment factor determined based on the first transformed data and the second transformed data, and wherein adjusting the first transformed data based on the adjustment parameter and the second transformed parameter comprises:
multiplying the first transformed data by the adjustment factor;
and quantizing the data multiplied by the adjustment factor according to the corresponding quantization step.
32. The method of claim 30, wherein the adjustment parameter comprises a quantization offset determined according to a first transformed data and a second transformed data, wherein the second transformed parameter is a quantization point of a second transform, and wherein adjusting the first transformed data according to the adjustment parameter and the second transformed parameter comprises:
subtracting the quantization offset from the quantization point corresponding to the second transform to serve as the quantization point of the first transform;
and quantizing the data after the first transformation by using the adjusted quantization point of the first transformation.
33. The method of claim 30, wherein the first transform is a 4x4 transform and the second transform is an 8x8 transform.
34. The method of claim 30, wherein the adjustment parameters include a quantization offset and an adjustment factor determined from the first transformed data and the second transformed data; the parameter of the second transformation is a quantization point corresponding to the second transformation, and the adjusting the data after the first transformation according to the adjustment parameter and the parameter of the second transformation includes:
multiplying the data after the first transformation by the adjusting factor, and subtracting the quantization offset from the quantization point corresponding to the second transformation to be used as the quantization point of the first transformation;
and quantizing the data multiplied by the adjustment factor by using the quantization point of the first transformation.
35. The method of claim 30, wherein the adjustment parameters include an adjustment coefficient and a quantization shift offset determined from the first transformed data and the second transformed data;
the parameter of the second transform comprises a shift number of quantization points of the second transform data;
the adjusting the first transformed data according to the adjustment parameter and the second transformed parameter comprises:
multiplying the data after the first transformation by the adjusting coefficient, and quantizing the data multiplied by the adjusting coefficient by using the shift digit of the quantization point corresponding to the first transformation;
the shift digit of the quantization point corresponding to the first transformation is the sum of the shift digit of the quantization point corresponding to the second transformation and the quantization shift offset.
36. The method of claim 30, wherein the adjustment parameters include an adjustment factor and a coefficient offset determined from the first transformed data and the second transformed data, and wherein adjusting the first transformed data based on the adjustment parameters and the second transformed parameters comprises:
multiplying the first transformed data by the adjustment factor, adding the coefficient offset, and quantizing.
37. The method of claim 30, wherein writing the adjustment parameter into an encoded code stream comprises: writing the adjustment parameter into a sequence header or a picture header or a slice header or a macroblock header in the coding code stream.
38. A method of decoding, comprising:
receiving a code stream, and decoding the code stream to obtain first transformed data and an adjustment parameter;
and adjusting the data after the first transformation according to the adjusting parameters and the parameters of the second transformation.
39. The method of claim 38, wherein said adjusting the first transformed data according to the adjustment parameter and the second transformed parameter comprises:
determining an adjustment factor for the first transformed data according to the adjustment parameter and the second transformed parameter;
and performing inverse quantization on the data after the first transformation according to the quantization step corresponding to the first transformation, and performing inverse transformation on the quotient of the inverse quantization result and the adjustment factor.
40. The method of claim 38, wherein the adjustment parameter is a quantization offset, the parameter of the second transform is a quantization point corresponding to the second transform, the quantization point of the first transform is a quantization point corresponding to the second transform minus the quantization offset, and the adjusting the data after the first transform according to the adjustment parameter and the parameter of the second transform comprises:
and carrying out inverse quantization on the data after the first transformation by using the quantization point corresponding to the first transformation, and carrying out inverse transformation on the inverse quantization result.
41. The method of claim 38,
the adjustment parameters include a quantization offset and an adjustment factor for the first transformed data;
the parameter of the second transformation is a quantization point corresponding to the second transformation, and the quantization point of the first transformation is the quantization point of the second transformation minus the quantization offset;
the adjusting the first transformed data according to the adjustment parameter and the second transformed parameter comprises:
carrying out inverse quantization on the data after the first transformation by using the quantization point corresponding to the first transformation;
and performing inverse transformation on the quotient of the inverse quantization result and the adjustment factor.
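The decoder-side adjustment of claim 41 can be sketched as follows (the inverse transform itself is omitted); the H.264-style quantization-point-to-step mapping is an assumption, not part of the claim:

```python
def dequantize_first_transform(levels, qp_second, quant_offset, adjustment_factor):
    """Derive the first transform's quantization point as the second
    transform's point minus the signalled offset, inverse-quantize the
    received levels, then divide by the adjustment factor."""
    qp = qp_second - quant_offset   # quantization point of the first transform
    step = 2 ** (qp / 6)            # assumed QP-to-step rule
    return [lvl * step / adjustment_factor for lvl in levels]

print(dequantize_first_transform([8, -4], 12, 6, 0.5))  # -> [32.0, -16.0]
```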
42. The method of claim 38,
the adjustment parameters include an adjustment coefficient for the first transform and a quantization shift offset for the first transform data;
the parameter of the second transform comprises a shift number of quantization points of the second transform data;
the adjusting the first transformed data according to the adjustment parameter and the second transformed parameter comprises:
subtracting the quantization shift offset from the shift bit number of the quantization point corresponding to the second transformation to obtain the shift bit number of the quantization point corresponding to the first transformation;
carrying out inverse quantization on the received data after the first transformation by using the shift bit number of the quantization point corresponding to the first transformation;
and performing inverse transformation on the quotient of the inverse quantization result and the adjustment coefficient.
43. The method of claim 38,
determining an adjustment factor for the first transformed data according to the adjustment parameter and the parameter of the second transformation;
determining the coefficient offset of the first transformation data according to the adjustment parameter and the parameter of the second transformation;
the adjusting the first transformed data according to the adjustment parameter and the second transformed parameter comprises:
and carrying out inverse quantization on the received data according to the quantization step corresponding to the first transformation, subtracting the coefficient offset from the inverse quantization result, and carrying out inverse transformation on the quotient of the subtraction result and the adjustment factor.
44. The method of claim 38, wherein the adjustment parameter is obtained from a sequence header or a picture header or a slice header or a macroblock header in the bitstream.
45. The method of claim 38, wherein the first transform is a 4x4 transform and the second transform is an 8x8 transform.
46. An encoding apparatus, comprising:
a data receiving unit for receiving data to be transformed;
a transformation unit, configured to perform a first transformation on the data to be transformed received by the receiving unit to obtain first transformed data, and to perform a second transformation on the data to be transformed to obtain second transformed data;
a first adjusting unit which determines an adjusting parameter according to a second transformed parameter, the first transformed data and the second transformed data, and adjusts the first transformed data according to the adjusting parameter and the second transformed parameter;
and the writing unit is used for writing the adjusting parameters into the coding code stream.
47. A decoding apparatus, comprising:
a code stream receiving unit for receiving the code stream;
a decoding unit, configured to decode the code stream to obtain first transformed data and adjustment parameters;
and the second adjusting unit is used for adjusting the data after the first transformation according to the adjusting parameters and the parameters of the second transformation.
48. A coding/decoding system, comprising: an encoding device and a decoding device;
the encoding apparatus includes:
a data receiving unit for receiving data to be transformed;
a transformation unit, configured to perform a first transformation on the data to be transformed received by the receiving unit to obtain first transformed data, and to perform a second transformation on the data to be transformed to obtain second transformed data;
a first adjusting unit which determines an adjusting parameter according to a second transformed parameter, the first transformed data and the second transformed data, and adjusts the first transformed data according to the adjusting parameter and the second transformed parameter;
and the writing unit is used for writing the adjusting parameters into the coding code stream.
The decoding apparatus includes:
a code stream receiving unit for receiving the code stream;
a decoding unit, configured to decode the code stream to obtain first transformed data and adjustment parameters;
and the second adjusting unit is used for adjusting the data after the first transformation according to the adjusting parameters and the parameters of the second transformation.
49. A method of encoding, comprising:
receiving data to be transformed and encoded parameter information;
performing first transformation on the data to be transformed to obtain first transformed data;
determining an adjustment parameter according to the encoded parameter information and a second transformed parameter, and adjusting the first transformed data according to the adjustment parameter and the second transformed parameter;
and writing the adjustment parameters into an encoding code stream.
50. The method of claim 49, wherein the first transform is a 4x4 transform and the second transform is an 8x8 transform.
51. The method of claim 49, wherein the adjustment parameter comprises a quantization offset determined according to encoded parameter information and a parameter of the second transform, the parameter of the second transform being a quantization point of the second transform, and wherein adjusting the data after the first transform according to the adjustment parameter and the parameter of the second transform comprises:
subtracting the quantization offset from the quantization point corresponding to the second transform to serve as the quantization point of the first transform;
and quantizing the data after the first transformation by using the adjusted quantization point of the first transformation.
52. The method of claim 49, wherein the encoded parameter information comprises statistics obtained before encoding the current picture in the current sequence or the current group of pictures, the statistics comprising the number of intra-predicted picture blocks using the first transform, the number of intra-predicted picture blocks using the second transform, the number of all intra-predicted picture blocks using the first transform, the number of intra-predicted picture blocks using the first transform in P-frames or B-frames, the number of intra-predicted picture blocks using the first transform in P-frames and B-frames, the number of intra-predicted picture blocks using the second transform in P-frames or B-frames, the number of intra-predicted picture blocks using the second transform in P-frames and B-frames, the number of intra-predicted picture blocks using the skip mode or the direct mode in P-frames, the number of picture blocks using the skip mode or the direct mode in P-frames, the number of image blocks in the B frame using the skip mode or the direct mode, the number of image blocks in the B frame using the skip mode and the direct mode, the number of image blocks in the P frame using the inter prediction mode, or the number of image blocks in the B frame using the inter prediction mode.
53. The method of claim 51 or 52, wherein the step of determining the quantization offset based on the encoded parameter information comprises:
performing mathematical operation on the encoded parameter information to obtain a first determined value and a second determined value;
carrying out logic judgment by using the first and second determined values and preset conditions, and selecting a table for determining the quantization offset from preset tables according to a logic judgment result;
determining quantization offsets in the table for determining quantization offsets based on parameters of the second transform.
54. The method of claim 53, wherein the step of mathematically operating the encoded parameter information to obtain a first determined value and a second determined value comprises:
calculating the proportional relation between the numbers of image blocks of the respective intra-frame prediction types to obtain a first relation value; and calculating the proportional relation between the number of image blocks using the skip mode or the direct mode and the number of image blocks using the inter-frame prediction mode, or the proportional relation between the number of image blocks using the skip mode and the direct mode and the number of image blocks using the inter-frame prediction mode, to obtain a second relation value;
and obtaining the first determination value and the second determination value by performing mathematical operation on the first relationship value, the second relationship value, and the parameter of the second transformation.
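The table-selection procedure of claims 53-54 can be sketched as below; the two ratio formulas, the thresholds, and the contents of the two offset tables are all hypothetical placeholders, since the claims leave the concrete operations and tables unspecified:

```python
# Hypothetical offset tables indexed by the second transform's quantization point.
TABLE_LOW = {qp: 3 for qp in range(52)}
TABLE_HIGH = {qp: 6 for qp in range(52)}

def quantization_offset(n_intra_4x4, n_intra_8x8, n_skip_direct, n_inter,
                        qp_second, thr1=0.5, thr2=0.5):
    """Derive two relation values from coded-block statistics, select an
    offset table by thresholding them, and index it with qp_second."""
    r1 = n_intra_4x4 / max(1, n_intra_4x4 + n_intra_8x8)  # first relation value
    r2 = n_skip_direct / max(1, n_inter)                  # second relation value
    table = TABLE_HIGH if (r1 > thr1 and r2 > thr2) else TABLE_LOW
    return table[qp_second]

print(quantization_offset(80, 20, 60, 100, 30))  # -> 6
```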
55. The method of claim 51, 52 or 54, wherein after the quantization offset is calculated, a weighting parameter lambda in the rate-distortion optimization model is obtained according to the quantization offset and a preset table.
56. An encoding apparatus, comprising:
a third receiving unit for receiving data to be transformed and encoded parameter information;
a third transformation unit, configured to perform a first transformation on the data to be transformed received by the third receiving unit to obtain first transformed data;
a third adjusting unit, configured to determine an adjustment parameter according to the encoded parameter information and a second transformed parameter received by a third receiving unit, and adjust the first transformed data obtained by the third transforming unit according to the adjustment parameter and the second transformed parameter;
and the third writing unit is used for writing the adjusting parameters obtained by the third adjusting unit into the coding code stream.
CN 200810087919 2007-06-13 2008-03-19 Method and apparatus for processing transformation data, method and apparatus for encoding and decoding Active CN101325714B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN 200810087919 CN101325714B (en) 2007-06-13 2008-03-19 Method and apparatus for processing transformation data, method and apparatus for encoding and decoding
PCT/CN2008/071255 WO2008151570A1 (en) 2007-06-13 2008-06-10 Method, device and system for coding and decoding

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
CN200710112378.0 2007-06-13
CN200710112378 2007-06-13
CN200810008671.7 2008-01-31
CN200810008671 2008-01-31
CN 200810087919 CN101325714B (en) 2007-06-13 2008-03-19 Method and apparatus for processing transformation data, method and apparatus for encoding and decoding

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN 201010233134 Division CN101888556B (en) 2008-03-19 2008-03-19 Encoding method, decoding method, and encoding device and decoding device

Publications (2)

Publication Number Publication Date
CN101325714A true CN101325714A (en) 2008-12-17
CN101325714B CN101325714B (en) 2010-10-27

Family

ID=40188994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200810087919 Active CN101325714B (en) 2007-06-13 2008-03-19 Method and apparatus for processing transformation data, method and apparatus for encoding and decoding

Country Status (1)

Country Link
CN (1) CN101325714B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1209928C (en) * 2003-07-04 2005-07-06 清华大学 Inframe coding frame coding method using inframe prediction based on prediction blockgroup
CN101061725B (en) * 2004-11-19 2010-08-11 松下电器产业株式会社 Video encoding method, and video decoding method
CN100466745C (en) * 2005-10-11 2009-03-04 华为技术有限公司 Predicting coding method and its system in frame
CN100534195C (en) * 2006-12-22 2009-08-26 上海广电(集团)有限公司中央研究院 Fast inter-frame mode adjudging method capable of fusing multi-reference frame selection and motion estimation

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11765363B2 (en) 2010-04-13 2023-09-19 Ge Video Compression, Llc Inter-plane reuse of coding parameters
US11546642B2 (en) 2010-04-13 2023-01-03 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US11856240B1 (en) 2010-04-13 2023-12-26 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US12010353B2 (en) 2010-04-13 2024-06-11 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11553212B2 (en) 2010-04-13 2023-01-10 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11611761B2 (en) 2010-04-13 2023-03-21 Ge Video Compression, Llc Inter-plane reuse of coding parameters
US11983737B2 (en) 2010-04-13 2024-05-14 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US11910029B2 (en) 2010-04-13 2024-02-20 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division preliminary class
US11910030B2 (en) 2010-04-13 2024-02-20 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11810019B2 (en) 2010-04-13 2023-11-07 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US12120316B2 (en) 2010-04-13 2024-10-15 Ge Video Compression, Llc Inter-plane prediction
US20210211743A1 (en) 2010-04-13 2021-07-08 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US11900415B2 (en) 2010-04-13 2024-02-13 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US11785264B2 (en) 2010-04-13 2023-10-10 Ge Video Compression, Llc Multitree subdivision and inheritance of coding parameters in a coding block
US11778241B2 (en) 2010-04-13 2023-10-03 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US11546641B2 (en) 2010-04-13 2023-01-03 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11765362B2 (en) 2010-04-13 2023-09-19 Ge Video Compression, Llc Inter-plane prediction
US11736738B2 (en) 2010-04-13 2023-08-22 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using subdivision
US11734714B2 (en) 2010-04-13 2023-08-22 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
WO2012122948A1 (en) * 2011-03-16 2012-09-20 清华大学 Method for coding and decoding images, device for coding and decoding images, and network system
US9106897B2 (en) 2011-03-16 2015-08-11 Huawei Technologies Co., Ltd. Picture encoding and decoding method, picture encoding and decoding device and network system
CN102685487B (en) * 2011-03-16 2015-07-08 华为技术有限公司 Image coding and decoding methods, image coding and decoding equipment and network system
CN102685487A (en) * 2011-03-16 2012-09-19 华为技术有限公司 Image coding and decoding methods, image coding and decoding equipment and network system
CN108134935A (en) * 2011-10-17 2018-06-08 株式会社Kt The method that the decoded video signal decoding with current block is treated with decoding apparatus
CN108174211A (en) * 2011-10-17 2018-06-15 株式会社Kt The method that the decoded video signal decoding with current block is treated with decoding apparatus
CN108134935B (en) * 2011-10-17 2021-11-05 株式会社Kt Method for decoding video signal having current block to be decoded by decoding device
CN108111850B (en) * 2011-10-17 2021-11-05 株式会社Kt Method for decoding video signal having current block to be decoded by decoding device
CN108111849B (en) * 2011-10-17 2021-11-02 株式会社Kt Method for decoding video signal having current block to be decoded by decoding device
CN108174212B (en) * 2011-10-17 2021-11-02 株式会社Kt Method for decoding video signal having current block to be decoded by decoding device
CN108174211B (en) * 2011-10-17 2021-11-02 株式会社Kt Method for decoding video signal having current block to be decoded by decoding device
CN108111850A (en) * 2011-10-17 2018-06-01 株式会社Kt Method for decoding video signal having current block to be decoded by decoding device
CN108111849A (en) * 2011-10-17 2018-06-01 株式会社Kt Method for decoding video signal having current block to be decoded by decoding device
CN108174212A (en) * 2011-10-17 2018-06-15 株式会社Kt Method for decoding video signal having current block to be decoded by decoding device
CN104221378B (en) * 2012-04-16 2017-12-05 高通股份有限公司 Uniform granularity for quantization matrix in video coding
CN104221378A (en) * 2012-04-16 2014-12-17 高通股份有限公司 Uniform granularity for quantization matrix in video coding
CN104541506A (en) * 2012-09-28 2015-04-22 英特尔公司 Inter-layer pixel sample prediction
US10257514B2 (en) 2014-07-24 2019-04-09 Huawei Technologies Co., Ltd. Adaptive dequantization method and apparatus in video coding
CN105338352A (en) * 2014-07-24 2016-02-17 华为技术有限公司 Adaptive dequantization method and device in video decoding
WO2016011796A1 (en) * 2014-07-24 2016-01-28 华为技术有限公司 Adaptive inverse-quantization method and apparatus in video coding
CN110291792A (en) * 2017-01-11 2019-09-27 交互数字Vc控股公司 Asymmetric coding unit size block dependent ratio
US11533502B2 (en) 2017-01-11 2022-12-20 Interdigital Madison Patent Holdings, Sas Asymmetric coding unit size block dependent ratio
CN109996074A (en) * 2017-12-29 2019-07-09 富士通株式会社 Picture coding device, picture decoding apparatus and electronic equipment
CN110365983A (en) * 2019-09-02 2019-10-22 珠海亿智电子科技有限公司 Macroblock-level bit rate control method and device based on human visual system
CN110365983B (en) * 2019-09-02 2019-12-13 珠海亿智电子科技有限公司 Macroblock-level code rate control method and device based on human eye vision system

Also Published As

Publication number Publication date
CN101325714B (en) 2010-10-27

Similar Documents

Publication Publication Date Title
CN101325714B (en) Method and apparatus for processing transformation data, method and apparatus for encoding and decoding
CN101888556B (en) Encoding method, decoding method, and encoding device and decoding device
US8059721B2 (en) Estimating sample-domain distortion in the transform domain with rounding compensation
US8086052B2 (en) Hybrid video compression method
KR100932879B1 (en) Macroblock Level Rate Control
US8244048B2 (en) Method and apparatus for image encoding and image decoding
US8259793B2 (en) System and method of fast MPEG-4/AVC quantization
US8374451B2 (en) Image processing device and image processing method for reducing the circuit scale
JP2007089035A (en) Moving image encoding method, apparatus, and program
KR101621854B1 (en) Tsm rate-distortion optimizing method, encoding method and device using the same, and apparatus for processing picture
KR20050119422A (en) Method and apparatus for estimating noise of input image based on motion compenstion and, method for eliminating noise of input image and for encoding video using noise estimation method, and recording medium for storing a program to implement the method
JP4294095B2 (en) Motion compensated prediction process and encoder using motion compensated prediction process
KR100961760B1 (en) Motion Estimation Method and Apparatus Which Refer to Discrete Cosine Transform Coefficients
JP4494803B2 (en) Improved noise prediction method and apparatus based on motion compensation, and moving picture encoding method and apparatus using the same
US20120008687A1 (en) Video coding using vector quantized deblocking filters
US20090041119A1 (en) Method and Device for Coding a Video Image
US8194740B2 (en) Apparatus and method for compression-encoding moving picture
EP2076047A2 (en) Video motion estimation
KR101522391B1 (en) Quantization control device and method, and quantization control program
JP4532607B2 (en) Apparatus and method for selecting a coding mode in a block-based coding system
KR20040007818A (en) Method for controlling DCT computational quantity for encoding motion image and apparatus thereof
US11736704B1 (en) Methods and apparatuses of SATD folding hardware design in video encoding systems
US20120027080A1 (en) Encoder and encoding method using coded block pattern estimation
CN116137658A (en) Video coding method and device
KR20100082700A (en) Wyner-ziv coding and decoding system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C53 Correction of patent for invention or patent application
CB03 Change of inventor or designer information

Inventor after: He Yun

Inventor after: Wu Yannan

Inventor after: Wang Yunfei

Inventor after: Mao Xiunan

Inventor after: Zheng Xiaozhen

Inventor after: Zheng Jianhua

Inventor before: He Yun

Inventor before: Wu Yannan

Inventor before: Zheng Xiaozhen

Inventor before: Zheng Jianhua

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: HE YUN WU YANNAN ZHENG XIAOZHEN ZHENG JIANHUA TO: HE YUN WU YANNAN WANG YUNFEI MAO XIUNAN ZHENG XIAOZHEN ZHENG JIANHUA