CN103248889B - Image processing method and device
Abstract
In an embodiment of the present invention, N first predicted values corresponding to N prediction modes are obtained; whether first data and second data derived from the N first predicted values are smaller than a first preset threshold is judged, and a first judgment result is generated. When the first judgment result indicates that the first data and the second data are smaller than the first preset threshold, gray prediction model processing is performed on the N first predicted values to obtain a second predicted value, and the second predicted value is taken as the final predicted value of each of the N prediction modes. This solves the technical problem in the prior art that performing a rate distortion calculation for every prediction mode during image data processing makes intra-frame prediction of an image highly complex.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method and apparatus.
Background
With the rapid development of third-generation (3G) mobile communication and multimedia technologies, obtaining good image quality with as little storage as possible and transmitting images quickly over low-bandwidth links have become two major challenges for video compression. To address them, the Joint Video Team (JVT), formed jointly by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), proposed the highly compressed digital video codec standard H.264. Compared with earlier video coding standards, H.264 has clear advantages in bit rate, image quality, error resilience, network adaptability and the like, and therefore provides better image quality at the same bandwidth.
H.264 intra coding is used to reduce the spatial redundancy of images; to improve its efficiency, the spatial correlation of neighboring macroblocks, which typically share similar properties, is exploited within a given frame. H.264 differs from previous compression standards in many ways, one of the main differences being a wider range of block sizes. In H.264 there are seven different partition modes for a macroblock. A 16 × 16 macroblock can be partitioned in four ways: one 16 × 16, two 16 × 8, two 8 × 16, or four 8 × 8 blocks, and each 8 × 8 sub-macroblock can be further divided in four ways: one 8 × 8, two 4 × 8, two 8 × 4, or four 4 × 4 blocks. For a 4 × 4 block there are 9 prediction modes, see fig. 1. To suit applications such as High Definition Television (HDTV) and video capture systems, the HEVC standard emerged; its basic intra-frame framework is substantially the same as that of H.264, but the original 8 prediction directions are extended to 33, refining intra prediction. In addition, the intra prediction modes retain DC prediction and improve the Planar prediction method. Currently, the HM model includes 35 prediction modes, as shown in fig. 2.
In the related art, the method for selecting an intra prediction mode for a frame image under the H.264 standard comprises a method for selecting the intra prediction mode of the luminance portion and a method for selecting the intra prediction mode of the chrominance portion. The luminance method divides a frame image into a plurality of macroblocks and sets a difference-degree threshold; the difference degree of each macroblock is calculated and compared with the threshold, and prediction modes of different sizes are used according to the comparison result. The SATD value of each prediction mode is then calculated, candidate prediction modes are obtained from the resulting values, rate distortion optimization is performed on those candidates, and the optimal mode is finally determined.
Disclosure of Invention
The invention provides an image processing method and apparatus, which are used to solve the technical problem in the prior art that performing a rate distortion calculation for every prediction mode during image data processing makes intra-frame prediction of an image highly complex. The specific technical scheme is as follows:
an image processing method applied to an image processing apparatus, the image processing apparatus being capable of performing encoding and decoding processing on image data, the image processing apparatus dividing the image data into N prediction macroblocks in the process of performing encoding and decoding processing on the image data, the N prediction macroblocks corresponding to N prediction modes, N being an integer greater than or equal to 2, the method comprising:
obtaining N first predicted values corresponding to the N prediction modes;
according to a first condition, processing a first mean and a first variance of each first predicted value in the N first predicted values to obtain N first means and N first variances, wherein the first condition is: μ_N = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} f_N(x, y) and σ_N = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} (f_N(x, y) − μ_N)², wherein μ_N is the mean of the predicted values in the Nth prediction mode, σ_N is the variance of the predicted values in the Nth prediction mode, H is the height of the current prediction macroblock, W is the width of the current prediction macroblock, and f_N(x, y) is the prediction function of the Nth prediction mode at coordinate position (x, y);
according to a second condition, processing the N first means and the N first variances to obtain a second variance σ_μ of the N first means and a second variance σ_var of the N first variances, wherein the second condition is: σ_μ = (1/N) · Σ_{i=1..N} (μ_i − μ̄)² and σ_var = (1/N) · Σ_{i=1..N} (σ_i − σ̄)², wherein N denotes the number of all prediction modes, μ̄ and σ̄ denote the means of the μ_i and σ_i respectively, and σ_μ and σ_var respectively denote the variances of μ_N and σ_N;
judging whether second variances of the N first means and second variances of the N first variances are smaller than a first preset threshold value or not, and generating a first judgment result;
when the first judgment result represents that the second variances of the N first mean values and the second variances of the N first variances are smaller than the first preset threshold value, performing gray prediction model processing on the N first predicted values according to the N first predicted values to obtain second predicted values;
and taking the second predicted value as a final predicted value of each of the N prediction modes.
An image processing apparatus capable of performing encoding and decoding processing on image data, the image processing apparatus dividing the image data into N prediction macroblocks in a process of performing encoding and decoding processing on the image data, the N prediction macroblocks corresponding to N prediction modes, N being an integer equal to or greater than 2, the apparatus comprising:
a first obtaining unit, configured to obtain N first prediction values corresponding to the N prediction modes;
a second obtaining unit, configured to process a first mean and a first variance of each of the N first predicted values according to a first condition to obtain N first means and N first variances, wherein the first condition is: μ_N = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} f_N(x, y) and σ_N = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} (f_N(x, y) − μ_N)², wherein μ_N is the mean of the predicted values in the Nth prediction mode, σ_N is the variance of the predicted values in the Nth prediction mode, H is the height of the current prediction macroblock, W is the width of the current prediction macroblock, and f_N(x, y) is the prediction function of the Nth prediction mode at coordinate position (x, y);
a third obtaining unit, configured to process the N first means and the N first variances according to a second condition to obtain a second variance σ_μ of the N first means and a second variance σ_var of the N first variances, wherein the second condition is: σ_μ = (1/N) · Σ_{i=1..N} (μ_i − μ̄)² and σ_var = (1/N) · Σ_{i=1..N} (σ_i − σ̄)², wherein N denotes the number of all prediction modes, μ̄ and σ̄ denote the means of the μ_i and σ_i respectively, and σ_μ and σ_var respectively denote the variances of μ_N and σ_N;
a determining unit, configured to determine whether a second variance of the N first means and a second variance of the N first variances are smaller than a first preset threshold, and generate a first determination result;
the processing unit is configured to, when the first determination result represents that second variances of the N first mean values and second variances of the N first variances are smaller than the first preset threshold, perform gray prediction model processing on the N first predicted values according to the N first predicted values to obtain second predicted values;
a determining unit configured to use the second prediction value as a final prediction value of each of the N prediction modes.
In the embodiment of the present invention, N first predicted values corresponding to the N prediction modes are obtained; whether first data and second data derived from the N first predicted values are smaller than a first preset threshold is judged, and a first judgment result is generated. When the first judgment result indicates that the first data and the second data are smaller than the first preset threshold, gray prediction model processing is performed on the N first predicted values to obtain a second predicted value, and the second predicted value is taken as the final predicted value of each of the N prediction modes. This solves the technical problem in the prior art that performing a rate distortion calculation for every prediction mode during image data processing makes intra-frame prediction highly complex. Moreover, processing the image data through gray modeling effectively reduces the complexity of the encoding and decoding process, improves prediction precision, saves processing time of the image processing apparatus, and improves its processing efficiency.
Drawings
FIG. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an embodiment of image processing;
fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
The present invention provides an image processing method, which is applied to an image processing apparatus, wherein the image processing apparatus is capable of performing encoding and decoding processing on image data, the image processing apparatus divides the image data into N prediction macroblocks during the encoding and decoding processing of the image data, and the N prediction macroblocks correspond to N prediction modes, and the method specifically comprises: obtaining N first predicted values corresponding to the N prediction modes; judging whether first data and second data obtained through the N first predicted values are smaller than a first preset threshold value or not, and generating a first judgment result; when the first judgment result represents that the first data and the second data are smaller than the first preset threshold value, performing grey prediction model processing on the N first predicted values according to the N first predicted values to obtain a second predicted value; and taking the second predicted value as a final predicted value of each of the N prediction modes.
Specifically, the method for selecting the intra prediction mode under the H.264 standard comprises a method for selecting the intra prediction mode of a luminance portion and a method for selecting the intra prediction mode of a chrominance portion. The luminance method divides a frame image into a plurality of macroblocks, sets a difference-degree threshold, calculates the difference degree of each macroblock, compares it with the threshold, and uses prediction modes of different sizes according to the comparison result; it then calculates the SATD value of each prediction mode, obtains preferred prediction modes from the resulting values, performs rate distortion optimization on the preferred modes, and finally determines the best mode. Evidently, when determining the best mode in the prior art, every prediction mode must be evaluated for its SATD value and rate-distortion cost, making the intra prediction operation of the H.264 standard complex, so that image data is processed slowly and its processing efficiency is low.
Therefore, in the embodiment of the invention, the gray modeling processing is carried out on the predicted value, so that the processing processes of SATD value calculation and rate distortion optimization of each prediction can be avoided, the encoding complexity of the image data is reduced on the premise of not influencing the video quality, and the processing time of the image data is also saved.
The technical solutions of the present invention are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the embodiments of the present invention are merely detailed illustrations of the technical solutions and not limitations of them, and that specific technical features in the embodiments may be combined with each other provided there is no conflict.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, where the method includes:
step 101, obtaining N first prediction values corresponding to N prediction modes.
First, the method in the embodiment of the present invention is applied to an image processing apparatus, the image processing apparatus is capable of performing encoding and decoding processing on image data, and the image processing apparatus divides the image data into N prediction macroblocks during the encoding and decoding processing on the image data, and each macroblock corresponds to one prediction mode, so that the N prediction macroblocks correspond to the N prediction modes.
After the image processing apparatus obtains the N prediction modes, it obtains N first predicted values corresponding to the N prediction modes; the N first predicted values are obtained by prediction from already-coded pixels adjacent to the predicted macroblock.
After obtaining the N first prediction values, the image processing apparatus proceeds to step 102.
Step 102, determining whether the first data and the second data obtained through the N first predicted values are smaller than a first preset threshold, and generating a first determination result.
After acquiring the N first predicted values in step 101, the image processing apparatus first acquires a first mean value and a first variance of each first predicted value, and specifically calculates as follows:
substituting each of the N first predicted values into the mean operation formula μ_N = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} f_N(x, y), wherein μ_N is the mean of the predicted values in the Nth prediction mode, H is the height of the current prediction macroblock and W is its width; through this formula the image processing apparatus obtains the N first means corresponding to the N first predicted values.
After obtaining the N first means, the image processing apparatus obtains the N first variances corresponding to the N first predicted values according to the N first predicted values and the N first means, that is: σ_N = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} (f_N(x, y) − μ_N)², wherein σ_N is the variance of the predicted values in the Nth prediction mode, H is the height of the current prediction macroblock, W is its width, and f_N(x, y) is the prediction function of the Nth prediction mode at coordinate position (x, y).
The N first variances of the N first predicted values can be obtained by the variance calculation formula.
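The per-mode statistics above can be sketched in Python as follows (a minimal illustration; the assumption that one mode's predictions arrive as an H-by-W list of lists, and the function name, are not from the patent text):

```python
# Compute the "first mean" mu_N and "first variance" sigma_N of one
# prediction mode's block of predicted pixel values f_N(x, y).
def mode_mean_and_variance(block):
    """Return (mu_N, sigma_N): mean and population variance over all
    H*W positions of one prediction block."""
    h = len(block)
    w = len(block[0])
    count = h * w
    mu = sum(v for row in block for v in row) / count
    var = sum((v - mu) ** 2 for row in block for v in row) / count
    return mu, var
```

Running this over each of the N prediction modes yields the N first means and N first variances used in the next step.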
After obtaining the N first mean values and the N first variance values, the image processing apparatus obtains first data of the N first mean values and second data of the N first variance values according to the N first mean values and the N first variance values, that is, a second variance value of the N first mean values and a second variance value of the N first variance values, and the specific calculation method is as follows:
calculating, from the N first means, the variance formula σ_μ = (1/N) · Σ_{i=1..N} (μ_i − μ̄)² to obtain the second variance σ_μ of the N first means;
calculating, from the N first variances, the variance formula σ_var = (1/N) · Σ_{i=1..N} (σ_i − σ̄)² to obtain the second variance σ_var of the N first variances, wherein μ̄ and σ̄ are the means of the μ_i and σ_i over the N prediction modes.
The obtained second variances σ_μ and σ_var are then compared with a first preset threshold T. When both σ_μ and σ_var are lower than the first preset threshold T, it may be determined that the predicted values obtained by the different prediction modes are similar.
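This similarity test can be sketched as follows (illustrative names; the threshold T is a tuning parameter whose value the patent does not fix):

```python
# Second variance of the N first means and of the N first variances,
# followed by the comparison against the first preset threshold T.
def population_variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def modes_are_similar(first_means, first_variances, threshold):
    """True when sigma_mu and sigma_var are both below the threshold,
    i.e. the N prediction modes produce similar predicted values."""
    sigma_mu = population_variance(first_means)
    sigma_var = population_variance(first_variances)
    return sigma_mu < threshold and sigma_var < threshold
```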
When the first determination result represents that the first data and the second data are smaller than the first preset threshold T, the image processing apparatus will execute step 103.
And 103, performing grey prediction model processing on the N first predicted values according to the N first predicted values to obtain second predicted values.
After determining that the first data and the second data are smaller than the first preset threshold T, the image processing apparatus substitutes the N first predicted values, as input data of the gray model, into the input sequence x^(0) = (x^(0)(1), x^(0)(2), ..., x^(0)(N)); processing this input sequence yields the second sequence data.
It should be noted that in gray modeling the level ratio of the modeling sequence x must fall within a feasible range. Therefore, in the embodiment of the present invention, before acquiring the second sequence data, the image processing apparatus further checks the level ratio of the input sequence x. Specifically, the level ratio λ(k) = x^(0)(k − 1)/x^(0)(k) needs to be limited to the interval (e^(−2/(n+1)), e^(2/(n+1))). If every level ratio of the input sequence x lies within this interval, the image processing apparatus directly performs gray modeling on the N first predicted values.
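The level-ratio check can be sketched as follows (the feasible band (e^(−2/(n+1)), e^(2/(n+1))) is the standard admissible range from grey-system theory, stated here as an assumption because the patent's own formula is not legible in this translation):

```python
import math

def level_ratios(x):
    """lambda(k) = x(k-1) / x(k) for k = 2..n."""
    return [x[k - 1] / x[k] for k in range(1, len(x))]

def ratios_admissible(x):
    """True when every level ratio lies in (e^(-2/(n+1)), e^(2/(n+1)))."""
    n = len(x)
    lo, hi = math.exp(-2.0 / (n + 1)), math.exp(2.0 / (n + 1))
    return all(lo < lam < hi for lam in level_ratios(x))
```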
If one or more of the level ratios of the input sequence x fall outside this interval, the image processing apparatus adjusts the input sequence accordingly; the adjustment changes the interval into which the level ratios of the original sequence x fall. The principle of the data processing is to minimize the level-ratio deviation, i.e. to make the ratio of the difference information Δ_y(k) of the processed sequence y to the transformed data y(k) as small as possible. On this basis, the image processing apparatus applies a translation to the original input sequence, y(k) = x(k) + Q, wherein y(k) characterizes the translated sequence, x(k) characterizes the original sequence, Δ_y(k) is the difference information, and Q is a translation amount chosen so that the level ratios of y fall within the feasible interval. After the value of Q is obtained, the newly generated sequence y(k) is formed, and gray modeling, i.e. GM(1,1) modeling, is performed on the newly generated sequence.
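A minimal sketch of the translation step, under the assumption that Q is increased in fixed steps until every level ratio becomes admissible (the patent's exact rule for choosing Q is garbled in this translation, so the step size and search strategy here are illustrative):

```python
import math

def shift_until_admissible(x, step=None):
    """Translate y(k) = x(k) + Q, growing Q by `step` until all level
    ratios of y fall in the feasible band (e^(-2/(n+1)), e^(2/(n+1)))."""
    n = len(x)
    lo, hi = math.exp(-2.0 / (n + 1)), math.exp(2.0 / (n + 1))
    step = step if step is not None else max(abs(v) for v in x) or 1.0

    def admissible(y):
        return all(lo < y[k - 1] / y[k] < hi for k in range(1, n))

    q = 0.0
    y = list(x)
    while not admissible(y):
        q += step
        y = [v + q for v in x]
    return y, q
```

Adding a large enough constant drives every level ratio toward 1, so the loop terminates for any positive step size.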
The following specifically describes the GM(1,1) gray modeling process for an original sequence that satisfies the level-ratio condition.
Inputting the N first predicted values, as input data of the gray prediction model, into the input sequence x^(0) = (x^(0)(1), x^(0)(2), ..., x^(0)(N)), and acquiring second sequence data corresponding to the N first predicted values, wherein x^(0)(1) is the first original predicted value, x^(0)(2) is the second original predicted value, and x^(0)(N) is the Nth original predicted value;
obtaining an accumulated generating sequence corresponding to the input sequence according to the second sequence data: x^(1) = (x^(1)(1), x^(1)(2), ..., x^(1)(n)) with x^(1)(k) = Σ_{i=1..k} x^(0)(i), wherein x^(0) is the original input sequence, x^(1)(1) is the first sequence value, x^(1)(2) is the second sequence value, and x^(1)(n) is the nth sequence value;
acquiring, according to the accumulated generating sequence, the mean generating sequence corresponding to it: z^(1) = MEAN x^(1) = (z^(1)(1), z^(1)(2), ..., z^(1)(n)), obtained from z^(1)(k) = 0.5·x^(1)(k) + 0.5·x^(1)(k − 1), wherein z^(1) is the mean generating sequence, z^(1)(1) is its first sequence value, z^(1)(2) its second sequence value, and z^(1)(n) its nth sequence value;
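The accumulation and mean-generation steps above can be sketched as (function names are illustrative):

```python
def accumulate(x0):
    """1-AGO: x1(k) = x0(1) + x0(2) + ... + x0(k)."""
    x1, total = [], 0.0
    for v in x0:
        total += v
        x1.append(total)
    return x1

def mean_generate(x1):
    """z1(k) = 0.5*x1(k) + 0.5*x1(k-1), defined for k = 2..n."""
    return [0.5 * x1[k] + 0.5 * x1[k - 1] for k in range(1, len(x1))]
```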
acquiring intermediate parameters according to the input sequence and the mean value generation sequence, and acquiring a development coefficient a and a gray input amount b according to the input sequence, the mean value generation sequence and the intermediate parameters;
obtaining the gray prediction model x^(0)(k) + a·z^(1)(k) = b according to the input sequence, the mean generating sequence, the development coefficient and the gray input;
performing the gray prediction model processing on the N first predicted values according to the gray prediction model x^(0)(k) + a·z^(1)(k) = b to obtain the second predicted value.
In order to obtain the development coefficient a and the gray input b, an intermediate parameter must first be obtained, specifically as follows:
obtaining a first intermediate parameter from the mean generating sequence: C = Σ_{k=2..n} z^(1)(k);
obtaining a second intermediate parameter from the input sequence: D = Σ_{k=2..n} x^(0)(k);
obtaining a third intermediate parameter from the mean generating sequence and the input sequence: E = Σ_{k=2..n} z^(1)(k)·x^(0)(k);
obtaining a fourth intermediate parameter from the mean generating sequence: F = Σ_{k=2..n} (z^(1)(k))².
The development coefficient a = (C·D − (n − 1)·E) / ((n − 1)·F − C²) and the gray input b = (D·F − C·E) / ((n − 1)·F − C²) are obtained from the first intermediate parameter C, the second intermediate parameter D, the third intermediate parameter E and the fourth intermediate parameter F. After a and b are obtained, the expression of GM(1,1), x^(0)(k) + a·z^(1)(k) = b, is obtained; this expression is only a transition expression, from which the time-response expression x̂^(1)(k + 1) = (x^(0)(1) − b/a)·e^(−ak) + b/a is derived, wherein x̂^(1) characterizes the temporary predicted value.
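The whole GM(1,1) estimation and prediction can be sketched end to end as follows (the least-squares formulas for a and b are the standard simplified GM(1,1) form, assumed here because the patent's formula images are not legible; the small-|a| fallback mirrors the second-threshold test described below):

```python
import math

def gm11(x0, steps=1, a_threshold=1e-5):
    """Fit GM(1,1) to x0 and forecast `steps` future values."""
    n = len(x0)
    # 1-AGO and mean generating sequence
    x1, total = [], 0.0
    for v in x0:
        total += v
        x1.append(total)
    z1 = [0.5 * x1[k] + 0.5 * x1[k - 1] for k in range(1, n)]
    y = x0[1:]                      # x0(k) for k = 2..n
    # Intermediate parameters (sums over k = 2..n)
    C = sum(z1)
    D = sum(y)
    E = sum(z * v for z, v in zip(z1, y))
    F = sum(z * z for z in z1)
    m = n - 1
    denom = m * F - C * C
    a = (C * D - m * E) / denom     # development coefficient
    b = (D * F - C * E) / denom     # gray input
    if abs(a) < a_threshold:
        # Sequence values are similar: use the first value directly.
        return a, b, [x0[0]] * steps
    def x1_hat(k):                  # x1_hat(k) approximates x^(1)(k+1)
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    preds = [x1_hat(n - 1 + s) - x1_hat(n - 2 + s) for s in range(1, steps + 1)]
    return a, b, preds
```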
Since the development coefficient a appears as a denominator in the expression, it must first be judged, that is, whether the development coefficient a is greater than a second preset threshold, which may for example be set to 10⁻⁵, and a second judgment result is generated. When the development coefficient a is smaller than the second preset threshold, the image processing apparatus determines that the values of the input sequence x^(0) are similar, so that gray modeling need not be performed on the N first predicted values; instead, any one of the N first predicted values is directly used as the final predicted value.
If the development coefficient a is greater than the second preset threshold, the image processing apparatus obtains the second predicted value through P_final(i, j) = f_GM(1,1)(p₁, p₂, ..., p_r), wherein P_final(i, j) is the second predicted value obtained after GM(1,1) gray modeling.
And 104, taking the second predicted value as a final predicted value of each prediction mode in the N prediction modes.
In addition, fig. 2 is a flow chart of implementing image processing in the embodiment of the present invention. In fig. 2, the N first predicted values are first taken as the values of the input sequence, and the level ratio of the input sequence is judged. When the level ratio satisfies the condition, x^(1)(k) is obtained by accumulated generation; if the input sequence does not satisfy the level-ratio condition, it is adjusted, and once it satisfies the condition, x^(1)(k) is generated by accumulation. Mean generation is then performed on x^(1)(k) to obtain z^(1)(k). After z^(1)(k) is obtained, it is judged whether the development coefficient in the mean generating sequence is 0. If the development coefficient equals 0, the first value of the x^(0)(k) sequence is used as the predicted value; if the development coefficient is greater than 0, the simulated output value x̂^(0)(k + 1) is obtained through x̂^(1)(k + 1) = (x^(0)(1) − b/a)·e^(−ak) + b/a and x̂^(0)(k + 1) = x̂^(1)(k + 1) − x̂^(1)(k), and x̂^(0)(k + 1) is finally taken as the final prediction result.
Corresponding to an image processing method in an embodiment of the present invention, an embodiment of the present invention further provides an image processing apparatus, as shown in fig. 3, which is a schematic structural diagram of an image processing apparatus in an embodiment of the present invention, where the image processing apparatus includes:
an obtaining unit 301, configured to obtain N first prediction values corresponding to the N prediction modes;
a determining unit 302, configured to determine whether each of N-1 first difference values between any two of the N first predicted values is smaller than a first preset threshold, and generate N-1 first determination results;
a processing unit 303, configured to, when the N-1 first determination results indicate that the N-1 first difference values are smaller than the first preset threshold, perform gray prediction model processing on the N first predicted values according to the N first predicted values to obtain second predicted values;
a determining unit 304, configured to use the second prediction value as a final prediction value of each of the N prediction modes.
Wherein, the processing unit 303 in the image processing apparatus further comprises:
a first processing module, configured to input the N first predicted values, as input data of the gray prediction model, into the input sequence x^(0) = (x^(0)(1), x^(0)(2), ..., x^(0)(N)) and acquire second sequence data corresponding to the N first predicted values, wherein x^(0)(1) is the first original predicted value, x^(0)(2) is the second original predicted value, and x^(0)(N) is the Nth original predicted value;
a second processing module, configured to obtain an accumulated generating sequence corresponding to the input sequence according to the second sequence data: x^(1) = (x^(1)(1), x^(1)(2), ..., x^(1)(n)) with x^(1)(k) = Σ_{i=1..k} x^(0)(i), wherein x^(0) is the original input sequence, x^(1)(1) is the first sequence value, x^(1)(2) is the second sequence value, and x^(1)(n) is the nth sequence value;
a third processing module, configured to acquire, according to the accumulated generating sequence, the mean generating sequence corresponding to it: z^(1) = MEAN x^(1) = (z^(1)(1), z^(1)(2), ..., z^(1)(n)), obtained from z^(1)(k) = 0.5·x^(1)(k) + 0.5·x^(1)(k − 1), wherein z^(1) is the mean generating sequence, z^(1)(1) is its first sequence value, z^(1)(2) its second sequence value, and z^(1)(n) its nth sequence value;
the fourth processing module is used for acquiring an intermediate parameter according to the input sequence and the mean value generation sequence, and acquiring a development coefficient a and a gray input amount b according to the input sequence, the mean value generation sequence and the intermediate parameter;
a fifth processing module, configured to obtain the gray prediction model x^(0)(k) + a·z^(1)(k) = b according to the input sequence, the mean generating sequence, the development coefficient and the gray input;
an obtaining module, configured to perform the gray prediction model processing on the N first predicted values according to the gray prediction model x^(0)(k) + a·z^(1)(k) = b to obtain the second predicted value.
In the embodiment of the present invention, N first predicted values corresponding to the N prediction modes are obtained; whether first data and second data derived from the N first predicted values are smaller than a first preset threshold is judged, and a first judgment result is generated. When the first judgment result indicates that the first data and the second data are smaller than the first preset threshold, gray prediction model processing is performed on the N first predicted values to obtain a second predicted value, which is taken as the final predicted value of each of the N prediction modes. This solves the technical problem in the prior art that performing a rate distortion calculation for every prediction mode during image data processing makes intra-frame prediction highly complex; furthermore, processing the image data through gray modeling effectively reduces the complexity of the encoding and decoding process, improves prediction precision, saves processing time of the image processing apparatus, and improves its processing efficiency.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (9)
1. An image processing method applied to an image processing apparatus, the image processing apparatus being capable of performing encoding and decoding processing on image data, the image processing apparatus dividing the image data into N prediction macroblocks during the encoding and decoding processing on the image data, the N prediction macroblocks corresponding to N prediction modes, N being an integer greater than or equal to 2, the method comprising:
obtaining N first predicted values corresponding to the N prediction modes;
processing a first mean and a first variance of each of the N first predicted values according to a first condition to obtain N first means and N first variances, wherein the first condition is: μ_N = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} f_N(x, y), σ_N = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} (f_N(x, y) − μ_N)², wherein μ_N is the mean of the predicted values in the Nth prediction mode, σ_N is the variance of the predicted values in the Nth prediction mode, H is the height of the current prediction macroblock, W is the width of the current prediction macroblock, and f_N(x, y) is the prediction function of the Nth prediction mode at coordinate position (x, y);
processing the N first means and the N first variances according to a second condition to obtain a second variance σ_μ of the N first means and a second variance σ_var of the N first variances, wherein the second condition is: σ_μ = (1/N) Σ_{i=1..N} (μ_i − μ̄)², σ_var = (1/N) Σ_{i=1..N} (σ_i − σ̄)², wherein N denotes the number of all prediction modes, μ̄ and σ̄ denote the means of the μ_i and σ_i over the N modes, and σ_μ and σ_var denote the variances of μ_N and σ_N, respectively;
judging whether the second variance of the N first means and the second variance of the N first variances are smaller than a first preset threshold value, and generating a first judgment result;
when the first judgment result indicates that the second variance of the N first means and the second variance of the N first variances are smaller than the first preset threshold value, performing gray prediction model processing on the N first predicted values to obtain a second predicted value;
and taking the second predicted value as a final predicted value of each of the N prediction modes.
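The statistics test of claim 1 can be sketched as follows. This is a minimal illustration of the idea, not the patent's implementation; the function names and block layout (each predicted macroblock as an H×W list of rows) are hypothetical.

```python
from statistics import fmean, pvariance

def mode_statistics(pred_blocks):
    """Per-mode mean (mu_N) and variance (sigma_N) over each H x W block."""
    means, variances = [], []
    for block in pred_blocks:
        pixels = [p for row in block for p in row]  # flatten the H x W block
        means.append(fmean(pixels))
        variances.append(pvariance(pixels))
    return means, variances

def modes_are_similar(pred_blocks, threshold):
    """Claim 1's first judgment: the 'second variances' (variance of the N
    means and variance of the N variances) must both fall below the threshold."""
    means, variances = mode_statistics(pred_blocks)
    return pvariance(means) < threshold and pvariance(variances) < threshold
```

When the per-mode statistics barely differ, spending a rate-distortion calculation on every mode buys little, which is why the method switches to the cheaper gray-model path in that case.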
2. The method according to claim 1, wherein the performing gray prediction model processing on the N first predicted values according to the N first predicted values to obtain second predicted values specifically includes:
inputting the N first predicted values, as input data of the gray prediction model, into an input sequence x^(0) = (x^(0)(1), x^(0)(2), …, x^(0)(N)), and acquiring second sequence data corresponding to the N first predicted values, wherein x^(0)(1) is the first original predicted value, x^(0)(2) is the second original predicted value, and x^(0)(N) is the Nth original predicted value;
obtaining, according to the second sequence data, an accumulation generating sequence corresponding to the input sequence: x^(1) = (x^(1)(1), x^(1)(2), …, x^(1)(n)), wherein x^(1)(k) = Σ_{i=1..k} x^(0)(i), x^(0) is the original input sequence, x^(1)(1) is the first sequence value, x^(1)(2) is the second sequence value, and x^(1)(n) is the nth sequence value;
acquiring, according to the accumulation generating sequence, a mean generation sequence corresponding to the accumulation generating sequence: z^(1) = MEAN x^(1) = (z^(1)(1), z^(1)(2), …, z^(1)(n)), and obtaining from it z^(1)(k) = 0.5x^(1)(k) + 0.5x^(1)(k−1), wherein z^(1) is the mean generation sequence, z^(1)(1) is the first sequence value of the mean generation sequence, z^(1)(2) is the second sequence value of the mean generation sequence, and z^(1)(n) is the nth sequence value of the mean generation sequence;
acquiring intermediate parameters according to the input sequence and the mean value generation sequence, and acquiring a development coefficient a and a gray input amount b according to the input sequence, the mean value generation sequence and the intermediate parameters;
obtaining the gray prediction model x^(0)(k) + a·z^(1)(k) = b according to the input sequence, the mean generation sequence, the development coefficient, and the gray input amount;
performing the gray prediction model processing on the N first predicted values according to the gray prediction model x^(0)(k) + a·z^(1)(k) = b to obtain the second predicted value.
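The first two transforms of claim 2 — the accumulation generating sequence x^(1)(k) = Σ_{i=1..k} x^(0)(i) and the mean generation sequence z^(1)(k) = 0.5x^(1)(k) + 0.5x^(1)(k−1) — can be sketched directly; the function names here are illustrative, not from the patent.

```python
def ago(x0):
    """1-AGO: accumulation generating sequence, x1(k) = x0(1) + ... + x0(k)."""
    x1, total = [], 0.0
    for v in x0:
        total += v
        x1.append(total)
    return x1

def mean_sequence(x1):
    """Mean generation sequence z1(k) = 0.5*x1(k) + 0.5*x1(k-1), k = 2..n."""
    return [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x1))]
```

The accumulation step smooths the raw predicted values into a near-monotone sequence, which is what makes the single-exponential gray model applicable.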
3. The method of claim 2, wherein the inputting the N first predicted values, as input data of the gray prediction model, into the input sequence x^(0) = (x^(0)(1), x^(0)(2), …, x^(0)(N)) and acquiring the second sequence data corresponding to the N first predicted values specifically includes:
detecting whether the level ratio λ(k) = x^(0)(k−1)/x^(0)(k) of the gray modeling sequence in the input sequence falls within a first preset range (e^(−2/(n+1)), e^(2/(n+1))), and generating a first detection result, wherein n represents the total number of predicted values;
when the level ratio is within the first preset range, inputting the N first predicted values, as input data of the gray prediction model, into the input sequence x^(0) = (x^(0)(1), x^(0)(2), …, x^(0)(N)), and acquiring the second sequence data corresponding to the N first predicted values.
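The admissibility test of claim 3 might look like the following sketch, using the level-ratio band (e^(−2/(n+1)), e^(2/(n+1))) that is conventional for GM(1,1) modeling; the helper names are hypothetical.

```python
import math

def level_ratios(x0):
    """Level ratios lambda(k) = x0(k-1) / x0(k), for k = 2..n."""
    return [x0[k - 1] / x0[k] for k in range(1, len(x0))]

def ratios_admissible(x0):
    """True when every level ratio lies in the admissible band for GM(1,1)."""
    n = len(x0)
    lo, hi = math.exp(-2.0 / (n + 1)), math.exp(2.0 / (n + 1))
    return all(lo < r < hi for r in level_ratios(x0))
```

A sequence that fails this check is not directly suitable for gray modeling, which is why claim 4 introduces a translation (shift) of the input sequence before retrying.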
4. The method of claim 3, wherein when the first detection result indicates that the level ratio is not within the first preset range, the method further comprises:
performing translation processing on the input sequence to obtain a translation result, wherein y(k) characterizes the sequence after translation, x(k) characterizes the original sequence, and Δ_y(k) is difference information obtained against a predetermined threshold value;
obtaining a Q value according to the translation result;
obtaining a second sequence according to the translation result;
and performing gray prediction model processing on the N first predicted values according to the second sequence to obtain the second predicted value.
5. The method according to claim 2, wherein the obtaining intermediate parameters according to the input sequence and the mean generation sequence, and obtaining a coefficient of development and a gray input amount according to the input sequence, the mean generation sequence, and the intermediate parameters specifically comprises:
obtaining a first intermediate parameter C = Σ_{k=2..n} z^(1)(k) according to the mean generation sequence;
obtaining a second intermediate parameter D = Σ_{k=2..n} x^(0)(k) according to the input sequence;
obtaining a third intermediate parameter E = Σ_{k=2..n} z^(1)(k)·x^(0)(k) according to the mean generation sequence and the input sequence;
obtaining a fourth intermediate parameter F = Σ_{k=2..n} [z^(1)(k)]² according to the mean generation sequence;
obtaining the development coefficient a = (CD − (n−1)E) / ((n−1)F − C²) and the gray input amount b = (DF − CE) / ((n−1)F − C²) according to the first intermediate parameter C, the second intermediate parameter D, the third intermediate parameter E, and the fourth intermediate parameter F.
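The intermediate parameters C, D, E, F and the least-squares solution for a and b in claim 5 can be sketched as follows. The patent's exact sums are rendered as images in the source, so this follows the conventional GM(1,1) least-squares form; the function name is hypothetical.

```python
def gm11_coefficients(x0):
    """Development coefficient a and gray input amount b via the intermediate
    sums C, D, E, F (conventional GM(1,1) least-squares solution)."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # 1-AGO
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # z1(2)..z1(n)
    y = x0[1:]                                             # x0(2)..x0(n)
    m = n - 1
    C, D = sum(z1), sum(y)
    E = sum(z * v for z, v in zip(z1, y))
    F = sum(z * z for z in z1)
    denom = m * F - C * C
    a = (C * D - m * E) / denom
    b = (D * F - C * E) / denom
    return a, b
```

For a constant sequence the fit is exact: the development coefficient a is zero and b equals the constant, which is a quick sanity check on the algebra.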
6. The method of claim 2, wherein the performing, according to the gray prediction model x^(0)(k) + a·z^(1)(k) = b, the gray prediction model processing on the N first predicted values to obtain the second predicted value includes:
obtaining a predicted value processing condition according to the gray prediction model x^(0)(k) + a·z^(1)(k) = b: x̂^(0)(k) = b − a·z^(1)(k), wherein x̂^(0)(k) characterizes the temporary predicted value;
judging whether the development coefficient a is larger than a second preset threshold value, and generating a second judgment result;
when the second judgment result indicates that the development coefficient a is greater than the second preset threshold value, obtaining the second predicted value according to a prediction model P_final(i, j) = f_GM(1,1)(p_1, p_2, …, p_r), wherein P_final(i, j) is the second predicted value.
7. The method according to claim 6, wherein when the second judgment result indicates that the development coefficient a is less than the second preset threshold value, the method further comprises:
and taking any one of the N first predicted values as the second predicted value, and taking the second predicted value as the final predicted value.
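Claims 6 and 7 together describe a decision: use the gray-model temporary value x̂^(0)(k) = b − a·z^(1)(k) only when the development coefficient a clears the second threshold, and otherwise fall back to one of the first predicted values. A minimal sketch of that decision, with hypothetical argument names and the fitted a, b assumed given:

```python
def finalize_prediction(a, b, z1_k, first_preds, a_threshold):
    """Return the gray-model temporary value when the development coefficient
    exceeds the threshold; otherwise reuse a first predicted value (claim 7)."""
    if a > a_threshold:
        return b - a * z1_k   # xhat(k) = b - a * z1(k)
    return first_preds[0]     # claim 7: any one of the N first values
```

The fallback keeps the method safe: when the fitted model is too flat to be informative, the encoder simply reuses a directional prediction it already computed.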
8. An image processing apparatus capable of performing encoding/decoding processing on image data, wherein the image processing apparatus divides the image data into N prediction macroblocks in a process of performing encoding/decoding processing on the image data, and the N prediction macroblocks correspond to N prediction modes, N being an integer equal to or greater than 2, the apparatus comprising:
a first obtaining unit, configured to obtain N first prediction values corresponding to the N prediction modes;
a second obtaining unit, configured to process a first mean and a first variance of each of the N first predicted values according to a first condition to obtain N first means and N first variances, wherein the first condition is: μ_N = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} f_N(x, y), σ_N = (1/(W·H)) · Σ_{x=1..W} Σ_{y=1..H} (f_N(x, y) − μ_N)², wherein μ_N is the mean of the predicted values in the Nth prediction mode, σ_N is the variance of the predicted values in the Nth prediction mode, H is the height of the current prediction macroblock, W is the width of the current prediction macroblock, and f_N(x, y) is the prediction function of the Nth prediction mode at coordinate position (x, y);
a third obtaining unit, configured to process the N first means and the N first variances according to a second condition to obtain a second variance σ_μ of the N first means and a second variance σ_var of the N first variances, wherein the second condition is: σ_μ = (1/N) Σ_{i=1..N} (μ_i − μ̄)², σ_var = (1/N) Σ_{i=1..N} (σ_i − σ̄)², wherein N denotes the number of all prediction modes, μ̄ and σ̄ denote the means of the μ_i and σ_i over the N modes, and σ_μ and σ_var denote the variances of μ_N and σ_N, respectively;
a judging unit, configured to judge whether the second variance of the N first means and the second variance of the N first variances are smaller than a first preset threshold value, and to generate a first judgment result;
a processing unit, configured to, when the first judgment result indicates that the second variance of the N first means and the second variance of the N first variances are smaller than the first preset threshold value, perform gray prediction model processing on the N first predicted values to obtain a second predicted value;
a determining unit, configured to use the second predicted value as the final predicted value of each of the N prediction modes.
9. The apparatus of claim 8, wherein the processing unit comprises:
a first processing module, configured to input the N first predicted values, as input data of the gray prediction model, into an input sequence x^(0) = (x^(0)(1), x^(0)(2), …, x^(0)(N)), and to acquire second sequence data corresponding to the N first predicted values, wherein x^(0)(1) is the first original predicted value, x^(0)(2) is the second original predicted value, and x^(0)(N) is the Nth original predicted value;
a second processing module, configured to obtain, according to the second sequence data, an accumulation generating sequence corresponding to the input sequence: x^(1) = (x^(1)(1), x^(1)(2), …, x^(1)(n)), wherein x^(1)(k) = Σ_{i=1..k} x^(0)(i), x^(0) is the original input sequence, x^(1)(1) is the first sequence value, x^(1)(2) is the second sequence value, and x^(1)(n) is the nth sequence value;
a third processing module, configured to acquire, according to the accumulation generating sequence, a mean generation sequence corresponding to the accumulation generating sequence: z^(1) = MEAN x^(1) = (z^(1)(1), z^(1)(2), …, z^(1)(n)), and to obtain z^(1)(k) = 0.5x^(1)(k) + 0.5x^(1)(k−1), wherein z^(1) is the mean generation sequence, z^(1)(1) is the first sequence value of the mean generation sequence, z^(1)(2) is the second sequence value of the mean generation sequence, and z^(1)(n) is the nth sequence value of the mean generation sequence;
a fourth processing module, configured to acquire intermediate parameters according to the input sequence and the mean generation sequence, and to acquire a development coefficient a and a gray input amount b according to the input sequence, the mean generation sequence, and the intermediate parameters;
a fifth processing module, configured to obtain the gray prediction model x^(0)(k) + a·z^(1)(k) = b according to the input sequence, the mean generation sequence, the development coefficient, and the gray input amount;
an obtaining module, configured to perform the gray prediction model processing on the N first predicted values according to the gray prediction model x^(0)(k) + a·z^(1)(k) = b to obtain the second predicted value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310095634.5A CN103248889B (en) | 2013-03-22 | 2013-03-22 | A kind of image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103248889A CN103248889A (en) | 2013-08-14 |
CN103248889B true CN103248889B (en) | 2016-05-25 |
Family
ID=48928078
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310095634.5A Active CN103248889B (en) | 2013-03-22 | 2013-03-22 | A kind of image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103248889B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102299573B1 * | 2014-10-22 | 2021-09-07 | Samsung Electronics Co., Ltd. | Application processor for performing real time in-loop filtering, method thereof, and system including the same |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0888013A2 (en) * | 1997-06-20 | 1998-12-30 | Matsushita Electric Industrial Co., Ltd. | Image processing method, image processing apparatus, and data recording medium |
CN101534441A * | 2009-04-24 | 2009-09-16 | Xidian University | AVS video watermarking method based on gray theory and uniform spectrum theory |
CN102542296A * | 2012-01-10 | 2012-07-04 | Harbin Institute of Technology | Method for extracting image characteristics by multivariate gray model-based bi-dimensional empirical mode decomposition |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3119994B2 (en) * | 1994-04-28 | 2000-12-25 | 株式会社グラフィックス・コミュニケーション・ラボラトリーズ | Image data processing method, storage device used therefor, and image data processing device |
2013-03-22: Application CN201310095634.5A filed in China; granted as patent CN103248889B, status Active.
Non-Patent Citations (1)
Title |
---|
Image coding technology based on gray prediction; Li Yang; Fan Yangyu; Wang Haijun; Wei Wei; Computer Engineering and Applications; 2007-06-30; Vol. 43, No. 12; full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11736701B2 (en) | Hash-based encoder decisions for video coding | |
EP3120556B1 (en) | Encoder-side decisions for screen content encoding | |
JP5133290B2 (en) | Video encoding apparatus and decoding apparatus | |
JP2012170042A5 (en) | ||
CN101888546A (en) | Motion estimation method and device | |
JP2015050661A (en) | Encoding apparatus, control method for encoding apparatus, and computer program | |
CN107105240B (en) | HEVC-SCC complexity control method and system | |
US20050089232A1 (en) | Method of video compression that accommodates scene changes | |
JP2008271127A (en) | Coding apparatus | |
WO2016116984A1 (en) | Moving image encoding device, moving image encoding method, and moving image encoding program | |
JP2008153907A (en) | Image encoding apparatus, information terminal including the same, and image encoding method | |
CN103248889B (en) | A kind of image processing method and device | |
US20150208082A1 (en) | Video encoder with reference picture prediction and methods for use therewith | |
JP5832263B2 (en) | Image coding apparatus and image coding method | |
TWI493942B (en) | Moving picture coding method, moving picture coding apparatus, and moving picture coding program | |
JP2004072732A (en) | Coding apparatus, computer readable program, and coding method | |
CN104995917A (en) | Self-adaption motion estimation method and module thereof | |
JPH10224779A (en) | Method and device for detecting scene change of moving image | |
JP6239838B2 (en) | Moving picture encoding apparatus, control method thereof, and imaging apparatus | |
JP2009118097A (en) | Image encoder, its control method, and computer program | |
JP2008072608A (en) | Apparatus and method for encoding image | |
KR101021538B1 (en) | Fast Intra Mode Decision Method in H.264 Encoding | |
KR101311143B1 (en) | Encoding device and method for high speed processing image | |
JP5521859B2 (en) | Moving picture coding apparatus and method | |
JP2015019319A (en) | Encoding apparatus, encoding method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C14 | Grant of patent or utility model | |
| GR01 | Patent grant | |