
CN104144347B - A kind of H.264/AVC video I frame error recovery methods based on hiding reversible data - Google Patents

A kind of H.264/AVC video I frame error recovery methods based on hiding reversible data Download PDF

Info

Publication number
CN104144347B
CN104144347B (application CN201410287578.XA)
Authority
CN
China
Prior art keywords
block
macro block
current
vector
current macro
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410287578.XA
Other languages
Chinese (zh)
Other versions
CN104144347A (en)
Inventor
王让定
李然然
徐达文
李倩
李伟
王家骥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201410287578.XA priority Critical patent/CN104144347B/en
Publication of CN104144347A publication Critical patent/CN104144347A/en
Application granted granted Critical
Publication of CN104144347B publication Critical patent/CN104144347B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a reversible-data-hiding-based H.264/AVC video I frame error recovery method. At the encoding end, the feature vector and the host vector of each macroblock are first determined, and the feature vector of the previous macroblock is then embedded into the host vector of the current macroblock. At the decoding end, the feature information is extracted from the macroblocks of the I frame in which it was embedded, the coding mode and luma prediction mode of each macroblock are determined, and error recovery is then performed on incorrectly decoded blocks. The advantages are that, at the encoding end, the extracted feature information of a macroblock is determined either by the luma prediction modes of the macroblock's sub-blocks or by the coding mode and luma prediction mode of the macroblock itself, so that macroblocks lost during a scene change can be recovered effectively; and that, when the feature vector is embedded into the host vector, a generalized difference expansion method is applied according to the coding mode of the previous macroblock, which not only makes the embedding capacity controllable but also allows the original data of the I frame, as it was before the feature information was embedded, to be restored after extraction.

Description

Reversible data hiding-based H.264/AVC video I frame error recovery method
Technical Field
The invention relates to an error recovery and covert communication technology in a video network transmission process, in particular to an H.264/AVC video I frame error recovery method based on reversible data hiding.
Background
Wireless video communication has attracted increasing attention as multimedia technology has matured and user demand has grown. However, the transmission channels of wireless communication networks and the Internet are unreliable. When an H.264/AVC video bitstream is transmitted over a communication channel, especially a narrow-band channel with strong noise interference or a channel prone to packet loss (such as the Internet), channel interference, network congestion, routing delay and similar problems can cause random bit errors, burst errors and other error phenomena.
The highly efficient H.264/AVC video coding standard depends strongly on the integrity of the video bitstream, and packet loss or bit errors severely degrade video quality. There are two reasons. On the one hand, H.264/AVC adopts variable length coding (VLC) to improve coding efficiency, but because VLC codewords have different lengths, an error in some bits of a codeword prevents the decoder from correctly skipping the erroneous codeword; the VLC codewords then lose synchronization, and the subsequent bitstream cannot be decoded correctly. On the other hand, H.264/AVC employs prediction techniques such as intra prediction, motion estimation and motion vector prediction, so later data is encoded with reference to earlier data; if some portion of the data is erroneous, not only can it not be decoded correctly itself, but the data that follows is also affected, and the effect propagates into subsequent frames. These problems lead directly to erroneous decoding of the video information, causing the reconstruction quality of the video signal to degrade drastically and, in severe cases, the entire video communication to fail completely. How to reduce or eliminate the effect of transmission errors is therefore an important research direction in low-bit-rate video communication.
Several error recovery methods for H.264/AVC video have already appeared: error recovery at the decoding end using the correlation of video content; interactive error recovery that establishes a feedback channel between the encoder and decoder; and, using information hiding, extracting feature information suitable for error recovery at the encoder and transmitting it to the decoder hidden in the bitstream. Because the information available at the encoder is richer and more accurate than at the decoder, error recovery based on information hiding is a promising new approach. Lin et al. use a reversible data hiding method based on difference expansion to embed the feature information, but because the feature information consists of pixel values, the number of embedded bits is very large; multi-layer differences are therefore required and the computation is relatively complicated. Chen et al. propose an effective error recovery method that uses the motion vector (MV) of each macroblock in the I frame as the key feature data and embeds the motion vectors into other macroblocks of the same frame in a cyclic manner; the embedding uses parity embedding, and because the original quantized DCT (Discrete Cosine Transform) coefficients of the video carrier are permanently changed once information has been embedded, the reconstructed video quality suffers even after the information is extracted. Chung et al., building on this, use a reversible information hiding method based on histogram shifting to embed the motion vector of a macroblock into quantized DCT coefficients whose value is zero; however, this method must modify many quantized DCT coefficients, which not only harms the imperceptibility of the video but also increases the transmission rate.
Therefore, the method for performing error recovery based on the information hiding technology still has room for improvement in the aspects of extracting effective features, selecting a proper embedding method and embedding position, improving video reconstruction quality and the like.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an H.264/AVC video I frame error recovery method based on reversible data hiding, in which the extracted feature information can efficiently recover macroblocks lost from the I frame even in the presence of scene changes, improving the quality of I frame error recovery, and in which, after the feature information has been extracted, the original data of the I frame macroblocks as it was before embedding can be restored.
The technical scheme adopted by the invention to solve this problem is an H.264/AVC video I frame error recovery method based on reversible data hiding, characterized by the following steps:
① At the encoding end, embed feature information into every macroblock of the I frame except the 1st macroblock, obtaining an I frame with embedded feature information. The specific process is:
①-1. Define the k-th precoded macroblock in the I frame as the current macroblock, where 1 ≤ k ≤ K, K denotes the total number of macroblocks contained in the I frame, and the initial value of k is 1;
①-2. Convert the decimal value of the CBP (coded block pattern) of the current macroblock into a binary string of 6 bits, denoted A, A = a1a2a3a4a5a6, where a1 is the most significant bit of A and a6 is the least significant bit of A;
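A minimal sketch of step ①-2 as two helper functions (the function names are ours, not the patent's), converting a CBP value, which occupies 6 bits in H.264/AVC, to and from the bit string A = a1…a6:

```python
def cbp_to_bits(cbp: int) -> list[int]:
    """Return [a1, ..., a6] for a CBP value in 0..63; a1 is the MSB."""
    assert 0 <= cbp <= 63
    return [(cbp >> (5 - i)) & 1 for i in range(6)]

def bits_to_cbp(bits: list[int]) -> int:
    """Inverse of cbp_to_bits: fold the 6 bits back into a decimal CBP."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value
```

For example, `cbp_to_bits(45)` yields `[1, 0, 1, 1, 0, 1]`, and `bits_to_cbp` maps it back to 45.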
①-3. Extract the feature vector of the current macroblock. If the coding mode of the current macroblock is Intra4×4, extract the digital identifiers of the luma prediction modes of the four 4×4 blocks numbered 0, 4, 8 and 12 in the current macroblock, quantize each identifier to 4 bits, and arrange the bit representations in the order of the block numbers within the current macroblock to form a one-dimensional vector of 16 elements, taken as the feature vector of the current macroblock and denoted w_Intra4×4(k) = (w1 w2 … w16), where w1, w2 and w16 denote the 1st, 2nd and 16th elements of w_Intra4×4(k);
If the coding mode of the current macroblock is Intra16×16, mark the coding mode of the current macroblock with the digital identifier 9, quantize this identifier to 4 bits, quantize the digital identifier of the luma prediction mode of the current macroblock to 4 bits, and concatenate the two bit representations in that order to form a one-dimensional vector of 8 elements, taken as the feature vector of the current macroblock and denoted w_Intra16×16(k) = (w1 w2 … w8), where w1, w2 and w8 denote the 1st, 2nd and 8th elements of w_Intra16×16(k);
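The packing in step ①-3 can be sketched as follows (helper names are ours; the nine Intra4×4 luma modes carry identifiers 0..8, and the value 9 is the marker the patent assigns to Intra16×16 coding):

```python
def mode_to_bits(mode_id: int) -> list[int]:
    """Quantize a digital identifier (0..15) to 4 bits, MSB first."""
    assert 0 <= mode_id <= 15
    return [(mode_id >> (3 - i)) & 1 for i in range(4)]

def feature_vector_intra4x4(modes_0_4_8_12: list[int]) -> list[int]:
    """16-element feature vector: luma modes of 4x4 blocks 0, 4, 8, 12."""
    bits: list[int] = []
    for m in modes_0_4_8_12:
        bits += mode_to_bits(m)
    return bits

def feature_vector_intra16x16(luma_mode: int) -> list[int]:
    """8-element feature vector: identifier 9 followed by the luma mode."""
    return mode_to_bits(9) + mode_to_bits(luma_mode)
```

So an Intra16×16 macroblock with luma mode 3 yields the 8 bits 1001 0011, matching the two 4-bit groups described above.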
①-4. Determine the host vector of the current macroblock. Compute, for each 4×4 block in the current macroblock, the sum of the absolute values of all its AC DCT coefficients, and take the 4×4 block with the largest sum as the feature information embedding block. Scan all quantized DCT coefficients of the feature information embedding block in zig-zag order, and arrange its 8th to 16th quantized DCT coefficients in that scanning order to form a one-dimensional vector of 9 elements, taken as the host vector of the current macroblock and denoted x(k) = (x1 x2 … x9), where x1, x2 and x9 denote the 1st, 2nd and 9th elements of x(k);
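A sketch of step ①-4 under our reading (helper names are ours): select the 4×4 block with the largest absolute AC sum, then take its 8th..16th coefficients in the standard H.264 4×4 zig-zag order as the host vector.

```python
# Standard 4x4 zig-zag scan order as (row, col) pairs; position 0 is the DC.
ZIGZAG_4x4 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
              (2, 1), (3, 0), (3, 1), (2, 2), (1, 3), (2, 3), (3, 2), (3, 3)]

def ac_abs_sum(block: list[list[int]]) -> int:
    """Sum of |coefficient| over the 15 AC positions of one 4x4 block."""
    return sum(abs(block[r][c]) for r, c in ZIGZAG_4x4[1:])

def host_vector(blocks: list[list[list[int]]]) -> tuple[int, list[int]]:
    """blocks: the 16 4x4 coefficient arrays of one macroblock.
    Returns (index of the embedding block, 9-element host vector x(k))."""
    idx = max(range(len(blocks)), key=lambda i: ac_abs_sum(blocks[i]))
    scan = [blocks[idx][r][c] for r, c in ZIGZAG_4x4]
    return idx, scan[7:16]   # the 8th..16th coefficients, 1-based
```

The last 9 zig-zag positions are chosen because the higher-frequency quantized coefficients are typically small, limiting the distortion introduced by embedding.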
①-5. Embed the feature vector of the previous macroblock into the host vector of the current macroblock. First judge whether the current macroblock is the 1st macroblock in the I frame; if so, leave it unprocessed and go directly to step ①-7. Otherwise, if the coding mode of the previous macroblock is Intra4×4, embed its feature vector w_Intra4×4(k-1) into the host vector x(k) of the current macroblock using the generalized difference expansion method applied twice, obtaining the vector x̃(k) embedded with the feature information corresponding to the current macroblock; then replace the 8th to 16th quantized DCT coefficients of the feature information embedding block of the current macroblock, in order, with the elements of x̃(k), and go to step ①-6. If the coding mode of the previous macroblock is Intra16×16, embed its feature vector w_Intra16×16(k-1) into x(k) using the generalized difference expansion method applied once, obtaining the vector x̃(k); then replace the 8th to 16th quantized DCT coefficients of the feature information embedding block of the current macroblock, in order, with the elements of x̃(k), and go to step ①-6;
①-6. Modify the CBP of the current macroblock, then go to step ①-7;
①-7. Entropy-code the current macroblock;
①-8. Let k = k + 1 (where "=" is the assignment operator), take the next precoded macroblock in the I frame as the current macroblock, and return to step ①-2, continuing until all precoded macroblocks in the I frame have been processed; this yields the coded bitstream of the I frame with embedded feature information.
at a decoding end, extracting characteristic information from each macro block in an I frame embedded with the characteristic information, determining a coding mode and a brightness prediction mode of each macro block, and then performing error recovery on an incorrect decoding block, wherein the specific process comprises the following steps:
②-1. Entropy-decode each macroblock of the I frame with embedded feature information and determine whether each entropy-decoded macroblock is a correctly decoded block. For every correctly decoded block except the 1st entropy-decoded macroblock, determine its vector containing feature information, extract the feature information from that vector, and determine the coding mode and luma prediction mode of the previous macroblock. Specifically, if the k'-th entropy-decoded macroblock is a correctly decoded block, where 2 ≤ k' ≤ K, its vector containing feature information is determined as follows: compute, for each 4×4 block of the macroblock, the sum of the absolute values of all its quantized DCT coefficients that become AC DCT coefficients after inverse quantization, and take the 4×4 block with the largest sum as the feature information extraction block. Scan all quantized DCT coefficients of the feature information extraction block in zig-zag order, and arrange its 8th to 16th quantized DCT coefficients in that scanning order to form a one-dimensional vector of 9 elements, taken as the vector containing the feature information of the macroblock and denoted x̃(k') = (x̃1 x̃2 … x̃9), where x̃1, x̃2 and x̃9 denote the 1st, 2nd and 9th elements of x̃(k').
Then extract the feature information from the vector x̃(k') of the macroblock, and determine the coding mode and luma prediction mode of the previous macroblock of the macroblock;
②-2. Define the k-th macroblock in the decoded I frame as the current macroblock, where 1 ≤ k ≤ K and the initial value of k is 1;
②-3. Judge whether the current macroblock is the last macroblock in the decoded I frame; if not, go to step ②-4; if so, go to step ②-5;
②-4. Judge whether the current macroblock is a correctly or incorrectly decoded block. If it is correctly decoded, leave it unprocessed and go to step ②-6. If it is incorrectly decoded, judge whether the next macroblock of the current macroblock is correctly or incorrectly decoded: if the next macroblock is correctly decoded, recover the current macroblock using the coding mode and luma prediction mode of the current macroblock, then go to step ②-6; if the next macroblock is also incorrectly decoded, recover the current macroblock by bilinear interpolation, then go to step ②-6;
②-5. Judge whether the current macroblock is a correctly or incorrectly decoded block. If it is correctly decoded, leave it unprocessed; recovery of all incorrectly decoded blocks in the decoded I frame is then complete. If it is incorrectly decoded, recover it by bilinear interpolation; recovery of all incorrectly decoded blocks in the decoded I frame is then complete;
②-6. Let k = k + 1 (where "=" is the assignment operator), take the next macroblock to be processed in the decoded I frame as the current macroblock, and return to step ②-3 to continue.
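The bilinear interpolation fallback in steps ②-4/②-5 can be sketched as follows. This is a minimal version of the common spatial concealment scheme (as in the JM reference software): each lost pixel is a distance-weighted average of the nearest known boundary pixels above, below, left and right. The exact weighting the patent intends is not spelled out here, so this formulation is our assumption.

```python
def conceal_bilinear(top: list[int], bottom: list[int],
                     left: list[int], right: list[int], n: int = 16):
    """Recover an n x n lost block. top/bottom/left/right are the length-n
    rows/columns of boundary pixels from the four neighboring blocks.
    Each recovered pixel is weighted by the distance to the opposite edge."""
    out = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            dt, db = r + 1, n - r    # distances to top / bottom boundary
            dl, dr = c + 1, n - c    # distances to left / right boundary
            num = top[c] * db + bottom[c] * dt + left[r] * dr + right[r] * dl
            out[r][c] = round(num / (dt + db + dl + dr))
    return out
```

With constant boundaries the interpolation reproduces that constant, which is a quick sanity check of the weights.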
In step ①-5, the specific process of embedding the feature vector w_Intra4×4(k-1) of the previous macroblock into the host vector x(k) of the current macroblock by the twice-applied generalized difference expansion method is:
A1. Forward-transform x(k) to obtain the vector y(k) = (y1 y2 … y9), where y1, y2 and y9 denote the 1st, 2nd and 9th elements of y(k), y1 = ⌊(Σ_{i=1..9} α_i·x_i)/(Σ_{i=1..9} α_i)⌋, and y_{i'} = x_{i'} − x1 for 2 ≤ i' ≤ 9 (in particular y2 = x2 − x1 and y9 = x9 − x1); α_i is a weight, and ⌊·⌋ denotes rounding down;
A2. Embed the 1st to 8th elements of w_Intra4×4(k-1) into y(k) to obtain the vector ŷ(k) = (ŷ1 ŷ2 … ŷ9), where ŷ1, ŷ2 and ŷ9 denote the 1st, 2nd and 9th elements of ŷ(k), ŷ1 = y1, and ŷ_{i'} = 2·y_{i'} + w_{i'−1} for 2 ≤ i' ≤ 9, with w_{i'−1} denoting the (i'−1)-th element of w_Intra4×4(k-1);
A3. Inverse-transform ŷ(k) to obtain the vector x̂(k) embedded with part of the feature information, x̂(k) = (x̂1 x̂2 … x̂9), where x̂1, x̂2 and x̂9 denote the 1st, 2nd and 9th elements of x̂(k), x̂1 = ŷ1 − ⌊(Σ_{i'=2..9} α_{i'}·ŷ_{i'})/(Σ_{i=1..9} α_i)⌋, and x̂_{i'} = ŷ_{i'} + x̂1 for 2 ≤ i' ≤ 9;
A4. Forward-transform x̂(k) to obtain the vector ŷ'(k) = (ŷ'1 ŷ'2 … ŷ'9), where ŷ'1, ŷ'2 and ŷ'9 denote the 1st, 2nd and 9th elements of ŷ'(k), ŷ'1 = ⌊(Σ_{i=1..9} α_i·x̂_i)/(Σ_{i=1..9} α_i)⌋, and ŷ'_{i'} = x̂_{i'} − x̂1 for 2 ≤ i' ≤ 9; α_i is the weight;
A5. Embed the 9th to 16th elements of w_Intra4×4(k-1) into ŷ'(k) to obtain the vector ỹ(k) = (ỹ1 ỹ2 … ỹ9), where ỹ1, ỹ2 and ỹ9 denote the 1st, 2nd and 9th elements of ỹ(k), ỹ1 = ŷ'1, and ỹ_{i'} = 2·ŷ'_{i'} + w_{i'+7} for 2 ≤ i' ≤ 9, with w_{i'+7} denoting the (i'+7)-th element of w_Intra4×4(k-1);
A6. Inverse-transform ỹ(k) to obtain the vector x̃(k) embedded with the feature information corresponding to the current macroblock, x̃(k) = (x̃1 x̃2 … x̃9), where x̃1, x̃2 and x̃9 denote the 1st, 2nd and 9th elements of x̃(k), x̃1 = ỹ1 − ⌊(Σ_{i'=2..9} α_{i'}·ỹ_{i'})/(Σ_{i=1..9} α_i)⌋, and x̃_{i'} = ỹ_{i'} + x̃1 for 2 ≤ i' ≤ 9.
In step ①-5, the specific process of embedding the feature vector w_Intra16×16(k-1) of the previous macroblock into the host vector x(k) of the current macroblock by the once-applied generalized difference expansion method is:
B1. Forward-transform x(k) to obtain the vector y(k) = (y1 y2 … y9), where y1, y2 and y9 denote the 1st, 2nd and 9th elements of y(k), y1 = ⌊(Σ_{i=1..9} α_i·x_i)/(Σ_{i=1..9} α_i)⌋, and y_{i'} = x_{i'} − x1 for 2 ≤ i' ≤ 9 (in particular y9 = x9 − x1); α_i is a weight, and ⌊·⌋ denotes rounding down;
B2. Embed w_Intra16×16(k-1) into y(k) to obtain the vector ỹ(k) = (ỹ1 ỹ2 … ỹ9), where ỹ1, ỹ2 and ỹ9 denote the 1st, 2nd and 9th elements of ỹ(k), ỹ1 = y1, and ỹ_{i'} = 2·y_{i'} + w_{i'−1} for 2 ≤ i' ≤ 9, with w_{i'−1} denoting the (i'−1)-th element of w_Intra16×16(k-1);
B3. Inverse-transform ỹ(k) to obtain the vector x̃(k) embedded with the feature information corresponding to the current macroblock, x̃(k) = (x̃1 x̃2 … x̃9), where x̃1, x̃2 and x̃9 denote the 1st, 2nd and 9th elements of x̃(k), x̃1 = ỹ1 − ⌊(Σ_{i'=2..9} α_{i'}·ỹ_{i'})/(Σ_{i=1..9} α_i)⌋, and x̃_{i'} = ỹ_{i'} + x̃1 for 2 ≤ i' ≤ 9.
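The transforms in steps A1..A6 and B1..B3 can be sketched in a few lines. This follows the common Alattar-style generalized difference expansion; the weight vector, sign handling, and helper names are our assumptions, not a verbatim transcription of the patent. Applying `gde_embed` once corresponds to B1..B3, and twice to A1..A6; `gde_extract` is the corresponding inverse, which recovers both the bits and the original host vector.

```python
def gde_forward(x: list[int], alpha: list[int]) -> list[int]:
    """A1/B1: weighted-mean reference plus differences to x1."""
    a = sum(alpha)
    y0 = sum(ai * xi for ai, xi in zip(alpha, x)) // a   # floor division
    return [y0] + [xi - x[0] for xi in x[1:]]

def gde_inverse(y: list[int], alpha: list[int]) -> list[int]:
    """A3/B3: undo gde_forward."""
    a = sum(alpha)
    x0 = y[0] - sum(ai * yi for ai, yi in zip(alpha[1:], y[1:])) // a
    return [x0] + [yi + x0 for yi in y[1:]]

def gde_embed(x: list[int], bits: list[int], alpha: list[int]) -> list[int]:
    """Embed len(x)-1 bits into host vector x, reversibly (A2/B2 + inverse)."""
    y = gde_forward(x, alpha)
    y_tilde = [y[0]] + [2 * yi + b for yi, b in zip(y[1:], bits)]
    return gde_inverse(y_tilde, alpha)

def gde_extract(x_tilde: list[int], alpha: list[int]):
    """Recover (bits, original host vector) from an embedded vector."""
    y_tilde = gde_forward(x_tilde, alpha)
    bits = [abs(yi) % 2 for yi in y_tilde[1:]]
    y = [y_tilde[0]] + [(yi - b) // 2 for yi, b in zip(y_tilde[1:], bits)]
    return bits, gde_inverse(y, alpha)
```

Reversibility holds because forward-transforming the inverse-transformed vector reproduces ỹ(k) exactly, so the parity bits and the pre-embedding differences can both be recovered; for the twice-applied case the two 8-bit layers come out in reverse order of embedding.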
the specific process of modifying the CBP of the current macroblock in step (i-6) is as follows:
① -6a, if the 8 × 8 block where the characteristic information embedded block in the current macro block is located is the 8 × 8 block at the position of the upper left corner in the current macro block, counting whether all quantized DCT coefficients in the 8 × 8 block at the position of the upper left corner after the characteristic information embedded in the current macro block are all 0, if all 0, then using a in A6Is set to 0, a3,a4,a5Keeping the same, if not all are 0, then a in A is added6Is set to 1, a3,a4,a5Keeping the same;
if the 8 × 8 block in which the feature information embedded block in the current macro block is located is the 8 × 8 block at the upper right corner position in the current macro block, counting whether all quantized DCT coefficients in the 8 × 8 block at the upper right corner position after the feature information embedded in the current macro block are all 0, if all 0, then the a in A is used as the reference value5Is set to 0, a3,a4,a6Keeping the same, if not all are 0, then a in A is added5Is set to 1, a3,a4,a6Keeping the same;
if the 8 × 8 block where the feature information embedded block in the current macroblock is located is the 8 × 8 block at the lower left corner of the current macroblock, then count the embedded feature information of the current macroblockWhether all the quantized DCT coefficients in the 8 × 8 block at the bottom left corner position are all 0, if all 0, then a in A4Is set to 0, a3,a5,a6Keeping the same, if not all are 0, then a in A is added4Is set to 1, a3,a5,a6Keeping the same;
if the 8 × 8 block in which the feature information embedded block in the current macroblock is located is the 8 × 8 block at the bottom right corner of the current macroblock, then it is counted whether all quantized DCT coefficients in the 8 × 8 block at the bottom right corner of the current macroblock after the feature information is embedded are all 0, if all 0, then a in A is used3Is set to 0, a4,a5,a6Keeping the same, if not all are 0, then a in A is added3Is set to 1, a4,a5,a6Keeping the same;
and 6b, converting the modified A into a decimal number to obtain the modified CBP of the current macro block.
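The four cases of step ①-6a collapse into one bit operation once the mapping is fixed. In this sketch (the helper name is ours) a6 is the least significant bit of the 6-bit CBP and corresponds to the top-left luma 8×8 block, a5 to top-right, a4 to bottom-left, a3 to bottom-right, matching the case analysis above:

```python
def update_cbp(cbp: int, block8x8_index: int, all_zero: bool) -> int:
    """block8x8_index: 0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right.
    Clears the matching luma CBP bit when the 8x8 block became all-zero
    after embedding, sets it otherwise; the other bits are untouched."""
    mask = 1 << block8x8_index       # bit 0 of the CBP is a6
    return (cbp & ~mask) if all_zero else (cbp | mask)
```

This keeps the CBP consistent with the modified coefficients, so the entropy coder in step ①-7 writes a decodable macroblock.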
In step ②-1, the specific process of extracting the feature information from the vector x̃(k') containing the feature information of a macroblock, and of determining the coding mode and luma prediction mode of the previous macroblock of that macroblock, is:
②-1a. Forward-transform x̃(k') to obtain the vector ỹ(k') = (ỹ1 ỹ2 … ỹ9), where ỹ1, ỹ2 and ỹ9 denote the 1st, 2nd and 9th elements of ỹ(k'), ỹ1 = ⌊(Σ_{i=1..9} α_i·x̃_i)/(Σ_{i=1..9} α_i)⌋, and ỹ_{i'} = x̃_{i'} − x̃1 for 2 ≤ i' ≤ 9; α_i is a weight, and ⌊·⌋ denotes rounding down;
②-1b. Extract from ỹ(k') the feature information consisting of 8 feature bits, denoted w̃(k') = (w̃1 w̃2 … w̃8), where w̃1, w̃2 and w̃8 denote the 1st, 2nd and 8th elements of w̃(k'), w̃_j = mod(|ỹ_{j+1}|, 2) for 1 ≤ j ≤ 8, and the symbol "| |" denotes taking the absolute value;
②-1c. Construct a new vector z(k') = (z1 z2 … z9), where z1, z2 and z9 denote the 1st, 2nd and 9th elements of z(k'), z1 = ỹ1, and z_{i'} = (ỹ_{i'} − w̃_{i'−1})/2 for 2 ≤ i' ≤ 9;
②-1d. Inverse-transform z(k') to obtain the vector f(k') = (f1 f2 … f9), where f1, f2 and f9 denote the 1st, 2nd and 9th elements of f(k'), f1 = z1 − ⌊(Σ_{i'=2..9} α_{i'}·z_{i'})/(Σ_{i=1..9} α_i)⌋, and f_{i'} = z_{i'} + f1 for 2 ≤ i' ≤ 9 (in particular f2 = z2 + f1 and f9 = z9 + f1);
②-1e. Convert the binary string formed by the 1st element w̃1, the 2nd element w̃2, the 3rd element w̃3 and the 4th element w̃4 of w̃(k') into a decimal value. If this decimal value corresponds to the identifier 9, the coding mode of the previous macroblock of the macroblock is Intra16×16; replace the 8th to 16th quantized DCT coefficients of the feature information extraction block, in order, with the elements of f(k'); then determine the luma prediction mode of the previous macroblock, whose digital identifier is the decimal value converted from the binary string formed by the 5th element w̃5, the 6th element w̃6, the 7th element w̃7 and the 8th element w̃8 of w̃(k'). Feature information extraction and determination of the coding mode and luma prediction mode are then complete.
If the decimal value does not correspond to the identifier 9, the coding mode of the previous macroblock of the macroblock is Intra4×4; then go to step ②-1f;
②-1f. Forward-transform f(k') to obtain the vector ỹ'(k') = (ỹ'1 ỹ'2 … ỹ'9), where ỹ'1, ỹ'2 and ỹ'9 denote the 1st, 2nd and 9th elements of ỹ'(k'), ỹ'1 = ⌊(Σ_{i=1..9} α_i·f_i)/(Σ_{i=1..9} α_i)⌋, and ỹ'_{i'} = f_{i'} − f1 for 2 ≤ i' ≤ 9; α_i is a weight, and ⌊·⌋ denotes rounding down;
②-1g. Extract from ỹ'(k') the feature information consisting of 8 feature bits, denoted w̃'(k') = (w̃'1 w̃'2 … w̃'8), where w̃'1, w̃'2 and w̃'8 denote the 1st, 2nd and 8th elements of w̃'(k'), w̃'_j = mod(|ỹ'_{j+1}|, 2) for 1 ≤ j ≤ 8, and the symbol "| |" denotes taking the absolute value;
②-1h. Construct a new vector z'(k') = (z'1 z'2 … z'9), where z'1, z'2 and z'9 denote the 1st, 2nd and 9th elements of z'(k'), z'1 = ỹ'1, and z'_{i'} = (ỹ'_{i'} − w̃'_{i'−1})/2 for 2 ≤ i' ≤ 9;
②-1i. Inverse-transform z'(k') to obtain the vector x(k') = (x1 x2 … x9), where x1, x2 and x9 denote the 1st, 2nd and 9th elements of x(k'), x1 = z'1 − ⌊(Σ_{i'=2..9} α_{i'}·z'_{i'})/(Σ_{i=1..9} α_i)⌋, and x_{i'} = z'_{i'} + x1 for 2 ≤ i' ≤ 9 (in particular x2 = z'2 + x1 and x9 = z'9 + x1);
②-1j. Replace the 8th to 16th quantized DCT coefficients of the feature information extraction block, in order, with the elements of x(k');
②-1k. Construct a vector w(k') containing 16 elements, whose first 8 elements are the 8 elements of w̃'(k') and whose last 8 elements are the 8 elements of w̃(k');
②-1l. Determine the luma prediction mode of the 4×4 block numbered 0 in the previous macroblock of the macroblock: its digital identifier is the decimal value converted from the binary string formed by the 1st, 2nd, 3rd and 4th elements of w(k').
Determine the luma prediction mode of the 4×4 block numbered 4 in the previous macroblock: its digital identifier is the decimal value converted from the binary string formed by the 5th, 6th, 7th and 8th elements of w(k').
Determine the luma prediction mode of the 4×4 block numbered 8 in the previous macroblock: its digital identifier is the decimal value converted from the binary string formed by the 9th, 10th, 11th and 12th elements of w(k').
Determine the luma prediction mode of the 4×4 block numbered 12 in the previous macroblock: its digital identifier is the decimal value converted from the binary string formed by the 13th, 14th, 15th and 16th elements of w(k').
Feature information extraction and determination of the coding mode and luma prediction modes are then complete.
The specific process of recovering the current macroblock in step ②-4 using its coding mode and luma prediction mode is:
②-4a. If the coding mode of the current macroblock is Intra16×16, predict the pixel value of every pixel in the current macroblock using the luma prediction mode of the current macroblock, then take the predicted value of each pixel as the finally recovered pixel value of that pixel, completing recovery of the current macroblock;
②-4b. If the coding mode of the current macroblock is Intra4×4, use the luma prediction mode of the 4×4 block numbered 0 in the current macroblock to predict the pixel value of every pixel of each 4×4 block in the 8×8 block containing that 4×4 block, then take the predicted values as the finally recovered pixel values of the corresponding pixels.
Likewise, use the luma prediction mode of the 4×4 block numbered 4 to predict and recover every pixel of each 4×4 block in the 8×8 block containing that block; use the luma prediction mode of the 4×4 block numbered 8 to predict and recover every pixel of each 4×4 block in the 8×8 block containing that block; and use the luma prediction mode of the 4×4 block numbered 12 to predict and recover every pixel of each 4×4 block in the 8×8 block containing that block.
Compared with the prior art, the invention has the advantages that:
1) at the encoding end, the extracted characteristic information of the macro block is determined by the brightness prediction mode of the sub-block of the macro block (the encoding mode is Intra4 × 4) or the encoding mode and the brightness prediction mode of the macro block (the encoding mode is Intra16 × 16), so that the extraction is convenient, the macro block lost in the presence of scene change can be effectively recovered, and the I frame error recovery quality is improved.
2) At the encoding end, when embedding the feature vector into the host vector, embedding is carried out by adopting a twice generalized differential extension method or a once generalized differential extension method according to the encoding mode of the last macro block of the current processed macro block, wherein the generalized differential extension method not only enables the embedding capacity to be controllable, but also can realize that original data before the feature information is embedded into the I frame is restored after the feature information is extracted.
3) The objects differenced by the generalized difference expansion method are the quantized DCT coefficients, which greatly reduces computational complexity provided the differences do not overflow.
4) The method of the invention recovers I-frame errors by means of information hiding, making full use of information available at the encoding end; compared with error concealment performed only at the decoding end, the usable video resources are richer and more accurate, and the flexibility is higher.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
fig. 2 is a schematic diagram of the numbering of 16 4 × 4 blocks in a macroblock;
fig. 3a is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of a Foreman standard test sequence when a coding quantization parameter QP is 28;
fig. 3b is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of a Carphone standard test sequence when the coding quantization parameter QP is 28;
fig. 3c is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of a Container standard test sequence when the coding quantization parameter QP is 28;
fig. 3d is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of an Akiyo standard test sequence when a coding quantization parameter QP is 28;
fig. 3e is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of a Silent standard test sequence when a coding quantization parameter QP is 28;
fig. 3f is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of a Mobile standard test sequence when a coding quantization parameter QP is 28;
fig. 4a is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of a Foreman standard test sequence when a coding quantization parameter QP is 38;
fig. 4b is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of a Carphone standard test sequence when the coding quantization parameter QP is 38;
fig. 4c is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of a Container standard test sequence when the coding quantization parameter QP is 38;
fig. 4d is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of an Akiyo standard test sequence when a coding quantization parameter QP is 38;
fig. 4e is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of a Silent standard test sequence when a coding quantization parameter QP is 38;
fig. 4f is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates of an I frame of a Mobile standard test sequence when the coding quantization parameter QP is 38;
FIG. 5a is the original image of frame 20 (an I frame) of the Carphone standard test sequence;
FIG. 5b is a graph of FIG. 5a after block dropping (block dropping rate 20%);
FIG. 5c is a diagram of FIG. 5b after recovery using the prior art JM method;
FIG. 5d is a diagram of FIG. 5b after recovery using the prior art RH method;
FIG. 5e is a diagram of FIG. 5b after recovery using the method of the present invention;
fig. 6a is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates for the first 150 frames of the synthesized sequence when the coding quantization parameter QP is 28;
fig. 6b is a schematic diagram of objective quality comparison of error recovery after processing by the method of the present invention and the existing JM method and RH method under different macroblock loss rates for the first 150 frames of the synthesized sequence when the coding quantization parameter QP is 38;
FIG. 7a is the original image of frame 29 of the synthesized sequence;
FIG. 7b is the original image of frame 30 (an I frame) of the synthesized sequence;
FIG. 7c is a graph of the image shown in FIG. 7b after block dropping (block dropping rate of 20%);
FIG. 7d is a subjective quality map of FIG. 7c after recovery using the prior art JM method;
FIG. 7e is a subjective quality map of FIG. 7c after recovery using the prior art RH method;
fig. 7f is a subjective quality map of fig. 7c after recovery using the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The general implementation block diagram of the method for recovering I frame errors of H.264/AVC video based on reversible data hiding, which is provided by the invention, is shown in figure 1 and comprises the following steps:
① at the encoding end, embed characteristic information into each macroblock of the I frame except the 1st macroblock, obtaining the I frame with embedded characteristic information; the specific process is as follows:
①-1, define the kth macroblock to be pre-coded in the I frame as the current macroblock, where 1 ≤ k ≤ K, K denotes the total number of macroblocks contained in the I frame, and the initial value of k is 1.
①-2, convert the decimal value of the CBP of the current macroblock into a binary string of 6 bits, denoted A = a1a2a3a4a5a6, where a1 is the most significant bit of A and a6 is the least significant bit of A.
The upper two bits of the binary string corresponding to the CBP of a macroblock describe chroma, and the lower four bits describe luma. a6 gives the statistics of the quantized DCT coefficients of the 8 × 8 block at the top-left corner of the macroblock: if a6 is 0, the quantized DCT coefficients of all 4 × 4 blocks in that 8 × 8 block are 0; if a6 is 1, they are not all 0. Likewise, a5 describes the statistics of the quantized DCT coefficients of the 8 × 8 block at the top-right corner of the macroblock, a4 those of the 8 × 8 block at the bottom-left corner, and a3 those of the 8 × 8 block at the bottom-right corner.
①-3, extract the feature vector of the current macroblock. If the coding mode of the current macroblock is Intra4 × 4, extract the numerical identifier of the luminance prediction mode of each of the four 4 × 4 blocks numbered 0, 4, 8 and 12 (as shown in fig. 2) in the current macroblock, quantize the identifier of each of the four 4 × 4 blocks to 4 bits, and then arrange the bit representations of the four identifiers in the numbering order of the four 4 × 4 blocks to form a one-dimensional vector containing 16 elements. This vector is taken as the feature vector of the current macroblock and denoted wIntra4×4(k) = (w1 w2 … w16), where w1, w2 and w16 denote the 1st, 2nd and 16th elements of wIntra4×4(k).
If the coding mode of the current macroblock is Intra16 × 16, mark the coding mode of the current macroblock with the numeral 9, quantize this numeral identifier to 4 bits, and quantize the numeral identifier of the luminance prediction mode of the current macroblock to 4 bits; then concatenate the bit representation of the coding-mode identifier followed by the bit representation of the luminance-prediction-mode identifier to form a one-dimensional vector containing 8 elements. This vector is taken as the feature vector of the current macroblock and denoted wIntra16×16(k) = (w1 w2 … w8), where w1, w2 and w8 denote the 1st, 2nd and 8th elements of wIntra16×16(k).
In H.264/AVC video, the 4 × 4 blocks of a macroblock coded in Intra4 × 4 mode have 9 luminance prediction modes, identified by the numerals 0 to 8; a macroblock coded in Intra16 × 16 mode has 4 luminance prediction modes, identified by the numerals 0 to 3.
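As a concrete illustration, the 4-bit quantization of mode identifiers described in step ①-3 can be sketched as follows (a minimal Python sketch; the helper name and the example mode values are illustrative assumptions, not taken from the patent):

```python
def pack_modes_4bit(mode_ids):
    """Pack mode identifiers into a flat bit vector,
    4 bits per identifier, most significant bit first."""
    bits = []
    for m in mode_ids:
        bits.extend(int(b) for b in format(m, "04b"))
    return bits

# Intra4x4 macroblock: luma modes (0-8) of the 4x4 blocks numbered
# 0, 4, 8 and 12 give the 16-element feature vector.
w_intra4x4 = pack_modes_4bit([2, 0, 8, 1])   # 16 bits

# Intra16x16 macroblock: the tag 9 for the coding mode followed by
# the luma mode (0-3) gives the 8-element feature vector.
w_intra16x16 = pack_modes_4bit([9, 3])       # 8 bits: 1001 0011
```

The same 4-bit grouping is what the decoder reverses when it converts extracted bit groups back into decimal identifiers.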
①-4, determine the host vector of the current macroblock: for each 4 × 4 block in the current macroblock, calculate the sum of the absolute values of all its quantized AC DCT coefficients; take the 4 × 4 block with the largest sum as the characteristic-information embedding block; scan all quantized DCT coefficients of the embedding block in zig-zag order; then arrange the 8th to 16th quantized DCT coefficients of the embedding block in zig-zag scan order (the 8th to 16th coefficients of the zig-zag scan are the mid- and high-frequency quantized DCT coefficients) to form a one-dimensional vector containing 9 elements. This vector is taken as the host vector of the current macroblock and denoted x(k) = (x1 x2 … x9), where x1, x2 and x9 denote the 1st, 2nd and 9th elements of x(k).
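The selection of the embedding block and host vector in step ①-4 can be sketched as follows (Python; it is assumed, for illustration only, that each 4 × 4 block is given as a flat row-major list of its 16 quantized coefficients):

```python
# Zig-zag scan order of a 4x4 coefficient block (row-major indices).
ZIGZAG_4x4 = [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]

def host_vector(blocks):
    """blocks: the 16 quantized-DCT 4x4 blocks of a macroblock.
    Returns (index of the embedding block, host vector = its
    8th..16th coefficients in zig-zag scan order)."""
    def ac_energy(b):
        # sum of |coefficients| over all AC positions (everything
        # except the DC term at zig-zag position 0)
        return sum(abs(b[i]) for i in ZIGZAG_4x4[1:])
    k = max(range(len(blocks)), key=lambda i: ac_energy(blocks[i]))
    scanned = [blocks[k][i] for i in ZIGZAG_4x4]
    return k, scanned[7:16]   # the 8th..16th scanned coefficients
```

The decoder repeats the same selection on the received coefficients to locate the extraction block.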
①-5, embed the feature vector of the previous macroblock into the host vector of the current macroblock. Judge whether the current macroblock is the 1st macroblock of the I frame; if so, do not process the current macroblock and go directly to step ①-7. Otherwise, if the coding mode of the previous macroblock is Intra4 × 4, embed its feature vector wIntra4×4(k−1) into the host vector x(k) of the current macroblock by a two-pass generalized difference expansion, obtaining the characteristic-information-embedded vector x̃(k) of the current macroblock; replace the 8th to 16th quantized DCT coefficients of the characteristic-information embedding block in the current macroblock, in order, with the elements of x̃(k); then perform step ①-6. If the coding mode of the previous macroblock is Intra16 × 16, embed its feature vector wIntra16×16(k−1) into the host vector x(k) of the current macroblock by a one-pass generalized difference expansion, obtaining the characteristic-information-embedded vector x̃(k); replace the 8th to 16th quantized DCT coefficients of the embedding block in the current macroblock, in order, with the elements of x̃(k); then perform step ①-6.
In this embodiment, the specific process in step ①-5 of embedding the feature vector wIntra4×4(k−1) of the previous macroblock into the host vector x(k) of the current macroblock by the two-pass generalized difference expansion is:
A1, forward-transform x(k) to obtain the vector y(k) = (y1 y2 … y9), where y1, y2 and y9 denote the 1st, 2nd and 9th elements of y(k), y1 = ⌊(α1x1 + α2x2 + … + α9x9)/(α1 + α2 + … + α9)⌋, and yi' = xi' − x1 for 2 ≤ i' ≤ 9; αi is a weight, taken as αi = 1 in implementation, and ⌊ ⌋ denotes rounding down.
A2, embed the 1st to 8th elements of wIntra4×4(k−1) into y(k) to obtain the vector ỹ(k) = (ỹ1 ỹ2 … ỹ9), where ỹ1, ỹ2 and ỹ9 denote the 1st, 2nd and 9th elements of ỹ(k), ỹ1 = y1, and ỹi' = 2yi' + wi'−1 for 2 ≤ i' ≤ 9, with w1, wi'−1 and w8 denoting the 1st, (i'−1)th and 8th elements of wIntra4×4(k−1).
A3, inverse-transform the vector ỹ(k) obtained in step A2 to obtain the vector x'(k) = (x'1 x'2 … x'9) embedded with partial characteristic information, where x'1, x'2 and x'9 denote the 1st, 2nd and 9th elements of x'(k), x'1 = ỹ1 − ⌊(α2ỹ2 + α3ỹ3 + … + α9ỹ9)/(α1 + α2 + … + α9)⌋, and x'i' = ỹi' + x'1 for 2 ≤ i' ≤ 9.
A4, forward-transform x'(k) to obtain the vector y'(k) = (y'1 y'2 … y'9), where y'1, y'2 and y'9 denote the 1st, 2nd and 9th elements of y'(k), y'1 = ⌊(α1x'1 + α2x'2 + … + α9x'9)/(α1 + α2 + … + α9)⌋, and y'i' = x'i' − x'1 for 2 ≤ i' ≤ 9; αi is a weight.
A5, embed the 9th to 16th elements of wIntra4×4(k−1) into y'(k) to obtain the vector ŷ(k) = (ŷ1 ŷ2 … ŷ9), where ŷ1, ŷ2 and ŷ9 denote the 1st, 2nd and 9th elements of ŷ(k), ŷ1 = y'1, and ŷi' = 2y'i' + wi'+7 for 2 ≤ i' ≤ 9, with w9, wi'+7 and w16 denoting the 9th, (i'+7)th and 16th elements of wIntra4×4(k−1).
A6, inverse-transform ŷ(k) to obtain the characteristic-information-embedded vector x̃(k) = (x̃1 x̃2 … x̃9) corresponding to the current macroblock, completing the embedding of the feature vector wIntra4×4(k−1) of the previous macroblock into the host vector x(k) of the current macroblock; here x̃1, x̃2 and x̃9 denote the 1st, 2nd and 9th elements of x̃(k), x̃1 = ŷ1 − ⌊(α2ŷ2 + α3ŷ3 + … + α9ŷ9)/(α1 + α2 + … + α9)⌋, and x̃i' = ŷi' + x̃1 for 2 ≤ i' ≤ 9.
In this embodiment, the specific process in step ①-5 of embedding the feature vector wIntra16×16(k−1) of the previous macroblock into the host vector x(k) of the current macroblock by the one-pass generalized difference expansion is:
B1, forward-transform x(k) to obtain the vector y(k) = (y1 y2 … y9), where y1, y2 and y9 denote the 1st, 2nd and 9th elements of y(k), y1 = ⌊(α1x1 + α2x2 + … + α9x9)/(α1 + α2 + … + α9)⌋, and yi' = xi' − x1 for 2 ≤ i' ≤ 9; αi is a weight, taken as αi = 1 in implementation, and ⌊ ⌋ denotes rounding down.
B2, embed wIntra16×16(k−1) into y(k) to obtain the vector ỹ(k) = (ỹ1 ỹ2 … ỹ9), where ỹ1, ỹ2 and ỹ9 denote the 1st, 2nd and 9th elements of ỹ(k), ỹ1 = y1, and ỹi' = 2yi' + wi'−1 for 2 ≤ i' ≤ 9, with w1, wi'−1 and w8 denoting the 1st, (i'−1)th and 8th elements of wIntra16×16(k−1).
B3, inverse-transform ỹ(k) to obtain the characteristic-information-embedded vector x̃(k) = (x̃1 x̃2 … x̃9) corresponding to the current macroblock, completing the embedding of the feature vector wIntra16×16(k−1) of the previous macroblock into the host vector x(k) of the current macroblock; here x̃1, x̃2 and x̃9 denote the 1st, 2nd and 9th elements of x̃(k), x̃1 = ỹ1 − ⌊(α2ỹ2 + α3ỹ3 + … + α9ỹ9)/(α1 + α2 + … + α9)⌋, and x̃i' = ỹi' + x̃1 for 2 ≤ i' ≤ 9.
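A minimal, self-contained sketch of the generalized difference expansion used in steps A1 to A6 and B1 to B3 (Python; integer floor division realizes the rounding-down, the function names are illustrative, and the round trip demonstrates the reversibility the method relies on):

```python
def forward(x, a):
    # y1: floored weighted mean; remaining entries: differences vs x1
    y1 = sum(ai * xi for ai, xi in zip(a, x)) // sum(a)
    return [y1] + [xi - x[0] for xi in x[1:]]

def inverse(y, a):
    x1 = y[0] - sum(ai * yi for ai, yi in zip(a[1:], y[1:])) // sum(a)
    return [x1] + [yi + x1 for yi in y[1:]]

def embed8(x, bits, a):
    # one expansion pass: hide one bit in the LSB of each difference
    y = forward(x, a)
    y = [y[0]] + [2 * yi + b for yi, b in zip(y[1:], bits)]
    return inverse(y, a)

def extract8(xm, a):
    # recover the 8 bits and undo one expansion pass
    y = forward(xm, a)
    bits = [abs(yi) % 2 for yi in y[1:]]
    y = [y[0]] + [(yi - b) // 2 for yi, b in zip(y[1:], bits)]
    return bits, inverse(y, a)

a = [1] * 9                              # weights alpha_i = 1
x = [3, -2, 0, 5, 1, -1, 2, 0, 4]        # example 9-element host vector
w = [1, 0, 1, 1, 0, 0, 1, 0,             # example 16-bit feature vector
     0, 1, 1, 0, 1, 0, 0, 1]

# Intra4x4 case: two passes; the Intra16x16 case is a single embed8 call.
marked = embed8(embed8(x, w[:8], a), w[8:], a)

bits2, mid = extract8(marked, a)         # the last pass comes out first
bits1, restored = extract8(mid, a)
assert bits1 + bits2 == w and restored == x
```

Because forward(inverse(y)) = y for any integer vector y, extraction recovers both the embedded bits and the original host vector exactly, which is the reversibility property claimed for the method.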
①-6, modify the CBP (Coded Block Pattern) of the current macroblock, and then perform step ①-7.
In this embodiment, the specific process of modifying the CBP of the current macroblock in step ①-6 is as follows:
①-6a, if the 8 × 8 block containing the characteristic-information embedding block of the current macroblock is the 8 × 8 block at the top-left corner of the current macroblock, check whether, after the characteristic information has been embedded, all quantized DCT coefficients of that top-left 8 × 8 block are 0: if they are all 0, set a6 in A to 0, keeping a3, a4, a5 unchanged; if they are not all 0, set a6 in A to 1, keeping a3, a4, a5 unchanged.
If the 8 × 8 block containing the embedding block is the 8 × 8 block at the top-right corner of the current macroblock, check whether, after embedding, all quantized DCT coefficients of that top-right 8 × 8 block are 0: if they are all 0, set a5 in A to 0, keeping a3, a4, a6 unchanged; otherwise set a5 in A to 1, keeping a3, a4, a6 unchanged.
If the 8 × 8 block containing the embedding block is the 8 × 8 block at the bottom-left corner of the current macroblock, check whether, after embedding, all quantized DCT coefficients of that bottom-left 8 × 8 block are 0: if they are all 0, set a4 in A to 0, keeping a3, a5, a6 unchanged; otherwise set a4 in A to 1, keeping a3, a5, a6 unchanged.
If the 8 × 8 block containing the embedding block is the 8 × 8 block at the bottom-right corner of the current macroblock, check whether, after embedding, all quantized DCT coefficients of that bottom-right 8 × 8 block are 0: if they are all 0, set a3 in A to 0, keeping a4, a5, a6 unchanged; otherwise set a3 in A to 1, keeping a4, a5, a6 unchanged.
①-6b, convert the modified A back into a decimal number to obtain the modified CBP of the current macroblock.
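The four cases of step ①-6a and the conversion of ①-6b reduce to setting or clearing one luma bit of the 6-bit CBP; a sketch (Python, with the bit positions inferred from the description of A above, a6 being the least significant bit):

```python
def update_cbp(cbp, corner, all_zero):
    """Update the 6-bit CBP after embedding.
    corner: 0 = top-left (bit a6, LSB), 1 = top-right (a5),
    2 = bottom-left (a4), 3 = bottom-right (a3).
    all_zero: True if every quantized DCT coefficient of that
    8x8 luma block is now 0."""
    bit = 1 << corner
    return (cbp & ~bit) if all_zero else (cbp | bit)

cbp = 0b101101
cbp = update_cbp(cbp, corner=0, all_zero=True)  # clears a6
```

The two chroma bits (a1, a2) sit in the upper positions and are never touched by this update.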
①-7, entropy-encode the current macroblock; in implementation, the existing context-adaptive variable length coding (CAVLC) technique may be employed to entropy-encode the current macroblock.
①-8, let k = k + 1 (where "=" in k = k + 1 is the assignment symbol), take the next macroblock to be pre-coded in the I frame as the current macroblock, and then return to step ①-2 and continue until all macroblocks to be pre-coded in the I frame have been processed, obtaining the coded stream of the I frame with embedded characteristic information.
② at the decoding end, extract the characteristic information from each macroblock of the I frame with embedded characteristic information, determine the coding mode and luminance prediction mode of each macroblock, and then perform error recovery on incorrectly decoded blocks; the specific process is as follows:
②-1, entropy-decode each macroblock of the I frame with embedded characteristic information, and determine whether each entropy-decoded macroblock is a correctly decoded block; then determine the vector containing the characteristic information of each correctly decoded block other than the 1st macroblock (the 1st macroblock, whether correctly or incorrectly decoded, is not processed); then extract the characteristic information from that vector of each such correctly decoded block, and determine the coding mode and luminance prediction mode of the previous macroblock of each correctly decoded block other than the 1st macroblock. Suppose the k'th entropy-decoded macroblock is a correctly decoded block.
The specific process for determining the vector containing the characteristic information of this macroblock is: for each 4 × 4 block of the macroblock, calculate the sum of the absolute values of those quantized DCT coefficients that become AC DCT coefficients after inverse quantization (i.e. the quantized AC coefficients); take the 4 × 4 block with the largest sum as the characteristic-information extraction block; scan all quantized DCT coefficients of the extraction block in zig-zag order; then arrange the 8th to 16th quantized DCT coefficients of the extraction block in zig-zag scan order (the 8th to 16th coefficients of the zig-zag scan are the mid- and high-frequency quantized DCT coefficients) to form a one-dimensional vector containing 9 elements. This vector is taken as the vector containing the characteristic information of the macroblock and denoted x̃(k') = (x̃1 x̃2 … x̃9), where x̃1, x̃2 and x̃9 denote the 1st, 2nd and 9th elements of x̃(k').
Then extract the characteristic information from the vector x̃(k') containing the characteristic information of the macroblock, and determine the coding mode and luminance prediction mode of the previous macroblock of the macroblock.
In this embodiment, the specific process in step ②-1 of extracting the characteristic information from the vector x̃(k') containing the characteristic information of the macroblock and determining the coding mode and luminance prediction mode of the previous macroblock of the macroblock is:
②-1a, forward-transform x̃(k') to obtain the vector ỹ(k') = (ỹ1 ỹ2 … ỹ9), where ỹ1, ỹ2 and ỹ9 denote the 1st, 2nd and 9th elements of ỹ(k'), ỹ1 = ⌊(α1x̃1 + α2x̃2 + … + α9x̃9)/(α1 + α2 + … + α9)⌋, and ỹi' = x̃i' − x̃1 for 2 ≤ i' ≤ 9; αi is a weight, taken as αi = 1 in implementation, and ⌊ ⌋ denotes rounding down.
②-1b, extract from ỹ(k') the characteristic information consisting of 8 feature bits, denoted w̃(k') = (w̃1 w̃2 … w̃8), where w̃1, w̃2 and w̃8 denote the 1st, 2nd and 8th elements of w̃(k'), and w̃j = |ỹj+1| mod 2 for 1 ≤ j ≤ 8, the symbol "| |" denoting absolute value.
②-1c, construct a new vector z(k') = (z1 z2 … z9), where z1, z2 and z9 denote the 1st, 2nd and 9th elements of z(k'), z1 = ỹ1, and zi' = (ỹi' − w̃i'−1)/2 for 2 ≤ i' ≤ 9, with w̃i'−1 denoting the (i'−1)th element of w̃(k').
②-1d, inverse-transform z(k') to obtain the vector f(k') = (f1 f2 … f9), where f1, f2 and f9 denote the 1st, 2nd and 9th elements of f(k'), f1 = z1 − ⌊(α2z2 + α3z3 + … + α9z9)/(α1 + α2 + … + α9)⌋, and fi' = zi' + f1 for 2 ≤ i' ≤ 9.
②-1e, convert the binary string formed by the 1st element w̃1, the 2nd element w̃2, the 3rd element w̃3 and the 4th element w̃4 of w̃(k') into a decimal value. If that decimal value corresponds to the numeral 9, the coding mode of the previous macroblock of this macroblock is Intra16 × 16; in this case, replace the 8th to 16th quantized DCT coefficients of the characteristic-information extraction block of this macroblock, in order, with the elements of f(k'), i.e. recover the original quantized DCT coefficients; then determine the luminance prediction mode of the previous macroblock of this macroblock, whose numerical identifier is the decimal value converted from the binary string formed by the 5th element w̃5, the 6th element w̃6, the 7th element w̃7 and the 8th element w̃8 of w̃(k'). This completes the characteristic-information extraction and the determination of the coding mode and luminance prediction mode.
If the decimal value does not correspond to the numeral 9, the coding mode of the previous macroblock of the macroblock is Intra4 × 4, and step ②-1f is then performed.
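The decision just described, namely interpret the first extracted 8 bits and fall back to a second extraction pass when the 4-bit tag is not 9, can be sketched as (Python; the helper name is an illustrative assumption):

```python
def decode_mode_info(bits):
    """bits: the 8 feature bits from the first extraction pass.
    Returns ('Intra16x16', luma_mode) when the first 4 bits encode
    the tag 9; otherwise None, signalling that a second extraction
    pass is needed to obtain the 16-bit Intra4x4 feature vector."""
    tag = int("".join(map(str, bits[:4])), 2)
    if tag == 9:
        return "Intra16x16", int("".join(map(str, bits[4:])), 2)
    return None
```

This works because 9 (binary 1001) is outside the range 0 to 8 used for the Intra4 × 4 luminance prediction mode identifiers, so the tag cannot be confused with a mode bit group.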
②-1f, forward-transform f(k') to obtain the vector y'(k') = (y'1 y'2 … y'9), where y'1, y'2 and y'9 denote the 1st, 2nd and 9th elements of y'(k'), y'1 = ⌊(α1f1 + α2f2 + … + α9f9)/(α1 + α2 + … + α9)⌋, and y'i' = fi' − f1 for 2 ≤ i' ≤ 9; αi is a weight, taken as αi = 1 in implementation, and ⌊ ⌋ denotes rounding down.
②-1g, extract from y'(k') the characteristic information consisting of 8 feature bits, denoted w̃'(k') = (w̃'1 w̃'2 … w̃'8), where w̃'1, w̃'2 and w̃'8 denote the 1st, 2nd and 8th elements of w̃'(k'), and w̃'j = |y'j+1| mod 2 for 1 ≤ j ≤ 8, the symbol "| |" denoting absolute value.
②-1h, construct a new vector z'(k') = (z'1 z'2 … z'9), where z'1, z'2 and z'9 denote the 1st, 2nd and 9th elements of z'(k'), z'1 = y'1, and z'i' = (y'i' − w̃'i'−1)/2 for 2 ≤ i' ≤ 9, with w̃'i'−1 denoting the (i'−1)th element of w̃'(k').
②-1i, inverse-transform z'(k') to obtain the vector x(k') = (x1 x2 … x9), where x1, x2 and x9 denote the 1st, 2nd and 9th elements of x(k'), x1 = z'1 − ⌊(α2z'2 + α3z'3 + … + α9z'9)/(α1 + α2 + … + α9)⌋, and xi' = z'i' + x1 for 2 ≤ i' ≤ 9.
②-1j, replace the 8th to 16th quantized DCT coefficients of the characteristic-information extraction block of the macroblock, in order, with the elements of x(k'), i.e. recover the original quantized DCT coefficients.
②-1k, construct a vector w(k') containing 16 elements: the first 8 elements of w(k') are the 8 elements of w̃'(k') (the bits recovered by the second extraction pass), and the last 8 elements of w(k') are the 8 elements of w̃(k') (the bits recovered by the first extraction pass).
②-1l, determine the luminance prediction mode of the 4 × 4 block numbered 0 in the previous macroblock of the macroblock: the numerical identifier of the luminance prediction mode of that 4 × 4 block is the decimal value converted from the binary string consisting of the 1st, 2nd, 3rd and 4th elements of w(k').
The luminance prediction mode of the 4 × 4 block numbered 4 in the previous macroblock of the macroblock is determined, and the number of the luminance prediction mode of the 4 × 4 block is identified as a decimal value converted from a binary string consisting of the 5 th element, the 6 th element, the 7 th element and the 8 th element in w (k').
The luminance prediction mode of the 4 × 4 block numbered 8 in the previous macroblock of the macroblock is determined, and the number of the luminance prediction mode of the 4 × 4 block is identified as a decimal value converted from a binary string consisting of the 9 th element, the 10 th element, the 11 th element and the 12 th element in w (k').
The luminance prediction mode of the 4 × 4 block numbered 12 in the previous macroblock of the macroblock is determined, and the number of the luminance prediction mode of the 4 × 4 block is identified as a decimal value converted from a binary string consisting of the 13 th element, the 14 th element, the 15 th element and the 16 th element in w (k').
This completes the characteristic-information extraction and the determination of the coding mode and luminance prediction modes.
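The four determinations above amount to splitting the 16-element vector w(k') into four 4-bit groups; a sketch (Python, illustrative helper name):

```python
def unpack_modes(w16):
    """Convert each consecutive 4-bit group (MSB first) of a
    16-element bit vector into the luma prediction mode of the
    4x4 blocks numbered 0, 4, 8 and 12, in that order."""
    return [int("".join(map(str, w16[i:i + 4])), 2)
            for i in range(0, 16, 4)]
```

It is the exact inverse of the 4-bit quantization performed at the encoding end in step ①-3.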
Here, 2 ≤ k' ≤ K.
In specific implementation, the existing CAVLC entropy decoding technique can be used to entropy-decode each macroblock of the I frame with embedded characteristic information, and whether each entropy-decoded macroblock is a correctly decoded block is determined directly with prior-art techniques.
②-2, define the kth macroblock in the decoded I frame as the current macroblock, where 1 ≤ k ≤ K and the initial value of k is 1.
②-3, judge whether the current macroblock is the last macroblock of the decoded I frame; if not, perform step ②-4; if so, perform step ②-5.
②-4, judge whether the current macroblock is a correctly or incorrectly decoded block; if it is a correctly decoded block, do not process it, and then perform step ②-6.
If the current macroblock is an incorrectly decoded block, judge whether the next macroblock of the current macroblock is a correctly or incorrectly decoded block. If the next macroblock is a correctly decoded block, recover the current macroblock using the coding mode and luminance prediction mode of the current macroblock, and then perform step ②-6; if the next macroblock is an incorrectly decoded block, recover the current macroblock using the existing bilinear interpolation method.
In this embodiment, the specific process in step ②-4 of recovering the current macroblock using its coding mode and luminance prediction mode is:
②-4a, if the coding mode of the current macroblock is Intra16 × 16, predict the pixel value of each pixel point in the current macroblock using the luminance prediction mode of the current macroblock, and then take the predicted value of each pixel point as the finally recovered pixel value of the corresponding pixel point, completing the recovery of the current macroblock.
②-4b, if the coding mode of the current macroblock is Intra4 × 4, predict the pixel value of each pixel point in each 4 × 4 block of the 8 × 8 block where the 4 × 4 block numbered 0 in the current macroblock is located by using the luminance prediction mode of that 4 × 4 block, and then take the predicted value of each pixel point in each 4 × 4 block as the finally recovered pixel value of the corresponding pixel point in that 4 × 4 block.
Predict the pixel value of each pixel point in each 4 × 4 block of the 8 × 8 block where the 4 × 4 block numbered 4 in the current macroblock is located by using the luminance prediction mode of that 4 × 4 block, and then take the predicted value of each pixel point in each 4 × 4 block as the finally recovered pixel value of the corresponding pixel point in that 4 × 4 block.
Predicting the pixel value of each pixel point in each 4 × 4 block in an 8 × 8 block where the 4 × 4 block is located by using a brightness prediction mode of the 4 × 4 block with the number of 8 in the current macroblock, and then taking the predicted value of each pixel point in each 4 × 4 block as the finally recovered pixel value of each pixel point in the corresponding 4 × 4 block.
Predicting the pixel value of each pixel point in each 4 × 4 block in an 8 × 8 block where the 4 × 4 block is located by using the brightness prediction mode of the 4 × 4 block with the number of 12 in the current macroblock, and then taking the predicted value of each pixel point in each 4 × 4 block as the finally recovered pixel value of each pixel point in the corresponding 4 × 4 block.
②-5, judge whether the current macroblock is a correctly or incorrectly decoded block: if it is a correctly decoded block, do not process it, and the recovery of all incorrectly decoded blocks in the decoded I frame is complete; if it is an incorrectly decoded block, recover it using the existing bilinear interpolation method, completing the recovery of all incorrectly decoded blocks in the decoded I frame.
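The "existing bilinear interpolation method" is prior art and is not specified in the text; one common formulation, in which each missing pixel is a distance-weighted average of the nearest boundary pixels of the neighbouring macroblocks, can be sketched as follows (Python; the exact weighting is an assumption for illustration, not the patent's definition):

```python
def bilinear_conceal(left, right, top, bottom, n=16):
    """Conceal an n x n macroblock from the adjacent pixel columns/rows
    of its neighbours: left/right are columns, top/bottom are rows,
    each of length n."""
    mb = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            dl, dr, dt, db = c + 1, n - c, r + 1, n - r
            # each boundary pixel is weighted by the distance to the
            # opposite boundary, so nearer neighbours dominate
            num = left[r] * dr + right[r] * dl + top[c] * db + bottom[c] * dt
            mb[r][c] = num // (dl + dr + dt + db)
    return mb
```

With four constant boundaries the interpolation reproduces that constant, which is a quick sanity check of the weighting.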
And 6, enabling k to be k +1, taking the next macroblock to be processed in the decoded I frame as a current macroblock, and then returning to the step 3 to continue execution, wherein the value of k to be k +1 is an assignment symbol.
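The bilinear interpolation fallback invoked above can be sketched as follows. This is a minimal illustrative stand-in, not the exact JM weighting: each pixel of a lost 16 × 16 macroblock is estimated from the four nearest boundary pixels of the neighbouring macroblocks (one per side), weighted by the inverse of its distance.

```python
def bilinear_conceal(left, right, top, bottom):
    """Conceal a lost 16x16 macroblock from the boundary pixels of its four
    neighbours: `left`/`right` are 16-entry columns, `top`/`bottom` 16-entry
    rows.  Illustrative inverse-distance weighting, not the exact JM scheme."""
    N = 16
    mb = [[0] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            # distances from pixel (r, c) to the four boundaries
            dl, dr, dt, db = c + 1, N - c, r + 1, N - r
            num = left[r] / dl + right[r] / dr + top[c] / dt + bottom[c] / db
            den = 1 / dl + 1 / dr + 1 / dt + 1 / db
            mb[r][c] = int(round(num / den))
    return mb
```

With identical boundary values the interpolation reproduces that value everywhere, which is the expected degenerate behaviour of any spatial concealment scheme.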
The method selects the H.264/AVC test model JM-12.0 for the simulation experiments. In the simulations, the H.264/AVC Baseline profile is used to encode the video sequences Foreman, Carphone, Container, Akiyo, Silent and Mobile in the standard QCIF format (176 × 144). To verify the effectiveness of the method of the invention in the presence of scene changes, a mixed sequence synthesized from Akiyo, Bridge-close and Carphone is used to simulate scene changes. In the two groups of test experiments, at loss rates of 10% and 20%, the method of the invention is compared with the intra-frame recovery method built into the test model JM-12.0 (the JM method) and with the existing error recovery algorithm based on histogram-shifting reversible information hiding proposed by Chung et al. (the RH method). Table 1 lists the basic coding parameter settings.
Table 1 Coding parameter settings
Profile: Baseline
Coding structure: IPPPP
Number of encoded frames: 150
Frame rate (frames/second): 15
Entropy coding method: CAVLC
To verify the error recovery effect of the method, particularly the quality of video error recovery in the presence of scene changes, two sets of experiments were performed. In the first set, the standard video test sequences Foreman, Carphone, Container, Akiyo, Silent and Mobile are encoded and tested at packet loss rates of 10% and 20%, and the subjective and objective quality of the I-frame images processed by the method of the invention is compared with that of the I-frame images processed by the existing JM and RH methods. The second set simulates a scene change: the synthesized sequence is encoded and likewise tested at packet loss rates of 10% and 20%, again comparing the subjective and objective quality of the I-frame images processed by the method of the invention with that obtained by the existing JM and RH methods.
Fig. 3a to fig. 3f respectively compare the objective quality of error recovery after processing by the method of the present invention and by the existing JM and RH methods, at different macroblock loss rates for an I frame of the standard test sequences Foreman, Carphone, Container, Akiyo, Silent and Mobile, with the coding quantization parameter QP = 28. As can be seen from fig. 3a to 3f, at loss rates of 10% and 20%, the objective quality of error recovery using the method of the present invention is higher than that of the conventional JM method by 3.24 dB and 2.79 dB on average, respectively.
Fig. 4a to fig. 4f respectively compare the objective quality of error recovery after processing by the method of the present invention and by the existing JM and RH methods, at different macroblock loss rates for an I frame of the standard test sequences Foreman, Carphone, Container, Akiyo, Silent and Mobile, with the coding quantization parameter QP = 38.
Fig. 5a shows the original image of frame 20 (an I frame) of the Carphone standard test sequence, fig. 5b shows the image of fig. 5a after block loss (block loss rate of 20%), fig. 5c shows the image of fig. 5b after recovery by the conventional JM method, fig. 5d shows the image of fig. 5b after recovery by the conventional RH method, and fig. 5e shows the image of fig. 5b after recovery by the method of the present invention. As can be seen from fig. 5c to fig. 5e, the weighted interpolation used by the JM platform easily blurs the lost macroblock regions, and the image restored by the RH method easily exhibits blocking artifacts.
Fig. 6a and fig. 6b respectively compare the objective quality of error recovery after processing by the method of the present invention and by the existing JM and RH methods, at different macroblock loss rates for the first 150 frames of a sequence synthesized from the three standard test sequences Akiyo, Carphone and Bridge-close at 30-frame intervals, with coding quantization parameters QP = 28 and QP = 38. As can be seen from fig. 6a and 6b, the objective recovery quality of the method of the present invention in the presence of scene changes is significantly higher than that of the existing JM and RH methods. With QP = 28 and a macroblock loss rate of 10%, the objective recovery quality of the method is 2.14 dB higher than that of the existing JM method; with QP = 28 and a macroblock loss rate of 20%, it is 0.93 dB higher. Compared with the existing RH method, the objective recovery quality of the method is about 1 dB higher on average. This is because the RH method characterizes a macroblock by the motion vector between two I frames, so when a scene change occurs it wrongly introduces image blocks from the previous I frame's scene into the current frame, which inevitably lowers the PSNR and degrades the reconstruction quality.
Fig. 7a shows the original of the 29th frame of the synthesized sequence, fig. 7b shows the original of the 30th frame (an I frame) of the synthesized sequence, fig. 7c shows the 30th frame of the synthesized sequence after block loss (block loss rate of 20%), fig. 7d shows the subjective quality map after recovery of fig. 7c by the conventional JM method, fig. 7e shows the subjective quality map after recovery of fig. 7c by the conventional RH method, and fig. 7f shows the subjective quality map after recovery of fig. 7c by the method of the present invention. From the subjective quality maps shown in fig. 7d to 7f it is readily seen that the method of the present invention keeps the degradation of reconstruction quality gentle when a scene change occurs, and avoids blocking artifacts, image blur and the like. Taken together, analysis of the simulation results shows that when a scene change exists, the objective and subjective quality of the video reconstructed by the method is clearly better than that of the video reconstructed by the existing JM and RH methods; the method also achieves good results when no scene change exists.

Claims (1)

1. An I frame error recovery method of H.264/AVC video based on reversible data hiding, which is characterized by comprising the following steps:
firstly, embedding characteristic information into each macro block except the 1 st macro block in the I frame at an encoding end to obtain the I frame embedded with the characteristic information, wherein the specific process comprises the following steps:
①-1, defining the kth pre-coded macroblock in the I frame as the current macroblock, 1 ≤ k ≤ K, where K denotes the total number of macroblocks contained in the I frame and the initial value of k is 1;
①-2, converting the decimal CBP value of the current macroblock into a binary string of 6 binary bits, denoted A, A = a1a2a3a4a5a6, where a1 is the most significant binary bit of A and a6 the least significant binary bit of A;
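Step ①-2 is a plain decimal-to-binary conversion, the 6-bit width matching the CBP bits carried by an H.264 macroblock; a minimal sketch:

```python
def cbp_to_bits(cbp: int) -> str:
    """Convert a decimal CBP value (0..63) to the 6-bit binary string
    A = a1 a2 a3 a4 a5 a6, with a1 the most significant bit."""
    if not 0 <= cbp <= 63:
        raise ValueError("CBP must fit in 6 bits")
    return format(cbp, "06b")

def bits_to_cbp(bits: str) -> int:
    """Inverse conversion, as used in step ①-6b."""
    return int(bits, 2)
```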
①-3, extracting the feature vector of the current macroblock: if the coding mode of the current macroblock is Intra4 × 4, extracting the digital identifiers of the luma prediction modes of the four 4 × 4 blocks numbered 0, 4, 8 and 12 in the current macroblock, quantizing the digital identifier of the luma prediction mode of each of the four 4 × 4 blocks to 4 bits, arranging the bit representations of the identifiers in the order of the four blocks' numbers within the current macroblock to form a one-dimensional vector of 16 elements, and taking this one-dimensional vector as the feature vector of the current macroblock, denoted wIntra4×4(k) = (w1 w2 … w16), where w1, w2 and w16 correspondingly denote the 1st, 2nd and 16th elements of the feature vector wIntra4×4(k);
If the coding mode of the current macroblock is Intra16 × 16, marking the coding mode of the current macroblock with the digital identifier 9, quantizing this identifier to 4 bits, quantizing the digital identifier of the luma prediction mode of the current macroblock to 4 bits, arranging the bit representation of the coding-mode identifier followed by the bit representation of the luma-prediction-mode identifier to form a one-dimensional vector of 8 elements, and taking this one-dimensional vector as the feature vector of the current macroblock, denoted wIntra16×16(k) = (w1 w2 … w8), where w1, w2 and w8 correspondingly denote the 1st, 2nd and 8th elements of the feature vector wIntra16×16(k);
①-4, determining the host vector of the current macroblock: calculating, for each 4 × 4 block in the current macroblock, the sum of the absolute values of all its alternating-current (AC) DCT coefficients, taking the 4 × 4 block with the largest sum as the feature information embedding block, scanning all quantized DCT coefficients in the feature information embedding block in zig-zag order, arranging the 8th to 16th quantized DCT coefficients of the feature information embedding block in the zig-zag scanning order to form a one-dimensional vector of 9 elements, and taking this one-dimensional vector as the host vector of the current macroblock, denoted x(k), x(k) = (x1 x2 … x9), where x1, x2 and x9 correspondingly denote the 1st, 2nd and 9th elements of the host vector x(k) of the current macroblock;
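A sketch of the host-vector selection in step ①-4, assuming the usual 4 × 4 zig-zag scan; `blocks` is a hypothetical list of the sixteen 4 × 4 quantized-coefficient matrices of a macroblock:

```python
# zig-zag scan order for a 4x4 block, as a list of (row, col) pairs
ZIGZAG_4x4 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
              (2, 1), (3, 0), (3, 1), (2, 2), (1, 3), (2, 3), (3, 2), (3, 3)]

def host_vector(blocks):
    """blocks: list of 16 4x4 integer matrices of quantized DCT coefficients.
    Returns (index of the embedding block, the 9-element host vector x(k))."""
    def ac_energy(b):
        # sum of |AC coefficients|: every zig-zag position except the DC term
        return sum(abs(b[r][c]) for (r, c) in ZIGZAG_4x4[1:])
    idx = max(range(len(blocks)), key=lambda i: ac_energy(blocks[i]))
    scan = [blocks[idx][r][c] for (r, c) in ZIGZAG_4x4]
    return idx, scan[7:16]  # 8th..16th coefficients (1-based) -> 9 elements
```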
①-5, embedding the feature vector of the previous macroblock of the current macroblock into the host vector of the current macroblock: judging whether the current macroblock is the 1st macroblock in the I frame; if so, leaving the current macroblock unprocessed and going directly to step ①-7; otherwise, if the coding mode of the previous macroblock of the current macroblock is Intra4 × 4, embedding the feature vector wIntra4×4(k−1) of the previous macroblock into the host vector x(k) of the current macroblock by the two-pass generalized difference extension method to obtain the vector with embedded feature information corresponding to the current macroblock, sequentially replacing the 8th to 16th quantized DCT coefficients of the feature information embedding block in the current macroblock with all of its elements, and then performing step ①-6; if the coding mode of the previous macroblock of the current macroblock is Intra16 × 16, embedding the feature vector wIntra16×16(k−1) of the previous macroblock into the host vector x(k) of the current macroblock by the one-pass generalized difference extension method to obtain the vector with embedded feature information corresponding to the current macroblock, sequentially replacing the 8th to 16th quantized DCT coefficients of the feature information embedding block in the current macroblock with all of its elements, and then performing step ①-6;
In step ①-5, the specific process of embedding the feature vector wIntra4×4(k−1) of the previous macroblock of the current macroblock into the host vector x(k) of the current macroblock by the two-pass generalized difference extension is:
a1, forward-transforming x(k) to obtain the vector y(k), y(k) = (y1 y2 … y9), where y1, y2 and y9 correspondingly denote the 1st, 2nd and 9th elements of y(k); y1 = ⌊(α1x1 + α2x2 + … + α9x9)/(α1 + α2 + … + α9)⌋, y2 = x2 − x1, yi′ = xi′ − x1, y9 = x9 − x1, 2 ≤ i′ ≤ 9, αi is a weight, and the symbol ⌊ ⌋ denotes rounding down;
a2, embedding the 1st to 8th elements of wIntra4×4(k−1) into y(k) to obtain the vector u(k), u(k) = (u1 u2 … u9), where u1, u2 and u9 correspondingly denote the 1st, 2nd and 9th elements of u(k); u1 = y1, ui′ = 2yi′ + wi′−1, 2 ≤ i′ ≤ 9, where w1, wi′−1 and w8 correspondingly denote the 1st, (i′−1)th and 8th elements of wIntra4×4(k−1);
a3, inverse-transforming u(k) to obtain the vector with partial feature information embedded, v(k), v(k) = (v1 v2 … v9), where v1, v2 and v9 correspondingly denote the 1st, 2nd and 9th elements of v(k); v1 = u1 − ⌊(α2u2 + α3u3 + … + α9u9)/(α1 + α2 + … + α9)⌋, v2 = u2 + v1, vi′ = ui′ + v1, v9 = u9 + v1, 2 ≤ i′ ≤ 9;
a4, forward-transforming v(k) to obtain the vector p(k), p(k) = (p1 p2 … p9), where p1, p2 and p9 correspondingly denote the 1st, 2nd and 9th elements of p(k); p1 = ⌊(α1v1 + α2v2 + … + α9v9)/(α1 + α2 + … + α9)⌋, p2 = v2 − v1, pi′ = vi′ − v1, p9 = v9 − v1, 2 ≤ i′ ≤ 9, αi is a weight;
a5, embedding the 9th to 16th elements of wIntra4×4(k−1) into p(k) to obtain the vector q(k), q(k) = (q1 q2 … q9), where q1, q2 and q9 correspondingly denote the 1st, 2nd and 9th elements of q(k); q1 = p1, qi′ = 2pi′ + wi′+7, 2 ≤ i′ ≤ 9, where w9, wi′+7 and w16 correspondingly denote the 9th, (i′+7)th and 16th elements of wIntra4×4(k−1);
a6, inverse-transforming q(k) to obtain the vector with embedded feature information corresponding to the current macroblock, denoted xw(k), xw(k) = (xw1 xw2 … xw9), where xw1, xw2 and xw9 correspondingly denote the 1st, 2nd and 9th elements of xw(k); xw1 = q1 − ⌊(α2q2 + α3q3 + … + α9q9)/(α1 + α2 + … + α9)⌋, xw2 = q2 + xw1, xwi′ = qi′ + xw1, xw9 = q9 + xw1, 2 ≤ i′ ≤ 9;
In step ①-5, the specific process of embedding the feature vector wIntra16×16(k−1) of the previous macroblock of the current macroblock into the host vector x(k) of the current macroblock by the one-pass generalized difference extension is:
b1, forward-transforming x(k) to obtain the vector y(k), y(k) = (y1 y2 … y9), where y1, y2 and y9 correspondingly denote the 1st, 2nd and 9th elements of y(k); y1 = ⌊(α1x1 + α2x2 + … + α9x9)/(α1 + α2 + … + α9)⌋, y2 = x2 − x1, yi′ = xi′ − x1, y9 = x9 − x1, 2 ≤ i′ ≤ 9, αi is a weight, and the symbol ⌊ ⌋ denotes rounding down;
b2, embedding wIntra16×16(k−1) into y(k) to obtain the vector u(k), u(k) = (u1 u2 … u9), where u1, u2 and u9 correspondingly denote the 1st, 2nd and 9th elements of u(k); u1 = y1, ui′ = 2yi′ + wi′−1, 2 ≤ i′ ≤ 9, where w1, wi′−1 and w8 correspondingly denote the 1st, (i′−1)th and 8th elements of wIntra16×16(k−1);
b3, inverse-transforming u(k) to obtain the vector with embedded feature information corresponding to the current macroblock, denoted xw(k), xw(k) = (xw1 xw2 … xw9), where xw1, xw2 and xw9 correspondingly denote the 1st, 2nd and 9th elements of xw(k); xw1 = u1 − ⌊(α2u2 + α3u3 + … + α9u9)/(α1 + α2 + … + α9)⌋, xw2 = u2 + xw1, xwi′ = ui′ + xw1, xw9 = u9 + xw1, 2 ≤ i′ ≤ 9;
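The two extension variants above can be sketched with a standard Alattar-style generalized difference expansion. The exact formulas for the first transformed element and for the inverse are assumptions here (the patent's formula images did not survive), chosen so that the forward and inverse transforms are exact inverses and extraction restores both the 8 bits and the original host vector. Applying `embed` twice with 8 bits each mirrors the two-pass Intra4 × 4 case; applying it once mirrors the one-pass Intra16 × 16 case.

```python
from math import floor

def forward(x, alpha):
    """Generalized difference transform: weighted-mean first element,
    differences against x1 for the rest (assumed Alattar-style form)."""
    s = sum(alpha)
    y1 = floor(sum(a * v for a, v in zip(alpha, x)) / s)
    return [y1] + [xi - x[0] for xi in x[1:]]

def inverse(y, alpha):
    """Exact inverse of forward() for integer inputs."""
    s = sum(alpha)
    x1 = y[0] - floor(sum(a * v for a, v in zip(alpha[1:], y[1:])) / s)
    return [x1] + [yi + x1 for yi in y[1:]]

def embed(x, bits, alpha):
    """Embed 8 bits into a 9-element host vector by difference expansion:
    each difference is doubled and one bit appended."""
    assert len(x) == 9 and len(bits) == 8
    y = forward(x, alpha)
    y_marked = [y[0]] + [2 * d + b for d, b in zip(y[1:], bits)]
    return inverse(y_marked, alpha)

def extract(xw, alpha):
    """Recover the 8 bits and the original host vector."""
    y = forward(xw, alpha)
    bits = [abs(d) % 2 for d in y[1:]]
    y_orig = [y[0]] + [d // 2 for d in y[1:]]  # floor-halve expanded diffs
    return bits, inverse(y_orig, alpha)
```

Note that this sketch omits the capacity/overflow checks a real embedder would need so that the modified quantized coefficients stay within legal range.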
①-6, modifying the CBP of the current macroblock, and then performing step ①-7;
The specific process of modifying the CBP of the current macroblock in step ①-6 is as follows:
①-6a, if the 8 × 8 block containing the feature information embedding block in the current macroblock is the 8 × 8 block at the top-left position of the current macroblock, checking whether all quantized DCT coefficients in that top-left 8 × 8 block after feature information embedding are 0; if all are 0, setting a6 in A to 0 and keeping a3, a4, a5 unchanged; if not all are 0, setting a6 in A to 1 and keeping a3, a4, a5 unchanged;
If the 8 × 8 block containing the feature information embedding block in the current macroblock is the 8 × 8 block at the top-right position of the current macroblock, checking whether all quantized DCT coefficients in that top-right 8 × 8 block after feature information embedding are 0; if all are 0, setting a5 in A to 0 and keeping a3, a4, a6 unchanged; if not all are 0, setting a5 in A to 1 and keeping a3, a4, a6 unchanged;
If the 8 × 8 block containing the feature information embedding block in the current macroblock is the 8 × 8 block at the bottom-left position of the current macroblock, checking whether all quantized DCT coefficients in that bottom-left 8 × 8 block after feature information embedding are 0; if all are 0, setting a4 in A to 0 and keeping a3, a5, a6 unchanged; if not all are 0, setting a4 in A to 1 and keeping a3, a5, a6 unchanged;
If the 8 × 8 block containing the feature information embedding block in the current macroblock is the 8 × 8 block at the bottom-right position of the current macroblock, checking whether all quantized DCT coefficients in that bottom-right 8 × 8 block after feature information embedding are 0; if all are 0, setting a3 in A to 0 and keeping a4, a5, a6 unchanged; if not all are 0, setting a3 in A to 1 and keeping a4, a5, a6 unchanged;
①-6b, converting the modified A back into a decimal number to obtain the modified CBP of the current macroblock;
①-7, entropy-coding the current macroblock;
①-8, letting k = k + 1, taking the next pre-coded macroblock in the I frame as the current macroblock, and then returning to step ①-2 to continue until all pre-coded macroblocks in the I frame have been processed, yielding the I-frame coded stream with embedded feature information, where the "=" in k = k + 1 is an assignment operator;
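The bit bookkeeping of steps ①-6a/①-6b reduces to setting or clearing one luma CBP bit; a sketch, assuming the mapping described above in which a6 (the least significant bit of A) flags the top-left 8 × 8 block, a5 the top-right, a4 the bottom-left, and a3 the bottom-right:

```python
def modify_cbp(cbp: int, pos: int, all_zero: bool) -> int:
    """Update the luma CBP bit for the 8x8 block holding the embedding block.
    pos: 0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right,
    mapped to bits a6, a5, a4, a3 of A = a1..a6 (a6 is the LSB)."""
    bit = pos  # a6 is bit 0, a5 bit 1, a4 bit 2, a3 bit 3
    if all_zero:
        return cbp & ~(1 << bit)  # coded-block flag cleared
    return cbp | (1 << bit)       # coded-block flag set
```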
at a decoding end, extracting characteristic information from each macro block in an I frame embedded with the characteristic information, determining a coding mode and a brightness prediction mode of each macro block, and then performing error recovery on an incorrect decoding block, wherein the specific process comprises the following steps:
②-1, entropy-decoding each macroblock in the I frame with embedded feature information, determining whether each entropy-decoded macroblock is a correctly decoded block, then determining, for each correctly decoded block other than the 1st entropy-decoded macroblock, its vector containing the feature information, extracting the feature information from that vector, and determining the coding mode and luma prediction mode of the previous macroblock of each such correctly decoded block. If the k′th entropy-decoded macroblock is a correctly decoded block, 2 ≤ k′ ≤ K, the specific process of determining its vector containing the feature information is: calculating, for each 4 × 4 block in the macroblock, the sum of the absolute values of all its quantized DCT coefficients that become AC DCT coefficients after inverse quantization; taking the 4 × 4 block with the largest sum as the feature information extraction block; scanning all quantized DCT coefficients in the feature information extraction block in zig-zag order; arranging the 8th to 16th quantized DCT coefficients of the feature information extraction block in the zig-zag scanning order to form a one-dimensional vector of 9 elements; and taking this one-dimensional vector as the vector containing the feature information of the macroblock, denoted e(k′), e(k′) = (e1 e2 … e9), where e1, e2 and e9 correspondingly denote the 1st, 2nd and 9th elements of e(k′);
Then extracting the feature information from the macroblock's vector containing the feature information, and determining the coding mode and luma prediction mode of the previous macroblock of the macroblock;
the step ② -1 includes extracting the vector containing the feature information from the macroblockThe specific process of extracting the feature information and determining the coding mode and the brightness prediction mode of the last macro block of the macro block comprises the following steps:
②-1a, denoting the macroblock's vector containing the feature information by e(k′), e(k′) = (e1 e2 … e9), and forward-transforming it to obtain the vector d(k′), d(k′) = (d1 d2 … d9), where d1, d2 and d9 correspondingly denote the 1st, 2nd and 9th elements of d(k′); d1 = ⌊(α1e1 + α2e2 + … + α9e9)/(α1 + α2 + … + α9)⌋, d2 = e2 − e1, di′ = ei′ − e1, d9 = e9 − e1, 2 ≤ i′ ≤ 9, αi is a weight, and the symbol ⌊ ⌋ denotes rounding down;
②-1b, extracting from d(k′) the feature information consisting of 8 feature information bits, denoted b(k′), b(k′) = (b1 b2 … b8), where b1, b2 and b8 correspondingly denote the 1st, 2nd and 8th elements of b(k′); bj = |dj+1| mod 2, 1 ≤ j ≤ 8, where the symbol "| |" denotes taking the absolute value;
②-1c, constructing a new vector z(k′), z(k′) = (z1 z2 … z9), where z1, z2 and z9 correspondingly denote the 1st, 2nd and 9th elements of z(k′); z1 = d1, zi′ = ⌊di′/2⌋, 2 ≤ i′ ≤ 9;
②-1d, inverse-transforming z(k′) to obtain the vector f(k′), f(k′) = (f1 f2 … f9), where f1, f2 and f9 correspondingly denote the 1st, 2nd and 9th elements of f(k′); f1 = z1 − ⌊(α2z2 + α3z3 + … + α9z9)/(α1 + α2 + … + α9)⌋, f2 = z2 + f1, fi′ = zi′ + f1, f9 = z9 + f1, 2 ≤ i′ ≤ 9;
②-1e, converting the binary string composed of the 1st element b1, the 2nd element b2, the 3rd element b3 and the 4th element b4 of b(k′) into a decimal value; if the decimal value corresponds to the number 9, the coding mode of the previous macroblock of the macroblock is Intra16 × 16; then sequentially replacing the 8th to 16th quantized DCT coefficients of the feature information extraction block in the previous macroblock of the macroblock with all elements of f(k′), and determining the luma prediction mode of the previous macroblock of the macroblock, whose digital identifier is the decimal value converted from the binary string composed of the 5th element b5, the 6th element b6, the 7th element b7 and the 8th element b8 of b(k′); this completes the feature information extraction and the determination of the coding mode and luma prediction mode;
If the decimal value does not correspond to the number 9, the coding mode of the previous macroblock of the macroblock is Intra4 × 4, and step ②-1f is then performed;
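The branch in step ②-1e hinges on the fact that luma-mode identifiers never exceed 8, so a leading 4-bit group equal to 9 can only be the Intra16 × 16 coding-mode marker; a sketch (the function name and return convention are illustrative):

```python
def classify_extracted_byte(bits8):
    """Interpret the 8 bits recovered by the first extraction pass:
    if the leading 4-bit group decodes to 9, the previous macroblock was
    coded Intra16x16 and the trailing 4 bits give its luma prediction
    mode; otherwise the mode is Intra4x4 and a second pass is needed."""
    assert len(bits8) == 8
    head = int("".join(map(str, bits8[:4])), 2)
    tail = int("".join(map(str, bits8[4:])), 2)
    if head == 9:
        return ("Intra16x16", tail)
    return ("Intra4x4", None)
```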
②-1f, forward-transforming f(k′) to obtain the vector g(k′), g(k′) = (g1 g2 … g9), where g1, g2 and g9 correspondingly denote the 1st, 2nd and 9th elements of g(k′); g1 = ⌊(α1f1 + α2f2 + … + α9f9)/(α1 + α2 + … + α9)⌋, g2 = f2 − f1, gi′ = fi′ − f1, g9 = f9 − f1, 2 ≤ i′ ≤ 9, αi is a weight, and the symbol ⌊ ⌋ denotes rounding down;
②-1g, extracting from g(k′) the feature information consisting of 8 feature information bits, denoted b′(k′), b′(k′) = (b′1 b′2 … b′8), where b′1, b′2 and b′8 correspondingly denote the 1st, 2nd and 8th elements of b′(k′); b′j = |gj+1| mod 2, 1 ≤ j ≤ 8, where the symbol "| |" denotes taking the absolute value;
②-1h, constructing a new vector z′(k′), z′(k′) = (z1′ z2′ … z9′), where z1′, z2′ and z9′ correspondingly denote the 1st, 2nd and 9th elements of z′(k′); z1′ = g1, zi′′ = ⌊gi′/2⌋, 2 ≤ i′ ≤ 9;
②-1i, inverse-transforming z′(k′) to obtain the vector x(k′), x(k′) = (x1 x2 … x9), where x1, x2 and x9 correspondingly denote the 1st, 2nd and 9th elements of x(k′); x1 = z1′ − ⌊(α2z2′ + α3z3′ + … + α9z9′)/(α1 + α2 + … + α9)⌋, x2 = z2′ + x1, xi′ = zi′′ + x1, x9 = z9′ + x1, 2 ≤ i′ ≤ 9;
②-1j, sequentially replacing the 8th to 16th quantized DCT coefficients in the feature information extraction block in the previous macroblock of the macroblock with all elements of x(k′);
②-1k, constructing a vector w(k′) containing 16 elements, whose first 8 elements are the 8 feature information bits extracted in step ②-1g and whose last 8 elements are the 8 feature information bits extracted in step ②-1b;
②-1l, determining the luma prediction mode of the 4 × 4 block numbered 0 in the previous macroblock of the macroblock: its digital identifier is the decimal value converted from the binary string composed of the 1st, 2nd, 3rd and 4th elements of w(k′);
Determining the luma prediction mode of the 4 × 4 block numbered 4 in the previous macroblock of the macroblock: its digital identifier is the decimal value converted from the binary string composed of the 5th, 6th, 7th and 8th elements of w(k′);
Determining the luma prediction mode of the 4 × 4 block numbered 8 in the previous macroblock of the macroblock: its digital identifier is the decimal value converted from the binary string composed of the 9th, 10th, 11th and 12th elements of w(k′);
Determining the luma prediction mode of the 4 × 4 block numbered 12 in the previous macroblock of the macroblock: its digital identifier is the decimal value converted from the binary string composed of the 13th, 14th, 15th and 16th elements of w(k′);
This completes the feature information extraction and the determination of the coding mode and luma prediction mode;
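The mode reconstruction in step ②-1l and the three paragraphs that follow is a fixed 4-bit unpacking of the 16-element vector w(k′); a sketch (the dictionary return type is illustrative):

```python
def decode_intra4x4_modes(w):
    """Split the 16 recovered bits w(k') into four 4-bit groups and convert
    each to the digital identifier of the luma prediction mode of the
    4x4 blocks numbered 0, 4, 8 and 12."""
    assert len(w) == 16 and all(b in (0, 1) for b in w)
    modes = {}
    for n, block_no in enumerate((0, 4, 8, 12)):
        group = w[4 * n:4 * n + 4]
        modes[block_no] = int("".join(map(str, group)), 2)
    return modes
```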
②-2, defining the kth macroblock in the decoded I frame as the current macroblock, 1 ≤ k ≤ K, with the initial value of k being 1;
②-3, judging whether the current macroblock is the last macroblock in the decoded I frame; if not, performing step ②-4; if so, performing step ②-5;
②-4, judging whether the current macroblock is a correctly decoded block or an incorrectly decoded block; if it is a correctly decoded block, leaving it unprocessed and then performing step ②-6;
If the current macroblock is an incorrectly decoded block, judging whether the next macroblock of the current macroblock is a correctly decoded block or an incorrectly decoded block; if the next macroblock is a correctly decoded block, recovering the current macroblock using the coding mode and luma prediction mode of the current macroblock, and then performing step ②-6; if the next macroblock is an incorrectly decoded block, recovering the current macroblock by bilinear interpolation;
The specific process of recovering the current macroblock using its coding mode and luma prediction mode in step ②-4 is:
②-4a, if the coding mode of the current macroblock is Intra16 × 16, predicting the pixel value of each pixel point in the current macroblock using the luma prediction mode of the current macroblock, and then taking the predicted value of each pixel point as the finally recovered pixel value of the corresponding pixel point, completing the recovery of the current macroblock;
②-4b, if the coding mode of the current macroblock is Intra4 × 4, predicting the pixel value of each pixel point of every 4 × 4 block in the 8 × 8 block containing the 4 × 4 block numbered 0 in the current macroblock by using the luma prediction mode of that 4 × 4 block, and then taking the predicted value of each pixel point in each 4 × 4 block as the finally recovered pixel value of the corresponding pixel point;
Predicting the pixel value of each pixel point of every 4 × 4 block in the 8 × 8 block containing the 4 × 4 block numbered 4 in the current macroblock by using the luma prediction mode of that 4 × 4 block, and then taking the predicted value of each pixel point in each 4 × 4 block as the finally recovered pixel value of the corresponding pixel point;
Predicting the pixel value of each pixel point of every 4 × 4 block in the 8 × 8 block containing the 4 × 4 block numbered 8 in the current macroblock by using the luma prediction mode of that 4 × 4 block, and then taking the predicted value of each pixel point in each 4 × 4 block as the finally recovered pixel value of the corresponding pixel point;
Predicting the pixel value of each pixel point of every 4 × 4 block in the 8 × 8 block containing the 4 × 4 block numbered 12 in the current macroblock by using the luma prediction mode of that 4 × 4 block, and then taking the predicted value of each pixel point in each 4 × 4 block as the finally recovered pixel value of the corresponding pixel point;
②-5, judging whether the current macroblock is a correctly decoded block or an incorrectly decoded block; if it is a correctly decoded block, leaving it unprocessed, which completes the recovery of all incorrectly decoded blocks in the decoded I frame; if it is an incorrectly decoded block, recovering it by bilinear interpolation, which completes the recovery of all incorrectly decoded blocks in the decoded I frame;
②-6, letting k = k + 1, taking the next macroblock to be processed in the decoded I frame as the current macroblock, and then returning to step ②-3 to continue, where the "=" in k = k + 1 is an assignment operator.
CN201410287578.XA 2014-06-24 2014-06-24 A kind of H.264/AVC video I frame error recovery methods based on hiding reversible data Active CN104144347B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410287578.XA CN104144347B (en) 2014-06-24 2014-06-24 A kind of H.264/AVC video I frame error recovery methods based on hiding reversible data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410287578.XA CN104144347B (en) 2014-06-24 2014-06-24 A kind of H.264/AVC video I frame error recovery methods based on hiding reversible data

Publications (2)

Publication Number Publication Date
CN104144347A CN104144347A (en) 2014-11-12
CN104144347B true CN104144347B (en) 2017-12-15

Family

ID=51853404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410287578.XA Active CN104144347B (en) 2014-06-24 2014-06-24 A kind of H.264/AVC video I frame error recovery methods based on hiding reversible data

Country Status (1)

Country Link
CN (1) CN104144347B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108337514A (en) * 2017-12-28 2018-07-27 宁波工程学院 A kind of encrypted domain HEVC video data hidden methods
CN108683921B (en) * 2018-06-07 2020-04-07 四川大学 Video reversible information hiding method based on zero quantization DCT coefficient group

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101621692A (en) * 2009-07-27 2010-01-06 宁波大学 H.264/AVC video information hiding method based on predictive mode
CN102223540A (en) * 2011-07-01 2011-10-19 宁波大学 Information hiding method facing to H.264/AVC (automatic volume control) video

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN101621692A (en) * 2009-07-27 2010-01-06 宁波大学 H.264/AVC video information hiding method based on predictive mode
CN102223540A (en) * 2011-07-01 2011-10-19 宁波大学 Information hiding method facing to H.264/AVC (automatic volume control) video

Non-Patent Citations (2)

Title
An Intra-frame Error Resilience Algorithm Based on Reversible Data Embedding in H.264/AVC; Ranran Li et al.; Journal of Computational Information Systems; 2014-02-15; vol. 10; pp. 1489-1499 *
Video Error Resilience Scheme using Reversible Data Hiding Technique for Intra-Frame in H.264/AVC; Ranran Li et al.; The 3rd International Conference on Multimedia Technology (ICMT 2013); 2013-12-31; pp. 462-469 *

Similar Documents

Publication Publication Date Title
CN108028919B (en) Video or image coding and decoding method and device
CN107197260B (en) Video coding post-filter method based on convolutional neural networks
CN101267563B (en) Adaptive variable-length coding
CN1332563C (en) Coding method of video frequency image jump over macro block
CN101711481B (en) Method and apparatus for video coding using prediction data refinement
CN101584218B (en) Method and apparatus for encoding and decoding based on intra prediction
CN107710759A (en) Method and device for the conversion coefficient encoding and decoding of non-square block
CN114786019B (en) Image prediction method, encoder, decoder, and storage medium
KR100612691B1 (en) Systems and Methods for Measurement of Video Quality
EP2018070A1 (en) Method for processing images and the corresponding electronic device
CN105284112A (en) Method and apparatus for determining a value of a quantization parameter
KR101038531B1 (en) Apparatus and method for encoding image capable of parallel processing in decoding and apparatus and method for decoding image capable of parallel processing
US20170374361A1 (en) Method and System Of Controlling A Video Content System
CN102026001B (en) Method for evaluating importance of video frame based on motion information
CN104363461B (en) The error concealing method of frame of video and apply its video encoding/decoding method
CN102256130B (en) Method for marking video frame image sequence number based on inserted macro block brightness particular values
CN104144347B (en) A kind of H.264/AVC video I frame error recovery methods based on hiding reversible data
CN102333223A (en) Video data coding method, decoding method, coding system and decoding system
CN104104956B (en) For layered video coding and the method for decoding, encoding apparatus and decoding apparatus
CN102378012A (en) Data hiding-based H.264 video transmission error code recovery method
He et al. Hybrid video coding scheme based on VVC and spatio-temporal attention convolution neural network
Carreira et al. Selective motion vector redundancies for improved error resilience in HEVC
WO2005125212A1 (en) Method for video encoding and decoding process
CN108024114B (en) High-capacity lossless HEVC information hiding method based on flag bit parameter modification
CN108024111A (en) A kind of frame type decision method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant