CN101854548B - Wireless multimedia sensor network-oriented video compression method - Google Patents
- Publication number
- CN101854548B (granted publication); CN201010182470A (application)
- Authority
- CN
- China
- Prior art keywords
- frame
- decoding
- coding
- interest
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Abstract
The invention provides a video compression method oriented to wireless multimedia sensor networks, which addresses the large data volume of video applications. The method reduces the bit rate while improving the quality of the decoded image, thereby lowering the energy consumption of sensor nodes and prolonging the life cycle of the network. An ROI discrimination algorithm strengthens the coding of regions of strenuous motion and of motion-edge regions, and a deblocking filter post-processes the decoded image to further improve its subjective quality. On the basis of a Wyner-Ziv distributed video coding scheme, regions of strenuous motion are extracted by an ROI decision criterion based on the image gradient field and compressed with Huffman coding and decoding, while the remaining regions are coded and decoded with distributed LDPC codes. The method therefore reduces the bit rate, improves decoded image quality, lowers the processing and transmission energy consumption of the nodes, realizes optimized video transmission, and prolongs the life cycle of the whole network.
Description
Technical field
The present invention is a technical scheme for multimedia data compression in wireless multimedia sensor networks (Wireless Multimedia Sensor Networks, WMSN). It is mainly used to solve the problem of video compression coding while improving the quality of the decoded image as far as possible, and belongs to the field of computer wireless communication.
Background technology
In recent years, with the development of wireless multimedia communication technology, more and more video applications have appeared, such as wireless multimedia sensor networks, mobile video telephony, wireless video surveillance, and wireless PC cameras. In a wireless multimedia sensor network, video applications must process large amounts of data, and because node computing capability and node energy are limited, conventional video coding standards are no longer suitable for wireless video scenarios. A brand-new video coding and decoding framework, distributed video coding (Distributed Video Coding, DVC), is therefore applied to wireless multimedia sensor networks.
Traditional video coding standards (such as MPEG and H.26x) adopt a hybrid coding framework: the encoder uses motion compensation to fully exploit the temporal and spatial correlation of the video sequence for predictive coding, and in general the encoder is 5 to 10 times as complex as the decoder. Distributed video coding, by contrast, has a simple encoder and a complex decoder. In addition, distributed video coding offers better robustness and higher compression efficiency, easily forms a layered coded bit stream, and is therefore suitable for wireless video scenarios that require low encoder complexity.
The more classical distributed coding and decoding schemes at present mainly include the Wyner-Ziv video coding proposed by Girod and Aaron et al. of Stanford University, the PRISM video coding proposed by Ramchandran et al. of the University of California, Berkeley, the layered Wyner-Ziv video coding proposed by Zixiang Xiong, the state-free distributed video coding proposed by Sehgal et al., distributed video coding based on wavelet coding, and multi-view distributed video coding. Wyner-Ziv distributed video coding uses two kinds of frames: key frames (Key frames) and Wyner-Ziv frames (WZ frames). Key frames are coded and decoded in the traditional intra-frame manner, while WZ frames combine intra-frame coding with inter-frame decoding. When a WZ frame is coded, a block-based DCT transform and quantization are performed first, and a Slepian-Wolf encoder then encodes the result. The parity bits generated by the encoder are stored in a buffer at the encoder and, in response to feedback requests from the decoder, are sent to the decoder for error-correcting decoding. During decoding, the Slepian-Wolf decoder decodes using the side information and the received parity bits; depending on whether decoding succeeds, the decoder keeps requesting additional bits and the encoder buffer keeps sending parity bits until correct decoding is achieved. The decoded coefficients are then inverse-quantized, inverse-transformed (IDCT), and reconstructed. All of these schemes apply turbo or LDPC coding indiscriminately to every region of the Wyner-Ziv frame. This treatment is suitable when motion is mild, but for regions of strenuous motion and for the edge regions of moving objects, motion estimation and motion compensation cannot predict accurately, so the decoder must request more feedback information from the encoder, which increases the bit rate, and parts of the decoded image remain inaccurate. To address this problem, the present invention proposes a pixel-domain region-of-interest (Region of Interest, ROI) discrimination algorithm. On the basis of Wyner-Ziv distributed video coding theory, an improved Wyner-Ziv distributed video coding algorithm is proposed: based on the image gradient field, regions of strenuous motion are extracted by an ROI decision criterion and compressed with entropy coding, while the remaining regions are coded and decoded with distributed LDPC codes, finally realizing optimized video transmission. In addition, the decoded image is post-processed with a deblocking filter, further improving image quality and satisfying viewers' subjective requirements.
Summary of the invention
Technical problem: the object of the invention is to propose a video compression method oriented to wireless multimedia sensor networks that solves the problem of the large data volume in video applications. The proposed method improves the quality of the decoded image while reducing the bit rate, ultimately lowering the energy consumption of sensor nodes and thereby prolonging the life cycle of the network.
Technical scheme: the video compression method oriented to wireless multimedia sensor networks of the present invention is an improved distributed video compression algorithm. Its main task is to compress the large data volume of video applications, thereby reducing node energy consumption and prolonging the network life cycle. In addition, to further improve the subjective quality of the decoded image, post-processing filtering is applied to the decoded image to reduce blocking artifacts.
1. Architecture
On the basis of a Wyner-Ziv distributed video coding scheme, this method extracts regions of strenuous motion by an ROI decision criterion based on the image gradient field and compresses them with Huffman coding and decoding, while the remaining regions are coded and decoded with distributed LDPC codes. This improves decoded image quality while reducing the bit rate, lowers the processing and transmission energy consumption of the nodes, realizes optimized video transmission, and prolongs the life cycle of the whole network. A deblocking filter is applied to the decoded image, improving its subjective quality and satisfying viewers' visual requirements.
Based on the Wyner-Ziv distributed video coding scheme, this method divides the video sequence into two kinds of frames: key frames (Key Frames, K) and non-key frames (Wyner-Ziv frames, WZ). Key frames are coded in the traditional JPEG manner. An ROI discrimination algorithm divides each Wyner-Ziv frame into an ROI region and a non-ROI region; the ROI region of the Wyner-Ziv frame is coded and decoded with Huffman coding, and the remaining non-ROI region is coded and decoded with LDPC codes. A deblocking filter is applied to the decoded image, further improving the quality of the decoded image.
The concrete steps are as follows. (1) At the encoder: a) frame separator: the input video sequence is divided into key frames (Key frames) and Wyner-Ziv frames (WZ frames); b) spatial transform: a block-based discrete cosine transform (Discrete Cosine Transform, DCT) is applied to each WZ frame; c) quantization: the coefficients of each DCT transform are quantized; d) coding: key frames are coded with the traditional JPEG technique, and the ROI extraction algorithm divides each Wyner-Ziv frame into an ROI region and a non-ROI region, with Huffman coding applied to the ROI region and LDPC coding applied to the non-ROI region. (2) At the decoder: a) side information generation: already decoded frames are used to generate side information by motion-compensated frame interpolation (or extrapolation); b) correlated noise model: the residual statistics of corresponding DCT coefficients between the WZ frame and the side information are modeled as a Laplacian distribution; c) decoding: key frames are decoded with the traditional JPEG technique, the ROI region of the Wyner-Ziv frame is decoded with Huffman decoding, and the remaining non-ROI region is decoded with LDPC decoding; d) reconstruction: all DCT coefficients are rebuilt with the assistance of the side information; e) inverse transform: an inverse discrete cosine transform (Inverse Discrete Cosine Transform, IDCT) is applied to the reconstructed coefficients. (3) Decoded image post-processing: a deblocking filter is applied to the decoded image. (4) Frame mixing: the decoded key frames and WZ frames are merged into the video stream. An illustrative pipeline sketch is given below.
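To make the data flow of these steps concrete, the following sketch outlines the encoder and decoder control flow in Python. All helper names (jpeg_encode, split_roi, blockwise_dct, quantize, huffman_encode, ldpc_encode, jpeg_decode, generate_side_information, wz_decode, deblock) are hypothetical placeholders introduced here only to illustrate the order of operations; they are not functions defined by the patent.

```python
def encode_sequence(frames, gop_size):
    """Illustrative encoder control flow for the scheme described above."""
    streams = []
    for i, frame in enumerate(frames):
        if i % gop_size == 0:
            # key frame: conventional intra (JPEG) coding
            streams.append(("KEY", jpeg_encode(frame)))
        else:
            # Wyner-Ziv frame: ROI decision, block DCT, quantization,
            # Huffman bits for the ROI region, LDPC parity bits for the rest
            roi_mask = split_roi(frame)
            coeffs = quantize(blockwise_dct(frame))
            streams.append(("WZ",
                            huffman_encode(coeffs, roi_mask),
                            ldpc_encode(coeffs, ~roi_mask)))
    return streams

def decode_sequence(streams):
    """Illustrative decoder control flow, including post-processing."""
    decoded = []
    for item in streams:
        if item[0] == "KEY":
            frame = jpeg_decode(item[1])
        else:
            side_info = generate_side_information(decoded)   # MC interpolation/extrapolation
            frame = wz_decode(item[1], item[2], side_info)    # Huffman + LDPC with feedback
        decoded.append(deblock(frame))                        # deblocking post-filter
    return decoded
```

The sketch only fixes the order of operations; the concrete content of each stage is detailed in the method flow below.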
2. Method flow
This method comprises the above four steps, which are discussed in detail below:
(1) At the encoder:
A) Frame separator: the video sequence is divided into Wyner-Ziv frames (WZ frames) and key frames (Key frames); key frames are inserted periodically, depending on the GOP (Group of Pictures) size. The frame separator divides the video sequence into different frames; for each video sequence, because the coding structure differs, the attributes assigned to each frame differ, and so the coding treatment applied also differs.
B) Spatial transform: a block-based transform, specifically the DCT, is applied to each WZ frame. According to the position of each DCT coefficient within its block, the DCT coefficients of the whole WZ frame are divided into different groups, forming different DCT coefficient bands.
C) Quantization: each DCT band is uniformly quantized, with quantization levels that depend on the desired image quality. For a given band, the bits of the quantized symbols are grouped together to form bit planes, which are then coded independently.
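As an illustration of steps B) and C), the following sketch groups the block DCT coefficients of a WZ frame into coefficient bands by position, quantizes each band uniformly, and splits the quantized indices into bit planes (MSB first). The 4x4 block size, the quantization step, the 4-bit index depth, and the assumption that the frame dimensions are multiples of the block size are illustrative choices, not values fixed by the invention.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D type-II DCT of one image block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def wz_transform_quantize(frame, block=4, qstep=16):
    """Block DCT, band grouping, uniform quantization, bit-plane split (sketch)."""
    h, w = frame.shape                       # assumes h, w are multiples of `block`
    bands = {(u, v): [] for u in range(block) for v in range(block)}
    for y in range(0, h, block):
        for x in range(0, w, block):
            coeffs = dct2(frame[y:y+block, x:x+block].astype(float))
            for u in range(block):
                for v in range(block):
                    bands[(u, v)].append(coeffs[u, v])   # group by coefficient position
    planes = {}
    for pos, values in bands.items():
        # uniform quantization of coefficient magnitudes to 4-bit indices
        q = np.clip(np.round(np.abs(values) / qstep), 0, 2**4 - 1).astype(int)
        # split each band into bit planes, most significant plane first
        planes[pos] = [(q >> b) & 1 for b in range(3, -1, -1)]
    return planes
```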
D) Coding: key frames are coded with the traditional JPEG technique. The ROI discrimination algorithm divides each Wyner-Ziv frame into an ROI region and a non-ROI region: Huffman coding is applied to the ROI region, and LDPC coding is applied to the non-ROI region. The Huffman coding process for the ROI region is as follows: the gray levels to be coded are sorted by their number of occurrences, with the more frequent gray levels in front and the less frequent ones behind; the two smallest occurrence counts are removed and added, their sum is reinserted as a new element of the count set and the set is re-sorted, the new count taking its position according to the same descending rule; the gray levels corresponding to the two smallest counts become leaf nodes of the Huffman tree, and a parent node is constructed for these two nodes; this step is repeated until all gray levels have been used to construct the Huffman tree. If the left child of every node is labeled "0" and the right child "1", then the codeword read from the root through the intermediate nodes along the path to a leaf is the Huffman code of that leaf. For each DCT band of the non-ROI region, LDPC coding proceeds starting from the most significant bit plane (Most Significant Bit-plane, MSB). For each bit plane (bit-plane), the generated parity information is stored in a buffer and, at the request of the decoder, is sent repeatedly through the feedback mechanism.
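A minimal sketch of the Huffman construction described above, assuming the input is the list of gray levels of an ROI block: the two least frequent symbols are repeatedly merged, with the branch toward one child labeled "0" and the other "1".

```python
import heapq
from collections import Counter

def huffman_codes(gray_levels):
    """Build Huffman codewords for the gray levels of an ROI block (sketch)."""
    counts = Counter(gray_levels)
    # heap items: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # least frequent subtree
        f2, _, right = heapq.heappop(heap)    # second least frequent subtree
        merged = {s: "0" + c for s, c in left.items()}    # left children get "0"
        merged.update({s: "1" + c for s, c in right.items()})  # right children get "1"
        heapq.heappush(heap, (f1 + f2, tie, merged))      # parent node re-enters the set
        tie += 1
    return heap[0][2] if heap else {}

# Example: huffman_codes([3, 3, 3, 7, 7, 9]) assigns gray level 3 the shortest codeword.
```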
(2) At the decoder:
A) Side information generation: the decoder uses the nearest decoded frames to generate the side information (Side Information, SI) of each WZ frame by motion-compensated frame interpolation (or extrapolation). The side information of each WZ frame is taken as an estimate of the original WZ frame. The better the estimate, the fewer the "errors" the LDPC decoder must correct and the fewer the parity bits (or bit-stream portions) requested from the buffer.
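The sketch below illustrates one simple form of motion-compensated interpolation for the side information: for each block of the later decoded frame, the best-matching block in the earlier decoded frame is found by a small SAD search, and the two matched blocks are averaged. Grayscale frames whose dimensions are multiples of the block size, the 8x8 block size and the ±4 search range are illustrative assumptions; the invention only requires that side information be produced by motion-compensated interpolation or extrapolation.

```python
import numpy as np

def side_information(prev, next_, block=8, search=4):
    """Motion-compensated interpolation between two decoded frames (sketch)."""
    h, w = prev.shape                          # assumes h, w are multiples of `block`
    si = np.empty_like(prev, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            cur = next_[y:y+block, x:x+block].astype(float)
            best = np.inf
            best_blk = prev[y:y+block, x:x+block].astype(float)
            for dy in range(-search, search + 1):          # small full search
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = prev[yy:yy+block, xx:xx+block].astype(float)
                        sad = np.abs(cand - cur).sum()
                        if sad < best:
                            best, best_blk = sad, cand
            si[y:y+block, x:x+block] = (best_blk + cur) / 2  # bidirectional average
    return si
```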
B) Correlated noise model: the residual statistics of corresponding DCT coefficients between the WZ frame and the side information are assumed to follow a Laplacian distribution, whose parameter is initialized by estimation in an off-line training stage.
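A small sketch of the off-line initialization of the Laplacian model: the residual d between corresponding DCT coefficients of the WZ frame and the side information is modeled as f(d) = (alpha/2)·exp(-alpha·|d|), and alpha is estimated from training residuals through the variance relation alpha = sqrt(2/variance). This moment-based estimator is offered as an assumption; the patent only states that the parameter is estimated off-line.

```python
import numpy as np

def laplacian_alpha(wz_coeffs, si_coeffs):
    """Estimate the Laplacian parameter from training residuals (sketch)."""
    residual = np.asarray(wz_coeffs, dtype=float) - np.asarray(si_coeffs, dtype=float)
    variance = residual.var()
    return np.sqrt(2.0 / variance) if variance > 0 else np.inf

def laplacian_pdf(d, alpha):
    """Density of the correlated-noise model f(d) = (alpha/2) * exp(-alpha*|d|)."""
    return 0.5 * alpha * np.exp(-alpha * np.abs(d))
```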
C) LDPC decoding: key frames are decoded with the traditional JPEG technique; the ROI region of the WZ frame is recovered with Huffman decoding. For the non-ROI region of the WZ frame, once the residual statistics between a side-information DCT coefficient and the given DCT coefficient are known, each bit plane can be LDPC decoded, starting from the MSB. At the request of the LDPC decoder, the encoder sends additional parity information through the feedback channel. To judge whether a particular bit plane needs more parity bits for correct decoding, the decoder adopts a request stopping criterion. After the MSB bit plane of a DCT band has been successfully LDPC decoded, the LDPC decoder processes the remaining bit planes in the same uniform manner. Once all bit planes of a DCT band have been successfully LDPC decoded, the LDPC decoder begins to decode the next band.
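The decoder-driven feedback loop for one bit plane can be sketched as follows. The three callables are hypothetical stand-ins, not APIs defined by the patent: request_parity pulls additional parity bits from the encoder buffer over the feedback channel, try_decode runs one LDPC (Slepian-Wolf) decoding attempt against the side-information likelihoods, and stop_ok implements the request stopping criterion.

```python
def decode_bitplane(side_info_llr, request_parity, try_decode, stop_ok, max_requests=64):
    """Request-and-decode loop for one bit plane of a DCT band (sketch)."""
    parity = []
    for _ in range(max_requests):
        parity.extend(request_parity())                 # ask the encoder buffer for more parity
        bits, converged = try_decode(side_info_llr, parity)
        if converged and stop_ok(bits):                 # request stopping criterion satisfied
            return bits                                 # bit plane decoded; move to the next one
    raise RuntimeError("bit plane not decoded within the request budget")
```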
D) Reconstruction: after LDPC decoding, all bit planes of each DCT band are grouped together to form the quantized symbol stream, and each band is decoded. Once all the decoded quantized symbols are available, all the DCT coefficients can be reconstructed with the assistance of the corresponding side-information coefficients. DCT bands for which no WZ bit stream was transmitted are replaced by the corresponding side-information bands.
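One common reconstruction rule consistent with this description, shown here as an assumption since the patent only states that reconstruction is assisted by the side information: the decoded quantization index defines an interval, and the side-information coefficient is clipped to that interval.

```python
def reconstruct_coefficient(q_index, side_info_coeff, qstep):
    """Clip the side-information coefficient to the decoded quantization bin
    [q_index*qstep, (q_index+1)*qstep); signs and dead zones are ignored in
    this simplified sketch."""
    low, high = q_index * qstep, (q_index + 1) * qstep
    return min(max(side_info_coeff, low), high)
```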
E) Inverse transform: after all DCT bands have been reconstructed, the IDCT is performed, yielding the decoded WZ frame.
(3) Decoded image post-processing: a filter is applied to the decoded image to attenuate the blocking artifacts introduced by quantization. The deblocking filter is a one-dimensional filter; to obtain a two-dimensional effect, each block is filtered twice, first in the horizontal direction and then in the vertical direction. Blocking artifacts arise because quantization error turns the originally continuous variation of adjacent pixel values into "step" changes, which look like "false edges" and give the image a blocky appearance. In short, deblocking keeps the energy of the image unchanged while turning those sharp "step" gray-level changes back into very small steps or nearly continuous gray-level variation.
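A minimal sketch of the two-pass deblocking post-filter: the two pixels on either side of each block boundary are smoothed with a short one-dimensional filter, first along rows (vertical block edges) and then along columns (horizontal block edges). The 8-pixel block size and the 3:1 smoothing weights are illustrative assumptions, not filter coefficients specified by the patent.

```python
import numpy as np

def deblock(image, block=8):
    """Two-pass boundary smoothing across block edges (sketch)."""
    out = image.astype(float)
    h, w = out.shape
    for x in range(block, w, block):          # horizontal pass: vertical block edges
        for y in range(h):
            a, b = out[y, x - 1], out[y, x]
            out[y, x - 1] = 0.75 * a + 0.25 * b   # pull boundary pixels toward each other
            out[y, x] = 0.25 * a + 0.75 * b
    for y in range(block, h, block):          # vertical pass: horizontal block edges
        for x in range(w):
            a, b = out[y - 1, x], out[y, x]
            out[y - 1, x] = 0.75 * a + 0.25 * b
            out[y, x] = 0.25 * a + 0.75 * b
    return out
```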
(4) Frame mixing: finally, each correctly decoded frame, namely the key frames coded and decoded with traditional JPEG and the WZ frames coded and decoded with the mixed LDPC and Huffman scheme, is merged: according to the GOP size adopted at the encoder, key frames and WZ frames are combined into the video stream in GOP order, restoring the decoded video sequence. At this point, video compression coding and decoding is complete.
This method adopts an ROI discrimination algorithm to strengthen the coding of regions of strenuous motion and of motion-edge regions, and applies deblocking-filter post-processing to the decoded image to further improve its subjective quality. The method is specified as follows:
1) At the encoder
A) Frame separator: the video sequence is divided into key frames and non-key frames; key frames are inserted periodically, depending on the size of the group of pictures; the frame separator divides the video sequence into different frames, and the number of non-key frames distributed between every two key frames may differ; key frames are intra-coded, and non-key frames are coded with the low-density parity-check code;
B) Spatial transform: a block-based transform, specifically the discrete cosine transform, is applied to each non-key frame; the non-key frame is divided into non-overlapping blocks and, according to the position of each discrete cosine transform coefficient within its block, different sets of discrete cosine transform coefficients are formed;
C) Quantization: each discrete cosine transform set is uniformly quantized, with quantization levels that depend on the desired image quality; for a given set, the bits of the quantized symbols are grouped together to form bit planes, which are then coded independently;
D) Coding: key frames are coded with the traditional JPEG (Joint Photographic Experts Group) standard; an area-of-interest discrimination algorithm divides each non-key frame into an area of interest and a non-area of interest: Huffman coding is applied to the area of interest, and low-density parity-check coding is applied to the non-area of interest;
2) At the decoder
A) Side information generation: the decoder uses the nearest decoded frames to generate the side information of each non-key frame by motion-compensated frame interpolation or extrapolation; the side information of each non-key frame is taken as an estimate of the original non-key frame; the better the estimate, the fewer the "errors" the low-density parity-check decoder must correct and the fewer the parity bits or bit-stream portions requested from the buffer;
B) Correlated noise model: the residual statistics of corresponding discrete cosine transform coefficients between the non-key frame and the side information are assumed to follow a Laplacian distribution, whose parameter is initialized by estimation in an off-line training mode;
C) Low-density parity-check decoding: key frames are decoded with the traditional JPEG technique; the area of interest of the non-key frame is recovered with Huffman decoding; for the non-area of interest of the non-key frame, once the residual statistics between a side-information discrete cosine transform coefficient and the given discrete cosine transform coefficient are known, each bit plane can be decoded with the low-density parity-check code, starting from the most significant bit plane; at the request of the low-density parity-check decoder, the encoder sends additional parity information through the feedback channel; to judge whether a particular bit plane needs more parity bits for decoding, the decoder adopts a request stopping criterion; after the most significant bit plane of a discrete cosine transform set has been correctly decoded, the low-density parity-check decoder processes the remaining bit planes of the set in the same uniform way, and when all bit planes of a discrete cosine transform set have been correctly decoded, the decoder begins to decode the next set;
D) Reconstruction: after low-density parity-check decoding, all bit planes of each discrete cosine transform set are grouped together to form the quantized symbol stream, and each set is decoded; once all the decoded quantized symbols are available, all the discrete cosine transform coefficients can be reconstructed with the assistance of the corresponding side-information coefficients; discrete cosine transform coefficient sets for which no non-key-frame bit stream was transmitted are replaced by the corresponding side-information sets;
E) Inverse transform: after all discrete cosine transform sets have been reconstructed, the inverse discrete cosine transform is performed, yielding the decoded non-key frame;
3) Decoded image post-processing
A filter is applied to the decoded image to attenuate the blocking artifacts introduced by quantization; the deblocking filter is a one-dimensional filter, and to obtain a two-dimensional effect each block is filtered twice, first in the horizontal direction and then in the vertical direction; blocking artifacts arise because quantization error turns the originally continuous variation of adjacent pixel values into "step" changes, which look like "false edges" and give the image a blocky appearance; in short, deblocking keeps the energy of the image unchanged while turning those sharp "step" gray-level changes back into very small steps or nearly continuous gray-level variation;
4) Frame mixing
Each correctly decoded frame, namely the key frames coded and decoded with the traditional JPEG (Joint Photographic Experts Group) standard and the non-key frames coded and decoded with the mixed low-density parity-check and Huffman scheme, is merged: according to the group-of-pictures size adopted at the encoder, key frames and non-key frames are combined into the video stream in group-of-pictures order, restoring the decoded video sequence; at this point, video compression coding and decoding is complete.
The process of Huffman coding the area of interest is as follows: the gray levels to be coded are sorted by their number of occurrences, with the more frequent gray levels in front and the less frequent ones behind; the two smallest occurrence counts are removed and added, their sum is reinserted as a new element of the count set and the set is re-sorted, the new count taking its position according to the same descending rule; the gray levels corresponding to the two smallest counts become leaf nodes of the Huffman tree, and a parent node is constructed for these two nodes; this step is repeated until all gray levels have been used to construct the Huffman tree; if the left child of every node is labeled "0" and the right child "1", then the codeword read from the root through the intermediate nodes along the path to a leaf is the Huffman code of that leaf; each discrete cosine transform set of the non-area of interest is coded with the low-density parity-check code starting from the most significant bit plane; for each bit plane, the generated parity information is stored in a buffer and, at the request of the decoder, is sent repeatedly through the feedback mechanism.
Key frames are coded with the traditional JPEG standard; the area-of-interest discrimination algorithm divides each non-key frame into an area of interest and a non-area of interest: the area of interest is coded and decoded with Huffman coding, and the non-area of interest is coded and decoded with the low-density parity-check code. The concrete steps are as follows (a macroblock-classification sketch is given after the list):
Step 1): divide each frame into 8 × 8 macroblocks of equal size that do not overlap;
Step 2): grade the key frame and the non-key frame;
Step 3): compute the sum of absolute differences between co-located macroblocks of the key frame and the non-key frame;
Step 4): according to the decision criterion for area-of-interest macroblocks, classify each macroblock of the non-key frame as area of interest or not;
Step 5): compress the area-of-interest macroblocks of the non-key frame with Huffman coding and decoding;
Step 6): compress the remaining macroblocks of the non-key frame with low-density parity-check coding and decoding.
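The macroblock classification of steps 1) to 6) can be sketched as follows: the sum of absolute differences (SAD) between co-located 8 × 8 macroblocks of the key frame and the non-key (WZ) frame is compared with a threshold, and macroblocks exceeding it are marked as area-of-interest macroblocks to be Huffman coded. The threshold value and the assumption of grayscale frames whose dimensions are divisible by 8 are illustrative.

```python
import numpy as np

def roi_mask(key_frame, wz_frame, block=8, threshold=1000):
    """Classify each macroblock of the non-key (WZ) frame as ROI or non-ROI (sketch)."""
    h, w = key_frame.shape                    # assumes h, w are multiples of `block`
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            sad = np.abs(key_frame[y:y+block, x:x+block].astype(int)
                         - wz_frame[y:y+block, x:x+block].astype(int)).sum()
            mask[by, bx] = sad > threshold    # True -> ROI macroblock (Huffman coded)
    return mask
```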
Beneficial effects: the method of the invention proposes an improved Wyner-Ziv distributed video compression method, mainly to solve the problems of high node energy consumption and short network life cycle caused by the large video data volume in wireless multimedia sensor networks, while satisfying users' demands for decoded image quality and real-time video. With the proposed method, a wireless multimedia sensor network can overcome the high node energy consumption and short network life cycle caused by large-volume data transmission, the failure of motion estimation caused by strenuous motion, and the blocking artifacts introduced by the quantization step, while meeting high requirements on the real-time performance and subjective quality of the video. The method reduces the amount of data transmitted over the network, lowers node transmission energy consumption, prolongs the network life cycle, and guarantees real-time multimedia video transmission and high image quality. Specific descriptions follow:
1. Simple encoding: compared with traditional video coding standards (such as the MPEG series and H.26x), the present invention adopts a Wyner-Ziv distributed video coding scheme, so the encoder is simple and the decoder is complex. Distributed video coding shifts the high-complexity, computation-intensive motion estimation and motion compensation from the encoder to the decoder; the decoder is generally located at a sink node or network center, whose strong computing capability, large storage capacity, and continuous power supply are fully exploited to complete the compression coding of the video.
2. Low bit rate: the present invention allows the GOP size to be configured and adopts a combined LDPC and Huffman coding and decoding scheme for the WZ frames, so fewer parity bits are requested from the buffer, greatly reducing the coding bit rate.
3. Low energy consumption: because the number of WZ frames between key frames can be varied, the present invention reduces the amount of video stream data to be processed, thereby reducing the coding energy consumption of each sensor node and prolonging the life cycle of the whole network.
4. Real-time performance: because the video data compression ratio is high and the amount of data after compression coding is small, the present invention reduces the amount of data transmitted, optimizes real-time transmission, and thus guarantees the real-time delivery of the video stream.
5. Reliability: the present invention adopts an ROI extraction algorithm to divide the WZ frame into an ROI region and a non-ROI region; the ROI region is coded and decoded with Huffman coding, realizing lossless compression and improving decoding accuracy; in addition, deblocking filtering is applied to the decoded image, further improving the subjective quality of the decoded image and thus meeting the reliability requirement of video compression coding.
Description of drawings
Fig. 1 is a schematic diagram of distributed video coding. As shown, the distributed video coding framework comprises a low-complexity encoder and a high-complexity decoder.
Fig. 2 is a schematic diagram of point-to-point wireless mobile video communication. As shown, the sender uses Wyner-Ziv distributed video coding and sends the coded video stream to a base station or network-center node, where a bit-stream transcoder converts the distributed bit stream into an H.26x/MPEG bit stream; the base station or network-center node then sends the converted video stream to the receiver. Both the sending and receiving terminals only need to perform encoding and decoding of low complexity.
Fig. 3 is a schematic diagram of distributed coding and decoding. As shown, intra-frame coding is combined with inter-frame decoding: at the encoder, two or more correlated sources are encoded independently of each other with intra-frame coding and the coded bit streams are sent to the receiver; at the decoder, the correlation between the sources is exploited for joint predictive decoding.
Fig. 4 is a schematic diagram of the Wyner-Ziv distributed video coding of the present invention with gradient-field-based ROI discrimination. The ROI discrimination algorithm divides the WZ frame into an ROI region and a non-ROI region, which are then coded and decoded with Huffman coding and LDPC coding respectively.
Fig. 5 is the flow chart of decoded image post-processing. Deblocking filtering is performed only after the image has been decoded.
Fig. 6 is the overall flow chart of the method of the invention, describing the whole process of Wyner-Ziv distributed video coding and decoding based on gradient-field ROI discrimination.
Embodiment
On the basis of a Wyner-Ziv distributed video coding scheme, this method extracts regions of strenuous motion by an ROI decision criterion based on the image gradient field and compresses them with Huffman coding and decoding, while the remaining regions are coded and decoded with distributed LDPC codes. This improves decoded image quality while reducing the bit rate, lowers the processing and transmission energy consumption of the nodes, realizes optimized video transmission, and prolongs the life cycle of the whole network. A deblocking filter is applied to the decoded image, further improving the quality of the decoded image.
Based on the Wyner-Ziv distributed video coding scheme, this method divides the video sequence into two kinds of frames: key frames (Key Frames, K frames) and Wyner-Ziv frames (WZ frames). Key frames are coded in the traditional JPEG manner. The ROI discrimination algorithm divides each Wyner-Ziv frame into an ROI region and a non-ROI region; the ROI region of the Wyner-Ziv frame is coded and decoded with Huffman coding, and the remaining non-ROI region is coded and decoded with LDPC codes. The decoded image is post-processed with a deblocking filter, further improving the quality of the decoded image. The implementation of the method is divided into four phases: i) at the encoder; ii) at the decoder; iii) decoded image post-processing; iv) frame mixing. They are described as follows:
Phase I: at the encoder
This phase comprises the following processing steps:
A) Frame separator: the video sequence is divided into Wyner-Ziv frames (WZ frames) and key frames (Key frames); key frames are inserted periodically, depending on the GOP (Group of Pictures) size. The frame separator divides the video sequence into different frames; for each video sequence, because the coding structure differs, the attributes assigned to each frame differ, and so the coding treatment applied also differs.
B) Spatial transform: a block-based transform, specifically the DCT, is applied to each WZ frame. According to the position of each DCT coefficient within its block, the DCT coefficients of the whole WZ frame are divided into different groups, forming different DCT coefficient bands.
C) Quantization: each DCT band is uniformly quantized, with quantization levels that depend on the desired image quality. For a given band, the bits of the quantized symbols are grouped together to form bit planes, which are then coded independently.
D) Coding: key frames are coded with the traditional JPEG technique. The ROI discrimination algorithm divides each Wyner-Ziv frame into an ROI region and a non-ROI region: Huffman coding is applied to the ROI region, and LDPC coding is applied to the non-ROI region. The Huffman coding process for the ROI region is as follows: the gray levels to be coded are sorted by their number of occurrences, with the more frequent gray levels in front and the less frequent ones behind; the two smallest occurrence counts are removed and added, their sum is reinserted as a new element of the count set and the set is re-sorted, the new count taking its position according to the same descending rule; the gray levels corresponding to the two smallest counts become leaf nodes of the Huffman tree, and a parent node is constructed for these two nodes; this step is repeated until all gray levels have been used to construct the Huffman tree. If the left child of every node is labeled "0" and the right child "1", then the codeword read from the root through the intermediate nodes along the path to a leaf is the Huffman code of that leaf. For each DCT band of the non-ROI region, LDPC coding begins from the most significant bit plane (Most Significant Bit-plane, MSB). For each bit plane (bit-plane), the generated parity information is stored in a buffer and, at the request of the decoder, is sent repeatedly through the feedback mechanism.
In this phase the video sequence has undergone the relevant coding processing, preparing it for decoding in the next phase.
Phase II: at the decoder
This phase comprises the following processing steps:
A) Side information generation: the decoder uses the nearest decoded frames to generate the side information (Side Information, SI) of each WZ frame by motion-compensated frame interpolation (or extrapolation). The side information of each WZ frame is taken as an estimate of the original WZ frame. The better the estimate, the fewer the "errors" the LDPC decoder must correct and the fewer the parity bits (or bit-stream portions) requested from the buffer.
B) Correlated noise model: the residual statistics of corresponding DCT coefficients between the WZ frame and the side information are assumed to follow a Laplacian distribution, whose parameter is initialized by estimation in an off-line training stage.
C) LDPC decoding: key frames are decoded with the traditional JPEG technique; the ROI region of the WZ frame is recovered with Huffman decoding. For the non-ROI region of the WZ frame, once the residual statistics between a side-information DCT coefficient and the given DCT coefficient are known, each bit plane can be LDPC decoded, starting from the MSB. At the request of the LDPC decoder, the encoder sends additional parity information through the feedback channel. To judge whether a particular bit plane needs more parity bits for correct decoding, the decoder adopts a request stopping criterion. After the MSB bit plane of a DCT band has been successfully LDPC decoded, the LDPC decoder processes the remaining bit planes in the same uniform manner. Once all bit planes of a DCT band have been successfully LDPC decoded, the LDPC decoder begins to decode the next band.
D) Reconstruction: after LDPC decoding, all bit planes of each DCT band are grouped together to form the quantized symbol stream, and each band is decoded. Once all the decoded quantized symbols are available, all the DCT coefficients can be reconstructed with the assistance of the corresponding side-information coefficients. DCT bands for which no WZ bit stream was transmitted are replaced by the corresponding side-information bands.
E) Inverse transform: after all DCT bands have been reconstructed, the IDCT is performed, yielding the decoded WZ frame.
Through the above processing steps, the key frames and the WZ frames have been correctly decoded.
Phase III: decoded image post-processing
A filter is applied to the decoded image to attenuate the blocking artifacts introduced by quantization. The deblocking filter is a one-dimensional filter; to obtain a two-dimensional effect, each block is filtered twice, first in the horizontal direction and then in the vertical direction. Blocking artifacts arise because quantization error turns the originally continuous variation of adjacent pixel values into "step" changes, which look like "false edges" and give the image a blocky appearance. In short, deblocking keeps the energy of the image unchanged while turning those sharp "step" gray-level changes back into very small steps or nearly continuous gray-level variation.
Phase IV: frame mixing
Finally, each correctly decoded frame, namely the key frames coded and decoded with traditional JPEG and the WZ frames coded and decoded with the mixed LDPC and Huffman scheme, is merged: according to the GOP size adopted at the encoder, key frames and WZ frames are combined into the video stream in GOP order, restoring the decoded video sequence. At this point, video compression coding and decoding is complete.
Claims (3)
1. A video compression method oriented to wireless multimedia sensor networks, characterized in that: a region-of-interest (ROI) discrimination algorithm is adopted to strengthen the coding of regions of strenuous motion and of motion-edge regions, and deblocking-filter post-processing is applied to the decoded image to further improve the subjective quality of the decoded image; the method is specified as follows:
1) At the encoder
A) Frame separator: the video sequence is divided into key frames and WZ frames; key frames are inserted periodically, depending on the size of the group of pictures; the frame separator divides the video sequence into different frames, and the number of WZ frames distributed between every two key frames may differ; key frames are intra-coded, and WZ frames are coded with the low-density parity-check code;
B) Spatial transform: a block-based transform, the discrete cosine transform, is applied to each WZ frame; the WZ frame is divided into non-overlapping blocks and, according to the position of each discrete cosine transform coefficient within its block, different sets of discrete cosine transform coefficients are formed;
C) Quantization: each discrete cosine transform coefficient set is uniformly quantized, with quantization levels that depend on the desired image quality; for a given set, the bits of the quantized symbols are grouped together to form bit planes, which are then coded independently;
D) Coding: key frames are coded with the traditional JPEG (Joint Photographic Experts Group) standard; an area-of-interest discrimination algorithm divides each WZ frame into an area of interest and a non-area of interest: Huffman coding is applied to the area of interest, and low-density parity-check coding is applied to the non-area of interest;
2) At the decoder
A) Side information generation: the low-density parity-check decoding end uses the nearest decoded frames to generate the side information of each WZ frame by motion-compensated frame interpolation or extrapolation; the side information of each WZ frame is taken as an estimate of the original WZ frame; the better the estimate, the fewer the errors the low-density parity-check decoding end must correct and the fewer the parity bits or bit-stream portions requested from the buffer;
B) Correlated noise model: the residual statistics of corresponding discrete cosine transform coefficients between the WZ frame and the side information are assumed to follow a Laplacian distribution, whose parameter is initialized by estimation in an off-line training mode;
C) Low-density parity-check decoding: key frames are decoded with the traditional JPEG technique; the area of interest of the WZ frame is recovered with Huffman decoding; for the non-area of interest of the WZ frame, once the residual statistics between a side-information discrete cosine transform coefficient and the given discrete cosine transform coefficient are known, each bit plane can be decoded with the low-density parity-check code, starting from the most significant bit plane; at the request of the low-density parity-check decoding end, the encoder sends additional parity information through the feedback channel; to judge whether a particular bit plane needs more parity bits for decoding, the low-density parity-check decoding end adopts a request stopping criterion; after the most significant bit plane of a discrete cosine transform set has been correctly decoded, the low-density parity-check decoder processes the remaining bit planes of the set in the same uniform way, and when all bit planes of a discrete cosine transform set have been correctly decoded, the decoding end begins to decode the next set;
D) Reconstruction: after low-density parity-check decoding, all bit planes of each discrete cosine transform set are grouped together to form the quantized symbol stream, and each set is decoded; once all the decoded quantized symbols are available, all the discrete cosine transform coefficients can be reconstructed with the assistance of the corresponding side-information coefficients; discrete cosine transform coefficient sets for which no WZ-frame bit stream was transmitted are replaced by the corresponding side-information sets;
E) Inverse transform: after all discrete cosine transform sets have been reconstructed, the inverse discrete cosine transform is performed, yielding the decoded WZ frame;
3) Decoded image post-processing
A filter is applied to the decoded image to attenuate the blocking artifacts introduced by quantization; the deblocking filter is a one-dimensional filter, and to obtain a two-dimensional effect each block is filtered twice, first in the horizontal direction and then in the vertical direction, turning sharp step-like gray-level changes into very small steps or nearly continuous gray-level variation;
4) Frame mixing
Each correctly decoded frame, namely the key frames coded and decoded with the traditional JPEG (Joint Photographic Experts Group) standard and the WZ frames coded and decoded with the mixed low-density parity-check and Huffman scheme, is merged: according to the group-of-pictures size adopted at the encoder, key frames and WZ frames are combined into the video stream in group-of-pictures order, restoring the decoded video sequence; at this point, video compression coding and decoding is complete.
2. The video compression method oriented to wireless multimedia sensor networks according to claim 1, characterized in that the process of Huffman coding the area of interest is as follows: the gray levels to be coded are sorted by their number of occurrences, with the more frequent gray levels in front and the less frequent ones behind; the two smallest occurrence counts are removed and added, their sum is reinserted as a new element of the count set and the set is re-sorted, the new count taking its position according to the same descending rule; the gray levels corresponding to the two smallest counts become leaf nodes of the Huffman tree, and a parent node is constructed for these two nodes; this step is repeated until all gray levels have been used to construct the Huffman tree; if the left child of every node is labeled "0" and the right child "1", then the codeword read from the root through the intermediate nodes along the path to a leaf is the Huffman code of that leaf; each discrete cosine transform set of the non-area of interest is coded with the low-density parity-check code starting from the most significant bit plane; for each bit plane, the generated parity information is stored in a buffer and, at the request of the decoder, is sent repeatedly through the feedback mechanism.
3. The video compression method oriented to wireless multimedia sensor networks according to claim 1, characterized in that key frames are coded with the traditional JPEG (Joint Photographic Experts Group) standard, and the area-of-interest discrimination algorithm divides the WZ frame into an area of interest and a non-area of interest: the area of interest is coded and decoded with Huffman coding, and the non-area of interest is coded and decoded with the low-density parity-check code; the concrete steps are as follows:
Step 1): divide each frame into 8 × 8 macroblocks of equal size that do not overlap;
Step 2): grade the key frame and the WZ frame;
Step 3): compute the sum of absolute differences between co-located macroblocks of the key frame and the WZ frame;
Step 4): according to the decision criterion for area-of-interest macroblocks, classify each macroblock of the WZ frame as area of interest or not;
Step 5): compress the area-of-interest macroblocks of the WZ frame with Huffman coding and decoding;
Step 6): compress the remaining macroblocks of the WZ frame with low-density parity-check coding and decoding.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201010182470 CN101854548B (en) | 2010-05-25 | 2010-05-25 | Wireless multimedia sensor network-oriented video compression method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101854548A CN101854548A (en) | 2010-10-06 |
CN101854548B true CN101854548B (en) | 2011-09-07 |
Family
ID=42805771
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201010182470 Expired - Fee Related CN101854548B (en) | 2010-05-25 | 2010-05-25 | Wireless multimedia sensor network-oriented video compression method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101854548B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107343119A (en) * | 2017-07-28 | 2017-11-10 | 北京化工大学 | A kind of digital picture steganographic data method for deleting |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012088629A1 (en) * | 2010-12-29 | 2012-07-05 | Technicolor (China) Technology Co., Ltd. | Method for generating motion synthesis data and device for generating motion synthesis data |
PT2700234T (en) * | 2011-04-22 | 2019-07-23 | Dolby Int Ab | Method and device for lossy compress-encoding data |
CN102137262B (en) * | 2011-05-03 | 2017-04-12 | 深圳市云宙多媒体技术有限公司 | Method and device for selecting irregular dividing video coding mode |
CN102158703B (en) * | 2011-05-04 | 2013-01-23 | 西安电子科技大学 | Distributed video coding-based adaptive correlation noise model construction system and method |
CN102630008B (en) * | 2011-09-29 | 2014-07-30 | 北京京东方光电科技有限公司 | Method and terminal for wireless video transmission |
CN102510427B (en) * | 2011-12-01 | 2013-12-18 | 大连三通科技发展有限公司 | Real-time online transmission method for cell phone with low network bandwidth |
CN102572428B (en) * | 2011-12-28 | 2014-05-07 | 南京邮电大学 | Side information estimating method oriented to distributed coding and decoding of multimedia sensor network |
CN102595132A (en) * | 2012-02-17 | 2012-07-18 | 南京邮电大学 | Distributed video encoding and decoding method applied to wireless sensor network |
CN103517072B (en) * | 2012-06-18 | 2017-11-03 | 联想(北京)有限公司 | Video communication method and equipment |
CN102833536A (en) * | 2012-07-24 | 2012-12-19 | 南京邮电大学 | Distributed video encoding and decoding method facing to wireless sensor network |
JP6217643B2 (en) * | 2012-09-19 | 2017-10-25 | 日本電気株式会社 | Video encoding device |
CN103002283A (en) * | 2012-11-20 | 2013-03-27 | 南京邮电大学 | Multi-view distributed video compression side information generation method |
CN104935946B (en) * | 2015-06-12 | 2017-12-26 | 珠海市杰理科技股份有限公司 | Improve the method and system of digital picture blocking artifact |
WO2017045101A1 (en) | 2015-09-14 | 2017-03-23 | Mediatek Singapore Pte. Ltd. | Advanced deblocking filter in video coding |
CN116634168B (en) * | 2023-07-26 | 2023-10-24 | 上海方诚光电科技有限公司 | Image lossless processing method and system based on industrial camera |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070013561A1 (en) * | 2005-01-20 | 2007-01-18 | Qian Xu | Signal coding |
US8340193B2 (en) * | 2006-08-04 | 2012-12-25 | Microsoft Corporation | Wyner-Ziv and wavelet video coding |
CN100512443C (en) * | 2007-01-11 | 2009-07-08 | 北京交通大学 | Distributive vide frequency coding method based on self adaptive Hashenhege type vector quantization |
CN101360236B (en) * | 2008-08-08 | 2010-08-11 | 宁波大学 | Wyner-ziv video encoding and decoding method |
CN101621690B (en) * | 2009-07-24 | 2012-07-04 | 北京交通大学 | Two-description video coding method based on Wyner-Ziv principle |
- 2010-05-25: application CN 201010182470 filed in China; granted as CN101854548B; current status: not active (Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
CN101854548A (en) | 2010-10-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101854548B (en) | Wireless multimedia sensor network-oriented video compression method | |
CN103002283A (en) | Multi-view distributed video compression side information generation method | |
CN101159875B (en) | Double forecast video coding/decoding method and apparatus | |
CN102572428B (en) | Side information estimating method oriented to distributed coding and decoding of multimedia sensor network | |
CN102271256B (en) | Mode decision based adaptive GOP (group of pictures) distributed video coding and decoding method | |
CN104320657B (en) | The predicting mode selecting method of HEVC lossless video encodings and corresponding coding method | |
CN102256133B (en) | Distributed video coding and decoding method based on side information refining | |
KR20110014839A (en) | Method and apparatus for encoding video, and method and apparatus for decoding video | |
US9014499B2 (en) | Distributed source coding using prediction modes obtained from side information | |
CN104301730A (en) | Two-way video coding and decoding system and method based on video mobile equipment | |
CN103533359A (en) | H.264 code rate control method | |
CN103581670A (en) | H.264 self-adaptation intra-frame mode selection code rate estimated rate-distortion optimization method and device thereof | |
CN102833536A (en) | Distributed video encoding and decoding method facing to wireless sensor network | |
CN100508608C (en) | Non-predicted circulation anti-code error video frequency coding method | |
CN102595132A (en) | Distributed video encoding and decoding method applied to wireless sensor network | |
CN110351552A (en) | A kind of fast encoding method in Video coding | |
CN102065293B (en) | Image compression method based on space domain predictive coding | |
CN105611301A (en) | Distributed video coding and decoding method based on wavelet domain residual errors | |
CN105791868B (en) | The method and apparatus of Video coding | |
CN101002476B (en) | Coding and decoding method and coding and decoding device for video coding | |
Wang et al. | A low complexity compressed sensing-based codec for consumer depth video sensors | |
Ming-Feng et al. | Lossless video compression using combination of temporal and spatial prediction | |
CN100579227C (en) | System and method for selecting frame inner estimation mode | |
Barbarien et al. | Scalable motion vector coding | |
CN108632613B (en) | Hierarchical distributed video coding method and system based on DISCOVER framework |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| C14 | Grant of patent or utility model |
| GR01 | Patent grant |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20110907; Termination date: 20150525
| EXPY | Termination of patent right or utility model |