CN1672421A - Method and apparatus for performing multiple description motion compensation using hybrid predictive codes - Google Patents
- Publication number
- CN1672421A CN1672421A CNA038181967A CN03818196A CN1672421A CN 1672421 A CN1672421 A CN 1672421A CN A038181967 A CNA038181967 A CN A038181967A CN 03818196 A CN03818196 A CN 03818196A CN 1672421 A CN1672421 A CN 1672421A
- Authority
- CN
- China
- Prior art keywords
- sequence
- subframes
- encoder
- coding
- frame sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/573—Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/12—Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/162—User input
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/164—Feedback from the receiver or from the transmission channel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/37—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability with arrangements for assigning different transmission priorities to video input data or to video coded data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/39—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability involving multiple description coding [MDC], i.e. with separate layers being structured as independently decodable descriptions of input picture data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/587—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/89—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Discrete Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
An improved multiple description coding (MDC) method and apparatus is provided which extends multiple description motion compensation (MDMC) by allowing multi-frame prediction and is not limited to I and P frames only. Further, the coding method of the invention extends MDMC for use with any conventional predictive coder, such as, for example, MPEG-2/4 and H.26L. The improved MDC permits any conventional predictive coder to be used as the top and bottom predictive encoders. Further, the top and bottom predictive coders can advantageously include B frames and multiple-prediction motion compensation. Still further, any of the top, middle and bottom predictive encoders can be a scalable encoder (e.g., FGS-like, data-partitioning-like where the motion vectors (MVs) are sent first, temporally scalable, etc.).
Description
The present invention relates generally to multiple description coding (MDC) of data, voice, image, video and other types of signals transmitted over networks or other types of communication media.
Most of the information transmitted over current networks remains usable even under degraded conditions. Examples include voice, audio, still images and video. When such information suffers packet loss, real-time constraints prevent retransmission. Better overall performance in terms of transmission rate, distortion and delay can sometimes be achieved by adding redundancy to the bitstream rather than retransmitting lost packets.
Redundancy can be added to the bitstream by multiple description coding (MDC), in which the data are decomposed into several streams that carry a certain amount of redundancy. When all streams are received, low distortion is guaranteed at the cost of a slightly higher bit rate than that of a system designed purely for compression. When only some of the streams are received, on the other hand, the reconstruction quality degrades only moderately, which is rarely the case for a system designed purely for compression. Unlike multiresolution or layered source coding, the descriptions are not hierarchical; multiple description coding is therefore well suited to erasure channels or packet networks that do not provide prioritization.
Multiple description coding can be realized in many ways. One way is to split the input video stream arbitrarily into channel subsets, for example by collecting the odd-frame sequence and the even-frame sequence and encoding the resulting temporally subsampled sequences independently at the encoder. When only one of the subsampled sequences is received at the decoder, the video stream can be decoded at half the frame rate. Because of the correlation properties of video, receiving a single subsampled sequence still allows motion-compensated error concealment techniques to be used to recover the missing intermediate frames. A more detailed description of this technique can be found in Wenger et al., "Error resilience support in H.263+", IEEE Transactions on Circuits and Systems for Video Technology, pp. 867-877, November 1998.
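As an illustration of this temporal-subsampling approach, the following minimal Python sketch splits a frame list into two descriptions and interleaves whatever is received back together; the list-of-frames representation and the reconstruct() helper are assumptions made for illustration only, not part of H.263+ or any codec.

```python
# Minimal sketch of the odd/even temporal split described above.

def split_descriptions(frames):
    """Split a frame sequence into two temporally subsampled descriptions."""
    even_description = frames[0::2]   # frames 0, 2, 4, ...
    odd_description = frames[1::2]    # frames 1, 3, 5, ...
    return even_description, odd_description

def reconstruct(even_description=None, odd_description=None):
    """If both descriptions arrive, interleave them back to full frame rate;
    if only one arrives, play it at half rate (or conceal the missing frames
    by motion-compensated interpolation)."""
    if even_description and odd_description:
        merged = [None] * (len(even_description) + len(odd_description))
        merged[0::2] = even_description
        merged[1::2] = odd_description
        return merged
    return even_description or odd_description
```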
To achieve error resilience, a paper by Wang and Lin entitled "Error resilient video coding using multiple description motion compensation" (IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 6, pp. 438-452, June 2002) describes one method of realizing multiple description coding. According to that method, the temporal predictor allows the encoder to use both past even frames and past odd frames when coding, so that receiving only one description at the decoder causes a mismatch between encoder and decoder. To overcome this problem, the mismatch error can be explicitly coded. The main benefit of allowing the encoder to use both the odd-frame and even-frame sequences for prediction is coding efficiency. The amount of redundancy can be controlled by changing the taps of the temporal filter. The disclosed method provides reasonable flexibility between the amount of redundancy and error resilience.
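The role of the temporal filter taps can be pictured with the simplified sketch below; the tap values, function names and the use of plain arrays are illustrative assumptions and do not reproduce the exact formulation of the cited paper. The coded mismatch is modelled here simply as the difference between the central and side predictions.

```python
# Simplified sketch of an MDMC-style temporal predictor.  Frames are assumed
# to be numpy-like arrays; the taps a1 and a2 are illustrative and govern the
# trade-off between coding efficiency (more cross-parity mixing) and
# redundancy (a larger mismatch signal to code).

def central_prediction(prev_frame, prev_prev_frame, a1=0.6, a2=0.4):
    """Central predictor: weighted combination of the two previous
    (motion-compensated) reconstructed frames, one of each parity."""
    return a1 * prev_frame + a2 * prev_prev_frame

def side_prediction(prev_same_parity_frame):
    """Side predictor: uses only the previous frame of the same parity,
    i.e. what is available when a single description is received."""
    return prev_same_parity_frame

def mismatch_signal(prev_frame, prev_prev_frame, a1=0.6, a2=0.4):
    """Difference between the central and side predictions: the mismatch
    that is explicitly coded so a single-description decoder does not drift."""
    return central_prediction(prev_frame, prev_prev_frame, a1, a2) - \
           side_prediction(prev_prev_frame)
```

In this toy model, shrinking a1 toward zero removes the cross-parity dependence and hence the mismatch, at the cost of prediction quality.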
One drawback of the method proposed by Wang and Lin is that it is confined to I and P frames only (no B frames). Another drawback is that it does not allow multi-frame prediction as in H.26L. These drawbacks limit the coding efficiency of MDMC and also require a completely proprietary implementation rather than the reuse of available codec modules.
The invention provides an improved multiple description coding (MDC) method and apparatus that overcomes the above drawbacks. Specifically, the coding method of the invention extends multiple description motion compensation (MDMC) by allowing multi-frame prediction and is not limited to I frames and P frames. In addition, the coding method of the invention extends MDMC for use with any conventional predictive coder, such as MPEG-2/4 and H.26L.
According to a first aspect of the invention, an improved MDMC encoder is provided that comprises three predictive encoders: a top predictive encoder, a middle (central) predictive encoder and a bottom predictive encoder. The input frames are supplied to these encoders as three separate inputs. The input frames are supplied to the central encoder. In addition, the input frames are divided into two subframe streams, the first subframe stream containing only the odd frames and the second subframe stream containing only the even frames. The first subframe stream, consisting of the odd frames, is input to the top encoder and encoded to obtain the coded odd-frame sequence, and the second subframe stream, consisting of the even frames, is input to the bottom encoder and encoded to obtain the coded even-frame sequence. Note that other embodiments may divide the frames according to different criteria; for example, an unbalanced division may be used in which two out of every three frames are coded by the top encoder and every third frame is coded by the bottom encoder. The original, unsplit stream of input frames is applied to the central encoder, which computes a prediction of the odd frames from the even frames. The central encoder also separately computes a prediction of the even frames from the odd frames. Prediction residues between the central encoder and the first and second side encoders are then computed. The MDMC encoder of the invention outputs the first prediction residue, which corresponds to the even-frame prediction, together with the output of the top encoder, and outputs the second prediction residue, which corresponds to the odd-frame prediction, together with the output of the bottom encoder.
According to a second aspect of the invention, a method of encoding a video signal representing a frame sequence is provided, the method comprising: dividing the frame sequence into a first subsequence and a second subsequence, applying the first subsequence to a first side encoder, applying the second subsequence to a second side encoder, applying the original, undivided frame sequence to a central encoder, calculating a first prediction residue between the output of the first side encoder and the output of the central encoder, calculating a second prediction residue between the output of the second side encoder and the output of the central encoder, merging the first prediction residue and the output of the first side encoder into a first substream of data, merging the second prediction residue and the output of the second side encoder into a second substream of data, and then transmitting the first substream of data and the second substream of data separately.
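Read together, the two aspects amount to the encoding flow sketched below. The helpers side_encode(), central_predict() and pack() are placeholders for whatever conventional predictive codec and bitstream packer are plugged in; they are assumptions, not an existing API, and the residue lines simply mirror equations (1) and (2) given later in the detailed description.

```python
# Minimal sketch of the MDMC encoding flow of the first and second aspects.

def mdmc_encode(frames, side_encode, central_predict, pack):
    odd_frames, even_frames = frames[1::2], frames[0::2]

    coded_odd = side_encode(odd_frames)      # first (top) side encoder
    coded_even = side_encode(even_frames)    # second (bottom) side encoder

    # The central encoder sees the full, unsplit sequence and predicts each
    # parity from the other.
    even_pred, mv1 = central_predict(frames, target="even")
    odd_pred, mv2 = central_predict(frames, target="odd")

    # Prediction residues between the central and side encoders.
    residue1 = [p - c for p, c in zip(even_pred, coded_odd)]
    residue2 = [p - c for p, c in zip(odd_pred, coded_even)]

    substream1 = pack(coded_odd, residue1, mv1)    # first substream of data
    substream2 = pack(coded_even, residue2, mv2)   # second substream of data
    return substream1, substream2                  # transmitted independently
```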
The advantages of the invention include:
(1) Any conventional predictive encoder can be used as the top encoder and the bottom encoder. Moreover, the top and bottom predictive encoders can advantageously include B frames and multiple-prediction motion compensation;
(2) Any of the top, middle and bottom predictive encoders can be a scalable encoder (for example, an FGS-like (fine granularity scalable) encoder, an encoder employing temporal scalability, or a data-partitioning-like encoder in which the motion vectors (MVs) are sent first). For example, in the case where only the central encoder is scalable, the central encoder allows an amount of information matched to the channel to be transmitted. In the extreme case where very little bandwidth is determined to be available, only the information coded by the side encoders is transmitted; when additional bandwidth becomes available, the scalable central encoder is used to transmit as much of the mismatch signal as the channel allows (see the sketch following this list).
(3) To limit the complexity of the system, the prediction of the current even/odd frame from the odd/even frame sequence that is used to determine the mismatch signal can be obtained from B frames.
(4) Rather than computing the side prediction error (i.e., the error between the even and odd frames used by the side encoder) and coding it as is conventional, the mismatch between the side prediction error and the central error (i.e., the error between the current frame and the prediction from the two preceding frames) is also computed, or the central error itself is computed.
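Item (2) of the list above can be pictured with the following sketch of bandwidth-adaptive substream assembly; the byte-oriented framing, the parameter names and the idea of an externally measured bandwidth budget are assumptions made for illustration.

```python
# Sketch of the bandwidth-adaptive behaviour described in item (2): when the
# central encoder produces an embedded (FGS-like) mismatch bitstream, only as
# much of it is sent as the determined network condition allows.

def assemble_substream(coded_side_frames: bytes,
                       coded_motion_vectors: bytes,
                       scalable_mismatch: bytes,
                       available_bytes: int) -> bytes:
    """Always send the side-encoded frames and motion vectors; truncate the
    scalable mismatch layer to fit whatever budget remains."""
    base = coded_side_frames + coded_motion_vectors
    budget = max(0, available_bytes - len(base))
    return base + scalable_mismatch[:budget]
```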
In the accompanying drawing, in which like reference numerals denote corresponding elements:
Fig. 1 shows an MDMC encoder according to an embodiment of the invention.
Multiple description coding (MDC) refers to a compression format whose aim is to encode one input stream into a plurality of separate bitstreams, commonly referred to as multiple descriptions. Each of these separate bitstreams can be decoded independently of the others. Specifically, if the decoder receives any one bitstream, it can decode that bitstream to obtain a useful signal without needing access to any other bitstream. MDC also has the property that the decoded signal quality improves as more bitstreams are correctly received. For example, suppose a video is encoded with MDC into a total of N streams. As long as the decoder receives any one of the N streams, it can decode a useful version of the video. If the decoder receives two streams, it can decode a version of the video that is improved relative to receiving only one stream. This quality improvement continues until all N streams are received, at which point the best quality can be reconstructed.
There are many ways to realize MDC video coding. One method is to encode different frames independently into different streams. For example, each frame of a video sequence can be encoded independently of the other frames using only intraframe coding, such as JPEG, JPEG-2000 or the I-frame coding of any video coding standard (e.g., MPEG-1/2/4, H.261/3). For example, all the even frames can be sent in stream 1 and all the odd frames in stream 2. Because each frame can be decoded independently of the other frames, each bitstream can also be decoded independently of the other stream. This simple form of MDC video coding has the properties described above, but its compression efficiency is not very high because it relies solely on intraframe coding.
Before describing Fig. 1 in detail, it is useful to recall the hierarchy of pixels in a digitized image and some definitions of the prediction strategies adopted in the MPEG-2 standard. Both luminance and chrominance samples (pixels) are grouped into blocks, each formed by an 8 × 8 matrix (each block contains 8 rows of pixels, and each row contains 8 pixels); a number of luminance and chrominance blocks (e.g., 4 luminance blocks and 2 corresponding chrominance blocks) form a macroblock. A digitized image therefore comprises a matrix of macroblocks whose size depends on the selected level (that is, on the resolution) and on the mains frequency: for example, at a mains frequency of 50 Hz it can range from a minimum of 18 × 32 macroblocks to a maximum of 72 × 120 macroblocks. An image can have a frame structure (in which the pixels of successive lines belong to different fields) or a field structure (in which all pixels belong to the same field); accordingly, a macroblock can also have a frame structure or a field structure. Images are in turn organized into groups of pictures, in which the first image is always an I picture, followed by a number of B pictures (bidirectionally interpolated pictures, which are subject to forward prediction, backward prediction or both, where "forward" means that the prediction is based on a previous picture and "backward" means that the prediction is based on a future reference frame) and then a P picture; the P picture is used to predict the B pictures and is encoded immediately after the I picture.
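As a worked example of this macroblock arithmetic, the short computation below assumes a 4:2:0 standard-definition picture of 720 × 576 luminance samples (a value chosen for illustration; the text above only gives the 50 Hz extremes):

```python
# Worked example of the macroblock arithmetic described above, assuming a
# 4:2:0 picture of 720 x 576 luminance samples.

width, height = 720, 576
mb_size = 16                                   # a macroblock covers 16 x 16 luma samples
mb_cols, mb_rows = width // mb_size, height // mb_size
macroblocks = mb_cols * mb_rows                # 45 * 36 = 1620 macroblocks

blocks_per_mb = 4 + 2                          # 4 luminance + 2 chrominance 8 x 8 blocks
samples_per_block = 8 * 8
print(macroblocks, macroblocks * blocks_per_mb * samples_per_block)
# -> 1620 macroblocks, 622080 coded samples per picture
```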
Referring now to Fig. 1, a signal source (not shown) provides the encoder 200 with a frame sequence 201 (of frame structure) arranged in coding order (an order in which the reference pictures are available), so that these frames can subsequently be used for picture prediction. The complete frame sequence 201 is received by a motion estimation unit (not shown), which computes, for each macroblock of the image being encoded, one or more motion vectors together with a cost or error associated with the or each vector, and outputs them. The encoder 200 comprises a first side encoder (side encoder 1) 202, a central encoder 204 and a second side encoder 206. The complete frame sequence 201 is applied to the central encoder 204. A first subset 210 of the full frame sequence 201, which in the present embodiment consists of the odd-frame subsequence of the full frame sequence 201, is applied to the first side encoder 202. A second subset 220 of the full frame sequence 201, which in the present embodiment consists of the even-frame subsequence of the full frame sequence 201, is applied to the second side encoder 206.
The predictive coding operations are outlined below.
A. The first side encoder 202
The odd-frame subsequence 210, which comprises a subset of the input sequence 201, is applied to the first side encoder 202. It should be noted that the first side encoder 202 can advantageously be implemented as any conventional predictive coder (e.g., MPEG-1/2/4, H.261/3). The first side encoder 202 encodes the odd-frame subsequence 210 and outputs the coded odd-frame subsequence 211. The coded odd-frame subsequence 211 is included as a component of the first substream of data 245 to be output. The coded odd-frame subsequence 211 is also provided as an input to central encoder sub-module 230, as described below.
B. The second side encoder 206
The even-frame subsequence 220, which comprises a subset of the input sequence 201, is applied to the second side encoder 206. It should be noted that the second side encoder 206, like the first side encoder 202, can advantageously be implemented as any conventional predictive coder (e.g., MPEG-1/2/4, H.261/3). The second side encoder 206 encodes the even-frame subsequence 220 and outputs the coded even-frame subsequence 212. The coded even-frame subsequence 212 is included as a component of the second substream of data 255 to be output. The coded even-frame subsequence 212 is also provided as an input to central encoder sub-module 232, as described below.
C. The central encoder 204
The complete frame sequence 201 is applied to the central encoder 204.
Central encoder sub-module 250 computes a first set of motion vectors 214 and also computes and codes an even-frame prediction sequence 215, which consists of predictions of the even frames made from the odd frames of the input sequence 201. Central encoder sub-module 250 outputs the even-frame prediction sequence 215 and the first motion vector sequence 214, both of which are provided as inputs to central encoder sub-module 230.
Central encoder sub-module 260 computes a second set of motion vectors 216 and also computes and codes an odd-frame prediction sequence 217, which consists of predictions of the odd frames made from the even frames of the input sequence 201. Central encoder sub-module 260 outputs the odd-frame prediction sequence 217 and the second motion vector sequence 216, both of which are provided as inputs to central encoder sub-module 232.
Central encoder sub-module 230 performs two functions or processes. The first process is to encode the first set of motion vectors 214 received from sub-module 250, outputting the first set of coded motion vectors 218. The second function or process is to compute the first prediction residue 221, which can be calculated as follows:
first prediction residue = e_c - e_s    (1)
where e_c is the even-frame prediction sequence 215 and e_s is the coded odd-frame subsequence 211.
Central encoder sub-module 230 outputs the coded first prediction residue 221 together with the first set of coded motion vectors 218. These outputs are merged with the coded odd-frame sequence 211 (point A) and output together as the first substream of data 245.
Similarly, the second prediction residue, to be included in the second substream of data 255, is calculated as follows:
second prediction residue = e_c - e_s    (2)
where e_c is the odd-frame prediction sequence 217 and e_s is the coded even-frame subsequence 212.
Central encoder sub-module 232 outputs the coded second prediction residue 222 together with the second set of coded motion vectors 219. These outputs are merged with the coded even-frame sequence 212 (point B) and output as the second substream of data 255.
The above description of the preferred embodiments of the present invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are obviously possible in light of the above teaching. Such modifications and variations will be apparent to those skilled in the art and are included within the scope of the invention as defined by the appended claims.
Claims (15)
1. A coding method for encoding an input frame sequence (201), the method comprising the steps of:
a) encoding a first sequence of subframes (210) from said input frame sequence (201) to obtain a coded first sequence of subframes (211);
b) encoding a second sequence of subframes (220) from said input frame sequence (201) to obtain a coded second sequence of subframes (212);
c) calculating a first predicted frame sequence (215) from said second sequence of subframes (220);
d) calculating a second predicted frame sequence (217) from said first sequence of subframes (210);
e) calculating a first set of motion vectors (214) from said first predicted frame sequence (215);
f) calculating a second set of motion vectors (216) from said second predicted frame sequence (217);
g) calculating a first prediction residue as the error between said first predicted frame sequence (215) and said coded first sequence of subframes (211);
h) calculating a second prediction residue as the error between said second predicted frame sequence (217) and said coded second sequence of subframes (212);
i) encoding said first prediction residue, said second prediction residue, said first set of motion vectors (214) and said second set of motion vectors (216);
j) determining a network condition;
k) scalably merging said coded first prediction residue (218), said coded first set of motion vectors (221) and said coded first sequence of subframes (211) into a first substream of data (245) in accordance with said determined network condition;
l) scalably merging said coded second prediction residue (219), said coded second set of motion vectors (222) and said coded second sequence of subframes (212) into a second substream of data (255) in accordance with said determined network condition; and
m) transmitting said first and second substreams of data (245, 255) independently.
2. The method of claim 1, characterized in that said determined network condition is channel bandwidth determination data.
3. The method of claim 1, characterized by comprising, before said step (a), a preliminary step of arranging said input frame sequence (201) in predictive coding order.
4. The method of claim 1, characterized in that said first sequence of subframes (210) includes only the odd frames from said input frame sequence (201).
5. The method of claim 1, characterized in that said second sequence of subframes (220) includes only the even frames from said input frame sequence (201).
6. The method of claim 1, characterized in that said second sequence of subframes (220) comprises those frames from said input frame sequence (201) that are not included in said first sequence of subframes (210).
7. The method of claim 1, characterized in that said first and second sequences of subframes (210, 220) are selected according to a user preference.
8. The method of claim 1, characterized in that said input frame sequence comprises intra frames (I), predictive frames (P) and bidirectional frames (B).
9. An encoder (200) for encoding an input frame sequence (201), said encoder (200) comprising an arrangement in which:
a) in a first side encoder (202), a first sequence of subframes (210) from said input frame sequence (201) is encoded;
b) in a second side encoder (206), a second sequence of subframes (220) from said input frame sequence (201) is encoded;
c) in a central encoder (204), a first predicted frame sequence (215) is calculated from said second sequence of subframes (220);
d) in said central encoder (204), a second predicted frame sequence (217) is calculated from said first sequence of subframes (210);
e) in said central encoder (204), a first set of motion vectors (214) is calculated from said first predicted frame sequence (215);
f) in said central encoder (204), a second set of motion vectors (216) is calculated from said second predicted frame sequence (217);
g) in said central encoder (204), a first prediction residue is calculated as the error between said first predicted frame sequence (215) and said coded first sequence of subframes (211);
h) in said central encoder (204), a second prediction residue is calculated as the error between said second predicted frame sequence (217) and said coded second sequence of subframes (212);
i) in said central encoder (204), said first prediction residue, said second prediction residue, said first set of motion vectors (214) and said second set of motion vectors (216) are encoded;
j) a network condition is determined;
k) said coded first prediction residue (218), said coded first set of motion vectors (221) and said coded first sequence of subframes (211) are scalably merged into a first substream of data (245) in accordance with said determined network condition;
l) said coded second prediction residue (219), said coded second set of motion vectors (222) and said coded second sequence of subframes (212) are scalably merged into a second substream of data (255) in accordance with said determined network condition; and
m) said first and second substreams of data (245, 255) are transmitted independently from said encoder (200).
10. The encoder of claim 9, characterized in that said first side encoder (202), said second side encoder (206) and said central encoder (204) are conventional predictive encoders.
11. The encoder of claim 10, characterized in that said first side encoder (202), said second side encoder (206) and said central encoder (204) are scalable encoders.
12. The encoder of claim 10, characterized in that said conventional predictive encoders are selected from the group comprising: MPEG-1, MPEG-2, MPEG-4, MPEG-7, H.261, H.262, H.263, H.263+, H.263++ and H.26L encoders.
13. The encoder of claim 9, characterized in that said encoder (200) is included in a telecommunications transmitter of a wireless network.
14. A system for encoding an input frame sequence (201), said system comprising:
means for encoding a first sequence of subframes (210) from said input frame sequence (201) to obtain a coded first sequence of subframes (211);
means for encoding a second sequence of subframes (220) from said input frame sequence (201) to obtain a coded second sequence of subframes (212);
means for calculating a first predicted frame sequence (215) from said second sequence of subframes (220);
means for calculating a second predicted frame sequence (217) from said first sequence of subframes (210);
means for calculating a first set of motion vectors (214) from said first predicted frame sequence (215);
means for calculating a second set of motion vectors (216) from said second predicted frame sequence (217);
means for calculating a first prediction residue as the error between said first predicted frame sequence (215) and said coded first sequence of subframes (211);
means for calculating a second prediction residue as the error between said second predicted frame sequence (217) and said coded second sequence of subframes (212);
means for encoding said first prediction residue, said second prediction residue, said first set of motion vectors (214) and said second set of motion vectors (216);
means for determining a network condition;
means for scalably merging said coded first prediction residue (218), said coded first set of motion vectors (221) and said coded first sequence of subframes (211) into a first substream of data (245) in accordance with said determined network condition;
means for scalably merging said coded second prediction residue (219), said coded second set of motion vectors (222) and said coded second sequence of subframes (212) into a second substream of data (255) in accordance with said determined network condition; and
means for transmitting said first and second substreams of data (245, 255) independently.
15. The system of claim 14, characterized by further comprising: means for arranging said input frame sequence (201) in a predefined order.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US39975502P | 2002-07-31 | 2002-07-31 | |
US60/399,755 | 2002-07-31 | ||
US46178003P | 2003-04-10 | 2003-04-10 | |
US60/461,780 | 2003-04-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN1672421A true CN1672421A (en) | 2005-09-21 |
Family
ID=31498603
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA038181967A Pending CN1672421A (en) | 2002-07-31 | 2003-07-24 | Method and apparatus for performing multiple description motion compensation using hybrid predictive codes |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP1527607A1 (en) |
JP (1) | JP2005535219A (en) |
KR (1) | KR20050031460A (en) |
CN (1) | CN1672421A (en) |
AU (1) | AU2003249461A1 (en) |
WO (1) | WO2004014083A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009056071A1 (en) * | 2007-10-26 | 2009-05-07 | Huawei Technologies Co., Ltd. | A multiple description coding and decoding method, system and apparatus based on frame |
CN101371294B (en) * | 2005-12-19 | 2012-01-18 | 杜比实验室特许公司 | Method for processing signal and equipment for processing signal |
CN105103554A (en) * | 2013-03-28 | 2015-11-25 | 华为技术有限公司 | Method for protecting video frame sequence against packet loss |
CN106961607A (en) * | 2017-03-28 | 2017-07-18 | 山东师范大学 | Time-domain lapped transform based on JND is multiple description coded, decoding method and system |
CN107027028A (en) * | 2017-03-28 | 2017-08-08 | 山东师范大学 | Random offset based on JND quantifies the method and system of multiple description coded decoding |
CN110740380A (en) * | 2019-10-16 | 2020-01-31 | 腾讯科技(深圳)有限公司 | Video processing method and device, storage medium and electronic device |
CN114640867A (en) * | 2022-05-20 | 2022-06-17 | 广州万协通信息技术有限公司 | Video data processing method and device based on video stream authentication |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1574995A1 (en) * | 2004-03-12 | 2005-09-14 | Thomson Licensing S.A. | Method for encoding interlaced digital video data |
EP1638337A1 (en) | 2004-09-16 | 2006-03-22 | STMicroelectronics S.r.l. | Method and system for multiple description coding and computer program product therefor |
ITTO20040780A1 (en) | 2004-11-09 | 2005-02-09 | St Microelectronics Srl | PROCEDURE AND SYSTEM FOR THE TREATMENT OF SIGNALS TO MULTIPLE DESCRIPTIONS, ITS COMPUTER PRODUCT |
JP2008521321A (en) | 2004-11-17 | 2008-06-19 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Robust wireless multimedia transmission in multiple-input multiple-output (MIMO) systems assisted by channel state information |
EP1879399A1 (en) * | 2006-07-12 | 2008-01-16 | THOMSON Licensing | Method for deriving motion data for high resolution pictures from motion data of low resolution pictures and coding and decoding devices implementing said method |
US8897322B1 (en) * | 2007-09-20 | 2014-11-25 | Sprint Communications Company L.P. | Enhancing video quality for broadcast video services |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6700933B1 (en) * | 2000-02-15 | 2004-03-02 | Microsoft Corporation | System and method with advance predicted bit-plane coding for progressive fine-granularity scalable (PFGS) video coding |
-
2003
- 2003-07-24 KR KR1020057001444A patent/KR20050031460A/en not_active Application Discontinuation
- 2003-07-24 CN CNA038181967A patent/CN1672421A/en active Pending
- 2003-07-24 JP JP2004525701A patent/JP2005535219A/en active Pending
- 2003-07-24 EP EP03766578A patent/EP1527607A1/en not_active Withdrawn
- 2003-07-24 WO PCT/IB2003/003436 patent/WO2004014083A1/en not_active Application Discontinuation
- 2003-07-24 AU AU2003249461A patent/AU2003249461A1/en not_active Abandoned
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101371294B (en) * | 2005-12-19 | 2012-01-18 | 杜比实验室特许公司 | Method for processing signal and equipment for processing signal |
WO2009056071A1 (en) * | 2007-10-26 | 2009-05-07 | Huawei Technologies Co., Ltd. | A multiple description coding and decoding method, system and apparatus based on frame |
CN101420607B (en) * | 2007-10-26 | 2010-11-10 | 华为技术有限公司 | Method and apparatus for multi-description encoding and decoding based on frame |
CN105103554A (en) * | 2013-03-28 | 2015-11-25 | 华为技术有限公司 | Method for protecting video frame sequence against packet loss |
US10425661B2 (en) | 2013-03-28 | 2019-09-24 | Huawei Tehcnologies Co., Ltd. | Method for protecting a video frame sequence against packet loss |
CN106961607A (en) * | 2017-03-28 | 2017-07-18 | 山东师范大学 | Time-domain lapped transform based on JND is multiple description coded, decoding method and system |
CN107027028A (en) * | 2017-03-28 | 2017-08-08 | 山东师范大学 | Random offset based on JND quantifies the method and system of multiple description coded decoding |
CN106961607B (en) * | 2017-03-28 | 2019-05-28 | 山东师范大学 | Time-domain lapped transform based on JND is multiple description coded, decoded method and system |
CN107027028B (en) * | 2017-03-28 | 2019-05-28 | 山东师范大学 | Random offset based on JND quantifies multiple description coded, decoded method and system |
CN110740380A (en) * | 2019-10-16 | 2020-01-31 | 腾讯科技(深圳)有限公司 | Video processing method and device, storage medium and electronic device |
CN114640867A (en) * | 2022-05-20 | 2022-06-17 | 广州万协通信息技术有限公司 | Video data processing method and device based on video stream authentication |
Also Published As
Publication number | Publication date |
---|---|
KR20050031460A (en) | 2005-04-06 |
JP2005535219A (en) | 2005-11-17 |
EP1527607A1 (en) | 2005-05-04 |
WO2004014083A1 (en) | 2004-02-12 |
AU2003249461A1 (en) | 2004-02-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101189882B (en) | Method and apparatus for encoder assisted-frame rate up conversion (EA-FRUC) for video compression | |
US7369610B2 (en) | Enhancement layer switching for scalable video coding | |
US8532187B2 (en) | Method and apparatus for scalably encoding/decoding video signal | |
KR100888963B1 (en) | Method for scalably encoding and decoding video signal | |
KR100888962B1 (en) | Method for encoding and decoding video signal | |
US9338453B2 (en) | Method and device for encoding/decoding video signals using base layer | |
KR20060105407A (en) | Method for scalably encoding and decoding video signal | |
KR20060063611A (en) | Method for decoding an image block | |
CN1672421A (en) | Method and apparatus for performing multiple description motion compensation using hybrid predictive codes | |
US20060133482A1 (en) | Method for scalably encoding and decoding video signal | |
US20060120454A1 (en) | Method and apparatus for encoding/decoding video signal using motion vectors of pictures in base layer | |
US20080008241A1 (en) | Method and apparatus for encoding/decoding a first frame sequence layer based on a second frame sequence layer | |
KR100883591B1 (en) | Method and apparatus for encoding/decoding video signal using prediction information of intra-mode macro blocks of base layer | |
Pereira | Distributed video coding: Basics, main solutions and trends | |
US20070223573A1 (en) | Method and apparatus for encoding/decoding a first frame sequence layer based on a second frame sequence layer | |
US20070280354A1 (en) | Method and apparatus for encoding/decoding a first frame sequence layer based on a second frame sequence layer | |
US20070242747A1 (en) | Method and apparatus for encoding/decoding a first frame sequence layer based on a second frame sequence layer | |
US20060159176A1 (en) | Method and apparatus for deriving motion vectors of macroblocks from motion vectors of pictures of base layer when encoding/decoding video signal | |
US20060133497A1 (en) | Method and apparatus for encoding/decoding video signal using motion vectors of pictures at different temporal decomposition level | |
CN1202673C (en) | Enhanced type fineness extensible video coding structure | |
KR20060101847A (en) | Method for scalably encoding and decoding video signal | |
US20060133498A1 (en) | Method and apparatus for deriving motion vectors of macroblocks from motion vectors of pictures of base layer when encoding/decoding video signal | |
US20060133499A1 (en) | Method and apparatus for encoding video signal using previous picture already converted into H picture as reference picture of current picture and method and apparatus for decoding such encoded video signal | |
US20060120457A1 (en) | Method and apparatus for encoding and decoding video signal for preventing decoding error propagation | |
US20060133488A1 (en) | Method for encoding and decoding video signal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |