
WO2000067487A1 - Low bit rate video coding method and system - Google Patents

Low bit rate video coding method and system

Info

Publication number
WO2000067487A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
blocks
previous
block
encoding
Prior art date
Application number
PCT/EP2000/003773
Other languages
French (fr)
Inventor
Daniel Snook
Jean Gobert
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to EP00927072A priority Critical patent/EP1092322A1/en
Priority to JP2000614740A priority patent/JP2002543715A/en
Priority to KR1020007015054A priority patent/KR20010071692A/en
Publication of WO2000067487A1 publication Critical patent/WO2000067487A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/577 Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103 Selection of coding mode or of prediction mode
    • H04N 19/107 Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N 19/114 Adapting the group of pictures [GOP] structure, e.g. number of B-frames between two anchor frames
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136 Incoming video signal characteristics or properties
    • H04N 19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N 19/142 Detection of scene cut or scene change
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N 19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/177 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a group of pictures [GOP]
    • H04N 19/179 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a scene or a shot
    • H04N 19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N 19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H04N 19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N 19/87 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

In the Improved PB-frames mode, one of the options of the H.263+ Recommendation, a macroblock of a B-frame may be encoded according to a forward, a backward or a bidirectional prediction mode. The invention relates to a method of encoding a sequence of pictures that defines a strategy for choosing, among these three possible modes, the prediction mode used to encode a B-macroblock. This strategy is based upon SAD (Sum of Absolute Difference) calculations and motion vector coherence, and allows backward prediction to be used when scene cuts occur. In the proposed strategy, the SAD of the bidirectional prediction is not necessarily derived when the motion is non-linear, which reduces the amount of calculation and the CPU burden. The invention also relates to an encoding system for carrying out said method and including a computer-readable medium storing instructions that allow the implementation of this method.

Description

Low bit rate video coding method and system.
FIELD OF THE INVENTION
The invention relates to a method of encoding a source sequence of pictures comprising the steps of: dividing the source sequence into a set of groups of pictures, each group of pictures comprising a first frame, hereafter referred to as I-frame, followed by at least a pair of frames, hereafter referred to as PB-frames; dividing each I-frame and PB-frame into spatially non-overlapping blocks of pixels; encoding the blocks of said I-frame, hereafter referred to as the I-blocks, independently from any other frame in the group of pictures; deriving motion vectors and corresponding predictors for the blocks from the temporally second frame of said PB-frame, hereafter referred to as the P-blocks, based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame; predictively encoding the P-blocks based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame; predictively encoding the blocks of the first frame of said PB-frame, hereafter referred to as the B-blocks.
The invention also relates to a system for carrying out said method.
The invention may be used, for example, in video coding at a very low bit rate.
BACKGROUND ART
Standardization of low bitrate video telephony products and technology by the ITU (International Telecommunication Union) is compiled in the standards H.320 and H.324. These standards describe all the requirements to be satisfied for the different components: audio, video, multiplexer, control protocol and modem. H.320 is dedicated to videoconferencing or videophony over ISDN (Integrated Services Digital Network) phone lines. H.324 is aimed at videophony over GSTN (General Switched Telephone Network) analog phone lines. Both standards support Recommendation H.263 for video coding, which describes compression of low bit rate video signals. Recommendation H.263 comprises four optional modes for a video coder. One of these optional modes is called the PB-frames mode, which gives a way of encoding a PB-frame. A second version of Recommendation H.263, called H.263+, was developed to improve the image quality and comprises some new options. Thus, an option called Improved PB-frames mode, which is an improvement of the original PB-frames mode, provides a new way of encoding a PB-frame. A sequence of picture frames may be composed of a series of I-frames and PB-frames. An I-frame comprises a picture coded according to an Intra mode, which means that an I-frame is coded using spatial redundancy within the picture without any reference to another picture. A P-frame is predictively encoded from a previous P or I-picture. Thus, when coding a P-picture, temporal redundancy between the P-picture and a previous picture used as a picture reference, which is mostly the previous I or P-picture, is used in addition to the spatial redundancy as for an I-picture. A B-picture has two temporal references and is usually predictively encoded from the previous reconstructed P or I-picture and the P-picture currently being reconstructed. A PB-frame comprises two successive pictures, a first B-frame and a subsequent P-frame, coded as one unit.
A method of coding a PB-frame in accordance with the PB-frame mode is illustrated in Fig.1. It shows a PB-frame composed of a B-frame B and a P-frame P2. The B-frame B is surrounded by a previous P-picture P1 and the P-picture P2 currently being reconstructed. There is shown in this example a P-picture P1; P1 may also be an I-picture and serves as a picture reference for the encoding of the P-picture P2 and the B-picture B. A B-block of the B-frame, in the PB-frame mode, can be subjected to forward or bidirectional predictive encoding. The forward predictive coding of a B-block is based on the previous I or P-picture P1, whereas its bidirectional predictive coding is based on both the previous I or P-picture P1 and the P-picture P2 currently being reconstructed. A set of motion vectors MV is derived for the P-picture P2 of the PB-frame with reference to the picture P1. In fact, for each macro block of P2, a macro block of P1 is associated by block matching and a corresponding motion vector MV is derived. Motion vectors for the B-block are derived from this set of motion vectors MV. Therefore, a forward motion vector MVf and a backward motion vector MVb are calculated for a B-block as follows:
MVf = (TRb x MV) / TRd (1)
MVb = ((TRb - TRd) x MV) / TRd (2)
MVb = MVf - MV (3)
where TRb is the increment in the temporal reference of the B-picture from the previous P-frame P1, and
TRd is the increment in the temporal reference of the current P-frame P2 from the previous I or P-picture P1. Fig.1 shows a macro block AB of the B-picture. This macro block AB has the same location as a macro block A2B2, Prec, of P2 that was previously reconstructed. A forward motion vector MV is associated to the macro block AB from a macro block A1B1, which belongs to P1. A forward motion vector MVf and a backward motion vector MVb, both associated to AB, are derived from MV as shown in the relations (1) to (3). The macro blocks of P1 and P2 associated to the macro block AB by the forward vector MVf and by the backward vector MVb are respectively A1M1 and A2M2, as illustrated in Fig.1.
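As a purely illustrative aid to relations (1) to (3), the short Python sketch below derives MVf and MVb from the P-macroblock vector MV and the temporal references TRb and TRd; the function name, the tuple representation of vectors and the plain floating-point arithmetic are assumptions made for readability and do not reproduce the integer rounding rules of Recommendation H.263.

# Sketch of relations (1) to (3): deriving the forward and backward vectors
# of a B-macroblock from the motion vector MV of the co-located P-macroblock.
# Plain float arithmetic is used for clarity; H.263 specifies its own
# integer rounding, so this only approximates the standard.

def derive_b_vectors(mv, trb, trd):
    """mv: (x, y) vector of the co-located P-macroblock.
    trb: temporal increment of the B-picture from the previous I/P-picture.
    trd: temporal increment of the current P-picture from the previous I/P-picture."""
    mvx, mvy = mv
    mvf = (trb * mvx / trd, trb * mvy / trd)                   # relation (1)
    mvb = ((trb - trd) * mvx / trd, (trb - trd) * mvy / trd)   # relation (2)
    # Relation (3) gives the same backward vector as MVb = MVf - MV.
    assert abs(mvb[0] - (mvf[0] - mvx)) < 1e-9
    return mvf, mvb

# Example: P-vector (+6, -3), B-picture halfway between references (TRb=1, TRd=2).
print(derive_b_vectors((6, -3), trb=1, trd=2))   # ((3.0, -1.5), (-3.0, 1.5))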
The choice between bidirectional prediction and forward prediction is made at the block level in the B-picture and depends on where MVb points. An MB part of the B-block AB, for which MVb points inside Prec, is then bidirectionally predicted, and the prediction for this part of the B-block is:
MB(i,j) = [A1M1(i,j) + A2M2(i,j)] / 2 (4), where i and j are the spatial coordinates of the pixels.
An AM part of the B-block AB, for which MVb points outside Prec, is forward-predicted, and the prediction for this part of the B-block AB is: AM(i,j) = A1M1(i,j) (5)
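The per-pixel split of relations (4) and (5) can be sketched as follows; the 16x16 array layout, the boolean mask marking the pixels for which MVb points inside Prec, and the function name are assumptions introduced only for illustration.

import numpy as np

# Sketch of relations (4) and (5): pixels whose backward vector lands inside
# the reconstructed block Prec are averaged between the forward reference
# block A1M1 and the backward reference block A2M2 (relation (4)); the
# remaining pixels use the forward reference only (relation (5)).
def predict_b_block(a1m1, a2m2, inside_prec):
    """a1m1, a2m2: 16x16 reference blocks; inside_prec: 16x16 boolean mask."""
    bidirectional = (a1m1 + a2m2) / 2.0
    forward_only = a1m1
    return np.where(inside_prec, bidirectional, forward_only)

# Toy example with constant blocks: the left half is assumed to fall inside Prec.
a1m1 = np.full((16, 16), 100.0)
a2m2 = np.full((16, 16), 120.0)
mask = np.zeros((16, 16), dtype=bool)
mask[:, :8] = True
pred = predict_b_block(a1m1, a2m2, mask)
print(pred[0, 0], pred[0, 15])   # 110.0 (bidirectional part) and 100.0 (forward part)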
An improved method of encoding a PB-frame according to the PB-frame mode is described in European Patent Application EP 0 782 343 A2. It discloses a predictive method of coding the blocks in the bidirectionally predicted frame, which method introduces a delta motion vector added to or subtracted from the derived forward and backward motion vectors respectively. The described method may be relevant when the motion in a sequence of pictures is non-linear; however, it is totally unsuitable for a sequence of pictures where scene cuts occur. Indeed, when there is a scene cut between a previous P-frame and the B-part of a PB-frame, bidirectional and forward prediction give an erroneous coding. Besides, the implementation of the delta vector, which is costly in terms of CPU burden, may result in unnecessary, expensive and complicated calculations.
SUMMARY OF THE INVENTION
It is an object of the invention to improve the efficiency of existing coding methods while decreasing the CPU burden, and, more particularly, to provide an efficient strategy which makes it possible to choose the most suitable prediction mode for the coding of a given macro block of a B-frame.
Thus, the encoding of the B-blocks comprises for each B-block in series the steps of:
- deriving the minimum of the sum of absolute difference for the B-block based on the I-blocks in the previous I-frame or on the P-blocks in the previous PB-frame, hereafter referred to as SADf;
- deriving the sum of absolute difference for the B-block and the P-block in the P-frame of the PB-frame with the same location as the B-block, hereafter referred to as SADb;
- when SADf is greater than SADb, predictively encoding the B-block based on the P-blocks of the second frame of the PB-frame;
- when SADf is lower than SADb:
  - deriving, for the P-block with the same location as the B-block, the difference between said motion vector and said predictor;
  - when the difference obtained is greater than a predetermined threshold, predictively encoding the B-block based on the I-blocks or the P-blocks in the previous PB-frame;
  - when the difference obtained is smaller than the predetermined threshold, predictively encoding the B-block based on the P-blocks of the second frame of the PB-frame and the I-blocks or the P-blocks in the previous PB-frame.
For the coding of a B-block, the claimed method gives a strategy for the choice of the prediction mode to be used among the forward, backward and bidirectional modes. The choice is based on SAD (Sum of Absolute Difference) calculations and motion vector coherence. The strategy relies on a specific order in the comparisons of the SAD values for the three prediction modes and on the introduction of motion coherence. This motion vector coherence criterion makes it possible to avoid the calculation of SADbidirectional for the choice of bidirectional prediction, which is CPU-consuming. The proposed method has the main advantage of not favoring bidirectional prediction and allows backward prediction to be performed when there is no motion. Thus, the method leads to a suitable choice of prediction mode for a given block of a B-frame.
In a preferred embodiment of the invention, a method according to the invention may be carried out either by a system constituted by wired electronic circuits that perform the various steps of the proposed method, or, at least partly, by means of a set of instructions stored in a computer-readable medium.
BRIEF DESCRIPTION OF THE DRAWINGS
The particular aspects of the invention will now be explained with reference to the embodiments described hereinafter and considered in connection with the accompanying drawings, in which:
Fig.1 illustrates a prior art decoding method according to the PB-frame mode;
Fig.2 shows a sequence of pictures for encoding;
Fig.3 is a block diagram of the various steps of a coding system;
Fig.4 illustrates how the predictor of a motion vector is defined;
Fig.5 is a block diagram of the various steps in the encoding of a B-block leading to the choice of a prediction mode in accordance with the invention.
DETAILED DESCRIPTION OF THE INVENTION
A misuse of the word "block" may occur in the following paragraphs: where "block" is written, "macro block", as defined in the ITU standards, is meant.
Fig.2 depicts a source sequence of picture frames that has to be encoded following a method in accordance with the invention. The sequence shown is organized as a first I-frame I0 temporally followed by a series of PB-frames. Each PB-frame PB1, PB2, PB3 is constituted by a first frame, say a B-frame, and a second frame, say a P-frame. Thus, PB1 comprises a B-frame B1 and a subsequent P-frame P2, PB2 comprises a B-frame B3 and a subsequent P-frame P4, PB3 comprises a B-frame B5 and a subsequent P-frame P6, and so on.
The various frames will be encoded in the order given hereinafter. I0 is first encoded according to an Intra mode, i.e. without reference to any other picture. P2 is then predictively encoded with reference to I0 and, subsequently, B1 is encoded with reference to I0 and P2, which is internally reconstructed inside the encoder. P4 is then encoded with reference to P2 and, subsequently, B3 is encoded with reference to P2 and P4, which is internally reconstructed too. Thus, each P-block of a PB-frame in the sequence is transmitted and encoded before the B-block of the PB-frame, and with reference to the previous I or P-picture. Each B-picture is encoded after the corresponding P-picture of the PB-frame and with reference to said corresponding P-picture of the PB-frame and to the previously encoded I or P-picture. The sequence of pictures proposed in Fig.2 is by no means a limitation of the sort of sequences of pictures that may be encoded following a method in accordance with the invention. In fact, the sequence may also comprise two or more successive B-frames between two P-frames. In such a case, the B-frames are encoded in the same order as they are transmitted, with reference to the previous I or P-frame and to the next P-frame, which was previously encoded and which is currently reconstructed.
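For illustration only, the following Python sketch reproduces this transmission and encoding order for the display-order sequence of Fig.2; the list representation and the frame labels are assumptions, not part of the patent.

# Sketch of the encoding order of Fig.2: each PB-frame is encoded P-picture
# first (with reference to the previous I/P-picture), then its B-picture
# (with reference to that P-picture and to the previous I/P-picture).
def encoding_order(display_order):
    """display_order: e.g. ["I0", "B1", "P2", "B3", "P4", "B5", "P6"]."""
    order = [display_order[0]]                     # the leading I-frame
    for i in range(1, len(display_order) - 1, 2):  # (B, P) pairs in display order
        b_frame, p_frame = display_order[i], display_order[i + 1]
        order.extend([p_frame, b_frame])           # P-picture before its B-picture
    return order

print(encoding_order(["I0", "B1", "P2", "B3", "P4", "B5", "P6"]))
# ['I0', 'P2', 'B1', 'P4', 'B3', 'P6', 'B5']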
A sequence of pictures, such as the one described in Fig.2, is passed picture-by-picture through the various coding steps of the system in Fig.3, said system being provided for carrying out a method in accordance with the invention. First, a circuit DIV(I,P,B) divides each transmitted frame into spatially non-overlapping NxM, say 16x16, macro blocks of pixels for encoding convenience. I, P and B frames are not encoded in the same way, so they do not follow the same path through the system. Each sort of frame follows an adapted path.
An I-frame, whose encoding does not require reference to any other picture, is passed directly from the circuit DIV(I,P,B) to a circuit DCT/Q. This circuit DCT/Q transforms a frame received in the spatial domain into a frame in the frequency domain. It applies a discrete cosine transform to the picture divided into blocks of pixels, resulting in a set of transform coefficients, which are then quantized. These quantized coefficients, coming from the DCT/Q circuit, are then passed to a circuit COD for further encoding and, at the same time, to a circuit IDCT/Q⁻¹. The circuit IDCT/Q⁻¹ dequantizes the coefficients and transforms them, by inverse discrete cosine transform, back to the spatial domain. A circuit REC(P) reconstructs each block of the I-frame and the I-picture is then stored in a memory part of a circuit MV(P).
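The DCT/Q and IDCT/Q⁻¹ path can be sketched roughly as below; the 8x8 block size, the orthonormal DCT matrix and the uniform quantisation step are simplifying assumptions, since H.263 prescribes its own transform precision and quantiser.

import numpy as np

# Rough sketch of the DCT/Q circuit (forward transform plus quantisation) and
# of the IDCT/Q^-1 circuit (dequantisation plus inverse transform).
N = 8
C = np.array([[np.sqrt((1 if k == 0 else 2) / N) *
               np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])   # orthonormal DCT-II matrix

def dct_q(block, step):
    coeffs = C @ block @ C.T          # discrete cosine transform of the block
    return np.round(coeffs / step)    # quantised coefficients passed to COD

def idct_q_inv(levels, step):
    coeffs = levels * step            # dequantisation
    return C.T @ coeffs @ C           # inverse DCT back to the spatial domain

block = np.outer(np.arange(8), np.ones(8)) * 10.0   # simple 8x8 ramp block
levels = dct_q(block, step=4.0)
recon = idct_q_inv(levels, step=4.0)
print(float(np.max(np.abs(block - recon))))          # small reconstruction error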
A P-frame, after being divided into blocks of pixels by DIV(I,P,B), is transmitted to the motion estimator MV(P), whose memory part stores the previously transmitted I or P-picture. A motion vector MV is derived for each block of the P-picture, hereafter referred to as P-block, with reference to the picture currently stored. This vector MV may possibly be derived by minimizing a function SAD (Sum of Absolute Difference), which is given hereinbelow:
SAD(u,v) = Σ (m = 1..16) Σ (n = 1..16) | Bi,j(m,n) - Bi-u,j-v(m,n) |
where Bi,j(m,n) represents the (m,n)th pixel of the 16x16 P-block at the spatial location (i,j) and Bi-u,j-v(m,n) represents the (m,n)th pixel of a candidate macro block in the previous I or P-picture at the spatial location (i,j) displaced by the vector (u,v). The motion vector is the displacement between the P-block and the candidate macro block giving the smallest SAD. Simultaneously, in the circuit MV(P), an associated predictor MVpred is derived for each motion vector MV. A possible way of deriving MVpred is given by Recommendation H.263 as illustrated in Fig.4, which depicts a P-block and its adjacent neighbouring blocks. MVpred is defined as the median value of MV1, MV2 and MV3, where MV1 is the motion vector associated to the previous macro block, MV2 is the motion vector of the above macro block and MV3 is the motion vector of the above-right macro block. The difference between this motion-compensated P-frame and the previous I or P-frame stored in the memory part of MV(P) is computed in the adder S and transmitted to the unit DCT/Q, resulting in a quantized transformed frame. This one is then passed to the unit COD for further encoding and, at the same time, to the units IDCT/Q⁻¹ and REC(P). Here, REC(P) reconstructs each block of the P-frame from the association of the differential frame received from the circuit IDCT/Q⁻¹, the motion vectors received from the motion estimator MV(P) and the previous I or P-frame stored in the memory part of MV(P). After reconstruction, the memory part of MV(P) is updated with the current P-frame.
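A direct transcription of this SAD criterion and of the median predictor MVpred is sketched below; the exhaustive search window, the array conventions and the function names are assumptions made for the sake of the example.

import numpy as np

# Sketch of the SAD criterion minimised by the motion estimator MV(P) and of
# the median predictor MVpred of Recommendation H.263 (Fig.4).
def sad(block, reference, i, j, u, v):
    """SAD between the 16x16 block at (i, j) and the candidate displaced by (u, v)."""
    candidate = reference[i - u:i - u + 16, j - v:j - v + 16]
    return np.sum(np.abs(block.astype(int) - candidate.astype(int)))

def best_motion_vector(block, reference, i, j, search=7):
    """Exhaustive search of the displacement (u, v) giving the smallest SAD."""
    candidates = [(u, v) for u in range(-search, search + 1)
                  for v in range(-search, search + 1)
                  if 0 <= i - u <= reference.shape[0] - 16
                  and 0 <= j - v <= reference.shape[1] - 16]
    return min(candidates, key=lambda uv: sad(block, reference, i, j, *uv))

def median_predictor(mv1, mv2, mv3):
    """MVpred: component-wise median of the left, above and above-right vectors."""
    return (int(np.median([mv1[0], mv2[0], mv3[0]])),
            int(np.median([mv1[1], mv2[1], mv3[1]])))

ref = np.zeros((64, 64), dtype=np.uint8)
ref[20:36, 24:40] = 200                      # a bright patch in the reference picture
cur_block = ref[20:36, 24:40].copy()         # the same patch, seen at (22, 24) in the current frame
print(best_motion_vector(cur_block, ref, i=22, j=24))   # (2, 0) under the sign convention above
print(median_predictor((2, 0), (4, -1), (3, 5)))        # (3, 0)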
A B-frame is passed directly to a predictor PRED(B) in order to be predictively encoded according to a forward, backward or bidirectional prediction mode. When encoded, a motion-compensated block of this frame is subtracted in S from the initial reference block, the difference being passed through DCT/Q and then to COD for further encoding. A choice has to be made among the three possible prediction modes. When needed for making this choice, PRED(B) receives from REC(P) data concerning the associated P-frame of the PB-frame, which is the previously reconstructed P-frame, and the previous I or P-frame, both pictures being stored in the memory part of MV(P). A strategy in accordance with the invention leading to the choice of the prediction mode for a B-block is depicted in the diagram of Fig.5.
For each macro block Mbck[n] of a B-frame transmitted to PRED(B), a forward motion estimation is performed in a step 1. It comprises deriving a forward motion vector MVf by minimizing the SAD function for the B-block with reference to the previous I or P-picture. This minimum is referred to as SADf. In a step 2, SADb is derived as the sum of absolute difference between the B-block and the macro block with the same location in the P-frame of the PB-frame.
A comparison between SADf and SADb in a step 3 leads to two cases. First, when the value of SADf is greater than the value of SADb, the backward prediction mode is chosen and performed in a step 8. The B-block is, in this case, predictively encoded with reference to the corresponding P-frame of the PB-frame.
Otherwise, when SADf is smaller than SADb, a motion estimation coherence test is performed. The motion vector MV and its predictor MVpred, which are associated to the P-block with the same location as the B-block in the P-frame of the PB-frame, and which were calculated in MV(P) as shown in Fig.3, are compared in steps 4 and 5. When the difference MV - MVpred is lower than a predefined threshold t1, bidirectional prediction is chosen and performed in a step 6. The B-block is, in this case, predictively encoded from the previous I or P-picture and the P-picture of the PB-frame currently decoded. When the difference is greater than t1, the forward prediction is chosen and performed in a step 7. The B-block is, in this case, predictively encoded with reference to the previous I or P-picture.
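A minimal sketch of this decision, covering steps 3 to 8 of Fig.5, is given below; the function name, the scalar comparison of MV with MVpred and the value of the threshold t1 are assumptions, since the patent only states that their difference is compared with a predetermined threshold.

# Sketch of the prediction-mode choice for one B-macroblock (steps 3 to 8).
# sad_f: minimum SAD of the forward estimation against the previous I/P-picture.
# sad_b: SAD against the co-located macroblock of the P-frame of the PB-frame.
# mv, mv_pred: motion vector and predictor of the co-located P-macroblock.
def choose_prediction_mode(sad_f, sad_b, mv, mv_pred, t1=2):
    if sad_f > sad_b:
        return "backward"        # step 8: e.g. after a scene cut, only P2 is useful
    # SADf < SADb: motion coherence test on the co-located P-block (steps 4 and 5).
    difference = abs(mv[0] - mv_pred[0]) + abs(mv[1] - mv_pred[1])  # assumed metric
    if difference > t1:
        return "forward"         # step 7: incoherent motion, use the previous I/P-picture
    return "bidirectional"       # step 6: coherent motion, use both references

# Example: a better forward match and coherent motion select the bidirectional mode.
print(choose_prediction_mode(sad_f=350, sad_b=900, mv=(4, 1), mv_pred=(3, 1)))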
Once the macro block Mbck[n] is predictively encoded according to the selected prediction mode, a new block Mbck[n] is provided, a suitable prediction mode is selected, and the new block is, in turn, encoded, until the B-picture is completely encoded, block-by-block.
It is to be noted that, with respect to the described coding method and system, modifications or improvements may be proposed without departing from the scope of the invention. For instance, it is clear that this coding method can be implemented in several manners, such as by means of wired electronic circuits or, alternatively, by means of a set of instructions stored in a computer-readable medium, said instructions replacing at least a part of said circuits and being executable under the control of a computer or a digital processor in order to carry out the same functions as fulfilled in said replaced circuits. The invention then also relates to a computer-readable medium comprising a software module that includes computer-executable instructions for performing the steps, or some steps, of the method described hereinabove. In such a case, these instructions are incorporated in a computer program that can be loaded into and stored in said medium, and that enables any encoding system such as described above and including said medium to carry out the described encoding method by means of an implementation of the same functions as those fulfilled by the replaced circuits.

Claims

CLAIMS:
1. A method of encoding a source sequence of pictures comprising the steps of:
- dividing the source sequence into a set of groups of pictures, each group of pictures comprising a first frame, hereafter referred to as I-frame, followed by at least a pair of frames, hereafter referred to as PB-frames;
- dividing each I-frame and PB-frame into spatially non-overlapping blocks of pixels;
- encoding the blocks of said I-frame, hereafter referred to as the I-blocks, independently from any other frame in the group of pictures;
- deriving motion vectors and corresponding predictors for the blocks from the temporally second frame of said PB-frame, hereafter referred to as the P-blocks, based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame;
- predictively encoding the P-blocks based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame;
- predictively encoding the blocks of the first frame of said PB-frame, hereafter referred to as the B-blocks,
wherein the encoding of the B-blocks comprises for each B-block in series the steps of:
- deriving the minimum of the sum of absolute difference for the B-block based on the I-blocks in the previous I-frame or on the P-blocks in the previous PB-frame, hereafter referred to as SADf;
- deriving the sum of absolute difference for the B-block and the P-block in the P-frame of the PB-frame with the same location as the B-block, hereafter referred to as SADb;
- when SADf is greater than SADb, predictively encoding the B-block based on the P-blocks of the second frame of the PB-frame;
- when SADf is lower than SADb:
  - deriving, for the P-block with the same location as the B-block, the difference between said motion vector and said predictor;
  - when the difference obtained is greater than a predetermined threshold, predictively encoding the B-block based on the I-blocks or the P-blocks in the previous PB-frame;
  - when the difference obtained is smaller than the predetermined threshold, predictively encoding the B-block based on the P-blocks of the second frame of the PB-frame and the I-blocks or the P-blocks in the previous PB-frame.
2. A system for encoding a sequence of pictures comprising:
- means for dividing the source sequence into a set of groups of pictures, each group of pictures comprising a first frame, hereafter referred to as I-frame, followed by at least a pair of predictively encoded frames, hereafter referred to as PB-frames;
- means for dividing each I-frame or PB-frame into spatially non-overlapping blocks of pixels;
- a motion estimator for deriving motion vectors and corresponding predictors for the blocks from the temporally second frame of said PB-frame, hereafter referred to as the P-blocks, based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame;
- means for encoding the blocks of said I-frame, hereafter referred to as the I-blocks, independently from any other frame in the group of pictures, for predictively encoding the P-blocks based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame and for predictively encoding the blocks from the first frame of said PB-frame, hereafter referred to as the B-blocks,
wherein the means for the encoding of the B-blocks perform for each B-block in series the steps of:
- deriving the minimum of the sum of absolute difference for the B-block based on the I-blocks in the previous I-frame or on the P-blocks in the previous PB-frame, hereafter referred to as SADf;
- deriving the sum of absolute difference for the B-block and the P-block with the same location as the B-block, hereafter referred to as SADb;
- when SADf is greater than SADb, predictively encoding the B-block based on the P-blocks of the second frame of the PB-frame;
- when SADf is lower than SADb:
  - deriving, for the P-block with the same location as the B-block, the difference between said motion vector and said predictor of said motion vector;
  - when the difference obtained is greater than a predetermined threshold, predictively encoding the B-block based on the I-blocks or the P-blocks in the previous PB-frame;
  - when the difference obtained is smaller than the predetermined threshold, predictively encoding the B-block based on the P-blocks of the second frame of the PB-frame and the I-blocks or the P-blocks in the previous PB-frame.
3. A system for encoding a sequence of pictures comprising:
- means for dividing the source sequence into a set of groups of pictures, each group of pictures comprising a first frame, hereafter referred to as I-frame, followed by at least a pair of predictively encoded frames, hereafter referred to as PB-frames;
- means for dividing each I-frame or PB-frame into spatially non-overlapping blocks of pixels;
- a motion estimator for deriving motion vectors and corresponding predictors for the blocks from the temporally second frame of said PB-frame, hereafter referred to as the P-blocks, based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame;
- means for encoding the blocks of said I-frame, hereafter referred to as the I-blocks, independently from any other frame in the group of pictures, for predictively encoding the P-blocks based on the I-blocks in the previous I-frame or the P-blocks in the previous PB-frame and for predictively encoding the blocks from the first frame of said PB-frame, hereafter referred to as the B-blocks,
wherein said encoding system also comprises a computer-readable medium storing a computer program that itself comprises a set of instructions replacing at least some of said means and being executable under the control of a computer or a digital processor in order to carry out the encoding method according to claim 1 by means of an implementation of the same functions as those fulfilled by the replaced means.
PCT/EP2000/003773 1999-04-30 2000-04-19 Low bit rate video coding method and system WO2000067487A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP00927072A EP1092322A1 (en) 1999-04-30 2000-04-19 Low bit rate video coding method and system
JP2000614740A JP2002543715A (en) 1999-04-30 2000-04-19 Low bit rate video encoding method and system
KR1020007015054A KR20010071692A (en) 1999-04-30 2000-04-19 Low bit rate video coding method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP99401067.6 1999-04-30
EP99401067 1999-04-30

Publications (1)

Publication Number Publication Date
WO2000067487A1 true WO2000067487A1 (en) 2000-11-09

Family

ID=8241963

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2000/003773 WO2000067487A1 (en) 1999-04-30 2000-04-19 Low bit rate video coding method and system

Country Status (6)

Country Link
US (1) US6608937B1 (en)
EP (1) EP1092322A1 (en)
JP (1) JP2002543715A (en)
KR (1) KR20010071692A (en)
CN (1) CN1166212C (en)
WO (1) WO2000067487A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003063508A1 (en) * 2002-01-24 2003-07-31 Koninklijke Philips Electronics N.V. Coding video pictures in a pb frames mode
US7940844B2 (en) 2002-06-18 2011-05-10 Qualcomm Incorporated Video encoding and decoding techniques

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100474932C (en) * 2003-12-30 2009-04-01 中国科学院计算技术研究所 Video frequency frame image fast coding method based on optimal prediction mode probability
CN101754012B (en) * 2004-10-14 2012-06-20 英特尔公司 Rapid multiframe motion estimation adopting self-adaptive search strategy
CN100338957C (en) * 2005-06-20 2007-09-19 浙江大学 Complexity hierarchical mode selection method
CN100466736C (en) * 2005-12-30 2009-03-04 杭州华三通信技术有限公司 Motion image code controlling method and code device
TWI327866B (en) * 2006-12-27 2010-07-21 Realtek Semiconductor Corp Apparatus and related method for decoding video blocks in video pictures
US20080216663A1 (en) * 2007-03-09 2008-09-11 Steve Williamson Brewed beverage maker with dispensing assembly
KR100939917B1 (en) 2008-03-07 2010-02-03 에스케이 텔레콤주식회사 Encoding system using motion estimation and encoding method using motion estimation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0577365A2 (en) * 1992-06-29 1994-01-05 Sony Corporation Encoding and decoding of picture signals
EP0782343A2 (en) * 1995-12-27 1997-07-02 Matsushita Electric Industrial Co., Ltd. Video coding method
US5668599A (en) * 1996-03-19 1997-09-16 International Business Machines Corporation Memory management for an MPEG2 compliant decoder
WO1999007159A2 (en) * 1997-07-29 1999-02-11 Koninklijke Philips Electronics N.V. Variable bitrate video coding method and corresponding video coder

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0577365A2 (en) * 1992-06-29 1994-01-05 Sony Corporation Encoding and decoding of picture signals
EP0782343A2 (en) * 1995-12-27 1997-07-02 Matsushita Electric Industrial Co., Ltd. Video coding method
US5668599A (en) * 1996-03-19 1997-09-16 International Business Machines Corporation Memory management for an MPEG2 compliant decoder
WO1999007159A2 (en) * 1997-07-29 1999-02-11 Koninklijke Philips Electronics N.V. Variable bitrate video coding method and corresponding video coder

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FUJITA G ET AL: "A VLSI ARCHITECTURE FOR MOTION ESTIMATION CORE DEDICATED TO H.263 VIDEO CODING", IEICE TRANSACTIONS ON ELECTRONICS,JP,INSTITUTE OF ELECTRONICS INFORMATION AND COMM. ENG. TOKYO, vol. E81-C, no. 5, May 1998 (1998-05-01), pages 702 - 707, XP000834536, ISSN: 0916-8524 *
GIROD B ET AL: "PERFORMANCE OF THE H.263 VIDEO COMPRESSION STANDARD", JOURNAL OF VLSI SIGNAL PROCESSING,NL,KLUWER ACADEMIC PUBLISHERS, DORDRECHT, vol. 17, no. 2/03, 1 November 1997 (1997-11-01), pages 101 - 111, XP000724574, ISSN: 0922-5773 *
NACHTERGAELE L ET AL: "LOW-POWER DATA TRANSFER AND STORAGE EXPLORATION FOR H.263 VIDEO DECODER SYSTEM", IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS,US,IEEE INC. NEW YORK, vol. 16, no. 1, 1998, pages 120 - 129, XP000734815, ISSN: 0733-8716 *
RIJKSE K: "H.263: VIDEO CODING FOR LOW-BIT-RATE COMMUNICATION", IEEE COMMUNICATIONS MAGAZINE,US,IEEE SERVICE CENTER. PISCATAWAY, N.J, vol. 34, no. 12, 1 December 1996 (1996-12-01), pages 42 - 45, XP000636452, ISSN: 0163-6804 *
RIJKSE K: "ITU standardisation of very low bitrate video coding algorithms", SIGNAL PROCESSING. IMAGE COMMUNICATION,NL,ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, vol. 7, no. 4, 1 November 1995 (1995-11-01), pages 553 - 565, XP004047099, ISSN: 0923-5965 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003063508A1 (en) * 2002-01-24 2003-07-31 Koninklijke Philips Electronics N.V. Coding video pictures in a pb frames mode
US7940844B2 (en) 2002-06-18 2011-05-10 Qualcomm Incorporated Video encoding and decoding techniques

Also Published As

Publication number Publication date
US6608937B1 (en) 2003-08-19
KR20010071692A (en) 2001-07-31
EP1092322A1 (en) 2001-04-18
CN1166212C (en) 2004-09-08
JP2002543715A (en) 2002-12-17
CN1302509A (en) 2001-07-04

Similar Documents

Publication Publication Date Title
US6442204B1 (en) Video encoding method and system
US8208547B2 (en) Bidirectional predicted pictures or video object planes for efficient and flexible coding
US8009734B2 (en) Method and/or apparatus for reducing the complexity of H.264 B-frame encoding using selective reconstruction
US8457203B2 (en) Method and apparatus for coding motion and prediction weighting parameters
EP2250813B1 (en) Method and apparatus for predictive frame selection supporting enhanced efficiency and subjective quality
Kamp et al. Multihypothesis prediction using decoder side-motion vector derivation in inter-frame video coding
US20060002465A1 (en) Method and apparatus for using frame rate up conversion techniques in scalable video coding
WO2005022923A2 (en) Method and apparatus for minimizing number of reference pictures used for inter-coding
KR20060090990A (en) Direct mode derivation process for error concealment
KR100790178B1 (en) Method for converting frame rate of moving picturer
Kim et al. An efficient scheme for motion estimation using multireference frames in H. 264/AVC
US5880784A (en) Method and apparatus for adaptively switching on and off advanced prediction mode in an H.263 video coder
US6608937B1 (en) Low bit rate video coding method and system
US20050013496A1 (en) Video decoder locally uses motion-compensated interpolation to reconstruct macro-block skipped by encoder
KR20090038278A (en) Method and apparatus for encoding and decoding image
Suzuki et al. Block-based reduced resolution inter frame coding with template matching prediction
KR101037834B1 (en) Coding and decoding for interlaced video
Ascenso et al. Hierarchical motion estimation for side information creation in Wyner-Ziv video coding
Langen et al. Chroma prediction for low-complexity distributed video encoding
JP2620431B2 (en) Image coding device
Kang Motion estimation algorithm with low complexity for video compression
Muñoz-Jimenez et al. Computational cost reduction of H. 264/AVC video coding standard for videoconferencing applications
Zhenga et al. Prediction Matching for Video Coding
Andrews et al. Test model 12/Appendix II of H. 263 Version 3 Purpose: Information
Andrews et al. Test model 11 Purpose: Information

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 00800733.0

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): CN JP KR

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

WWE Wipo information: entry into national phase

Ref document number: 2000927072

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 1020007015054

Country of ref document: KR

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 2000927072

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1020007015054

Country of ref document: KR

WWW Wipo information: withdrawn in national office

Ref document number: 2000927072

Country of ref document: EP

WWR Wipo information: refused in national office

Ref document number: 1020007015054

Country of ref document: KR