
CN1917647A - Method and apparatus for adaptively selecting context model for entropy coding - Google Patents

Method and apparatus for adaptively selecting context model for entropy coding

Info

Publication number
CN1917647A
CN1917647A, CNA2006100767112A, CN200610076711A
Authority
CN
China
Prior art keywords
data
enhancement layer
coding
residual
prediction flag
Prior art date
Legal status
Pending
Application number
CNA2006100767112A
Other languages
Chinese (zh)
Inventor
李培根
李教爀
车尚昌
韩宇镇
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of CN1917647A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/34 Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139 Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/463 Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A method and apparatus for adaptively selecting a context model for entropy coding and a video decoder are provided. A method of coding a residual prediction flag indicating whether residual data for an enhancement layer block of a multi-layered video signal is predicted from residual data for a lower layer block corresponding to the residual data for the enhancement layer block includes the steps of calculating the energy of the residual data for the lower layer block, determining a coding method for the residual prediction flag according to the energy, and coding the residual prediction flag using the determined coding method.

Description

Method and apparatus for adaptively selecting a context model for entropy coding
Technical field
Methods and apparatuses consistent with the present invention relate to encoding and decoding of video signals and, more particularly, to adaptively selecting a context model for entropy coding and to a video decoder.
Background
With the development of information and communication technologies, including the Internet, multimedia services that carry various kinds of information such as text, video, and audio are becoming increasingly common. Because the amount of multimedia data is usually large, multimedia data requires high-capacity storage media and wide bandwidth for transmission. Therefore, a compression coding method is essential for transmitting multimedia data that includes text, video, and audio.
The basic principle of data compression is to remove redundancy in the data. Data can be compressed by removing spatial redundancy, in which the same color or object is repeated within an image; temporal redundancy, in which there is little change between adjacent frames of a moving picture or the same sound is repeated in audio; or psychovisual redundancy, which takes into account human vision and the limited perception of high frequencies. In general video coding, temporal redundancy is removed by motion compensation based on motion estimation and compensation, and spatial redundancy is removed by transform coding.
To transmit the multimedia data generated after removing the data redundancy, a transmission medium is required, and transmission performance differs depending on the transmission medium. Currently used transmission media have various transfer rates. For example, an ultra-high-speed communication network can transmit data at several tens of megabits per second, while a mobile communication network has a transfer rate of 384 kilobits per second. Accordingly, to support transmission media with various speeds, or to transmit multimedia at a data rate suitable for the transmission environment, data coding methods with scalability, such as wavelet video coding and subband video coding, may be suitable for a multimedia environment.
Scalable video coding is a technique that allows a compressed bitstream to be decoded at different resolutions, frame rates, and signal-to-noise ratio (SNR) levels by truncating a part of the bitstream according to environmental conditions such as the transmission bit rate, error rate, and system resources. Standardization of Moving Picture Experts Group 4 (MPEG-4) Part 10 for scalable video coding is under way. In particular, much effort is being made to realize scalability based on a multi-layered structure. For example, a bitstream may consist of multiple layers, i.e., a base layer and first and second enhancement layers with different resolutions (QCIF, CIF, and 2CIF) or frame rates.
As when a video is coded into a single layer, when a video is coded into multiple layers, motion vectors (MVs) are obtained for each of the layers to remove temporal redundancy. The motion vectors may be searched separately for each layer (the former case), or a motion vector obtained by a motion search for one layer may be used for another layer, with or without up-/down-sampling (the latter case). In the former case, despite the benefit of accurate motion vectors, there is still overhead caused by the motion vectors generated for each layer. Therefore, effectively removing the redundancy between the motion vectors of the layers is a very challenging task.
Fig. 1 shows an example of a scalable video codec using a multi-layered structure. Referring to Fig. 1, the base layer has a Quarter Common Intermediate Format (QCIF) resolution and a frame rate of 15 Hz, the first enhancement layer has a Common Intermediate Format (CIF) resolution and a frame rate of 30 Hz, and the second enhancement layer has a Standard Definition (SD) resolution and a frame rate of 60 Hz. For example, to obtain a CIF 0.5 Mbps stream, the first enhancement layer bitstream (CIF_30Hz_0.7M) is truncated to match the target bit rate of 0.5 Mbps. In this way, spatial, temporal, and SNR scalability can be provided.
As shown in Fig. 1, frames at the same temporal position in the different layers (for example, frames 10, 20, and 30) can be regarded as similar images. One known coding technique predicts the texture of the current layer from the texture of the lower layer (directly or after upsampling) and codes the difference between the predicted value and the actual texture of the current layer. In Scalable Video Model 3.0 of ISO/IEC 21000-13 Scalable Video Coding ("SVM 3.0"), this technique is defined as Intra_BL prediction.
In addition to the conventional intra-prediction and inter-prediction used in H.264, SVM 3.0 also employs a technique that predicts a block or macroblock in the current frame using the correlation between the current block and a corresponding block in the lower layer. This prediction method is called "Intra_BL prediction", and a coding mode using Intra_BL prediction is called an "Intra_BL mode".
Fig. 2 is a schematic diagram for explaining the above three prediction methods: (1) intra-prediction for a macroblock 14 of the current frame 11; (2) inter-prediction using a frame 12 at a temporal position different from that of the current frame 11; and (3) Intra_BL prediction using texture data of a region 16 in a base-layer frame 13 corresponding to the macroblock 14.
The scalable video coding standard selects whichever of the three prediction methods is most advantageous for each macroblock.
To provide the decoder with information about the selected prediction method, or with the data used for the selected prediction method, various flags can be used. Depending on whether coding is performed macroblock by macroblock, slice by slice, or frame by frame, one bit, a few bits, or tens of bits may be used as a flag. When a flag is set for each macroblock, slice, or frame throughout an entire motion picture, the data size increases.
Therefore, a method and apparatus for effectively compressing such flags are needed.
Summary of the invention
The present invention provides a method and apparatus for reducing the amount of data required for prediction by using base layer data.
The above and other aspects, features, and advantages of the present invention will become apparent to those skilled in the art upon reviewing the following description.
According to an aspect of the present invention, there is provided a method of coding a residual prediction flag indicating whether residual data for an enhancement layer block of a multi-layered video signal is predicted from residual data for a lower layer block corresponding to the residual data for the enhancement layer block, the method including: calculating the energy of the residual data for the lower layer block; determining a coding method for the residual prediction flag according to the energy; and coding the residual prediction flag using the determined coding method.
According to another aspect of the present invention, there is provided a method of coding a residual prediction flag indicating whether residual data for an enhancement layer block of a multi-layered video signal is predicted from residual data for a lower layer block corresponding to the residual data for the enhancement layer block, the method including: calculating the value of a coded block pattern (CBP) of the lower layer block; determining a coding method for the residual prediction flag according to the calculated CBP value; and coding the residual prediction flag using the determined coding method.
According to still another aspect of the present invention, there is provided a method of decoding a residual prediction flag indicating whether residual data for an enhancement layer block of a multi-layered video signal is predicted from residual data for a lower layer block corresponding to the residual data for the enhancement layer block, the method including: calculating the energy of the residual data for the lower layer block; determining a decoding method for the residual prediction flag according to the energy; and decoding the residual prediction flag using the determined decoding method.
According to still another aspect of the present invention, there is provided a method of decoding a residual prediction flag indicating whether residual data for an enhancement layer block of a multi-layered video signal is predicted from residual data for a lower layer block corresponding to the residual data for the enhancement layer block, the method including: calculating the CBP value of the lower layer block; determining a decoding method for the residual prediction flag according to the calculated CBP value; and decoding the residual prediction flag using the determined decoding method.
According to still another aspect of the present invention, there is provided a method of decoding a residual prediction flag indicating whether residual data for an enhancement layer block of a multi-layered video signal is predicted from residual data for a lower layer block corresponding to the residual data for the enhancement layer block, the method including: checking the value of the CBP of the lower layer block; determining whether each bit of the CBP of the lower layer block is "1"; determining a decoding method for the residual prediction flag according to whether at least one bit of the CBP of the lower layer block is "1"; and decoding the residual prediction flag according to the determined decoding method.
According to still another aspect of the present invention, there is provided a method of decoding an enhancement layer of a multi-layered video signal, including: determining, using the CBP value of a lower layer, a decoding method for a residual prediction flag indicating whether the enhancement layer is predicted from the lower layer; decoding the residual prediction flag using an entropy decoding method selected according to the decoding method; and decoding the enhancement layer using the residual prediction flag.
According to still another aspect of the present invention, there is provided a decoder for decoding a residual prediction flag indicating whether residual data for an enhancement layer block of a multi-layered video signal is predicted from residual data for a lower layer block corresponding to the residual data for the enhancement layer block, the decoder including: a parser which parses the CBP value of the lower layer block; and a decoding unit which determines a decoding method for the residual prediction flag according to the CBP value and decodes the residual prediction flag using the determined decoding method.
According to still another aspect of the present invention, there is provided a decoder for decoding an enhancement layer of a multi-layered video signal, including: a parser which parses the CBP value of a layer lower than the enhancement layer; an entropy decoding unit which determines, using the CBP value, a decoding method for a residual prediction flag indicating whether the enhancement layer is predicted from the lower layer, entropy-decodes the residual prediction flag according to the decoding method, and then decodes the enhancement layer; an inverse quantizer which inversely quantizes the enhancement layer; and an inverse transformer which inversely transforms the inversely quantized enhancement layer and decodes the result using prediction from the lower layer according to the residual prediction flag.
According to still another aspect of the present invention, there is provided a coding method performed by a multi-layered video encoder, the coding method including: obtaining a base frame from an input frame; obtaining data for an enhancement frame from the input frame by referring to the base frame; and coding the data for the enhancement layer according to a result of determining whether the data for the enhancement layer is predicted from the data for the base frame.
According to still another aspect of the present invention, there is provided a decoding method performed by a multi-layered video decoder, the decoding method including: decoding an input base frame; determining whether the data for the base frame is needed to predict enhancement layer data; and decoding the data for the enhancement layer according to a result of the determination.
According to still another aspect of the present invention, there is provided a video encoder including: a base layer encoder which obtains a base frame from an input frame; and an enhancement layer encoder which generates data for an enhancement layer predicted from the input frame with reference to the base frame, wherein the enhancement layer encoder includes a transformer which codes the data for the enhancement layer according to a result of determining whether the data for the enhancement layer is predicted from the data for the base frame.
According to still another aspect of the present invention, there is provided a video decoder including: a base layer decoder which decodes an input base frame; and an enhancement layer decoder which decodes data for an enhancement layer by referring to the base frame, wherein the enhancement layer decoder includes an inverse transformer which decodes the data for the enhancement layer according to a result of determining whether the data for the decoded base frame is needed to predict the data for the enhancement layer predicted from the base frame.
Brief description of the drawings
The above and other features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments thereof made with reference to the attached drawings, in which:
Fig. 1 illustrates an example of a scalable video codec using a multi-layered structure;
Fig. 2 is a schematic diagram for explaining inter-prediction, intra-prediction, and Intra_BL prediction;
Fig. 3 is a diagram showing an example of residual prediction in video coding;
Fig. 4 is a flowchart illustrating a method of improving the coding efficiency of a residual prediction flag according to an exemplary embodiment of the present invention;
Fig. 5 is a flowchart illustrating a method of decoding data coded using the method shown in Fig. 4 according to an exemplary embodiment of the present invention;
Fig. 6 is a diagram showing an example of motion prediction in video coding according to an exemplary embodiment of the present invention;
Fig. 7 is a flowchart illustrating an exemplary method of improving the coding efficiency of a motion prediction flag;
Fig. 8 is a flowchart illustrating a method of decoding data coded using the method shown in Fig. 7 according to an exemplary embodiment of the present invention;
Fig. 9 is a flowchart illustrating a process of coding inverted residual prediction data according to an exemplary embodiment of the present invention;
Fig. 10 is a flowchart illustrating an example process of decoding data coded using the process shown in Fig. 9;
Fig. 11 is a flowchart illustrating a process of coding a residual prediction flag using a modified context according to an exemplary embodiment of the present invention;
Fig. 12 is a flowchart illustrating an example process of decoding data coded using the process shown in Fig. 11;
Fig. 13 is a flowchart illustrating an example in which the coding mode changes according to base layer residual data;
Fig. 14 is a flowchart illustrating a coding process using different context models when context-based adaptive binary arithmetic coding (CABAC) is performed, according to an exemplary embodiment of the present invention;
Fig. 15 is a flowchart illustrating an example process for decoding video data coded using the process shown in Fig. 14;
Fig. 16 is a block diagram of a video encoder according to an exemplary embodiment of the present invention; and
Fig. 17 is a block diagram of a video decoder according to an exemplary embodiment of the present invention.
Detailed description of exemplary embodiments
The present invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown.
Advantages and features of the present invention and methods of accomplishing the same may be understood more readily by reference to the following detailed description of exemplary embodiments and the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as being limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art, and the present invention will only be defined by the appended claims. Like reference numerals refer to like elements throughout the specification.
Hereinafter, the present invention will be described with reference to flowcharts of methods according to exemplary embodiments of the invention. It will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in one or more flowchart blocks.
These computer program instructions may also be stored in a computer-usable or computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-usable or computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flowchart blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus, thereby producing a computer-implemented process, such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flowchart blocks.
Each block of the flowchart illustrations may represent a module, a segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order shown. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Information about the prediction methods shown in Fig. 2, that is, data about the method selected among the prediction methods shown in Fig. 2 or information about the reference data, can be set and transmitted by the encoder for use by the decoder. Entropy coding is performed to losslessly compress the final data. Huffman coding is often used as the entropy coding method. Huffman coding is a variable-length coding scheme and allocates bits according to the probability of the data. Therefore, to increase the overall bit efficiency, the data representation method can be modified.
One way of notifying the decoder of the prediction method selected among the prediction methods shown in Fig. 2 is to use a prediction flag, which indicates that the selected method refers to base layer information, for example, the residual data or the motion vector of the base layer. Like the prediction flag, a residual prediction flag or a motion prediction flag can be used to notify the decoder whether the base layer information is referred to. Because a prediction flag is always set for each macroblock, each slice, or each frame, it is very important to reduce the size of the flag, or to increase the compression efficiency, during coding such as entropy coding. To achieve this, the prediction flag is predicted from information available to the decoder, and this information is used to reconstruct the prediction flag.
Fig. 3 illustrates an example of residual prediction in video coding. Residual prediction refers to prediction performed on the residual data obtained using one of the prediction methods shown in Fig. 2. Using the residual data, a macroblock (slice or frame) 14 of the base layer can be constructed according to one of the prediction methods shown in Fig. 2 (for example, temporal inter-prediction). In this case, residual prediction can also be performed on a macroblock, slice, or frame of the enhancement layer using the base layer residual data. Although the present invention will hereinafter be described with respect to macroblocks, the invention is not limited thereto and can also be applied to slices or frames.
To perform temporal inter-prediction for a macroblock in the enhancement layer, the base layer residual can be referred to. To notify the decoder that the base layer residual is referred to in the temporal inter-prediction, a residual prediction flag is set to 1. If the values of the macroblock in the base layer (i.e., the macroblock containing the residual data) are all zero, if the number of non-zero pixels is less than a predetermined threshold, or if the CBP is zero or less than a predetermined threshold, the value of the residual prediction flag need not be set. This means that, as a result of the temporal inter-prediction of the base layer, there is almost no motion. That is, because there is no reference data, or insufficient reference data, for the macroblock in the enhancement layer to refer to, the coding of the residual prediction flag is skipped, thereby saving the bits required to set the value of the residual prediction flag.
Residual prediction is meaningful when the motion vector in the enhancement layer is similar to the motion vector in the base layer. Therefore, obtaining the difference between the motion vectors of the two layers is the basis for predicting the value of the residual prediction flag. For brevity, the information used to predict the value of the residual prediction flag will hereinafter be referred to as a "prediction determination flag". In Joint Scalable Video Model (JSVM) 1, the efficiency of entropy coding can be improved by coding the difference between the residual prediction flag and a prediction determination flag denoted PrdRpFlag, rather than by coding the residual prediction flag itself. Accordingly, such a residual determination difference flag, denoted residual_pred_flag_diff, is introduced into the coding of the video information.
A method of obtaining the residual determination difference flag will now be described.
First, if the number of non-zero pixels present in the base layer residual is less than a threshold, the coding of the residual prediction flag is skipped.
Conversely, if the number of non-zero pixels is greater than the threshold, the residual determination difference flag, rather than the residual prediction flag, is coded. To obtain the residual determination difference flag, the difference between the base layer motion vector BaseMV and the enhancement (or current) layer motion vector CurrMV is used. To code the residual determination difference flag, the prediction determination flag must first be obtained.
The above process can be summarized as follows.
If |BaseMV - CurrMV| is less than a predetermined threshold, the prediction determination flag is set to 1; otherwise, the prediction determination flag is set to 0. Subsequently, the residual determination difference flag, calculated as the difference between the prediction determination flag and the residual prediction flag, is coded.
If the energy of the base layer residual (or the number of non-zero pixels) is less than a threshold Threshold_Residual, the coding of the residual prediction flag is skipped.
If |BaseMV - CurrMV| is less than a threshold Threshold_MV, the value obtained by subtracting the residual prediction flag from 1, i.e., (1 - residual prediction flag), is coded; otherwise, the value obtained by subtracting the residual prediction flag from 0, i.e., (0 - residual prediction flag), which carries the same information as the residual prediction flag itself, is coded.
Fig. 4 is a flowchart illustrating a process that implements the above idea. Referring to Fig. 4, in step S101, base layer residual data is obtained by, for example, temporal inter-coding with reference to another frame or block. In step S105, if the base layer residual data is less than the predetermined threshold Threshold_Residual, for example, if the values of the base layer residual data are all 0 or their total energy is less than the predetermined threshold, the coding of the residual prediction flag is skipped because the flag does not need to be set. Conversely, in step S105, if the base layer residual data is greater than Threshold_Residual, the value of the residual prediction flag is selectively 1 or 0, and a basis for setting the prediction determination flag, denoted PrdRpFlag, can be established.
In step S110, if the difference between BaseMV and CurrMV is less than a predetermined threshold Threshold_MV, the probability of using the base layer residual data is high, and the prediction determination flag is set to 1 in step S111. Conversely, if the difference between BaseMV and CurrMV is greater than Threshold_MV, the probability of using the base layer residual data is low, and the prediction determination flag is set to 0 in step S112. In step S120, the prediction determination flag set in step S111 or S112 and the residual determination difference flag, which corresponds to the difference between the residual prediction flag and the prediction determination flag, are coded. The encoder can code the prediction determination flag and the residual determination difference flag on a macroblock-by-macroblock, slice-by-slice, or frame-by-frame basis.
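For illustration only, the decision logic of Fig. 4 might be sketched in C as below. The thresholds, the entropy_encode() helper, and the representation of motion vectors as simple structures are assumptions of this sketch, not part of the standard syntax:
#include <stdlib.h>
/* Assumed for this sketch only. */
extern int Threshold_Residual, Threshold_MV;
extern void entropy_encode(int value);
typedef struct { int x, y; } MotionVector;
void encode_residual_prediction_flag(int base_residual_energy,
                                     MotionVector base_mv, MotionVector curr_mv,
                                     int residual_prediction_flag)
{
    if (base_residual_energy < Threshold_Residual)
        return;                                  /* S105: skip coding of the flag */
    /* S110-S112: derive the prediction determination flag PrdRpFlag. */
    int mv_diff = abs(base_mv.x - curr_mv.x) + abs(base_mv.y - curr_mv.y);
    int PrdRpFlag = (mv_diff < Threshold_MV) ? 1 : 0;
    /* S120: code the difference flag instead of residual_prediction_flag itself. */
    int residual_pred_flag_diff = PrdRpFlag - residual_prediction_flag;
    entropy_encode(PrdRpFlag);
    entropy_encode(residual_pred_flag_diff);     /* takes the value -1, 0, or 1 */
}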
Fig. 5 is a flowchart illustrating a process of decoding data coded using the method shown in Fig. 4 according to an exemplary embodiment of the present invention.
First, in step S151, base layer residual data is obtained from the coded data. In step S155, if the residual data is not less than the predetermined threshold Threshold_Residual (NO in step S155), then in step S161 the residual prediction flag is set to the difference between the coded prediction determination flag and the residual determination difference flag. As defined in the coding process shown in Fig. 4, the prediction determination flag and the residual determination difference flag are denoted PrdRpFlag and residual_pred_flag_diff, respectively. Thus, residual_pred_flag_diff = PrdRpFlag - residual_prediction_flag, and therefore residual_prediction_flag can be obtained by subtracting residual_pred_flag_diff from PrdRpFlag.
On the other hand, if the residual data is less than Threshold_Residual in step S155 (YES in step S155), this means that the enhancement layer was not generated by referring to the residual prediction flag, and the residual prediction flag is set to 0 in step S162.
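A matching decoder-side sketch of Fig. 5, under the same assumptions as the encoder sketch above (entropy_decode() is an assumed reader returning the values written by the encoder):
/* Assumed for this sketch only. */
extern int Threshold_Residual;
extern int entropy_decode(void);
int decode_residual_prediction_flag(int base_residual_energy)
{
    if (base_residual_energy < Threshold_Residual)
        return 0;                                  /* S162: flag inferred to be 0 */
    int PrdRpFlag = entropy_decode();              /* prediction determination flag */
    int residual_pred_flag_diff = entropy_decode();
    return PrdRpFlag - residual_pred_flag_diff;    /* S161: residual_prediction_flag */
}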
In the above description, information other than the residual prediction flag, which indicates whether the base layer residual data is referred to, is coded, thereby increasing the coding efficiency. Similarly, other information can be coded instead of the motion prediction flag, which indicates whether base layer motion information is used for motion vector prediction, to achieve the same purpose.
Fig. 6 is a diagram showing an example of motion prediction in video coding according to an exemplary embodiment of the present invention. Motion prediction refers to predicting an enhancement layer (or current layer) motion vector from a base layer motion vector. Accordingly, if the motion prediction flag, denoted motion_prediction_flag, is set to 1, the enhancement layer motion vector is predicted by referring to the base layer motion vector; if the motion prediction flag is set to 0, the base layer motion vector is not referred to. Reference numerals 21 and 25 in Fig. 6 denote macroblocks (or sub-blocks), slices, or frames. For convenience of explanation, it is assumed that motion prediction is performed macroblock by macroblock. The motion vector of the base layer macroblock 21 is the same as the motion vector of the enhancement layer macroblock 25. In this case, the motion prediction flag does not need to be coded. If the difference between the two motion vectors is less than a predetermined threshold Threshold_Motion, the coding of the motion prediction flag can also be skipped.
Meanwhile, the motion prediction flag can be determined by comparing the base layer motion vector with a motion vector predicted from spatially adjacent regions. The spatially adjacent regions can provide an accurate motion vector; in some cases, however, the motion prediction may not be performed accurately, which can result in a large motion vector residual. Although the accuracy of motion prediction from the base layer motion vector is lower than that of spatial motion prediction, the former provides reasonable accuracy in almost all cases. Therefore, the difference between the two motion vectors can be used to predict the motion prediction flag.
If the difference between the motion vectors of macroblocks 22 and 26 is greater than a predetermined threshold Threshold_MV, the motion prediction flag is likely to be set to 1, and thus the prediction determination flag, denoted prediction_determination_flag, is set to 1. On the other hand, if the difference between the motion vectors of macroblocks 22 and 26 is less than Threshold_MV, the motion prediction flag is likely to be set to 0, and thus the prediction determination flag is set to 0.
Once the value of the prediction determination flag has been set, a motion prediction difference flag, denoted motion_pred_flag_diff, corresponding to the difference between the prediction determination flag and the motion prediction flag is coded.
Fig. 7 is a flowchart illustrating a method of improving the coding efficiency of the motion prediction flag.
Referring to Fig. 7, in step S201, a motion vector predicted from the base layer, denoted Predict_MV_From_BaseLayer, and a motion vector predicted from spatially adjacent regions, denoted Predict_MV_From_Spatial, are obtained. In step S205, if the difference between the two motion vectors is less than a predetermined threshold Threshold_Motion, the motion prediction flag is not coded. On the other hand, if the difference between the two motion vectors is greater than Threshold_MV, a prediction determination flag is set to predict the motion prediction flag instead of setting the motion prediction flag. In step S210, the prediction determination flag is set to 1 or 0 according to whether the difference is greater than Threshold_MV.
That is, in step S210, if the difference is greater than Threshold_MV, the prediction determination flag is set to 1 in step S211; if the difference is less than Threshold_MV, the prediction determination flag is set to 0 in step S212. In step S220, the value (1 or 0) of the prediction determination flag set in step S211 or S212, together with the difference flag corresponding to the difference between the prediction determination flag and the motion prediction flag, is coded.
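By way of illustration only, the flow of Fig. 7 might be sketched as follows; the motion vectors are treated as scalar values purely for brevity, and the thresholds and the entropy_encode() helper are assumptions of the sketch:
#include <stdlib.h>
extern int Threshold_Motion, Threshold_MV;   /* assumed */
extern void entropy_encode(int value);       /* assumed */
void encode_motion_prediction_flag(int predict_mv_from_baselayer,
                                   int predict_mv_from_spatial,
                                   int motion_prediction_flag)
{
    int diff = abs(predict_mv_from_baselayer - predict_mv_from_spatial);
    if (diff < Threshold_Motion)
        return;                                              /* S205: skip coding */
    int prediction_determination_flag = (diff > Threshold_MV) ? 1 : 0;  /* S211/S212 */
    int motion_pred_flag_diff =
        prediction_determination_flag - motion_prediction_flag;         /* S220 */
    entropy_encode(prediction_determination_flag);
    entropy_encode(motion_pred_flag_diff);
}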
Fig. 8 is a flowchart illustrating a method of decoding data coded using the method shown in Fig. 7 according to an exemplary embodiment of the present invention.
Referring to Fig. 8, in step S251, a motion vector predicted from the base layer and a motion vector predicted from spatially adjacent regions are obtained based on the coded data.
In step S255, if the difference between the two motion vectors is not less than the predetermined threshold Threshold_Motion, then in step S261 the value of the motion prediction flag, denoted motion_prediction_flag, is set to the difference between the prediction determination flag and the motion prediction difference flag. As defined in the coding process described above, the prediction determination flag and the motion prediction difference flag are denoted PrdRpFlag and motion_pred_flag_diff, respectively. Thus, motion_pred_flag_diff = PrdRpFlag - motion_prediction_flag, and therefore motion_prediction_flag can be obtained by subtracting motion_pred_flag_diff from PrdRpFlag.
On the other hand, if the difference is less than Threshold_Motion in step S255, motion_prediction_flag is set to 0 in step S262.
As described above with reference to Figs. 3 to 8, the data to be coded is predicted from base layer data in order to reduce the size of the data to be coded. A method of reducing the size of the data to be coded according to another exemplary embodiment of the present invention will now be described.
In the coding processes shown in Figs. 3 to 8, the coding of the flag indicating whether the base layer data is referred to is skipped when the base layer data does not need to be used for prediction, or a value used for prediction of the flag, rather than the flag itself, is coded.
In a further exemplary embodiment, the coding of the residual prediction flag is skipped using the base layer data, or the residual prediction flag is converted into a new value to be coded.
When the base layer residual has no non-zero pixels, or when the CBP of the base layer is zero, the coding of the residual prediction flag can be skipped because the residual prediction flag can be predicted. Otherwise, because the value of the residual prediction flag is likely to be 1, the value obtained by subtracting the residual prediction flag from 1, i.e., (1 - residual prediction flag), is coded. In this case, because the size of the data to be coded may increase when 1 is coded, 0 rather than 1 can be coded to improve the coding efficiency.
Because the above method produces more zeros, it is useful when an encoder based on variable-length coding (VLC) is used. When context-based adaptive binary arithmetic coding (CABAC) is used, bits or bit patterns that occur frequently in the data to be coded can be compressed effectively. Therefore, the overall compression ratio can be improved by adjusting the bit patterns containing 1s or 0s. CABAC achieves high compression performance by selecting a probability model for each symbol based on the symbol's context, adjusting the probability estimates corresponding to the probability model based on local statistics, and performing arithmetic coding on the symbol. The coding process for a data symbol comprises at most four elementary steps: 1. binarization; 2. context modeling; 3. arithmetic coding; 4. probability update.
1. Binarization
In the CABAC technique, binary arithmetic coding means that a given non-binary-valued symbol is uniquely mapped to a binary sequence. In CABAC, only binary decisions enter the coding process. A symbol with a non-binary value, such as a transform coefficient or a motion vector, is converted into a binary code prior to the actual arithmetic coding process. This process is similar to converting a data symbol into a variable-length code, except that the binary code is further coded by the arithmetic coder prior to transmission.
The following elementary operations of context modeling, arithmetic coding, and probability update are performed recursively for each bit, i.e., each bin, of the binarized code.
2. Context modeling
A context model, which is a probability model for one or more bins of the binarized symbol and is selected based on the statistics of recently coded data symbols, stores the probability of each bin being "1" or "0".
3. Arithmetic coding
The arithmetic coder codes each bin based on the selected probability model. Each bin has only two probability sub-ranges, corresponding to "1" and "0", respectively.
4. Probability update
The selected probability model is updated based on the actually coded value. That is, if the bin value is 1, the frequency count of "1" is increased by one.
Because a context model is selected for each slice according to the above-described CABAC, the probability values of the probability models are initialized to a table of constant values for the slice. Because the context models must be continuously updated based on the statistics of recently coded data symbols, CABAC provides better coding efficiency than conventional VLC once a predetermined amount of information has been accumulated.
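As a toy illustration of steps 2 through 4 only, the following sketch uses a simple frequency-count model for a single bin; the normative CABAC estimator is a table-driven finite-state machine rather than raw counts, and arithmetic_encode_bin() is an assumed coder call:
extern void arithmetic_encode_bin(int bin, double prob_of_one);  /* assumed */
/* Toy context: counts of "0" and "1" observed so far for one bin. */
typedef struct { int count0, count1; } ToyContext;
double prob_of_one(const ToyContext *c)
{
    return (double)(c->count1 + 1) / (double)(c->count0 + c->count1 + 2);
}
void code_bin(ToyContext *c, int bin)
{
    double p1 = prob_of_one(c);        /* step 2: probability taken from the context model */
    arithmetic_encode_bin(bin, p1);    /* step 3: arithmetic coding of the bin             */
    if (bin) c->count1++;              /* step 4: probability update                       */
    else     c->count0++;
}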
Fig. 9 is a flowchart illustrating the above-described process, in which the energy or CBP value of the base layer residual is used either to skip the coding of the residual prediction flag, denoted residual_prediction_flag, or to convert the residual prediction flag into a new value for coding.
In step S301, base layer residual data is obtained. In step S302, if the energy or CBP value of the base layer residual data is 0, which means that the base layer residual data has no non-zero pixels, the base layer residual data is not referred to in the enhancement layer, and therefore the residual prediction flag does not need to be coded.
Otherwise, if the energy or CBP value of the base layer residual data is not 0, which means that the base layer residual data has non-zero pixels, the residual prediction flag is meaningful, and its value is obtained in step S312. Thereafter, in step S316, the value obtained by subtracting the residual prediction flag from 1, i.e., the inverted residual prediction flag (1 - residual_prediction_flag), is coded.
Fig. 10 is a flowchart illustrating a process of decoding data coded using the process shown in Fig. 9. Referring to Fig. 10, base layer residual data is obtained in step S351. In step S352, if the energy or CBP value of the base layer residual data is 0, which means that there is no data to be used for prediction in the enhancement layer, the residual prediction flag is set to 0 in step S364. Otherwise, if the base layer residual data has non-zero pixels, the residual prediction flag is meaningful. Because the data coded by the method shown in Fig. 9 is the inverted residual prediction flag (1 - residual_prediction_flag), the residual prediction flag can be reconstructed in step S370 as 1 minus the decoded value.
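A compact sketch of the Fig. 9 / Fig. 10 pair, with encode_bin() and decode_bin() standing in for whichever entropy coder is actually used (both are assumed helpers of this sketch):
extern void encode_bin(int bin);   /* assumed bitstream writer */
extern int decode_bin(void);       /* assumed bitstream reader */
/* Fig. 9 (encoder): code the inverted flag only when the base-layer CBP is non-zero. */
void encode_inverted_residual_flag(int base_cbp, int residual_prediction_flag)
{
    if (base_cbp == 0)
        return;                                    /* S302: nothing to refer to */
    encode_bin(1 - residual_prediction_flag);      /* S316: inverted flag        */
}
/* Fig. 10 (decoder): undo the inversion, or infer 0 when coding was skipped. */
int decode_inverted_residual_flag(int base_cbp)
{
    if (base_cbp == 0)
        return 0;                                  /* S364 */
    return 1 - decode_bin();                       /* S370 */
}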
The embodiments shown in Figs. 9 and 10 describe a coding process that can determine, using the base layer residual data, whether to code the residual prediction flag, and that performs the coding using the inverted value derived from the base layer residual data, thereby improving the coding efficiency.
Next, a method of coding the residual prediction flag using a modified context when CABAC is used will be described. As described above, because CABAC codes symbols based on context, it provides variable compression depending on whether the context model is biased toward 0 or 1. For example, when the context is biased toward 0, the compression ratio is high if the context is used to compress data containing more 0s than 1s. On the other hand, if the context is biased toward 1, the compression ratio is high if the context is used to compress data containing more 1s than 0s. As described above with reference to Fig. 9, if the residual prediction flag is coded, it is likely to be 1. Therefore, the residual prediction flag is coded according to a modified context that is biased toward the symbol 1.
Fig. 11 is a flowchart illustrating a process of coding the residual prediction flag, denoted residual_prediction_flag, using the modified context according to an exemplary embodiment of the present invention. As described above, in this coding process the energy or CBP value of the base layer residual is used to skip the coding of the residual prediction flag, or the residual prediction flag is coded using the context biased toward the symbol "1".
Referring to Fig. 11, base layer residual data is obtained in step S401. In step S402, if the energy or CBP value of the base layer residual data is 0, which means that the base layer residual data has no non-zero pixels, the base layer residual data is not referred to in the enhancement layer, and therefore the residual prediction flag does not need to be coded. Meanwhile, if a multi-layered structure is used, the CBP values can be accumulated from the lowest layer. In this case, if the CBP value accumulated from the base layer is 0, the coding of the residual prediction flag is also skipped.
If the energy of the base layer residual data or the accumulated CBP is not 0, which means that the base layer residual data has non-zero pixels, the residual data is meaningful, and the value of the residual prediction flag is obtained in step S412. Because the residual prediction flag is likely to be 1, it is coded using the context biased toward the symbol "1" in step S416.
Fig. 12 is a flowchart illustrating a process of decoding data coded using the process shown in Fig. 11. Referring to Fig. 12, base layer residual data is obtained in step S451. In step S452, if the energy or accumulated CBP value of the base layer residual data is 0, which means that there is no data to be used for prediction in the enhancement layer, the residual prediction flag is set to 0 in step S464. Otherwise, if the base layer residual data has non-zero pixels in step S452, the residual prediction flag is meaningful, and it is decoded in step S462 using the context biased toward the symbol 1. Thereafter, in step S470, the enhancement layer data is obtained according to the value of the residual prediction flag.
In addition, whether to code the residual prediction flag can be determined using the energy or the CBP value of the base layer residual data. If the energy or the CBP value of the base layer residual data falls within a predetermined range, the coding of the residual prediction flag can be skipped. The pseudocode used for coding is given below:
if (energy of base layer residual data is 0 or CBP value is 0)
    skip coding of residual_prediction_flag
else
    encode residual_prediction_flag
As a result of decoding, if the energy or the CBP value of the base layer residual data is 0, the decoder performs the decoding process without performing residual prediction from the base layer. Otherwise, the value of the residual prediction flag is decoded. The pseudocode used for decoding is given below:
if (energy of base layer residual data is 0 or CBP value is 0)
    skip decoding of residual_prediction_flag
else
    decode residual_prediction_flag
Fig. 13 is a flowchart illustrating an example in which the coding mode changes according to the base layer residual data.
Referring to Fig. 13, base layer residual data is obtained in step S501. In step S520, if the energy or CBP value of the base layer residual data is 0, which means that the residual prediction flag is 0, a coding method that provides high coding efficiency when the residual prediction flag is 0 is used. For example, if CABAC is used for coding, the residual prediction flag can be coded using a probability model of the CABAC context model in which the probability of "0" is higher than the probability of "1".
Otherwise, if the energy or CBP value of the base layer residual data is not 0, the residual prediction flag is very likely to be 1. Therefore, in step S530, a coding method that provides high coding efficiency when the residual prediction flag is 1 is used. For example, if CABAC is used as the coding method, the residual prediction flag can be coded using a probability model of the CABAC context model in which the probability of "1" is higher than the probability of "0".
The pseudocode for the process shown in Fig. 13 is given below:
if (energy of base layer residual data is 0 or CBP value is 0)
    encode using a method offering high coding efficiency when residual_prediction_flag is '0'
else
    encode using a method offering high coding efficiency when residual_prediction_flag is '1'
In video coding in which video data is coded into multiple layers, i.e., a base layer, a fine granularity scalability (FGS) layer, and an enhancement layer, the energy or CBP value of the residual data can be referred to in the base layer or in the FGS layer.
The decoding process is performed in a manner similar to the coding process. That is, after the base layer residual data is obtained, if the energy or CBP value of the residual data is 0, which means that the residual prediction flag is 0, the video data is decoded using the method that provides high coding efficiency when the residual prediction flag is 0. For example, if CABAC is used for coding, the video data can be decoded, in order to obtain the residual prediction flag, using the CABAC context model in which the probability of "0" is higher than the probability of "1".
Otherwise, if the energy or CBP value of the base layer residual data is not 0, which means that the residual prediction flag is very likely to be 1, the video signal coded by the method that provides high coding efficiency when the residual prediction flag is 1 is decoded in a manner similar to the coding method. For example, if CABAC is used as the coding method, the video data can be decoded, in order to obtain the residual prediction flag, using a probability model of the CABAC context model in which the probability of "1" is higher than the probability of "0". The pseudocode used for decoding is given below:
if (energy of base layer residual data is 0 or CBP value is 0)
    decode according to a method offering high coding efficiency when residual_prediction_flag is '0'
else
    decode according to a method offering high coding efficiency when residual_prediction_flag is '1'
Referring to Fig. 13, a coding method that provides high coding efficiency when the residual prediction flag is 0 and a coding method that provides high coding efficiency when the residual prediction flag is 1 are used in steps S520 and S530, respectively. One way of providing high coding efficiency is to use different context models during entropy coding.
Fig. 14 is a flowchart illustrating a coding process that uses different context models when CABAC is used, according to an exemplary embodiment of the present invention. Referring to Fig. 14, base layer residual data is obtained in step S601. In step S610, if the energy or CBP value of the base layer residual data is 0, which means that the residual prediction flag is 0, the residual prediction flag is coded in step S620 using the CABAC context model in which the probability of "0" is higher than the probability of "1". Otherwise, if the energy or CBP value of the base layer residual data is not 0 in step S610, the residual prediction flag is very likely to be 1, and it is coded in step S630 using the CABAC context model in which the probability of "1" is higher than the probability of "0".
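Reusing the toy context model sketched after the CABAC overview above, and assuming two pre-initialized models, one biased toward "0" and one biased toward "1", the selection of Fig. 14 might be sketched as:
extern ToyContext ctx_biased_to_zero, ctx_biased_to_one;  /* assumed, pre-initialized */
/* Fig. 14 sketch: pick the context according to the base-layer CBP, then code the flag. */
void encode_flag_with_selected_context(int base_cbp, int residual_prediction_flag)
{
    ToyContext *ctx = (base_cbp == 0) ? &ctx_biased_to_zero      /* S620 */
                                      : &ctx_biased_to_one;      /* S630 */
    code_bin(ctx, residual_prediction_flag);
}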
Figure 15 is a flowchart of a process for decoding video data encoded using the process shown in Figure 14. At step S651, the base layer residual data is obtained. At step S652, the bits of the coded block pattern (CBP) are obtained. At step S653, the 4 less significant bits of the CBP are assigned to the luma CBP value, labeled CodedBlockPatternLuma, and the 2 more significant bits are assigned to the chroma CBP value, labeled CodedBlockPatternChroma. The CBP values of the four sub-blocks of the macroblock can be obtained from the value of CodedBlockPatternLuma, and the CBP values of the DC and AC components can be obtained from the value of CodedBlockPatternChroma. The layout of coded_block_pattern, CodedBlockPatternLuma, and CodedBlockPatternChroma is shown in Table 1. coded_block_pattern indicates whether the sub-block or macroblock represented by each bit contains non-zero pixels.
Table 1
coded_block_pattern   Bit[5]   Bit[4]   Bit[3]   Bit[2]   Bit[1]   Bit[0]
                      CodedBlockPatternChroma (bits [5:4])   CodedBlockPatternLuma (bits [3:0])
When the upper-left 8×8 luma block of the four 8×8 luma blocks in the macroblock contains non-zero pixels, the value of bit[0] is 1. Likewise, the values of bit[1] and bit[2] are 1 when the corresponding lower-left and lower-right 8×8 luma blocks contain non-zero pixels.
When the DC and AC components of the chroma information in the macroblock contain non-zero pixel values, the values of bit[4] and bit[5] are 1.
Therefore, the presence or absence of non-zero pixels can be determined from the value of the CBP. At step S660, when at least one of the four bits forming CodedBlockPatternLuma is 1, or the value of CodedBlockPatternChroma is greater than 0, the residual data contains non-zero pixels and the residual prediction flag is very likely to be 1. Therefore, at step S670, the value of ctxIdxInc, which is used to select the context model for entropy decoding, is set to 1. Once the value of ctxIdxInc has been set to 1, the coded data is decoded using the method that offers high coding efficiency when the residual prediction flag is 1. For example, the coded data can be decoded using the CABAC context model in which the probability assigned to "1" is higher than the probability assigned to "0".
On the other hand, if at step S660 none of the four bits constituting CodedBlockPatternLuma is 1 and the value of CodedBlockPatternChroma is 0, the residual data contains no non-zero pixels and the residual prediction flag is very likely to be 0. Therefore, at step S680, the value of ctxIdxInc is set to 0 to select the context model for entropy decoding. Once the value of ctxIdxInc has been set to 0, the coded data is decoded using the method that offers high coding efficiency when the residual prediction flag is "0". For example, the coded data can be decoded using the CABAC context model in which the probability assigned to "0" is higher than the probability assigned to "1".
Once the value of ctxIdxInc has been set to 1 at step S670 or to 0 at step S680, entropy decoding is performed at step S690 using the configured value of ctxIdxInc.
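A minimal Python sketch of how ctxIdxInc could be derived from coded_block_pattern as described for steps S653 through S680; the bit positions follow Table 1, while the function name and the use of plain integers are assumptions made only for this illustration.
def ctx_idx_inc_from_cbp(coded_block_pattern):
    luma = coded_block_pattern & 0x0F          # bits [3:0] -> CodedBlockPatternLuma
    chroma = (coded_block_pattern >> 4) & 0x3  # bits [5:4] -> CodedBlockPatternChroma
    if luma != 0 or chroma > 0:
        return 1  # residual data has non-zero pixels; flag is very likely '1'
    return 0      # no non-zero pixels; flag is very likely '0'

print(ctx_idx_inc_from_cbp(0b000000))  # -> 0
print(ctx_idx_inc_from_cbp(0b010010))  # -> 1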
At step S660, it is determined whether any bit with the value "1" is present in the CBP. As described above, the value of the CBP is divided into CodedBlockPatternLuma and CodedBlockPatternChroma. The process of finding non-zero pixels using CodedBlockPatternLuma is as follows:
for (luma8x8BlkIdx = 0; luma8x8BlkIdx <= 3; luma8x8BlkIdx++) {
    if (((CodedBlockPatternLuma >> luma8x8BlkIdx) & 1) and
        (current macroblock is inter-macroblock))
    then
        residualAvailLuma8x8Blk = 1;
        break;
    else
        residualAvailLuma8x8Blk = 0;
}
For each luma8x8BlkIdx, an AND bit operation (&) is performed to determine whether the corresponding bit of CodedBlockPatternLuma is 0, and a check is made as to whether the current macroblock is an inter macroblock. If the bit is set and the current macroblock is an inter macroblock, residualAvailLuma8x8Blk is set to 1; otherwise, it is set to 0. residualAvailLuma8x8Blk is used to set baseResidualAvailLuma8x8Blk, which in turn determines the value of ctxIdxInc used to select the context model for entropy decoding.
The process of finding non-zero pixels using CodedBlockPatternChroma is as follows:
if ((CodedBlockPatternChroma != 0) and
    (current macroblock is inter-macroblock))
then
    residualAvailChroma = 1;
else
    residualAvailChroma = 0;
If the value of CodedBlockPatternChroma is not 0 and the current macroblock is an inter macroblock, residualAvailChroma is set to 1; otherwise, it is set to 0. residualAvailChroma is used to set baseResidualAvailChroma, which in turn determines the value of ctxIdxInc used to select the context model for entropy decoding.
Therefore, if at least one bit of CodedBlockPatternLuma or CodedBlockPatternChroma has the value 1, the residual data is determined to have at least one non-zero pixel and ctxIdxInc is set to 1. Otherwise, ctxIdxInc is set to 0 before entropy decoding is performed.
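The two pseudocode fragments above can be pictured together with the Python sketch below. Unlike the pseudocode, which stops at the first non-zero luma block, this sketch keeps one availability flag per 8×8 luma block (as suggested by the baseResidualAvailLuma8x8Blk[0..3] array referenced later); the function and argument names are assumptions made for the illustration.
def residual_availability(cbp_luma, cbp_chroma, is_inter_macroblock):
    # One availability flag per 8x8 luma block of the macroblock.
    luma_avail = [1 if ((cbp_luma >> blk) & 1) and is_inter_macroblock else 0
                  for blk in range(4)]
    chroma_avail = 1 if (cbp_chroma != 0 and is_inter_macroblock) else 0
    ctx_idx_inc = 1 if (any(luma_avail) or chroma_avail) else 0
    return luma_avail, chroma_avail, ctx_idx_inc

print(residual_availability(0b0101, 0, True))   # -> ([1, 0, 1, 0], 0, 1)
print(residual_availability(0b0000, 0, False))  # -> ([0, 0, 0, 0], 0, 0)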
In the exemplary embodiments of the present invention described herein, the terms "unit", "part", and "module" denote a software component or a hardware component such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). A unit performs particular functions but is not limited to software or hardware. A unit may reside on an addressable storage medium or may be configured to run on one or more processors. Thus, a unit may include components such as software components, object-oriented software components, class components and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and parameters. The components and features provided by units may be combined into a smaller number of components and units, or may be divided into a larger number of components and units. In addition, components and units may be implemented so as to run on one or more central processing units (CPUs) in a device or a secure multimedia card (MMC).
Figure 16 is a block diagram of a video encoder 500 according to an exemplary embodiment of the present invention, and Figure 17 is a block diagram of a video decoder 550 according to an exemplary embodiment of the present invention. Although the present invention is described with reference to Figures 16 and 17 for the case in which one base layer and one enhancement layer are used, two or more layers may also be used. It will be understood by those skilled in the art that the present invention can also be applied between a current layer and a lower layer.
Referring to Figure 16, the video encoder 500 mainly comprises an enhancement layer encoder 400 and a base layer encoder 300. The configuration of the base layer encoder 300 will now be described.
A downsampler 310 downsamples the input video according to the resolution, frame rate, or video image size of the base layer. An MPEG downsampler or a wavelet downsampler can be used to downsample the input frame to the resolution of the base layer. The frame rate can be changed simply using a frame-skip scheme or a frame-interpolation scheme. Downsampling to a smaller image size can be achieved by removing information such as border regions from the video information, or by reducing the size of the video information to match the screen size. For example, downsampling can be performed to resize an original input video with a 16:9 aspect ratio to 4:3.
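The choice of downsampler is left open above (MPEG or wavelet). Purely as an illustration of resolution reduction, and not of either of those downsamplers, the following Python sketch halves both dimensions by averaging 2×2 blocks of samples.
def downsample_2x(frame):
    # Average each 2x2 block of samples to halve width and height.
    height, width = len(frame), len(frame[0])
    return [[(frame[y][x] + frame[y][x + 1] +
              frame[y + 1][x] + frame[y + 1][x + 1]) // 4
             for x in range(0, width, 2)]
            for y in range(0, height, 2)]

print(downsample_2x([[10, 20], [30, 40]]))  # -> [[25]]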
A motion estimator 350 performs motion estimation on the base frame to obtain a motion vector mv for each partition of the base frame. Motion estimation is the process of finding, in a previous reference frame Fr' stored in a frame buffer 380, the region most similar to each partition of the current frame Fc, i.e., the region with the minimum matching error. Various techniques, such as hierarchical variable size block matching (HVSBM) or a fixed-size block matching algorithm, can be used for motion estimation. The reference frame Fr' can be provided by the frame buffer 380. Although the above description assumes that the base layer encoder 300 supports closed-loop coding, which uses a reconstructed frame as the reference frame, an open-loop coding scheme that uses the original base frame provided by the downsampler 310 as the reference frame may also be adopted.
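As an illustration of the fixed-size block matching mentioned above (not of HVSBM, and not of any particular embodiment), the Python sketch below performs a full search that minimizes the sum of absolute differences (SAD); the function names and the search range parameter are assumptions made for the example.
def sad(block_a, block_b):
    # Sum of absolute differences between two equally sized blocks.
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(cur_block, ref_frame, top, left, search_range):
    size = len(cur_block)
    height, width = len(ref_frame), len(ref_frame[0])
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= height - size and 0 <= x <= width - size:
                candidate = [row[x:x + size] for row in ref_frame[y:y + size]]
                cost = sad(cur_block, candidate)
                if cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv  # motion vector with the minimum matching error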
A motion compensator 360 performs motion compensation on the reference frame using the motion vector mv. To produce a residual frame, a subtracter 315 subtracts the motion-compensated reference frame from the current base layer frame.
A transformer 320 performs a spatial transform on the residual frame to create transform coefficients. The spatial transform method may be the discrete cosine transform (DCT) or the wavelet transform. Specifically, DCT coefficients are created when the DCT is used, and wavelet coefficients are created when the wavelet transform is used.
A quantizer 330 quantizes the transform coefficients received from the transformer 320. Quantization is the process of representing the transform coefficients, which are arbitrary real values, by discrete values and matching those discrete values to the indices of a predetermined quantization table. The resulting values are called quantization coefficients.
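A minimal sketch of scalar quantization and the matching inverse quantization, assuming a single uniform step size in place of the predetermined quantization table mentioned above; it is illustrative only.
def quantize(coefficients, step):
    # Map real-valued transform coefficients to integer quantization indices.
    return [int(round(c / step)) for c in coefficients]

def dequantize(indices, step):
    # Recover approximate coefficient values from the indices.
    return [index * step for index in indices]

print(quantize([12.4, -3.1, 0.4], step=4))  # -> [3, -1, 0]
print(dequantize([3, -1, 0], step=4))       # -> [12, -4, 0]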
An entropy coding unit 340 losslessly encodes the quantization coefficients obtained by the quantizer 330 and the motion vectors obtained by the motion estimator 350 into a base layer bitstream. Various lossless coding methods are available, including arithmetic coding and variable length coding.
An inverse quantizer 371 applies inverse quantization to the quantization coefficients received from the quantizer 330. Inverse quantization is the process of recovering the values matched to the indices generated during quantization, using the same predetermined quantization table that was used in the quantization.
An inverse transformer 372 performs an inverse transform on the result of the inverse quantization. The inverse spatial transform may be, for example, the inverse DCT or the inverse wavelet transform, and is performed in the reverse order of the transform performed by the transformer 320.
An adder 325 adds the output of the motion compensator 360 and the output of the inverse transformer 372 to reconstruct the current frame, which is then sent to the frame buffer 380. The frame buffer 380 temporarily stores the reconstructed frame and provides it as a reference frame for inter prediction of subsequent base frames.
The reconstructed frame Fc' is fed to the enhancement layer encoder 400 after passing through an upsampler 395. Of course, if the base layer has the same resolution as the enhancement layer, the upsampling may be skipped.
The configuration of the enhancement layer encoder 400 will now be described. The frame received from the base layer encoder 300 and the input frame are fed to a subtracter 410. The subtracter 410 calculates the difference between the input frame and the base frame containing the corresponding region to produce a residual frame. The residual frame passes through a transformer 420, a quantizer 430, and an entropy coding unit 440 before being transformed into an enhancement layer bitstream.
The transformer 420 performs a spatial transform on the residual signal between a macroblock in the input frame and the corresponding macroblock in the base frame to create transform coefficients. The discrete cosine transform (DCT) or the wavelet transform can be used for the spatial transform. Because of the characteristics of enhancement layer macroblocks, the DCT coefficients (when the DCT is used) or the wavelet coefficients (when the wavelet transform is used) exhibit similarity to those of the base layer. Therefore, the transformer 420 removes the similarity between these coefficients to increase the compression ratio.
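One way to picture the removal of similarity between the enhancement layer coefficients and the base layer coefficients is the element-wise subtraction sketched below in Python. This is a conceptual illustration of inter-layer prediction only, not the exact operation of the transformer 420; the function name and the flat coefficient lists are assumptions.
def predict_from_base(enh_coeffs, base_coeffs, residual_prediction_flag):
    if residual_prediction_flag == 1:
        # Only the difference from the base layer coefficients is coded.
        return [e - b for e, b in zip(enh_coeffs, base_coeffs)]
    # Otherwise the enhancement layer coefficients are coded as they are.
    return list(enh_coeffs)

print(predict_from_base([10, 5, 2], [9, 5, 0], 1))  # -> [1, 0, 2]
print(predict_from_base([10, 5, 2], [9, 5, 0], 0))  # -> [10, 5, 2]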
Since the quantizer 430 performs the same functions/operations as its counterpart in the base layer encoder 300, a detailed description is omitted. A coded information setting unit 435 configures, before the entropy coding unit 440 encodes the residual prediction flag, the manner in which the residual prediction flag is efficiently encoded into the bitstream output from the quantizer 430.
As described above, for the information indicating whether the data of the base frame is referenced by the enhancement layer, the coded information setting unit 435 determines the difference between the prediction data and the prediction result, thereby preventing loss of the information encoded and transmitted by the entropy coding unit 440. The bit allocation used for the compression performed by the transformer 420 follows the residual data prediction process (see Figs. 4, 9, 11, 13, and 14) and the motion vector prediction process (see Fig. 7). For example, if the energy of the residual data of the base layer or FGS layer is 0, or the CBP value is 0, prediction from the base layer residual data is unnecessary, so the coded information setting unit 435 does not encode the residual prediction flag; otherwise, it encodes the residual prediction flag. If the energy or the CBP value of the residual data of the base layer or FGS layer is not 0, the residual prediction flag is very likely to be 1. As described above with reference to Figs. 4, 9, 11, 13, and 14, the residual prediction flag is encoded with coding efficiency taken into account.
When the encoding process shown in Figure 14 is used, if the energy of the base layer residual data or the CBP value is 0, indicating that the residual prediction flag is 0, the coded information setting unit 435 sets the probability values of the CABAC context model so that the probability of "0" is higher than that of "1". Otherwise, if the energy of the base layer residual data or the CBP value is not 0, indicating that the residual prediction flag is very likely to be 1, the coded information setting unit 435 sets the probability values of the CABAC context model so that the probability of "1" is higher than that of "0". In one exemplary embodiment, as described above, the coded information setting unit 435 may set ctxIdxInc to 0 or 1.
The entropy coding unit 440 encodes the enhancement layer data using the information set by the coded information setting unit 435. Apart from this operation, the entropy coding unit 440 performs the same functions/operations as the entropy coding unit 340 in the base layer encoder 300, so a description thereof is omitted.
The enhancement layer encoder 400 shown in Figure 16 can predict, during the encoding process, whether the data of the base frame that can be referenced, i.e., the residual data or the motion vector, is actually referenced.
Figure 17 is a block diagram of a video decoder according to an embodiment of the present invention. The video decoder 550 mainly comprises an enhancement layer decoder 700 and a base layer decoder 600. The configuration of the base layer decoder 600 will now be described.
An entropy decoding unit 610 losslessly decodes the base layer bitstream to extract the texture data and the motion data (i.e., motion vectors, partition information, reference frame numbers, etc.) of the base frame.
An inverse quantizer 620 inversely quantizes the texture data. Inverse quantization is the inverse of the quantization performed in the video encoder 500; it reconstructs values from the quantization indices created during quantization, using the quantization table used during the quantization process.
An inverse transformer 630 performs an inverse spatial transform on the result of the inverse quantization to reconstruct the residual frame. The inverse spatial transform is the inverse of the spatial transform performed by the transformer 320 in the video encoder 500; the inverse DCT or the inverse wavelet transform can be used.
The entropy decoding unit 610 also provides the motion data, including the motion vector mv, to a motion compensator 660.
The motion compensator 660 performs motion compensation on a previously reconstructed frame provided by a frame buffer 650, i.e., the reference frame, to produce a motion-compensated frame.
An adder 615 adds the residual frame reconstructed by the inverse transformer 630 and the motion-compensated frame produced by the motion compensator 660 to reconstruct the base layer video frame. The reconstructed base layer video frame can be temporarily stored in the frame buffer 650 and can be provided to the motion compensator 660 as a reference frame for the reconstruction of subsequent frames.
The reconstructed current frame Fc' is fed to the enhancement layer decoder 700 after passing through an upsampler 680. Of course, if the base layer has the same resolution as the enhancement layer, the upsampling may be skipped. The upsampling may also be skipped if the base layer video information contains no information about a region present in the enhancement layer video information.
The configuration of the enhancement layer decoder 700 will now be described. An entropy decoding unit 710 losslessly decodes the input enhancement layer bitstream to extract the texture data of the asynchronous frame.
Thereafter, the extracted texture data is reconstructed into a residual frame as it passes through an inverse quantizer 720 and an inverse transformer 730. The inverse quantizer 720 performs functions/operations similar to those of the inverse quantizer 620 in the base layer decoder 600.
An adder 715 adds the reconstructed residual frame to the base frame provided by the base layer decoder 600 to reconstruct the frame.
The inverse transformer 730 in the enhancement layer decoder 700 can perform the processes shown in Figs. 5, 8, 10, 12, or 15 to reconstruct the residual frame. To determine whether the decoded data of the base frame is referenced by the enhancement layer data, the residual data or the motion data constituting the base frame is examined. If the base frame data is less than the residual threshold, then, as described above with reference to Fig. 5, the residual prediction flag is set to 0 because the base layer residual data is not referenced by the enhancement layer residual data. In addition, as described above with reference to Fig. 8, if the difference between the base layer motion vector and the motion vector of a spatially adjacent region is less than the motion threshold, the motion prediction flag can be set to 0 because the base layer motion vector is not referenced.
In addition, as described above with reference to Figs. 10, 12, and 15, whether the base layer residual data is referenced in predicting the enhancement layer residual data can be determined depending on whether the energy of the base layer residual data or the CBP value is 0. Since the encoder performs encoding using various coding techniques to improve coding efficiency, the residual prediction flag can be obtained using the decoding method corresponding to the chosen coding method.
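A sketch, assuming threshold parameters named t_residual and t_motion as stand-ins for the thresholds of Figs. 5 and 8, of how a decoder could infer the two flags before resorting to the bitstream; returning None marks a flag that cannot be inferred and must therefore be entropy-decoded. All names here are hypothetical.
def infer_flags(base_residual, base_mv, spatial_mv, t_residual=0, t_motion=0):
    residual_energy = sum(abs(sample) for sample in base_residual)
    mv_difference = abs(base_mv[0] - spatial_mv[0]) + abs(base_mv[1] - spatial_mv[1])
    # A flag of 0 means the base layer data is not referenced by the enhancement layer.
    residual_prediction_flag = 0 if residual_energy <= t_residual else None
    motion_prediction_flag = 0 if mv_difference <= t_motion else None
    return residual_prediction_flag, motion_prediction_flag

print(infer_flags([0, 0, 0], (2, 1), (2, 1)))  # -> (0, 0)
print(infer_flags([0, 4, 0], (5, 1), (2, 1)))  # -> (None, None)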
If the decoding process shown in Figure 15 is used, a parser 705 obtains the CBP value from the enhancement layer bitstream to check whether any bit with the value 1 is present. The parser 705 can determine whether the residual data has non-zero pixels based on the coded block pattern. Since this process is the same as that shown in Figure 15, a detailed description is omitted.
The parser 705 can use baseResidualAvailLuma8x8Blk[0..3] and baseResidualAvailChroma to determine whether the residual data has non-zero pixels. The parser 705 obtains the CBP bits and assigns the 4 less significant bits and the 2 more significant bits to CodedBlockPatternLuma and CodedBlockPatternChroma, respectively. Thereafter, the parser 705 checks whether at least one of the four bits of CodedBlockPatternLuma has the value 1, and whether the value of CodedBlockPatternChroma is greater than 0. If at least one of the four bits has the value 1 or the value of CodedBlockPatternChroma is greater than 0, the parser 705 determines that the residual data has at least one non-zero pixel and sets the value of ctxIdxInc to 1; otherwise, it sets the value of ctxIdxInc to 0. The entropy decoding unit 710 performs entropy decoding using the value of ctxIdxInc set by the parser 705.
Although, as described above, the enhancement layer decoder 700 shown in Figure 17 performs decoding using intra-BL prediction from the base frame, it will be understood by those skilled in the art that, as described with reference to Figure 2, decoding may also be performed using inter prediction or intra prediction.
The present invention can provide an increased compression ratio by determining whether to predict the enhancement layer data from the base frame.
It will be apparent to those skilled in the art that various modifications and changes may be made without departing from the scope and spirit of the invention. Therefore, it should be understood that the above embodiments are exemplary in all respects and not restrictive. The scope of the present invention is defined by the appended claims rather than by the detailed description of the invention. All modifications and changes derived from the scope and spirit of the claims and their equivalents should be construed as being included within the scope of the present invention.

Claims (99)

1. A method of encoding a residual prediction flag indicating whether residual data of an enhancement layer block of a multi-layer video signal is predicted from residual data of a lower layer block corresponding to the residual data of the enhancement layer block, the method comprising:
calculating a characteristic of the lower layer block, wherein the characteristic comprises at least one of an energy of the residual data of the lower layer block and a value of a coded block pattern of the lower layer block;
determining a coding method for the residual prediction flag according to the characteristic; and
encoding the residual prediction flag using the determined coding method.
2. The method of claim 1, wherein, if the characteristic is less than a predetermined threshold, the encoding of the residual prediction flag is skipped, and if the characteristic is not less than the predetermined threshold, the encoding of the residual prediction flag comprises encoding a difference between a predicted value of the residual prediction flag and the residual prediction flag.
3. The method of claim 1, wherein, if the characteristic is not less than a predetermined threshold, the encoding of the residual prediction flag comprises encoding the value obtained by subtracting the residual prediction flag from "1".
4. The method of claim 1, wherein, if the characteristic is less than a predetermined threshold, the encoding of the residual prediction flag is skipped, and if the characteristic is not less than the predetermined threshold, the encoding of the residual prediction flag is performed using a context model biased toward the symbol "1".
5. The method of claim 1, wherein, if the characteristic is less than a predetermined threshold, the encoding of the residual prediction flag is performed using a first context model, and if the characteristic is not less than the predetermined threshold, the encoding of the residual prediction flag is performed using a second context model, the first context model having a higher probability for "0" than for "1", and the second context model having a higher probability for "1" than for "0".
6. The method of claim 1, wherein the lower layer comprises one of a base layer and a fine granularity scalability layer.
7. The method of claim 1, wherein:
the energy of the residual data of the lower layer block comprises the number of non-zero pixels in the lower layer block; and
the value of the coded block pattern of the lower layer block comprises the number of bits having the value "1" in the coded block pattern.
8. The method of claim 1, wherein the value of the coded block pattern comprises the values of the following bits: the four less significant bits of the coded block pattern for luma information, referred to as CodedBlockPatternLuma, and the two more significant bits of the coded block pattern for chroma information, referred to as CodedBlockPatternChroma.
9. The method of claim 1, wherein the calculation of the energy of the residual data comprises summing the energy of the lower layer referenced by the enhancement layer.
10. The method of claim 1, wherein the calculation of the coded block pattern of the lower layer block comprises determining whether each value of the coded block pattern in the lower layer block is 1.
11. A method of decoding a residual prediction flag indicating whether residual data of an enhancement layer block of a multi-layer video signal is predicted from residual data of a lower layer block corresponding to the residual data of the enhancement layer block, the method comprising:
calculating a characteristic of the lower layer block, wherein the characteristic comprises at least one of an energy of the residual data of the lower layer block and a value of a coded block pattern of the lower layer block;
determining a decoding method for the residual prediction flag according to the characteristic; and
decoding the residual prediction flag using the determined decoding method.
12. The method of claim 11, wherein:
if the characteristic is less than a predetermined threshold, the residual prediction flag is not decoded and is set to "0"; and
if the characteristic is not less than the predetermined threshold, the residual prediction flag is set to the difference between a residual prediction difference flag and a predicted value of the residual prediction flag, wherein the residual prediction difference flag corresponds to the difference between the predicted value of the residual prediction flag and the residual prediction flag.
13. The method of claim 11, wherein, if the characteristic is greater than a predetermined threshold, the residual prediction flag is set to the value obtained by inverting the decoded value with respect to "1".
14. The method of claim 11, wherein:
if the characteristic is less than a predetermined threshold, the decoding of the residual prediction flag is skipped; and
if the characteristic is not less than the predetermined threshold, the decoding of the residual prediction flag is performed using a context model biased toward the symbol "1".
15. The method of claim 11, wherein, where the encoding of the residual prediction flag is performed using a context of context-based adaptive binary arithmetic coding:
if the characteristic is less than a predetermined threshold, the residual prediction flag is decoded using a first context model; and
if the characteristic is not less than the predetermined threshold, the residual prediction flag is decoded using a second context model, the first context model having a higher probability for "0" than for "1", and the second context model having a higher probability for "1" than for "0".
16. The method of claim 11, wherein the lower layer comprises one of a base layer and a fine granularity scalability layer.
17. The method of claim 11, wherein:
the energy of the residual data of the lower layer block comprises the number of non-zero pixels in the lower layer block; and
the value of the coded block pattern of the lower layer block comprises the number of bits having the value "1" in the coded block pattern.
18. The method of claim 11, wherein the value of the coded block pattern comprises the values of the following bits: the four less significant bits of the coded block pattern for luma information, referred to as CodedBlockPatternLuma, and the two more significant bits of the coded block pattern for chroma information, referred to as CodedBlockPatternChroma.
19. The method of claim 11, wherein the calculation of the energy of the residual data comprises summing the energy of the lower layer referenced by the enhancement layer.
20. The method of claim 11, wherein the calculation of the coded block pattern of the lower layer block comprises determining whether each value of the coded block pattern in the lower layer block referenced by the enhancement layer is 1.
21. A method of decoding a residual prediction flag indicating whether residual data of an enhancement layer block of a multi-layer video signal is predicted from residual data of a lower layer block corresponding to the residual data of the enhancement layer block, the method comprising:
checking the value of the coded block pattern of the lower layer block;
determining whether each value of the coded block pattern of the lower layer block is "1";
determining a decoding method for the residual prediction flag according to whether at least one value of the coded block pattern of the lower layer block is "1"; and
decoding the residual prediction flag according to the determined decoding method.
22. The method of claim 21, wherein the determining of whether each value of the coded block pattern is "1" comprises:
setting the four less significant bits of the coded block pattern as a first parameter used to determine the coding of luma information, and setting the two more significant bits of the coded block pattern as a second parameter used to determine the coding of chroma information;
performing a right-shift operation that shifts the first parameter to the right by an integer selected from "0" to "3" assigned to a macroblock address, and performing an AND bit operation between "1" and the result of the right-shift operation, i.e., "1" AND the result of the right-shift operation; and
determining that at least one value of the coded block pattern is "1" if the result of the AND bit operation is not "0".
23. The method of claim 21, wherein the determining of the decoding method comprises setting a context model for decoding the residual prediction flag, and the decoding of the residual prediction flag is performed by entropy decoding using the set context model, wherein the entropy decoding is the inverse operation of CABAC.
24. The method of claim 23, wherein the entropy decoding is performed using context-based adaptive binary arithmetic coding.
25. The method of claim 21, wherein the determining of the decoding method comprises:
setting a parameter used to select a context model to "1" if at least one value of the coded block pattern is "1"; and
setting the parameter used to select the context model to "0" if the coded block pattern contains no bit having the value "1".
26. The method of claim 25, wherein the following models are used in decoding the residual prediction flag:
a first context model whose probability for "0" is higher than its probability for "1", if the parameter used to select the context model is set to "0"; and
a second context model whose probability for "1" is higher than its probability for "0", if the parameter is set to "1".
27. The method of claim 21, wherein the lower layer comprises one of a base layer and a fine granularity scalability layer.
28. The method of claim 21, wherein, in the checking of the coded block pattern of the lower layer block, it is determined whether each value of the coded block pattern of the lower layer block used to predict the enhancement layer block is 1.
29. A method of decoding an enhancement layer of a multi-layer video signal, the method comprising:
determining a decoding method for a residual prediction flag using the value of a coded block pattern of a lower layer, the residual prediction flag indicating whether the enhancement layer is predicted from the lower layer; and
decoding the residual prediction flag using an entropy decoding method selected according to the decoding method, and decoding the enhancement layer using the residual prediction flag.
30. A video encoder for encoding a residual prediction flag indicating whether residual data of an enhancement layer block of a multi-layer video signal is predicted from residual data of a lower layer block corresponding to the residual data of the enhancement layer block, the encoder comprising:
a lower layer coding unit which generates a characteristic of the lower layer block, wherein the characteristic comprises at least one of an energy of the residual data of the lower layer block and a value of a coded block pattern of the lower layer block; and
an enhancement layer coding unit which encodes the residual prediction flag using a coding method for the residual prediction flag, wherein the coding method is determined according to the characteristic.
31. The video encoder of claim 30, wherein, if the characteristic is less than a predetermined threshold, the encoding of the residual prediction flag is skipped, and if the characteristic is not less than the predetermined threshold, the encoding of the residual prediction flag comprises encoding a difference between a predicted value of the residual prediction flag and the residual prediction flag.
32. The video encoder of claim 30, wherein, if the characteristic is not less than a predetermined threshold, the encoding of the residual prediction flag comprises encoding the value obtained by subtracting the residual prediction flag from "1".
33. The video encoder of claim 30:
wherein the encoding of the residual prediction flag is performed using context-based adaptive binary arithmetic coding; and
wherein, if the characteristic is less than a predetermined threshold, the encoding of the residual prediction flag is skipped, and if the characteristic is not less than the predetermined threshold, the encoding of the residual prediction flag is performed using a context model biased toward the symbol "1".
34. The video encoder of claim 30:
wherein the encoding of the residual prediction flag is performed using context-based adaptive binary arithmetic coding; and
wherein, if the characteristic is less than a predetermined threshold, the encoding of the residual prediction flag is performed using a first context model, and if the characteristic is not less than the predetermined threshold, the encoding of the residual prediction flag is performed using a second context model, the first context model having a higher probability for "0" than for "1", and the second context model having a higher probability for "1" than for "0".
35. The video encoder of claim 30, wherein the lower layer comprises one of a base layer and a fine granularity scalability layer.
36. The video encoder of claim 30, wherein:
the energy of the residual data of the lower layer block comprises the number of non-zero pixels in the lower layer block; and
the value of the coded block pattern of the lower layer block comprises the number of bits having the value "1" in the coded block pattern.
37. The video encoder of claim 30, wherein the value of the coded block pattern comprises the values of the following bits: the four less significant bits of the coded block pattern for luma information, referred to as CodedBlockPatternLuma, and the two more significant bits of the coded block pattern for chroma information, referred to as CodedBlockPatternChroma.
38. A video decoder for decoding a residual prediction flag indicating whether residual data of an enhancement layer block of a multi-layer video signal is predicted from residual data of a lower layer block corresponding to the residual data of the enhancement layer block, the decoder comprising:
a base layer decoding unit which generates a characteristic of the lower layer block, wherein the characteristic comprises at least one of an energy of the residual data of the lower layer block and a value of a coded block pattern of the lower layer block; and
an enhancement layer decoding unit which decodes the residual prediction flag using a decoding method for the residual prediction flag, wherein the decoding method is determined according to the characteristic.
39. The video decoder of claim 38, wherein:
if the characteristic is less than a predetermined threshold, the residual prediction flag is not decoded and is set to "0"; and
if the characteristic is not less than the predetermined threshold, the residual prediction flag is set to the difference between a residual prediction difference flag and a predicted value of the residual prediction flag, wherein the residual prediction difference flag corresponds to the difference between the predicted value of the residual prediction flag and the residual prediction flag.
40. The video decoder of claim 38, wherein, if the characteristic is greater than a predetermined threshold, the residual prediction flag is set to the value obtained by inverting the decoded value with respect to "1".
41. The video decoder of claim 38, wherein, where the encoding of the residual prediction flag is performed using context-based adaptive binary arithmetic coding:
if the characteristic is less than a predetermined threshold, the decoding of the residual prediction flag is skipped; and
if the characteristic is not less than the predetermined threshold, the decoding of the residual prediction flag is performed using a context model biased toward the symbol "1".
42. The video decoder of claim 38, wherein, where the encoding of the residual prediction flag is performed using context-based adaptive binary arithmetic coding:
if the characteristic is less than a predetermined threshold, the residual prediction flag is decoded using a first context model; and
if the characteristic is not less than the predetermined threshold, the residual prediction flag is decoded using a second context model, the first context model having a higher probability for "0" than for "1", and the second context model having a higher probability for "1" than for "0".
43. The video decoder of claim 38, wherein the lower layer comprises one of a base layer and a fine granularity scalability layer.
44. The video decoder of claim 38, wherein:
the energy of the residual data of the lower layer block comprises the number of non-zero pixels in the lower layer block; and
the value of the coded block pattern of the lower layer block comprises the number of bits having the value "1" in the coded block pattern.
45. The video decoder of claim 38, wherein the value of the coded block pattern comprises the values of the following bits: the four less significant bits of the coded block pattern for luma information, referred to as CodedBlockPatternLuma, and the two more significant bits of the coded block pattern for chroma information, referred to as CodedBlockPatternChroma.
46. A decoder for decoding a residual prediction flag indicating whether residual data of an enhancement layer block of a multi-layer video signal is predicted from residual data of a lower layer block corresponding to the residual data of the enhancement layer block, the decoder comprising:
a parser which parses the value of the coded block pattern of the lower layer block; and
a decoding unit which determines a decoding method for the residual prediction flag according to the value of the coded block pattern and decodes the residual prediction flag using the determined decoding method.
47. The decoder of claim 46, wherein the parser sets the four less significant bits of the coded block pattern as a first parameter used to determine the coding of luma information, sets the two more significant bits of the coded block pattern as a second parameter used to determine the coding of chroma information, performs a right-shift operation that shifts the first parameter to the right by an integer selected from "0" to "3" assigned to a macroblock address, performs an AND bit operation between "1" and the result of the right-shift operation, i.e., "1" AND the result of the right-shift operation, and determines that at least one value of the coded block pattern is "1" if the result of the AND bit operation is not "0".
48. The decoder of claim 46, wherein the parser sets a context model for decoding the residual prediction flag, and the decoding unit performs entropy decoding using the set context model.
49. The decoder of claim 48, wherein the entropy decoding is context-based adaptive binary arithmetic coding.
50. The decoder of claim 46, wherein, in determining whether each value of the coded block pattern is 1, the parser sets a parameter used to select a context model to "1" if at least one value of the coded block pattern is "1", or sets the parameter to "0" if the coded block pattern contains no bit having the value "1".
51. The decoder of claim 50, wherein the decoding unit decodes the residual prediction flag using the following models:
a first context model whose probability for "0" is higher than its probability for "1", if the parameter used to select the context model is set to "0"; and
a second context model whose probability for "1" is higher than its probability for "0", if the parameter is set to "1".
52. The decoder of claim 46, wherein the lower layer comprises one of a base layer and a fine granularity scalability layer.
53. The decoder of claim 46, wherein the parser determines whether each value of the coded block pattern of the lower layer block referenced by the enhancement layer is 1.
54. A decoder for decoding an enhancement layer of a multi-layer video signal, comprising:
a parser which parses the value of a coded block pattern of a lower layer;
an entropy decoding unit which determines a decoding method for a residual prediction flag using the value of the coded block pattern, entropy-decodes the residual prediction flag according to the decoding method, and then decodes the enhancement layer, the residual prediction flag indicating whether the enhancement layer is predicted from the lower layer;
an inverse quantizer which inversely quantizes the enhancement layer; and
an inverse transformer which performs an inverse transform on the inversely quantized enhancement layer according to the residual prediction flag.
55. A coding method performed by a multi-layer video encoder, the coding method comprising:
obtaining a base frame from an input frame;
obtaining data of an enhancement frame by referring to reference data comprising at least one of data of the base frame, data of a second enhancement layer at a different temporal position, and data of a different region of the enhancement layer; and
encoding the data of the enhancement layer according to a determination of whether the data of the enhancement layer is predicted from the reference data.
56. The coding method of claim 55, wherein the data of the base frame comprises: residual data of the base frame comprising the residual between the base frame and a second frame temporally different from the base frame, and a motion vector of the base frame.
57. The coding method of claim 56, further comprising obtaining the energy of the residual data, wherein:
if the energy is less than a predetermined threshold, the encoding of the data of the enhancement layer comprises encoding the data of the enhancement layer without the information indicating whether the data of the base frame is referenced by the data of the enhancement layer; and
if the energy is not less than the predetermined threshold, the encoding of the data of the enhancement layer comprises encoding the data of the enhancement layer according to prediction information.
58. The coding method of claim 57, wherein the predetermined threshold is the value obtained when the energy of the residual data is "0".
59. The coding method of claim 55, further comprising obtaining the energy of the residual data, wherein, if the energy of the residual data is not less than a predetermined threshold, the encoding of the data of the enhancement layer comprises:
calculating the difference between the motion vector of the base frame and the motion vector of the enhancement layer;
setting, according to the difference between the motion vector of the base layer and the motion vector of the enhancement layer, determination information for predicting whether the residual data is referenced by the data of the enhancement layer; and
encoding the data of the enhancement layer according to the determination information and the difference between the determination information and the information indicating whether the residual data is referenced by the data of the enhancement layer.
60. The coding method of claim 55, further comprising obtaining the difference between the motion vector of the base frame and the motion vector of a spatially adjacent region, wherein:
if the difference is less than a predetermined threshold, the encoding of the data of the enhancement layer comprises encoding the data of the enhancement layer without the information indicating whether the data of the base frame is referenced by the data of the enhancement layer; and
if the difference is not less than the predetermined threshold, the encoding of the data of the enhancement layer comprises encoding the data of the enhancement layer according to prediction information.
61. The coding method of claim 60, wherein the predetermined threshold is the value obtained when the residual between the two motion vectors is "0".
62. The coding method of claim 55, further comprising obtaining the difference between the motion vector of the base frame and the motion vector of a spatially adjacent region, wherein, if the difference between the motion vector of the base frame and the motion vector of the spatially adjacent region is not less than a predetermined threshold, the encoding of the data of the enhancement layer comprises:
setting, according to the difference between the motion vector of the base frame and the motion vector of the spatially adjacent region, determination information for predicting whether the motion vector of the base layer is referenced by the enhancement layer data; and
encoding the data of the enhancement layer according to the determination information and the difference between the determination information and the information indicating whether the motion vector of the base frame is referenced by the data of the enhancement layer.
63. The coding method of claim 55, wherein the data of the enhancement layer comprises: determination information for predicting whether the data of the base frame is referenced by the data of the enhancement layer, and the difference between the determination information and the prediction information indicating whether the data of the base frame is referenced.
64. The coding method of claim 55, wherein the encoding of the data of the enhancement layer is performed by entropy coding.
65. The coding method of claim 55, wherein the data of the enhancement layer is data of one of a macroblock, a slice, and a frame.
66. A decoding method performed by a multi-layer video decoder, the decoding method comprising:
decoding an input base frame;
determining whether data of the base frame is needed to predict data of an enhancement layer; and
decoding the data of the enhancement layer according to the result of the determination.
67. The decoding method of claim 66, wherein the data of the base frame comprises at least one of: residual data of the base frame comprising the residual between the base frame and a second frame temporally different from the base frame, and a motion vector of the base frame.
68. The decoding method of claim 67, further comprising obtaining the energy of the residual data, wherein, in determining whether the data of the base frame is needed to predict the enhancement layer data:
if the energy is less than a predetermined threshold, the decoding of the data of the enhancement layer comprises:
setting first prediction information indicating that the data of the base frame was not referenced by the data of the enhancement layer when the data of the enhancement layer was encoded; and
decoding the data of the enhancement layer using the first prediction information; and
if the energy is not less than the predetermined threshold, the decoding of the data of the enhancement layer comprises:
setting second prediction information indicating that the data of the base frame was referenced by the data of the enhancement layer when the data of the enhancement layer was encoded; and
decoding the data of the enhancement layer using the second prediction information.
69. The decoding method of claim 68, wherein the predetermined threshold is the value obtained when the energy of the residual data is "0".
70. The decoding method of claim 67, further comprising obtaining the energy of the residual data, wherein, if the energy of the residual data is not less than a predetermined threshold, the decoding of the data of the enhancement layer comprises:
calculating the difference between the motion vector of the base frame and the motion vector of the enhancement layer;
setting, according to the difference between the motion vector of the base layer and the motion vector of the enhancement layer, determination information for predicting whether the residual data is referenced by the data of the enhancement layer; and
decoding the determination information and the difference between the determination information and the information indicating whether the residual data is referenced by the data of the enhancement layer.
71. The decoding method of claim 67, wherein the decoding of the enhancement layer data comprises:
calculating a motion vector predicted from the data of the base frame;
calculating a motion vector predicted from a spatially adjacent region; and
if the residual between the motion vectors is less than a predetermined threshold, decoding the enhancement layer data by setting prediction information indicating that the enhancement layer data is not predicted from the data of the base frame.
72. The decoding method of claim 67, further comprising obtaining the difference between the motion vector of the base layer and the motion vector of an adjacent region, wherein, in determining whether the data of the base frame is needed to predict the enhancement layer data:
if the difference is less than a predetermined threshold, the decoding of the data of the enhancement layer comprises:
setting first prediction information indicating that the data of the base frame was not referenced by the data of the enhancement layer when the data of the enhancement layer was encoded; and
decoding the data of the enhancement layer using the first prediction information; and
if the difference is not less than the predetermined threshold, the decoding of the data of the enhancement layer comprises:
setting second prediction information indicating that the data of the base frame was referenced by the data of the enhancement layer when the data of the enhancement layer was encoded; and
decoding the data of the enhancement layer using the second prediction information.
73. The decoding method of claim 66, wherein the predetermined threshold is the value obtained when the residual between the two motion vectors is "0".
74. The decoding method of claim 66, further comprising obtaining the difference between the motion vector of the base layer and the motion vector of a spatially adjacent region, wherein, if the difference between the motion vector of the base frame and the motion vector of the spatially adjacent region is not less than a predetermined threshold, the decoding of the data of the enhancement layer comprises:
setting, according to the difference between the motion vector of the base layer and the motion vector of the spatially adjacent region, determination information for predicting whether the motion vector of the base layer is referenced by the enhancement layer data; and
decoding the data of the enhancement layer according to the determination information and the difference between the determination information and the information indicating whether the motion vector of the base frame is referenced by the data of the enhancement layer.
75. The decoding method of claim 66, wherein the data of the enhancement layer comprises: determination information for predicting whether the data of the base frame is referenced by the data of the enhancement layer, and the difference between the determination information and the prediction information indicating whether the data of the base frame is referenced.
76. The decoding method of claim 66, wherein the decoding of the data of the enhancement layer is performed by entropy decoding.
77. The decoding method of claim 66, wherein the data of the enhancement layer is data of one of a macroblock, a slice, and a frame.
78. A video encoder comprising:
a base layer encoder which obtains a base frame from an input frame; and
an enhancement layer encoder which generates enhancement layer data, the enhancement layer data being predicted with reference to reference data comprising at least one of data of the base frame, data of a second enhancement layer at a different temporal position, and data of a different region of the enhancement layer,
wherein the enhancement layer encoder comprises a converter which encodes the enhancement layer data according to a result of determining whether the enhancement layer data is predicted from the reference data.
79. The encoder of claim 78, wherein the data of the base frame comprises at least one of residual data of the base frame, comprising a residual between the base frame and a second frame temporally different from the base frame, and a motion vector of the base frame.
80. The encoder of claim 79, wherein the base layer encoder obtains an energy of the residual data, and wherein:
if the energy is less than a predetermined threshold, the converter encodes the enhancement layer data without information indicating whether the data of the base frame is referenced by the enhancement layer data; and
if the energy is not less than the predetermined threshold, the converter encodes the enhancement layer data according to the prediction information.
81. The encoder of claim 80, wherein the predetermined threshold is the value obtained when the energy of the residual data is 0.
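Claims 80 and 81 describe a simple gate on the base-layer residual energy. A minimal Python sketch of that decision follows; StubEntropyCoder, encode_enhancement_block and residual_prediction_flag are hypothetical names introduced here, and the energy measure (sum of squared samples) is an assumption, since the claims do not fix one.

    class StubEntropyCoder:
        """Placeholder for the real entropy coder; it only records symbols."""
        def __init__(self):
            self.symbols = []

        def encode(self, syms):
            self.symbols.extend(syms)

    def encode_enhancement_block(coder, enh_symbols, base_residual,
                                 threshold, residual_prediction_flag):
        # Energy of the base-layer residual, taken here as the sum of squared
        # samples; claim 81 pins the threshold to the value this energy takes
        # when the residual is zero.
        energy = sum(c * c for c in base_residual)
        if energy < threshold:
            # The flag saying whether base-layer data was referenced is not coded.
            coder.encode(enh_symbols)
        else:
            # The flag is coded ahead of the enhancement-layer symbols.
            coder.encode([residual_prediction_flag] + enh_symbols)

Read together with claim 81, the comparison effectively asks whether the base-layer residual carries any energy at all.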
82. The encoder of claim 78, wherein the base layer encoder obtains an energy of the residual data, and wherein, if the energy of the residual data is not less than a predetermined threshold, the converter:
calculates a difference between the motion vector of the base frame and the motion vector of the enhancement layer;
sets determination information for predicting whether the residual data is referenced by the enhancement layer data, according to the difference between the motion vector of the base layer and the motion vector of the enhancement layer; and
encodes the enhancement layer data according to the determination information and the difference between the determination information and the information indicating whether the residual data is referenced by the enhancement layer data.
83. The encoder of claim 78, wherein the converter calculates a difference between the motion vector of the base frame and the motion vectors of spatially adjacent regions, and wherein:
if the difference is less than a predetermined threshold, the converter encodes the enhancement layer data without information indicating whether the data of the base frame is referenced by the enhancement layer data; and
if the difference is not less than the predetermined threshold, the converter encodes the enhancement layer data according to the prediction information.
84. The encoder of claim 83, wherein the predetermined threshold is the value obtained when the residual between the two motion vectors is 0.
85. The encoder of claim 79, wherein the converter obtains a difference between the motion vector of the base frame and the motion vectors of spatially adjacent regions, and wherein, if the difference between the motion vector of the base frame and the motion vectors of the spatially adjacent regions is not less than a predetermined threshold, the converter:
sets determination information for predicting whether the motion vector of the base layer is referenced by the enhancement layer data, according to the difference between the motion vector of the base frame and the motion vectors of the spatially adjacent regions; and
encodes the enhancement layer data according to the determination information and the difference between the determination information and the information indicating whether the motion vector of the base frame is referenced by the enhancement layer data.
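Claims 83 to 85 build the same kind of gate out of the spatial motion-vector neighbourhood. The sketch below is only an illustration under stated assumptions: the names spatial_mv_difference, determination_info and neighbour_mvs are hypothetical, and the prediction rule for the flag is a placeholder, since the claims do not specify one.

    def spatial_mv_difference(mv_base, neighbour_mvs):
        # Largest sum-of-absolute-differences between the base-frame motion
        # vector and the motion vectors of spatially adjacent regions.
        return max(abs(mv_base[0] - nx) + abs(mv_base[1] - ny)
                   for nx, ny in neighbour_mvs)

    def determination_info(mv_base, neighbour_mvs, actual_ref_flag, threshold):
        if spatial_mv_difference(mv_base, neighbour_mvs) < threshold:
            return None   # claim 83: nothing about base-layer referencing is coded
        # Claim 85: code a prediction of the reference flag together with its
        # deviation from the actual flag (an XOR for a binary flag).
        predicted_flag = 1   # placeholder prediction rule; the claims do not fix one
        return predicted_flag, predicted_flag ^ actual_ref_flag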
86. The encoder of claim 78, wherein the enhancement layer data comprises determination information for predicting whether the data of the base frame is referenced by the enhancement layer data, and a difference between the determination information and the prediction information indicating whether the data of the base frame was actually referenced.
87. The encoder of claim 78, wherein the converter performs entropy encoding.
88. The encoder of claim 78, wherein the enhancement layer data is data for one of a macroblock, a slice, and a frame.
89. A video decoder comprising:
a base layer decoder which decodes an input base frame; and
an enhancement layer decoder which decodes enhancement layer data with reference to the base frame,
wherein the enhancement layer decoder comprises an inverse converter which decodes the enhancement layer data according to a result of determining whether the data of the base frame is needed to predict the enhancement layer data.
90. The decoder of claim 89, wherein the data of the base frame comprises at least one of residual data of the base frame, comprising a residual between the base frame and a second frame temporally different from the base frame, and a motion vector of the base frame.
91. The decoder of claim 89, wherein the base layer decoder obtains an energy of the residual data, and wherein:
if the energy is less than a predetermined threshold, the inverse converter:
sets first prediction information indicating that the data of the base frame was not referenced by the enhancement layer data when the enhancement layer data was encoded; and
decodes the enhancement layer data using the first prediction information; and
if the energy is not less than the predetermined threshold, the inverse converter:
sets second prediction information indicating that the data of the base frame was referenced by the enhancement layer data when the enhancement layer data was encoded; and
decodes the enhancement layer data using the second prediction information.
92. The decoder of claim 91, wherein the predetermined threshold is the value obtained when the energy of the residual data is 0.
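On the decoder side (claims 91 and 92), the same energy test is recomputed from the already-decoded base layer, so the choice between the first and second prediction information costs no extra bits. A sketch, with the hypothetical name derive_prediction_info and the same assumed energy measure as above:

    def derive_prediction_info(base_residual, threshold):
        # The decoder recomputes the residual energy from the already-decoded
        # base layer, so this choice does not need a flag in the bitstream.
        energy = sum(c * c for c in base_residual)
        if energy < threshold:
            return 0   # first prediction information: base frame not referenced
        return 1       # second prediction information: base frame referenced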
93. The decoder of claim 89, wherein the base layer decoder obtains an energy of the residual data, and wherein, if the energy of the residual data is not less than a predetermined threshold, the inverse converter:
calculates a difference between the motion vector of the base frame and the motion vector of the enhancement layer;
sets determination information for predicting whether the residual data is referenced by the enhancement layer data, according to the difference between the motion vector of the base layer and the motion vector of the enhancement layer; and
decodes the determination information and the difference between the determination information and the information indicating whether the residual data is referenced by the enhancement layer data.
94. The decoder of claim 89, wherein the inverse converter calculates a difference between the motion vector of the base frame and the motion vectors of spatially adjacent regions, and wherein:
if the difference is less than a predetermined threshold, the inverse converter:
sets first prediction information indicating that the data of the base frame was not referenced by the enhancement layer data when the enhancement layer data was encoded; and
decodes the enhancement layer data using the first prediction information; and
if the difference is not less than the predetermined threshold, the inverse converter:
sets second prediction information indicating that the data of the base frame was referenced by the enhancement layer data when the enhancement layer data was encoded; and
decodes the enhancement layer data using the second prediction information.
95. The decoder of claim 94, wherein the predetermined threshold is the value obtained when the residual between the two motion vectors is 0.
96. The decoder of claim 89, wherein the inverse converter obtains a difference between the motion vector of the base layer and the motion vectors of spatially adjacent regions, and wherein, if the difference between the motion vector of the base frame and the motion vectors of the spatially adjacent regions is not less than a predetermined threshold, the inverse converter:
sets determination information for predicting whether the motion vector of the base layer is referenced by the enhancement layer data, according to the difference between the motion vector of the base layer and the motion vectors of the spatially adjacent regions; and
decodes the enhancement layer data according to the determination information and the difference between the determination information and the information indicating whether the motion vector of the base frame is referenced by the enhancement layer data.
97. The decoder of claim 89, wherein the enhancement layer data comprises determination information for predicting whether the data of the base frame is referenced by the enhancement layer data, and a difference between the determination information and the prediction information indicating whether the data of the base frame was actually referenced.
98. The decoder of claim 89, wherein the inverse converter performs entropy decoding.
99. The decoder of claim 89, wherein the enhancement layer data is data for one of a macroblock, a slice, and a frame.
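As the title suggests, the prediction information derived in the claims above is what ultimately steers the adaptive selection of a context model for entropy coding. The closing Python sketch shows one way such a selection could look; the class AdaptiveBinModel, the function select_context_index, and the particular mapping from flags to context indices are illustrative assumptions, not anything specified in the claims.

    class AdaptiveBinModel:
        """Toy stand-in for an adaptive binary probability state."""
        def __init__(self, p_one=0.5):
            self.p_one = p_one

        def update(self, bit, rate=0.05):
            # A real arithmetic coder would also use p_one to narrow its
            # interval; here only the adaptation step is shown.
            self.p_one += rate * (bit - self.p_one)

    def select_context_index(prediction_flag, left_flag, top_flag):
        # prediction_flag is the first/second prediction information derived
        # from the base layer; left_flag and top_flag come from spatially
        # adjacent blocks.
        if prediction_flag == 0:
            return 0                        # base layer not referenced
        return 1 + left_flag + top_flag     # base layer referenced: three further models

    contexts = [AdaptiveBinModel() for _ in range(4)]
    ctx = contexts[select_context_index(prediction_flag=1, left_flag=0, top_flag=1)]
    ctx.update(bit=1)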
CNA2006100767112A 2005-04-19 2006-04-19 Method and apparatus for adaptively selecting context model for entropy coding Pending CN1917647A (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US67254905P 2005-04-19 2005-04-19
US60/672,549 2005-04-19
KR1020050059371 2005-07-01
US60/701,987 2005-07-25
US60/702,296 2005-07-26
US60/702,624 2005-07-27
KR1020050087761 2005-09-21

Publications (1)

Publication Number Publication Date
CN1917647A true CN1917647A (en) 2007-02-21

Family

ID=37616466

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2006100767112A Pending CN1917647A (en) 2005-04-19 2006-04-19 Method and apparatus for adaptively selecting context model for entropy coding

Country Status (2)

Country Link
KR (1) KR100763181B1 (en)
CN (1) CN1917647A (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101924873A (en) * 2009-06-12 2010-12-22 索尼公司 Image processing equipment and image processing method
CN102256125A (en) * 2011-07-14 2011-11-23 北京工业大学 Context adaptive arithmetic coding method for HEVC (High Efficiency Video Coding)
CN102939756A (en) * 2010-04-16 2013-02-20 Sk电信有限公司 Video coding and decoding method and apparatus
CN103188489A (en) * 2011-12-30 2013-07-03 联芯科技有限公司 Cbp analysis method and device thereof
CN101911712B (en) * 2008-01-16 2013-08-14 索尼公司 Video coding system using texture analysis and synthesis in scalable coding framework
CN101682763B (en) * 2007-06-12 2013-09-11 汤姆森许可贸易公司 Methods and apparatus supporting multi-pass video syntax structure for slice data
CN102057677B (en) * 2008-06-03 2013-10-02 微软公司 Adaptive quantization for enhancement layer video coding
CN103563373A (en) * 2012-02-03 2014-02-05 松下电器产业株式会社 Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
CN104737537A (en) * 2012-09-07 2015-06-24 高通股份有限公司 Weighted prediction mode for scalable video coding
CN104967869A (en) * 2008-03-06 2015-10-07 通用仪表公司 Method and apparatus of decoding an enhanced video stream
CN105338353A (en) * 2014-08-05 2016-02-17 美国博通公司 Simplified range and context update for multimedia context-adaptive binary arithmetic coding design
CN105704490A (en) * 2010-04-16 2016-06-22 Sk电信有限公司 Video encoding/decoding apparatus and method
CN105915923A (en) * 2010-04-13 2016-08-31 Ge视频压缩有限责任公司 Cross-plane prediction
CN103313057B (en) * 2008-04-16 2016-08-31 英特尔公司 Tone for the Video Codec of bit-depth scalable maps
CN106028045A (en) * 2010-04-13 2016-10-12 Ge视频压缩有限责任公司 Cross-plane prediction
CN106105220A (en) * 2014-01-07 2016-11-09 诺基亚技术有限公司 Method and apparatus for video coding and decoding
CN106162171A (en) * 2010-04-13 2016-11-23 Ge视频压缩有限责任公司 Sample areas folding
CN106464889A (en) * 2014-03-06 2017-02-22 三星电子株式会社 Inter-layer video decoding method and apparatus therefor performing sub-block-based prediction, and inter-layer video encoding method and apparatus therefor performing sub-block-based prediction
CN106488237A (en) * 2010-04-05 2017-03-08 三星电子株式会社 Low complex degree entropy coding/decoding method and apparatus
US20170134761A1 (en) 2010-04-13 2017-05-11 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
CN107087177A (en) * 2011-06-30 2017-08-22 夏普株式会社 Context initialization based on decoder picture buffer
CN107197263A (en) * 2011-01-12 2017-09-22 太阳专利托管公司 Picture decoding method and picture decoding apparatus
US9854272B2 (en) 2008-03-06 2017-12-26 Arris Enterprises, Inc. Method and apparatus for decoding an enhanced video stream
CN107529060A (en) * 2011-10-31 2017-12-29 三星电子株式会社 Determine the method and apparatus and computer recordable media of context model
US9967561B2 (en) 2006-05-05 2018-05-08 Microsoft Technology Licensing, Llc Flexible quantization
US10038920B2 (en) 2010-04-13 2018-07-31 Ge Video Compression, Llc Multitree subdivision and inheritance of coding parameters in a coding block
US10129563B2 (en) 2012-01-20 2018-11-13 Sun Patent Trust Methods and apparatuses for encoding and decoding video using temporal motion vector prediction
US10212447B2 (en) 2012-03-06 2019-02-19 Sun Patent Trust Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US20190089962A1 (en) 2010-04-13 2019-03-21 Ge Video Compression, Llc Inter-plane prediction
CN110290382A (en) * 2019-05-10 2019-09-27 同济大学 A kind of residual error data coding and decoding methods
CN111327903A (en) * 2018-12-13 2020-06-23 华为技术有限公司 Prediction method and device of chrominance block
CN114731440A (en) * 2019-09-23 2022-07-08 阿里巴巴集团控股有限公司 Lossless encoding of video data
US12149695B2 (en) 2023-08-09 2024-11-19 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8432968B2 (en) 2007-10-15 2013-04-30 Qualcomm Incorporated Scalable video coding techniques for scalable bitdepths
WO2013051897A1 (en) * 2011-10-05 2013-04-11 한국전자통신연구원 Image-encoding method and image-decoding method
WO2013147497A1 (en) * 2012-03-26 2013-10-03 엘지전자 주식회사 Method for applying sample adaptive offset in scalable video coding, and apparatus using the method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040046890A (en) * 2002-11-28 2004-06-05 엘지전자 주식회사 Implementation method of spatial scalability in video codec
FR2858741A1 (en) 2003-08-07 2005-02-11 Thomson Licensing Sa DEVICE AND METHOD FOR COMPRESSING DIGITAL IMAGES
KR100626651B1 (en) * 2004-02-17 2006-09-21 한국전자통신연구원 Selective Fine Granular Scalable Coding Apparatus and Method thereof
KR20060043115A (en) * 2004-10-26 2006-05-15 엘지전자 주식회사 Method and apparatus for encoding/decoding video signal using base layer

Cited By (140)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9967561B2 (en) 2006-05-05 2018-05-08 Microsoft Technology Licensing, Llc Flexible quantization
CN101682763B (en) * 2007-06-12 2013-09-11 汤姆森许可贸易公司 Methods and apparatus supporting multi-pass video syntax structure for slice data
CN101911712B (en) * 2008-01-16 2013-08-14 索尼公司 Video coding system using texture analysis and synthesis in scalable coding framework
US11146822B2 (en) 2008-03-06 2021-10-12 Arris Enterprises Llc Method and apparatus for decoding an enhanced video stream
US10616606B2 (en) 2008-03-06 2020-04-07 Arris Enterprises Llc Method and apparatus for decoding an enhanced video stream
CN104967869B (en) * 2008-03-06 2018-06-05 艾锐势有限责任公司 For decoding the method and apparatus of enhanced video stream
CN104967869A (en) * 2008-03-06 2015-10-07 通用仪表公司 Method and apparatus of decoding an enhanced video stream
US11722702B2 (en) 2008-03-06 2023-08-08 Bison Patent Licensing LLC Method and apparatus for decoding an enhanced video stream
US9854272B2 (en) 2008-03-06 2017-12-26 Arris Enterprises, Inc. Method and apparatus for decoding an enhanced video stream
CN103313057B (en) * 2008-04-16 2016-08-31 英特尔公司 Tone for the Video Codec of bit-depth scalable maps
CN102057677B (en) * 2008-06-03 2013-10-02 微软公司 Adaptive quantization for enhancement layer video coding
US10306227B2 (en) 2008-06-03 2019-05-28 Microsoft Technology Licensing, Llc Adaptive quantization for enhancement layer video coding
CN101924873A (en) * 2009-06-12 2010-12-22 索尼公司 Image processing equipment and image processing method
CN101924873B (en) * 2009-06-12 2013-01-16 索尼公司 Image processing device and image processing method
CN106488237A (en) * 2010-04-05 2017-03-08 三星电子株式会社 Low complex degree entropy coding/decoding method and apparatus
US10621614B2 (en) 2010-04-13 2020-04-14 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US20190174148A1 (en) 2010-04-13 2019-06-06 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
CN105915923A (en) * 2010-04-13 2016-08-31 Ge视频压缩有限责任公司 Cross-plane prediction
CN105915924A (en) * 2010-04-13 2016-08-31 Ge视频压缩有限责任公司 Cross-plane prediction
US12120316B2 (en) 2010-04-13 2024-10-15 Ge Video Compression, Llc Inter-plane prediction
CN106028045A (en) * 2010-04-13 2016-10-12 Ge视频压缩有限责任公司 Cross-plane prediction
US12010353B2 (en) 2010-04-13 2024-06-11 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
CN106162171A (en) * 2010-04-13 2016-11-23 Ge视频压缩有限责任公司 Sample areas folding
CN106303523A (en) * 2010-04-13 2017-01-04 Ge视频压缩有限责任公司 Sample areas folding
US11983737B2 (en) 2010-04-13 2024-05-14 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US11910029B2 (en) 2010-04-13 2024-02-20 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division preliminary class
US11910030B2 (en) 2010-04-13 2024-02-20 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US20170134761A1 (en) 2010-04-13 2017-05-11 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US11900415B2 (en) 2010-04-13 2024-02-13 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US11856240B1 (en) 2010-04-13 2023-12-26 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US11810019B2 (en) 2010-04-13 2023-11-07 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US11785264B2 (en) 2010-04-13 2023-10-10 Ge Video Compression, Llc Multitree subdivision and inheritance of coding parameters in a coding block
US11778241B2 (en) 2010-04-13 2023-10-03 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US11765362B2 (en) 2010-04-13 2023-09-19 Ge Video Compression, Llc Inter-plane prediction
US11765363B2 (en) 2010-04-13 2023-09-19 Ge Video Compression, Llc Inter-plane reuse of coding parameters
US11736738B2 (en) 2010-04-13 2023-08-22 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using subdivision
US11734714B2 (en) 2010-04-13 2023-08-22 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US11611761B2 (en) 2010-04-13 2023-03-21 Ge Video Compression, Llc Inter-plane reuse of coding parameters
US11553212B2 (en) 2010-04-13 2023-01-10 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11546642B2 (en) 2010-04-13 2023-01-03 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10038920B2 (en) 2010-04-13 2018-07-31 Ge Video Compression, Llc Multitree subdivision and inheritance of coding parameters in a coding block
US10051291B2 (en) 2010-04-13 2018-08-14 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11546641B2 (en) 2010-04-13 2023-01-03 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10687085B2 (en) 2010-04-13 2020-06-16 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11102518B2 (en) 2010-04-13 2021-08-24 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US11087355B2 (en) 2010-04-13 2021-08-10 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10694218B2 (en) 2010-04-13 2020-06-23 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US20190089962A1 (en) 2010-04-13 2019-03-21 Ge Video Compression, Llc Inter-plane prediction
US10250913B2 (en) 2010-04-13 2019-04-02 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10248966B2 (en) 2010-04-13 2019-04-02 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US20210211743A1 (en) 2010-04-13 2021-07-08 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US20190164188A1 (en) 2010-04-13 2019-05-30 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10803485B2 (en) 2010-04-13 2020-10-13 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US11051047B2 (en) 2010-04-13 2021-06-29 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
CN106028045B (en) * 2010-04-13 2019-06-25 Ge视频压缩有限责任公司 The method of decoding data stream, the method and its decoder for generating data flow
US20190197579A1 (en) 2010-04-13 2019-06-27 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
CN105915923B (en) * 2010-04-13 2019-08-13 Ge视频压缩有限责任公司 Across planar prediction
US11037194B2 (en) 2010-04-13 2021-06-15 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10432978B2 (en) 2010-04-13 2019-10-01 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10432979B2 (en) 2010-04-13 2019-10-01 Ge Video Compression Llc Inheritance in sample array multitree subdivision
US10432980B2 (en) 2010-04-13 2019-10-01 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10440400B2 (en) 2010-04-13 2019-10-08 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10448060B2 (en) 2010-04-13 2019-10-15 Ge Video Compression, Llc Multitree subdivision and inheritance of coding parameters in a coding block
US10460344B2 (en) 2010-04-13 2019-10-29 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10893301B2 (en) 2010-04-13 2021-01-12 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
CN105915924B (en) * 2010-04-13 2019-12-06 Ge视频压缩有限责任公司 Cross-plane prediction
US10880581B2 (en) 2010-04-13 2020-12-29 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10880580B2 (en) 2010-04-13 2020-12-29 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10873749B2 (en) 2010-04-13 2020-12-22 Ge Video Compression, Llc Inter-plane reuse of coding parameters
US10863208B2 (en) 2010-04-13 2020-12-08 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10855995B2 (en) 2010-04-13 2020-12-01 Ge Video Compression, Llc Inter-plane prediction
US10855991B2 (en) 2010-04-13 2020-12-01 Ge Video Compression, Llc Inter-plane prediction
US10856013B2 (en) 2010-04-13 2020-12-01 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10855990B2 (en) 2010-04-13 2020-12-01 Ge Video Compression, Llc Inter-plane prediction
US10672028B2 (en) 2010-04-13 2020-06-02 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10681390B2 (en) 2010-04-13 2020-06-09 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10687086B2 (en) 2010-04-13 2020-06-16 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US20180324466A1 (en) 2010-04-13 2018-11-08 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10848767B2 (en) 2010-04-13 2020-11-24 Ge Video Compression, Llc Inter-plane prediction
US10803483B2 (en) 2010-04-13 2020-10-13 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10708628B2 (en) 2010-04-13 2020-07-07 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10721495B2 (en) 2010-04-13 2020-07-21 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10719850B2 (en) 2010-04-13 2020-07-21 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10721496B2 (en) 2010-04-13 2020-07-21 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10748183B2 (en) 2010-04-13 2020-08-18 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10764608B2 (en) 2010-04-13 2020-09-01 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10771822B2 (en) 2010-04-13 2020-09-08 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
CN106162171B (en) * 2010-04-13 2020-09-11 Ge视频压缩有限责任公司 Decoder and method, encoding method
CN106303523B (en) * 2010-04-13 2020-09-11 Ge视频压缩有限责任公司 Decoder and method, encoding method
US10805645B2 (en) 2010-04-13 2020-10-13 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
CN102939756A (en) * 2010-04-16 2013-02-20 Sk电信有限公司 Video coding and decoding method and apparatus
CN105704490A (en) * 2010-04-16 2016-06-22 Sk电信有限公司 Video encoding/decoding apparatus and method
CN105306943B (en) * 2010-04-16 2018-07-31 Sk电信有限公司 Method for video coding
CN105306943A (en) * 2010-04-16 2016-02-03 Sk电信有限公司 Video encoding method
CN105704490B (en) * 2010-04-16 2018-07-31 Sk电信有限公司 Video encoder
CN102939756B (en) * 2010-04-16 2016-03-02 Sk电信有限公司 Video encoding/decoding apparatus and method
CN107197263A (en) * 2011-01-12 2017-09-22 太阳专利托管公司 Picture decoding method and picture decoding apparatus
US11770536B2 (en) 2011-01-12 2023-09-26 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus
US11350096B2 (en) 2011-01-12 2022-05-31 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus
US10638134B2 (en) 2011-01-12 2020-04-28 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus
CN107197263B (en) * 2011-01-12 2019-11-19 太阳专利托管公司 Picture decoding method and picture decoding apparatus
CN107087177B (en) * 2011-06-30 2020-02-18 威勒斯媒体国际有限公司 Video frame encoding method and device and video frame decoding method and device
CN107087177A (en) * 2011-06-30 2017-08-22 夏普株式会社 Context initialization based on decoder picture buffer
CN102256125A (en) * 2011-07-14 2011-11-23 北京工业大学 Context adaptive arithmetic coding method for HEVC (High Efficiency Video Coding)
CN102256125B (en) * 2011-07-14 2013-06-05 北京工业大学 Context adaptive arithmetic coding method for HEVC (High Efficiency Video Coding)
CN107529060A (en) * 2011-10-31 2017-12-29 三星电子株式会社 Determine the method and apparatus and computer recordable media of context model
CN107529060B (en) * 2011-10-31 2020-04-21 三星电子株式会社 Method and apparatus for determining context model and computer recordable medium
CN103188489A (en) * 2011-12-30 2013-07-03 联芯科技有限公司 Cbp analysis method and device thereof
CN103188489B (en) * 2011-12-30 2016-08-03 联芯科技有限公司 CBP analytic method and device thereof
US10129563B2 (en) 2012-01-20 2018-11-13 Sun Patent Trust Methods and apparatuses for encoding and decoding video using temporal motion vector prediction
US10616601B2 (en) 2012-01-20 2020-04-07 Sun Patent Trust Methods and apparatuses for encoding and decoding video using temporal motion vector prediction
US10904554B2 (en) 2012-02-03 2021-01-26 Sun Patent Trust Image coding method and image coding apparatus
US10334268B2 (en) 2012-02-03 2019-06-25 Sun Patent Trust Image coding method and image coding apparatus
US10623762B2 (en) 2012-02-03 2020-04-14 Sun Patent Trust Image coding method and image coding apparatus
US11451815B2 (en) 2012-02-03 2022-09-20 Sun Patent Trust Image coding method and image coding apparatus
US9648323B2 (en) 2012-02-03 2017-05-09 Sun Patent Trust Image coding method and image coding apparatus
US11812048B2 (en) 2012-02-03 2023-11-07 Sun Patent Trust Image coding method and image coding apparatus
US10034015B2 (en) 2012-02-03 2018-07-24 Sun Patent Trust Image coding method and image coding apparatus
CN103563373B (en) * 2012-02-03 2017-09-26 太阳专利托管公司 Method for encoding images and picture coding device
CN103563373A (en) * 2012-02-03 2014-02-05 松下电器产业株式会社 Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
US9883201B2 (en) 2012-02-03 2018-01-30 Sun Patent Trust Image coding method and image coding apparatus
US11949907B2 (en) 2012-03-06 2024-04-02 Sun Patent Trust Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US10560716B2 (en) 2012-03-06 2020-02-11 Sun Patent Trust Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US10212447B2 (en) 2012-03-06 2019-02-19 Sun Patent Trust Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US11595682B2 (en) 2012-03-06 2023-02-28 Sun Patent Trust Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US10880572B2 (en) 2012-03-06 2020-12-29 Sun Patent Trust Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
CN104737537A (en) * 2012-09-07 2015-06-24 高通股份有限公司 Weighted prediction mode for scalable video coding
CN104737537B (en) * 2012-09-07 2018-09-21 高通股份有限公司 For the equipment of decoding digital video, the method for decoding video, the method for encoded video and computer-readable storage medium
US9906786B2 (en) 2012-09-07 2018-02-27 Qualcomm Incorporated Weighted prediction mode for scalable video coding
CN106105220A (en) * 2014-01-07 2016-11-09 诺基亚技术有限公司 Method and apparatus for video coding and decoding
CN106464889A (en) * 2014-03-06 2017-02-22 三星电子株式会社 Inter-layer video decoding method and apparatus therefor performing sub-block-based prediction, and inter-layer video encoding method and apparatus therefor performing sub-block-based prediction
CN105338353B (en) * 2014-08-05 2018-11-16 安华高科技股份有限公司 The range and updating context of the simplification of context adaptive binary algorithm coding
CN105338353A (en) * 2014-08-05 2016-02-17 美国博通公司 Simplified range and context update for multimedia context-adaptive binary arithmetic coding design
CN111327903A (en) * 2018-12-13 2020-06-23 华为技术有限公司 Prediction method and device of chrominance block
CN111327903B (en) * 2018-12-13 2023-05-16 华为技术有限公司 Method and device for predicting chroma block
US12120325B2 (en) 2018-12-13 2024-10-15 Huawei Technologies Co., Ltd. Chroma block prediction method and apparatus
CN110290382A (en) * 2019-05-10 2019-09-27 同济大学 A kind of residual error data coding and decoding methods
CN114731440A (en) * 2019-09-23 2022-07-08 阿里巴巴集团控股有限公司 Lossless encoding of video data
US12047578B2 (en) 2019-09-23 2024-07-23 Alibaba Group Holding Limited Lossless coding of video data
US12149695B2 (en) 2023-08-09 2024-11-19 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus

Also Published As

Publication number Publication date
KR100763181B1 (en) 2007-10-05
KR20060110714A (en) 2006-10-25

Similar Documents

Publication Publication Date Title
CN1917647A (en) Method and apparatus for adaptively selecting context model for entropy coding
JP7026249B2 (en) Methods, devices and computer programs for video decoding
CN1214647C (en) Method for encoding images, and image coder
US8121191B1 (en) AVC to SVC transcoder
CN1254113C (en) Image encoding device, image encoding method, image decoding device, image decoding method, and communication device
CN1605213A (en) Skip macroblock coding
CN113330749A (en) Improved residual coding for transform skip mode and block differential pulse code modulation
CN1535024A (en) Video encoding device, method and program and video decoding device, method and program
CN1240226C (en) Video transcoder with drift compensation
CN1585958A (en) Method and system for context-based adaptive binary arithmetic coding
CN1469632A (en) Video frequency coding/decoding method and equipment
CN1366775A (en) Transcoding method of compressed video
CN1615645A (en) Coding dynamic filters
CN1947426A (en) Method and apparatus for implementing motion scalability
CN1656817A (en) Context-adaptive vlc video transform coefficients encoding/decoding methods and apparatuses
CN1679342A (en) Intra coding video data methods and apparatuses
CN1489391A (en) Space predicting method and apparatus for video encoding
CN1302419A (en) reducing compressed video
CN1162004C (en) Decoding apparatus and method, coding apparatus and method, and image processing system and method
CN1225904C (en) Method and apparatus for storage of effective compression domain video processing and compensation of fast reverse motion
CN1240225C (en) Picture coding device and picture coding method
CN1650629A (en) Encoding device and method, decoding device and method, edition device and method, recording medium, and program
CN114930852A (en) Method and apparatus for video encoding and decoding
CN1866297A (en) Method and system for context-based adaptive binary arithmetic coding
CN1266947C (en) Moving picture compression/coding apparatus and motion vector detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication