US20120219060A1 - System and method for scalable encoding and decoding of multimedia data using multiple layers - Google Patents
- Publication number
- US20120219060A1 (application US 13/468,493)
- Authority
- US
- United States
- Prior art keywords
- enhancement
- base
- quantized coefficients
- prediction
- base layer
- Prior art date
- Legal status: Abandoned (the listed status is an assumption and is not a legal conclusion)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/31—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the temporal domain
- H04N19/36—Scalability techniques involving formatting the layers as a function of picture distortion after decoding, e.g. signal-to-noise [SNR] scalability
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. a block or a macroblock
- H04N19/18—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a set of transform coefficients
- H04N19/187—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a scalable video layer
- H04N19/51—Motion estimation or motion compensation
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the invention relates to scalable encoding and decoding of multimedia data that may comprise audio data, video data or both. More particularly, the invention relates to a system and method for scalable encoding and decoding of multimedia data using multiple layers.
- the International Telecommunication Union has promulgated the H.261, H.262, H.263 and H.264 standards for digital video encoding. These standards specify the syntax of encoded digital video data and how this data is to be decoded for presentation or playback. However, these standards permit various different techniques (e.g., algorithms or compression tools) to be used in a flexible manner for transforming the digital video data from an uncompressed format to a compressed or encoded format. Hence, many different digital video data encoders are currently available. These digital video encoders are capable of achieving varying degrees of compression at varying cost and quality levels.
- Scalable video coding generates multiple layers, for example a base layer and an enhancement layer, for the encoding of video data. These two layers are generally transmitted on different channels with different transmission characteristics resulting in different packet error rates.
- the base layer typically has a lower packet error rate when compared with the enhancement layer.
- the base layer generally contains the most valuable information and the enhancement layer generally offers refinements over the base layer.
- Most scalable video compression technologies exploit the fact that the human visual system is more forgiving of noise (due to compression) in high frequency regions of the image than the flatter, low frequency regions. Hence, the base layer predominantly contains low frequency information and the enhancement layer predominantly contains high frequency information. When network bandwidth falls short, there is a higher probability of receiving just the base layer of the coded video (no enhancement layer). In such situations, the reconstructed video is blurred and deblocking filters may even accentuate this effect.
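The low/high frequency split described above can be sketched in Python (the patent contains no code; the zig-zag scan order and the cutoff index are illustrative assumptions, not taken from the specification):

```python
# Hypothetical sketch: split a 4x4 block of DCT coefficients into a
# low-frequency base layer and a high-frequency enhancement layer.
# The zig-zag order and cutoff are illustrative, not from the patent.

# Zig-zag scan order for a 4x4 block as (row, col) pairs.
ZIGZAG_4x4 = [
    (0, 0), (0, 1), (1, 0), (2, 0),
    (1, 1), (0, 2), (0, 3), (1, 2),
    (2, 1), (3, 0), (3, 1), (2, 2),
    (1, 3), (2, 3), (3, 2), (3, 3),
]

def split_layers(coeffs, cutoff=6):
    """Return (base, enhancement) 4x4 blocks: the first `cutoff`
    zig-zag positions go to the base layer, the rest to enhancement."""
    base = [[0] * 4 for _ in range(4)]
    enh = [[0] * 4 for _ in range(4)]
    for k, (r, c) in enumerate(ZIGZAG_4x4):
        if k < cutoff:
            base[r][c] = coeffs[r][c]
        else:
            enh[r][c] = coeffs[r][c]
    return base, enh
```

Every coefficient lands in exactly one layer, so receiving only the base layer yields a blurred (low-frequency) reconstruction, consistent with the behaviour described above.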
- Decoders generally decode the base layer or the base layer and the enhancement layer.
- When decoding both the base layer and the enhancement layer, multiple layer decoders generally need greater computational complexity and memory than single layer decoders. Many mobile devices do not use multiple layer decoders because of these increased computational complexity and memory requirements.
- a method of using a base layer to predict an enhancement layer is disclosed.
- a block of multimedia data may be used to generate a base residual that includes a plurality of base quantized coefficients.
- the block of multimedia data may also be used to generate an enhancement residual that includes a plurality of enhancement quantized coefficients.
- a first value may be determined based on the plurality of base quantized coefficients and a second value may be determined based on the plurality of enhancement quantized coefficients.
- the enhancement layer may be determined by using at least one of the plurality of base quantized coefficients or the plurality of enhancement quantized coefficients.
- a method of decoding a multimedia bitstream may include receiving a multimedia bitstream having a base layer and an enhancement layer.
- the base layer may be decoded to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction.
- FIG. 1 is a block diagram of a system for encoding and decoding multimedia data
- FIG. 2 is a block diagram of an H.264 video data bitstream
- FIG. 3 is a block diagram of a multiple layer scalable encoder with interlayer prediction
- FIG. 4 is a flow chart of a Mode Decision Module (MDM), which may be part of the prediction modules of FIG. 3 ;
- FIG. 5 is a flow chart of a Transform+Entropy Coding Module (TECM), which may be part of the prediction modules of FIG. 3 ;
- FIG. 6 is a flow chart illustrating interlayer prediction on a macroblock basis or a block basis
- FIG. 7 shows six 4×4 blocks in the transform domain to illustrate interlayer prediction on a DCT coefficient-by-coefficient basis
- FIG. 8 illustrates a method of interlayer prediction on a DCT coefficient-by-coefficient basis
- FIG. 9 is a flow chart of a method of decoding a multimedia bitstream using intralayer prediction or interlayer prediction.
- FIG. 10 is a block diagram of a decoder with intralayer prediction and interlayer prediction.
- FIG. 1 is a block diagram of a system 100 for encoding and decoding multimedia (e.g., video, audio or both) data.
- System 100 may be configured to encode (e.g., compress) and decode (e.g., decompress) video data (e.g., pictures and video frames).
- System 100 may include a server 105 , a device 110 , and a communication channel 115 connecting server 105 to device 110 .
- System 100 may be used to illustrate the methods described below for encoding and decoding video data.
- System 100 may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof.
- One or more elements can be rearranged and/or combined, and other systems can be used in place of system 100 while still maintaining the spirit and scope of the invention. Additional elements may be added to system 100 or may be removed from system 100 while still maintaining the spirit and scope of the invention.
- Server 105 may include a processor 120 , a storage medium 125 , an encoder 130 , and an I/O device 135 (e.g., a transceiver).
- Processor 120 and/or encoder 130 may be configured to receive video data in the form of a series of video frames.
- Processor 120 and/or encoder 130 may be an Advanced RISC Machine (ARM), a controller, a digital signal processor (DSP), a microprocessor, or any other device capable of processing data.
- Processor 120 and/or encoder 130 may transmit the series of video frames to storage medium 125 for storage and/or may encode the series of video frames.
- Storage medium 125 may also store computer instructions that are used by processor 120 and/or encoder 130 to control the operations and functions of server 105 .
- Storage medium 125 may represent one or more devices for storing the video data and/or other machine readable mediums for storing information.
- A machine readable medium includes, but is not limited to, random access memory (RAM), flash memory, read-only memory (ROM), EPROM, EEPROM, registers, hard disk, removable disk, CD-ROM, DVD, wireless channels, and various other mediums capable of storing, containing or carrying instruction(s) and/or data.
- Encoder 130 using computer instructions received from storage medium 125 , may be configured to perform both parallel and serial processing (e.g., compression) of the series of video frames.
- the computer instructions may be implemented as described in the methods below.
- the encoded data may be sent to I/O device 135 for transmission to device 110 via communication channel 115 .
- Device 110 may include a processor 140 , a storage medium 145 , a decoder 150 , an I/O device 155 (e.g., a transceiver), and a display device or screen 160 .
- Device 110 may be a computer, a digital video recorder, a handheld device (e.g., a cell phone, Blackberry, etc.), a set top box, a television, and other devices capable of receiving, processing (e.g., decompressing) and/or displaying a series of video frames.
- I/O device 155 receives the encoded data and sends the encoded data to the storage medium 145 and/or to decoder 150 for decompression.
- Decoder 150 is configured to reproduce the series of video frames using the encoded data.
- the series of video frames can be stored in storage medium 145 .
- Decoder 150 using computer instructions retrieved from storage medium 145 , may be configured to perform both parallel and serial processing (e.g., decompression) of the encoded data to reproduce the series of video frames.
- the computer instructions may be implemented as described in the methods below.
- Processor 140 may be configured to receive the series of video frames from storage medium 145 and/or decoder 150 and to display the series of video frames on display device 160 .
- Storage medium 145 may also store computer instructions that are used by processor 140 and/or decoder 150 to control the operations and functions of device 110 .
- Communication channel 115 may be used to transmit the encoded data between server 105 and device 110 .
- Communication channel 115 may be a wired connection or network and/or a wireless connection or network.
- communication channel 115 can include the Internet, coaxial cables, fiber optic lines, satellite links, terrestrial links, wireless links, other media capable of propagating signals, and any combination thereof.
- FIG. 2 is a block diagram of an H.264 video data bitstream 200 .
- the bitstream 200 may be organized or partitioned into a number of access units 205 (e.g., access unit 1 , access unit 2 , access unit 3 , etc.).
- Each access unit 205 may include information corresponding to a coded video frame.
- Each access unit 205 may be organized or partitioned into a number of NAL units 210 .
- Each NAL unit 210 may include a NAL prefix 215 , a NAL header 220 , and a block of data 225 .
- NAL prefix 215 may be a series of bits (e.g., 00000001) indicating the beginning of the block of data 225 and NAL header 220 may include a NAL unit type 230 (e.g., an I, P or B frame).
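The start-code structure described above can be illustrated with a short parser sketch. The four-byte prefix 0x00000001 and the five-bit nal_unit_type field are standard H.264; the function name and the omission of three-byte start codes and truncation handling are simplifying assumptions:

```python
def find_nal_units(bitstream: bytes):
    """Yield (offset, nal_unit_type) for each NAL unit found after a
    four-byte 0x00000001 start code. Per H.264, nal_unit_type is the
    low 5 bits of the first byte following the start code. This sketch
    ignores 3-byte start codes and assumes a header byte always follows."""
    prefix = b"\x00\x00\x00\x01"
    i = bitstream.find(prefix)
    while i != -1:
        header = bitstream[i + 4]
        yield i, header & 0x1F
        i = bitstream.find(prefix, i + 4)
```

For example, a stream containing an SPS (type 7) followed by an IDR slice (type 5) yields those two types in order.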
- the block of data 225 may include a header 235 and data 240 .
- The block of data 225 may be organized or partitioned into a 16×16 macroblock of data, an entire frame of data or a portion of the video data (e.g., a 2×2 block or a 4×4 block).
- the terms “macroblock” and “block” may be used interchangeably.
- Header 235 may include a mode 245 , a reference picture list 250 and QP values 255 .
- Mode 245 may indicate to encoder 130 how to organize or partition the macroblocks, how to determine and transmit motion information and how to determine and transmit residual information.
- Data 240 may include motion information (e.g., a motion vector 285 ) and residual information (e.g., DC 260 and AC 265 residuals).
- For I frames, data 240 may include DC residuals 260 and AC residuals 265 .
- AC residuals 265 may include Coded Block Pattern (CBP) values 270 , number of trailing ones 275 and residual quantization coefficients 280 .
- No motion information may be needed for an I frame because it is the first frame.
- For frames other than I frames, data 240 may include motion vectors 285 , DC residuals 290 and AC residuals 295 .
- FIG. 3 is a block diagram of base and enhancement layer encoding modules 300 and 305 of multiple layer scalable encoder 130 .
- Multiple layer encoding introduces multiple temporal prediction loops. For example, two layer coding may introduce two temporal prediction loops.
- Video data may be shared between the two layers to allow for a certain bit assignment for the two layers and to reduce overhead. Interlayer prediction may be used at the enhancement layer to reduce total coding overhead.
- Base layer encoding module 300 may be used for the base layer video and enhancement layer encoding module 305 may be used for the enhancement layer video. In some embodiments, the base layer video may be the same or approximately the same as the enhancement layer video.
- Video data may be encoded prior to receipt by base and enhancement layer encoding modules 300 and 305 .
- Encoded video data may be provided at inputs 310 and 315 .
- The base layer encoding module 300 may include a transform (T_b) module 320 , a quantization (Q_b) module 325 , an inverse transform (T_b^−1) module 330 , and an inverse quantization (Q_b^−1) module 335 .
- The enhancement layer encoding module 305 may include a transform (T_e) module 340 , a quantization (Q_e) module 345 , an inverse transform (T_e^−1) module 350 , and an inverse quantization (Q_e^−1) module 355 .
- Quantization modules 325 , 335 , 345 and 355 may include one or more quantization parameters that may be used to determine the quality of the resulting image. Generally, the quantization parameters for the base layer encoding module 300 are larger than the quantization parameters for the enhancement layer encoding module 305 . A larger quantization parameter indicates a lower quality image.
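The effect of the quantization parameter can be shown with a toy quantizer. The doubling of step size for every increase of 6 in QP mirrors H.264 behaviour, but the constants here are illustrative and not bit-exact:

```python
def quant_step(qp):
    """Approximate H.264 behaviour: the quantizer step size doubles
    for every increase of 6 in QP (illustrative, not bit-exact)."""
    return 0.625 * 2 ** (qp / 6)

def quantize(coeff, qp):
    """Map a transform coefficient to an integer level."""
    return int(round(coeff / quant_step(qp)))

def dequantize(level, qp):
    """Reconstruct an approximate coefficient from its level."""
    return level * quant_step(qp)
```

Running the round trip at a small and a large QP shows why the larger (base layer) QP gives a lower quality image: the reconstruction error grows with the step size.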
- Base layer encoding module 300 may produce residual information 360 for the base layer and enhancement layer encoding module 305 may produce residual information 365 for the enhancement layer.
- Base and enhancement layer encoding modules 300 and 305 may also include prediction modules 370 and 375 , respectively. Prediction modules 370 and 375 may be combined into a single prediction module. Prediction modules 370 and 375 may be used to perform intralayer and interlayer encoding of the multimedia data.
- the decoded base layer may be used as a reference for the enhancement layer.
- A collocated base frame and a reference computed by motion-compensating one or more previous frames may be used for the enhancement layer.
- Interlayer prediction can be performed on a macroblock basis, a block basis (e.g., a 4×4 block basis), or a DCT coefficient basis.
- interlayer prediction or intralayer prediction can be used depending on various factors such as the rate-distortion cost.
- an enhancement layer macroblock may be predicted by using a collocated base layer macroblock.
- the prediction error may be encoded and then transmitted to decoder 150 .
- temporal prediction an enhancement layer macroblock may be predicted by using one or more macroblocks from one or more prior and/or subsequent frames as a reference and using (e.g., copying) macroblock mode information and motion vectors from the base layer.
- FIG. 4 is a flow chart of a Mode Decision Module (MDM) 400 , which may be part of prediction modules 370 and 375 of FIG. 3 .
- MDM 400 may include a motion estimation module 405 and a decision module 410 .
- MDM 400 may be implemented by processor 120 and/or encoder 130 .
- Motion estimation module 405 generates motion information (e.g., motion vectors) for the enhancement layer for the various modes. The mode may be determined by using information (e.g., motion vectors and residuals) from the base layer and the enhancement layer.
- Mode “a” may be a 16×16 macroblock (output MV_x and MV_y);
- mode “b” may be two 8×16 blocks or two 16×8 blocks (for each partition, output MV_x and MV_y);
- mode “c” may be four 8×8 blocks (for each partition, output the 8×8 sub-partition mode and, for each sub-partition, output MV_x and MV_y).
- Each macroblock and each block may have its own motion information.
- several modes allow a large amount of flexibility in bit assignment.
- the enhancement layer generates more accurate motion vectors when compared with the base layer because of the higher quality enhancement layer video.
- the base layer and the enhancement layer may both use the same motion information corresponding to the base layer. Residual information may be generated by using a predicted macroblock and subtracting it from a current macroblock.
- Decision module 410 may select a mode, which influences various factors such as bit cost of encoding motion information, coding efficiency, motion accuracy, overhead, performance, rate-distortion optimization, etc.
- One mode may produce better results for the base layer while another mode may produce better results for the enhancement layer. Therefore, some compromising may need to occur to achieve the “best mode” or “optimal mode” for both the base layer and the enhancement layer. No compromising may be needed if the same mode produces the best results for both the base layer and the enhancement layer.
- the best mode may be chosen based on, for example, rate distortion optimization because it represents the best tradeoff between motion accuracy and bit cost of encoding motion information.
- Decision module 410 may utilize TECM 500 (see FIG. 5 ) for optimization purposes.
- the mode may provide processor 120 and/or encoder 130 with a set of guidelines, functions, instructions, parameters, routines, or any combination thereof, to perform the encoding of the video data.
- The description below provides an example of three different modes, a, b and c. Assume the base layer has the best performance at mode a, and the enhancement layer has the best performance at mode b. If decision module 410 selects mode a, then ΔR_a,enh overhead is introduced at the enhancement layer and no overhead is introduced at the base layer. If decision module 410 selects mode b, then ΔR_b,base overhead is introduced at the base layer and no overhead is introduced at the enhancement layer. If decision module 410 selects mode c, then ΔR_c,base overhead is introduced at the base layer and ΔR_c,enh overhead is introduced at the enhancement layer. From these variables, the cost of overhead for each mode for each layer can be determined.
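The overhead bookkeeping above can be sketched as a simple cost minimization. Weighting base and enhancement overhead equally is an assumption for illustration; a real encoder would fold these terms into a fuller rate-distortion cost:

```python
def select_mode(overhead):
    """Pick the mode with the smallest combined overhead.
    `overhead` maps mode -> (delta_R_base, delta_R_enh); the equal
    weighting of the two layers is an illustrative assumption."""
    def cost(mode):
        d_base, d_enh = overhead[mode]
        return d_base + d_enh
    return min(overhead, key=cost)
```

With the example above, if mode b's base-layer overhead is smaller than mode a's enhancement-layer overhead and mode c's combined overhead, the decision module would settle on mode b as the compromise mode.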
- FIG. 5 is a flow chart of a Transform+Entropy Coding Module (TECM) 500 , which may be part of prediction modules 370 and 375 of FIG. 3 .
- TECM 500 may include a base layer encoding module 505 , a decoding module 510 , a checking module 515 , an interlayer prediction module 520 , and a temporal prediction module 525 .
- TECM 500 may be implemented by processor 120 and/or encoder 130 .
- TECM 500 uses the encoded base layer to predict the enhancement layer.
- Base layer encoding module 505 may be used to determine motion information (e.g., motion vectors) for the base layer.
- Decoding module 510 may be used to decode the encoded base layer prior to interlayer prediction.
- Checking module 515 may be used to determine the number of zero and/or non-zero coefficients in the transformed base layer residual. Depending on the coefficients, interlayer prediction ( 520 ) or temporal prediction ( 525 ) may be performed.
- FIG. 6 is a flow chart illustrating interlayer prediction on a macroblock basis or a block basis.
- Interlayer prediction may be performed on a macroblock basis or a block basis (i.e., any portion of the macroblock, e.g., a 4×4 block basis).
- On a 4×4 block basis or a 2×2 block basis, motion information and/or residual information from the macroblocks in the base layer may be used to determine whether to use interlayer prediction or temporal prediction.
- Base layer encoding module 505 may determine motion information and residual information for the base layer ( 605 ).
- Base layer encoding module 505 may also obtain a reference (e.g., a macroblock or frame) for the enhancement layer.
- Base layer encoding module 505 may determine the number of non-zero or zero coefficients of the residual information for the base layer ( 610 ). If the residual information from the base layer contains more information than the reference from the enhancement layer, then the residual information in the base layer is useful to the enhancement layer.
- Checking module 515 may determine whether the number of non-zero or zero coefficients meet a selected condition ( 615 ). For example, checking module 515 may examine the residual information of the base layer to determine if the number of non-zero coefficients is greater than, less than or equal to a threshold (T) or the number of zero coefficients is greater than, less than or equal to a threshold (T).
- If the condition is met, the residual information in the base layer may be useful to the enhancement layer and encoder 130 may use interlayer prediction to predict the macroblocks in the enhancement layer ( 625 ). If the residual information includes all or mostly zeros, then the residual information in the base layer may not be useful to the enhancement layer and encoder 130 may use temporal prediction to predict the macroblocks in the enhancement layer ( 620 ). Encoder 130 may transmit the encoded macroblocks or encoded blocks to decoder 150 ( 630 ).
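The check in steps 610 through 625 can be sketched as follows. The threshold value of 4 echoes the example value of T given later in the description, but treating a plain count of non-zero coefficients as the condition is an assumption:

```python
def choose_prediction(base_residual, threshold=4):
    """Return 'interlayer' if the base-layer residual block has more
    than `threshold` non-zero coefficients, else 'temporal'.
    Both the counting rule and the threshold are illustrative."""
    nonzero = sum(1 for row in base_residual for c in row if c != 0)
    return "interlayer" if nonzero > threshold else "temporal"
```

A nearly-empty base residual therefore falls back to temporal prediction, while a dense residual is judged useful enough for interlayer prediction.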
- FIG. 7 shows six 4×4 blocks in the transform domain to illustrate interlayer prediction on a DCT coefficient-by-coefficient basis.
- FIG. 8 illustrates a method 800 of interlayer prediction on a DCT coefficient-by-coefficient basis.
- the top row includes a motion compensated prediction (MCP) or reference block 700 , a residual block 705 , and a reconstructed block 710 for the base layer.
- the bottom row includes a MCP or reference block 715 , a residual block 720 , and a reconstructed block 725 for the enhancement layer.
- MCP and residual blocks 700 , 705 , 715 and 720 are shown to have been converted from the spatial (e.g., pixel) domain to the transform (e.g., frequency) domain ( 805 ).
- MCP block 700 may be generated by using motion information in the base layer.
- Reconstructed block 710 may be formed by using coefficients from MCP and residual blocks 700 and 705 .
- Reconstructed block 725 may be formed by using (e.g., copying) coefficients from reconstructed block 710 .
- the interlayer prediction may be performed on the non-zero coefficients in residual block 705 for the base layer.
- ⁇ circumflex over (X) ⁇ b,t represents a coefficient in MCP block 700 and E t+1 represents an encoded non-zero coefficient in residual block 705 .
- the reconstructed coefficient at the same position for the enhancement layer may be a copy of the reconstructed coefficient from the base layer.
- the coefficient may not be useful to the enhancement layer and temporal prediction module 525 may perform temporal prediction to generate the reconstructed block 725 by using MCP block 715 and residual block 720 . If E t+1 ⁇ 0 or approximately 0, then the coefficient may be useful to the enhancement layer and interlayer prediction module 520 may perform interlayer prediction using the coefficients. Hence, the reconstructed coefficients for the enhancement layer can be copied from the base layer. Each coefficient may also be compared to a threshold to determine whether to use interlayer prediction or temporal prediction.
- the coefficients for the enhancement layer may be sent from encoder 130 to decoder 150 .
- CBP Coded Block Pattern
- interlayer prediction module 520 may assign all coefficients in residual macroblock 705 to zero ( 820 ) and may transmit residual macroblock 720 to decoder 150 ( 825 ).
- T may be 4 (or approximately 4) where the sum of all non-zero coefficients may be determined by a linear sum or a weighed sum of the residual coefficients based on the location of the residual coefficients in the macroblock 705 .
- interlayer prediction module 520 may assign all residual coefficients of the base layer (C b (i,j)) to zero ( 840 ) and may transmit all residual coefficients of the enhancement layer (C e (i,j)) to decoder 150 ( 845 ).
- FIG. 9 is a flow chart of a method 900 of decoding a multimedia bitstream using intralayer prediction or interlayer prediction.
- Processor 140 may receive a multimedia bitstream having a base layer and an enhancement layer ( 905 ).
- Decoder 150 may decode the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction ( 910 ).
- the base layer may include a plurality of base layer coefficients.
- decoder 150 may determine whether the plurality of base layer coefficients include at least one non-zero coefficient.
- Decoder 150 may decode the enhancement layer using intralayer prediction if all of the plurality of base layer coefficients have a zero value and may decode the enhancement layer using interlayer prediction if at least one of the plurality of base layer coefficients has a non-zero value.
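The decoder-side check in the preceding bullets reduces to an all-zero test on the decoded base layer coefficients. A minimal sketch, assuming a flat list of quantized coefficients (the function name is illustrative, not from the patent):

```python
def enhancement_decode_mode(base_coeffs):
    """Decision at 910: intralayer prediction when every decoded base layer
    coefficient is zero, interlayer prediction when at least one coefficient
    is non-zero."""
    return "interlayer" if any(c != 0 for c in base_coeffs) else "intralayer"

print(enhancement_decode_mode([0, 0, 0, 0]))   # intralayer
print(enhancement_decode_mode([0, 3, 0, -1]))  # interlayer
```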
- FIG. 10 is a block diagram of a decoder 1000 with intralayer prediction and interlayer prediction.
- Decoder 1000 may be part of processor 140 and/or decoder 150 and may be used to implement the method of FIG. 9 .
- Decoder 1000 may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof.
- Decoder 1000 may include a decision module 1005 , an intralayer prediction module 1010 and an interlayer prediction module 1015 .
- Decision module 1005 may receive a multimedia bitstream having a base layer and an enhancement layer and may decode the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction.
- Intralayer prediction module 1010 may be used to decode the enhancement layer using intralayer prediction.
- Interlayer prediction module 1015 may be used to decode the enhancement layer using interlayer prediction.
- an apparatus for processing multimedia data associated with multiple layers may include means for determining a base layer residual.
- the means for determining a base layer residual may be processor 120 , encoder 130 , base layer encoding module 300 , enhancement layer encoding module 305 , prediction modules 370 and 375 , motion estimation module 405 , decision module 410 and/or base layer encoding module 505 .
- the apparatus may include means for performing interlayer prediction to generate an enhancement layer residual if at least one of a number of non-zero coefficients of the base layer residual or a number of zero coefficients of the base layer residual meets a first selected condition.
- the means for performing interlayer prediction may be processor 120 , encoder 130 , base layer encoding module 300 , enhancement layer encoding module 305 , prediction modules 370 and 375 , base layer encoding module 505 and/or interlayer prediction module 520 .
- the apparatus may include means for performing temporal prediction to generate the enhancement layer residual if at least one of a number of non-zero coefficients of the base layer residual or a number of zero coefficients of the base layer residual meets a second selected condition.
- the means for performing temporal prediction may be processor 120 , encoder 130 , base layer encoding module 300 , enhancement layer encoding module 305 , prediction modules 370 and 375 , base layer encoding module 505 and/or temporal prediction module 525 .
- an apparatus for decoding a multimedia bitstream may include means for receiving a multimedia bitstream having a base layer and an enhancement layer.
- the means for receiving a multimedia bitstream may be processor 140 , decoder 150 and/or decision module 1005 .
- the apparatus may include means for decoding the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction.
- the means for decoding may be processor 140 , decoder 150 , decision module 1005 , intralayer prediction module 1010 and/or interlayer prediction module 1015 .
- The modules and methods described herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA).
- a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- a software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
- the storage medium may be integral to the processor.
- the processor and the storage medium may reside in an Application Specific Integrated Circuit (ASIC).
- the ASIC may reside in a wireless modem.
- the processor and the storage medium may reside as discrete components in the wireless modem.
Abstract
A method of using a base layer to predict an enhancement layer is disclosed. The method may include using a block of multimedia data to generate a base residual including base quantized coefficients, using the block of multimedia data to generate an enhancement residual including enhancement quantized coefficients, determining a first value based on the base quantized coefficients, determining a second value based on the enhancement quantized coefficients, and determining the enhancement layer using at least one of the base quantized coefficients or the enhancement quantized coefficients. A method of decoding a multimedia bitstream may include receiving a multimedia bitstream having a base layer and an enhancement layer and decoding the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction.
Description
- This application is a continuation application of U.S. patent application Ser. No. 11/416,851, “SYSTEM AND METHOD FOR SCALABLE ENCODING AND DECODING OF MULTIMEDIA DATA USING MULTIPLE LAYERS,” filed May 2, 2006, the contents of which are hereby incorporated by reference in their entirety, which claims priority to Provisional Application No. 60/789,271 entitled “DATA PROCESSING WITH SCALABILITY,” filed Apr. 4, 2006, Provisional Application No. 60/677,607 entitled “BASE LAYER VIDEO QUALITY COMPARISON,” filed May 3, 2005, Provisional Application No. 60/677,609 entitled “INTRODUCING NEW MB MODES,” filed May 3, 2005, Provisional Application No. 60/677,610 entitled “SHARING INFORMATION IN TWO LAYER CODING,” filed May 3, 2005, and Provisional Application No. 60/677,611 entitled “INTERLAYER PREDICTION FOR INTER MBS IN SCALABLE VIDEO CODING,” filed May 3, 2005, and all assigned to the assignee hereof and hereby expressly incorporated by reference herein.
- 1. Field
- The invention relates to scalable encoding and decoding of multimedia data that may comprise audio data, video data or both. More particularly, the invention relates to a system and method for scalable encoding and decoding of multimedia data using multiple layers.
- 2. Background
- The International Telecommunication Union (ITU) has promulgated the H.261, H.262, H.263 and H.264 standards for digital video encoding. These standards specify the syntax of encoded digital video data and how this data is to be decoded for presentation or playback. However, these standards permit various different techniques (e.g., algorithms or compression tools) to be used in a flexible manner for transforming the digital video data from an uncompressed format to a compressed or encoded format. Hence, many different digital video data encoders are currently available. These digital video encoders are capable of achieving varying degrees of compression at varying cost and quality levels.
- Scalable video coding generates multiple layers, for example a base layer and an enhancement layer, for the encoding of video data. These two layers are generally transmitted on different channels with different transmission characteristics resulting in different packet error rates. The base layer typically has a lower packet error rate when compared with the enhancement layer. The base layer generally contains the most valuable information and the enhancement layer generally offers refinements over the base layer. Most scalable video compression technologies exploit the fact that the human visual system is more forgiving of noise (due to compression) in high frequency regions of the image than the flatter, low frequency regions. Hence, the base layer predominantly contains low frequency information and the enhancement layer predominantly contains high frequency information. When network bandwidth falls short, there is a higher probability of receiving just the base layer of the coded video (no enhancement layer). In such situations, the reconstructed video is blurred and deblocking filters may even accentuate this effect.
- Decoders generally decode the base layer or the base layer and the enhancement layer. When decoding the base layer and the enhancement layer, multiple layer decoders generally need increased computational complexity and memory when compared with single layer decoders. Many mobile devices do not utilize multiple layer decoders due to the increased computational complexity and memory requirements.
- A method of using a base layer to predict an enhancement layer is disclosed. A block of multimedia data may be used to generate a base residual that includes a plurality of base quantized coefficients. The block of multimedia data may also be used to generate an enhancement residual that includes a plurality of enhancement quantized coefficients. A first value may be determined based on the plurality of base quantized coefficients and a second value may be determined based on the plurality of enhancement quantized coefficients. The enhancement layer may be determined by using at least one of the plurality of base quantized coefficients or the plurality of enhancement quantized coefficients.
- A method of decoding a multimedia bitstream may include receiving a multimedia bitstream having a base layer and an enhancement layer. The base layer may be decoded to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction.
- The features, objects, and advantages of the invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, wherein:
- FIG. 1 is a block diagram of a system for encoding and decoding multimedia data;
- FIG. 2 is a block diagram of a H.264 video data bitstream;
- FIG. 3 is a block diagram of a multiple layer scalable encoder with interlayer prediction;
- FIG. 4 is a flow chart of a Mode Decision Module (MDM), which may be part of the prediction modules of FIG. 3;
- FIG. 5 is a flow chart of a Transform+Entropy Coding Module (TECM), which may be part of the prediction modules of FIG. 3;
- FIG. 6 is a flow chart illustrating interlayer prediction on a macroblock basis or a block basis;
- FIG. 7 shows six 4×4 blocks in the transform domain to illustrate interlayer prediction on a dct coefficient-by-coefficient basis;
- FIG. 8 illustrates a method of interlayer prediction on a dct coefficient-by-coefficient basis;
- FIG. 9 is a flow chart of a method of decoding a multimedia bitstream using intralayer prediction or interlayer prediction; and
- FIG. 10 is a block diagram of a decoder with intralayer prediction and interlayer prediction.
- Systems and methods that implement the embodiments of the various features of the invention will now be described with reference to the drawings. The drawings and the associated descriptions are provided to illustrate some embodiments of the invention and not to limit the scope of the invention. Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements. In addition, the first digit of each reference number indicates the figure in which the element first appears.
-
FIG. 1 is a block diagram of a system 100 for encoding and decoding multimedia (e.g., video, audio or both) data. System 100 may be configured to encode (e.g., compress) and decode (e.g., decompress) video data (e.g., pictures and video frames). System 100 may include a server 105, a device 110, and a communication channel 115 connecting server 105 to device 110. System 100 may be used to illustrate the methods described below for encoding and decoding video data. System 100 may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. One or more elements can be rearranged and/or combined, and other systems can be used in place of system 100 while still maintaining the spirit and scope of the invention. Additional elements may be added to system 100 or may be removed from system 100 while still maintaining the spirit and scope of the invention. -
Server 105 may include a processor 120, a storage medium 125, an encoder 130, and an I/O device 135 (e.g., a transceiver). Processor 120 and/or encoder 130 may be configured to receive video data in the form of a series of video frames. Processor 120 and/or encoder 130 may be an Advanced RISC Machine (ARM), a controller, a digital signal processor (DSP), a microprocessor, or any other device capable of processing data. Processor 120 and/or encoder 130 may transmit the series of video frames to storage medium 125 for storage and/or may encode the series of video frames. Storage medium 125 may also store computer instructions that are used by processor 120 and/or encoder 130 to control the operations and functions of server 105. Storage medium 125 may represent one or more devices for storing the video data and/or other machine readable mediums for storing information. The term “machine readable medium” includes, but is not limited to, random access memory (RAM), flash memory, read-only memory (ROM), EPROM, EEPROM, registers, hard disk, removable disk, CD-ROM, DVD, wireless channels, and various other mediums capable of storing, containing or carrying instruction(s) and/or data. -
Encoder 130, using computer instructions received from storage medium 125, may be configured to perform both parallel and serial processing (e.g., compression) of the series of video frames. The computer instructions may be implemented as described in the methods below. Once the series of frames is encoded, the encoded data may be sent to I/O device 135 for transmission to device 110 via communication channel 115. -
Device 110 may include a processor 140, a storage medium 145, a decoder 150, an I/O device 155 (e.g., a transceiver), and a display device or screen 160. Device 110 may be a computer, a digital video recorder, a handheld device (e.g., a cell phone, Blackberry, etc.), a set top box, a television, or any other device capable of receiving, processing (e.g., decompressing) and/or displaying a series of video frames. I/O device 155 receives the encoded data and sends the encoded data to storage medium 145 and/or to decoder 150 for decompression. Decoder 150 is configured to reproduce the series of video frames using the encoded data. Once decoded, the series of video frames can be stored in storage medium 145. Decoder 150, using computer instructions retrieved from storage medium 145, may be configured to perform both parallel and serial processing (e.g., decompression) of the encoded data to reproduce the series of video frames. The computer instructions may be implemented as described in the methods below. Processor 140 may be configured to receive the series of video frames from storage medium 145 and/or decoder 150 and to display the series of video frames on display device 160. Storage medium 145 may also store computer instructions that are used by processor 140 and/or decoder 150 to control the operations and functions of device 110. -
Communication channel 115 may be used to transmit the encoded data between server 105 and device 110. Communication channel 115 may be a wired connection or network and/or a wireless connection or network. For example, communication channel 115 can include the Internet, coaxial cables, fiber optic lines, satellite links, terrestrial links, wireless links, other media capable of propagating signals, and any combination thereof. -
FIG. 2 is a block diagram of a H.264 video data bitstream 200. The bitstream 200 may be organized or partitioned into a number of access units 205 (e.g., access unit 1, access unit 2, access unit 3, etc.). Each access unit 205 may include information corresponding to a coded video frame. Each access unit 205 may be organized or partitioned into a number of NAL units 210. Each NAL unit 210 may include a NAL prefix 215, a NAL header 220, and a block of data 225. NAL prefix 215 may be a series of bits (e.g., 00000001) indicating the beginning of the block of data 225, and NAL header 220 may include a NAL unit type 230 (e.g., an I, P or B frame). The block of data 225 may include a header 235 and data 240. The block of data 225 may be organized or partitioned into a 16×16 macroblock of data, an entire frame of data or a portion of the video data (e.g., a 2×2 block or a 4×4 block). The terms “macroblock” and “block” may be used interchangeably. -
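The NAL-unit layout above can be illustrated with a short sketch. This is a simplified illustration and not a conformant H.264 parser: it only scans an Annex B style bytestream for the 4-byte 0x00000001 prefix described in the text and reads the nal_unit_type field from the header byte, ignoring the 3-byte short prefix and emulation-prevention bytes that a real parser must handle.

```python
def split_nal_units(stream: bytes):
    """Split an Annex B style bytestream on the 4-byte NAL prefix 0x00000001.

    Returns a list of (nal_unit_type, payload) tuples; the low 5 bits of the
    first byte after the prefix carry nal_unit_type in H.264.
    """
    prefix = b"\x00\x00\x00\x01"
    starts = []
    i = stream.find(prefix)
    while i != -1:
        starts.append(i)
        i = stream.find(prefix, i + 1)
    units = []
    for n, s in enumerate(starts):
        end = starts[n + 1] if n + 1 < len(starts) else len(stream)
        body = stream[s + len(prefix):end]
        units.append((body[0] & 0x1F, body[1:]))  # low 5 bits: nal_unit_type
    return units

# Two toy NAL units: header byte 0x65 (type 5, IDR slice) and 0x41 (type 1).
demo = b"\x00\x00\x00\x01\x65\xAA\xBB\x00\x00\x00\x01\x41\xCC"
print([t for t, _ in split_nal_units(demo)])  # [5, 1]
```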
Header 235 may include a mode 245, a reference picture list 250 and QP values 255. Mode 245 may indicate to encoder 130 how to organize or partition the macroblocks, how to determine and transmit motion information and how to determine and transmit residual information. Data 240 may include motion information (e.g., a motion vector 285) and residual information (e.g., DC 260 and AC 265 residuals). For I frames, data 240 may include DC residuals 260 and AC residuals 265. AC residuals 265 may include Coded Block Pattern (CBP) values 270, number of trailing ones 275 and residual quantization coefficients 280. No motion information may be needed for an I frame because it is the first frame. For P and B frames, data 240 may include motion vectors 285, DC residuals 290 and AC residuals 295. -
FIG. 3 is a block diagram of base and enhancement layer encoding modules 300 and 305, which may be part of a multiple layer scalable encoder 130. Multiple layer encoding introduces multiple temporal prediction loops. For example, two layer coding may introduce two temporal prediction loops. Video data may be shared between the two layers to allow for a certain bit assignment for the two layers and to reduce overhead. Interlayer prediction may be used at the enhancement layer to reduce total coding overhead. Base layer encoding module 300 may be used for the base layer video and enhancement layer encoding module 305 may be used for the enhancement layer video. In some embodiments, the base layer video may be the same or approximately the same as the enhancement layer video. Video data may be encoded prior to receipt by base and enhancement layer encoding modules 300 and 305. - Encoded video data may be provided at
inputs of base and enhancement layer encoding modules 300 and 305, respectively. The base layer encoding module 300 may include a transform (Tb) module 320, a quantization (Qb) module 325, an inverse transform (Tb−1) module 330, and an inverse quantization (Qb−1) module 335. The enhancement layer encoding module 305 may include a transform (Te) module 340, a quantization (Qe) module 345, an inverse transform (Te−1) module 350, and an inverse quantization (Qe−1) module 355. Quantization modules 325 and 345 may set the quantization parameters for the two layers. In some embodiments, the quantization parameters for the base layer encoding module 300 are larger than the quantization parameters for the enhancement layer encoding module 305. A larger quantization parameter indicates a lower quality image. Base layer encoding module 300 may produce residual information 360 for the base layer and enhancement layer encoding module 305 may produce residual information 365 for the enhancement layer. Base and enhancement layer encoding modules 300 and 305 may include prediction modules 370 and 375, respectively. Prediction modules 370 and 375 may include the Mode Decision Module (MDM) of FIG. 4 and the Transform+Entropy Coding Module (TECM) of FIG. 5, described below. - For I frames, the decoded base layer may be used as a reference for the enhancement layer. For P and B frames, a collocated base frame and a reference, computed by motion compensating one or more previous frames, may be used for the enhancement layer. Interlayer prediction can be performed on a macroblock basis, a block basis (e.g., a 4×4 block basis), or a dct coefficient basis.
- For each macroblock in a P or B frame, interlayer prediction or intralayer prediction (e.g., temporal prediction) can be used depending on various factors such as the rate-distortion cost. If interlayer prediction is used, an enhancement layer macroblock may be predicted by using a collocated base layer macroblock. In some embodiments, the prediction error may be encoded and then transmitted to
decoder 150. If temporal prediction is used, an enhancement layer macroblock may be predicted by using one or more macroblocks from one or more prior and/or subsequent frames as a reference and using (e.g., copying) macroblock mode information and motion vectors from the base layer. -
FIG. 4 is a flow chart of a Mode Decision Module (MDM) 400, which may be part of prediction modules 370 and 375 of FIG. 3. MDM 400 may include a motion estimation module 405 and a decision module 410. MDM 400 may be implemented by processor 120 and/or encoder 130. Motion estimation module 405 generates motion information (e.g., motion vectors) for the enhancement layer for the various modes. The mode may be determined by using information (e.g., motion vectors and residuals) from the base layer and the enhancement layer. Several modes exist in H.264 motion estimation. For example, mode “a” may be a 16×16 macroblock (output MVx and MVy), mode “b” may be two 8×16 blocks or two 16×8 blocks (for each partition, output MVx and MVy), and mode “c” may be four 8×8 blocks (for each partition, output the 8×8 sub-partition mode, and for each sub-partition, output MVx and MVy). Each macroblock and each block may have its own motion information. For two layer coding, several modes allow a large amount of flexibility in bit assignment. In some modes, the enhancement layer generates more accurate motion vectors when compared with the base layer because of the higher quality enhancement layer video. In two layer coding, the base layer and the enhancement layer may both use the same motion information corresponding to the base layer. Residual information may be generated by subtracting a predicted macroblock from the current macroblock. -
Encoder 130 may select a skip mode, which is an intralayer prediction mode. In the skip mode, encoder 130 does not transmit any motion or residual information about the current macroblock or block to decoder 150. Motion information for the current block may be derived from one or more neighboring blocks. In one mode, encoder 130 may transmit motion information and may not transmit residual information. This may be accomplished by setting coded_block_pattern to 0. In the H.264 standard, when coded_block_pattern is set to 0, all transform coefficients are 0. When coded_block_pattern=0, decoder 150 is notified that no residual information is being sent by encoder 130. To encode the coded_block_pattern value, a code number as shown in Table I may be assigned to the coded_block_pattern. The code number may be coded using an Exp-Golomb code. Decoder 150 may receive a code number as shown in Table I from encoder 130. -
TABLE I

  Code Number | Coded_Block_Pattern | Bit String
  ------------|---------------------|-----------
  0           | 0                   | 1
  1           | 16                  | 010
  2           | 1                   | 011
  3           | 2                   | 00100
  4           | 4                   | 00101
  5           | 8                   | 00110
  ...         | ...                 | ...
-
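The bit strings in Table I are unsigned Exp-Golomb codewords for the code numbers in the first column. A minimal sketch of that encoding rule (the function is illustrative, not taken from the patent):

```python
def exp_golomb(code_num: int) -> str:
    """Unsigned Exp-Golomb: write (code_num + 1) in binary and prefix it
    with (length - 1) zero bits."""
    bits = bin(code_num + 1)[2:]
    return "0" * (len(bits) - 1) + bits

# Reproduces the Bit String column of Table I for code numbers 0..5.
for n in range(6):
    print(n, exp_golomb(n))
```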
Decision module 410 may select a mode based on various factors such as bit cost of encoding motion information, coding efficiency, motion accuracy, overhead, performance, rate-distortion optimization, etc. One mode may produce better results for the base layer while another mode may produce better results for the enhancement layer. Therefore, some compromising may need to occur to achieve the “best mode” or “optimal mode” for both the base layer and the enhancement layer. No compromising may be needed if the same mode produces the best results for both the base layer and the enhancement layer. The best mode may be chosen based on, for example, rate distortion optimization because it represents the best tradeoff between motion accuracy and bit cost of encoding motion information. Decision module 410 may utilize TECM 500 (see FIG. 5) for optimization purposes. The mode may provide processor 120 and/or encoder 130 with a set of guidelines, functions, instructions, parameters, routines, or any combination thereof, to perform the encoding of the video data. -
decision module 410 selects mode a, then ΔRa— enh overhead is introduced at the enhancement layer and no overhead is introduced at the base layer. Ifdecision module 410 selects mode b, then ΔRb— base overhead is introduced at the base layer and no overhead is introduced at the enhancement layer. Ifdecision module 410 selects mode c, then ΔRc— base overhead is introduced at the base layer and ΔRc— enh overhead is introduced at the enhancement layer. From these variables, the cost of overhead for each mode for each layer can be determined. - The total cost for both layers can be determined as follows. Criteria 1: If the total cost is defined as C=ΔRx
— base, where x can be a, b or c, then the base layer has the highest coding efficiency and the results of the enhancement layer are immaterial. Criteria 2: If the total cost is defined as C=ΔRx— enh, where x can be a, b or c, then the enhancement layer has the highest coding efficiency and the results of the base layer are immaterial. Criteria 3: If the total cost is defined as C=ΔRx— base/2+ΔRx— enh/2, where x can be a, b or c, then both the base layer and the enhancement layer are treated equally or similarly. Criteria 4: If the total overhead for the entire base layer frame should be no more than 5%, then the defined requirement on a macroblock basis can be determined. For example, when a macroblock j at the base layer is encoded, the upper bound of the overhead allowed can be calculated as upper bound=(Bj-1−Ej-1+bj)*5%−Ej-1, where Bj-1 is the total number of bits used to encode pervious j-1 macroblocks, Ej-1 is the overhead bits in Bj-1, and bj is the used bits when encoding macroblock j at its best mode at the base layer. After encoding macroblock j, Bj and Ej can be updated for the following macroblock. -
FIG. 5 is a flow chart of a Transform+Entropy Coding Module (TECM) 500, which may be part of prediction modules 370 and 375 of FIG. 3. TECM 500 may include a base layer encoding module 505, a decoding module 510, a checking module 515, an interlayer prediction module 520, and a temporal prediction module 525. TECM 500 may be implemented by processor 120 and/or encoder 130. TECM 500 uses the encoded base layer to predict the enhancement layer. Base layer encoding module 505 may be used to determine motion information (e.g., motion vectors) for the base layer. Decoding module 510 may be used to decode the encoded base layer prior to interlayer prediction. Checking module 515 may be used to determine the number of zero and/or non-zero coefficients in the transformed base layer residual. Depending on the coefficients, interlayer prediction (520) or temporal prediction (525) may be selected to predict the enhancement layer. -
FIG. 6 is a flow chart illustrating interlayer prediction on a macroblock basis or a block basis. Interlayer prediction may be performed on a macroblock basis or a block basis (i.e., any portion of the macroblock, e.g., a 4×4 block basis). For interlayer prediction on a 4×4 block basis or a 2×2 block basis, motion information and/or residual information from the macroblocks in the base layer may be used to determine whether to use interlayer prediction or temporal prediction. Base layer encoding module 505 may determine motion information and residual information for the base layer (605). Base layer encoding module 505 may also obtain a reference (e.g., a macroblock or frame) for the enhancement layer. Base layer encoding module 505 may determine the number of non-zero or zero coefficients of the residual information for the base layer (610). If the residual information from the base layer contains more information than the reference from the enhancement layer, then the residual information in the base layer is useful to the enhancement layer. Checking module 515 may determine whether the number of non-zero or zero coefficients meets a selected condition (615). For example, checking module 515 may examine the residual information of the base layer to determine if the number of non-zero coefficients is greater than, less than or equal to a threshold (T), or if the number of zero coefficients is greater than, less than or equal to a threshold (T). If the residual information includes all non-zero coefficients or some non-zero coefficients, then the residual information in the base layer may be useful to the enhancement layer and encoder 130 may use interlayer prediction to predict the macroblocks in the enhancement layer (625).
If the residual information includes all zeros or some zeros, then the residual information in the base layer may not be useful to the enhancement layer and encoder 130 may use temporal prediction to predict the macroblocks in the enhancement layer (620). Encoder 130 may transmit the encoded macroblocks or encoded blocks to decoder 150 (630). -
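The block-level decision of FIG. 6 can be sketched as a count-and-threshold check. The comparison direction and the threshold value are illustrative choices consistent with the text, not values fixed by the patent:

```python
def choose_prediction(base_residual, threshold=4):
    """Steps 610-625 on one block: count non-zero quantized coefficients in
    the base layer residual; use interlayer prediction when the count meets
    the condition (625), temporal prediction when the residual is mostly
    zeros (620)."""
    nonzero = sum(1 for c in base_residual if c != 0)
    return "interlayer" if nonzero >= threshold else "temporal"

print(choose_prediction([3, 0, -1, 2, 0, 0, 1, 0]))  # interlayer
print(choose_prediction([0] * 16))                    # temporal
```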
FIG. 7 shows six 4×4 blocks in the transform domain to illustrate interlayer prediction on a dct coefficient-by-coefficient basis, and FIG. 8 illustrates a method 800 of interlayer prediction on a dct coefficient-by-coefficient basis. The top row includes a motion compensated prediction (MCP) or reference block 700, a residual block 705, and a reconstructed block 710 for the base layer. The bottom row includes an MCP or reference block 715, a residual block 720, and a reconstructed block 725 for the enhancement layer. MCP and residual blocks 700, 705, 715 and 720 are shown to have been converted from the spatial (e.g., pixel) domain to the transform (e.g., frequency) domain (805). MCP block 700 may be generated by using motion information in the base layer. Reconstructed block 710 may be formed by using coefficients from MCP and residual blocks 700 and 705. Reconstructed block 725 may be formed by using (e.g., copying) coefficients from reconstructed block 710. -
residual block 705 for the base layer. In FIG. 7, X̂b,t represents a coefficient in MCP block 700 and Et+1 represents an encoded non-zero coefficient in residual block 705. The reconstructed coefficient for reconstructed block 710 may be represented by X̂b,t+1 = X̂b,t + Et+1 and may be used for interlayer prediction. The reconstructed coefficient at the same position for the enhancement layer may be a copy of the reconstructed coefficient from the base layer. If Et+1 = 0 or approximately 0, then the coefficient may not be useful to the enhancement layer, and temporal prediction module 525 may perform temporal prediction to generate the reconstructed block 725 by using MCP block 715 and residual block 720. If Et+1 ≠ 0 and is not approximately 0, then the coefficient may be useful to the enhancement layer, and interlayer prediction module 520 may perform interlayer prediction using the coefficients. Hence, the reconstructed coefficients for the enhancement layer can be copied from the base layer. Each coefficient may also be compared to a threshold to determine whether to use interlayer prediction or temporal prediction. The coefficients for the enhancement layer may be sent from encoder 130 to decoder 150. - The term "Coded Block Pattern (CBP)" refers to the sum of all non-zero coefficients in a macroblock. Using the residual coefficients in
residual macroblock 705, interlayer prediction module 520 may determine a CBP for the base layer (CBPb) (810). Using the residual coefficients in residual macroblock 720, interlayer prediction module 520 may determine a CBP for the enhancement layer (CBPe) (815). - If CBPb = 0 or CBPb < T (threshold), then interlayer prediction module 520 may assign all coefficients in residual macroblock 705 to zero (820) and may transmit residual macroblock 720 to decoder 150 (825). In some embodiments, T may be 4 (or approximately 4), where the sum of all non-zero coefficients may be determined by a linear sum or a weighted sum of the residual coefficients based on the location of the residual coefficients in macroblock 705. - If CBPb + CBPe ≠ 0, then interlayer prediction module 520 may determine minimum quantized coefficients using the residual coefficients of the base layer and the enhancement layer (830). For example, the minimum quantized coefficients may be determined using the equation MQC(i,j) = Cb(i,j) − min[Cb(i,j), Ce(i,j)], where Ce may be the residual coefficients of the enhancement layer and Cb may be the residual coefficients of the base layer. Interlayer prediction module 520 may transmit the MQC(i,j) to decoder 150 (835). - If the sign of Ce(i,j) ≠ the sign of Cb(i,j), then interlayer prediction module 520 may assign all residual coefficients of the base layer (Cb(i,j)) to zero (840) and may transmit all residual coefficients of the enhancement layer (Ce(i,j)) to decoder 150 (845). -
FIG. 9 is a flow chart of a method 900 of decoding a multimedia bitstream using intralayer prediction or interlayer prediction. Processor 140 may receive a multimedia bitstream having a base layer and an enhancement layer (905). Decoder 150 may decode the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction (910). The base layer may include a plurality of base layer coefficients. In some embodiments, to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction, decoder 150 may determine whether the plurality of base layer coefficients include at least one non-zero coefficient. Decoder 150 may decode the base layer using intralayer prediction if all of the plurality of base layer coefficients have a zero value and may decode the base layer using interlayer prediction if at least one of the plurality of base layer coefficients has a non-zero value. -
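The decoder-side check of step 910 reduces to a single test on the base layer coefficients. A minimal sketch (function name illustrative, not from the patent):

```python
def choose_decoding_mode(base_layer_coefficients):
    """Decode-side check of FIG. 9 (910): interlayer prediction when any
    base layer coefficient is non-zero, intralayer prediction when all
    coefficients are zero."""
    if any(c != 0 for c in base_layer_coefficients):
        return "interlayer"
    return "intralayer"

print(choose_decoding_mode([0, 0, 0, 0]))   # intralayer
print(choose_decoding_mode([0, 7, 0, -1]))  # interlayer
```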
FIG. 10 is a block diagram of a decoder 1000 with intralayer prediction and interlayer prediction. Decoder 1000 may be part of processor 140 and/or decoder 150 and may be used to implement the method of FIG. 9. Decoder 1000 may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. Decoder 1000 may include a decision module 1005, an intralayer prediction module 1010, and an interlayer prediction module 1015. Decision module 1005 may receive a multimedia bitstream having a base layer and an enhancement layer and may decode the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction. Intralayer prediction module 1010 may be used to decode the enhancement layer using intralayer prediction. Interlayer prediction module 1015 may be used to decode the enhancement layer using interlayer prediction. - In some embodiments of the invention, an apparatus for processing multimedia data associated with multiple layers is disclosed. The apparatus may include means for determining a base layer residual. The means for determining a base layer residual may be
processor 120, encoder 130, base layer encoding module 300, enhancement layer encoding module 305, prediction modules, motion estimation module 405, decision module 410 and/or base layer encoding module 505. The apparatus may include means for performing interlayer prediction to generate an enhancement layer residual if at least one of a number of non-zero coefficients of the base layer residual or a number of zero coefficients of the base layer residual meets a first selected condition. The means for performing interlayer prediction may be processor 120, encoder 130, base layer encoding module 300, enhancement layer encoding module 305, prediction modules, base layer encoding module 505 and/or interlayer prediction module 520. The apparatus may include means for performing temporal prediction to generate the enhancement layer residual if at least one of a number of non-zero coefficients of the base layer residual or a number of zero coefficients of the base layer residual meets a second selected condition. The means for performing temporal prediction may be processor 120, encoder 130, base layer encoding module 300, enhancement layer encoding module 305, prediction modules, base layer encoding module 505 and/or temporal prediction module 525. - In some embodiments of the invention, an apparatus for decoding a multimedia bitstream is disclosed. The apparatus may include means for receiving a multimedia bitstream having a base layer and an enhancement layer. The means for receiving a multimedia bitstream may be
processor 140, decoder 150 and/or decision module 1005. The apparatus may include means for decoding the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction. The means for decoding may be processor 140, decoder 150, decision module 1005, intralayer prediction module 1010 and/or interlayer prediction module 1015. - Those of ordinary skill would appreciate that the various illustrative logical blocks, modules, and algorithm steps described in connection with the examples disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed methods.
- The various illustrative logical blocks, modules, and circuits described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- The steps of a method or algorithm described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an Application Specific Integrated Circuit (ASIC). The ASIC may reside in a wireless modem. In the alternative, the processor and the storage medium may reside as discrete components in the wireless modem.
- The previous description of the disclosed examples is provided to enable any person of ordinary skill in the art to make or use the disclosed methods and apparatus. Various modifications to these examples will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other examples without departing from the spirit or scope of the disclosed method and apparatus. The described embodiments are to be considered in all respects only as illustrative and not restrictive and the scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims (70)
1. A method of using a base layer to predict an enhancement layer, comprising:
using a block of multimedia data to generate a base residual including a plurality of base quantized coefficients;
using the block of multimedia data to generate an enhancement residual including a plurality of enhancement quantized coefficients;
determining a first value based on the plurality of base quantized coefficients;
determining a second value based on the plurality of enhancement quantized coefficients; and
determining the enhancement layer using at least one of the plurality of base quantized coefficients or the plurality of enhancement quantized coefficients.
2. The method of claim 1 , further comprising transmitting a minimum value of the plurality of base quantized coefficients if the first value is equal to the second value.
3. The method of claim 1 , further comprising determining a minimum value from the plurality of base quantized coefficients and the plurality of enhancement quantized coefficients.
4. The method of claim 1 , further comprising setting the first value to zero if the first sum is less than a threshold.
5. The method of claim 1 , further comprising transmitting the plurality of enhancement quantized coefficients if a sign of the plurality of base quantized coefficients is not equal to a sign of the plurality of enhancement quantized coefficients.
6. The method of claim 1 , further comprising using temporal prediction to generate a base motion vector and the base residual.
7. The method of claim 1 , further comprising using temporal prediction to generate an enhancement motion vector and the enhancement residual.
8. The method of claim 1 , further comprising using interlayer prediction to generate a base motion vector and the base residual.
9. The method of claim 1 , further comprising using interlayer prediction to generate an enhancement motion vector and the enhancement residual.
10. An apparatus for using a base layer to predict an enhancement layer, comprising:
a motion estimation module for using a block of multimedia data to generate a base residual including a plurality of base quantized coefficients and for using the block of multimedia data to generate an enhancement residual including a plurality of enhancement quantized coefficients; and
a prediction module for determining a first value based on the plurality of base quantized coefficients, for determining a second value based on the plurality of enhancement quantized coefficients and for determining the enhancement layer using at least one of the plurality of base quantized coefficients or the plurality of enhancement quantized coefficients.
11. The apparatus of claim 10 , wherein the prediction module further comprises transmitting a minimum value of the plurality of base quantized coefficients if the first value is equal to the second value.
12. The apparatus of claim 10 , wherein the prediction module further comprises determining a minimum value from the plurality of base quantized coefficients and the plurality of enhancement quantized coefficients.
13. The apparatus of claim 10 , wherein the prediction module further comprises setting the first value to zero if the first sum is less than a threshold.
14. The apparatus of claim 10 , wherein the motion estimation module further comprises transmitting the plurality of enhancement quantized coefficients if a sign of the plurality of base quantized coefficients is not equal to a sign of the plurality of enhancement quantized coefficients.
15. The apparatus of claim 10 , wherein the motion estimation module further comprises using temporal prediction to generate a base motion vector and the base residual.
16. The apparatus of claim 10 , wherein the motion estimation module further comprises using temporal prediction to generate an enhancement motion vector and the enhancement residual.
17. The apparatus of claim 10 , wherein the motion estimation module further comprises using interlayer prediction to generate a base motion vector and the base residual.
18. The apparatus of claim 10 , wherein the motion estimation module further comprises using interlayer prediction to generate an enhancement motion vector and the enhancement residual.
19. An apparatus for using a base layer to predict an enhancement layer, comprising:
means for using a block of multimedia data to generate a base residual including a plurality of base quantized coefficients;
means for using the block of multimedia data to generate an enhancement residual including a plurality of enhancement quantized coefficients;
means for determining a first value based on the plurality of base quantized coefficients;
means for determining a second value based on the plurality of enhancement quantized coefficients; and
means for determining the enhancement layer using at least one of the plurality of base quantized coefficients or the plurality of enhancement quantized coefficients.
20. The apparatus of claim 19 , further comprising means for transmitting a minimum value of the plurality of base quantized coefficients if the first value is equal to the second value.
21. The apparatus of claim 19 , further comprising means for determining a minimum value from the plurality of base quantized coefficients and the plurality of enhancement quantized coefficients.
22. The apparatus of claim 19 , further comprising means for setting the first value to zero if the first sum is less than a threshold.
23. The apparatus of claim 19 , further comprising means for transmitting the plurality of enhancement quantized coefficients if a sign of the plurality of base quantized coefficients is not equal to a sign of the plurality of enhancement quantized coefficients.
24. The apparatus of claim 19 , further comprising means for using temporal prediction to generate a base motion vector and the base residual.
25. The apparatus of claim 19 , further comprising means for using temporal prediction to generate an enhancement motion vector and the enhancement residual.
26. The apparatus of claim 19 , further comprising means for using interlayer prediction to generate a base motion vector and the base residual.
27. The apparatus of claim 19 , further comprising means for using interlayer prediction to generate an enhancement motion vector and the enhancement residual.
28. A machine-readable medium embodying a method of using a base layer to predict an enhancement layer, the method comprising:
using a block of multimedia data to generate a base residual including a plurality of base quantized coefficients;
using the block of multimedia data to generate an enhancement residual including a plurality of enhancement quantized coefficients;
determining a first value based on the plurality of base quantized coefficients;
determining a second value based on the plurality of enhancement quantized coefficients; and
determining the enhancement layer using at least one of the plurality of base quantized coefficients or the plurality of enhancement quantized coefficients.
29. The machine-readable medium of claim 28 , wherein the method further comprises transmitting a minimum value of the plurality of base quantized coefficients if the first value is equal to the second value.
30. The machine-readable medium of claim 28 , wherein the method further comprises determining a minimum value from the plurality of base quantized coefficients and the plurality of enhancement quantized coefficients.
31. The machine-readable medium of claim 28 , wherein the method further comprises setting the first value to zero if the first sum is less than a threshold.
32. The machine-readable medium of claim 28 , wherein the method further comprises transmitting the plurality of enhancement quantized coefficients if a sign of the plurality of base quantized coefficients is not equal to a sign of the plurality of enhancement quantized coefficients.
33. The machine-readable medium of claim 28 , wherein the method further comprises using temporal prediction to generate a base motion vector and the base residual.
34. The machine-readable medium of claim 28 , wherein the method further comprises using temporal prediction to generate an enhancement motion vector and the enhancement residual.
35. The machine-readable medium of claim 28 , wherein the method further comprises using interlayer prediction to generate a base motion vector and the base residual.
36. The machine-readable medium of claim 28 , wherein the method further comprises using interlayer prediction to generate an enhancement motion vector and the enhancement residual.
37. A processor for using a base layer to predict an enhancement layer, the processor being configured to:
use a block of multimedia data to generate a base residual including a plurality of base quantized coefficients;
use the block of multimedia data to generate an enhancement residual including a plurality of enhancement quantized coefficients;
determine a first value based on the plurality of base quantized coefficients;
determine a second value based on the plurality of enhancement quantized coefficients; and
determine the enhancement layer using at least one of the plurality of base quantized coefficients or the plurality of enhancement quantized coefficients.
38. The processor of claim 37 , further configured to transmit a minimum value of the plurality of base quantized coefficients if the first value is equal to the second value.
39. The processor of claim 37 , further configured to determine a minimum value from the plurality of base quantized coefficients and the plurality of enhancement quantized coefficients.
40. The processor of claim 37 , further configured to set the first value to zero if the first sum is less than a threshold.
41. The processor of claim 37 , further configured to transmit the plurality of enhancement quantized coefficients if a sign of the plurality of base quantized coefficients is not equal to a sign of the plurality of enhancement quantized coefficients.
42. The processor of claim 37 , further configured to use temporal prediction to generate a base motion vector and the base residual.
43. The processor of claim 37 , further configured to use temporal prediction to generate an enhancement motion vector and the enhancement residual.
44. The processor of claim 37 , further configured to use interlayer prediction to generate a base motion vector and the base residual.
45. The processor of claim 37 , further configured to use interlayer prediction to generate an enhancement motion vector and the enhancement residual.
46. A method of decoding a multimedia bitstream comprising:
receiving a multimedia bitstream having a base layer and an enhancement layer; and
decoding the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction.
47. The method of claim 46 , wherein the intralayer prediction is performed on an N×M block basis or a coefficient basis.
48. The method of claim 46 , wherein the interlayer prediction is performed on an N×M block basis or a coefficient basis.
49. The method of claim 46 , wherein the intralayer or interlayer prediction is performed on a macroblock basis or a coefficient basis.
50. The method of claim 46 , further comprising:
determining whether a plurality of base layer coefficients include at least one non-zero coefficient;
decoding the base layer using intralayer prediction if all the plurality of base layer coefficients have a zero value; and
decoding the base layer using interlayer prediction if at least one of the plurality of base layer coefficients has a non-zero value.
51. An apparatus for decoding a multimedia bitstream comprising:
a decision module for receiving a multimedia bitstream having a base layer and an enhancement layer; and
an interlayer prediction module for decoding the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction.
52. The apparatus of claim 51 , further comprising an intralayer prediction module and wherein:
the decision module determines whether a plurality of base layer coefficients include at least one non-zero coefficient;
the intralayer prediction module decodes the base layer using intralayer prediction if all the plurality of base layer coefficients have a zero value; and
the interlayer prediction module decodes the base layer using interlayer prediction if at least one of the plurality of base layer coefficients has a non-zero value.
53. The apparatus of claim 51 , wherein the intralayer prediction is performed on an N×M block basis or a coefficient basis.
54. The apparatus of claim 51 , wherein the interlayer prediction is performed on an N×M block basis or a coefficient basis.
55. The apparatus of claim 51 , wherein the intralayer or interlayer prediction is performed on a macroblock basis or a coefficient basis.
56. An apparatus for decoding a multimedia bitstream comprising:
means for receiving a multimedia bitstream having a base layer and an enhancement layer; and
means for decoding the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction.
57. The apparatus of claim 56 , wherein the intralayer prediction is performed on an N×M block basis or a coefficient basis.
58. The apparatus of claim 56 , wherein the interlayer prediction is performed on an N×M block basis or a coefficient basis.
59. The apparatus of claim 56 , wherein the intralayer or interlayer prediction is performed on a macroblock basis or a coefficient basis.
60. The apparatus of claim 56 , further comprising:
means for determining whether a plurality of base layer coefficients include at least one non-zero coefficient;
means for decoding the base layer using intralayer prediction if all the plurality of base layer coefficients have a zero value; and
means for decoding the base layer using interlayer prediction if at least one of the plurality of base layer coefficients has a non-zero value.
61. A machine-readable medium embodying a method of decoding a multimedia bitstream, the method comprising:
receiving a multimedia bitstream having a base layer and an enhancement layer; and
decoding the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction.
62. The machine-readable medium of claim 61 , wherein the intralayer prediction is performed on an N×M block basis or a coefficient basis.
63. The machine-readable medium of claim 61 , wherein the interlayer prediction is performed on an N×M block basis or a coefficient basis.
64. The machine-readable medium of claim 61 , wherein the intralayer or interlayer prediction is performed on a macroblock basis or a coefficient basis.
65. The machine-readable medium of claim 61 , wherein the method further comprises:
determining whether a plurality of base layer coefficients include at least one non-zero coefficient;
decoding the base layer using intralayer prediction if all the plurality of base layer coefficients have a zero value; and
decoding the base layer using interlayer prediction if at least one of the plurality of base layer coefficients has a non-zero value.
66. A processor for decoding a multimedia bitstream, the processor being configured to:
receive a multimedia bitstream having a base layer and an enhancement layer; and
decode the base layer to determine whether the enhancement layer should be decoded using intralayer prediction or interlayer prediction.
67. The processor of claim 66 , wherein the intralayer prediction is performed on an N×M block basis or a coefficient basis.
68. The processor of claim 66 , wherein the interlayer prediction is performed on an N×M block basis or a coefficient basis.
69. The processor of claim 66 , wherein the intralayer or interlayer prediction is performed on a macroblock basis or a coefficient basis.
70. The processor of claim 66 , further configured to:
determine whether a plurality of base layer coefficients include at least one non-zero coefficient;
decode the base layer using intralayer prediction if all the plurality of base layer coefficients have a zero value; and
decode the base layer using interlayer prediction if at least one of the plurality of base layer coefficients has a non-zero value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/468,493 US20120219060A1 (en) | 2005-05-03 | 2012-05-10 | System and method for scalable encoding and decoding of multimedia data using multiple layers |
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US67761105P | 2005-05-03 | 2005-05-03 | |
US67760905P | 2005-05-03 | 2005-05-03 | |
US67760705P | 2005-05-03 | 2005-05-03 | |
US67761005P | 2005-05-03 | 2005-05-03 | |
US78927106P | 2006-04-04 | 2006-04-04 | |
US11/416,851 US8619860B2 (en) | 2005-05-03 | 2006-05-02 | System and method for scalable encoding and decoding of multimedia data using multiple layers |
US13/468,493 US20120219060A1 (en) | 2005-05-03 | 2012-05-10 | System and method for scalable encoding and decoding of multimedia data using multiple layers |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/416,851 Continuation US8619860B2 (en) | 2005-05-03 | 2006-05-02 | System and method for scalable encoding and decoding of multimedia data using multiple layers |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120219060A1 true US20120219060A1 (en) | 2012-08-30 |
Family
ID=37308713
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/416,851 Expired - Fee Related US8619860B2 (en) | 2005-05-03 | 2006-05-02 | System and method for scalable encoding and decoding of multimedia data using multiple layers |
US13/468,493 Abandoned US20120219060A1 (en) | 2005-05-03 | 2012-05-10 | System and method for scalable encoding and decoding of multimedia data using multiple layers |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/416,851 Expired - Fee Related US8619860B2 (en) | 2005-05-03 | 2006-05-02 | System and method for scalable encoding and decoding of multimedia data using multiple layers |
Country Status (9)
Country | Link |
---|---|
US (2) | US8619860B2 (en) |
EP (1) | EP1877959A4 (en) |
JP (2) | JP4902642B2 (en) |
KR (1) | KR100942396B1 (en) |
CN (3) | CN102724496B (en) |
BR (1) | BRPI0610903A2 (en) |
CA (1) | CA2608279A1 (en) |
TW (1) | TWI326186B (en) |
WO (1) | WO2006119443A2 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060262985A1 (en) * | 2005-05-03 | 2006-11-23 | Qualcomm Incorporated | System and method for scalable encoding and decoding of multimedia data using multiple layers |
WO2014129873A1 (en) * | 2013-02-25 | 2014-08-28 | 엘지전자 주식회사 | Method for encoding video of multi-layer structure supporting scalability and method for decoding same and apparatus therefor |
US9491459B2 (en) | 2012-09-27 | 2016-11-08 | Qualcomm Incorporated | Base layer merge and AMVP modes for video coding |
US9906786B2 (en) | 2012-09-07 | 2018-02-27 | Qualcomm Incorporated | Weighted prediction mode for scalable video coding |
US9967576B2 (en) | 2013-10-29 | 2018-05-08 | Kt Corporation | Multilayer video signal encoding/decoding method and device |
US10045020B2 (en) | 2013-10-22 | 2018-08-07 | Kt Corporation | Method and apparatus for encoding/decoding multilayer video signal |
US10194158B2 (en) | 2012-09-04 | 2019-01-29 | Qualcomm Incorporated | Transform basis adjustment in scalable video coding |
US10616607B2 (en) | 2013-02-25 | 2020-04-07 | Lg Electronics Inc. | Method for encoding video of multi-layer structure supporting scalability and method for decoding same and apparatus therefor |
Families Citing this family (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7383421B2 (en) * | 2002-12-05 | 2008-06-03 | Brightscale, Inc. | Cellular engine for a data processing system |
US8442108B2 (en) * | 2004-07-12 | 2013-05-14 | Microsoft Corporation | Adaptive updates in motion-compensated temporal filtering |
US8340177B2 (en) * | 2004-07-12 | 2012-12-25 | Microsoft Corporation | Embedded base layer codec for 3D sub-band coding |
US8374238B2 (en) * | 2004-07-13 | 2013-02-12 | Microsoft Corporation | Spatial scalability in 3D sub-band decoding of SDMCTF-encoded video |
US7451293B2 (en) * | 2005-10-21 | 2008-11-11 | Brightscale Inc. | Array of Boolean logic controlled processing elements with concurrent I/O processing and instruction sequencing |
US7956930B2 (en) | 2006-01-06 | 2011-06-07 | Microsoft Corporation | Resampling and picture resizing operations for multi-resolution video coding and decoding |
CN101416513A (en) * | 2006-01-09 | 2009-04-22 | 诺基亚公司 | System and apparatus for low-complexity fine granularity scalable video coding with motion compensation |
TW200803464A (en) * | 2006-01-10 | 2008-01-01 | Brightscale Inc | Method and apparatus for scheduling the processing of multimedia data in parallel processing systems |
WO2008027567A2 (en) * | 2006-09-01 | 2008-03-06 | Brightscale, Inc. | Integral parallel machine |
US20080059467A1 (en) * | 2006-09-05 | 2008-03-06 | Lazar Bivolarski | Near full motion search algorithm |
US8548056B2 (en) | 2007-01-08 | 2013-10-01 | Qualcomm Incorporated | Extended inter-layer coding for spatial scability |
EP2186338A1 (en) * | 2007-08-28 | 2010-05-19 | Thomson Licensing | Staggercasting with no channel change delay |
US8700792B2 (en) * | 2008-01-31 | 2014-04-15 | General Instrument Corporation | Method and apparatus for expediting delivery of programming content over a broadband network |
US8953673B2 (en) | 2008-02-29 | 2015-02-10 | Microsoft Corporation | Scalable video coding and decoding with sample bit depth and chroma high-pass residual layers |
KR101431545B1 (en) * | 2008-03-17 | 2014-08-20 | Samsung Electronics Co., Ltd. | Method and apparatus for video encoding and decoding
US8711948B2 (en) | 2008-03-21 | 2014-04-29 | Microsoft Corporation | Motion-compensated prediction of inter-layer residuals |
US8752092B2 (en) | 2008-06-27 | 2014-06-10 | General Instrument Corporation | Method and apparatus for providing low resolution images in a broadcast system |
US9571856B2 (en) * | 2008-08-25 | 2017-02-14 | Microsoft Technology Licensing, Llc | Conversion operations in scalable video encoding and decoding |
US8213503B2 (en) * | 2008-09-05 | 2012-07-03 | Microsoft Corporation | Skip modes for inter-layer residual video coding and decoding |
US8306153B2 (en) * | 2009-09-21 | 2012-11-06 | Techwell Llc | Method and system for tracking phase in a receiver for 8VSB |
CN101742321B (en) * | 2010-01-12 | 2011-07-27 | Zhejiang University | Layer decomposition-based method and device for encoding and decoding video
KR101432771B1 (en) * | 2010-03-05 | 2014-08-26 | SK Telecom Co., Ltd. | Video encoding apparatus and method therefor, and video decoding apparatus and method therefor
US9357244B2 (en) | 2010-03-11 | 2016-05-31 | Arris Enterprises, Inc. | Method and system for inhibiting audio-video synchronization delay |
US9338458B2 (en) * | 2011-08-24 | 2016-05-10 | Mediatek Inc. | Video decoding apparatus and method for selectively bypassing processing of residual values and/or buffering of processed residual values |
FR2982447A1 (en) * | 2011-11-07 | 2013-05-10 | France Telecom | METHOD FOR ENCODING AND DECODING IMAGES, CORRESPONDING ENCODING AND DECODING DEVICE AND COMPUTER PROGRAMS |
FR2982446A1 (en) | 2011-11-07 | 2013-05-10 | France Telecom | METHOD FOR ENCODING AND DECODING IMAGES, CORRESPONDING ENCODING AND DECODING DEVICE AND COMPUTER PROGRAMS |
GB2499865B (en) * | 2012-03-02 | 2016-07-06 | Canon Kk | Method and devices for encoding a sequence of images into a scalable video bit-stream, and decoding a corresponding scalable video bit-stream |
GB2501115B (en) | 2012-04-13 | 2015-02-18 | Canon Kk | Methods for segmenting and encoding an image, and corresponding devices |
KR20130107861A (en) * | 2012-03-23 | 2013-10-02 | Electronics and Telecommunications Research Institute | Method and apparatus for inter layer intra prediction
KR102001415B1 (en) | 2012-06-01 | 2019-07-18 | Samsung Electronics Co., Ltd. | Rate control method for multi-layer video encoding, and video encoder and video signal processing system using method thereof
CN104620578B (en) * | 2012-07-06 | 2018-01-02 | Samsung Electronics Co., Ltd. | Method and apparatus for multi-layer video coding for random access, and method and apparatus for multi-layer video decoding for random access
EP2891311A1 (en) | 2012-08-29 | 2015-07-08 | VID SCALE, Inc. | Method and apparatus of motion vector prediction for scalable video coding |
KR101835360B1 (en) | 2012-10-01 | 2018-03-08 | GE Video Compression, LLC | Scalable video coding using subblock-based coding of transform coefficient blocks in the enhancement layer
US9602841B2 (en) * | 2012-10-30 | 2017-03-21 | Texas Instruments Incorporated | System and method for decoding scalable video coding |
US9247256B2 (en) | 2012-12-19 | 2016-01-26 | Intel Corporation | Prediction method using skip check module |
GB2509705B (en) * | 2013-01-04 | 2016-07-13 | Canon Kk | Encoding and decoding methods and devices, and corresponding computer programs and computer readable media |
KR20140092198A (en) | 2013-01-07 | 2014-07-23 | Electronics and Telecommunications Research Institute | Video Description for Scalable Coded Video Bitstream
US9807421B2 (en) * | 2013-04-05 | 2017-10-31 | Sharp Kabushiki Kaisha | NAL unit type restrictions |
WO2014163467A1 (en) * | 2013-04-05 | 2014-10-09 | Samsung Electronics Co., Ltd. | Multi-layer video coding method for random access and device therefor, and multi-layer video decoding method for random access and device therefor
US10085034B2 (en) * | 2013-07-12 | 2018-09-25 | Sony Corporation | Image coding apparatus and method |
US9794558B2 (en) * | 2014-01-08 | 2017-10-17 | Qualcomm Incorporated | Support of non-HEVC base layer in HEVC multi-layer extensions |
US9712837B2 (en) * | 2014-03-17 | 2017-07-18 | Qualcomm Incorporated | Level definitions for multi-layer video codecs |
GB2538531A (en) * | 2015-05-20 | 2016-11-23 | Nokia Technologies Oy | An apparatus, a method and a computer program for video coding and decoding |
CN113810708B (en) | 2016-04-29 | 2024-06-28 | Industry Academy Cooperation Foundation of Sejong University | Method and apparatus for encoding and decoding image signal
US11140368B2 (en) | 2017-08-25 | 2021-10-05 | Advanced Micro Devices, Inc. | Custom beamforming during a vertical blanking interval |
US11539908B2 (en) * | 2017-09-29 | 2022-12-27 | Advanced Micro Devices, Inc. | Adjustable modulation coding scheme to increase video stream robustness |
US11398856B2 (en) | 2017-12-05 | 2022-07-26 | Advanced Micro Devices, Inc. | Beamforming techniques to choose transceivers in a wireless mesh network |
US10904563B2 (en) * | 2019-01-02 | 2021-01-26 | Tencent America LLC | Method and apparatus for improved zero out transform |
US11699408B2 (en) | 2020-12-22 | 2023-07-11 | Ati Technologies Ulc | Performing asynchronous memory clock changes on multi-display systems |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5515377A (en) * | 1993-09-02 | 1996-05-07 | At&T Corp. | Adaptive video encoder for two-layer encoding of video signals on ATM (asynchronous transfer mode) networks |
Family Cites Families (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04177992A (en) | 1990-11-09 | 1992-06-25 | Victor Co Of Japan Ltd | Picture coder having hierarchical structure |
JP3788823B2 (en) | 1995-10-27 | 2006-06-21 | 株式会社東芝 | Moving picture encoding apparatus and moving picture decoding apparatus |
IL127274A (en) * | 1997-04-01 | 2006-06-11 | Sony Corp | Picture coding device, picture coding method,picture decoding device, picture decoding method, and providing medium |
KR100261254B1 (en) | 1997-04-02 | 2000-07-01 | 윤종용 | Scalable audio data encoding/decoding method and apparatus |
US6233356B1 (en) * | 1997-07-08 | 2001-05-15 | At&T Corp. | Generalized scalability for video coder based on video objects |
US6731811B1 (en) * | 1997-12-19 | 2004-05-04 | Voicecraft, Inc. | Scalable predictive coding method and apparatus |
US6275531B1 (en) | 1998-07-23 | 2001-08-14 | Optivision, Inc. | Scalable video coding method and apparatus |
US6563953B2 (en) | 1998-11-30 | 2003-05-13 | Microsoft Corporation | Predictive image compression using a single variable length code for both the luminance and chrominance blocks for each macroblock |
US6263022B1 (en) | 1999-07-06 | 2001-07-17 | Philips Electronics North America Corp. | System and method for fine granular scalable video with selective quality enhancement |
US6788740B1 (en) * | 1999-10-01 | 2004-09-07 | Koninklijke Philips Electronics N.V. | System and method for encoding and decoding enhancement layer data using base layer quantization data |
US6480547B1 (en) * | 1999-10-15 | 2002-11-12 | Koninklijke Philips Electronics N.V. | System and method for encoding and decoding the residual signal for fine granular scalable video |
EP1161839A1 (en) * | 1999-12-28 | 2001-12-12 | Koninklijke Philips Electronics N.V. | Snr scalable video encoding method and corresponding decoding method |
US6700933B1 (en) | 2000-02-15 | 2004-03-02 | Microsoft Corporation | System and method with advance predicted bit-plane coding for progressive fine-granularity scalable (PFGS) video coding |
US20020126759A1 (en) * | 2001-01-10 | 2002-09-12 | Wen-Hsiao Peng | Method and apparatus for providing prediction mode fine granularity scalability |
WO2003036978A1 (en) * | 2001-10-26 | 2003-05-01 | Koninklijke Philips Electronics N.V. | Method and apparatus for spatial scalable compression |
US7317759B1 (en) * | 2002-02-28 | 2008-01-08 | Carnegie Mellon University | System and methods for video compression mode decisions |
US6674376B1 (en) * | 2002-09-13 | 2004-01-06 | Morpho Technologies | Programmable variable length decoder circuit and method |
TWI419219B (en) * | 2002-11-15 | 2013-12-11 | Ebara Corp | Apparatus and method for substrate processing |
KR20060105407A (en) | 2005-04-01 | 2006-10-11 | LG Electronics Inc. | Method for scalably encoding and decoding video signal
US7406176B2 (en) * | 2003-04-01 | 2008-07-29 | Microsoft Corporation | Fully scalable encryption for scalable multimedia |
KR100505961B1 (en) * | 2003-09-09 | 2005-08-03 | M2SYS Co., Ltd. | Cellular phone having sliding type opening and closing mechanism
JP4153410B2 (en) | 2003-12-01 | 2008-09-24 | 日本電信電話株式会社 | Hierarchical encoding method and apparatus, hierarchical decoding method and apparatus, hierarchical encoding program and recording medium recording the program, hierarchical decoding program and recording medium recording the program |
KR101031588B1 (en) * | 2004-02-24 | 2011-04-27 | POSCO Co., Ltd. | Apparatus for manufacturing fine slag with a function of utilizing sensible heat
US20060008009A1 (en) * | 2004-07-09 | 2006-01-12 | Nokia Corporation | Method and system for entropy coding for scalable video codec |
KR20060059772A (en) | 2004-11-29 | 2006-06-02 | LG Electronics Inc. | Method and apparatus for deriving motion vectors of macro blocks from motion vectors of pictures of base layer when encoding/decoding video signal
KR20060101847A (en) | 2005-03-21 | 2006-09-26 | LG Electronics Inc. | Method for scalably encoding and decoding video signal
KR100746007B1 (en) * | 2005-04-19 | 2007-08-06 | Samsung Electronics Co., Ltd. | Method and apparatus for adaptively selecting context model of entropy coding
AU2006201490B2 (en) | 2005-04-19 | 2008-05-22 | Samsung Electronics Co., Ltd. | Method and apparatus for adaptively selecting context model for entropy coding |
US8619860B2 (en) * | 2005-05-03 | 2013-12-31 | Qualcomm Incorporated | System and method for scalable encoding and decoding of multimedia data using multiple layers |
KR100878811B1 (en) | 2005-05-26 | 2009-01-14 | LG Electronics Inc. | Method of decoding for a video signal and apparatus thereof
KR100682405B1 (en) * | 2005-07-29 | 2007-02-15 | 김재홍 | Embossed polyethylene fabric manufacturing system in which the embossing is formed in the fluid resin coating layer
- 2006
  - 2006-05-02 US US11/416,851 patent/US8619860B2/en not_active Expired - Fee Related
  - 2006-05-03 EP EP06752235A patent/EP1877959A4/en not_active Ceased
  - 2006-05-03 CN CN201210148543.9A patent/CN102724496B/en not_active Expired - Fee Related
  - 2006-05-03 JP JP2008510212A patent/JP4902642B2/en not_active Expired - Fee Related
  - 2006-05-03 CN CN2006800227748A patent/CN101542926B/en not_active Expired - Fee Related
  - 2006-05-03 CN CN201410330918.2A patent/CN104079935B/en not_active Expired - Fee Related
  - 2006-05-03 TW TW095115759A patent/TWI326186B/en not_active IP Right Cessation
  - 2006-05-03 CA CA002608279A patent/CA2608279A1/en not_active Abandoned
  - 2006-05-03 KR KR1020077028255A patent/KR100942396B1/en not_active IP Right Cessation
  - 2006-05-03 WO PCT/US2006/017179 patent/WO2006119443A2/en active Application Filing
  - 2006-05-03 BR BRPI0610903-9A patent/BRPI0610903A2/en not_active IP Right Cessation
- 2011
  - 2011-02-14 JP JP2011028942A patent/JP5335833B2/en not_active Expired - Fee Related
- 2012
  - 2012-05-10 US US13/468,493 patent/US20120219060A1/en not_active Abandoned
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060262985A1 (en) * | 2005-05-03 | 2006-11-23 | Qualcomm Incorporated | System and method for scalable encoding and decoding of multimedia data using multiple layers |
US8619860B2 (en) | 2005-05-03 | 2013-12-31 | Qualcomm Incorporated | System and method for scalable encoding and decoding of multimedia data using multiple layers |
US10194158B2 (en) | 2012-09-04 | 2019-01-29 | Qualcomm Incorporated | Transform basis adjustment in scalable video coding |
US9906786B2 (en) | 2012-09-07 | 2018-02-27 | Qualcomm Incorporated | Weighted prediction mode for scalable video coding |
US9491459B2 (en) | 2012-09-27 | 2016-11-08 | Qualcomm Incorporated | Base layer merge and AMVP modes for video coding |
WO2014129873A1 (en) * | 2013-02-25 | 2014-08-28 | LG Electronics Inc. | Method for encoding video of multi-layer structure supporting scalability and method for decoding same and apparatus therefor
US10616607B2 (en) | 2013-02-25 | 2020-04-07 | Lg Electronics Inc. | Method for encoding video of multi-layer structure supporting scalability and method for decoding same and apparatus therefor |
US10045020B2 (en) | 2013-10-22 | 2018-08-07 | Kt Corporation | Method and apparatus for encoding/decoding multilayer video signal |
US10045019B2 (en) | 2013-10-22 | 2018-08-07 | Kt Corporation | Method and device for encoding/decoding multi-layer video signal |
US10051267B2 (en) | 2013-10-22 | 2018-08-14 | Kt Corporation | Method and device for encoding/decoding multi-layer video signal |
US10602136B2 (en) | 2013-10-22 | 2020-03-24 | Kt Corporation | Method and apparatus for encoding/decoding multilayer video signal |
US10602137B2 (en) | 2013-10-22 | 2020-03-24 | Kt Corporation | Method and device for encoding/decoding multi-layer video signal |
US10045035B2 (en) | 2013-10-29 | 2018-08-07 | Kt Corporation | Multilayer video signal encoding/decoding method and device |
US9967575B2 (en) | 2013-10-29 | 2018-05-08 | Kt Corporation | Multilayer video signal encoding/decoding method and device |
US10602165B2 (en) | 2013-10-29 | 2020-03-24 | Kt Corporation | Multilayer video signal encoding/decoding method and device |
US10602164B2 (en) | 2013-10-29 | 2020-03-24 | Kt Corporation | Multilayer video signal encoding/decoding method and device |
US9967576B2 (en) | 2013-10-29 | 2018-05-08 | Kt Corporation | Multilayer video signal encoding/decoding method and device |
Also Published As
Publication number | Publication date |
---|---|
KR20080015830A (en) | 2008-02-20 |
KR100942396B1 (en) | 2010-02-17 |
US20060262985A1 (en) | 2006-11-23 |
JP2008543130A (en) | 2008-11-27 |
CA2608279A1 (en) | 2006-11-09 |
WO2006119443A3 (en) | 2009-04-16 |
US8619860B2 (en) | 2013-12-31 |
CN102724496A (en) | 2012-10-10 |
TWI326186B (en) | 2010-06-11 |
CN102724496B (en) | 2017-04-12 |
TW200718214A (en) | 2007-05-01 |
WO2006119443A2 (en) | 2006-11-09 |
EP1877959A4 (en) | 2013-01-02 |
JP5335833B2 (en) | 2013-11-06 |
JP4902642B2 (en) | 2012-03-21 |
CN104079935B (en) | 2018-02-16 |
CN101542926A (en) | 2009-09-23 |
EP1877959A2 (en) | 2008-01-16 |
JP2011120281A (en) | 2011-06-16 |
BRPI0610903A2 (en) | 2008-12-02 |
CN104079935A (en) | 2014-10-01 |
CN101542926B (en) | 2012-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8619860B2 (en) | System and method for scalable encoding and decoding of multimedia data using multiple layers | |
US20240340448A1 (en) | Method and apparatus for decoding video signal | |
US11425408B2 (en) | Combined motion vector and reference index prediction for video coding | |
CN109644270B (en) | Video encoding method and encoder, video decoding method and decoder, and storage medium | |
US7499495B2 (en) | Extended range motion vectors | |
US8897360B2 (en) | Method and apparatus for encoding and decoding images by adaptively using an interpolation filter | |
US20050025246A1 (en) | Decoding jointly coded transform type and subblock pattern information | |
US10931945B2 (en) | Method and device for processing prediction information for encoding or decoding an image | |
US7577200B2 (en) | Extended range variable length coding/decoding of differential motion vector information | |
RU2409857C2 (en) | System and method for scalable coding and decoding multimedia data using multiple layers | |
Grecos et al. | Audiovisual Compression for Multimedia Services in Intelligent Environments |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: QUALCOMM INCORPORATED, CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignors: CHEN, PEISONG; RAVEENDRAN, VIJAYALAKSHMI R.; Reel/Frame: 028189/0168; Effective date: 20060707 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |