
AU2020203130A1 - Coding systems - Google Patents

Coding systems

Info

Publication number
AU2020203130A1
Authority
AU
Australia
Prior art keywords
sps
layer
information
nal unit
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
AU2020203130A
Other versions
AU2020203130B2 (en)
Inventor
Jiancong Luo
Jiheng Yang
Peng Yin
Lihua Zhu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dolby International AB
Original Assignee
Dolby International AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2008241568
Priority claimed from AU2012238298
Application filed by Dolby International AB
Priority to AU2020203130A
Publication of AU2020203130A1
Application granted
Publication of AU2020203130B2
Priority to AU2021203777A
Active legal status
Anticipated expiration legal status

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Reduction Or Emphasis Of Bandwidth Of Signals (AREA)

Abstract

In an implementation, a supplemental sequence parameter set ("SPS") structure is provided that has its own network abstraction layer ("NAL") unit type and allows transmission of layer-dependent parameters for non-base layers in an SVC environment. The supplemental SPS structure also may be used for view information in an MVC environment. In a general aspect, a structure is provided that includes (1) information (1410) from an SPS NAL unit, the information describing a parameter for use in decoding a first-layer encoding of a sequence of images, and (2) information (1420) from a supplemental SPS NAL unit having a different structure than the SPS NAL unit, the information from the supplemental SPS NAL unit describing a parameter for use in decoding a second-layer encoding for the sequence of images. Associated methods and apparatuses are provided on the encoder and decoder sides, as well as for the signal.

Description

CODING SYSTEMS
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application is a divisional application from Australian Patent Application No. 2017258902, which is a divisional of Australian Patent No. 2015203559, which is a divisional of Australian Patent No. 2012238298, which is a divisional of Australian Patent No. 2008241568, the entire disclosure of each of which is incorporated herein by reference.
TECHNICAL FIELD
At least one implementation relates to encoding and decoding video data in a scalable manner.
BACKGROUND
Coding video data according to several layers can be useful when terminals for which the data are intended have different capacities and therefore do not decode a full data stream but only part of a full data stream. When the video data are coded according to several layers in a scalable manner, the receiving terminal can extract from the received bit-stream a portion of the data according to the terminal's profile. A full data stream may also transmit overhead information for each supported layer, to facilitate decoding of each of the layers at a terminal.
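The extraction described above can be sketched as filtering the stream by layer. This is a hedged illustration only: the dict-based NAL-unit representation and the "layer_id" field are hypothetical stand-ins, not the actual SVC syntax.

```python
# A minimal sketch of sub-bitstream extraction: a receiving terminal
# keeps only the layers it is capable of decoding.

def extract_sub_bitstream(nal_units, max_layer):
    """Keep only NAL units whose layer id this terminal can decode."""
    return [nal for nal in nal_units if nal["layer_id"] <= max_layer]

stream = [
    {"layer_id": 0, "payload": "base layer"},
    {"layer_id": 1, "payload": "enhancement layer 1"},
    {"layer_id": 2, "payload": "enhancement layer 2"},
]

# A terminal whose profile supports only the base layer plus one
# enhancement layer extracts a two-layer subset:
subset = extract_sub_bitstream(stream, max_layer=1)
```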
SUMMARY
According to one aspect, there is provided herein a method comprising:
accessing information from a sequence parameter set (“SPS”) network abstraction layer (“NAL”) unit, the information describing a parameter for use in decoding a first-layer encoding of an image in a sequence of images;
accessing supplemental information from a supplemental SPS NAL unit having an available NAL unit type code that is a different NAL unit type code from that of the SPS NAL unit, and having a different syntax structure than the SPS NAL unit, and the supplemental information from the supplemental SPS NAL unit describing a parameter for use in decoding a second-layer encoding of the image in the sequence of images; and
2020203130 13 May 2020
decoding the first-layer encoding, and the second-layer encoding, based on, respectively, the accessed information from the SPS NAL unit, and the accessed supplemental information from the supplemental SPS NAL unit.
According to another aspect, the present invention provides a decoder comprising: a parsing unit to receive information in a first parameter set contained in a first network abstraction layer unit, the first parameter set being a syntax structure which contains syntax elements that apply to zero or more entire coded video sequences, and the information describing a parameter for use in decoding multiple layers of the video sequences, and supplemental information contained in a second network abstraction layer unit, the second NAL unit having a different syntax structure than the first network abstraction layer unit and corresponding to one layer of said multiple layers; and a decoding unit to decode said one layer of said multiple layers based on the accessed information from the first NAL unit and the accessed supplemental information from the second NAL unit.
According to another aspect, the invention provides a method comprising: receiving information in a first parameter set contained in a first network abstraction layer unit, the first parameter set being a syntax structure which contains syntax elements that apply to zero or more entire coded video sequences, and the information describing a parameter for use in decoding multiple layers of the video sequences;
receiving supplemental information contained in a second network abstraction layer unit, the second NAL unit having a different syntax structure than the first network abstraction layer unit and corresponding to one layer of said multiple layers; and decoding said one layer of said multiple layers based on the accessed information from the first NAL unit and the accessed supplemental information from the second NAL unit.
According to yet another aspect, the present invention provides an encoder comprising:
a generation unit to generate information in a first parameter set contained in a first network abstraction layer unit, the first parameter set being a syntax structure which contains syntax elements that apply to zero or more entire coded video sequences, and the information describing a parameter for use in decoding multiple layers of the video sequences, and supplemental information in a second network abstraction layer unit, the second NAL unit having a different syntax structure than the first network abstraction layer unit, the second NAL unit corresponding to one layer of said multiple layers; and an encoding unit to encode said one layer of said multiple layers based on the generated information for the first NAL unit and the generated supplemental information for the second NAL unit.
According to a further aspect, the present invention provides a method comprising:
generating information in a first parameter set contained in a first network abstraction layer unit, the first parameter set being a syntax structure which contains syntax elements that apply to zero or more entire coded video sequences, and the information describing a parameter for use in decoding multiple layers of the video sequences;
generating supplemental information in a second network abstraction layer unit, the second NAL unit having a different syntax structure than the first network abstraction layer unit, the second NAL unit corresponding to one layer of said multiple layers; and encoding said one layer of said multiple layers based on the generated information for the first NAL unit and the generated supplemental information for the second NAL unit.
According to yet another aspect, the present invention provides a signal having decoding parameters, the signal formatted to comprise:
information from a first parameter set contained in a first network abstraction layer unit, the first parameter set being a syntax structure which contains syntax elements that apply to zero or more entire coded video sequences, and the information describing a parameter at least for use in decoding multiple layers of the video sequences;
supplemental information from a second NAL unit having a different syntax structure than the first network abstraction layer unit, the second NAL unit corresponding to one layer of said multiple layers; and data representing said one layer of said multiple layers.
According to a general aspect, information is accessed from a sequence parameter set ("SPS") network abstraction layer ("NAL") unit. The information describes a parameter for use in decoding a first-layer encoding of a sequence of images. Information is also accessed from a supplemental SPS NAL unit having a different structure than the SPS NAL unit. The information from the supplemental SPS NAL unit describes a parameter for use in decoding a second-layer encoding of the sequence of images. A decoding of the sequence of images is generated based on the first-layer encoding, the second-layer encoding, the accessed information from the SPS NAL unit, and the accessed information from the supplemental SPS NAL unit.
According to another general aspect, a syntax structure is used that provides for decoding a sequence of images in multiple layers. The syntax structure includes syntax for an SPS NAL unit including information describing a parameter for use in decoding a first-layer encoding of a sequence of images. The syntax structure also includes syntax for a supplemental SPS NAL unit having a different structure than the SPS NAL unit. The supplemental SPS NAL unit includes information describing a parameter for use in decoding a second-layer encoding of the sequence of images. A decoding of the sequence of images may be generated based on the first-layer encoding, the second-layer encoding, the information from the SPS NAL unit, and the information from the supplemental SPS NAL unit.
According to another general aspect, a signal is formatted to include information from an SPS NAL unit. The information describes a parameter for use in decoding a first-layer encoding of a sequence of images. The signal is further formatted to include information from a supplemental SPS NAL unit having a different structure than the SPS NAL unit. The information from the supplemental SPS NAL unit describes a parameter for use in decoding a second-layer encoding of the sequence of images.
According to another general aspect, an SPS NAL unit is generated that includes information describing a parameter for use in decoding a first-layer encoding of a sequence of images. A supplemental SPS NAL unit is generated that has a different structure than the SPS NAL unit. The supplemental SPS NAL unit includes information that describes a parameter for use in decoding a second-layer encoding of the sequence of images. A set of data is provided that includes the first-layer encoding of the sequence of images, the second-layer encoding of the sequence of images, the SPS NAL unit, and the supplemental SPS NAL unit.
According to another general aspect, a syntax structure is used that provides for encoding a sequence of images in multiple layers. The syntax structure includes syntax for an SPS NAL unit. The SPS NAL unit includes information that describes a parameter for use in decoding a first-layer encoding of a sequence of images. The syntax structure includes syntax for a supplemental SPS NAL unit. The supplemental SPS NAL unit has a different structure than the SPS NAL unit. The supplemental SPS NAL unit includes information that describes a parameter for use in decoding a second-layer encoding of the sequence of images. A set of data may be provided that includes the first-layer encoding of the sequence of images, the second-layer encoding of the sequence of images, the SPS NAL unit, and the supplemental SPS NAL unit.
According to another general aspect, first layer-dependent information is accessed in a first normative parameter set. The accessed first layer-dependent information is for use in decoding a first-layer encoding of a sequence of images. Second layer-dependent information is accessed in a second normative parameter set. The second normative parameter set has a different structure than the first normative parameter set. The accessed second layer-dependent information is for use in decoding a second-layer encoding of the sequence of images. The sequence of images is decoded based on one or more of the accessed first layer-dependent information or the accessed second layer-dependent information.
According to another general aspect, a first normative parameter set is generated that includes first layer-dependent information. The first layer-dependent information is for use in decoding a first-layer encoding of a sequence of images. A second normative parameter set is generated having a different structure than the first normative parameter set. The second normative parameter set includes second layer-dependent information for use in decoding a second-layer encoding of the sequence of images. A set of data is provided that includes the first normative parameter set and the second normative parameter set.
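The paired parameter sets described in these aspects can be sketched as plain data objects: a first set carrying shared parameters, and a supplemental set carrying layer-dependent parameters that refers back to the first set rather than repeating it. This is a hedged illustration only; the field names (sps_id, layer_id, layer_params) are hypothetical stand-ins, not the actual SVC syntax elements.

```python
# Hypothetical sketch of the two parameter-set structures: the
# supplemental set has a different structure and holds only the
# layer-dependent parameters for an additional layer.
from dataclasses import dataclass, field

@dataclass
class SpsInfo:
    """Parameters for decoding a first-layer (e.g., base-layer) encoding."""
    sps_id: int
    profile_idc: int
    level_idc: int

@dataclass
class SupSpsInfo:
    """Layer-dependent parameters for an additional layer's encoding."""
    sps_id: int          # refers to the SPS it supplements
    layer_id: int        # the layer these parameters apply to
    layer_params: dict = field(default_factory=dict)

sps = SpsInfo(sps_id=0, profile_idc=83, level_idc=21)
sup = SupSpsInfo(sps_id=0, layer_id=1, layer_params={"frame_rate": 30})

# The supplemental set extends, rather than duplicates, the shared SPS:
assert sup.sps_id == sps.sps_id
```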
The details of one or more implementations are set forth in the accompanying drawings and the description below. Even if described in one particular manner, it should be clear that implementations may be configured or embodied in various manners. For example, an implementation may be performed as a method, or embodied as an apparatus, such as, for example, an apparatus configured to perform a set of operations or an apparatus storing instructions for performing a set of operations, or embodied in a signal. Other aspects and features will become apparent from the following detailed description considered in conjunction with the accompanying drawings and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram for an implementation of an encoder.
FIG. 1a is a block diagram for another implementation of an encoder.
FIG. 2 is a block diagram for an implementation of a decoder.
FIG. 2a is a block diagram for another implementation of a decoder.
FIG. 3 is a structure of an implementation of a single-layer Sequence Parameter Set (SPS) Network Abstraction Layer (NAL) unit.
FIG. 4 is a block view of an example of portions of a data stream illustrating use of an SPS NAL unit.
FIG. 5 is a structure of an implementation of a supplemental SPS ("SUP SPS") NAL unit.
FIG. 6 is an implementation of an organizational hierarchy among an SPS unit and multiple SUP SPS units.
FIG. 7 is a structure of another implementation of a SUP SPS NAL unit.
FIG. 8 is a functional view of an implementation of a scalable video coder that generates SUP SPS units.
FIG. 9 is a hierarchical view of an implementation of the generation of a data stream that contains SUP SPS units.
FIG. 10 is a block view of an example of a data stream generated by the implementation of FIG. 9.
FIG. 11 is a block diagram of an implementation of an encoder.
FIG. 12 is a block diagram of another implementation of an encoder.
FIG. 13 is a flow chart of an implementation of an encoding process used by the encoders of FIGS. 11 or 12.
FIG. 14 is a block view of an example of a data stream generated by the process of FIG. 13.
FIG. 15 is a block diagram of an implementation of a decoder.
FIG. 16 is a block diagram of another implementation of a decoder.
FIG. 17 is a flow chart of an implementation of a decoding process used by the decoders of FIGS. 15 or 16.
DETAILED DESCRIPTION
Several video coding standards exist today that can code video data according to different layers and/or profiles. Among them, one can cite H.264/MPEG-4 AVC (the "AVC standard"), also referenced as the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 recommendation. Additionally, extensions to the AVC standard exist. A first such extension is a scalable video coding ("SVC") extension (Annex G) referred to as H.264/MPEG-4 AVC, scalable video coding extension (the "SVC extension"). A second such extension is a multi-view video coding ("MVC") extension (Annex H) referred to as H.264/MPEG-4 AVC, MVC extension (the "MVC extension").
At least one implementation described in this disclosure may be used with the AVC standard as well as the SVC and MVC extensions. The implementation provides a supplemental (SUP) sequence parameter set (SPS) network abstraction layer (NAL) unit having a different NAL unit type than SPS NAL units. An SPS unit typically includes, but need not, information for at least a single layer. Further, the SUP SPS NAL unit includes layer-dependent information for at least one additional layer. Thus, by accessing SPS and SUP SPS units, a decoder has available certain (and typically all) layer-dependent information needed to decode a bit stream.
Using this implementation in an AVC system, the SUP SPS NAL units need not be transmitted, and a single-layer SPS NAL unit (as described below) may be transmitted. Using this implementation in an SVC (or MVC) system, the SUP SPS NAL unit(s) may be transmitted for the desired additional layers (or views), in addition to an SPS NAL unit. Using this implementation in a system including both AVC-compatible decoders and SVC-compatible (or MVC-compatible) decoders, the AVC-compatible decoders may ignore the SUP SPS NAL units by detecting the NAL unit type. In each case, efficiency and compatibility are achieved.
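The backward-compatibility behavior described above can be sketched as a simple dispatch on the NAL unit type. The SPS type code 7 matches AVC, but the SUP SPS type code below is purely hypothetical, as is the dict-based unit representation.

```python
# Sketch: an AVC-only decoder processes the NAL unit types it knows and
# silently ignores unknown ones, so SUP SPS units pass through harmlessly.
NAL_TYPE_SPS = 7       # SPS NAL unit type in AVC
NAL_TYPE_SUP_SPS = 24  # hypothetical type code for the supplemental SPS

def units_processed_by_avc(nal_units, known_types=frozenset({NAL_TYPE_SPS})):
    """Return only the NAL units an AVC-compatible decoder recognizes."""
    return [n for n in nal_units if n["type"] in known_types]

stream = [{"type": NAL_TYPE_SPS}, {"type": NAL_TYPE_SUP_SPS}]

# The AVC decoder keeps the SPS and drops the SUP SPS it cannot parse;
# an SVC decoder would instead handle both types.
assert units_processed_by_avc(stream) == [{"type": NAL_TYPE_SPS}]
```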
The above implementation also provides benefits for systems (standards or otherwise) that impose a requirement that certain layers share header information, such as, for example, an SPS or particular information typically carried in an SPS. For example, if a base layer and its composite temporal layers need to share an SPS, then the layer-dependent information cannot be transmitted with the shared SPS. However, the SUP SPS provides a mechanism for transmitting the layer-dependent information.
The SUP SPS of various implementations also provides an efficiency advantage in that the SUP SPS need not include, and therefore repeat, all of the parameters in the SPS. The SUP SPS will typically be focused on the layer-dependent parameters. However, various implementations include a SUP SPS structure that includes non-layer-dependent parameters, or even repeats all of an SPS structure.
Various implementations relate to the SVC extension. The SVC extension proposes the transmission of video data according to several spatial levels, temporal levels, and quality levels. For one spatial level, one can code according to several temporal levels, and for each temporal level according to several quality levels. Therefore, when there are defined m spatial levels, n temporal levels, and O quality levels, the video data can be coded according to m*n*O different combinations. These combinations are referred to as layers, or as interoperability points ("IOPs"). According to the decoder (also referred to as the receiver or the client) capabilities, different layers may be transmitted up to a certain layer corresponding to the maximum of the client capabilities.
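The layer count described above follows directly from enumerating every (spatial, temporal, quality) combination. A minimal sketch, with arbitrarily chosen level counts:

```python
# With m spatial levels, n temporal levels, and O quality levels, each
# (spatial, temporal, quality) triple is one layer / interoperability
# point, giving m*n*O combinations in total.
from itertools import product

m, n, O = 2, 3, 2  # example level counts, chosen arbitrarily
iops = list(product(range(m), range(n), range(O)))

assert len(iops) == m * n * O  # 2 * 3 * 2 = 12 interoperability points
```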
As used herein, "layer-dependent" information refers to information that relates specifically to a single layer. That is, as the name suggests, the information is dependent upon the specific layer. Such information need not necessarily vary from layer to layer, but would typically be provided separately for each layer.
As used herein, "high level syntax" refers to syntax present in the bitstream that resides hierarchically above the macroblock layer. For example, high level syntax, as used herein, may refer to, but is not limited to, syntax at the slice header level, Supplemental Enhancement Information (SEI) level, Picture Parameter Set (PPS) level, Sequence Parameter Set (SPS) level, and Network Abstraction Layer (NAL) unit header level.
Referring to FIG. 1, an exemplary SVC encoder is indicated generally by the reference numeral 100. The SVC encoder 100 may also be used for AVC encoding, that is, for a single layer (for example, base layer). Further, the SVC encoder 100 may be used for MVC encoding as one of ordinary skill in the art will appreciate. For example, various components of the SVC encoder 100, or variations of these components, may be used in encoding multiple views.
A first output of a temporal decomposition module 142 is connected in signal communication with a first input of an intra prediction for intra block module 146. A second output of the temporal decomposition module 142 is connected in signal communication with a first input of a motion coding module 144. An output of the intra prediction for intra block module 146 is connected in signal communication with an input of a transform/entropy coder (signal to noise ratio (SNR) scalable) 149. A first output of the transform/entropy coder 149 is connected in signal communication with a first input of a multiplexer 170.
A first output of a temporal decomposition module 132 is connected in signal communication with a first input of an intra prediction for intra block module 136. A second output of the temporal decomposition module 132 is connected in signal communication with a first input of a motion coding module 134. An output of the intra prediction for intra block module 136 is connected in signal communication with an input of a transform/entropy coder (signal to noise ratio (SNR) scalable) 139. A first output of the transform/entropy coder 139 is connected in signal communication with a first input of a multiplexer 170.
A second output of the transform/entropy coder 149 is connected in signal communication with an input of a 2D spatial interpolation module 138. An output of 2D spatial interpolation module 138 is connected in signal communication with a second input of the intra prediction for intra block module 136. A second output of the motion coding module 144 is connected in signal communication with an input of the motion coding module 134.
A first output of a temporal decomposition module 122 is connected in signal communication with a first input of an intra predictor 126. A second output of the temporal decomposition module 122 is connected in signal communication with a first input of a motion coding module 124. An output of the intra predictor 126 is connected in signal communication with an input of a transform/entropy coder (signal to noise ratio (SNR) scalable) 129. An output of the transform/entropy coder 129 is connected in signal communication with a first input of a multiplexer 170.
A second output of the transform/entropy coder 139 is connected in signal communication with an input of a 2D spatial interpolation module 128. An output of 2D spatial interpolation module 128 is connected in signal communication with a second input of the intra predictor module 126. A second output of the motion coding module 134 is connected in signal communication with an input of the motion coding module 124.
A first output of the motion coding module 124, a first output of the motion coding module 134, and a first output of the motion coding module 144 are each connected in signal communication with a second input of the multiplexer 170.
A first output of a 2D spatial decimation module 104 is connected in signal communication with an input of the temporal decomposition module 132. A second output of the 2D spatial decimation module 104 is connected in signal communication with an input of the temporal decomposition module 142.
An input of the temporal decomposition module 122 and an input of the 2D spatial decimation module 104 are available as inputs of the encoder 100, for receiving input video 102.
An output of the multiplexer 170 is available as an output of the encoder 100, for providing a bitstream 130.
The temporal decomposition module 122, the temporal decomposition module 132, the temporal decomposition module 142, the motion coding module 124, the motion coding module 134, the motion coding module 144, the intra predictor 126, the intra predictor 136, the intra predictor 146, the transform/entropy coder 129, the transform/entropy coder 139, the transform/entropy coder 149, the 2D spatial interpolation module 128, and the 2D spatial interpolation module 138 are included in a core encoder portion 187 of the encoder 100.
FIG. 1 includes three core encoders 187. In the implementation shown, the bottom-most core encoder 187 may encode a base layer, with the middle and upper core encoders 187 encoding higher layers.
Turning to FIG. 2, an exemplary SVC decoder is indicated generally by the reference numeral 200. The SVC decoder 200 may also be used for AVC decoding, that is, for a single view. Further, the SVC decoder 200 may be used for MVC decoding as one of ordinary skill in the art will appreciate. For example, various components of the SVC decoder 200, or variations of these components, may be used in decoding multiple views.
Note that encoder 100 and decoder 200, as well as other encoders and decoders discussed in this disclosure, can be configured to perform various methods shown throughout this disclosure. In addition to performing encoding operations, the encoders described in this disclosure may perform various decoding operations during a reconstruction process in order to mirror the expected actions of a decoder. For example, an encoder may decode SUP SPS units to decode encoded video data in order to produce a reconstruction of the encoded video data for use in predicting additional video data. Consequently, an encoder may perform substantially all of the operations that are performed by a decoder.
An input of a demultiplexer 202 is available as an input to the scalable video decoder 200, for receiving a scalable bitstream. A first output of the demultiplexer 202 is connected in signal communication with an input of a spatial inverse transform SNR scalable entropy decoder 204. A first output of the spatial inverse transform SNR scalable entropy decoder 204 is connected in signal communication with a first input of a prediction module 206. An output of the prediction module 206 is connected in signal communication with a first input of a combiner 230.
A second output of the spatial inverse transform SNR scalable entropy decoder 204 is connected in signal communication with a first input of a motion vector (MV) decoder 210. An output of the MV decoder 210 is connected in signal communication with an input of a motion compensator 232. An output of the motion compensator 232 is connected in signal communication with a second input of the combiner 230.
A second output of the demultiplexer 202 is connected in signal communication with an input of a spatial inverse transform SNR scalable entropy decoder 212. A first output of the spatial inverse transform SNR scalable entropy decoder 212 is connected in signal communication with a first input of a prediction module 214. A first output of the prediction module 214 is connected in signal communication with an input of an interpolation module 216. An output of the interpolation module 216 is connected in signal communication with a second input of the prediction module 206. A second output of the prediction module 214 is connected in signal communication with a first input of a combiner 240.
A second output of the spatial inverse transform SNR scalable entropy decoder 212 is connected in signal communication with a first input of an MV decoder 220. A first output of the MV decoder 220 is connected in signal communication with a second input of the MV decoder 210. A second output of the MV decoder 220 is connected in signal communication with an input of a motion compensator 242. An output of the motion compensator 242 is connected in signal communication with a second input of the combiner 240.
A third output of the demultiplexer 202 is connected in signal communication with an input of a spatial inverse transform SNR scalable entropy decoder 222. A first output of the spatial inverse transform SNR scalable entropy decoder 222 is connected in signal communication with an input of a prediction module 224. A first output of the prediction module 224 is connected in signal communication with an input of an interpolation module 226. An output of the interpolation module 226 is connected in signal communication with a second input of the prediction module 214.
A second output of the prediction module 224 is connected in signal communication with a first input of a combiner 250. A second output of the spatial inverse transform SNR scalable entropy decoder 222 is connected in signal communication with an input of an MV decoder 230. A first output of the MV decoder 230 is connected in signal communication with a second input of the MV decoder 220. A second output of the MV decoder 230 is connected in signal communication with an input of a motion compensator 252. An output of the motion compensator 252 is connected in signal communication with a second input of the combiner 250.
An output of the combiner 250 is available as an output of the decoder 200, for outputting a layer 0 signal. An output of the combiner 240 is available as an output of the decoder 200, for outputting a layer 1 signal. An output of the combiner 230 is available as an output of the decoder 200, for outputting a layer 2 signal.
Referring to FIG. 1a, an exemplary AVC encoder is indicated generally by the reference numeral 2100. The AVC encoder 2100 may be used, for example, for encoding a single layer (for example, base layer).
The video encoder 2100 includes a frame ordering buffer 2110 having an output in signal communication with a non-inverting input of a combiner 2185. An output of the combiner 2185 is connected in signal communication with a first input of a transformer and quantizer 2125. An output of the transformer and quantizer 2125 is connected in signal communication with a first input of an entropy coder 2145 and a first input of an inverse transformer and inverse quantizer 2150. An output of the entropy coder 2145 is connected in signal communication with a first non-inverting input of a combiner 2190. An output of the combiner 2190 is connected in signal communication with a first input of an output buffer 2135.
A first output of an encoder controller 2105 is connected in signal communication with a second input of the frame ordering buffer 2110, a second input of the inverse transformer and inverse quantizer 2150, an input of a picture-type decision module 2115, an input of a macroblock-type (MB-type) decision module 2120, a second input of an intra prediction module 2160, a second input of a deblocking filter 2165, a first input of a motion compensator 2170, a first input of a motion estimator 2175, and a second input of a reference picture buffer 2180.
A second output of the encoder controller 2105 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 2130, a second input of the transformer and quantizer 2125, a second input of the entropy coder 2145, a second input of the output buffer 2135, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 2140. A first output of the picture-type decision module 2115 is connected in signal communication with a third input of the frame ordering buffer 2110. A second output of the picture-type decision module 2115 is connected in signal communication with a second input of the macroblock-type decision module 2120.
An output of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 2140 is connected in signal communication with a third non-inverting input of the combiner 2190. An output of the SEI inserter 2130 is connected in signal communication with a second non-inverting input of the combiner 2190.
An output of the inverse quantizer and inverse transformer 2150 is connected in signal communication with a first non-inverting input of a combiner 2127. An output of the combiner 2127 is connected in signal communication with a first input of the intra prediction module 2160 and a first input of the deblocking filter 2165. An output of the deblocking filter 2165 is connected in signal communication with a first input of a reference picture buffer 2180. An output of the reference picture buffer 2180 is connected in signal communication with a second input of the motion estimator 2175 and with a first input of a motion compensator 2170. A first output of the motion estimator 2175 is connected in signal communication with a second input of the motion compensator 2170. A second output of the motion estimator 2175 is connected in signal communication with a third input of the entropy coder 2145.
An output of the motion compensator 2170 is connected in signal communication with a first input of a switch 2197. An output of the intra prediction module 2160 is connected in signal communication with a second input of the switch 2197. An output of the macroblock-type decision module 2120 is connected in signal communication with a third input of the switch 2197 in order to provide a control input to the switch 2197. An output of the switch 2197 is connected in signal communication with a second non-inverting input of the combiner 2127 and with an inverting input of the combiner 2185.
Inputs of the frame ordering buffer 2110 and the encoder controller 2105 are available as inputs of the encoder 2100, for receiving an input picture 2101. Moreover, an input of the SEI inserter 2130 is available as an input of the encoder 2100, for receiving metadata. An output of the output buffer 2135 is available as an output of the encoder 2100, for outputting a bitstream.
Referring to FIG. 2a, a video decoder capable of performing video decoding in accordance with the MPEG-4 AVC standard is indicated generally by the reference numeral 2200.
The video decoder 2200 includes an input buffer 2210 having an output connected in signal communication with a first input of an entropy decoder 2245. A first output of the entropy decoder 2245 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 2250. An output of the inverse transformer and inverse quantizer 2250 is connected in signal communication with a second non-inverting input of a combiner 2225. An output of the combiner 2225 is connected in signal communication with a second input of a deblocking filter 2265 and a first input of an intra prediction module 2260. A second output of the deblocking filter 2265 is connected in signal communication with a first input of a reference picture buffer 2280. An output of the reference picture buffer 2280 is connected in signal communication with a second input of a motion compensator 2270.
A second output of the entropy decoder 2245 is connected in signal communication with a third input of the motion compensator 2270 and a first input of the deblocking filter 2265. A third output of the entropy decoder 2245 is connected in signal communication with an input of a decoder controller 2205. A first output of the decoder controller 2205 is connected in signal communication with a second input of the entropy decoder 2245. A second output of the decoder controller 2205 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 2250. A third output of the decoder controller 2205 is connected in signal communication with a third input of the deblocking filter 2265. A fourth output of the decoder controller 2205 is connected in signal communication with a second input of the intra prediction module 2260, with a first input of the motion compensator 2270, and with a second input of the reference picture buffer 2280.
An output of the motion compensator 2270 is connected in signal communication with a first input of a switch 2297. An output of the intra prediction module 2260 is connected in signal communication with a second input of the switch 2297. An output of the switch 2297 is connected in signal communication with a first non-inverting input of the combiner 2225.
An input of the input buffer 2210 is available as an input of the decoder 2200, for receiving an input bitstream. A first output of the deblocking filter 2265 is available as an output of the decoder 2200, for outputting an output picture.
Referring to FIG. 3, a structure for a single-layer SPS 300 is shown.
An SPS is a syntax structure that generally contains syntax elements that apply to zero or more entire coded video sequences. In the SVC extension, the values of some syntax elements conveyed in the SPS are layer dependent. These layer-dependent syntax elements include, but are not limited to, the timing information, HRD (standing for "Hypothetical Reference Decoder") parameters, and bitstream restriction information. HRD parameters may include, for example, indicators of buffer size, maximum bit rate, and initial delay. HRD parameters may allow a receiving system, for example, to verify the integrity of a received bit stream and/or to determine if the receiving system (for example, a decoder) can decode the bit stream. Therefore, a system may provide for the transmission of the aforementioned syntax elements for each layer.
The single-layer SPS 300 includes an SPS-ID 310 that provides an identifier for the SPS. The single-layer SPS 300 also includes the VUI (standing for Video Usability Information) parameters 320 for a single layer. The VUI parameters include the HRD parameters 330 for a single layer, such as, for example, the base layer. The single-layer SPS 300 also may include additional parameters 340, although implementations need not include any additional parameters 340.
Referring to FIG. 4, a block view of a data stream 400 shows a typical use of the single-layer SPS 300. In the AVC standard, for example, a typical data stream may include, among other components, an SPS unit, multiple PPS (picture parameter set) units providing parameters for a particular picture, and multiple units for encoded picture data. Such a general framework is followed in FIG. 4, which includes the SPS 300, a PPS-1 410, one or more units 420 including encoded picture-1 data, a PPS-2 430, and one or more units 440 including encoded picture-2 data. The PPS-1 410 includes parameters for the encoded picture-1 data 420, and the PPS-2 430 includes parameters for the encoded picture-2 data 440.
The encoded picture-1 data 420 and the encoded picture-2 data 440 are each associated with a particular SPS (the SPS 300 in the implementation of FIG. 4). This is achieved through the use of pointers, as now explained. The encoded picture-1 data 420 includes a PPS-ID (not shown) identifying the PPS-1 410, as shown by an arrow 450. The PPS-ID may be stored in, for example, a slice header. The encoded picture-2 data 440 includes a PPS-ID (not shown) identifying the PPS-2 430, as shown by an arrow 460. The PPS-1 410 and the PPS-2 430 each include an SPS-ID (not shown) identifying the SPS 300, as shown by arrows 470 and 480 respectively.
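The ID-based indirection just described can be sketched in Python. The dictionary layouts and function name below are illustrative assumptions, not structures defined by the AVC standard; the point is only the pointer chain from a slice header's PPS-ID to a PPS, and from the PPS's SPS-ID to the SPS.

```python
# Sketch of the FIG. 4 indirection (illustrative structures): a slice header
# carries a PPS-ID, and the selected PPS carries an SPS-ID.

sps_table = {0: {"sps_id": 0, "vui": "single-layer VUI/HRD parameters"}}
pps_table = {
    1: {"pps_id": 1, "sps_id": 0},  # PPS-1, pointing at the SPS (arrow 470)
    2: {"pps_id": 2, "sps_id": 0},  # PPS-2, pointing at the same SPS (arrow 480)
}

def resolve_sps_for_slice(slice_header):
    # Follow slice -> PPS (arrows 450/460) -> SPS (arrows 470/480).
    pps = pps_table[slice_header["pps_id"]]
    return sps_table[pps["sps_id"]]

# Both coded pictures resolve to the same SPS:
assert resolve_sps_for_slice({"pps_id": 1}) is resolve_sps_for_slice({"pps_id": 2})
```

Note how two different pictures, via two different PPS units, can share one SPS, which is the sharing the following figures build on.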
Referring to FIG. 5, a structure for a SUP SPS 500 is shown. The SUP SPS 500 includes an SPS ID 510, a VUI 520 that includes HRD parameters 530 for a single additional layer referred to by (D2, T2, Q2), and optional additional parameters 540. D2, T2, Q2 refers to a second layer having spatial (D) level 2, temporal (T) level 2, and quality (Q) level 2.
Note that various numbering schemes may be used to refer to layers. In one numbering scheme, base layers have a D, T, Q of 0, x, 0, meaning a spatial level of zero, any temporal level, and a quality level of zero. In that numbering scheme, enhancement layers have a D, T, Q in which D or Q are greater than zero.
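A minimal sketch of this numbering scheme (the helper name is hypothetical; the D, T, Q convention is as described above):

```python
# In the numbering scheme described above, a base layer has spatial level
# D == 0 and quality level Q == 0, with any temporal level T.

def is_base_layer(d, t, q):
    return d == 0 and q == 0

assert is_base_layer(0, 3, 0)      # any temporal level still qualifies
assert not is_base_layer(2, 0, 0)  # D > 0 -> enhancement layer
assert not is_base_layer(0, 0, 1)  # Q > 0 -> enhancement layer
```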
The use of SUP SPS 500 allows, for example, a system to use an SPS structure that only includes parameters for a single layer, or that does not include any layer-dependent information. Such a system may create a separate SUP SPS for each additional layer beyond the base layer. The additional layers can identify the SPS with which they are associated through the use of the SPS ID 510. Clearly, several layers can share a single SPS by using a common SPS ID in their respective SUP SPS units.
Referring to FIG. 6, an organizational hierarchy 600 is shown among an SPS unit 605 and multiple SUP SPS units 610 and 620. The SUP SPS units 610 and 620 are shown as being single-layer SUP SPS units, but other implementations may use one or more multiple-layer SUP SPS units in addition to, or in lieu of, single-layer SUP SPS units. The hierarchy 600 illustrates that, in a typical scenario, multiple SUP SPS units may be associated with a single SPS unit. Implementations may, of course, include multiple SPS units, and each of the SPS units may have associated SUP SPS units.
Referring to FIG. 7, a structure for another SUP SPS 700 is shown.
The SUP SPS 700 includes parameters for multiple layers, whereas the SUP SPS 500 includes parameters for a single layer. The SUP SPS 700 includes an SPS ID 710, a VUI 720, and optional additional parameters 740. The VUI 720 includes HRD parameters 730 for a first additional layer (D2, T2, Q2), and for other additional layers up to layer (Dn, Tn, Qn).
Referring again to FIG. 6, the hierarchy 600 may be modified to use a multiple-layer SUP SPS. For example, the combination of the SUP SPS 610 and 620 may be replaced with the SUP SPS 700 if both the SUP SPS 610 and 620 include the same SPS ID.
Additionally, the SUP SPS 700 may be used, for example, with an SPS that includes parameters for a single layer, or that includes parameters for multiple layers, or that does not include layer-dependent parameters for any layers. The SUP SPS 700 allows a system to provide parameters for multiple layers with little overhead.
Other implementations may be based, for example, on an SPS that includes all the needed parameters for all possible layers. That is, the SPS of such an implementation includes all the corresponding spatial (Di), temporal (Ti), and quality (Qi) levels that are available to be transmitted, whether all the layers are transmitted or not. Even with such a system, however, a SUP SPS may be used to provide an ability to change the parameters for one or more layers without transmitting the entire SPS again.
Referring to Table 1, syntax is provided for a specific implementation of a single-layer SUP SPS. The syntax includes sequence_parameter_set_id to identify the associated SPS, and the identifiers temporal_level, dependency_id, and quality_level to identify a scalable layer. The VUI parameters are included through the use of svc_vui_parameters() (see Table 2), which includes HRD parameters through the use of hrd_parameters(). The syntax below allows each layer to specify its own layer-dependent parameters, such as, for example, HRD parameters.
sup_seq_parameter_set_svc( ) {                        C    Descriptor
    sequence_parameter_set_id                         0    ue(v)
    temporal_level                                    0    u(3)
    dependency_id                                     0    u(3)
    quality_level                                     0    u(2)
    vui_parameters_present_svc_flag                   0    u(1)
    if( vui_parameters_present_svc_flag )
        svc_vui_parameters( )
}
Table 1
The semantics for the syntax of sup_seq_parameter_set_svc() is as follows.
- sequence_parameter_set_id identifies the sequence parameter set which the current SUP SPS maps to for the current layer;
- temporal_level, dependency_id, and quality_level specify the temporal level, dependency identifier, and quality level for the current layer. dependency_id generally indicates the spatial level. However, dependency_id also is used to indicate the Coarse Grain Scalability ("CGS") hierarchy, which includes both spatial and SNR scalability, with SNR scalability being a traditional quality scalability. Accordingly, quality_level and dependency_id may both be used to distinguish quality levels.
- vui_parameters_present_svc_flag equal to 1 specifies that the svc_vui_parameters() syntax structure as defined below is present. vui_parameters_present_svc_flag equal to 0 specifies that the svc_vui_parameters() syntax structure is not present.
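The header fields of Table 1 can be parsed with a minimal bit reader, sketched below. The class and function names are hypothetical; the descriptors (ue(v), u(3), u(3), u(2), u(1)) follow Table 1, and NAL emulation-prevention bytes are ignored for simplicity.

```python
# Sketch of parsing the SUP SPS fields of Table 1 (illustrative names).

class BitReader:
    def __init__(self, data):
        self.bits = "".join(f"{b:08b}" for b in data)
        self.pos = 0

    def u(self, n):
        # Fixed-length unsigned read, as in the u(n) descriptors.
        val = int(self.bits[self.pos:self.pos + n], 2)
        self.pos += n
        return val

    def ue(self):
        # Exp-Golomb ue(v), used here for sequence_parameter_set_id.
        zeros = 0
        while self.bits[self.pos] == "0":
            zeros += 1
            self.pos += 1
        self.pos += 1  # consume the terminating '1'
        return (1 << zeros) - 1 + (self.u(zeros) if zeros else 0)

def parse_sup_sps_header(payload):
    r = BitReader(payload)
    return {
        "sequence_parameter_set_id": r.ue(),
        "temporal_level": r.u(3),
        "dependency_id": r.u(3),
        "quality_level": r.u(2),
        "vui_parameters_present_svc_flag": r.u(1),
    }

# Bits 1 001 010 01 0 (sps_id=0, T=1, D=2, Q=1, flag=0) pack to 0x94 0x80.
fields = parse_sup_sps_header(bytes([0x94, 0x80]))
assert fields["dependency_id"] == 2
```

The ue(v) decoder above follows the standard Exp-Golomb rule: count leading zeros, skip the '1', then read that many suffix bits.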
Table 2 gives the syntax for svc_vui_parameters(). The VUI parameters are therefore separated for each layer and put into individual SUP SPS units. Other implementations, however, group the VUI parameters for multiple layers into a single SUP SPS.
svc_vui_parameters( ) {                               C    Descriptor
    timing_info_present_flag                          0    u(1)
    if( timing_info_present_flag ) {
        num_units_in_tick                             0    u(32)
        time_scale                                    0    u(32)
        fixed_frame_rate_flag                         0    u(1)
    }
    nal_hrd_parameters_present_flag                   0    u(1)
    if( nal_hrd_parameters_present_flag )
        hrd_parameters( )
    vcl_hrd_parameters_present_flag                   0    u(1)
    if( vcl_hrd_parameters_present_flag )
        hrd_parameters( )
    if( nal_hrd_parameters_present_flag || vcl_hrd_parameters_present_flag )
        low_delay_hrd_flag                            0    u(1)
    pic_struct_present_flag                           0    u(1)
    bitstream_restriction_flag                        0    u(1)
    if( bitstream_restriction_flag ) {
        motion_vectors_over_pic_boundaries_flag       0    u(1)
        max_bytes_per_pic_denom                       0    ue(v)
        max_bits_per_mb_denom                         0    ue(v)
        log2_max_mv_length_horizontal                 0    ue(v)
        log2_max_mv_length_vertical                   0    ue(v)
        num_reorder_frames                            0    ue(v)
        max_dec_frame_buffering                       0    ue(v)
    }
}

Table 2
The fields of the svc_vui_parameters() syntax of Table 2 are defined in the version of the SVC extension that existed in April 2007 under JVT-U201, Annex E, Section E.1. In particular, hrd_parameters() is as defined for the AVC standard. Note also that svc_vui_parameters() includes various layer-dependent information, including HRD-related parameters. The HRD-related parameters include num_units_in_tick, time_scale, fixed_frame_rate_flag, nal_hrd_parameters_present_flag, vcl_hrd_parameters_present_flag, hrd_parameters(), low_delay_hrd_flag, and pic_struct_present_flag. Further, the syntax elements in the bitstream_restriction_flag if-loop are layer dependent even though not HRD-related.
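As an illustration of the timing-related fields just listed, AVC VUI timing information defines a clock tick lasting num_units_in_tick / time_scale seconds, with a frame conventionally spanning two ticks. A small sketch (the function name is hypothetical):

```python
# Frame rate from AVC VUI timing info: one tick = num_units_in_tick /
# time_scale seconds, and a frame conventionally spans two ticks.

def frame_rate_from_vui(time_scale, num_units_in_tick):
    return time_scale / (2 * num_units_in_tick)

assert frame_rate_from_vui(50, 1) == 25.0                   # 25 fps signalling
assert round(frame_rate_from_vui(60000, 1001), 2) == 29.97  # NTSC-style rate
```

Because these values may differ per layer (for example, a lower temporal level has a lower frame rate), carrying them per layer in SUP SPS units is what the structure above enables.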
As mentioned above, the SUP SPS is defined as a new type of NAL unit. Table 3 lists some of the NAL unit codes as defined by the standard JVT-U201, but modified to assign type 24 to the SUP SPS. The ellipsis between NAL unit types 1 and 16, and between 18 and 24, indicates that those types are unchanged. The ellipsis between NAL unit types 25 and 31 means that those types are all unspecified. The implementation of Table 3 below changes type 24 of the standard from "unspecified" to "sup_seq_parameter_set_svc()". "Unspecified" is generally reserved for user applications. "Reserved", on the other hand, is generally reserved for future standard modifications. Accordingly, another implementation changes one of the "reserved" types (for example, type 16, 17, or 18) to "sup_seq_parameter_set_svc()". Changing an unspecified type results in an implementation for a given user, whereas changing a reserved type results in an implementation that changes the standard for all users.
nal_unit_type    Content of NAL unit and RBSP syntax structure    C

0                Unspecified
1                Coded slice of a non-IDR picture                 2, 3, 4
...
16..18           Reserved
...
24               sup_seq_parameter_set_svc( )
25..31           Unspecified

Table 3
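The Table 3 assignment might be exercised as follows. The helper names are hypothetical, but the 5-bit nal_unit_type field in the low bits of the one-byte AVC NAL header is standard:

```python
# Sketch of NAL-unit dispatch per Table 3. The low 5 bits of the one-byte
# AVC NAL header carry nal_unit_type; helper names are illustrative.

SUP_SPS_NAL_TYPE = 24  # assigned by the implementation of Table 3

def nal_unit_type(header_byte):
    return header_byte & 0x1F

def classify(header_byte):
    t = nal_unit_type(header_byte)
    if t == SUP_SPS_NAL_TYPE:
        return "sup_seq_parameter_set_svc"
    if t == 1:
        return "coded slice of a non-IDR picture"
    if 16 <= t <= 18:
        return "reserved"
    return "unspecified/other"

# Header byte 0x78 = forbidden_zero_bit 0, nal_ref_idc 3, nal_unit_type 24.
assert classify(0x78) == "sup_seq_parameter_set_svc"
```

A single-layer decoder that does not know type 24 would simply fall through and discard such units, which is what makes the assignment backward-friendly.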
FIG. 8 shows a functional view of an implementation of a scalable video coder 800 that generates SUP SPS units. A video is received at the input of the scalable video coder 1. The video is coded according to different spatial levels. Spatial levels mainly refer to different levels of resolution of the same video. For example, as the input of a scalable video coder, one can have a CIF sequence (352 by 288) or a QCIF sequence (176 by 144), each of which represents one spatial level.
Each of the spatial levels is sent to an encoder. The spatial level 1 is sent to an encoder 2, the spatial level 2 is sent to an encoder 2', and the spatial level m is sent to an encoder 2''.
The spatial levels are coded with 3 bits, using the dependency_id. Therefore, the maximum number of spatial levels in this implementation is 8.
The encoders 2, 2', and 2'' encode one or more layers having the indicated spatial level. The encoders 2, 2', and 2'' may be designed to have particular quality levels and temporal levels, or the quality levels and temporal levels may be configurable. As can be seen from FIG. 8, the encoders 2, 2', and 2'' are hierarchically arranged. That is, the encoder 2 feeds the encoder 2', which in turn feeds the encoder 2''. The hierarchical arrangement indicates the typical scenario in which higher layers use a lower layer(s) as a reference.
After the coding, the headers are prepared for each of the layers. In the implementation shown, for each spatial level, an SPS message, a PPS message, and multiple SUP_SPS messages are created. SUP_SPS messages (or units) may be created, for example, for layers corresponding to the various different quality and temporal levels.
For spatial level 1, SPS and PPS 5 are created, and a set of SUP_SPS units SUP_SPS^1_1, ..., SUP_SPS^1_(n·O) are also created.
For spatial level 2, SPS and PPS 5' are created, and a set of SUP_SPS^2_1, SUP_SPS^2_2, ..., SUP_SPS^2_(n·O) are also created.
For spatial level m, SPS and PPS 5'' are created, and a set of SUP_SPS^m_1, SUP_SPS^m_2, ..., SUP_SPS^m_(n·O) are also created.
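The header-preparation step above (one SPS/PPS pair per spatial level, plus one SUP_SPS per temporal/quality combination) can be sketched as follows, with hypothetical tuples standing in for the actual messages:

```python
# Sketch of the per-spatial-level header preparation of FIG. 8: one
# SPS/PPS pair per spatial level d, plus one SUP_SPS per (temporal t,
# quality q) combination. Structures are illustrative.

def make_headers(num_spatial, num_temporal, num_quality):
    headers = []
    for d in range(1, num_spatial + 1):
        headers.append(("SPS_PPS", d))
        for t in range(1, num_temporal + 1):
            for q in range(1, num_quality + 1):
                headers.append(("SUP_SPS", d, t, q))
    return headers

# m=2 spatial, n=3 temporal, O=2 quality levels:
h = make_headers(2, 3, 2)
assert sum(1 for x in h if x[0] == "SUP_SPS") == 2 * 3 * 2
assert sum(1 for x in h if x[0] == "SPS_PPS") == 2
```

The count m + m·n·O of header units matches the m sets of SPS/PPS 5, 5', 5'' and n·O SUP_SPS units per spatial level described above.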
The bitstreams 7, 7', and 7'' encoded by the encoders 2, 2', and 2'' typically follow the plurality of SPS, PPS, and SUP_SPS (also referred to as headers, units, or messages) in the global bitstream.
A bitstream 8 includes SPS and PPS 5, SUP_SPS^1_1, ..., SUP_SPS^1_(n·O) 6, and encoded video bitstream 7, which constitute all the encoded data associated with spatial level 1.
A bitstream 8' includes SPS and PPS 5', SUP_SPS^2_1, ..., SUP_SPS^2_(n·O) 6', and encoded video bitstream 7', which constitute all the encoded data associated with spatial level 2.
A bitstream 8'' includes SPS and PPS 5'', SUP_SPS^m_1, ..., SUP_SPS^m_(n·O) 6'', and encoded video bitstream 7'', which constitute all the encoded data associated with spatial level m.
The different SUP_SPS headers are compliant with the headers described in Tables 1-4.
The encoder 800 depicted in FIG. 8 generates one SPS for each spatial level. However, other implementations may generate multiple SPS for each spatial level or may generate an SPS that serves multiple spatial levels.
The bitstreams 8, 8‘, and S’ are combined in a multiplexer 9 which produces an SVC bitstream, as shown in FIG. 8.
Referring to FIG. 9, a hierarchical view 900 illustrates the generation of a data stream that contains SUP SPS units. The view 900 may be used to illustrate the possible bitstreams generated by the scalable video encoder 800 of FIG. 8. The view 900 provides an SVC bitstream to a transmission interface 17.
The SVC bitstream may be generated, for example, according to the implementation of FIG. 8, and comprises one SPS for each of the spatial levels. When m spatial levels are encoded, the SVC bitstream comprises SPS1, SPS2, and SPSm, represented by 10, 10', and 10'' in FIG. 9.
In the SVC bitstream, each SPS codes the general information relative to the spatial level. The SPS is followed by a header 11, 11', 11'', 13, 13', 13'', 15, 15', and 15'' of SUP_SPS type. The SUP_SPS is followed by the corresponding encoded video data 12, 12', 12'', 14, 14', 14'', 16, 16', and 16'', which each correspond to one temporal level (n) and one quality level (O).
Therefore, when one layer is not transmitted, the corresponding SUP_SPS is also not transmitted. This is because there is typically one SUP_SPS header corresponding to each layer.
Typical implementations use a numbering scheme for layers in which the base layer has a D and Q of zero. If such a numbering scheme is used for the view 900, then the view 900 does not explicitly show a base layer. That does not preclude the use of a base layer. Additionally, however, the view 900 may be augmented to explicitly show a bitstream for a base layer, as well as, for example, a separate SPS for a base layer. Further, the view 900 may use an alternate numbering scheme for base layers, in which one or more of the bitstreams (1, 1, 1) through (m, n, O) refers to a base layer.
Referring to FIG. 10, a block view is provided of a data stream 1000 generated by the implementation of FIGS. 8 and 9. FIG. 10 illustrates the transmission of the following layers:
- Layer (1, 1, 1): spatial level 1, temporal level 1, quality level 1; which includes transmission of blocks 10, 11, and 12;
- Layer (1, 2, 1): spatial level 1, temporal level 2, quality level 1; which includes the additional transmission of blocks 11' and 12';
- Layer (2, 1, 1): spatial level 2, temporal level 1, quality level 1; which includes the additional transmission of blocks 10', 13, and 14;
- Layer (3, 1, 1): spatial level 3, temporal level 1, quality level 1; which includes the additional transmission of blocks 10'', 15, and 16;
- Layer (3, 2, 1): spatial level 3, temporal level 2, quality level 1; which includes the additional transmission of blocks 15' and 16';
- Layer (3, 3, 1): spatial level 3, temporal level 3, quality level 1; which includes the additional transmission of blocks 15'' and 16''.
The block view of the data stream 1000 illustrates that SPS 10 is only sent once and is used by both Layer (1, 1, 1) and Layer (1, 2, 1), and that SPS 10'' is only sent once and is used by each of Layer (3, 1, 1), Layer (3, 2, 1), and Layer (3, 3, 1). Further, the data stream 1000 illustrates that the parameters for all of the layers are not transmitted, but rather only the parameters corresponding to the transmitted layers. For example, the parameters for layer (2, 2, 1), corresponding to its SUP_SPS, are not transmitted because that layer is not transmitted. This provides an efficiency for this implementation.
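The selective transmission just described can be sketched as a simple filter; the names and tuple layouts are illustrative assumptions:

```python
# Sketch of the selective transmission illustrated by FIG. 10: an SPS is
# sent once per transmitted spatial level, and a SUP_SPS is sent only for
# layers that are actually transmitted.

def headers_to_send(transmitted_layers):
    sps_needed = sorted({d for (d, t, q) in transmitted_layers})
    sup_sps = [("SUP_SPS", d, t, q) for (d, t, q) in transmitted_layers]
    return [("SPS", d) for d in sps_needed] + sup_sps

sent = headers_to_send([(1, 1, 1), (1, 2, 1), (2, 1, 1)])
assert ("SUP_SPS", 2, 2, 1) not in sent  # untransmitted layer costs nothing
assert sent.count(("SPS", 1)) == 1       # SPS for spatial level 1 sent once
```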
Referring to FIG. 11, an encoder 1100 includes an SPS generation unit 1110, a video encoder 1120, and a formatter 1130. The video encoder 1120 receives input video, encodes the input video, and provides the encoded input video to the formatter 1130. The encoded input video may include, for example, multiple layers, such as, for example, an encoded base layer and an encoded enhancement layer. The SPS generation unit 1110 generates header information, such as, for example, SPS units and SUP SPS units, and provides the header information to the formatter 1130. The SPS generation unit 1110 also communicates with the video encoder 1120 to provide parameters used by the video encoder 1120 in encoding the input video.
The SPS generation unit 1110 may be configured, for example, to generate an SPS NAL unit. The SPS NAL unit may include information that describes a parameter for use in decoding a first-layer encoding of a sequence of images. The SPS generation unit 1110 also may be configured, for example, to generate a SUP SPS NAL unit having a different structure than the SPS NAL unit. The SUP SPS NAL unit may include information that describes a parameter for use in decoding a second-layer encoding of the sequence of images. The first-layer encoding and the second-layer encoding may be produced by the video encoder 1120.
The formatter 1130 multiplexes the encoded video from the video encoder 1120, and the header information from the SPS generation unit 1110, to produce an output encoded bitstream. The encoded bitstream may be a set of data that includes the first-layer encoding of the sequence of images, the second-layer encoding of the sequence of images, the SPS NAL unit, and the SUP SPS NAL unit.
The components 1110, 1120, and 1130 of the encoder 1100 may take many forms. One or more of the components 1110, 1120, and 1130 may include hardware, software, firmware, or a combination, and may be operated from a variety of platforms, such as, for example, a dedicated encoder or a general processor configured through software to function as an encoder.
FIGS. 8 and 11 may be compared. The SPS generation unit 1110 may generate the SPS and the various SUP_SPS shown in FIG. 8. The video encoder 1120 may generate the bitstreams 7, 7', and 7'' (which are the encodings of the input video) shown in FIG. 8. The video encoder 1120 may correspond, for example, to one or more of the encoders 2, 2', or 2''. The formatter 1130 may generate the hierarchically arranged data shown by reference numerals 8, 8', and 8'', as well as perform the operation of the multiplexer 9 to generate the SVC bitstream of FIG. 8.
FIGS. 1 and 11 also may be compared. The video encoder 1120 may correspond, for example, to blocks 104 and 187 of FIG. 1. The formatter 1130 may correspond, for example, to the multiplexer 170. The SPS generation unit 1110 is not explicitly shown in FIG. 1, although the functionality of the SPS generation unit 1110 may be performed, for example, by the multiplexer 170.
Other implementations of the encoder 1100 do not include the video encoder 1120 because, for example, the data is pre-encoded. The encoder 1100 also may provide additional outputs and provide additional communication between the components. The encoder 1100 also may be modified to provide additional components which may, for example, be located between existing components.
Referring to FIG. 12, an encoder 1200 is shown that operates in the same manner as the encoder 1100. The encoder 1200 includes a memory 1210 in communication with a processor 1220. The memory 1210 may be used, for example, to store the input video, to store encoding or decoding parameters, to store intermediate or final results during the encoding process, or to store instructions for performing an encoding method. Such storage may be temporary or permanent.
The processor 1220 receives input video and encodes the input video.
The processor 1220 also generates header information, and formats an encoded bitstream that includes header information and encoded input video. As in the encoder 1100, the header information provided by the processor 1220 may include separate structures for conveying header information for multiple layers. The processor 1220 may operate according to instructions stored on, or otherwise resident on or part of, for example, the processor 1220 or the memory 1210.
Referring to FIG. 13, a process 1300 is shown for encoding input video. The process 1300 may be performed by, for example, either of the encoders 1100 or 1200.
The process 1300 includes generating an SPS NAL unit (1310). The SPS NAL unit includes information that describes a parameter for use in decoding the first-layer encoding of the sequence of images. The SPS NAL unit may be defined by a coding standard or not. If the SPS NAL unit is defined by a coding standard, then the coding standard may require a decoder to operate in accordance with received SPS NAL units. Such a requirement is generally referred to by stating that the SPS NAL unit is "normative". SPSs, for example, are normative in the AVC standard, whereas supplemental enhancement information (SEI) messages, for example, are not normative. Accordingly, AVC-compatible decoders may ignore received SEI messages but must operate in accordance with received SPSs.
The SPS NAL unit includes information describing one or more parameters for decoding a first layer. The parameter may be, for example, information that is layer-dependent, or is not layer-dependent. Examples of parameters that are typically layer-dependent include a VUI parameter or an HRD parameter.
Operation 1310 may be performed, for example, by the SPS generation unit 1110, the processor 1220, or the SPS and PPS inserter 2140. The operation 1310 also may correspond to the generation of SPS in any of blocks 5, 5', 5'' in FIG. 8.
Accordingly, a means for performing the operation 1310, that is, generating an SPS NAL unit, may include various components. For example, such means may include a module for generating SPS 5, 5', or 5'', an entire encoder system of FIG. 1, 8, 11, or 12, an SPS generation unit 1110, a processor 1220, or an SPS and PPS inserter 2140, or their equivalents including known and future-developed encoders.
The process 1300 includes generating a supplemental ("SUP") SPS NAL unit having a different structure than the SPS NAL unit (1320). The SUP SPS NAL unit includes information that describes a parameter for use in decoding the second-layer encoding of the sequence of images. The SUP SPS NAL unit may be defined by a coding standard, or not. If the SUP SPS NAL unit is defined by a coding standard, then the coding standard may require a decoder to operate in accordance with received SUP SPS NAL units. As discussed above with respect to operation 1310, such a requirement is generally referred to by stating that the SUP SPS NAL unit is normative.
Various implementations include normative SUP SPS messages. For example, SUP SPS messages may be normative for decoders that decode more than one layer (for example, SVC-compatible decoders). Such multi-layer decoders (for example, SVC-compatible decoders) would be required to operate in accordance with the information conveyed in SUP SPS messages.
However, single-layer decoders (for example, AVC-compatible decoders) could ignore SUP SPS messages. As another example, SUP SPS messages may be normative for all decoders, including single-layer and multi-layer decoders. It is not surprising that many implementations include normative SUP SPS messages, given that SUP SPS messages are based in large part on SPS messages, and that SPS messages are normative in the AVC standard and the SVC and MVC extensions. That is, SUP SPS messages carry similar data as SPS messages, serve a similar purpose as SPS messages, and may be considered to be a type of SPS message. It should be clear that implementations having normative SUP SPS messages may provide compatibility advantages, for example, allowing AVC and SVC decoders to receive a common data stream.
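The normative-for-multi-layer, ignorable-for-single-layer behavior described above can be sketched as follows (hypothetical helper; type 24 per Table 3):

```python
# SUP SPS units (type 24 per Table 3) are normative for multi-layer
# decoders but may be ignored by single-layer decoders in this sketch.

def process_nal(nal_type, decoder_is_multilayer):
    if nal_type == 24:  # SUP SPS
        return "apply" if decoder_is_multilayer else "ignore"
    return "apply"

assert process_nal(24, decoder_is_multilayer=False) == "ignore"
assert process_nal(24, decoder_is_multilayer=True) == "apply"
```

This is the compatibility property mentioned above: both decoder kinds can consume the same data stream, differing only in whether they act on type-24 units.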
The SUP SPS NAL unit (also referred to as the SUP SPS message) includes one or more parameters for decoding a second layer. The parameter may be, for example, information that is layer-dependent, or is not layer-dependent. Specific examples include a VUI parameter or an HRD parameter. The SUP SPS may also be used for decoding the first layer, in addition to being used for decoding the second layer.
Operation 1320 may be performed, for example, by the SPS generation unit 1110, the processor 1220, or a module analogous to the SPS and PPS inserter 2140. The operation 1320 also may correspond to the generation of SUP_SPS in any of blocks 6, 6', 6'' in FIG. 8.
Accordingly, a means for performing the operation 1320, that is, generating a SUP SPS NAL unit, may include various components. For example, such means may include a module for generating SUP_SPS 6, 6', or 6'', an entire encoder system of FIG. 1, 8, 11, or 12, an SPS generation unit 1110, a processor 1220, or a module analogous to the SPS and PPS Inserter 2140, or their equivalents including known and future-developed encoders.
The process 1300 includes encoding a first-layer encoding, such as, for example, the base layer, for a sequence of images, and encoding a second-layer encoding for the sequence of images (1330). These encodings of the sequence of images produce the first-layer encoding and the second-layer encoding. The first-layer encoding may be formatted into a series of units referred to as first-layer encoding units, and the second-layer encoding may be formatted into a series of units referred to as second-layer encoding units. The operation 1330 may be performed, for example, by the video encoder 1120, the processor 1220, the encoders 2, 2', or 2'' of FIG. 8, or the implementation of FIG. 1.
Accordingly, a means for performing the operation 1330 may include various components. For example, such means may include an encoder 2, 2', or 2'', an entire encoder system of FIG. 1, 8, 11, or 12, a video encoder 1120, a processor 1220, or one or more core encoders 187 (possibly including decimation module 104), or their equivalents including known and future-developed encoders.
The process 1300 includes providing a set of data (1340). The set of data includes the first-layer encoding of the sequence of images, the second-layer encoding of the sequence of images, the SPS NAL unit, and the SUP SPS NAL unit. The set of data may be, for example, a bitstream, encoded according to a known standard, to be stored in memory or transmitted to one or more decoders. Operation 1340 may be performed, for example, by the formatter 1130, the processor 1220, or the multiplexer 170 of FIG. 1. Operation 1340 may also be performed in FIG. 8 by the generation of any of the bitstreams 8, 8', and 8'', as well as the generation of the multiplexed SVC bitstream.
Accordingly, a means for performing the operation 1340, that is, providing a set of data, may include various components. For example, such means may include a module for generating the bitstream 8, 8', or 8'', a multiplexer 9, an entire encoder system of FIG. 1, 8, 11, or 12, a formatter 1130, a processor 1220, or a multiplexer 170, or their equivalents including known and future-developed encoders.
The process 1300 may be modified in various ways. For example, operation 1330 may be removed from the process 1300 in implementations in which, for example, the data is pre-encoded. Further, in addition to removing operation 1330, operation 1340 may be removed to provide a process directed toward generating description units for multiple layers.
Referring to FIG. 14, a data stream 1400 is shown that may be generated, for example, by the process 1300. The data stream 1400 includes a portion 1410 for an SPS NAL unit, a portion 1420 for a SUP SPS NAL unit, a portion 1430 for the first-layer encoded data, and a portion 1440 for the second-layer encoded data. The first-layer encoded data 1430 is the first-layer encoding, which may be formatted as first-layer encoding units. The second-layer encoded data 1440 is the second-layer encoding, which may be formatted as second-layer encoding units. The data stream 1400 may include additional portions which may be appended after the portion 1440 or interspersed between the portions 1410-1440. Additionally, other implementations may modify one or more of the portions 1410-1440.
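As a rough illustration of the layout just described, the following sketch assembles the four portions of data stream 1400 in order. The function, labels, and list representation are hypothetical assumptions for this example; the patent does not prescribe any particular byte layout.

```python
# Illustrative sketch of the ordering in data stream 1400: an SPS portion,
# a SUP SPS portion, then first-layer and second-layer encoding units.
# Labels and the list representation are assumptions for this example.
def build_data_stream(sps, sup_sps, layer1_units, layer2_units):
    stream = [("SPS", sps), ("SUP_SPS", sup_sps)]   # portions 1410 and 1420
    stream += [("L1", u) for u in layer1_units]     # portion 1430
    stream += [("L2", u) for u in layer2_units]     # portion 1440
    return stream

ds = build_data_stream("sps-data", "sup-sps-data", ["au1", "au2"], ["au3"])
print([kind for kind, _ in ds])  # ['SPS', 'SUP_SPS', 'L1', 'L1', 'L2']
```

Additional portions, as the text notes, could be appended after the last element or interspersed between any of the four groups without changing the scheme.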
The data stream 1400 may be compared to FIGS. 9 and 10. The SPS NAL unit 1410 may be, for example, any of the SPS1 10, the SPS2 10', or the SPSm 10''. The SUP SPS NAL unit 1420 may be, for example, any of the SUP_SPS headers 11, 11', 11'', 13, 13', 13'', 15, 15', or 15''. The first-layer encoded data 1430 and the second-layer encoded data 1440 may be any of the bitstreams for the individual layers shown as Bitstream of Layer (1, 1, 1) 12 through Bitstream of Layer (m, n, p) 16'', including the bitstreams 12, 12', 12'', 14, 14', 14'', 16, 16', and 16''. It is possible for the first-layer encoded data 1430 to be a bitstream with a higher set of levels than the second-layer encoded data 1440. For example, the first-layer encoded data 1430 may be the Bitstream of Layer (2, 2, 1) 14', and the second-layer encoded data 1440 may be the Bitstream of Layer (1, 1, 1) 12.
An implementation of the data stream 1400 may also correspond to the data stream 1000. The SPS NAL unit 1410 may correspond to the SPS module 10 of the data stream 1000. The SUP SPS NAL unit 1420 may correspond to the SUP_SPS module 11 of the data stream 1000. The first-layer encoded data 1430 may correspond to the Bitstream of Layer (1, 1, 1) 12 of the data stream 1000. The second-layer encoded data 1440 may correspond to the Bitstream of Layer (1, 2, 1) 12' of the data stream 1000.
The SUP_SPS module 11' of the data stream 1000 may be interspersed between the first-layer encoded data 1430 and the second-layer encoded data 1440. The remaining blocks (10'-16) shown in the data stream 1000 may be appended to the data stream 1400 in the same order shown in the data stream 1000.
FIGS. 9 and 10 may suggest that the SPS modules do not include any layer-specific parameters. Various implementations do operate in this manner, and typically require a SUP_SPS for each layer. However, other implementations allow the SPS to include layer-specific parameters for one or more layers, thus allowing one or more layers to be transmitted without requiring a SUP_SPS.
FIGS. 9 and 10 suggest that each spatial level has its own SPS. Other implementations vary this feature. For example, other implementations provide a separate SPS for each temporal level, or for each quality level. Still other implementations provide a separate SPS for each layer, and other implementations provide a single SPS that serves all layers.
Referring to FIG. 15, a decoder 1500 includes a parsing unit 1510 that receives an encoded bitstream, such as, for example, the encoded bitstream provided by the encoder 1100, the encoder 1200, the process 1300, or the data stream 1400. The parsing unit 1510 is coupled to a decoder 1520.
The parsing unit 1510 is configured to access information from an SPS NAL unit. The information from the SPS NAL unit describes a parameter for use in decoding a first-layer encoding of a sequence of images. The parsing unit 1510 is further configured to access information from a SUP SPS NAL unit having a different structure than the SPS NAL unit. The information from the SUP SPS NAL unit describes a parameter for use in decoding a second-layer encoding of the sequence of images. As described above in conjunction with FIG. 13, the parameters may be layer-dependent or non-layer-dependent.
The parsing unit 1510 provides parsed header data as an output. The header data includes the information accessed from the SPS NAL unit and also includes the information accessed from the SUP SPS NAL unit. The parsing unit 1510 also provides parsed encoded video data as an output. The encoded video data includes the first-layer encoding and the second-layer encoding. Both the header data and the encoded video data are provided to the decoder 1520.
The decoder 1520 decodes the first-layer encoding using the information accessed from the SPS NAL unit. The decoder 1520 also decodes the second-layer encoding using the information accessed from the SUP SPS NAL unit. The decoder 1520 further generates a reconstruction of the sequence of images based on the decoded first layer and/or the decoded second layer. The decoder 1520 provides a reconstructed video as an output. The reconstructed video may be, for example, a reconstruction of the first-layer encoding or a reconstruction of the second-layer encoding.
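The parse/decode split of FIG. 15 can be summarized with a small sketch. This is a simplified assumption about the data flow, not the patent's actual interfaces: unit kinds, layer keys, and payloads are invented for illustration.

```python
# Hedged sketch of FIG. 15's data flow: a parsing step separates header data
# (SPS / SUP SPS information, keyed here by layer) from encoded video data,
# and a decode step applies each layer's parameters. All names are illustrative.
def parse(units):
    header, video = {}, []
    for kind, layer, payload in units:
        if kind in ("SPS", "SUP_SPS"):
            header[layer] = payload          # decoding parameters for that layer
        else:
            video.append((layer, payload))   # first- or second-layer encoding
    return header, video

def decode(header, video):
    # "Decode" each unit with the parameters of its own layer.
    return [f"layer{layer}:{payload}+{header[layer]}" for layer, payload in video]

units = [("SPS", 1, "p1"), ("SUP_SPS", 2, "p2"),
         ("VIDEO", 1, "frame-a"), ("VIDEO", 2, "frame-b")]
header, video = parse(units)
print(decode(header, video))  # ['layer1:frame-a+p1', 'layer2:frame-b+p2']
```

The point of the split is that both outputs of the parsing step — header data and encoded video data — flow to the same decoding step, which picks the parameters appropriate to each layer.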
Comparing FIGS. 15, 2, and 2a, the parsing unit 1510 may correspond, for example, to the demultiplexer 202, and/or one or more of the entropy decoders 204, 212, 222, or 2245, in some implementations. The decoder 1520 may correspond, for example, to the remaining blocks in FIG. 2.
The decoder 1500 also may provide additional outputs and provide additional communication between the components. The decoder 1500 also may be modified to provide additional components which may, for example, be located between existing components.
The components 1510 and 1520 of the decoder 1500 may take many forms. One or more of the components 1510 and 1520 may include hardware, software, firmware, or a combination, and may be operated from a variety of platforms, such as, for example, a dedicated decoder or a general processor configured through software to function as a decoder.
Referring to FIG. 16, a decoder 1600 is shown that operates in the same manner as the decoder 1500. The decoder 1600 includes a memory 1610 in communication with a processor 1620. The memory 1610 may be used, for example, to store the input encoded bitstream, to store decoding or encoding parameters, to store intermediate or final results during the decoding process, or to store instructions for performing a decoding method. Such storage may be temporary or permanent.
The processor 1620 receives an encoded bitstream and decodes the encoded bitstream into a reconstructed video. The encoded bitstream includes, for example, (1) a first-layer encoding of a sequence of images, (2) a second-layer encoding of the sequence of images, (3) an SPS NAL unit having information that describes a parameter for use in decoding the first-layer encoding, and (4) a SUP SPS NAL unit having a different structure than the SPS NAL unit, and having information that describes a parameter for use in decoding the second-layer encoding.
The processor 1620 produces the reconstructed video based on at least the first-layer encoding, the second-layer encoding, the information from the SPS NAL unit, and the information from the SUP SPS NAL unit. The reconstructed video may be, for example, a reconstruction of the first-layer encoding or a reconstruction of the second-layer encoding. The processor 1620 may operate according to instructions stored on, or otherwise resident on or part of, for example, the processor 1620 or the memory 1610.
Referring to FIG. 17, a process 1700 is shown for decoding an encoded bitstream. The process 1700 may be performed by, for example, either of the decoders 1500 or 1600.
The process 1700 includes accessing information from an SPS NAL unit (1710). The accessed information describes a parameter for use in decoding a first-layer encoding of a sequence of images.
The SPS NAL unit may be as described earlier with respect to FIG. 13.
Further, the accessed information may be, for example, an HRD parameter. Operation 1710 may be performed, for example, by the parsing unit 1510, the processor 1620, an entropy decoder 204, 212, 222, or 2245, or decoder control 2205. Operation 1710 also may be performed in a reconstruction process at an encoder by one or more components of an encoder.
Accordingly, a means for performing the operation 1710, that is, accessing information from an SPS NAL unit, may include various components. For example, such means may include a parsing unit 1510, a processor 1620, a single-layer decoder, an entire decoder system of FIG. 2, 15, or 16, or one or more components of a decoder, or one or more components of encoders 800, 1100, or 1200, or their equivalents including known and future-developed decoders and encoders.
The process 1700 includes accessing information from a SUP SPS NAL unit having a different structure than the SPS NAL unit (1720). The information accessed from the SUP SPS NAL unit describes a parameter for use in decoding a second-layer encoding of the sequence of images.
The SUP SPS NAL unit may be as described earlier with respect to FIG. 13. Further, the accessed information may be, for example, an HRD parameter. Operation 1720 may be performed, for example, by the parsing unit 1510, the processor 1620, an entropy decoder 204, 212, 222, or 2245, or decoder control 2205. Operation 1720 also may be performed in a reconstruction process at an encoder by one or more components of an encoder.
Accordingly, a means for performing the operation 1720, that is, accessing information from a SUP SPS NAL unit, may include various components. For example, such means may include a parsing unit 1510, a processor 1620, a demultiplexer 202, an entropy decoder 204, 212, or 222, a single-layer decoder, or an entire decoder system 200, 1500, or 1600, or one or more components of a decoder, or one or more components of encoders 800, 1100, or 1200, or their equivalents including known and future-developed decoders and encoders.
The process 1700 includes accessing a first-layer encoding and a second-layer encoding for the sequence of images (1730). The first-layer encoding may have been formatted into first-layer encoding units, and the second-layer encoding may have been formatted into second-layer encoding units. Operation 1730 may be performed, for example, by the parsing unit 1510, the decoder 1520, the processor 1620, an entropy decoder 204, 212, 222, or 2245, or various other blocks downstream of the entropy decoders.
Operation 1730 also may be performed in a reconstruction process at an encoder by one or more components of an encoder.
Accordingly, a means for performing the operation 1730 may include various components. For example, such means may include a parsing unit 1510, a decoder 1520, a processor 1620, a demultiplexer 202, an entropy decoder 204, 212, or 222, a single-layer decoder, a bitstream receiver, a receiving device, or an entire decoder system 200, 1500, or 1600, or one or more components of a decoder, or one or more components of encoders 800, 1100, or 1200, or their equivalents including known and future-developed decoders and encoders.
The process 1700 includes generating a decoding of the sequence of images (1740). The decoding of the sequence of images may be based on the first-layer encoding, the second-layer encoding, the accessed information from the SPS NAL unit, and the accessed information from the SUP SPS NAL unit. Operation 1740 may be performed, for example, by the decoder 1520, the processor 1620, or various blocks downstream of demultiplexer 202 and input buffer 2210. Operation 1740 also may be performed in a reconstruction process at an encoder by one or more components of an encoder.
Accordingly, a means for performing the operation 1740 may include various components. For example, such means may include a decoder 1520, a processor 1620, a single-layer decoder, an entire decoder system 200, 1500, or 1600, or one or more components of a decoder, an encoder performing a reconstruction, or one or more components of encoders 800, 1100, or 1200, or their equivalents including known and future-developed decoders or encoders.
Another implementation performs an encoding method that includes accessing first layer-dependent information in a first normative parameter set. The accessed first layer-dependent information is for use in decoding a first-layer encoding of a sequence of images. The first normative parameter set may be, for example, an SPS that includes HRD-related parameters or other layer-dependent information. However, the first normative parameter set need not be an SPS and need not be related to an H.264 standard.
In addition to the first parameter set being normative, which requires a decoder to operate in accordance with the first parameter set if such a parameter set is received, the first parameter set may also be required to be received in an implementation. That is, an implementation may further require that the first parameter set be provided to a decoder.
The encoding method of this implementation further includes accessing second layer-dependent information in a second normative parameter set. The second normative parameter set has a different structure than the first normative parameter set. Also, the accessed second layer-dependent information is for use in decoding a second-layer encoding of the sequence of images. The second normative parameter set may be, for example, a supplemental SPS. The supplemental SPS has a structure that is different from, for example, an SPS. The supplemental SPS also includes HRD parameters or other layer-dependent information for a second layer (different from the first layer).
The encoding method of this implementation further includes decoding the sequence of images based on one or more of the accessed first layer-dependent information or the accessed second layer-dependent information. This may include, for example, decoding a base layer or an enhancement layer.
Corresponding apparatuses are also provided in other implementations, for implementing the encoding method of this implementation. Such apparatuses include, for example, programmed encoders, programmed processors, hardware implementations, or processor-readable media having instructions for performing the encoding method. The systems 1100 and 1200, for example, may implement the encoding method of this implementation.
Corresponding signals, and media storing such signals or the data of such signals, are also provided. Such signals are produced, for example, by an encoder that performs the encoding method of this implementation.
Another implementation performs a decoding method analogous to the above encoding method. The decoding method includes generating a first normative parameter set that includes first layer-dependent information. The first layer-dependent information is for use in decoding a first-layer encoding of a sequence of images. The decoding method also includes generating a second normative parameter set having a different structure than the first normative parameter set. The second normative parameter set includes second layer-dependent information for use in decoding a second-layer encoding of the sequence of images. The decoding method further includes providing a set of data including the first normative parameter set and the second normative parameter set.
Corresponding apparatuses are also provided in other implementations, for implementing the above decoding method of this implementation. Such apparatuses include, for example, programmed decoders, programmed processors, hardware implementations, or processor-readable media having instructions for performing the decoding method. The systems 1500 and 1600, for example, may implement the decoding method of this implementation.
Note that the term "supplemental" as used above, for example, in referring to "supplemental SPS", is a descriptive term. As such, "supplemental SPS" does not preclude units that do not include the term "supplemental" in the unit name. Accordingly, and by way of example, a current draft of the SVC extension defines a "subset SPS" syntax structure, and the "subset SPS" syntax structure is fully encompassed by the descriptive term "supplemental", so that the "subset SPS" of the current SVC extension is one implementation of a SUP SPS as described in this disclosure.
Implementations may use other types of messages in addition to, or as a replacement for, the SPS NAL units and/or the SUP SPS NAL units. For example, at least one implementation generates, transmits, receives, accesses, and parses other parameter sets having layer-dependent information.
Further, although SPS and supplemental SPS have been discussed largely in the context of H.264 standards, other standards also may include SPS, supplemental SPS, or variations of SPS or supplemental SPS. Accordingly, other standards (existing or future-developed) may include structures referred to as SPS or supplemental SPS, and such structures may be identical to or be variations of the SPS and supplemental SPS described herein. Such other standards may, for example, be related to current H.264 standards (for example, an amendment to an existing H.264 standard), or be completely new standards. Alternatively, other standards (existing or future-developed) may include structures that are not referred to as SPS or supplemental SPS, but such structures may be identical to, analogous to, or variations of the SPS or supplemental SPS described herein.
Note that a parameter set is a set of data including parameters. For example, an SPS, a PPS, or a supplemental SPS is a parameter set.
In various implementations, data is said to be "accessed". "Accessing" data may include, for example, receiving, storing, transmitting, or processing data.
Various implementations are provided and described. These implementations can be used to solve a variety of problems. One such problem arises when multiple interoperability points (IOPs) (also referred to as layers) need different values for parameters that are typically carried in the SPS. There is no adequate method to transmit the layer-dependent syntax elements in the SPS for different layers having the same SPS identifier. It is problematic to send separate SPS data for each such layer. For example, in many existing systems a base layer and its composite temporal layers share the same SPS identifier.
Several implementations provide a different NAL unit type for supplemental SPS data. Thus, multiple NAL units may be sent, and each NAL unit may include supplemental SPS information for a different SVC layer, but each NAL unit may be identified by the same NAL unit type. The supplemental SPS information may, in one implementation, be provided in the "subset SPS" NAL unit type of the current SVC extension.
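The lookup problem this design addresses can be sketched as a small parameter store: several layers share one SPS identifier, so layer-dependent values (for example, an HRD bit rate) are carried in supplemental entries keyed by the SPS identifier plus a layer identifier. This is an illustrative assumption about one possible decoder bookkeeping scheme, not syntax from the patent; all field and method names are invented.

```python
# Hedged sketch: layers share an SPS identifier, so layer-dependent values
# are carried in supplemental entries keyed by (sps_id, layer). The per-layer
# values override the shared SPS values. All names here are illustrative.
class ParameterStore:
    def __init__(self):
        self.sps = {}        # sps_id -> parameters shared by all layers
        self.sup_sps = {}    # (sps_id, layer) -> layer-dependent overrides

    def add_sps(self, sps_id, params):
        self.sps[sps_id] = params

    def add_sup_sps(self, sps_id, layer, params):
        self.sup_sps[(sps_id, layer)] = params

    def params_for(self, sps_id, layer):
        merged = dict(self.sps[sps_id])                       # shared values
        merged.update(self.sup_sps.get((sps_id, layer), {}))  # per-layer values win
        return merged

store = ParameterStore()
store.add_sps(0, {"profile": "scalable", "hrd_bitrate": 1_000_000})
store.add_sup_sps(0, (2, 1, 1), {"hrd_bitrate": 4_000_000})
print(store.params_for(0, (2, 1, 1))["hrd_bitrate"])  # 4000000
print(store.params_for(0, (1, 1, 1))["hrd_bitrate"])  # 1000000
```

A layer with no supplemental entry simply falls back to the shared SPS values, which is why layers without layer-specific needs can be transmitted without any SUP SPS at all.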
It should be clear that the implementations described in this disclosure are not restricted to the SVC extension or to any other standard. The concepts and features of the disclosed implementations may be used with other standards that exist now or are developed in the future, or may be used in systems that do not adhere to any standard. As one example, the concepts and features disclosed herein may be used for implementations that work in the environment of the MVC extension. For example, MVC views may need different SPS information, or SVC layers supported within the MVC extension may need different SPS information. Additionally, features and aspects of described implementations may also be adapted for yet other implementations. Accordingly, although implementations described herein may be described in the context of SPS for SVC layers, such descriptions should in no way be taken as limiting the features and concepts to such implementations or contexts.
The implementations described herein may be implemented in, for example, a method or process, an apparatus, or a software program. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed may also be implemented in other forms (for example, an apparatus or program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.
Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications, particularly, for example, equipment or applications associated with data encoding and decoding. Examples of equipment include video coders, video decoders, video codecs, web servers, set-top boxes, laptops, personal computers, cell phones, PDAs, and other communication devices. As should be clear, the equipment may be mobile and even installed in a mobile vehicle. Additionally, the methods may be implemented by instructions being performed by a processor, and such instructions may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette, a random access memory ("RAM"), or a read-only memory ("ROM"). The instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination. Instructions may be found in, for example, an operating system, a separate application, or a combination of the two. A processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a computer-readable medium having instructions for carrying out a process.
As will be evident to one of skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry as data the rules for writing or reading the syntax of a described embodiment, or to carry as data the actual syntax values written by a described embodiment. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations. Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations are contemplated by this application and are within the scope of the following claims.
Where the terms "comprise", "comprises", "comprised" or "comprising" are used in this specification (including the claims) they are to be interpreted as specifying the presence of the stated features, integers, steps or components, but not precluding the presence of one or more other features, integers, steps or components, or group thereof.

Claims (3)

1. A method comprising:
accessing information from a sequence parameter set (“SPS”) network abstraction layer (“NAL”) unit, the information describing a parameter for use in decoding a first-layer encoding of an image in a sequence of images;
accessing supplemental information from a supplemental SPS NAL unit having an available NAL unit type code that is a different NAL unit type code from that of the SPS NAL unit, and having a different syntax structure than the SPS NAL unit, and the supplemental information from the supplemental SPS NAL unit describing a parameter for use in decoding a second-layer encoding of the image in the sequence of images; and decoding the first-layer encoding, and the second-layer encoding, based on, respectively, the accessed information from the SPS NAL unit, and the accessed supplemental information from the supplemental SPS NAL unit.
2. The method of claim 1 wherein the parameter for use in decoding the second-layer encoding comprises a video usability information (“VUI”) parameter.
3. The method of claim 1 wherein the parameter for use in decoding the first-layer encoding comprises an HRD parameter.
AU2020203130A 2007-04-18 2020-05-13 Coding systems Active AU2020203130B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2020203130A AU2020203130B2 (en) 2007-04-18 2020-05-13 Coding systems
AU2021203777A AU2021203777B2 (en) 2007-04-18 2021-06-08 Coding systems

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US60/923,993 2007-04-18
US11/824,006 2007-06-28
AU2008241568A AU2008241568B2 (en) 2007-04-18 2008-04-07 Coding systems
AU2012238298A AU2012238298B2 (en) 2007-04-18 2012-10-10 Coding systems
AU2015203559A AU2015203559B2 (en) 2007-04-18 2015-06-26 Coding systems
AU2017258902A AU2017258902B2 (en) 2007-04-18 2017-11-09 Coding Systems
AU2020203130A AU2020203130B2 (en) 2007-04-18 2020-05-13 Coding systems

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU2017258902A Division AU2017258902B2 (en) 2007-04-18 2017-11-09 Coding Systems

Related Child Applications (1)

Application Number Title Priority Date Filing Date
AU2021203777A Division AU2021203777B2 (en) 2007-04-18 2021-06-08 Coding systems

Publications (2)

Publication Number Publication Date
AU2020203130A1 true AU2020203130A1 (en) 2020-06-04
AU2020203130B2 AU2020203130B2 (en) 2021-03-11

Family

ID=53674363

Family Applications (4)

Application Number Title Priority Date Filing Date
AU2015203559A Active AU2015203559B2 (en) 2007-04-18 2015-06-26 Coding systems
AU2017258902A Active AU2017258902B2 (en) 2007-04-18 2017-11-09 Coding Systems
AU2020203130A Active AU2020203130B2 (en) 2007-04-18 2020-05-13 Coding systems
AU2021203777A Active AU2021203777B2 (en) 2007-04-18 2021-06-08 Coding systems

Family Applications Before (2)

Application Number Title Priority Date Filing Date
AU2015203559A Active AU2015203559B2 (en) 2007-04-18 2015-06-26 Coding systems
AU2017258902A Active AU2017258902B2 (en) 2007-04-18 2017-11-09 Coding Systems

Family Applications After (1)

Application Number Title Priority Date Filing Date
AU2021203777A Active AU2021203777B2 (en) 2007-04-18 2021-06-08 Coding systems

Country Status (1)

Country Link
AU (4) AU2015203559B2 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR960036641A (en) * 1995-03-21 1996-10-28 김광호 High speed decoding device for decoding low speed video bit stream
US20060146734A1 (en) * 2005-01-04 2006-07-06 Nokia Corporation Method and system for low-delay video mixing
JP4903195B2 (en) * 2005-04-13 2012-03-28 ノキア コーポレイション Method, device and system for effectively encoding and decoding video data

Also Published As

Publication number Publication date
AU2017258902A1 (en) 2017-11-30
AU2021203777A1 (en) 2021-07-08
AU2017258902B2 (en) 2020-02-13
AU2021203777B2 (en) 2023-07-06
AU2020203130B2 (en) 2021-03-11
AU2015203559B2 (en) 2017-08-10
AU2015203559A1 (en) 2015-07-23

Similar Documents

Publication Publication Date Title
US8619871B2 (en) Coding systems
US11412265B2 (en) Decoding multi-layer images
AU2020203130A1 (en) Coding systems
AU2008241568B2 (en) Coding systems
AU2012238296B2 (en) Coding systems

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)