
WO2020141260A1 - An apparatus, a method and a computer program for video coding and decoding - Google Patents


Info

Publication number
WO2020141260A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture
sub
pictures
bitstream
manipulated
Prior art date
Application number
PCT/FI2019/050938
Other languages
French (fr)
Inventor
Miska Hannuksela
Alireza Aminlou
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Priority to EP19908067.2A priority Critical patent/EP3906675A4/en
Publication of WO2020141260A1 publication Critical patent/WO2020141260A1/en


Classifications

    • H04N 19/174: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • G06T 9/001: Image coding; model-based coding, e.g. wire frame
    • H04N 13/106: Stereoscopic or multi-view video systems; processing image signals
    • H04N 13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N 13/128: Adjusting depth or disparity
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/167: Position within a video image, e.g. region of interest [ROI]
    • H04N 19/17: Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N 19/597: Predictive coding specially adapted for multi-view video sequence encoding
    • H04N 19/70: Syntax aspects related to video coding, e.g. related to compression standards
    • H04N 19/85: Pre-processing or post-processing specially adapted for video compression
    • H04N 19/30: Hierarchical techniques, e.g. scalability

Definitions

  • the present invention relates to an apparatus, a method and a computer program for video coding and decoding.
  • a video coding system may comprise an encoder that transforms an input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form.
  • the encoder may discard some information in the original video sequence in order to represent the video in a more compact form, for example, to enable the storage/transmission of the video information at a lower bitrate than otherwise might be needed.
  • Video compression systems such as Advanced Video Coding standard (H.264/AVC), the Multiview Video Coding (MVC) extension of H.264/AVC or scalable extensions of HEVC (High Efficiency Video Coding) can be used.
  • Two-dimensional (2D) video codecs can be used as basis for novel usage scenarios, such as point cloud coding and 360-degree video.
  • the following challenges have been faced: it may be necessary to make a trade-off between selecting projection surfaces optimally for a single time instance and keeping projection surfaces constant for a time period in order to facilitate inter prediction. Also, motion over a projection surface boundary might not be handled optimally.
  • When projection surfaces are packed onto a 2D picture, techniques like motion-constrained tile sets have to be used to avoid unintentional prediction leaks from one surface to another.
  • In 360-degree video coding, geometry padding has been shown to improve compression but would require changes in the core (de)coding process.
  • an enhanced encoding method is introduced herein.
  • a method, apparatus and computer program product for video coding as well as decoding utilizing a reference sub-picture manipulation process are provided.
  • a reference sub-picture manipulation processing block may be considered to be outside of the core encoding/decoding process or specification.
  • the manipulated reference sub-picture is stored in the decoded picture buffer. Its marking status (e.g. marking as "used for reference" and "unused for reference") can be controlled as described in embodiments further below.
  • the reference sub-picture manipulation provides the manipulated reference sub-picture directly to the decoding process rather than to the decoded sub-picture buffering.
  • the manipulated reference sub-picture may be temporary in a sense that it may be required only for decoding one coded sub-picture after which it may be discarded.
  • An encoder may include in or along the bitstream an identification of the reference sub-picture manipulation process.
  • the encoder may also include in the bitstream information indicative of, or infer, a set of decoded sub-pictures to be manipulated, and/or a set of manipulated reference sub-pictures to be generated.
  • the encoder may generate the set of manipulated reference sub-pictures from the set of decoded sub-pictures using the identified reference sub-picture manipulation process; and include at least one of the manipulated reference sub-pictures in a reference picture list for prediction.
  • a decoder may decode from or along the bitstream an identification of the reference sub-picture manipulation process.
  • the decoder may also decode from the bitstream information indicative of, or infer: a set of decoded sub-pictures to be manipulated, and/or a set of manipulated reference sub-pictures to be generated.
  • the decoder may also generate the set of manipulated reference sub-pictures from the set of decoded sub-pictures using the identified reference sub-picture manipulation process; and include at least one of the manipulated reference sub-pictures in a reference picture list for prediction.
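  • The following is a minimal sketch of how a decoder might carry out these steps; it is not the normative process, and all names (MANIPULATION_REGISTRY, the bitstream_info keys) are hypothetical placeholders for information described above as being decoded from or along the bitstream.

```python
# Hypothetical sketch of the decoder-side steps listed above; not the normative
# process. MANIPULATION_REGISTRY and the bitstream_info keys are invented names
# standing in for information decoded from or along the bitstream.
from typing import Callable, Dict, List
import numpy as np

# Maps an identification (e.g. a URI or registered type value) to a
# manipulation function operating on a set of decoded sub-pictures.
MANIPULATION_REGISTRY: Dict[str, Callable[[List[np.ndarray]], np.ndarray]] = {
    "urn:example:ref-subpic:upsample-2x":
        lambda srcs: srcs[0].repeat(2, axis=0).repeat(2, axis=1),
}

def build_reference_list(bitstream_info: dict,
                         decoded_sub_pictures: Dict[int, np.ndarray]) -> List[np.ndarray]:
    # 1) Identification of the reference sub-picture manipulation process.
    process = MANIPULATION_REGISTRY[bitstream_info["manipulation_id"]]
    # 2) The set of decoded sub-pictures to be manipulated (indicated or inferred).
    sources = [decoded_sub_pictures[i] for i in bitstream_info["source_sub_picture_ids"]]
    # 3) Generate the manipulated reference sub-picture(s).
    manipulated = process(sources)
    # 4) Include at least one manipulated reference sub-picture in a reference picture list.
    ref_list = [decoded_sub_pictures[i] for i in bitstream_info["active_reference_ids"]]
    ref_list.append(manipulated)
    return ref_list
```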
  • the identification of the reference sub-picture manipulation process may for example be a uniform resource identifier (URI) or a registered type value.
  • One or more reference sub-pictures may be used as a source for generating a manipulated reference sub-picture.
  • the generation of the set of manipulated reference sub-pictures may comprise one or more of sub-picture packing, geometry packing, padding, reference patch reprojection, view synthesis, resampling, color gamut conversion, dynamic range conversion, color mapping conversion, bit depth conversion, chroma format conversion, projection conversion and/or frame rate conversion.
  • Sub-picture packing of one or more reference sub-pictures or regions therein may comprise but is not limited to one or more of the following (as indicated by the encoder as part of the information):
  • resampling, e.g. rescaling the width and/or height,
  • overlaying over, i.e. overwriting, or
  • blending with the samples already present within the indicated area of the manipulated reference sub-picture (e.g., occupied by sub-pictures or regions arranged previously onto the manipulated reference sub-picture).
  • The overwriting may be useful e.g. in the case that one or some of the sub-pictures are coded with higher quality. A sketch of these packing operations is given below.
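  • As a minimal illustration (not part of the specification), the following sketch packs a decoded sub-picture region onto a manipulated reference sub-picture with the resampling and overwrite/blend options listed above; the rotation and mirroring parameters are illustrative additions (rotation is also mentioned later in connection with resampling).

```python
# Illustrative packing of one decoded sub-picture region onto a manipulated
# reference sub-picture ("canvas"). Operation set and parameters are examples.
import numpy as np

def pack_region(canvas: np.ndarray, src: np.ndarray, x: int, y: int,
                scale: int = 1, rot90: int = 0, mirror: bool = False,
                blend: bool = False) -> None:
    """Place `src` onto `canvas` at (x, y) in-place."""
    region = src
    if scale > 1:                       # nearest-neighbour resampling (rescaling)
        region = region.repeat(scale, axis=0).repeat(scale, axis=1)
    if mirror:                          # horizontal mirroring (assumed option)
        region = region[:, ::-1]
    if rot90:                           # rotation by 90/180/270 degrees (assumed option)
        region = np.rot90(region, k=rot90)
    h, w = region.shape[:2]
    target = canvas[y:y + h, x:x + w]
    if blend:                           # blend with samples already present
        canvas[y:y + h, x:x + w] = (target.astype(np.uint16) + region) // 2
    else:                               # overlay, i.e. overwrite
        canvas[y:y + h, x:x + w] = region

# Example: overwrite a rescaled, rotated sub-picture onto the top-left corner.
canvas = np.zeros((128, 128), dtype=np.uint8)
sub_picture = np.full((32, 32), 200, dtype=np.uint8)
pack_region(canvas, sub_picture, x=0, y=0, scale=2, rot90=1)
```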
  • Geometry padding for 360° video may comprise, for example, cube face padding from neighboring cube faces projected onto the same plane as the cube face in the sub-picture.
  • a geometry image and/or a texture image may be padded by an image padding element.
  • Padding aims at filling the empty space between patches in order to generate a piecewise smooth image suited for video compression.
  • the image padding element may consider keeping the compression high as well as enabling estimating of occupancy map (EOM) with enough accuracy as compared to the original occupancy map (OOM).
  • Each block of TxT (e.g., 16x16) pixels is processed independently. If the block is empty (i.e., all its pixels belong to an empty space), then the pixels of the block are filled by copying either the last row or column of the previous TxT block in raster order. If the block is full (i.e., no empty pixels), nothing is done. If the block has both empty and filled pixels, then the empty pixels are iteratively filled with the average value of their non-empty neighbors.
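  • A sketch of the block-wise padding just described is given below; it assumes a single-channel image whose dimensions are multiples of T, a boolean occupancy mask, and one reasonable interpretation of copying "the last row or column of the previous block".

```python
# Sketch of TxT block padding: empty blocks copy from the previous block in
# raster order, full blocks are untouched, mixed blocks iteratively average
# their non-empty 4-neighbours (restricted to the block, one interpretation).
import numpy as np

def pad_image(img: np.ndarray, occupied: np.ndarray, T: int = 16) -> np.ndarray:
    out = img.astype(np.float32).copy()
    filled = occupied.copy()
    H, W = img.shape                              # assumed multiples of T
    prev = None                                   # previous TxT block in raster order
    for by in range(0, H, T):
        for bx in range(0, W, T):
            blk = (slice(by, by + T), slice(bx, bx + T))
            occ = filled[blk]
            if not occ.any():                     # empty block: copy from previous block
                if prev is not None:
                    out[blk] = prev[-1:, :] if bx == 0 else prev[:, -1:]
                filled[blk] = True
            elif occ.all():                       # full block: nothing is done
                pass
            else:                                 # mixed block: iterative averaging
                while not filled[blk].all():
                    for y in range(by, by + T):
                        for x in range(bx, bx + T):
                            if filled[y, x]:
                                continue
                            neigh = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                            vals = [out[p] for p in neigh
                                    if by <= p[0] < by + T and bx <= p[1] < bx + T
                                    and filled[p]]
                            if vals:
                                out[y, x] = sum(vals) / len(vals)
                                filled[y, x] = True
            prev = out[blk].copy()
    return out.astype(img.dtype)
```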
  • the generated images/layers may be stored as video frames and compressed.
  • the padded geometry image and the padded texture image are provided to a video compression element for compressing the padded geometry image and the padded texture image, from which the compressed geometry and texture images are provided, for example, to a multiplexer which multiplexes the input data to a compressed bitstream(s).
  • the compressed geometry and texture images are also provided, for example, to an occupancy map estimator which generates an estimated occupancy map.
  • an algorithm may be used to find the borders of geometry and/or texture images. It is noted that the borders are aligned with each other in general and prior to encoding.
  • the occupancy map may consist of a binary map that indicates for each cell of the grid whether it belongs to the empty space or to the point cloud. One cell of the 2D grid would produce a pixel during the image generation process.
  • In the estimated occupancy map generation step, based on the embodiment used in the padding step, different processes between respective padded geometry, Y, U, and/or V components may be considered. Based on such processes, an estimation of edges (i.e. contours defining the occupancy map) will be created. Such estimation may be fine-tuned in the cases where more than one component/image is to be used for estimating the occupancy map.
  • An example of an edge detection algorithm is a multiscale edge detection algorithm, which is based on wavelet domain vector hidden Markov tree model. However, some other algorithm may be applied in this context.
  • the content of the padding area of the manipulated reference sub-picture may be generated from other sub-pictures. For example, in region of interest coding, if a first sub-picture represents a bigger area than a second sub-picture, the manipulated reference for the second sub-picture may be padded using the content in the first sub-picture.
  • In reference patch reprojection, reference sub-picture(s) may be interpreted as 3D point cloud patches and the 3D point cloud patches may be re-projected onto a plane suitable for 2D inter prediction.
  • MPEG W17248 discloses a test model for MPEG point cloud coding to provide a standardized way of dynamic point cloud compression.
  • the 2D-projected 3D volume surfaces are determined in terms of three image data: motion images, texture images and depth/attribute images.
  • In a point cloud re-sampling block, the input 3D point cloud frame is resampled on the basis of a reference point cloud frame.
  • a 3D motion compensation block is used during the inter- frame encoding/decoding processes. It computes the difference between the positions of the reference point cloud and its deformed version.
  • the obtained motion field consists of 3D motion vectors {MV_i(dx, dy, dz)}_i associated with the points of the reference frame.
  • the 3D to 2D mapping of the reference frame is used to convert the motion field into a 2D image by storing dx as Y, dy as U and dz as V, where this 2D image may be referred to as a motion image.
  • a scale map providing the scaling factor for each block of the motion image is also encoded.
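  • A hedged sketch of this mapping is given below; the point-to-pixel mapping and the block size are stand-ins for the outputs of the packing process, and offsetting/clipping of the stored values to the codec sample range is omitted.

```python
# Sketch: write 3D motion vectors (dx, dy, dz) of reference-frame points into a
# 2D "motion image" with dx in Y, dy in U, dz in V, plus a per-block scale map.
import numpy as np

def build_motion_image(motion_vectors, point_to_pixel, height, width, block=16):
    """motion_vectors: list of (dx, dy, dz); point_to_pixel: list of (row, col)."""
    Y = np.zeros((height, width), dtype=np.float32)   # stores dx
    U = np.zeros((height, width), dtype=np.float32)   # stores dy
    V = np.zeros((height, width), dtype=np.float32)   # stores dz
    for (dx, dy, dz), (r, c) in zip(motion_vectors, point_to_pixel):
        Y[r, c], U[r, c], V[r, c] = dx, dy, dz
    # Per-block scaling factor (illustrative) so values fit the sample range.
    scale = np.ones((height // block, width // block), dtype=np.float32)
    for br in range(0, height, block):
        for bc in range(0, width, block):
            m = max(np.abs(Y[br:br+block, bc:bc+block]).max(),
                    np.abs(U[br:br+block, bc:bc+block]).max(),
                    np.abs(V[br:br+block, bc:bc+block]).max())
            scale[br // block, bc // block] = m if m > 0 else 1.0
    return Y, U, V, scale
```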
  • the image generation process exploits the 3D to 2D mapping computed during the packing process to store the geometry/texture/motion of the point cloud as images. These images are stored as video frames and compressed with a video encoder, such as an HEVC encoder.
  • the generated videos may have the following characteristics:
  • View synthesis (a.k.a. depth-image-based rendering) may be performed from sub-pictures representing one or more texture and depth views.
  • Depth-image-based rendering (DIBR) or view synthesis refers to generation of a novel view based on one or more existing/received views. Depth images may be used to assist in correct synthesis of the virtual views. Although differing in details, most of the view synthesis algorithms utilize 3D warping based on explicit geometry, i.e. depth images, where typically each texture pixel is associated with a depth pixel indicating the distance or the z-value from the camera to the physical object from which the texture pixel was sampled.
  • One approach uses a non-Euclidean formulation of the 3D warping, which is efficient under the condition that the camera parameters are unknown or the camera calibration is poor.
  • Occlusions, pinholes and reconstruction errors are the most common artifacts introduced in the 3D warping process. These artifacts occur more frequently in the object edges, where pixels with different depth levels may be mapped to the same pixel location of the virtual image. When those pixels are averaged to reconstruct the final pixel value for the pixel location in the virtual image, an artifact might be generated, because pixels with different depth levels usually belong to different objects.
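  • The following is a much simplified sketch of such warping for a horizontally shifted virtual camera; the camera parameters are illustrative assumptions, and the z-test shown is only one way to handle pixels of different depths mapping to the same target location.

```python
# Simplified depth-image-based rendering: forward warp by disparity derived
# from per-pixel depth, keeping the nearest pixel when several collide.
import numpy as np

def synthesize_view(texture: np.ndarray, depth: np.ndarray,
                    f: float = 1000.0, baseline: float = 0.05) -> np.ndarray:
    H, W = depth.shape
    virtual = np.zeros_like(texture)
    zbuf = np.full((H, W), np.inf)
    for y in range(H):
        for x in range(W):
            z = depth[y, x]
            if z <= 0:
                continue                          # invalid depth sample
            d = int(round(f * baseline / z))      # disparity in pixels
            xv = x - d                            # 3D warp: horizontal shift
            if 0 <= xv < W and z < zbuf[y, xv]:   # keep the closest pixel
                zbuf[y, xv] = z
                virtual[y, xv] = texture[y, x]
    return virtual                                # holes remain where zbuf stayed inf
```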
  • Solutions that use auxiliary depth map video streams include multiview video plus depth (MVD) and layered depth video (LDV).
  • the depth map video stream for a single view can be regarded as a regular monochromatic video stream and coded with any video codec. Some characteristics of the depth map stream, such as the minimum and maximum depth in world coordinates, can be indicated in messages formatted according to the MPEG-C Part 3 standard, for example.
  • the depth picture sequence for each texture view is coded with any video codec, such as MVC.
  • the texture and depth of the central view are coded conventionally, while the texture and depth of the other view are partially represented and cover only the dis-occluded areas required for correct view synthesis of intermediate views.
  • the resampling may be either upsampling (for switching to a higher resolution) or downsampling (for switching to a lower resolution).
  • the resampling may be used for, but is not limited to, one or more of the following use cases:
  • Inter-view prediction may be performed by enabling prediction from a first sub-picture (of a first sub-picture sequence) to a second sub-picture (of a second sub-picture sequence), where the first and second sub-pictures may be of the same time instance.
  • it may be beneficial to rotate one of the views (e.g. for arranging the sub-pictures side-by-side or top-bottom in the output picture compositing).
  • resampling may be accompanied by rotation (e.g. by 90, 180, or 270 degrees).
  • Color gamut conversion: For example, if one sub-picture used as a source is represented by a first color gamut or format, such as ITU-R BT.709, and the manipulated reference sub-picture is represented by a second color gamut or format, such as ITU-R BT.2020, the sub-picture used as a source may be converted to the second color gamut or format as part of the process.
  • Dynamic range conversion and/or color mapping conversion may refer to the mapping of sample values to linear light representation.
  • the reconstructed sub-picture(s) used as a source for generating the manipulated reference sub-picture may be converted to the target dynamic range and color mapping.
  • Bit depth conversion: the reconstructed sub-picture(s) used as source for generating the manipulated reference sub-picture may be converted to the bit-depth of the manipulated reference sub-picture.
  • Chroma format conversion: For example, a manipulated reference sub-picture may have YUV 4:4:4 chroma format while at least some reconstructed sub-pictures used as source for generating the manipulated reference sub-picture may have chroma format 4:2:0.
  • the sub-pictures used as source may be upsampled to YUV 4:4:4 as part of the process, in this example.
  • Projection conversion: For example, if one sub-picture is in a first projection, such as ERP, and the manipulated sub-picture is in a second projection, such as CMP, the sub-picture used as a reference may be converted to the second projection.
  • the whole 360-degree content may be coded in lower resolution in ERP format, and the viewport content may be coded in higher resolution in CMP format.
  • Frame rate conversion: For example, if one sub-picture is coded with a first frame rate and a second sub-picture is coded with a second frame rate, the sub-picture used as a reference may be interpolated in the temporal domain to the time instance of the second sub-picture.
  • For example, the dominant view may be transmitted at a higher frame rate, and the auxiliary view may be transmitted at a lower frame rate. A sketch of two of these conversions is given below.
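  • As a hedged illustration of the bit depth and chroma format conversions above (the exact conversion methods are not mandated here; simple bit shifting and nearest-neighbour upsampling are used as examples), a source sub-picture coded at 8 bits in 4:2:0 is converted towards a 10-bit 4:4:4 manipulated-reference format:

```python
# Illustrative bit-depth and chroma format conversion of a source sub-picture.
import numpy as np

def convert_bit_depth(plane: np.ndarray, src_bits: int = 8, dst_bits: int = 10) -> np.ndarray:
    # Simple left-shift conversion from src_bits to dst_bits.
    return plane.astype(np.uint16) << (dst_bits - src_bits)

def chroma_420_to_444(chroma_plane: np.ndarray) -> np.ndarray:
    # 4:2:0 chroma has half width and half height; replicate samples to 4:4:4.
    return chroma_plane.repeat(2, axis=0).repeat(2, axis=1)

# Example: an 8-bit 4:2:0 source converted for a 10-bit 4:4:4 manipulated reference.
y = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cb = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
y10, cb444 = convert_bit_depth(y), convert_bit_depth(chroma_420_to_444(cb))
```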
  • a method according to a first aspect comprises:
  • the method further comprises
  • An apparatus comprises at least one processor and at least one memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
  • determine whether to use the sub-picture as a source for a manipulated reference sub-picture; generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
  • a computer program product comprises computer program code configured to, when executed on at least one processor, cause an apparatus or a system to:
  • determine whether to use the sub-picture as a source for a manipulated reference sub-picture; generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
  • a determinator configured to determine whether to use the sub-picture as a source for a manipulated reference sub-picture
  • a manipulator configured to generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
  • Further aspects relate to apparatuses and computer readable storage media having code stored thereon, which are arranged to carry out the above methods and one or more of the embodiments related thereto.
  • Figure 1 shows an example of MPEG Omnidirectional Media Format (OMAF);
  • Figure 2 shows an example of image stitching, projection and region- wise packing
  • Figure 3 shows another example of image stitching, projection and region-wise packing
  • Figure 4 shows an example of a process of forming a monoscopic equirectangular panorama picture
  • Figure 5 shows an example of tile-based omnidirectional video streaming
  • Figure 6 shows an example of a decoding process
  • Figure 7 shows a sub-picture-sequence-wise buffering according to an embodiment
  • Figure 8 shows a decoding process with a reference sub-picture manipulation process, in accordance with an embodiment
  • Figure 9 illustrates a flowchart of a method according to an embodiment
  • Figure 10 shows an example of a picture that has been divided into four sub-pictures
  • Figure 11 shows predictions applicable in an encoding process and/or in a decoding process according to an embodiment
  • Figure 12 shows an example of using a shared coded sub-picture for multi-resolution viewport independent 360-degree video streaming
  • Figure 13 shows an example of a sub-picture using a part of another sub-picture as a reference frame
  • Figure 14 shows another example of a sub-picture using a part of another sub-picture as a reference frame
  • Figure 15 shows an example of a patch generation according to an embodiment
  • Figures 16a-16d illustrate generation of the set of manipulated reference sub-pictures by unfolding projection surfaces, in accordance with an embodiment
  • Figure 16e illustrates generation of the set of manipulated reference sub-pictures by unfolding projection surfaces and sample-line-wise resampling, in accordance with an embodiment.
  • Figure 17a illustrates a use of a reference sub-picture manipulation process for adaptive resolution change, in accordance with an embodiment
  • Figure 17b illustrates a possible encoding arrangement for an adaptive resolution change, in accordance with an embodiment
  • Figure 17c illustrates an example situation for an adaptive resolution change, in accordance with an embodiment
  • Figure 18 shows an apparatus according to an embodiment
  • Figure 19 illustrates, on the left, initial selection of subpictures to be streamed, and on the right, subpictures to be streamed after viewing orientation change;
  • Figure 20 shows an example of encoding of "switching" subpicture sequences using
  • Figure 21 illustrates an example of a merged bitstream
  • Figure 22 illustrates encoding of "switching" subpicture sequences using DRAP pictures in RWMR+SCP method
  • Figure 23 illustrates merged bitstream in RWMR+SCP method
  • Figure 24 illustrates encoding of "switching" subpicture sequences using DRAP pictures in RWMR+SCP method, mixed resolution SCP;
  • Figure 25 illustrates a merged bitstream in RWMR+SCP method, mixed resolution SCP
  • Figure 26 illustrates an example embodiment of a RWMR 360° method
  • Figure 27 presents an example embodiment continuing the example illustrated in Figure
  • In the following, several embodiments will be described in the context of one video coding arrangement. It is to be noted, however, that the invention is not limited to this particular arrangement.
  • the invention may be applicable to video coding systems like streaming systems, DVD (Digital Versatile Disc) players, digital television receivers, personal video recorders, systems and computer programs on personal computers, handheld computers and communication devices, as well as network elements such as transcoders and cloud computing arrangements where video data is handled.
  • the Advanced Video Coding standard (which may be abbreviated AVC or H.264/AVC) was developed by the Joint Video Team (JVT) of the Video Coding Experts Group (VCEG) of the Telecommunications Standardization Sector of International Telecommunication Union (ITU-T) and the Moving Picture Experts Group (MPEG) of International Organization for Standardization (ISO) / International Electrotechnical Commission (IEC).
  • the H.264/AVC standard is published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.264 and ISO/IEC International Standard 14496-10, also known as MPEG-4 Part 10 Advanced Video Coding (AVC).
  • High Efficiency Video Coding standard (which may be abbreviated HEVC or H.265/HEVC) was developed by the Joint Collaborative Team - Video Coding (JCT-VC) of VCEG and MPEG.
  • the standard is published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.265 and ISO/IEC International Standard 23008-2, also known as MPEG-H Part 2 High Efficiency Video Coding (HEVC).
  • Extensions to H.265/HEVC include scalable, multiview, three-dimensional, and fidelity range extensions, which may be referred to as SHVC, MV-HEVC, 3D-HEVC, and REXT, respectively.
  • the Versatile Video Coding standard (VVC, H.266, or H.266/VVC) is presently under development by the Joint Video Experts Team (JVET), which is a collaboration between the ISO/IEC MPEG and ITU-T VCEG.
  • Some key definitions, bitstream and coding structures, and concepts of H.264/AVC and HEVC and some of their extensions are described in this section as an example of a video encoder, decoder, encoding method, decoding method, and a bitstream structure, wherein the embodiments may be implemented.
  • Some of the key definitions, bitstream and coding structures, and concepts of H.264/AVC are the same as in HEVC standard - hence, they are described below jointly.
  • the aspects of various embodiments are not limited to H.264/AVC or HEVC or their extensions, but rather the description is given for one possible basis on top of which the present embodiments may be partly or fully realized.
  • Video codec may comprise an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form.
  • the compressed representation may be referred to as a bitstream or a video bitstream.
  • a video encoder and/or a video decoder may also be separate from each other, i.e. need not form a codec.
  • the encoder may discard some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate).
  • Hybrid video codecs may encode the video information in two phases. At first, pixel values in a certain picture area (or "block") are predicted for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Then, the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This may be done by transforming the difference in pixel values using a specified transform (e.g. Discrete Cosine Transform (DCT)), quantizing the coefficients, and entropy coding the quantized coefficients.
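  • A toy sketch of these two phases (with an illustrative block size and quantization step, and entropy coding omitted) might look as follows:

```python
# Toy hybrid coding of one block: prediction error -> 2D DCT -> quantization,
# and the corresponding reconstruction. Values and block size are illustrative.
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    # Orthonormal DCT-II basis matrix.
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def encode_block(block: np.ndarray, prediction: np.ndarray, qstep: float = 16.0):
    residual = block.astype(np.float32) - prediction          # prediction error
    D = dct_matrix(block.shape[0])
    coeffs = D @ residual @ D.T                               # 2D DCT of the residual
    return np.round(coeffs / qstep).astype(np.int32)          # quantized coefficients

def decode_block(qcoeffs: np.ndarray, prediction: np.ndarray, qstep: float = 16.0):
    D = dct_matrix(qcoeffs.shape[0])
    residual = D.T @ (qcoeffs * qstep) @ D                    # dequantize + inverse DCT
    return np.clip(prediction + residual, 0, 255).astype(np.uint8)
```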
  • In temporal prediction, which may also be referred to as inter prediction, the sources of prediction are previously decoded pictures (a.k.a. reference pictures).
  • In intra block copy (IBC), prediction is applied similarly to temporal prediction, but the reference picture is the current picture and only previously decoded samples can be referred to in the prediction process.
  • Inter-layer or inter-view prediction may be applied similarly to temporal prediction, but the reference picture is a decoded picture from another scalable layer or from another view, respectively.
  • In some cases, inter prediction may refer to temporal prediction only, while in other cases inter prediction may refer collectively to temporal prediction and any of intra block copy, inter-layer prediction, and inter-view prediction, provided that they are performed with the same or similar process as temporal prediction.
  • Inter prediction or temporal prediction may sometimes be referred to as motion compensation or motion-compensated prediction.
  • Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
  • One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy-coded more efficiently if they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
  • Entropy coding/decoding may be performed in many ways. For example, context-based coding/decoding may be applied, where in both the encoder and the decoder modify the context state of a coding parameter based on previously coded/decoded coding parameters.
  • Context-based coding may for example be context adaptive binary arithmetic coding (CABAC) or context-adaptive variable length coding (CAVLC) or any similar entropy coding.
  • Entropy coding/decoding may alternatively or additionally be performed using a variable length coding scheme, such as Huffman coding/decoding or Exp-Golomb coding/decoding. Decoding of coding parameters from an entropy-coded bitstream or codewords may be referred to as parsing.
  • Video coding standards may specify the bitstream syntax and semantics as well as the decoding process for error-free bitstreams, whereas the encoding process might not be specified, but encoders may just be required to generate conforming bitstreams. Bitstream and decoder conformance can be verified with the Hypothetical Reference Decoder (HRD).
  • the standards may contain coding tools that help in coping with transmission errors and losses, but the use of the tools in encoding may be optional and decoding process for erroneous bitstreams might not have been specified.
  • a syntax element may be defined as an element of data represented in the bitstream.
  • a syntax structure may be defined as zero or more syntax elements present together in the bitstream in a specified order.
  • An elementary unit for the input to an encoder and the output of a decoder, respectively, is typically a picture.
  • a picture given as an input to an encoder may also be referred to as a source picture, and a picture decoded by a decoder may be referred to as a decoded picture or a reconstructed picture.
  • the source and decoded pictures are each comprised of one or more sample arrays, such as one of the following sets of sample arrays:
  • Luma (Y) only (monochrome).
  • Luma and two chroma (YCbCr or YCgCo).
  • Green, Blue and Red (GBR, also known as RGB).
  • Arrays representing other unspecified monochrome or tri-stimulus color samplings (for example, YZX, also known as XYZ).
  • these arrays may be referred to as luma (or L or Y) and chroma, where the two chroma arrays may be referred to as Cb and Cr, regardless of the actual color representation method in use.
  • the actual color representation method in use can be indicated e.g. in a coded bitstream e.g. using the Video Usability Information (VUI) syntax of HEVC or alike.
  • a component may be defined as an array or single sample from one of the three sample arrays (luma and two chroma) or the array or a single sample of the array that compose a picture in monochrome format.
  • a picture may be defined to be either a frame or a field.
  • a frame comprises a matrix of luma samples and possibly the corresponding chroma samples.
  • a field is a set of alternate sample rows of a frame and may be used as encoder input, when the source signal is interlaced. Chroma sample arrays may be absent (and hence monochrome sampling may be in use) or chroma sample arrays may be subsampled when compared to luma sample arrays.
  • In 4:2:0 sampling, each of the two chroma arrays has half the height and half the width of the luma array.
  • In 4:2:2 sampling, each of the two chroma arrays has the same height and half the width of the luma array.
  • In 4:4:4 sampling when no separate color planes are in use, each of the two chroma arrays has the same height and width as the luma array.
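  • The relationship between luma and chroma array sizes implied by these sampling formats can be summarized by the following small helper (illustrative only):

```python
# Chroma array dimensions relative to a luma array of size (height, width).
def chroma_dims(height: int, width: int, fmt: str):
    if fmt == "4:2:0":
        return height // 2, width // 2      # half height, half width
    if fmt == "4:2:2":
        return height, width // 2           # same height, half width
    if fmt == "4:4:4":
        return height, width                # same height and width
    raise ValueError(fmt)

assert chroma_dims(1080, 1920, "4:2:0") == (540, 960)
```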
  • Coding formats or standards may allow coding sample arrays as separate color planes into the bitstream and, respectively, decoding separately coded color planes from the bitstream. When separate color planes are in use, each one of them is separately processed (by the encoder and/or the decoder) as a picture with monochrome sampling.
  • the location of chroma samples with respect to luma samples may be determined in the encoder side (e.g. as a pre-processing step or as part of encoding).
  • the chroma sample positions with respect to luma sample positions may be pre-defined for example in a coding standard, such as H.264/AVC or HEVC, or may be indicated in the bitstream for example as part of VUI of H.264/AVC or HEVC.
  • the source video sequence(s) provided as input for encoding may either represent interlaced source content or progressive source content. Fields of opposite parity have been captured at different times for interlaced source content. Progressive source content contains captured frames.
  • An encoder may encode fields of interlaced source content in two ways: a pair of interlaced fields may be coded into a coded frame or a field may be coded as a coded field.
  • an encoder may encode frames of progressive source content in two ways: a frame of progressive source content may be coded into a coded frame or a pair of coded fields.
  • a field pair or a complementary field pair may be defined as two fields next to each other in decoding and/or output order, having opposite parity (i.e. one being a top field and the other being a bottom field).
  • Some video coding standards or schemes allow mixing of coded frames and coded fields in the same coded video sequence.
  • predicting a coded field from a field in a coded frame and/or predicting a coded frame for a complementary field pair may be enabled in encoding and/or decoding.
  • Partitioning may be defined as a division of a set into subsets such that each element of the set is in exactly one of the subsets.
  • a macroblock is a 16x16 block of luma samples and the corresponding blocks of chroma samples. For example, in the 4:2:0 sampling pattern, a macroblock contains one 8x8 block of chroma samples per each chroma component.
  • a picture is partitioned into one or more slice groups, and a slice group contains one or more slices.
  • a slice consists of an integer number of macroblocks ordered consecutively in the raster scan within a particular slice group.
  • a coding block may be defined as an NxN block of samples for some value of N such that the division of a coding tree block into coding blocks is a partitioning.
  • a coding tree block may be defined as an NxN block of samples for some value of N such that the division of a component into coding tree blocks is a partitioning.
  • a coding tree unit may be defined as a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples of a picture that has three sample arrays, or a coding tree block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples.
  • a coding unit may be defined as a coding block of luma samples, two corresponding coding blocks of chroma samples of a picture that has three sample arrays, or a coding block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples.
  • video pictures may be divided into coding units (CU) covering the area of the picture.
  • a CU consists of one or more prediction units (PU) defining the prediction process for the samples within the CU and one or more transform units (TU) defining the prediction error coding process for the samples in the said CU.
  • the CU may consist of a square block of samples with a size selectable from a predefined set of possible CU sizes.
  • a CU with the maximum allowed size may be named as LCU (largest coding unit) or coding tree unit (CTU) and the video picture is divided into non- overlapping LCUs.
  • An LCU can be further split into a combination of smaller CUs, e.g. by recursively splitting the LCU and resultant CUs.
  • Each resulting CU may have at least one PU and at least one TU associated with it.
  • Each PU and TU can be further split into smaller PUs and TUs in order to increase granularity of the prediction and prediction error coding processes, respectively.
  • Each PU has prediction information associated with it defining what kind of a prediction is to be applied for the pixels within that PU (e.g. motion vector information for inter predicted PUs and intra prediction directionality information for intra predicted PUs).
  • Each TU can be associated with information describing the prediction error decoding process for the samples within the said TU (including e.g. DCT coefficient information). It may be signalled at CU level whether prediction error coding is applied or not for each CU. In the case there is no prediction error residual associated with the CU, it can be considered there are no TUs for the said CU.
  • the division of the image into CUs, and division of CUs into PUs and TUs may be signalled in the bitstream allowing the decoder to reproduce the intended structure of these units.
  • the quaternary tree leaf nodes can be further partitioned by a multi-type tree structure.
  • the multi-type tree leaf nodes are called coding units (CUs).
  • CU, PU and TU have the same block size, unless the CU is too large for the maximum transform length.
  • a segmentation structure for a CTU is a quadtree with nested multi-type tree using binary and ternary splits, i.e. no separate CU, PU and TU concepts are in use except when needed for CUs that have a size too large for the maximum transform length.
  • a CU can have either a square or rectangular shape.
  • the decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (inverse operation of the prediction error coding recovering the quantized prediction error signal in spatial pixel domain). After applying prediction and prediction error decoding means the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame.
  • the decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as prediction reference for the forthcoming frames in the video sequence.
  • the filtering may for example include one or more of the following: deblocking, sample adaptive offset (SAO), and/or adaptive loop filtering (ALF).
  • the deblocking loop filter may include multiple filtering modes or strengths, which may be adaptively selected based on the features of the blocks adjacent to the boundary, such as the quantization parameter value, and/or signaling included by the encoder in the bitstream.
  • the deblocking loop filter may comprise a normal filtering mode and a strong filtering mode, which may differ in terms of the number of filter taps (i.e. number of samples being filtered on both sides of the boundary) and/or the filter tap values. For example, filtering of two samples along both sides of the boundary may be performed with a filter having the impulse response of (3 7 9 -3)/16, when omitting the potential impact of a clipping operation.
  • the motion information may be indicated with motion vectors associated with each motion compensated image block in video codecs.
  • Each of these motion vectors represents the displacement of the image block in the picture to be coded (in the encoder side) or decoded (in the decoder side) and the prediction source block in one of the previously coded or decoded pictures.
  • the predicted motion vectors may be created in a predefined way, for example calculating the median of the encoded or decoded motion vectors of the adjacent blocks.
  • Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signaling the chosen candidate as the motion vector predictor.
  • the reference index of previously coded/decoded picture can be predicted.
  • the reference index may be predicted from adjacent blocks and/or co-located blocks in temporal reference picture.
  • high efficiency video codecs may employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes motion vector and corresponding reference picture index for each available reference picture list, is predicted and used without any modification/correction.
  • predicting the motion field information is carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures, and the used motion field information is signaled among a motion field candidate list filled with motion field information of available adjacent/co-located blocks.
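  • A small sketch of the motion vector prediction idea above, using a component-wise median predictor and coding only the difference (the candidate-list/merge variant would instead signal an index into the candidate list), could look as follows:

```python
# Median motion vector predictor and motion vector difference (MVD) coding.
import statistics
from typing import List, Tuple

MV = Tuple[int, int]

def median_mv_predictor(neighbour_mvs: List[MV]) -> MV:
    return (int(statistics.median(mv[0] for mv in neighbour_mvs)),
            int(statistics.median(mv[1] for mv in neighbour_mvs)))

def code_mv(mv: MV, neighbour_mvs: List[MV]) -> MV:
    px, py = median_mv_predictor(neighbour_mvs)
    return (mv[0] - px, mv[1] - py)              # MVD written to the bitstream

def decode_mv(mvd: MV, neighbour_mvs: List[MV]) -> MV:
    px, py = median_mv_predictor(neighbour_mvs)
    return (mvd[0] + px, mvd[1] + py)
```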
  • Video codecs may support motion compensated prediction from one source image (uni-prediction) or from two sources (bi-prediction).
  • In uni-prediction a single motion vector is applied, whereas in the case of bi-prediction two motion vectors are signaled and the motion compensated predictions from the two sources are averaged to create the final sample prediction.
  • In weighted prediction, the relative weights of the two predictions can be adjusted, or a signaled offset can be added to the prediction signal.
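  • For illustration (the weights and offset are arbitrary example values, not signaled values from any particular standard), bi-prediction and explicit weighted prediction can be sketched as:

```python
# Bi-prediction averages two motion-compensated sources; weighted prediction
# applies per-reference weights and an offset.
import numpy as np

def bi_predict(p0: np.ndarray, p1: np.ndarray) -> np.ndarray:
    return ((p0.astype(np.int32) + p1.astype(np.int32) + 1) >> 1).astype(np.uint8)

def weighted_predict(p0: np.ndarray, p1: np.ndarray,
                     w0: float = 0.75, w1: float = 0.25, offset: int = 2) -> np.ndarray:
    pred = w0 * p0.astype(np.float32) + w1 * p1.astype(np.float32) + offset
    return np.clip(np.round(pred), 0, 255).astype(np.uint8)
```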
  • In intra block copy, the displacement vector indicates from where in the same picture a block of samples can be copied to form a prediction of the block to be coded or decoded.
  • This kind of intra block copying method can improve the coding efficiency substantially in the presence of repeating structures within the frame, such as text or other graphics.
  • the prediction residual after motion compensation or intra prediction may be first transformed with a transform kernel (like DCT) and then coded.
  • Video encoders may utilize Lagrangian cost functions to find optimal coding modes, e.g. the desired Macroblock mode and associated motion vectors.
  • This kind of cost function uses a weighting factor λ to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:
  • C = D + λR (Eq. 1)
  • where C is the Lagrangian cost to be minimized, D is the image distortion (e.g. Mean Squared Error) with the mode and motion vectors considered, and R is the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors).
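  • A sketch of mode selection according to Eq. 1, with mean-squared-error distortion and an illustrative λ value, is given below:

```python
# Lagrangian mode selection: evaluate C = D + lambda * R for each candidate
# mode and pick the cheapest. Candidates and lambda are illustrative.
import numpy as np

def lagrangian_cost(original: np.ndarray, reconstruction: np.ndarray,
                    rate_bits: float, lam: float) -> float:
    distortion = float(np.mean((original.astype(np.float32) - reconstruction) ** 2))
    return distortion + lam * rate_bits          # C = D + lambda * R

def choose_mode(original, candidates, lam: float = 10.0):
    """candidates: list of (mode_name, reconstruction, rate_bits)."""
    return min(candidates,
               key=lambda c: lagrangian_cost(original, c[1], c[2], lam))[0]
```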
  • a picture order count (POC) value is derived for each picture and is non-decreasing with increasing picture position in output order. POC therefore indicates the output order of pictures.
  • POC may be used in the decoding process for example for implicit scaling of motion vectors and for reference picture list initialization. Furthermore, POC may be used in the verification of output order conformance.
  • a compliant bit stream must be able to be decoded by a hypothetical reference decoder that may be conceptually connected to the output of an encoder and consists of at least a pre-decoder buffer, a decoder and an output/display unit.
  • This virtual decoder may be known as the hypothetical reference decoder (HRD) or the video buffering verifier (VBV).
  • a stream is compliant if it can be decoded by the HRD without buffer overflow or, in some cases, underflow. Buffer overflow happens if more bits are to be placed into the buffer when it is full. Buffer underflow happens if some bits are not in the buffer when said bits are to be fetched from the buffer for decoding/playback.
  • One of the motivations for the HRD is to avoid so-called evil bitstreams, which would consume such a large quantity of resources that practical decoder implementations would not be able to handle.
  • HRD models typically include instantaneous decoding, while the input bitrate to the coded picture buffer (CPB) of HRD may be regarded as a constraint for the encoder and the bitstream on decoding rate of coded data and a requirement for decoders for the processing rate.
  • An encoder may include a CPB as specified in the HRD for verifying and controlling that buffering constraints are obeyed in the encoding.
  • a decoder implementation may also have a CPB that may but does not necessarily operate similarly or identically to the CPB specified for HRD.
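  • The CPB behaviour described above can be illustrated with a much simplified leaky-bucket model (constant input bitrate, instantaneous picture removal, no initial buffering delay; all numbers are purely illustrative):

```python
# Simplified CPB (leaky bucket): bits arrive at the channel bitrate, each coded
# picture is removed instantaneously at its decoding time; flag overflow/underflow.
def check_cpb(picture_sizes_bits, bitrate_bps, cpb_size_bits, frame_rate=30.0):
    fullness = 0.0
    for i, pic_bits in enumerate(picture_sizes_bits):
        fullness += bitrate_bps / frame_rate      # bits arriving during one frame interval
        if fullness > cpb_size_bits:
            return f"overflow before picture {i}"
        if fullness < pic_bits:
            return f"underflow at picture {i}"    # bits not yet in the buffer when needed
        fullness -= pic_bits                      # instantaneous decoding removes the picture
    return "conforming"

print(check_cpb([30000, 30000, 35000], bitrate_bps=1_000_000, cpb_size_bits=300_000))
```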
  • a Decoded Picture Buffer (DPB) may be used in the encoder and/or in the decoder. There may be two reasons to buffer decoded pictures: for references in inter prediction and for reordering decoded pictures into output order. Since some coding formats, such as HEVC, provide a great deal of flexibility for both reference picture marking and output reordering, separate buffers for reference picture buffering and output picture buffering may waste memory resources. Hence, the DPB may include a unified decoded picture buffering process for reference pictures and output reordering. A decoded picture may be removed from the DPB when it is no longer used as a reference and is not needed for output.
  • An HRD may also include a DPB. DPBs of an HRD and a decoder implementation may but do not need to operate identically.
  • Output order may be defined as the order in which the decoded pictures are output from the decoded picture buffer (for the decoded pictures that are to be output from the decoded picture buffer).
  • a decoder and/or an HRD may comprise a picture output process.
  • the output process may be considered to be a process in which the decoder provides decoded and cropped pictures as the output of the decoding process.
  • the output process is typically a part of video coding standards, typically as a part of the hypothetical reference decoder specification.
  • lines and/or columns of samples may be removed from decoded pictures according to a cropping rectangle to form output pictures.
  • a cropped decoded picture may be defined as the result of cropping a decoded picture based on the conformance cropping window specified e.g. in the sequence parameter set that is referred to by the corresponding coded picture.
  • One or more syntax structures for (decoded) reference picture marking may exist in a video coding system.
  • An encoder generates an instance of a syntax structure e.g. in each coded picture, and a decoder decodes an instance of the syntax structure e.g. from each coded picture.
  • the decoding of the syntax structure may cause pictures to be adaptively marked as "used for reference” or "unused for reference”.
  • a reference picture set (RPS) syntax structure of HEVC is an example of a syntax structure for reference picture marking.
  • a reference picture set valid or active for a picture includes all the reference pictures that may be used as reference for the picture and all the reference pictures that are kept marked as "used for reference” for any subsequent pictures in decoding order.
  • the reference pictures that are kept marked as "used for reference” for any subsequent pictures in decoding order but that are not used as reference picture for the current picture or image segment may be considered inactive. For example, they might not be included in the initial reference picture list(s).
  • The reference picture for inter prediction may be indicated with an index to a reference picture list.
  • two reference picture lists (reference picture list 0 and reference picture list 1) are generated for each bi-predictive (B) slice, and one reference picture list (reference picture list 0) is formed for each inter-coded (P) slice.
  • a reference picture list such as the reference picture list 0 and the reference picture list 1, may be constructed in two steps: First, an initial reference picture list is generated.
  • the initial reference picture list may be generated using an algorithm pre-defined in a standard. Such an algorithm may use e.g. POC and/or temporal sub-layer, as the basis.
  • the algorithm may process reference pictures with particular marking(s), such as "used for reference”, and omit other reference pictures, i.e. avoid inserting other reference pictures into the initial reference picture list.
  • An example of such other reference picture is a reference picture marked as "unused for reference” but still residing in the decoded picture buffer waiting to be output from the decoder.
  • the initial reference picture list may be reordered through a specific syntax structure, such as reference picture list reordering (RPLR) commands of H.264/AVC or the reference picture list modification syntax structure of HEVC or anything alike.
  • the number of active reference pictures may be indicated for each list, and the use of the pictures beyond the active ones in the list as reference for inter prediction is disabled.
• One or both of the reference picture list initialization and reference picture list modification may process only active reference pictures among those reference pictures that are marked as "used for reference" or alike.
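• The two-step list construction described above can be illustrated with the following Python sketch. It assumes simplified picture records holding only a POC value and a reference marking; the initialization order (by POC distance to the current picture) and the explicit modification indices are illustrative simplifications, not the exact H.264/AVC or HEVC derivation.

# Sketch of two-step reference picture list construction (illustrative, not the
# exact standardized derivation): initialization from pictures marked
# "used for reference", followed by an optional explicit modification step.

def init_reference_picture_list(dpb, current_poc, num_active):
    """Build an initial list from pictures marked 'used for reference',
    ordered by increasing POC distance to the current picture."""
    refs = [p for p in dpb if p["marking"] == "used for reference"]
    refs.sort(key=lambda p: abs(p["poc"] - current_poc))
    return refs[:num_active]

def modify_reference_picture_list(initial_list, modification_indices):
    """Apply an explicit list modification: each entry selects an element of the
    initial list, which allows reordering and repetition."""
    return [initial_list[i] for i in modification_indices]

if __name__ == "__main__":
    dpb = [
        {"poc": 0, "marking": "used for reference"},
        {"poc": 4, "marking": "used for reference"},
        {"poc": 8, "marking": "unused for reference"},  # skipped by initialization
        {"poc": 12, "marking": "used for reference"},
    ]
    list0 = init_reference_picture_list(dpb, current_poc=6, num_active=2)
    print([p["poc"] for p in list0])                  # [4, 0]
    list0 = modify_reference_picture_list(list0, [1, 0])
    print([p["poc"] for p in list0])                  # reordered: [0, 4]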
  • Scalable video coding refers to coding structure where one bitstream can contain multiple representations of the content at different bitrates, resolutions or frame rates.
  • the receiver can extract the desired representation depending on its characteristics (e.g. resolution that matches best the display device).
  • a server or a network element can extract the portions of the bitstream to be transmitted to the receiver depending on e.g. the network characteristics or processing capabilities of the receiver.
  • a scalable bitstream may include a "base layer" providing the lowest quality video available and one or more enhancement layers that enhance the video quality when received and decoded together with the lower layers.
  • the coded representation of that layer may depend on the lower layers. E.g. the motion and mode information of the enhancement layer can be predicted from lower layers.
  • the pixel data of the lower layers can be used to create prediction for the enhancement layer.
• A scalable video codec for quality scalability (also known as Signal-to-Noise or SNR) and/or spatial scalability may be implemented as follows.
• For a base layer, a conventional non-scalable video encoder and decoder is used.
  • the reconstructed/decoded pictures of the base layer are included in the reference picture buffer for an enhancement layer.
  • the base layer decoded pictures may be inserted into a reference picture list(s) for coding/decoding of an enhancement layer picture similarly to the decoded reference pictures of the enhancement layer.
  • the encoder may choose a base-layer reference picture as inter prediction reference and indicate its use e.g. with a reference picture index in the coded bitstream.
  • the decoder decodes from the bitstream, for example from a reference picture index, that a base-layer picture is used as inter prediction reference for the enhancement layer.
• When a decoded base-layer picture is used as a prediction reference for an enhancement layer, it is referred to as an inter-layer reference picture.
  • Scalability modes or scalability dimensions may include but are not limited to the following:
• Quality scalability: Base layer pictures are coded at a lower quality than enhancement layer pictures, which may be achieved for example using a greater quantization parameter value (i.e., a greater quantization step size for transform coefficient quantization) in the base layer than in the enhancement layer.
• Spatial scalability: Base layer pictures are coded at a lower resolution (i.e. have fewer pixels).
  • Spatial scalability and quality scalability may sometimes be considered the same type of scalability.
• Bit-depth scalability: Base layer pictures are coded at a lower bit-depth (e.g. 8 bits) than enhancement layer pictures (e.g. 10 or 12 bits).
• Dynamic range scalability: Scalable layers represent a different dynamic range and/or images obtained using a different tone mapping function and/or a different optical transfer function.
• Chroma format scalability: Base layer pictures provide lower spatial resolution in chroma sample arrays (e.g. coded in 4:2:0 chroma format) than enhancement layer pictures (e.g. 4:4:4 format).
• Color gamut scalability: enhancement layer pictures have a richer/broader color representation range than that of the base layer pictures - for example the enhancement layer may have UHDTV (ITU-R BT.2020) color gamut and the base layer may have the ITU-R BT.709 color gamut.
• ROI scalability: An enhancement layer represents a spatial subset of the base layer. ROI scalability may be used together with other types of scalability, e.g. quality or spatial scalability, so that the enhancement layer provides higher subjective quality for the spatial subset.
• View scalability: the base layer represents a first view, whereas an enhancement layer represents a second view.
• Depth scalability, which may also be referred to as depth-enhanced coding.
  • a layer or some layers of a bitstream may represent texture view(s), while other layer or layers may represent depth view(s).
  • base layer information could be used to code enhancement layer to minimize the additional bitrate overhead.
  • Scalability can be enabled in two basic ways. Either by introducing new coding modes for performing prediction of pixel values or syntax from lower layers of the scalable representation or by placing the lower layer pictures to the reference picture buffer (decoded picture buffer, DPB) of the higher layer.
  • the first approach is more flexible and thus can provide better coding efficiency in most cases.
• the second, reference frame-based scalability, approach can be implemented very efficiently with minimal changes to single layer codecs while still achieving the majority of the coding efficiency gains available.
• a reference frame-based scalability codec can be implemented by utilizing the same hardware or software implementation for all the layers, just taking care of the DPB management by external means.
  • NAL Network Abstraction Layer
  • NAL units consist of a header and payload.
• In HEVC, a two-byte NAL unit header is used for all specified NAL unit types, while in other codecs the NAL unit header may be similar to that in HEVC.
• the NAL unit header contains one reserved bit, a six-bit NAL unit type indication, a three-bit temporal_id_plus1 indication for temporal level or sub-layer (may be required to be greater than or equal to 1) and a six-bit nuh_layer_id syntax element.
• the abbreviation TID may be used interchangeably with the TemporalId variable.
• TemporalId equal to 0 corresponds to the lowest temporal level.
• temporal_id_plus1 is required to be non-zero in order to avoid start code emulation involving the two NAL unit header bytes.
• the bitstream created by excluding all VCL NAL units having a TemporalId greater than or equal to a selected value and including all other VCL NAL units remains conforming. Consequently, a picture having TemporalId equal to tid_value does not use any picture having a TemporalId greater than tid_value as an inter prediction reference.
  • a sub-layer or a temporal sub-layer may be defined to be a temporal scalable layer (or a temporal layer, TL) of a temporal scalable bitstream.
• Such a temporal scalable layer may comprise VCL NAL units with a particular value of the TemporalId variable and the associated non-VCL NAL units. nuh_layer_id can be understood as a scalability layer identifier.
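• The two-byte NAL unit header and temporal sub-layer extraction described above can be illustrated with the following Python sketch. The bit layout follows the HEVC NAL unit header (forbidden/reserved bit, six-bit nal_unit_type, six-bit nuh_layer_id, three-bit nuh_temporal_id_plus1); the extraction function is a simplification that keeps all non-VCL NAL units and the VCL NAL units at or below a selected TemporalId.

# Sketch: parse the two-byte HEVC NAL unit header and perform temporal sub-layer
# extraction by dropping VCL NAL units whose TemporalId exceeds a selected value.

def parse_nal_header(two_bytes):
    b0, b1 = two_bytes[0], two_bytes[1]
    nal_unit_type = (b0 >> 1) & 0x3F                 # six-bit NAL unit type
    nuh_layer_id = ((b0 & 0x01) << 5) | (b1 >> 3)    # six-bit layer identifier
    temporal_id_plus1 = b1 & 0x07                    # required to be non-zero
    return {
        "nal_unit_type": nal_unit_type,
        "nuh_layer_id": nuh_layer_id,
        "temporal_id": temporal_id_plus1 - 1,        # TemporalId
        "is_vcl": nal_unit_type < 32,                # VCL NAL unit type range in HEVC
    }

def extract_temporal_sublayers(nal_units, max_tid):
    """Keep all non-VCL NAL units and the VCL NAL units with TemporalId <= max_tid."""
    kept = []
    for nal in nal_units:
        hdr = parse_nal_header(nal)
        if not hdr["is_vcl"] or hdr["temporal_id"] <= max_tid:
            kept.append(nal)
    return kept

if __name__ == "__main__":
    # 0x02 0x01: nal_unit_type 1 (VCL), nuh_layer_id 0, temporal_id_plus1 1 -> TemporalId 0
    # 0x02 0x02: same type and layer, temporal_id_plus1 2 -> TemporalId 1
    nals = [bytes([0x02, 0x01]), bytes([0x02, 0x02])]
    print(len(extract_temporal_sublayers(nals, max_tid=0)))   # 1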
  • NAL units can be categorized into Video Coding Layer (VCL) NAL units and non-VCL NAL units.
  • VCL NAL units are typically coded slice NAL units.
  • VCL NAL units contain syntax elements representing one or more CU.
  • the NAL unit type within a certain range indicates a VCL NAL unit, and the VCL NAL unit type indicates a picture type.
• Images can be split into independently codable and decodable image segments (e.g. slices or tiles or tile groups). Such image segments may enable parallel processing. "Slices" in this description may refer to image segments constructed of a certain number of basic coding units that are processed in default coding or decoding order, while "tiles" may refer to image segments that have been defined as rectangular image regions. A tile group may be defined as a group of one or more tiles. Image segments may be coded as separate units in the bitstream, such as VCL NAL units in H.264/AVC and HEVC. Coded image segments may comprise a header and a payload, wherein the header contains parameter values needed for decoding the payload.
  • a picture can be partitioned in tiles, which are rectangular and contain an integer number of CTUs.
  • the partitioning to tiles forms a grid that may be characterized by a list of tile column widths (in CTUs) and a list of tile row heights (in CTUs).
  • Tiles are ordered in the bitstream consecutively in the raster scan order of the tile grid.
  • a tile may contain an integer number of slices.
  • a slice consists of an integer number of CTUs.
  • the CTUs are scanned in the raster scan order of CTUs within tiles or within a picture, if tiles are not in use.
  • a slice may contain an integer number of tiles or a slice can be contained in a tile.
  • the CUs have a specific scan order.
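• A minimal Python sketch of the tile grid described above: given the lists of tile column widths and tile row heights (in CTUs), it maps a CTU position to the tile containing it. The function names and the returned tile raster-scan index are illustrative.

# Sketch: map a CTU position (in CTUs from the top-left of the picture) to its tile,
# given the tile grid as lists of column widths and row heights in CTUs.
import bisect

def cumulative(sizes):
    out, total = [], 0
    for s in sizes:
        total += s
        out.append(total)
    return out

def ctu_to_tile(ctu_x, ctu_y, tile_col_widths, tile_row_heights):
    """Return (tile column, tile row, tile index in raster scan of the tile grid)."""
    col_bounds = cumulative(tile_col_widths)    # e.g. [3, 6] for widths [3, 3]
    row_bounds = cumulative(tile_row_heights)
    tile_col = bisect.bisect_right(col_bounds, ctu_x)
    tile_row = bisect.bisect_right(row_bounds, ctu_y)
    return tile_col, tile_row, tile_row * len(tile_col_widths) + tile_col

if __name__ == "__main__":
    # a 6x4 CTU picture split into a 2x2 tile grid (column widths [3, 3], row heights [2, 2])
    print(ctu_to_tile(4, 1, [3, 3], [2, 2]))    # (1, 0, 1): second tile of the first tile row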
  • a slice is defined to be an integer number of coding tree units contained in one independent slice segment and all subsequent dependent slice segments (if any) that precede the next independent slice segment (if any) within the same access unit.
  • a slice segment is defined to be an integer number of coding tree units ordered consecutively in the tile scan and contained in a single NAL (Network Abstraction Layer) unit. The division of each picture into slice segments is a partitioning.
  • an independent slice segment is defined to be a slice segment for which the values of the syntax elements of the slice segment header are not inferred from the values for a preceding slice segment
  • a dependent slice segment is defined to be a slice segment for which the values of some syntax elements of the slice segment header are inferred from the values for the preceding independent slice segment in decoding order.
  • a slice header is defined to be the slice segment header of the independent slice segment that is a current slice segment or is the independent slice segment that precedes a current dependent slice segment
  • a slice segment header is defined to be a part of a coded slice segment containing the data elements pertaining to the first or all coding tree units represented in the slice segment.
  • the CUs are scanned in the raster scan order of LCUs within tiles or within a picture, if tiles are not in use. Within an LCU, the CUs have a specific scan order.
• a motion-constrained tile set is such that the inter prediction process is constrained in encoding such that no sample value outside the motion-constrained tile set, and no sample value at a fractional sample position that is derived using one or more sample values outside the motion-constrained tile set, is used for inter prediction of any sample within the motion-constrained tile set. Additionally, the encoding of an MCTS is constrained in a manner that motion vector candidates are not derived from blocks outside the MCTS.
  • an MCTS may be defined to be a tile set that is independent of any sample values and coded data, such as motion vectors, that are outside the MCTS.
  • An MCTS sequence may be defined as a sequence of respective MCTSs in one or more coded video sequences or alike.
• an MCTS may be required to form a rectangular area. It should be understood that depending on the context, an MCTS may refer to the tile set within a picture or to the respective tile set in a sequence of pictures.
  • the respective tile set may be, but in general need not be, collocated in the sequence of pictures.
  • a motion-constrained tile set may be regarded as an independently coded tile set, since it may be decoded without the other tile sets.
  • sample locations used in inter prediction may be saturated so that a location that would be outside the picture otherwise is saturated to point to the corresponding boundary sample of the picture.
  • motion vectors may effectively cross that boundary or a motion vector may effectively cause fractional sample interpolation that would refer to a location outside that boundary, since the sample locations are saturated onto the boundary.
  • encoders may constrain the motion vectors on picture boundaries similarly to any MCTS boundaries.
  • the temporal motion-constrained tile sets SEI (Supplemental Enhancement Information) message of HEVC can be used to indicate the presence of motion-constrained tile sets in the bitstream.
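• A simplified Python sketch of the encoder-side MCTS constraint described above: it checks whether a motion-compensated block, extended by the margin needed for fractional-sample interpolation, stays inside the motion-constrained tile set rectangle. Full-sample motion vectors and the margin value are simplifying assumptions.

# Sketch: check that a motion-compensated prediction block, including the margin
# needed for fractional-sample interpolation, stays inside an MCTS rectangle.
# Units are luma samples; the margin value is an illustrative assumption.

def mv_stays_inside_mcts(block_x, block_y, block_w, block_h,
                         mv_x, mv_y, mcts_rect, interp_margin=3):
    """mcts_rect = (left, top, right, bottom), with right/bottom exclusive."""
    left, top, right, bottom = mcts_rect
    ref_left = block_x + mv_x - interp_margin
    ref_top = block_y + mv_y - interp_margin
    ref_right = block_x + block_w + mv_x + interp_margin
    ref_bottom = block_y + block_h + mv_y + interp_margin
    return (ref_left >= left and ref_top >= top and
            ref_right <= right and ref_bottom <= bottom)

if __name__ == "__main__":
    mcts = (0, 0, 256, 256)
    print(mv_stays_inside_mcts(128, 128, 16, 16, 8, 8, mcts))   # True
    print(mv_stays_inside_mcts(240, 128, 16, 16, 8, 0, mcts))   # False: crosses the right edge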
  • a non-VCL NAL unit may be for example one of the following types: a sequence parameter set, a picture parameter set, a supplemental enhancement information (SEI) NAL unit, an access unit delimiter, an end of sequence NAL unit, an end of bitstream NAL unit, or a filler data NAL unit.
  • SEI Supplemental Enhancement Information
  • Parameter sets may be needed for the reconstruction of decoded pictures, whereas many of the other non-VCL NAL units are not necessary for the reconstruction of decoded sample values.
  • Some coding formats specify parameter sets that may carry parameter values needed for the decoding or reconstruction of decoded pictures.
  • Parameters that remain unchanged through a coded video sequence may be included in a sequence parameter set (SPS).
  • the sequence parameter set may optionally contain video usability information (VUI), which includes parameters that may be important for buffering, picture output timing, rendering, and resource reservation.
  • VUI video usability information
  • a picture parameter set (PPS) contains such parameters that are likely to be unchanged in several coded pictures.
  • a picture parameter set may include parameters that can be referred to by the coded image segments of one or more coded pictures.
  • a header parameter set (HPS) has been proposed to contain such parameters that may change on picture basis.
  • a parameter set may be activated when it is referenced e.g. through its identifier.
  • a header of an image segment such as a slice header, may contain an identifier of the PPS that is activated for decoding the coded picture containing the image segment.
  • a PPS may contain an identifier of the SPS that is activated, when the PPS is activated.
  • An activation of a parameter set of a particular type may cause the deactivation of the previously active parameter set of the same type.
  • video coding formats may include header syntax structures, such as a sequence header or a picture header.
  • a sequence header may precede any other data of the coded video sequence in the bitstream order.
  • a picture header may precede any coded video data for the picture in the bitstream order.
  • the phrase along the bitstream (e.g. indicating along the bitstream) or along a coded unit of a bitstream (e.g. indicating along a coded tile) may be used in claims and described embodiments to refer to transmission, signaling, or storage in a manner that the "out-of-band" data is associated with but not included within the bitstream or the coded unit, respectively.
  • the phrase decoding along the bitstream or along a coded unit of a bitstream or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the bitstream or the coded unit, respectively.
  • the phrase along the bitstream may be used when the bitstream is contained in a container file, such as a file conforming to the ISO Base Media File Format, and certain file metadata is stored in the file in a manner that associates the metadata to the bitstream, such as boxes in the sample entry for a track containing the bitstream, a sample group for the track containing the bitstream, or a timed metadata track associated with the track containing the bitstream.
  • a coded picture is a coded representation of a picture.
• a Random Access Point (RAP) picture, which may also be referred to as an intra random access point (IRAP) picture, may comprise only intra-coded image segments. Furthermore, a RAP picture may constrain subsequent pictures in output order to be such that they can be correctly decoded without performing the decoding process of any pictures that precede the RAP picture in decoding order.
  • RAP Random Access Point
  • IRAP intra random access point
  • An access unit may comprise coded video data for a single time instance and associated other data.
• an access unit (AU) may be defined as a set of NAL units that are associated with each other according to a specified classification rule, are consecutive in decoding order, and contain at most one picture with any specific value of nuh_layer_id.
  • an access unit may also contain non-VCL NAL units. Said specified classification rule may for example associate pictures with the same output time or picture output count value into the same access unit.
• coded pictures may appear in a certain order within an access unit. For example, a coded picture with nuh_layer_id equal to nuhLayerIdA may be required to precede, in decoding order, all coded pictures with nuh_layer_id greater than nuhLayerIdA in the same access unit.
  • a bitstream may be defined as a sequence of bits, which may in some coding formats or standards be in the form of a NAL unit stream or a byte stream, that forms the representation of coded pictures and associated data forming one or more coded video sequences.
  • a first bitstream may be followed by a second bitstream in the same logical channel, such as in the same file or in the same connection of a communication protocol.
  • An elementary stream (in the context of video coding) may be defined as a sequence of one or more bitstreams.
  • the end of the first bitstream may be indicated by a specific NAL unit, which may be referred to as the end of bitstream (EOB) NAL unit and which is the last NAL unit of the bitstream.
  • EOB end of bitstream
  • a coded video sequence may be defined as such a sequence of coded pictures in decoding order that is independently decodable and is followed by another coded video sequence or the end of the bitstream.
• Bitstreams or coded video sequences can be encoded to be temporally scalable as follows. Each picture may be assigned to a particular temporal sub-layer. Temporal sub-layers may be enumerated e.g. from 0 upwards. The lowest temporal sub-layer, sub-layer 0, may be decoded independently. Pictures at temporal sub-layer 1 may be predicted from reconstructed pictures at temporal sub-layers 0 and 1. Pictures at temporal sub-layer 2 may be predicted from reconstructed pictures at temporal sub-layers 0, 1, and 2, and so on. In other words, a picture at temporal sub-layer N does not use any picture at a temporal sub-layer greater than N as a reference for inter prediction. The bitstream created by excluding all pictures at temporal sub-layers greater than or equal to a selected value and including the remaining pictures remains conforming.
• a sub-layer access picture may be defined as a picture from which the decoding of a sub-layer can be started correctly, i.e. starting from which all pictures of the sub-layer can be correctly decoded.
  • TSA temporal sub-layer access
  • STSA step-wise temporal sub-layer access
  • the TSA picture type may impose restrictions on the TSA picture itself and all pictures in the same sub-layer that follow the TSA picture in decoding order. None of these pictures is allowed to use inter prediction from any picture in the same sub-layer that precedes the TSA picture in decoding order.
• the TSA definition may further impose restrictions on the pictures in higher sub-layers that follow the TSA picture in decoding order. None of these pictures is allowed to refer to a picture that precedes the TSA picture in decoding order if that picture belongs to the same or a higher sub-layer as the TSA picture.
• TSA pictures have TemporalId greater than 0.
• the STSA picture is similar to the TSA picture but does not impose restrictions on the pictures in higher sub-layers that follow the STSA picture in decoding order and hence enables up-switching only onto the sub-layer where the STSA picture resides.
• Available media file format standards include ISO base media file format (ISO/IEC 14496-12, which may be abbreviated ISOBMFF), MPEG-4 file format (ISO/IEC 14496-14, also known as the MP4 format), file format for NAL unit structured video (ISO/IEC 14496-15) and 3GPP file format (3GPP TS 26.244, also known as the 3GP format).
  • ISOBMFF ISO base media file format
  • MPEG-4 file format ISO/IEC 14496-14, also known as the MP4 format
  • file format for NAL unit structured video ISO/IEC 14496-15
  • 3GPP file format 3GPP TS 26.244
• Some concepts, structures, and specifications of ISOBMFF are described below as an example of a container file format, based on which the embodiments may be implemented.
  • the aspects of the invention are not limited to ISOBMFF, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
  • a basic building block in the ISO base media file format is called a box.
  • Each box has a header and a payload.
  • the box header indicates the type of the box and the size of the box in terms of bytes.
  • a box may enclose other boxes, and the ISO file format specifies which box types are allowed within a box of a certain type. Furthermore, the presence of some boxes may be mandatory in each file, while the presence of other boxes may be optional. Additionally, for some box types, it may be allowable to have more than one box present in a file. Thus, the ISO base media file format may be considered to specify a hierarchical structure of boxes.
  • a file includes media data and metadata that are encapsulated into boxes. Each box is identified by a four character code (4CC) and starts with a header which informs about the type and size of the box.
  • 4CC four character code
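• A minimal Python sketch of walking ISOBMFF boxes using the header described above (a 32-bit size followed by the four-character type code, with the usual largesize and to-end-of-file conventions); error handling and nested-box recursion are omitted.

# Sketch: walk the top-level boxes of an ISOBMFF byte string by reading the box
# header (32-bit size followed by the four-character type code; size == 1 signals
# a 64-bit "largesize" field, size == 0 means the box extends to the end).
import struct

def iter_boxes(data, offset=0, end=None):
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, box_type = struct.unpack_from(">I4s", data, offset)
        header = 8
        if size == 1:                        # 64-bit largesize follows the type
            size, = struct.unpack_from(">Q", data, offset + 8)
            header = 16
        elif size == 0:                      # box extends to the end of the file
            size = end - offset
        yield box_type.decode("ascii"), offset + header, offset + size
        offset += size

if __name__ == "__main__":
    # A tiny hand-made example: an empty 'free' box followed by an 8-byte 'mdat' payload.
    data = struct.pack(">I4s", 8, b"free") + struct.pack(">I4s", 16, b"mdat") + b"\x00" * 8
    for box_type, payload_start, box_end in iter_boxes(data):
        print(box_type, payload_start, box_end)   # free 8 8 / mdat 16 24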
• the media data may be provided in a media data 'mdat' box and the movie 'moov' box may be used to enclose the metadata.
• the 'mdat' and 'moov' boxes may be required to be present.
• the movie 'moov' box may include one or more tracks, and each track may reside in one corresponding TrackBox ('trak').
  • a track may be one of the many types, including a media track that refers to samples formatted according to a media compression format (and its encapsulation to the ISO base media file format).
  • a track may be regarded as a logical channel.
  • Movie fragments may be used e.g. when recording content to ISO files e.g. in order to avoid losing data if a recording application crashes, runs out of memory space, or some other incident occurs. Without movie fragments, data loss may occur because the file format may require that all metadata, e.g., the movie box, be written in one contiguous area of the file. Furthermore, when recording a file, there may not be sufficient amount of memory space (e.g., random access memory RAM) to buffer a movie box for the size of the storage available, and re-computing the contents of a movie box when the movie is closed may be too slow. Moreover, movie fragments may enable simultaneous recording and playback of a file using a regular ISO file parser. Furthermore, a smaller duration of initial buffering may be required for progressive downloading, e.g., simultaneous reception and playback of a file when movie fragments are used and the initial movie box is smaller compared to a file with the same media content but structured without movie fragments.
  • the movie fragment feature may enable splitting the metadata that otherwise might reside in the movie box into multiple pieces. Each piece may correspond to a certain period of time of a track. In other words, the movie fragment feature may enable interleaving file metadata and media data. Consequently, the size of the movie box may be limited and the use cases mentioned above be realized.
  • the media samples for the movie fragments may reside in an mdat box, if they are in the same file as the moov box.
  • a moof box may be provided.
  • the moof box may include the information for a certain duration of playback time that would previously have been in the moov box.
  • the moov box may still represent a valid movie on its own, but in addition, it may include an mvex box indicating that movie fragments will follow in the same file.
  • the movie fragments may extend the presentation that is associated to the moov box in time.
• Within the movie fragment there may be a set of track fragments, including anywhere from zero to a plurality per track.
• the track fragments may in turn include anywhere from zero to a plurality of track runs (a.k.a. track fragment runs), each of which documents a contiguous run of samples for that track.
  • the metadata that may be included in the moof box may be limited to a subset of the metadata that may be included in a moov box and may be coded differently in some cases. Details regarding the boxes that can be included in a moof box may be found from the ISO base media file format specification.
• a self-contained movie fragment may be defined to consist of a moof box and an mdat box that are consecutive in the file order and where the mdat box contains the samples of the movie fragment (for which the moof box provides the metadata) and does not contain samples of any other movie fragment (i.e. any other moof box).
  • the track reference mechanism can be used to associate tracks with each other.
  • the TrackReferenceBox includes box(es), each of which provides a reference from the containing track to a set of other tracks. These references are labeled through the box type (i.e. the four-character code of the box) of the contained box(es).
• TrackGroupBox, which is contained in TrackBox, enables indication of groups of tracks where each group shares a particular characteristic or the tracks within a group have a particular relationship.
  • the box contains zero or more boxes, and the particular characteristic or the relationship is indicated by the box type of the contained boxes.
  • the contained boxes include an identifier, which can be used to conclude the tracks belonging to the same track group.
  • the tracks that contain the same type of a contained box within the TrackGroupBox and have the same identifier value within these contained boxes belong to the same track group.
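• A minimal Python sketch of the track grouping rule described above: tracks that carry the same type of contained box within TrackGroupBox with the same identifier value are placed in the same track group. The in-memory track representation and the 'msrc' group type used in the example are illustrative assumptions.

# Sketch: group tracks by the (contained box type, track_group_id) pairs found
# inside each track's TrackGroupBox, following the rule described above.
from collections import defaultdict

def build_track_groups(tracks):
    """tracks: list of dicts like {"track_id": 1, "track_groups": [("msrc", 10)]}."""
    groups = defaultdict(list)
    for track in tracks:
        for box_type, group_id in track.get("track_groups", []):
            groups[(box_type, group_id)].append(track["track_id"])
    return dict(groups)

if __name__ == "__main__":
    tracks = [
        {"track_id": 1, "track_groups": [("msrc", 10)]},
        {"track_id": 2, "track_groups": [("msrc", 10)]},
        {"track_id": 3, "track_groups": [("msrc", 11)]},
    ]
    print(build_track_groups(tracks))
    # {('msrc', 10): [1, 2], ('msrc', 11): [3]}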
  • a uniform resource identifier may be defined as a string of characters used to identify a name of a resource. Such identification enables interaction with representations of the resource over a network, using specific protocols.
• a URI is defined through a scheme specifying a concrete syntax and associated protocol for the URI.
• the uniform resource locator (URL) and the uniform resource name (URN) are forms of URI.
• a URL may be defined as a URI that identifies a web resource and specifies the means of acting upon or obtaining the representation of the resource, specifying both its primary access mechanism and network location.
  • a URN may be defined as a URI that identifies a resource by name in a particular namespace. A URN may be used for identifying a resource without implying its location or how to access it.
  • HTTP Hypertext Transfer Protocol
  • RTP Real-time Transport Protocol
  • UDP User Datagram Protocol
  • HTTP is easy to configure and is typically granted traversal of firewalls and network address translators (NAT), which makes it attractive for multimedia streaming applications.
• Adaptive HTTP streaming was first standardized in Release 9 of 3rd Generation Partnership Project (3GPP) packet-switched streaming (PSS) service (3GPP TS 26.234 Release 9: "Transparent end-to-end packet-switched streaming service (PSS); protocols and codecs").
  • 3GPP 3rd Generation Partnership Project
  • PSS packet- switched streaming
• MPEG took 3GPP AHS Release 9 as a starting point for the MPEG DASH standard (ISO/IEC 23009-1: "Dynamic adaptive streaming over HTTP (DASH) - Part 1: Media presentation description and segment formats," International Standard, 2nd Edition, 2014).
• 3GPP continued to work on adaptive HTTP streaming in communication with MPEG and published 3GP-DASH (Dynamic Adaptive Streaming over HTTP; 3GPP TS 26.247: "Transparent end-to-end packet-switched streaming Service (PSS); Progressive download and dynamic adaptive Streaming over HTTP (3GP-DASH)").
  • MPEG DASH and 3GP-DASH are technically close to each other and may therefore be collectively referred to as DASH.
• DASH Dynamic Adaptive Streaming over HTTP
  • the multimedia content may be stored on an HTTP server and may be delivered using HTTP.
  • the content may be stored on the server in two parts: Media Presentation Description (MPD), which describes a manifest of the available content, its various alternatives, their URL addresses, and other characteristics; and segments, which contain the actual multimedia bitstreams in the form of chunks, in a single file or multiple files.
  • MPD Media Presentation Description
• the MPD provides the necessary information for clients to establish dynamic adaptive streaming over HTTP.
• the MPD contains information describing the media presentation, such as an HTTP uniform resource locator (URL) of each Segment to make a GET Segment request.
  • URL HTTP- uniform resource locator
  • the DASH client may obtain the MPD e.g. by using HTTP, email, thumb drive, broadcast, or other transport methods.
  • the DASH client may become aware of the program timing, media-content availability, media types, resolutions, minimum and maximum bandwidths, and the existence of various encoded alternatives of multimedia components, accessibility features and required digital rights management (DRM), media-component locations on the network, and other content characteristics. Using this information, the DASH client may select the appropriate encoded alternative and start streaming the content by fetching the segments using e.g. HTTP GET requests. After appropriate buffering to allow for network throughput variations, the client may continue fetching the subsequent segments and also monitor the network bandwidth fluctuations. The client may decide how to adapt to the available bandwidth by fetching segments of different alternatives (with lower or higher bitrates) to maintain an adequate buffer.
  • DRM digital rights management
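• A minimal Python sketch of the bandwidth-adaptation decision described above: it picks the highest-@bandwidth Representation that fits within the measured throughput times a safety margin. The margin and the fallback to the lowest bitrate are illustrative policy choices, not part of DASH itself.

# Sketch of a simple rate-adaptation policy: choose the highest-bandwidth
# Representation that fits within the measured throughput times a safety margin.

def select_representation(representations, measured_throughput_bps, margin=0.8):
    """representations: list of dicts with 'id' and 'bandwidth' (as in MPD @bandwidth)."""
    budget = measured_throughput_bps * margin
    candidates = [r for r in representations if r["bandwidth"] <= budget]
    if not candidates:
        return min(representations, key=lambda r: r["bandwidth"])
    return max(candidates, key=lambda r: r["bandwidth"])

if __name__ == "__main__":
    reps = [{"id": "v0", "bandwidth": 1_000_000},
            {"id": "v1", "bandwidth": 3_000_000},
            {"id": "v2", "bandwidth": 6_000_000}]
    print(select_representation(reps, 4_500_000)["id"])   # v1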
• a media presentation consists of a sequence of one or more Periods, each Period contains one or more Groups, each Group contains one or more Adaptation Sets, each Adaptation Set contains one or more Representations, and each Representation consists of one or more Segments.
  • a Representation is one of the alternative choices of the media content or a subset thereof typically differing by the encoding choice, e.g. by bitrate, resolution, language, codec, etc.
  • the Segment contains certain duration of media data, and metadata to decode and present the included media content.
  • a Segment is identified by a URI and can typically be requested by a HTTP GET request.
  • a Segment may be defined as a unit of data associated with an HTTP -URL and optionally a byte range that are specified by an MPD.
  • the DASH MPD complies with Extensible Markup Language (XML) and is therefore specified through elements and attributes as defined in XML.
  • XML Extensible Markup Language
  • descriptor elements are structured in the same way, namely they contain a @schemeIdUri attribute that provides a URI to identify the scheme and an optional attribute @value and an optional attribute @id.
  • the semantics of the element are specific to the scheme employed.
  • the URI identifying the scheme may be a URN or a URL.
  • an independent representation may be defined as a representation that can be processed independently of any other representations.
  • An independent representation may be understood to comprise an independent bitstream or an independent layer of a bitstream.
  • a dependent representation may be defined as a representation for which Segments from its complementary representations are necessary for presentation and/or decoding of the contained media content components.
  • a dependent representation may be understood to comprise e.g. a predicted layer of a scalable bitstream.
  • a complementary representation may be defined as a representation which complements at least one dependent representation.
  • a complementary representation may be an independent representation or a dependent representation.
• Dependent Representations may be described by a Representation element that contains a @dependencyId attribute.
• Dependent Representations can be regarded as regular Representations except that they depend on a set of complementary Representations for decoding and/or presentation.
• the @dependencyId contains the values of the @id attribute of all the complementary Representations, i.e. Representations that are necessary to present and/or decode the media content components contained in this dependent Representation.
  • Track references of ISOBMFF can be reflected in the list of four-character codes in the @associationType attribute of DASH MPD that is mapped to the list of Representation@id values given in the @associationId in a one to one manner. These attributes may be used for linking media Representations with metadata Representations.
  • a DASH service may be provided as on-demand service or live service.
• For an on-demand service, the MPD is static and all Segments of a Media Presentation are already available when a content provider publishes the MPD.
• For a live service, the MPD may be static or dynamic depending on the Segment URL construction method employed by the MPD, and Segments are created continuously as the content is produced and published to DASH clients by a content provider.
• the Segment URL construction method may be either the template-based Segment URL construction method or the Segment list generation method.
• With the template-based method, a DASH client is able to construct Segment URLs without updating the MPD before requesting a Segment.
• With the Segment list generation method, a DASH client has to periodically download the updated MPDs to get Segment URLs.
• In this respect, the template-based Segment URL construction method is superior to the Segment list generation method.
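• A minimal Python sketch of template-based Segment URL construction: the client expands a SegmentTemplate-style template using the $RepresentationID$ and $Number$ identifiers, so Segment URLs can be formed without refreshing the MPD for every request. Width-formatted identifiers (e.g. $Number%05d$) and $Time$-based templates are omitted for brevity, and the example URL is illustrative.

# Sketch: expand a DASH SegmentTemplate-style URL template for consecutive
# Segment numbers of a given Representation.

def build_segment_url(template, representation_id, number):
    return (template
            .replace("$RepresentationID$", representation_id)
            .replace("$Number$", str(number)))

if __name__ == "__main__":
    template = "https://example.com/video/$RepresentationID$/seg-$Number$.m4s"
    for n in range(1, 4):
        print(build_segment_url(template, "video_1080p", n))
    # .../video_1080p/seg-1.m4s, seg-2.m4s, seg-3.m4s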
  • An Initialization Segment may be defined as a Segment containing metadata that is necessary to present the media streams encapsulated in Media Segments.
• an Initialization Segment may comprise the Movie Box ('moov') which might not include metadata for any samples, i.e. any metadata for samples is provided in 'moof' boxes.
• a Media Segment contains a certain duration of media data for playback at a normal speed; such duration is referred to as Media Segment duration or Segment duration.
• the content producer or service provider may select the Segment duration according to the desired characteristics of the service. For example, a relatively short Segment duration may be used in a live service to achieve a short end-to-end latency. The reason is that Segment duration is typically a lower bound on the end-to-end latency perceived by a DASH client since a Segment is a discrete unit of generating media data for DASH. Content generation is typically done in such a manner that a whole Segment of media data is made available for a server. Furthermore, many client implementations use a Segment as the unit for GET requests.
  • a Segment can be requested by a DASH client only when the whole duration of Media Segment is available as well as encoded and encapsulated into a Segment.
  • different strategies of selecting Segment duration may be used.
  • a Segment may be further partitioned into Subsegments e.g. to enable downloading segments in multiple parts. Subsegments may be required to contain complete access units.
  • Subsegments may be indexed by Segment Index box, which contains information to map presentation time range and byte range for each Subsegment.
  • the Segment Index box may also describe subsegments and stream access points in the segment by signaling their durations and byte offsets.
• a DASH client may use the information obtained from Segment Index box(es) to make an HTTP GET request for a specific Subsegment using a byte-range HTTP request. If a relatively long Segment duration is used, then Subsegments may be used to keep the size of HTTP responses reasonable and flexible for bitrate adaptation.
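• A minimal Python sketch of issuing a byte-range request for a Subsegment: given Subsegment durations and sizes assumed to have been parsed from a Segment Index ('sidx') box, it locates the Subsegment covering a requested presentation time and forms the corresponding HTTP Range header value.

# Sketch: compute the byte range of the Subsegment covering a requested
# presentation time, using durations and sizes taken from a Segment Index box.

def subsegment_byte_range(subsegments, first_offset, target_time):
    """subsegments: list of (duration_seconds, size_bytes) in presentation order."""
    time, offset = 0.0, first_offset
    for duration, size in subsegments:
        if time <= target_time < time + duration:
            return offset, offset + size - 1
        time += duration
        offset += size
    raise ValueError("target time beyond the indexed Segment")

if __name__ == "__main__":
    subs = [(2.0, 400_000), (2.0, 380_000), (2.0, 420_000)]
    start, end = subsegment_byte_range(subs, first_offset=1_024, target_time=3.1)
    print(f"Range: bytes={start}-{end}")   # Range: bytes=401024-781023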
  • the indexing information of a segment may be put in the single box at the beginning of that segment, or spread among many indexing boxes in the segment. Different methods of spreading are possible, such as hierarchical, daisy chain, and hybrid. This technique may avoid adding a large box at the beginning of the segment and therefore may prevent a possible initial download delay.
  • the notation (Sub)segment refers to either a Segment or a Subsegment. If Segment Index boxes are not present, the notation (Sub)segment refers to a Segment. If Segment Index boxes are present, the notation (Sub)segment may refer to a Segment or a Subsegment, e.g. depending on whether the client issues requests on Segment or Subsegment basis.
  • MPEG-DASH defines segment-container formats for both ISO Base Media File Format and MPEG-2 Transport Streams.
  • Other specifications may specify segment formats based on other container formats. For example, a segment format based on Matroska container file format has been proposed.
  • DASH supports rate adaptation by dynamically requesting Media Segments from different Representations within an Adaptation Set to match varying network bandwidth.
• When a DASH client switches up/down between Representations, coding dependencies within a Representation have to be taken into account.
  • a Representation switch may happen at a random access point (RAP), which is typically used in video coding techniques such as H.264/AVC.
  • RAP random access point
  • SAP Stream Access Point
  • a SAP is specified as a position in a Representation that enables playback of a media stream to be started using only the information contained in Representation data starting from that position onwards (preceded by initialising data in the Initialisation Segment, if any). Hence, Representation switching can be performed in SAP.
• Representation properties signalled in the MPD include e.g. the frame rate (@frameRate), the bitrate (@bandwidth), and an indicated quality ordering between the Representations (@qualityRanking).
• The semantics of @qualityRanking are specified as follows: it specifies a quality ranking of the Representation relative to other Representations in the same Adaptation Set. Lower values represent higher quality content. If not present, then no ranking is defined.
  • SAP Type 1 corresponds to what is known in some coding schemes as a“Closed GOP random access point” (in which all pictures, in decoding order, can be correctly decoded, resulting in a continuous time sequence of correctly decoded pictures with no gaps) and in addition the first picture in decoding order is also the first picture in presentation order.
  • SAP Type 2 corresponds to what is known in some coding schemes as a“Closed GOP random access point” (in which all pictures, in decoding order, can be correctly decoded, resulting in a continuous time sequence of correctly decoded pictures with no gaps), for which the first picture in decoding order may not be the first picture in presentation order.
• SAP Type 3 corresponds to what is known in some coding schemes as an "Open GOP random access point", in which there may be some pictures in decoding order that cannot be correctly decoded and have presentation times less than that of the intra-coded picture associated with the SAP.
• In some video coding schemes, each intra picture has been a random access point in a coded sequence.
• the capability of flexible use of multiple reference pictures for inter prediction in some video coding standards, such as H.264/AVC and H.265/HEVC, has the consequence that an intra picture may not be sufficient for random access. Therefore, pictures may be marked with respect to their random access point functionality rather than inferring such functionality from the coding type; for example an IDR picture as specified in the H.264/AVC standard can be used as a random access point.
  • a closed group of pictures is such a group of pictures in which all pictures can be correctly decoded. For example, in H.264/AVC, a closed GOP may start from an IDR access unit.
  • An open group of pictures is such a group of pictures in which pictures preceding the initial intra picture in output order may not be correctly decodable but pictures following the initial intra picture in output order are correctly decodable.
  • Such an initial intra picture may be indicated in the bitstream and/or concluded from the indications from the bitstream, e.g. by using the CRA NAL unit type in HEVC.
  • the pictures preceding the initial intra picture starting an open GOP in output order and following the initial intra picture in decoding order may be referred to as leading pictures. There are two types of leading pictures: decodable and non-decodable.
  • Decodable leading pictures such as RADL pictures of HEVC
  • decodable leading pictures use only the initial intra picture or subsequent pictures in decoding order as reference in inter prediction.
  • Non-decodable leading pictures such as RASL pictures of HEVC, are such that cannot be correctly decoded when the decoding is started from the initial intra picture starting the open GOP.
  • a DASH Preselection defines a subset of media components of an MPD that are expected to be consumed jointly by a single decoder instance, wherein consuming may comprise decoding and rendering.
  • the Adaptation Set that contains the main media component for a Preselection is referred to as main Adaptation Set.
  • each Preselection may include one or multiple partial Adaptation Sets. Partial Adaptation Sets may need to be processed in combination with the main Adaptation Set.
  • a main Adaptation Set and partial Adaptation Sets may be indicated by one of the two means: a preselection descriptor or a Preselection element.
• Virtual reality is a rapidly developing area of technology in which image or video content, sometimes accompanied by audio, is provided to a user device such as a user headset (a.k.a. head-mounted display).
  • the user device may be provided with a live or stored feed from a content source, the feed representing a virtual space for immersive output through the user device.
  • 3DoF three degrees of freedom
  • rendering by taking the position of the user device and changes of the position into account can enhance the immersive experience.
  • an enhancement to 3DoF is a six degrees-of-freedom (6DoF) virtual reality system, where the user may freely move in Euclidean space as well as rotate their head in the yaw, pitch and roll axes.
  • 6DoF degrees-of-freedom virtual reality systems
• Volumetric content comprises data representing spaces and/or objects in three dimensions from all angles, enabling the user to move fully around the space and/or objects to view them from any angle.
• Such content may be defined by data describing the geometry (e.g. shape, size, position in a three-dimensional space) and attributes such as colour, opacity and reflectance.
• the data may also define temporal changes in the geometry and attributes at given time instances, similar to frames in two-dimensional video.
  • VR video may be viewed on a head-mounted display (HMD) that may be capable of displaying e.g. about 100-degree field of view.
  • HMD head-mounted display
  • the spatial subset of the VR video content to be displayed may be selected based on the orientation of the HMD.
  • a flat-panel viewing environment is assumed, wherein e.g. up to 40-degree field-of-view may be displayed.
  • wide-FOV content e.g. fisheye
  • MPEG Omnidirectional Media Format (ISO/IEC 23090-2) is a virtual reality (VR) system standard.
  • OMAF defines a media format (comprising both file format derived from ISOBMFF and streaming formats for DASH and MPEG Media Transport).
  • OMAF version 1 supports 360° video, images, and audio, as well as the associated timed text and facilitates three degrees of freedom (3DoF) content consumption, meaning that a viewport can be selected with any azimuth and elevation range and tilt angle that are covered by the omnidirectional content but the content is not adapted to any translational changes of the viewing position.
  • 3DoF degrees of freedom
  • the viewport-dependent streaming scenarios described further below have also been designed for 3DoF although could potentially be adapted to a different number of degrees of freedom.
  • a real-world audio-visual scene (A) may be captured by audio sensors as well as a set of cameras or a camera device with multiple lenses and sensors. The acquisition results in a set of digital image/video (Bi) and audio (Ba) signals.
  • the cameras/lenses may cover all directions around the center point of the camera set or camera device, thus the name of 360-degree video.
  • Audio can be captured using many different microphone configurations and stored as several different content formats, including channel-based signals, static or dynamic (i.e. moving through the 3D scene) object signals, and scene-based signals (e.g., Higher Order Ambisonics).
• the channel-based signals may conform to one of the loudspeaker layouts defined in CICP (Coding-Independent Code-Points).
• CICP Coding-Independent Code-Points
• the loudspeaker layout signals of the rendered immersive audio program may be binauralized for presentation via headphones.
  • the input images of one time instance may be stitched to generate a projected picture representing one view.
  • An example of image stitching, projection, and region-wise packing process for monoscopic content is illustrated with Figure 2.
  • Input images (Bi) are stitched and projected onto a three-dimensional projection structure that may for example be a unit sphere.
  • the projection structure may be considered to comprise one or more surfaces, such as plane(s) or part(s) thereof.
  • a projection structure may be defined as three-dimensional structure consisting of one or more surface(s) on which the captured VR image/video content is projected, and from which a respective projected picture can be formed.
  • the image data on the projection structure is further arranged onto a two-dimensional projected picture (C).
• a region-wise packing is then applied to map the projected picture (C) onto a packed picture (D). If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding. Otherwise, regions of the projected picture (C) are mapped onto a packed picture (D) by indicating the location, shape, and size of each region in the packed picture, and the packed picture (D) is given as input to image/video encoding.
• region-wise packing may be defined as a process by which a projected picture is mapped to a packed picture.
• the term packed picture may be defined as a picture that results from region-wise packing of a projected picture.
  • the input images of one time instance are stitched to generate a projected picture representing two views (CL, CR), one for each eye.
  • Both views (CL, CR) can be mapped onto the same packed picture (D), and encoded by a traditional 2D video encoder.
• each view of the projected picture can be mapped to its own packed picture, in which case the image stitching, projection, and region-wise packing is performed as illustrated in Figure 2.
  • a sequence of packed pictures of either the left view or the right view can be independently coded or, when using a multiview video encoder, predicted from the other view.
• Input images (Bi) are stitched and projected onto two three-dimensional projection structures, one for each eye.
  • the image data on each projection structure is further arranged onto a two-dimensional projected picture (CL for left eye, CR for right eye), which covers the entire sphere.
  • Frame packing is applied to pack the left view picture and right view picture onto the same projected picture.
• region-wise packing is then applied to pack the projected picture onto a packed picture, and the packed picture (D) is given as input to image/video encoding. If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding.
  • the image stitching, projection, and region-wise packing process can be carried out multiple times for the same source images to create different versions of the same content, e.g. for different orientations of the projection structure.
  • the region-wise packing process can be performed multiple times from the same projected picture to create more than one sequence of packed pictures to be encoded.
• 360-degree panoramic content covers horizontally the full 360-degree field-of-view around the capturing position of an imaging device.
  • the vertical field-of-view may vary and can be e.g. 180 degrees.
  • Panoramic image covering 360-degree field-of-view horizontally and 180-degree field-of-view vertically can be represented by a sphere that has been mapped to a two-dimensional image plane using equirectangular projection (ERP).
  • ERP equirectangular projection
  • the horizontal coordinate may be considered equivalent to a longitude
  • the vertical coordinate may be considered equivalent to a latitude, with no transformation or scaling applied.
  • the process of forming a monoscopic equirectangular panorama picture is illustrated in Figure 4.
  • a set of input images such as fisheye images of a camera array or a camera device with multiple lenses and sensors, is stitched onto a spherical image.
  • the spherical image is further projected onto a cylinder (without the top and bottom faces).
  • the cylinder is unfolded to form a two-dimensional projected picture.
  • one or more of the presented steps may be merged; for example, the input images may be directly projected onto a cylinder without an intermediate projection onto a sphere.
  • the projection structure for equirectangular panorama may be considered to be a cylinder that comprises a single surface.
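• A minimal Python sketch of the equirectangular mapping described above: a spherical direction given as longitude and latitude (in degrees) is mapped to a sample position in the ERP picture, with the horizontal coordinate proportional to longitude and the vertical coordinate to latitude. Sign and orientation conventions vary between specifications; the ones used here are illustrative.

# Sketch: map a spherical direction (longitude, latitude in degrees) to a sample
# position in an equirectangular (ERP) picture of the given width and height.

def sphere_to_erp(longitude_deg, latitude_deg, width, height):
    x = (longitude_deg + 180.0) / 360.0 * width
    y = (90.0 - latitude_deg) / 180.0 * height
    return x, y

if __name__ == "__main__":
    print(sphere_to_erp(0.0, 0.0, 3840, 1920))       # picture centre: (1920.0, 960.0)
    print(sphere_to_erp(-180.0, 90.0, 3840, 1920))   # top-left corner: (0.0, 0.0)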
• 360-degree content can be mapped onto different types of solid geometrical structures, such as a polyhedron (i.e. a three-dimensional solid object containing flat polygonal faces, straight edges and sharp corners or vertices, e.g., a cube or a pyramid), a cylinder (by projecting a spherical image onto the cylinder, as described above with the equirectangular projection), a cylinder (directly without projecting onto a sphere first), a cone, etc. and then unwrapped to a two-dimensional image plane.
  • panoramic content with 360-degree horizontal field-of-view but with less than 180-degree vertical field-of-view may be considered special cases of equirectangular projection, where the polar areas of the sphere have not been mapped onto the two-dimensional image plane.
• a panoramic image may have less than 360-degree horizontal field-of-view and up to 180-degree vertical field-of-view, while otherwise having the characteristics of the equirectangular projection format.
  • Region-wise packing information may be encoded as metadata in or along the bitstream.
• the packing information may comprise a region-wise mapping from a pre-defined or indicated source format to the packed picture format, e.g. from a projected picture to a packed picture, as described earlier.
  • Rectangular region-wise packing metadata may be described as follows:
• the metadata defines a rectangle in a projected picture, the respective rectangle in the packed picture, and an optional transformation of rotation by 90, 180, or 270 degrees and/or horizontal and/or vertical mirroring. Rectangles may, for example, be indicated by the locations of the top-left corner and the bottom-right corner.
• the mapping may comprise resampling. As the sizes of the respective rectangles can differ in the projected and packed pictures, the mechanism infers region-wise resampling.
• region-wise packing provides signalling for the following usage scenarios:
  • regions of ERP or faces of CMP can have different sampling densities and the underlying projection structure can have different orientations.
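• A minimal Python sketch of applying one rectangular region-wise packing entry: a rectangle of the projected picture is copied into a (possibly differently sized) rectangle of the packed picture with nearest-neighbour resampling. Rotation and mirroring, which the metadata may also signal, are omitted for brevity, and pictures are represented as nested lists rather than decoded sample arrays.

# Sketch: copy one rectangle of the projected picture into a rectangle of the
# packed picture with nearest-neighbour resampling (region-wise resampling when
# the two rectangles have different sizes).

def pack_region(projected, proj_rect, packed, packed_rect):
    px, py, pw, ph = proj_rect      # top-left x, top-left y, width, height
    qx, qy, qw, qh = packed_rect
    for j in range(qh):
        for i in range(qw):
            # map the packed-rectangle sample back to the projected rectangle
            src_x = px + (i * pw) // qw
            src_y = py + (j * ph) // qh
            packed[qy + j][qx + i] = projected[src_y][src_x]

if __name__ == "__main__":
    projected = [[10 * y + x for x in range(8)] for y in range(4)]
    packed = [[0] * 4 for _ in range(4)]
    # shrink the 8x2 top area of the projected picture into a 4x2 packed region
    pack_region(projected, (0, 0, 8, 2), packed, (0, 0, 4, 2))
    print(packed[0])   # [0, 2, 4, 6]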
  • a guard band may be defined as an area in a packed picture that is not rendered but may be used to improve the rendered part of the packed picture to avoid or mitigate visual artifacts such as seams.
• the OMAF allows the omission of image stitching, projection, and region-wise packing, and allows encoding the image/video data in its captured format.
  • images (D) are considered the same as images (Bi) and a limited number of fisheye images per time instance are encoded.
  • the stitching process is not needed, since the captured signals are inherently immersive and omnidirectional.
  • the stitched images (D) are encoded as coded images (Ei) or a coded video bitstream (Ev).
  • the captured audio (Ba) is encoded as an audio bitstream (Ea).
  • the coded images, video, and/or audio are then composed into a media file for file playback (F) or a sequence of an initialization segment and media segments for streaming (Fs), according to a particular media container file format.
  • the media container file format is the ISO base media file format.
• the file encapsulator also includes metadata into the file or the segments, such as projection and region-wise packing information assisting in rendering the decoded packed pictures.
  • the metadata in the file may include:
• Region-wise packing information may be encoded as metadata in or along the bitstream, for example as region-wise packing SEI message(s) and/or as region-wise packing boxes in a file containing the bitstream.
• the packing information may comprise a region-wise mapping from a pre-defined or indicated source format to the packed picture format, e.g. from a projected picture to a packed picture, as described earlier.
• the region-wise mapping information may for example comprise for each mapped region a source rectangle (a.k.a. projected region) in the projected picture and a destination rectangle (a.k.a. packed region) in the packed picture.
• the packing information may comprise one or more of the following: the orientation of the three-dimensional projection structure relative to a coordinate system, an indication of which projection format is used, region-wise quality ranking indicating the picture quality ranking between regions and/or first and second spatial region sequences, and one or more transformation operations, such as rotation by 90, 180, or 270 degrees, horizontal mirroring, and vertical mirroring.
• the semantics of the packing information may be specified in a manner that they are indicative, for each sample location within packed regions of a decoded picture, of the respective spherical coordinate location.
  • the segments (Fs) may be delivered using a delivery mechanism to a player.
  • the file that the file encapsulator outputs (F) is identical to the file that the file decapsulator inputs (F').
  • a file decapsulator processes the file (F') or the received segments (F's) and extracts the coded bitstreams (E'a, E'v, and/or E'i) and parses the metadata.
  • the audio, video, and/or images are then decoded into decoded signals (B'a for audio, and D' for images/video).
  • the decoded packed pictures (D') are projected onto the screen of a head-mounted display or any other display device based on the current viewing orientation or viewport and the projection, spherical coverage, projection structure orientation, and region-wise packing metadata parsed from the file.
  • decoded audio (B'a) is rendered, e.g. through headphones, according to the current viewing orientation.
  • the current viewing orientation is determined by the head tracking and possibly also eye tracking functionality. Besides being used by the Tenderer to render the appropriate part of decoded video and audio signals, the current viewing orientation may also be used the video and audio decoders for decoding optimization.
  • an application running on a HMD or on another display device renders a portion of the 360-degree video.
  • This portion may be defined as a viewport.
  • a viewport may be understood as a window on the 360-degree world represented in the omnidirectional video displayed via a rendering display.
  • a viewport may be defined as a part of the spherical video that is currently displayed.
  • a viewport may be characterized by horizontal and vertical field of views (FOV or FoV).
  • a viewpoint may be defined as the point or space from which the user views the scene; it usually corresponds to a camera position. Slight head motion does not imply a different viewpoint.
  • a viewing position may be defined as the position within a viewing space from which the user views the scene.
  • a viewing space may be defined as a 3D space of viewing positions within which rendering of image and video is enabled and VR experience is valid.
  • Typical representation formats for volumetric content include triangle meshes, point clouds and voxels.
  • Temporal information about the content may comprise individual capture instances, i.e. frames or the position of objects as a function of time.
  • the representation format selected for volumetric content may depend on how the data is to be used. For example, dense voxel arrays may be used to represent volumetric medical images. In three-dimensional graphics, polygon meshes are extensively used. Point clouds, on the other hand, are well suited to applications such as capturing real-world scenes where the topology of the scene is not necessarily a two-dimensional surface or manifold. Another method is to code three-dimensional data to a set of texture and depth maps. Closely related to this is the use of elevation and multi-level surface maps. For the avoidance of doubt, embodiments herein are applicable to any of the above technologies.
  • A voxel of a three-dimensional world corresponds to a pixel of a two-dimensional world. Voxels exist in a three-dimensional grid layout.
  • An octree is a tree data structure used to partition a three-dimensional space. Octrees are the three-dimensional analog of quadtrees.
  • a sparse voxel octree (SVO) describes a volume of a space containing a set of solid voxels of varying sizes. Empty areas within the volume are absent from the tree, which is why it is called "sparse".
  • a three-dimensional volumetric representation of a scene may be determined as a plurality of voxels on the basis of input streams of at least one multicamera device.
  • at least one but preferably a plurality (i.e. 2, 3, 4, 5 or more) of multicamera devices may be used to capture 3D video representation of a scene.
  • the multicamera devices are distributed in different locations in respect to the scene, and therefore each multicamera device captures a different 3D video representation of the scene.
  • the 3D video representations captured by each multicamera device may be used as input streams for creating a 3D volumetric representation of the scene, said 3D volumetric representation comprising a plurality of voxels.
  • Voxels may be formed from the captured 3D points e.g.
  • Voxels may also be formed through the construction of the sparse voxel octree. Each leaf of such a tree represents a solid voxel in world space; the root node of the tree represents the bounds of the world.
  • the sparse voxel octree construction may have the following steps: 1) map each input depth map to a world space point cloud, where each pixel of the depth map is mapped to one or more 3D points; 2) determine voxel attributes such as colour and surface normal vector by examining the neighbourhood of the source pixel(s) in the camera images and the depth map; 3) determine the size of the voxel based on the depth value from the depth map and the resolution of the depth map; 4) determine the SVO level for the solid voxel as a function of its size relative to the world bounds; 5) determine the voxel coordinates on that level relative to the world bounds; 6) create new and/or traverse existing SVO nodes until arriving at the determined voxel coordinates; 7) insert the solid voxel as a leaf of the tree, possibly replacing or merging attributes from a previously existing voxel at those coordinates. Nevertheless, the size of voxel within the 3D volumetric
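  • The construction steps listed above may be sketched as follows. This is only an illustrative sketch under simplifying assumptions (a cubic world, one point per insertion, voxel sizes quantized to power-of-two fractions of the world size); the class and function names are hypothetical.

```python
import math

class SvoNode:
    """A node of a sparse voxel octree; absent children represent empty space."""
    def __init__(self):
        self.children = [None] * 8   # one entry per octant
        self.leaf_attrs = None       # colour, surface normal, etc. for a solid voxel

def insert_voxel(root, world_min, world_size, point, attrs, voxel_size):
    """Insert a solid voxel covering 'point' into the SVO rooted at 'root'.

    world_min (x, y, z) and world_size define cubic world bounds. The SVO level
    is chosen from the voxel size relative to the world bounds (step 4), nodes
    are created/traversed down to that level (steps 5-6), and the voxel
    attributes are stored in the leaf (step 7)."""
    level = max(0, int(math.floor(math.log2(world_size / voxel_size))))
    node, size, origin = root, float(world_size), list(world_min)
    for _ in range(level):
        size /= 2.0
        octant = 0
        for axis in range(3):
            # Choose the child octant that contains the point on this level.
            if point[axis] >= origin[axis] + size:
                octant |= 1 << axis
                origin[axis] += size
        if node.children[octant] is None:
            node.children[octant] = SvoNode()
        node = node.children[octant]
    node.leaf_attrs = attrs   # possibly replacing attributes of an existing voxel
    return node

# Example (hypothetical values): a 1-unit voxel in a 16-unit world.
# root = SvoNode()
# insert_voxel(root, (0.0, 0.0, 0.0), 16.0, (3.2, 5.1, 7.9), {"colour": (255, 0, 0)}, 1.0)
```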
  • a volumetric video frame may be regarded as a complete sparse voxel octree that models the world at a specific point in time in a video sequence.
  • Voxel attributes contain information like colour, opacity, surface normal vectors, and surface material properties. These are referenced in the sparse voxel octrees (e.g. colour of a solid voxel), but can also be stored separately.
  • Point clouds are commonly used data structures for storing volumetric content. Compared to point clouds, sparse voxel octrees describe a recursive subdivision of a finite volume with solid voxels of varying sizes, while point clouds describe an unorganized set of separate points limited only by the precision of the used coordinate values.
  • User's position can be detected relative to content provided within the volumetric virtual reality content, e.g. so that the user can move freely within a given virtual reality space, around individual objects or groups of objects, and can view the objects from different angles depending on the movement (e.g. rotation and location) of their head in the real world.
  • the user may also view and explore a plurality of different virtual reality spaces and move from one virtual reality space to another one.
  • the angular extent of the environment observable or hearable through a rendering arrangement, such as with a head-mounted display, may be called the visual field of view (FOV).
  • the actual FOV observed or heard by a user depends on the inter-pupillary distance and on the distance between the lenses of the virtual reality headset and the user's eyes, but the FOV can be considered to be approximately the same for all users of a given display device when the virtual reality headset is being worn by the user.
  • a volumetric image/video delivery system may comprise providing a plurality of patches representing part of a volumetric scene, and providing, for each patch, patch visibility information indicative of a set of directions from which a forward surface of the patch is visible.
  • a volumetric image/video delivery system may further comprise providing one or more viewing positions associated with a client device, and processing one or more of the patches dependent on whether the patch visibility information indicates that the forward surface of the one or more patches is visible from the one or more viewing positions.
  • Patch visibility information is data indicative of where in the volumetric space the forward surface of the patch can be seen.
  • patch visibility information may comprise a visibility cone, which may comprise a visibility cone direction vector (X, Y, Z) and an opening angle (A).
  • the opening angle (A) defines a set of spatial angles from which the forward surface of the patch can be seen.
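  • A minimal sketch of how such a visibility cone could be used for culling is given below; the data layout (unit-length cone axis, opening angle in radians, a representative patch centre point) is an assumption for illustration and is not taken from any standard.

```python
import math

def patch_visible(cone_dir, opening_angle_rad, patch_center, viewing_position):
    """Return True if the forward surface of a patch may be visible from
    'viewing_position', given a visibility cone (unit axis cone_dir and
    opening angle around it)."""
    # Vector from the patch towards the viewer.
    v = [viewing_position[i] - patch_center[i] for i in range(3)]
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0.0:
        return True  # viewer located at the patch itself
    v = [c / norm for c in v]
    # Angle between the cone axis and the patch-to-viewer direction.
    cos_angle = sum(v[i] * cone_dir[i] for i in range(3))
    return math.acos(max(-1.0, min(1.0, cos_angle))) <= opening_angle_rad

# Example: a patch facing +Z with a 60-degree opening angle.
# patch_visible((0.0, 0.0, 1.0), math.radians(60), (0.0, 0.0, 0.0), (0.2, 0.1, 2.0))
```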
  • the patch visibility metadata may comprise a definition of a bounding sphere surface and sphere region metadata, identical or similar to that specified by the omnidirectional media format (OMAF) standard (ISO/IEC 23090-2).
  • the bounding sphere surface may for example be defined by a three-dimensional location of the centre of the sphere, and the radius of the sphere.
  • When the viewing position collocates with the bounding sphere surface, the patch may be considered visible within the indicated sphere region.
  • the geometry of the bounding surface may also be something other than a sphere, such as cylinder, cube, or cuboid.
  • Multiple sets of patch visibility metadata may be defined for the same three-dimensional location of the centre of the bounding surface, but with different radii (or information indicative of the distance of the bounding surface from the three-dimensional location). Indicating several pieces of patch visibility metadata may be beneficial to handle occlusions.
  • a volumetric image/video delivery system may comprise one or more patch culling modules.
  • One patch culling module may be configured to determine which patches are transmitted to a user device, for example the rendering module of the headset.
  • Another patch culling module may be configured to determine which patches are decoded.
  • a third patch culling module may be configured to determine which decoded patches are passed to rendering. Any combination of patch culling modules may be present or active in a volumetric image/video delivery or playback system. Patch culling may utilize the patch visibility information of patches, the current viewing position, the current viewing orientation, the expected future viewing positions, and/or the expected future viewing orientations.
  • each volumetric patch may be projected to a two-dimensional colour (or other form of texture) image and to a corresponding depth image, also known as a depth map. This conversion enables each patch to be converted back to volumetric form at a client rendering module of the headset using both images.
  • a source volume of a volumetric image may be projected onto one or more projection surfaces. Patches on the projection surfaces may be determined, and those patches may be arranged onto one or more two-dimensional frames.
  • texture and depth patches may be formed similarly, e.g. by projecting a source volume to a projection surface and inpainting a sparse projection.
  • a three-dimensional (3D) scene model comprising geometry primitives such as mesh elements, points, and/or voxel, is projected onto one or more projection surfaces.
  • the "unfolding" may include determination of patches.
  • 2D planes may then be encoded using standard 2D image or video compression technologies. Relevant projection geometry information may be transmitted alongside the encoded video files to the decoder.
  • the decoder may then decode the coded image/video sequence and perform the inverse projection to regenerate the 3D scene model object in any desired representation format, which may be different from the starting format e.g. reconstructing a point cloud from original mesh model data.
  • In some cases, multiple points of a volumetric video or image are projected to the same pixel position.
  • Such cases may be handled by creating more than one "layer”.
  • terms such as PCC layer or volumetric video layer may be used to make a distinction from a layer of scalable video coding.
  • Each volumetric (3D) patch may be projected onto more than one 2D patch, representing different layers of visual data, such as points, projected onto the same 2D positions.
  • the patches may be organized for example based on ascending distance to the projection plane.
  • Let H(u,v) be the set of points of the current patch that get projected to the same pixel (u, v).
  • the first layer, also called the near layer, stores the point of H(u,v) with the lowest depth D0.
  • the second layer, referred to as the far layer, captures the point of H(u,v) with the highest depth within the interval [D0, D0+Δ], where Δ is a user-defined parameter that describes the surface thickness. A sketch of this two-layer construction is given below.
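  • The two-layer construction above may be sketched as follows; D0 and the surface-thickness parameter Δ follow the description, while the data structures and names are illustrative.

```python
def build_layers(points_per_pixel, surface_thickness):
    """points_per_pixel maps (u, v) -> list of depths H(u, v) of the points of
    the current patch projected to that pixel. Returns near and far depth maps."""
    near, far = {}, {}
    for (u, v), depths in points_per_pixel.items():
        d0 = min(depths)                       # near layer: lowest depth D0
        near[(u, v)] = d0
        # Far layer: highest depth within [D0, D0 + delta].
        in_range = [d for d in depths if d0 <= d <= d0 + surface_thickness]
        far[(u, v)] = max(in_range)
    return near, far

# Example: three points land on pixel (10, 4); with delta = 2 the far layer
# keeps depth 6 and ignores the outlier at depth 9.
# build_layers({(10, 4): [5, 6, 9]}, surface_thickness=2)
```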
  • volumetric image/video can comprise, additionally or alternatively to texture and depth, other types of patches, such as reflectance, opacity or transparency (e.g. alpha channel patches), surface normal, albedo, and/or other material or surface attribute patches.
  • The two-dimensional form of patches may be packed into one or more atlases.
  • Texture atlases are known in the art, comprising an image consisting of sub-images, the image being treated as a single unit by graphics hardware and which can be compressed and transmitted as a single image for subsequent identification and decompression.
  • Geometry atlases may be constructed similarly to texture atlases. Texture and geometry atlases may be treated as separate pictures (and as separate picture sequences in case of volumetric video), or texture and geometry atlases may be packed onto the same frame, e.g. similarly to how frame packing is conventionally performed. Atlases may be encoded as frames with an image or video encoder.
  • the sub-image layout in an atlas may also be organized such that it is possible to encode a patch or a set of patches having similar visibility information into spatiotemporal units that can be decoded independently of other spatiotemporal units.
  • such independently decodable spatiotemporal units may for example be arranged along a tile grid as understood in the context of High Efficiency Video Coding (HEVC).
  • an atlas may be organized in a manner such that a patch or a group of patches having similar visibility information can be encoded as a motion-constrained tile set (MCTS).
  • one or more (but not the entire set of) spatiotemporal units may be provided and stored as a track, as is understood in the context of the ISO base media file format, or as any similar container file format structure.
  • a track may be referred to as a patch track.
  • Patch tracks may for example be sub-picture tracks, as understood in the context of OMAF, or tile tracks, as understood in the context of ISO/IEC 14496-15.
  • Several versions of the one or more atlases are encoded. Different versions may include, but are not limited to, one or more of the following: different bitrate versions of the one or more atlases at the same resolution; different spatial resolutions of the atlases; and different versions for different random access intervals; these may include one or more intra-coded atlases (where every picture can be randomly accessed).
  • combinations of patches from different versions of the texture atlas may be prescribed and described as metadata, such as extractor tracks, as will be understood in the context of OMAF and/or ISO/IEC 14496-15.
  • a prescription may be authored in a manner so that a limit, e.g. a codec level limit, is obeyed. For example, patches may be selected from a lower-resolution texture atlas according to subjective importance. The selection may be performed in a manner that is not related to the viewing position.
  • the prescription may be accompanied by metadata characterizing the obeyed limit(s), e.g. the codec Level that is obeyed.
  • a prescription may be made specific to a visibility cone (or generally to a specific visibility) and hence excludes the patches not visible in the visibility cone.
  • the selection of visibility cones for which the prescriptions are generated may be limited to a reasonable number, such that switching from one prescription to another is not expected to occur frequently.
  • the visibility cones of prescriptions may overlap to avoid switching back and forth between two prescriptions.
  • the prescription may be accompanied by metadata indicative of the visibility cone (or generally visibility information).
  • a prescription may use a specific grid or pattern of independent spatiotemporal units.
  • a prescription may use a certain tile grid, wherein tile boundaries are also MCTS boundaries.
  • the prescription may be accompanied by metadata indicating potential sources (e.g. track groups, tracks, or representations) that are suitable as spatiotemporal units.
  • a patch track forms a Representation in the context of DASH. Consequently, the Representation element in DASH MPD may provide metadata on the patch, such as patch visibility metadata, related to the patch track. Clients may select patch Representations and request
  • a collector track may be defined as a track that extracts implicitly or explicitly coded video data, such as coded video data of MCTSs or sub-pictures, from other tracks. When resolved by a file reader or alike, a collector track may result in a bitstream that conforms to a video coding standard or format.
  • a collector track may for example extract MCTSs or sub-pictures to form a coded picture sequence where MCTSs or sub-pictures are arranged to a grid. For example, when a collector track extracts two MCTSs or sub-pictures, they may be arranged into a 2x1 grid of MCTSs or sub-pictures.
  • an extractor track that extracts MCTSs or sub-pictures from other tracks may be regarded as a collector track.
  • a tile base track as discussed subsequently is another example of a collector track.
  • a collector track may also be called a collection track.
  • a track that is a source for extracting to a collector track may be referred to as a collection item track.
  • Extractors specified in ISO/IEC 14496-15 for H.264/AVC and HEVC enable compact formation of tracks that extract NAL unit data by reference.
  • An extractor is a NAL-unit-like structure.
  • a NAL-unit-like structure may be specified to comprise a NAL unit header and NAL unit payload like any NAL units, but start code emulation prevention (that is required for a NAL unit) might not be followed in a NAL-unit-like structure.
  • an extractor contains one or more constructors.
  • a sample constructor extracts, by reference, NAL unit data from a sample of another track.
  • An in-line constructor includes NAL unit data. The term in-line may be defined e.g.
  • When an extractor is processed by a file reader that requires it, the extractor is logically replaced by the bytes resulting when resolving the contained constructors in their appearance order. Nested extraction may be disallowed, e.g. the bytes referred to by a sample constructor shall not contain extractors; an extractor shall not reference, directly or indirectly, another extractor.
  • An extractor may contain one or more constructors for extracting data from the current track or from another track that is linked to the track in which the extractor resides by means of a track reference of type 'scal'.
  • the bytes of a resolved extractor may represent one or more entire NAL units.
  • a resolved extractor starts with a valid length field and a NAL unit header.
  • the bytes of a sample constructor are copied only from the single identified sample in the track referenced through the indicated 'scal' track reference.
  • the alignment is on decoding time, i.e. using the time-to-sample table only, followed by a counted offset in sample number.
  • Extractors are a media-level concept and hence apply to the destination track before any edit list is considered. (However, one would normally expect that the edit lists in the two tracks would be identical).
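  • A hedged sketch of how a file reader might logically resolve an extractor into bytes is given below: constructors are processed in appearance order, sample constructors copy a byte range from the time-aligned sample of the track referenced through 'scal', and in-line constructors contribute their own bytes. The classes and the get_sample interface are illustrative and do not reproduce the ISO/IEC 14496-15 syntax.

```python
class InlineConstructor:
    def __init__(self, data: bytes):
        self.data = data                      # bytes carried within the extractor itself

class SampleConstructor:
    def __init__(self, ref_track, data_offset: int, data_length: int, sample_offset: int = 0):
        self.ref_track = ref_track            # track linked via a 'scal' track reference
        self.data_offset = data_offset        # byte offset within the referenced sample
        self.data_length = data_length
        self.sample_offset = sample_offset    # counted offset in sample number

def resolve_extractor(constructors, decoding_time, get_sample):
    """Logically replace an extractor by the bytes of its resolved constructors.

    get_sample(track, decoding_time, sample_offset) is assumed to return the
    bytes of the sample that is time-aligned (on decoding time) with the
    extractor's sample, adjusted by the counted sample offset."""
    out = bytearray()
    for c in constructors:                    # appearance order
        if isinstance(c, InlineConstructor):
            out += c.data
        else:
            sample = get_sample(c.ref_track, decoding_time, c.sample_offset)
            out += sample[c.data_offset:c.data_offset + c.data_length]
    return bytes(out)                         # should form one or more entire NAL units
```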
  • Viewport-dependent streaming may also be referred to as viewport-adaptive streaming (VAS) or viewport-specific streaming.
  • In viewport-dependent streaming, a subset of the 360-degree video content covering the viewport (i.e., the current view orientation) is transmitted at a better quality and/or higher resolution than the remaining 360-degree video.
  • Several versions of the content are encoded at different bitrates or qualities using the same MCTS partitioning. Each MCTS sequence is made available for streaming as a DASH Representation or alike. The player selects, on an MCTS basis, which bitrate or quality is received.
  • H.264/AVC does not include the concept of tiles, but operation like MCTSs can be achieved by arranging regions vertically as slices and restricting the encoding similarly to the encoding of MCTSs.
  • the terms tile and MCTS are used in this document but should be understood to apply to H.264/AVC too in a limited manner. In general, the terms tile and MCTS should be understood to apply to similar concepts in any coding format or specification.
  • tile-based viewport-dependent streaming schemes are the following:
  • Region-wise mixed quality (RWMQ):
  • One or more bitrate and/or resolution versions of a complete low- resolution/low-quality omnidirectional video are encoded and made available for streaming.
  • MCTS-based encoding is performed and MCTS sequences are made available for streaming.
  • Players receive a complete low-resolution/low-quality omnidirectional video and select and receive the high-resolution MCTSs covering the viewport.
  • MCTSs are encoded at multiple resolutions.
  • the above-described tile-based viewport-dependent streaming methods may not be exhaustive, i.e. there may be tile-based viewport-dependent streaming methods that do not belong to any of the described categories.
  • In all above-described viewport-dependent streaming approaches, tiles or MCTSs (or guard bands of tiles or MCTSs) may overlap in sphere coverage by an amount selected in the pre-processing or encoding.
  • All above-described viewport-dependent streaming approaches may be realized with client- driven bitstream rewriting (a.k.a. late binding) or with author-driven MCTS merging (a.k.a. early binding).
  • In late binding, a player selects MCTS sequences to be received, selectively rewrites portions of the received video data as necessary (e.g. parameter sets and slice segment headers may need to be rewritten) for combining the received MCTSs into a single bitstream, and decodes the single bitstream.
  • Early binding refers to the use of author-driven information for rewriting portions of the received video data as necessary, for merging of MCTSs into a single bitstream to be decoded, and in some cases for selection of MCTS sequences to be received.
  • Early binding approaches include an extractor-driven approach and tile track approach, which are described subsequently.
  • one or more motion-constrained tile set sequences are extracted from a bitstream, and each extracted motion-constrained tile set sequence is stored as a tile track (e.g. an HEVC tile track) in a file.
  • a tile base track (e.g. an HEVC tile base track) may be generated and stored in a file.
  • the tile base track represents the bitstream by implicitly collecting motion-constrained tile sets from the tile tracks.
  • the tile tracks to be streamed may be selected based on the viewing orientation.
  • the client may receive tile tracks covering the entire omnidirectional content. Better quality or higher resolution tile tracks may be received for the current viewport compared to the quality or resolution covering the remaining 360-degree video.
  • a tile base track may include track references to the tile tracks, and/or tile tracks may include track references to the tile base track.
  • the 'sabt' track reference is used to refer to tile tracks from a tile base track, and the tile ordering is indicated by the order of the tile tracks contained by a 'sabt' track reference.
  • a tile track has a 'tbas' track reference to the tile base track.
  • one or more motion-constrained tile set sequences are extracted from a bitstream, and each extracted motion-constrained tile set sequence is modified to become a compliant bitstream of its own (e.g. an HEVC bitstream) and stored as a sub-picture track (e.g. with untransformed sample entry type 'hvc1' for HEVC) in a file.
  • One or more extractor tracks (e.g. HEVC extractor tracks) may be generated and stored in a file.
  • the extractor track represents the bitstream by explicitly extracting (e.g. by HEVC extractors) motion-constrained tile sets from the sub-picture tracks.
  • the sub-picture tracks to be streamed may be selected based on the viewing orientation.
  • the client may receive sub-picture tracks covering the entire omnidirectional content. Better quality or higher resolution sub-picture tracks may be received for the current viewport compared to the quality or resolution covering the remaining 360-degree video.
  • Although the tile track approach and the extractor-driven approach are described in detail, specifically in the context of HEVC, they apply to other codecs and to concepts similar to tile tracks or extractors.
  • a combination or a mixture of tile track and extractor-driven approach is possible. For example, such a mixture could be based on the tile track approach, but where a tile base track could contain guidance for rewriting operations for the client, e.g. the tile base track could include rewritten slice or tile group headers.
  • content authoring for tile-based viewport-dependent streaming may be realized with sub-picture-based content authoring, described as follows.
  • the pre-processing (prior to encoding) comprises partitioning uncompressed pictures to sub-pictures.
  • Several sub-picture bitstreams of the same uncompressed sub-picture sequence are encoded, e.g. at the same resolution but different qualities and bitrates.
  • the encoding may be constrained in a manner that merging of coded sub-picture bitstreams into a compliant bitstream representing the complete picture area is enabled.
  • Each sub-picture bitstream may be encapsulated as a sub-picture track, and one or more extractor tracks merging the sub-picture tracks of different sub-picture locations may be additionally formed. If a tile track based approach is targeted, each sub-picture bitstream is modified to become an MCTS sequence and stored as a tile track in a file, and one or more tile base tracks are created for the tile tracks.
  • Tile-based viewport-dependent streaming approaches may be realized by executing a single decoder instance or one decoder instance per MCTS sequence (or in some cases, something in between, e.g. one decoder instance per MCTSs of the same resolution), e.g. depending on the capability of the device and operating system where the player runs.
  • the use of single decoder instance may be enabled by late binding or early binding.
  • the extractor-driven approach may use sub-picture tracks that are compliant with the coding format or standard without modifications.
  • Other approaches may need either to rewrite image segment headers, parameter sets, and/or alike information in the client side to construct a conforming bitstream or to have a decoder implementation capable of decoding an MCTS sequence without the presence of other coded video data.
  • tile group identifiers may be referenced from a tile base track or an extractor track, wherein the tile group identified by a tile group identifier contains the collocated tile tracks or the sub-picture tracks that are alternatives for extraction.
  • one extractor track per each picture size and each tile grid is sufficient.
  • one extractor track may be needed for each distinct viewing orientation.
  • An approach similar to the above-described tile-based viewport-dependent streaming approaches, which may be referred to as tile rectangle based encoding and streaming, is described next. This approach may be used with any video codec, even if tiles similar to those of HEVC were not available in the codec or even if motion-constrained tile sets or alike were not implemented in an encoder.
  • the source content is split into tile rectangle sequences before encoding.
  • Each tile rectangle sequence covers a subset of the spatial area of the source content, such as full panorama content, which may e.g. be of equirectangular projection format.
  • Each tile rectangle sequence is then encoded independently from each other as a single-layer bitstream.
  • Several bitstreams may be encoded from the same tile rectangle sequence, e.g. for different bitrates.
  • Each tile rectangle bitstream may be encapsulated in a file as its own track (or alike) and made available for streaming.
  • the tracks to be streamed may be selected based on the viewing orientation.
  • the client may receive tracks covering the entire omnidirectional content. Better quality or higher resolution tracks may be received for the current viewport compared to the quality or resolution covering the remaining, currently non-visible viewports.
  • each track may be decoded with a separate decoder instance.
  • In viewport-adaptive streaming, the primary viewport (i.e., the current viewing orientation) is transmitted at the best quality/resolution, while the remaining 360-degree video is transmitted at a lower quality/resolution.
  • When the viewing orientation changes, e.g. when the user turns his/her head when viewing the content with a head-mounted display, another version of the content needs to be streamed, matching the new viewing orientation.
  • the new version can be requested starting from a stream access point (SAP); SAPs are typically aligned with (Sub)segments.
  • SAPs correspond to random-access pictures, are intra-coded, and are hence costly in terms of rate-distortion performance.
  • the delay (here referred to as the viewport quality update delay) in upgrading the quality after a viewing orientation change (e.g. a head turn) is conventionally in the order of seconds and is therefore clearly noticeable and annoying.
  • Viewport switching in viewport-dependent streaming, which may be compliant with MPEG OMAF, is enabled at stream access points, which involve intra coding and hence a greater bitrate compared to respective inter-coded pictures at the same quality.
  • a compromise between the stream access point interval and the rate-distortion performance is hence chosen in an encoding configuration.
  • HEVC bitstreams of the same omnidirectional source content may be encoded at the same resolution but different qualities and bitrates using motion-constrained tile sets.
  • the MCTS grid in all bitstreams is identical.
  • each bitstream is encapsulated in its own file, and the same track identifier is used for each tile track of the same tile grid position in all these files.
  • HEVC tile tracks are formed from each motion-constrained tile set sequence, and a tile base track is additionally formed.
  • the client may parse the tile base track to implicitly reconstruct a bitstream from the tile tracks.
  • the reconstructed bitstream can be decoded with a conforming HEVC decoder.
  • Clients can choose which version of each MCTS is received.
  • the same tile base track suffices for combining MCTSs from different bitstreams, since the same track identifiers are used in the respective tile tracks.
  • Figure 5 illustrates an example how tile tracks of the same resolution can be used for tile- based omnidirectional video streaming.
  • a 4x2 tile grid has been used in forming of the motion-constrained tile sets.
  • Two HEVC bitstreams originating from the same source content are encoded at different picture qualities and bitrates.
  • Each bitstream may be encapsulated in its own file wherein each motion-constrained tile set sequence may be included in one tile track and a tile base track is also included.
  • the client may choose the quality at which each tile track is received based on the viewing orientation. In this example the client receives tile tracks 1, 2, 5, and 6 at a particular quality and tile tracks 3, 4, 7, and 8 at another quality.
  • the tile base track is used to order the received tile track data into a bitstream that can be decoded with an HEVC decoder.
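  • The selection in the example above may be sketched as follows: for each position of the 4x2 tile grid the client picks the higher-quality track when the tile overlaps the viewport and the lower-quality track otherwise. The derivation of the set of viewport tiles and the track naming are assumptions for illustration.

```python
def select_tile_tracks(tile_positions, viewport_tiles, high_q_tracks, low_q_tracks):
    """tile_positions: iterable of tile indices (e.g. 1..8 for a 4x2 grid).
    viewport_tiles: set of tile indices overlapping the current viewport.
    high_q_tracks / low_q_tracks: mapping tile index -> track identifier of the
    high- and low-quality encoding of that tile (the same track identifier is
    used in both files, so one tile base track can combine either version)."""
    selected = {}
    for t in tile_positions:
        selected[t] = high_q_tracks[t] if t in viewport_tiles else low_q_tracks[t]
    return selected

# Example matching the described selection: tiles 1, 2, 5, 6 cover the viewport.
# selection = select_tile_tracks(range(1, 9), {1, 2, 5, 6}, high_q_tracks, low_q_tracks)
```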
  • Video coding formats have constraints on spatial partitioning of pictures.
  • HEVC uses a tile grid of picture-wide tile rows and picture-high tile columns specified in units of CTUs, with certain minimum width and height constraints for tile columns and tile rows.
  • Different parts may have different sizes, so their optimal packing along spatial partitioning units of 2D video codecs might not be possible.
  • An extractor track with rewritten slice headers relates to ISO/IEC 14496-15, which includes the design for extractors; extractors can be used for rewriting parameter sets and slice headers in the extractor track, while the tile data is included by reference.
  • Such an approach may require one extractor track per each possible extracted combination, such as one extractor track per each range of 360-degree video viewing orientations that results in a different set of picked tiles.
  • the present embodiments are related to sub-picture-based video codec operation.
  • Visual content at specific time instances is divided into several parts, where each part is represented using a sub-picture (a.k.a. subpicture).
  • Respective sub-pictures at different time instances form a sub-picture sequence, wherein the definition of "respective" may depend on the context, but can be for example the same spatial portion of a picture area in a sequence of pictures or the content acquired with the same settings, such as the same acquisition position, orientation, and projection surface.
  • a picture at specific time instance may be defined as a collection of all the sub-pictures at the specific time instance.
  • Each sub-picture is coded using a conventional video encoder, and reconstructed sub-picture is stored in a reconstructed sub-picture memory corresponding to the sub-picture sequence.
  • the encoder can use reconstructed sub-pictures of the same sub-picture sequence as reference for prediction.
  • Coded sub-pictures are included as separate units (e.g. VCL NAL units) in the same bitstream.
  • a decoder receives coded video data (e.g. a bitstream).
  • a sub-picture is decoded as a separate unit from other sub-pictures using a conventional video decoder.
  • the decoded sub-picture may be buffered using a decoded picture buffering process.
  • the decoded picture buffering process may provide the decoded sub-picture of a particular sub-picture sequence to the decoder, and the decoder may use the decoded sub-picture as a reference for predicting a sub-picture of the same sub-picture sequence.
  • Figure 6 illustrates an example of a decoder.
  • the decoder receives coded video data (e.g. a bitstream).
  • a sub-picture is decoded in a decoding process 610 as a separate unit from other sub-pictures using a conventional video decoder.
  • the decoded sub-picture may be buffered using a decoded picture buffering process 620.
  • the decoded picture buffering process may provide the decoded sub-picture of a particular sub-picture sequence to the decoding process 610, and the decoder may use the decoded sub-picture as a reference for predicting a sub-picture of the same sub-picture sequence.
  • the decoded picture buffering process 620 may comprise a sub-picture-sequence-wise buffering 730, which may comprise marking of reconstructed sub-pictures as "used for reference” and "unused for reference” as well as keeping track of whether reconstructed sub-pictures have been output from the decoder.
  • the buffering of sub-picture sequences may be independent from each other, or may be synchronized in one or both of the following ways:
  • the reference picture marking of reconstructed sub-pictures of the same time instance may be performed synchronously.
  • the sub-picture-sequence-wise buffering 730 may be illustrated with Figure 7.
  • the example illustrates decoding of two sub-picture sequences, which have the same height but different width. It needs to be understood that the number of sub-picture sequences and/or the sub-picture dimensions could have been chosen differently and these choices are only meant as possible examples.
  • output from a decoder comprises a collection of the different and separate decoded sub-pictures.
  • an output picture which may also or alternatively be referred to as a decoded picture, from a decoding process 810 is a collection of the different and separate sub-pictures.
  • the output picture is composed by arranging reconstructed sub-pictures into a two-dimensional (2D) picture.
  • This embodiment keeps a conventional design of a single output picture (per time instance) as the output of a video decoder and hence can be straightforward for integrating to systems.
  • the decoded sub-pictures are provided to a decoded sub-picture buffering 812.
  • the decoding process 810 may then use buffered sub-picture(s) as a reference for decoding succeeding pictures.
  • the decoding process may obtain an indication or infer which of the decoded sub-picture(s) are to be used as a source for generating manipulated sub-picture(s). Those sub-pictures are provided 814 to a reference sub-picture manipulation process 816. Manipulated reference sub-pictures are then provided 818 to the decoded sub-picture buffering 812, where the manipulated reference sub-pictures are buffered. The sub-pictures and the manipulated reference sub-pictures may then be used by the output picture compositing process 820 that takes the picture composition data as input and arranges reconstructed sub-pictures into output pictures.
  • An encoder encodes picture composition data into or along the bitstream, wherein the picture composition data is indicative of how reconstructed sub-pictures are to be arranged into 2D picture(s) forming output picture(s).
  • a decoder decodes picture composition data from or along the bitstream and forms 820 an output picture from reconstructed sub-pictures and/or manipulated reference sub-pictures according to the decoded picture composition data.
  • the decoding of picture composition data may happen as a part of, or be operationally connected with, the output picture compositing process 820.
  • a conventional video decoding process decodes the picture composition data.
  • the picture composition data is encoded in or along the bitstream and/or decoded from or along the bitstream using the bitstream or decoding order of sub-pictures and the dimensions of sub-pictures.
  • An algorithm for positioning sub-pictures within a picture area is followed in an encoder and/or in a decoder, wherein sub-pictures are input to the algorithm in their bitstream or decoding order.
  • the algorithm for positioning sub-pictures within a picture area is the following: When a picture comprises multiple sub-pictures and when encoding of a picture and/or decoding of a coded picture is started, each CTU location in the reconstructed or decoded picture is marked as unoccupied. For each sub-picture in bitstream or decoding order, the sub-picture takes the next such unoccupied location in CTU raster scan order within a picture that is large enough to fit the sub-picture within the picture boundaries.
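  • The placement algorithm above may be expressed as the following sketch, assuming the picture area and the sub-picture sizes are given in CTUs; the names are illustrative and the full-area occupancy check is a conservative reading of "unoccupied location".

```python
def place_subpictures(pic_w_ctus, pic_h_ctus, subpic_sizes):
    """Place sub-pictures, given in bitstream/decoding order as (w, h) in CTUs,
    into a pic_w_ctus x pic_h_ctus picture: each sub-picture takes the next
    unoccupied CTU location in raster scan order that is large enough to fit
    it within the picture boundaries. Returns top-left CTU positions."""
    occupied = [[False] * pic_w_ctus for _ in range(pic_h_ctus)]

    def fits(x, y, w, h):
        # Within picture boundaries and fully unoccupied (conservative check).
        if x + w > pic_w_ctus or y + h > pic_h_ctus:
            return False
        return all(not occupied[y + j][x + i] for j in range(h) for i in range(w))

    positions = []
    for (w, h) in subpic_sizes:                      # decoding order
        placed = False
        for y in range(pic_h_ctus):                  # CTU raster scan order
            for x in range(pic_w_ctus):
                if not occupied[y][x] and fits(x, y, w, h):
                    for j in range(h):
                        for i in range(w):
                            occupied[y + j][x + i] = True
                    positions.append((x, y))
                    placed = True
                    break
            if placed:
                break
        if not placed:
            raise ValueError("sub-picture does not fit the remaining picture area")
    return positions
```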
  • an encoder indicates in or along the bitstream if
  • the decoder is intended to output a collection of the different and separate decoded sub-pictures
  • the decoder is intended to generate output pictures according to the picture composition data
  • the decoder is allowed to perform either of the options above.
  • a decoder decodes from or along the bitstream if
  • the decoder is intended to output a collection of the different and separate decoded sub-pictures
  • the decoder is intended to generate output pictures according to the picture composition data
  • the decoder is allowed to perform either of the options above.
  • the decoder adapts its operation to conform to the decoded intent or allowance.
  • a decoder includes an interface for selecting at least among outputting a collection of the different and separate decoded sub-pictures or generating output pictures according to the picture composition data.
  • the decoder adapts its operation to conform to what has been indicated through the interface.
  • pictures are divided into sub-pictures, tile groups and tiles.
  • a tile may be defined similarly to an HEVC tile, thus a tile may be defined as a sequence of CTUs that cover a rectangular region of a picture.
  • a tile group may be defined as a sequence of tiles in tile raster scan within a sub-picture. It may be specified that a VCL NAL unit contains exactly one tile group, i.e. a tile group is contained in exactly one VCL NAL unit.
  • a sub-picture may be defined as a rectangular set of one or more entire tile groups.
  • a picture is partitioned into sub-pictures, i.e. the entire picture is occupied by sub-pictures and there are no unoccupied areas within a picture.
  • a picture comprises sub-pictures and one or more unoccupied areas.
  • an encoder encodes in or along the bitstream and/or a decoder decodes from or along the bitstream information indicative of one or more tile partitionings for sub-pictures.
  • a tile partitioning may for example be a tile grid specified as widths and heights of tile columns and tile rows, respectively.
  • An encoder encodes in or along a bitstream and/or a decoder decodes from or along the bitstream which tile partitioning applies for a particular sub-picture or sub picture sequence.
  • syntax elements describing a tile partitioning are encoded in and/or decoded from a picture parameter set, and a PPS is activated for a sub-picture e.g.
  • Each sub-picture may refer to its own PPS and may hence have its own tile partitioning.
  • Figure 10 illustrates a picture that is divided into 4 sub-pictures.
  • Each sub-picture may have its own tile grid.
  • sub-picture 1 is divided into a grid of 3x2 tiles of equal width and equal height
  • sub-picture 2 is divided into a 2x1 grid of tiles that are 3 and 5 CTUs high, respectively.
  • Sub-picture 1 has 3 tile groups containing 1, 3, and 2 tiles, respectively.
  • Each of sub-pictures 2, 3, and 4 has one tile group.
  • Figure 10 also illustrates the above-discussed algorithm for positioning sub-pictures within a picture area.
  • Sub-picture 1 is the first in decoding order and thus placed in the top-left corner of the picture area.
  • Sub-picture 2 is the second in decoding order and thus placed to the next unoccupied location in raster scan order.
  • the algorithm also operates the same way for the third and fourth sub-pictures in decoding order, i.e. sub-pictures 3 and 4, respectively.
  • the sub-picture decoding order is indicated with the number (1, 2, 3, 4) outside the picture boundaries.
  • an encoder encodes in the bitstream and/or a decoder decodes from the bitstream, e.g. in an image segment header such as a tile group header, information indicative of one or more tile positions within a sub-picture. For example, a tile position of the first tile, in decoding order, of the image segment or tile group may be encoded and/or decoded.
  • a decoder concludes that the current image segment or tile group is the first image segment or tile group of a sub-picture, when the first tile of an image segment or tile group is the top-left tile of a sub-picture (e.g. having a tile address or tile index equal to 0 in raster scan order of tiles).
  • a decoder, in relation to concluding a first image segment or tile group, concludes whether a new access unit is started. In an embodiment, it is concluded that a new access unit is started when the picture order count value or syntax element value(s) related to picture order count (such as the least significant bits of picture order count) differ from those of the previous sub-picture. A sketch of these checks is given below.
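  • A minimal sketch of the two conclusions described above; the header field names (first_tile_address, poc_lsb) are assumptions for illustration.

```python
from collections import namedtuple

# Hypothetical header fields: the tile address of the first tile of the image
# segment and the least significant bits of the picture order count.
TileGroupHeader = namedtuple("TileGroupHeader", ["first_tile_address", "poc_lsb"])

def is_first_segment_of_subpicture(hdr: TileGroupHeader) -> bool:
    # The segment starts a sub-picture when its first tile is the top-left tile
    # of the sub-picture (tile address/index 0 in raster scan order of tiles).
    return hdr.first_tile_address == 0

def starts_new_access_unit(hdr: TileGroupHeader, prev_hdr: TileGroupHeader) -> bool:
    # A new access unit is concluded when the POC-related value differs from
    # that of the previous sub-picture.
    return (is_first_segment_of_subpicture(hdr) and prev_hdr is not None
            and hdr.poc_lsb != prev_hdr.poc_lsb)
```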
  • decoded picture buffering is performed on picture-basis rather than on sub-picture basis.
  • An encoder and/or a decoder generates a reference picture from decoded sub-pictures of the same access unit or time instance using the picture composition data. The generation of a reference picture is performed identically or similarly to what is described in other embodiments for generating output pictures.
  • reference sub-pictures for encoding and/or decoding the sub-picture are generated by extracting the area collocating with the current sub-picture from the reference pictures in the decoded picture buffer.
  • the decoding process gets reference sub-picture(s) from the decoded picture buffering process similarly to other embodiments, and the decoding process may operate similarly to other embodiments.
  • an encoder selects reference pictures for predicting a current sub-picture in a manner that the reference pictures contain a sub-picture that has the same location as the current sub-picture (within the picture) and the same dimensions (width and height) as the current sub-picture.
  • An encoder avoids selecting reference pictures for predicting a current sub-picture if the reference pictures do not contain a sub-picture that has the same location as the current sub-picture (within the picture) or the same dimensions as the current sub-picture.
  • sub-pictures of the same access unit or time instance are allowed to have different types, such as random-access sub-picture and non-random-access sub-picture, defined similarly to what has been described earlier in relation to NAL unit types and/or picture types.
  • An encoder encodes a first access unit with both a random-access sub-picture in a first location and size and a non-random-access sub-picture in a second location and size, and a subsequent access unit in decoding order including a sub-picture in the first location and size constrained in a manner that reference pictures preceding the first access unit in decoding order are avoided, and including another sub-picture in the second location and size using a reference picture preceding the first access unit decoding order as a reference for prediction.
  • an encoder and/or a decoder includes only such reference pictures into the initial reference picture list that contain a sub picture that has the same location as the current sub-picture (within the picture) and the same dimensions (width and height) as the current sub-picture.
  • Reference pictures that do not contain a sub-picture that has the same location as the current sub-picture (within the picture) or the same dimensions (width and height) as the current sub-picture are skipped or excluded when generating an initial reference picture list for encoding and/or decoding the current sub-picture.
  • sub-pictures of the same access unit or time instance are allowed to have different types, such as random-access sub-picture and non-random-access sub-picture, defined similarly to what has been described earlier in relation to NAL unit types and/or picture types.
  • The reference picture list initialization process or algorithm in an encoder and/or a decoder only includes the previous random-access sub-picture and subsequent sub-pictures, in decoding order, in an initial reference picture list and skips or excludes sub-pictures preceding, in decoding order, the previous random-access sub-picture; see the sketch below.
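  • The filtering described above could look like the following sketch: candidate reference pictures are kept only when they contain a sub-picture collocated with, and of the same dimensions as, the current sub-picture, and (for the random-access constraint) when they do not precede the previous random-access sub-picture in decoding order. The data structures are illustrative.

```python
def init_reference_list(candidates, cur_subpic, last_ra_order=None):
    """candidates: reference pictures in their usual initialization order; each
    is a dict with 'order' (a decoding-order counter) and 'subpictures', a list
    of dicts with 'x', 'y', 'w', 'h'. cur_subpic is a dict with the same keys.
    Returns the initial reference picture list for the current sub-picture."""
    ref_list = []
    for pic in candidates:
        same_place_and_size = any(
            sp['x'] == cur_subpic['x'] and sp['y'] == cur_subpic['y'] and
            sp['w'] == cur_subpic['w'] and sp['h'] == cur_subpic['h']
            for sp in pic['subpictures'])
        if not same_place_and_size:
            continue   # no collocated sub-picture of the same dimensions: exclude
        if last_ra_order is not None and pic['order'] < last_ra_order:
            continue   # precedes the previous random-access sub-picture: exclude
        ref_list.append(pic)
    return ref_list
```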
  • a sub-picture at a second sub-picture sequence is predicted from one or more sub-pictures of a first sub-picture sequence. Spatial relationship of the sub-picture in relation to the one or more sub-pictures of the first sub-picture sequence is either inferred or indicated by an encoder in or along the bitstream and/or decoded by a decoder from or along the bitstream. In the absence of such spatial relationship information in or along the bitstream, an encoder and/or a decoder may infer that the sub-pictures are collocated, i.e. exactly overlapping for spatial correspondence in prediction.
  • the spatial relationship information is independent of the picture composition data. For example, sub-pictures may be composed to be above each other in an output picture (in a top-bottom packing arrangement) while they are considered to be collocated for prediction.
  • An embodiment of an encoding process or a decoding process is illustrated in Figure 11, where the arrows from the first sub-picture sequence to the second sub-picture sequence indicate prediction.
  • the sub-pictures may be inferred to be collocated for prediction.
  • an encoder indicates a sub-picture sequence identifier or alike in or along the bitstream in a manner that the sub-picture sequence identifier is associated with coded video data units, such as VCL NAL units.
  • a decoder decodes a sub-picture sequence identifier or alike from or along the bitstream in a manner that the sub-picture sequence identifier is associated with coded video data units and/or the respective reconstructed sub pictures.
  • the syntax structure containing the sub-picture sequence identifier and the association mechanism may include but are not limited to one or more of the following:
  • a sub-picture sequence identifier included in a header included in a VCL NAL unit such as a tile group header or a slice header and associated with the respective image segment (e.g. tile group or slice).
  • a sub-picture delimiter may for example be a specific NAL unit that starts a new sub-picture.
  • Implicit referencing may for example mean that the previous syntax structure (e.g. sub-picture delimiter or picture header) in decoding or bitstream order may be referenced.
  • Explicit referencing may for example mean that the identifier of the reference parameter set is included in the coded video data, such as in a tile group header or in a slice header.
  • sub-picture sequence identifier values are valid within a pre-defined subset of a bitstream (which may be called "validity period” or “validity subset”), which may be but is not limited to one of the following:
  • a single access unit i.e. coded video data for a single time instance.
  • a closed random-access access unit may be defined as an access unit within and after which all present sub-picture sequences start with a closed random-access sub-picture.
  • a closed random-access sub-picture may be defined as an intra-coded sub-picture, which is followed, in decoding order, by no such sub-pictures in the same sub-picture sequence that reference any sub-picture preceding the intra-coded sub picture, in decoding order, in the same sub-picture sequence.
  • a closed random-access sub-picture may either be an intra-coded sub-picture or a sub-picture associated with and predicted only from external reference sub-picture(s) (see an embodiment described further below) and is otherwise constrained as described above.
  • sub-picture sequence identifier values are valid within an indicated subset of a bitstream.
  • An encoder may for example include a specific NAL unit in the bitstream, where the NAL unit indicates a new period for sub-picture sequence identifiers that is unrelated to earlier period(s) of sub-picture sequence identifiers.
  • a sub-picture with a particular sub-picture sequence identifier value is concluded to be within the same sub-picture sequence as a preceding sub-picture in decoding order that has the same sub-picture sequence identifier value, when both sub-pictures are within the same validity period of sub-picture sequence identifiers.
  • When two sub-pictures are in different validity periods of sub-picture sequence identifiers or have different sub-picture sequence identifiers, they are concluded to be in different sub-picture sequences.
  • a sub-picture sequence identifier is a fixed-length codeword.
  • the number of bits in the fixed-length codeword may be encoded into or along the bitstream, e.g. in a video parameter set or a sequence parameter set, and/or may be decoded from or along the bitstream, e.g. from a video parameter set or a sequence parameter set.
  • a sub-picture sequence identifier is a variable-length codeword, such as an exponential-Golomb code or alike.
  • an encoder indicates a mapping of VCL NAL units or image segments, in decoding order, to sub-pictures or sub-picture sequences in or along the bitstream, e.g. in a video parameter set, a sequence parameter set, or a picture parameter set.
  • a decoder decodes a mapping of VCL NAL units or image segments, in decoding order, to sub-pictures or sub-picture sequence from or along the bitstream.
  • the mapping may concern a single time instance or access unit at a time.
  • mappings are provided e.g. in a single container syntax structure and each mapping is indexed or explicitly identified e.g. with an identifier value.
  • an encoder indicates in the bitstream, e.g. in an access unit header or delimiter, a picture parameter set, a header parameter set, a picture header, a header of an image segment (e.g. tile group or slice), which mapping applies to a particular access unit or time instance.
  • a decoder decodes from the bitstream which mapping applies to a particular access unit or time instance.
  • the indication which mapping applies is an index to a list of several mappings (specified e.g. in a sequence parameter set) or an identifier to a set of several mappings (specified e.g. in a sequence parameter set).
  • the indication which mapping applies comprises the mapping itself e.g. as a list of sub-picture sequence identifiers for VCL NAL units in decoding order included in the access unit associated with the mapping.
  • the decoder concludes the sub-picture or sub-picture sequence for a VCL NAL unit or image segment as follows:
  • the start of an access unit is concluded e.g. as specified in a coding specification, or the start of a new time instance is concluded as specified in a packetization or container file specification.
  • mapping applied to the access unit or time instance is concluded according to any earlier embodiment.
  • the respective sub-picture sequence or sub-picture is concluded from the mapping.
  • mappings are specified in a sequence parameter set.
  • mappings are specified to map VCL NAL units to sub-picture sequences.
  • num_subpic_patterns equal to 0 specifies that sub-picture-based decoding is not in use.
  • num_subpic_patterns greater than 0 specifies the number of mappings from VCL NAL units to sub-picture sequence identifiers.
  • subpic_seq_id_len_minus1 plus 1 specifies the length of the subpic_seq_id[ i ][ j ] syntax element in bits.
  • num_vcl_nal_units_minus1[ i ] plus 1 specifies the number of VCL NAL units that are mapped in the i-th mapping.
  • subpic_seq_id[ i ][ j ] specifies the sub-picture sequence identifier of the j-th VCL NAL unit in decoding order in an access unit associated with the i-th mapping.
  • subpic_pattern_idx specifies the index of the mapping from VCL NAL units to sub-picture sequence identifiers that applies for this access unit. It may be required that subpic_pattern_idx has the same value in all tile_group_header( ) syntax structures of the same access unit.
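  • To make the syntax above concrete, the following sketch parses the mappings using an assumed bit-reader interface (read_ue for ue(v), read_bits for fixed-length codewords); the exact descriptors and ordering of the syntax elements are assumptions of this sketch.

```python
def parse_subpic_mappings(r):
    """r is assumed to expose read_ue() for Exp-Golomb codewords and
    read_bits(n) for n-bit fixed-length codewords. Returns a list of mappings;
    mapping i is the list of sub-picture sequence identifiers of the VCL NAL
    units, in decoding order, of an access unit using that mapping."""
    mappings = []
    num_subpic_patterns = r.read_ue()
    if num_subpic_patterns == 0:
        return mappings                        # sub-picture-based decoding not in use
    subpic_seq_id_len = r.read_ue() + 1        # subpic_seq_id_len_minus1 + 1
    for _ in range(num_subpic_patterns):
        num_vcl_nal_units = r.read_ue() + 1    # num_vcl_nal_units_minus1[ i ] + 1
        mappings.append([r.read_bits(subpic_seq_id_len)
                         for _ in range(num_vcl_nal_units)])
    return mappings

def subpic_seq_id_for(mappings, subpic_pattern_idx, vcl_nal_unit_index):
    # subpic_pattern_idx is signalled per access unit (e.g. in the tile group
    # header); the j-th VCL NAL unit of that access unit maps to this identifier.
    return mappings[subpic_pattern_idx][vcl_nal_unit_index]
```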
  • a random-access sub-picture of a particular sub-picture sequence may be predicted from one or more reference sub-pictures of other sub-picture sequences (excluding the particular sub-picture sequence).
  • One of the following may be required and may be indicated for a random-access sub-picture:
  • the random-access sub-picture is constrained so that prediction of any sub-picture at or after the random-access sub-picture in output order does not depend on any reference sub-picture (of the same sub-picture sequence) preceding the random-access sub-picture in decoding order; this case corresponds to an open GOP random-access point.
  • the random-access sub-picture is constrained so that prediction of any sub-picture at or after the random-access sub-picture in decoding order does not depend on any reference sub-picture (of the same sub-picture sequence) preceding the random-access sub-picture in decoding order; this case corresponds to a closed GOP random-access point.
  • Since a random-access sub-picture may be predicted from other sub-picture sequence(s), such random-access sub-pictures are more compact than similar random-access sub-pictures realized with intra-coded pictures.
  • Stream access points (which may also or alternatively be referred to as sub-picture sequence access point) for sub-picture sequences may be defined as a position in a sub-picture sequence (or alike) that enables playback of the sub-picture sequence to be started using only the information from that position onwards assuming that the referenced sub-picture sequences have already been decoded earlier.
  • Stream access points of sub-picture sequences may coincide or be equivalent with random-access sub-pictures.
  • the decoding of all sub-picture sequences is marked as uninitialized in the decoding process.
  • When a sub-picture is coded as a random-access sub-picture (e.g. like an IRAP picture in HEVC) and prediction across sub-picture sequences is not enabled, the decoding of the corresponding sub-picture sequence is marked as initialized.
  • When a current sub-picture is coded as a random-access sub-picture (e.g. like an IRAP picture in predicted layers in multi-layer HEVC) and the decoding of all sub-picture sequences used as reference for prediction is marked as initialized, the decoding of the sub-picture sequence of the current sub-picture is marked as initialized.
  • When a sub-picture for a sub-picture sequence of a particular identifier is not present for a time instance (e.g.
  • the decoding of the corresponding sub-picture sequence is marked as uninitialized in the decoding process.
  • When the decoding of a sub-picture sequence is marked as uninitialized, the decoding of the current sub-picture may be omitted. Areas that correspond to omitted sub-pictures (e.g. on the basis of picture composition data) can be treated like unoccupied areas in the output picture compositing process, as described in other embodiments.
  • Picture composition data may comprise but is not limited to one or more of the following pieces of information per sub-picture:
  • A composition area is indicated per one effective area of a sub-picture.
  • The effective area of a sub-picture is mapped onto the composition area.
  • The effective area is rescaled or resampled to match the composition area when their dimensions differ.
  • Mirroring, e.g. vertically or horizontally, may be applied for mapping the effective area onto the composition area.
  • It is appreciated that other choices for syntax elements than those presented above may be equivalently used.
  • coordinates and dimensions of an effective area and/or a composition area may be indicated by the coordinates of the top-left corner of the area, the width of the area, and the height of the area. It needs to be understood that the units for indicating the coordinates or extents may be inferred or indicated in or along the bitstream and/or decoded from or along the bitstream.
  • coordinates and/or extents may be indicated as integer multiples of coding tree units.
  • a z-order or an overlaying order may be indicated by the encoder or another entity as part of picture composition data in or along the bitstream.
  • a z-order or an overlaying order may be inferred, for example, to follow ascending sub-picture identifier order or to be the same as the decoding order of the sub-pictures of the same output time or the same output order.
  • Picture composition data may be associated with a sub-picture sequence identifier or alike. Picture composition data may be encoded into and/or decoded from a video parameter set, a sequence parameter set, or a picture parameter set.
  • Picture composition data may describe sub-pictures or sub-picture sequences, which are not encoded, requested, transmitted, received, and/or decoded. This enables selecting a subset of possible or available sub-pictures or sub-picture sequences for encoding, requesting, transmission, receiving, and/or decoding.
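  • As a non-normative illustration, the per-sub-picture composition data listed above could be represented with a structure such as the following; the field names are chosen for the sketch and are not syntax element names.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Area:
    # Coordinates and extents of an area; the units (e.g. luma samples or
    # coding tree units) would be inferred or indicated in or along the bitstream.
    x: int
    y: int
    width: int
    height: int

@dataclass
class SubPictureComposition:
    # Illustrative container for the picture composition data of one sub-picture.
    subpic_seq_id: int
    effective_area: Area            # part of the sub-picture used for composition
    composition_area: Area          # where the effective area lands on the output picture
    mirror_horizontal: bool = False
    mirror_vertical: bool = False
    z_order: Optional[int] = None   # may be inferred when not indicated
```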
  • a decoder or a player may include an output picture compositing process or alike, which may take as input two or more reconstructed sub-pictures that represent the same output time or the same output order.
  • An output picture compositing process may be a part of the decoded picture buffering process or may be connected to the decoded picture buffering process.
  • An output picture compositing process may be invoked when a decoder is triggered to output a picture. Such triggering may for example happen when an output picture at a correct output order can be composed, i.e. when no coded video data preceding the next reconstructed sub-pictures in output order follows the current decoding position within the bitstream. Another example of such triggering is when an indicated buffering time has elapsed.
  • In the output picture compositing process, picture composition data is applied to locate said two or more reconstructed sub-pictures on the same coordinates or onto the same output picture area.
  • the output picture area that is unoccupied is set to a determined value, which may be separately derived per each color component.
  • the determined value may be a default value (e.g. pre-defined in a coding standard), an arbitrary value determined by the output picture compositing process, or a value indicated by an encoder in or along the bitstream and/or decoded from or along the bitstream.
  • the output picture area may be initialized to the determined value prior to locating said two or more reconstructed sub-pictures onto it.
  • a decoder indicates unoccupied areas together with the output picture.
  • the output interface of the decoder or the output picture compositing process may comprise an output picture and information indicative of the unoccupied areas.
  • the output picture of the output picture compositing process is formed by locating the possibly resampled sample arrays of the two or more reconstructed sub-pictures in the z-order onto the output picture in such a manner that a sample array later in the z-order covers or replaces the sample values in collocated positions of the sample arrays earlier in the z-order.
  • the output picture compositing process comprises aligning the decoded representations of said two or more reconstructed sub-pictures.
  • For example, when the reconstructed sub-pictures have different chroma formats, the first one may be upsampled to YUV 4:4:4 as part of the process.
  • Likewise, when the reconstructed sub-pictures have different color gamuts or formats, the first one may be converted to the second color gamut or format as part of the process.
  • the output picture compositing process may include one or more conversions from a color representation format to another (or, equivalently, from one set of primary colors to another set of primary colors).
  • the destination color representation format may be selected for example based on the display in use.
  • the output picture compositing process may include a conversion from YUV to RGB.
  • the resulting output picture may form the picture to be displayed or to be used in the displaying process e.g. for generating content for the viewport.
  • the output picture compositing process may additionally contain other steps than those described above and may lack some steps from those described above. Alternatively, or additionally, the described steps of the output picture compositing process may be performed in another order than that described above.
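  • The following is a simplified, non-normative sketch of an output picture compositing step that places reconstructed sub-pictures onto an output picture in z-order and fills unoccupied areas with a determined value. Resampling, chroma format alignment and color conversions discussed above are omitted, and the use of numpy arrays per color component is an assumption of the sketch.

```python
import numpy as np

def compose_output_picture(subpics, comps, out_h, out_w, fill_value=128):
    """subpics: dict mapping subpic_seq_id -> 2D numpy array (one color component).
    comps: list of SubPictureComposition entries (see the earlier sketch).
    Assumes the effective area and composition area have equal dimensions."""
    out = np.full((out_h, out_w), fill_value, dtype=np.uint8)  # determined value
    occupied = np.zeros((out_h, out_w), dtype=bool)
    # A sample array later in the z-order covers earlier ones, so draw in ascending z-order.
    for comp in sorted(comps, key=lambda c: c.z_order or 0):
        src = subpics.get(comp.subpic_seq_id)
        if src is None:               # sub-picture omitted or not received
            continue
        ea, ca = comp.effective_area, comp.composition_area
        patch = src[ea.y:ea.y + ea.height, ea.x:ea.x + ea.width]
        out[ca.y:ca.y + ca.height, ca.x:ca.x + ca.width] = patch
        occupied[ca.y:ca.y + ca.height, ca.x:ca.x + ca.width] = True
    # The occupied mask can be output as information indicative of unoccupied areas.
    return out, occupied
```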
  • the spatial correspondence between a current sub-picture and the reference sub-picture may be indicated by the encoder and/or decoded by the decoder using spatial relationship information described in the following:
  • the spatial relationship information indicates the location of the top-left sample of the current sub-picture in the reference sub-picture. It is noted that the top-left sample of the current sub-picture may be indicated to correspond to a location outside the reference sub-picture (e.g. have negative horizontal and/or vertical coordinates). Likewise, bottom and/or right-side samples of the current sub-picture may be located outside the reference sub-picture.
  • When the current sub-picture references samples or decoded variable values (e.g. motion vectors) that are outside the reference sub-picture, they may be considered to be unavailable for prediction.
  • the spatial relationship information indicates the location of an indicated or inferred sample location of the reference sub-picture (for example the top-left sample location of the reference sub-picture) in the current sub-picture.
  • the indicated or inferred sample location of the reference sub-picture may be indicated to correspond to a location outside the current sub-picture (e.g. have negative horizontal and/or vertical coordinates).
  • some sample locations, e.g. bottom and/or right-side samples, of the reference sub-picture may be located outside the current sub-picture.
  • When the current sub-picture references samples or decoded variable values (e.g. motion vectors) that are outside the reference sub-picture, they may be considered to be unavailable for prediction.
  • sub-pictures of different sub-picture sequences may use the same reference sub-picture as a reference for prediction using the same or different spatial relationship information. It is also noted that the indicated or inferred sample location of the reference sub-picture may be indicated to correspond to a fractional location in the current sub-picture. In this case, the reference sub-picture is generated by resampling the current sub-picture.
  • the spatial relationship information indicates the locations of the four corner (e.g. top-left, top-right, bottom-left, bottom-right) samples of the current sub-picture in the reference sub-picture.
  • the corresponding location of each sample of the current picture in the reference sub-picture may be calculated using, for example, bilinear interpolation, as illustrated in the sketch below.
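  • A sketch of the four-corner variant follows: given the locations of the current sub-picture's corner samples in the reference sub-picture, the corresponding location of any sample is obtained by bilinear interpolation of the corner correspondences. The function and parameter names are illustrative only.

```python
def corresponding_location(x, y, cur_w, cur_h, corners):
    """corners: dict with keys 'tl', 'tr', 'bl', 'br' mapping to (x, y) locations
    of the current sub-picture's corner samples inside the reference sub-picture.
    (x, y) is a sample location in the current sub-picture of size cur_w x cur_h."""
    # Normalize the current location to [0, 1] in both directions.
    u = x / max(cur_w - 1, 1)
    v = y / max(cur_h - 1, 1)
    (x_tl, y_tl), (x_tr, y_tr) = corners['tl'], corners['tr']
    (x_bl, y_bl), (x_br, y_br) = corners['bl'], corners['br']
    # Bilinear interpolation of the corner correspondences.
    ref_x = (1 - v) * ((1 - u) * x_tl + u * x_tr) + v * ((1 - u) * x_bl + u * x_br)
    ref_y = (1 - v) * ((1 - u) * y_tl + u * y_tr) + v * ((1 - u) * y_bl + u * y_br)
    return ref_x, ref_y   # may be fractional and may fall outside the reference sub-picture
```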
  • it may be indicated in or along the bitstream by an encoder and/or decoded from or along the bitstream by a decoder that spatial correspondence is applied in a wrap-around manner horizontally and/or vertically.
  • An encoder may indicate such wrap-around correspondence for example when a sub-picture covers an entire 360-degree picture and sub-picture sequences of both views are present in the bitstream.
  • When wrap-around correspondence is in use and a sample location outside a boundary of the reference sub-picture would be referenced in the decoding process, the referenced sample location may be wrapped around horizontally or vertically (depending on which boundary is crossed) to the other side of the reference sub-picture.
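  • A minimal sketch of wrapping a referenced sample location around the reference sub-picture boundary, as described above; in practice this would be applied inside the motion compensation process, and the clamping used for the non-wrapping direction is an assumption of the sketch.

```python
def wrap_sample_location(x, y, ref_w, ref_h, wrap_h=True, wrap_v=False):
    """Wraps a referenced sample location to the other side of the reference
    sub-picture when the corresponding boundary is crossed; in the direction
    without wrap-around the location is clamped to the boundary (an assumption)."""
    x = x % ref_w if wrap_h else min(max(x, 0), ref_w - 1)
    y = y % ref_h if wrap_v else min(max(y, 0), ref_h - 1)
    return x, y
```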
  • an encoder generates and/or a decoder decodes more than one instance of spatial relationship information to indicate spatial correspondence between a current sub-picture and more than one reference sub-pictures.
  • an encoder generates and/or a decoder decodes more than one instance of spatial relationship information to indicate more than one spatial correspondence between a current sub-picture and the reference sub-picture (from a different sub-picture sequence).
  • for each instance of spatial relationship information, a separate reference picture index in one or more reference picture lists may be generated in an encoder and/or in a decoder.
  • in reference picture list initialization, the number of times a reference sub-picture is included in an initial reference picture list may be equal to the number of instances of the spatial relationship information concerning the reference sub-picture.
  • An encoder may indicate the use of the reference sub-picture associated with a particular instance of spatial relationship information using the corresponding reference index when indicating a reference for inter prediction.
  • a decoder may decode the reference index to be used as a reference for inter prediction, conclude the particular instance of spatial relationship information corresponding to that reference index, and use the associated reference sub-picture with the concluded particular instance of spatial relationship information as a reference for inter prediction (see the sketch below).
  • the present embodiment may be used e.g. when the reference sub-picture is bigger than the current sub-picture, and object motions at different borders of the current sub-picture are in different directions (especially when they point toward the outside of the sub-picture). Thus, for each border a different reference with a different instance of spatial relationship information may be helpful.
  • unavailable samples may be copied from the other side of the sub-picture. This may be useful especially in 360-degree videos.
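  • A sketch of how a decoder could associate reference indices with instances of spatial relationship information during reference picture list initialization, as described above; the data structures are illustrative and not part of any specified decoding process.

```python
def init_reference_list(ref_subpics):
    """ref_subpics: list of (reference_sub_picture, [spatial_relationship_instances]).
    Each reference sub-picture is entered once per instance of spatial relationship
    information concerning it, so a single reference index identifies both the
    reference sub-picture and the instance to apply."""
    ref_list = []
    for ref, sri_instances in ref_subpics:
        for sri in sri_instances:
            ref_list.append((ref, sri))
    return ref_list

def resolve_reference(ref_list, ref_idx):
    # Decoder side: a decoded reference index selects both the reference
    # sub-picture and the particular instance of spatial relationship information.
    ref_subpic, spatial_relationship_info = ref_list[ref_idx]
    return ref_subpic, spatial_relationship_info
```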
  • an access unit contains sub-pictures of the same time instance, and coded video data for a single access unit is contiguous in decoding order and is not interleaved, in decoding order, with any coded data of any other access unit.
  • sub-pictures of the same time instance need not be contiguous in decoding order.
  • This embodiment may be used for example for retroactive decoding of some sub-layers of sub-picture sequences that were earlier decoded at a reduced picture rate but are now to be decoded at a higher picture rate.
  • Such operation for multiple picture rates or different number of sub-layers for sub-picture sequence is described in another embodiment further below.
  • all sub-picture sequences have sub-pictures of the same time instances present, i.e. when any sub-picture sequence has a sub-picture for a time instance, all other sub-picture sequences also have a sub-picture for that time instance.
  • An encoder may indicate in or along the bitstream, e.g. in a VPS (Video Parameter Set), and/or a decoder may decode from or along the bitstream whether all sub-picture sequences have sub-pictures of the same time instances present.
  • sub-picture sequences may have sub-pictures present whose time instances are at least partially differing. For example, sub-picture sequences may have different picture rates from each other.
  • all sub-picture sequences may have the same prediction structure, have sub-pictures of the same time instances present and use sub-pictures of the same time instances as reference.
  • An encoder may indicate in or along the bitstream, e.g. in a VPS, and/or a decoder may decode from or along the bitstream if all sub-picture sequences have the same prediction structure.
  • reference picture marking for a sub-picture sequence is independent of other sub-picture sequences. This may be realized e.g. by using separate SPSs (Sequence Parameter Sets) and PPSs (Picture Parameter Sets) for different sub-picture sequences.
  • reference picture marking for all sub-picture sequences is synchronized.
  • all sub-pictures of a single time instance are either all marked as "used for reference” or all marked as "unused for reference”.
  • syntax structures affecting reference picture marking are included in and/or referenced by sub-picture-specific data units, such as VCL NAL units for sub-pictures.
  • syntax structures affecting reference picture marking are included in and/or referenced by picture-specific data units, such as a picture header, a header parameter set, or alike.
  • two levels of bitstream or CVS (Coded Video Sequence) properties are indicated, namely per sub-picture sequence and collectively for all sub-picture sequences (i.e. all coded video data).
  • the properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), constraints that have been applied in encoding.
  • Properties per sub-picture sequence may be indicated in a syntax structure that applies to the sub-picture sequence. Properties applying collectively to all sub-picture sequences may be indicated in a syntax structure applying to the entire CVS or bitstream.
  • two levels of bitstream or CVS (Coded Video Sequence) properties are decoded, namely per sub-picture sequence and collectively to all sub-picture sequences (i.e. all coded video data).
  • the properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), constraints that have been applied in encoding.
  • a decoder or a client may determine from the properties indicated for all sub-picture sequences collectively whether it can process the entire bitstream.
  • a decoder or a client may determine from the properties indicated for individual sub-picture sequences which sub-picture sequences it is able to process.
  • an encoder may indicate in or along the bitstream (e.g. in SPS) and/or a decoder may decode from or along the bitstream (e.g. in SPS) whether motion vectors may cause references to sample locations over sub-picture boundaries.
  • the properties per sub-picture sequence and/or the properties applying collectively to all sub-picture sequences are informative of the sample count and/or sample rate limit applicable in the sub-picture sequence and/or all sub-picture sequences.
  • parameters related to a sub-picture and/or sub-picture sequence are encoded into and/or decoded from a picture parameter set.
  • Sub-pictures of the same picture, access unit, or time instance are allowed but not necessarily required to refer to different picture parameter sets.
  • information indicative of sub-picture width and height are indicated in and/or decoded from a picture parameter set.
  • the sub-picture width and height may be indicated and/or decoded in units of CTUs.
  • the picture parameter set syntax structure may comprise the following syntax elements:
  • multiple_subpics_enabled_flag equal to 0 specifies that a picture contains exactly one sub-picture and that all VCL NAL units of an access unit reference the same active PPS.
  • multiple_subpics_enabled_flag equal to 1 specifies that a picture may contain more than one sub-picture and each sub-picture may reference a different active PPS.
  • subpic_width_in_ctus_minus1 plus 1, when present, specifies the width of the sub-picture for which this PPS is the active PPS.
  • subpic_height_in_ctus_minus1 plus 1, when present, specifies the height of the sub-picture for which this PPS is the active PPS.
  • variables related to picture dimensions may be derived based on these syntax elements and may override the respective variables derived from the syntax elements of the SPS.
  • the PPS may contain the tile row heights and tile column widths of all tile rows and tile columns, respectively, and the sub-picture height and width are the sums of all the tile row heights and tile column widths, respectively.
  • sub-picture width and height may be indicated and/or decoded in units of minimum coding block size. This option would enable finer granularity for the last tile column and the last tile row.
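  • An illustrative sketch of deriving sub-picture dimensions from the picture parameter set syntax elements described above; the CTU size value and the representation of the PPS as a dictionary are assumptions made for the sketch.

```python
def derive_subpic_dims(pps, sps_pic_width, sps_pic_height, ctu_size=128):
    """pps: dict of decoded PPS syntax elements, e.g.
    {'multiple_subpics_enabled_flag': 1,
     'subpic_width_in_ctus_minus1': 11, 'subpic_height_in_ctus_minus1': 7}."""
    if not pps.get('multiple_subpics_enabled_flag'):
        # Exactly one sub-picture: dimensions follow from the SPS as usual.
        return sps_pic_width, sps_pic_height
    # Width/height in CTUs; these override the picture dimension variables
    # derived from the SPS for the sub-picture(s) that use this PPS as the active PPS.
    width = (pps['subpic_width_in_ctus_minus1'] + 1) * ctu_size
    height = (pps['subpic_height_in_ctus_minus1'] + 1) * ctu_size
    return width, height
```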
  • parameters related to a sub-picture sequence are encoded into and/or decoded from a sub-picture parameter set.
  • a single sub-picture parameter set may be used by sub-pictures of more than one sub-picture sequence but is not required to be used by all sub-picture sequences.
  • a sub-picture parameter set may for example comprise information similar to that included in a picture parameter set for conventional video coding, such as HEVC.
  • a sub-picture parameter set may indicate which coding tools are enabled in coded image segments of the sub-pictures referring to the sub-picture parameter set.
  • Sub-pictures of the same time instance may refer to different sub-picture parameter sets.
  • a picture parameter set may indicate parameters that apply collectively to more than one sub-picture sequence or across sub-pictures, such as spatial relationship information.
  • a sub-picture sequence is encapsulated as a track in a container file.
  • a container file may contain multiple tracks of sub-picture sequences. Prediction of a sub-picture sequence from another sub-picture sequence may be indicated through file format metadata, such as a track reference.
  • selected sub-layer(s) of a sub-picture sequence are encapsulated as a track.
  • sub-layer 0 may be encapsulated as a track.
  • Sub-layer-wise encapsulation may enable requesting, transmission, reception, and/or decoding of a subset of sub-layers for tracks that are not needed for rendering.
  • one or more collector tracks are generated.
  • a collector track indicates which sub-picture tracks are suitable to be consumed together.
  • Sub-picture tracks may be grouped into groups containing alternatives to be consumed. For example, one sub-picture track per group may be intended to be consumed for any time range.
  • Collector tracks may reference either or both of sub-picture tracks and/or groups of sub-picture tracks.
  • Collector tracks might not contain instructions for modifying coded video content, such as VCL NAL units.
  • the generation of a collector track comprises but is not limited to authoring and storing one or more of the following pieces of information:
  • sequence parameter set(s), picture parameter set(s), header parameter set(s), and/or picture header(s) may be generated.
  • a collector track may contain the picture header that applies for a picture when its sub-pictures may originate from both random-access pictures and non-random-access pictures or be of both random-access sub-picture type and non-random-access sub-picture type.
  • Bitstream or CVS (Coded Video Sequence) properties applying collectively to the sub-picture sequences resolved from the collector track.
  • the properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), constraints that have been applied in encoding.
  • a sample in a collector track pertains to multiple samples of associated sub-picture tracks. For example, by selecting the sample duration of a collector track to pertain to multiple samples of associated sub-picture tracks, it can be indicated that the same parameter sets and/or header, and/or the same picture composition data applies to a period of time in the associated sub-picture tracks.
  • a client or alike identifies one or more collector tracks being available, wherein
  • a collector track indicates which sub-picture tracks are suitable to be consumed together, and collector tracks may reference either or both of sub-picture tracks and/or groups of sub-picture tracks (e.g. a group containing alternative sub-picture tracks out of which one is intended to be selected for consumption for any time range), and collector tracks might not contain instructions for modifying coded video content, such as VCL NAL units.
  • the client or alike parses from one or more collector tracks or information accompanying the one or more collector tracks one or more of the following pieces of information:
  • Bitstream or CVS (Coded Video Sequence) properties applying collectively to the sub-picture sequences resolved from the collector track.
  • the properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), constraints that have been applied in encoding.
  • the client or alike selects a collector track from the one or more collector tracks to be consumed.
  • the selection may be based on but is not limited to the above-listed pieces of information.
  • the client or alike resolves the collector track to generate a bitstream for decoding. At least a subset of the information included in or accompanying the collector track may be included in the bitstream for decoding.
  • the bitstream may be generated piece-wise, e.g. access unit by access unit.
  • the bitstream may then be decoded, and the decoding may be performed piece-wise, e.g. access unit by access unit.
  • Embodiments described with reference to a collector track equally apply to tracks called differently but essentially of the same nature.
  • For example, the term parameter set track could be used, since the information included in the track could be considered parameters or parameter sets rather than VCL data.
  • sub-picture sequence(s) are decapsulated from selected tracks of a container file. Samples of the selected tracks may be arranged into a decoding order that complies with a coding format or a coding standard, and then passed to a decoder. For example, when a second sub-picture is predicted from a first sub-picture, the first sub-picture is arranged prior to the second sub-picture in decoding order.
  • each track containing a sub-picture sequence forms a Representation.
  • An Adaptation Set is generated per each group of sub-picture sequence tracks that are collocated and also otherwise share the same properties such that switching between the Representations of an Adaptation Set is possible e.g. with a single decoder instance.
  • prediction of a sub-picture sequence from another sub-picture sequence may be indicated through streaming manifest metadata, such as @dependencyId in DASH MPD.
  • an indication of a group of Adaptation Sets is generated into an MPD, wherein the Adaptation Sets contain Representations that carry sub-picture sequences, and the sub-picture sequences are such that can be decoded with a single decoder.
  • a client infers from the indicated group that any combination of selected dependent Representations whose complementary Representations are also in the combination and any selected independent or complementary Representations can be decoded.
  • a client selects e.g. based on the above-mentioned indicated group, estimated throughput, and use case needs (see e.g. below embodiments on viewport-dependent streaming) from which Representations (Sub)segments are requested.
  • sub-pictures are encoded onto and/or decoded from more than one layer of scalable video coding.
  • a reference picture for inter-layer prediction comprises a picture generated by the output picture compositing process.
  • inter-layer prediction is performed from a reconstructed sub-picture of a reference layer to a sub-picture of an enhancement layer.
  • a sub-picture sequence corresponds to a layer of scalable video coding.
  • Embodiments can be used to realize e.g. quality scalability, region-of-interest scalability, or view scalability (i.e. multiview or stereoscopic video coding).
  • multi-layer coding may be replaced by sub-picture-based coding.
  • Sub-picture-based coding may be more advantageous in many use cases compared to scalable video coding. For example, many described embodiments enable a large number of sub-picture sequences, which may be advantageous e.g. in point cloud coding or volumetric video coding where the generation of patches is dynamically adapted.
  • scalable video coding has conventionally assumed a fixed maximum number of layers (e.g. as determined by the number of bits in the nuh_layer_id syntax element in HEVC). Furthermore, many described embodiments enable dynamic selection of the (de)coding order of sub-pictures and reference sub-pictures for prediction, whereas scalable video coding conventionally has a fixed (de)coding order of layers (within an access unit) and a fixed set of allowed inter-layer dependencies within a coded video sequence.
  • Embodiments may be used with, but are not limited to, selecting (for encoding) and/or decoding sub-pictures or sub-picture sequences as any of the following:
  • partitions that may correspond to coded image segments
  • spatiotemporal partitions may be
  • a projection structure of 360-degree projection, such as faces of a multi-face 360-degree projection (e.g. cubemap)
  • packed regions as indicated by region-wise packing information
  • a sub-picture sequence may comprise respective patches in subsequent time instances
  • In the following, use cases of sub-picture-based (de)coding are discussed, e.g. from the point of view of viewport-dependent 360-degree video streaming; coding of scalable, multiview and stereoscopic video; coding of multi-face content with overlapping; and coding of point cloud content.
  • a coded sub-picture sequence may be encapsulated in a track of a container file, the track may be partitioned into Segments and/or Subsegments, and a Representation may be created in a streaming manifest (e.g. MPEG-DASH MPD) to make the (Sub)segments available through requests and to announce properties of the coded sub-picture sequence.
  • the process of the previous sentence may be performed for each of the coded sub-picture sequences.
  • a client apparatus may be configured to parse from a manifest information of a plurality of Representations and to parse from the manifest a spherical region for each of the plurality of Representations.
  • the client apparatus may also parse from the manifest values indicative of the quality of the spherical regions and/or resolution information for the spherical regions or their 2D projections.
  • the client apparatus determines which Representations are suitable for its use.
  • the client apparatus may include means to detect head orientation when using a head-mounted display and select a Representation with a higher quality to cover the viewport than in Representations selected for other regions. As a consequence of the selection, the client apparatus may request (Sub)Segments of the selected Representations.
  • the same content is coded at multiple resolutions and/or bitrates using sub-picture sequences. For example, different parts of a 360-degree content may be projected to different surfaces, and the projected faces may be downsampled to different resolutions. For example, the faces that are not in the current viewport may be downsampled to lower resolution. Each face may be coded as a sub-picture.
  • the same content is coded at different random-access intervals using sub-picture sequences.
  • a change in viewing orientation causes a partly different selection of Representations to be requested than earlier.
  • the new Representations to be requested may be requested or their decoding may be started from the next random-access position within the sub-picture sequences carried in the Representations.
  • Representations having more frequent random-access positions may be requested as a response to a viewing orientation change until a next (Sub)segment with random-access position and of similar quality is available from respective Representations having less frequent random-access positions.
  • sub-pictures may be allowed to have different sub-picture types or NAL unit types.
  • a sub-picture of a particular access unit or time instance may be of a random-access type while another sub-picture of the same particular access unit or time instance may be of a non-random-access type.
  • sub-pictures of bitstreams having different random-access intervals can be combined.
  • shared coded sub-pictures are coded among the sub-picture sequences.
  • Shared coded sub-pictures are identical in respective sub-picture sequences of different bitrates, both in their coded form (e.g. VCL NAL units are identical) and in their reconstructed form (the reconstructed sub-pictures are identical).
  • shared coded sub-pictures are coded in their own sub-picture sequence.
  • shared coded sub-pictures are indicated in or along the bitstream (e.g. by an encoder) not to be output by a decoder, and/or are decoded from or along the bitstream not to be output by a decoder.
  • Shared coded sub-pictures may be made available as separate Representation(s) or may be included in "normal" Representations.
  • the client apparatus may constantly request and receive those Representation(s).
  • Figure 12 illustrates an example of using shared coded sub-picture for multi-resolution viewport-dependent 360-degree video streaming.
  • the cubemap content is resampled before encoding to three resolutions (A, B, C). It needs to be understood that cubemap projection is meant as one possible choice for which the embodiment can be realized, but generally other projection formats can likewise be used. In this example, the content at each resolution is split into sub-pictures of equal dimensions, although generally different dimensions could likewise be used.
  • In this example, shared coded sub-pictures (indicated with a rectangle containing the S character) are coded periodically, but it needs to be understood that different strategies of coding shared coded sub-pictures could additionally or alternatively be used. For example, scene cuts could be detected, IRAP pictures or alike could be coded for detected scene cuts, and periods for coding shared coded sub-pictures could be reset at IRAP pictures or alike.
  • shared coded sub-pictures are coded with "normal" sub-pictures (indicated with striped rectangles in the figure) in the same sub-picture sequences.
  • the shared coded sub-picture and the respective "normal" sub-picture represent conceptually different units in the bitstream, e.g. with different decoding times, with different picture order counts, and/or belonging to different access units.
  • a sequence of shared coded sub-pictures could form its own sub-picture sequence from which the respective "normal" sub-picture sequence could be predicted.
  • a shared coded sub-picture and the respective "normal" sub-picture from the same input picture can belong to the same time instance (e.g. be a part of the same access unit).
  • shared coded sub-pictures have the same dimensions as the respective "normal" sub-pictures.
  • shared coded sub-pictures could have different dimensions.
  • a shared coded sub-picture could cover an entire cube face or all cube faces of a cubemap, and spatial relationship information could be used to indicate how "normal" sub-pictures spatially relate to shared coded sub-pictures.
  • An advantage of this approach is to enable prediction across a larger area within and between shared coded sub-pictures when compared to "normal" sub-pictures.
  • the client apparatus can select, request, receive, and decode:
  • a sub-picture sequence representing 360-degree video is coded at a "base" fidelity or quality, and hence the sub-picture sequence may be referred to as the base sub-picture sequence.
  • This sub-picture sequence may be considered to carry shared coded sub-pictures.
  • one or more sub-picture sequences representing spatiotemporal subsets of the 360-degree video are coded at a fidelity or quality that is higher than the base fidelity or quality.
  • the projected picture area or the packed picture area may be partitioned into rectangles, and each sequence of rectangles may be coded as a "region-of-interest" sub-picture sequence.
  • An ROI sub-picture sequence may be predicted from the base sub-picture sequence and from reference sub-pictures of the same ROI sub-picture sequence. Spatial relationship information is used to indicate the spatial correspondence of the ROI sub-picture sequence in relation to the base sub-picture sequence.
  • ROI sub-picture sequences of the same spatial position can be coded, e.g. for different bitrate or resolution.
  • the base sub-picture sequence has the same picture rate as the ROI sub-picture sequences and thus ROI sub-picture sequences can be selected to cover a subset of the 360-degree video, e.g. the viewport with a selected margin for viewing orientation changes.
  • the base sub-picture sequence has a lower picture rate than the ROI sub-picture sequences and thus ROI sub-picture sequences can be selected to cover the entire 360-degree video.
  • the viewport with a selected margin for viewing orientation changes can be selected to be requested, transmitted, received, and/or decoded from ROI sub-picture sequences with higher quality than the ROI sub-picture sequences covering the remainder of the sphere coverage.
  • the base sub-picture sequence is always received and decoded.
  • ROI sub-picture sequences selected on the basis of the current viewing orientation are received and decoded.
  • Random-access sub-pictures for the ROI sub-picture sequences may be predicted from the base sub-picture sequence. Since the base sub-picture sequence is consistently received and decoded, random-access sub-picture interval (i.e., the SAP interval) for the base sub-picture sequence can be longer than that for ROI sub-picture sequences.
  • the encoding method facilitates switching to requesting and/or receiving and/or decoding another ROI sub-picture sequence at a SAP position of that ROI sub-picture sequence. No intra-coded sub-picture at that ROI sub-picture sequence is required to start the decoding of that ROI sub-picture sequence, and consequently compression efficiency is improved compared to a conventional approach.
  • Conventionally, the number of sub-picture tracks was pre-defined when creating an extractor track for merging the content of the sub-picture tracks into a single bitstream.
  • the number of decoded sub-pictures can be dynamically chosen depending on available decoding capacity, e.g. on a multi-process or multi-tasking system with resource sharing.
  • the coded data for a particular time instance can be passed to decoding even if some requested sub-pictures for it have not been received. Thus, delivery delays concerning only a subset of sub-picture sequences do not stall the decoding and playback of other sub-picture sequences.
  • Switching between bitrates and received sub-pictures can take place at any shared coded sub-picture and/or random-access sub-picture.
  • Several versions of the content can be encoded at different intervals of shared coded sub-pictures and/or random-access sub-pictures.
  • shared coded sub-pictures and/or random-access sub-pictures need not be aligned in all sub-picture sequences, thus better rate-distortion efficiency can be achieved when switching and/or random-access property is only in those sub-picture sequences where it is needed.
  • The term sub-picture can refer to various use cases and/or types of projections. Examples relating to the coding of sub-pictures in the context of a few of these use cases are discussed next.
  • different parts of a 360-degree content may be projected to different surfaces, and the projected faces may have overlapped content.
  • a content may be divided to several regions (e.g. tiles) with overlapped content.
  • Each face or region may be coded as a sub-picture.
  • Each sub-picture may use a part of the other sub-picture as a reference frame, as is shown in Figures 13 and 14 for two examples, where the non-overlapped contents are shown as white boxes, the overlapped areas are shown in gray color, and the corresponding parts in the sub-pictures are indicated by a dashed rectangle.
  • Spatial relationship information could be used to indicate how a sub-picture spatially relates to other sub-pictures.
  • each part of a point cloud content is projected to a surface to generate a patch.
  • Each patch may be coded as a sub-picture. Different patches may have redundant data. Each sub-picture may use another sub-picture to compensate for this redundancy.
  • different parts of a point cloud have been projected to surface 1 and surface 2 to generate patch 1 and patch 2, respectively.
  • Each patch is coded as a sub-picture.
  • a part of the point cloud content which is indicated by c,d,e is redundantly projected to two surfaces, so the corresponding area of one sub-picture may be predicted from the other sub-picture.
  • a patch of a second PCC layer is coded as a second sub-picture and is predicted from the reconstructed sub-picture of the respective patch of a first PCC layer.
  • a second sub-picture is decoded, wherein the second sub-picture represents a patch of a second PCC layer, and wherein the decoding comprises prediction from the reconstructed sub-picture that represents the respective patch of a first PCC layer.
  • sub-picture sequences are intentionally encoded, requested, transmitted, received, and/or decoded at different picture rates and/or at different number of sub layers.
  • This embodiment is applicable e.g. when only a part of the content is needed for rendering at a particular time. For example, in 360-degree video only the viewport is needed for rendering at a particular time, and in point cloud coding and volumetric video the part needed for rendering may depend on the viewing position and viewing orientation.
  • the picture rate and/or the number of sub layers for sub-picture sequences that are needed for rendering may be selected (in encoding, requesting, transmitting, receiving, and/or decoding) to be higher than for those sub-picture sequences that are not needed for rendering and/or not likely to be needed for rendering soon (e.g. for responding to a viewing orientation change).
  • the needed decoding capacity and power consumption may be reduced.
  • delivery and/or decoding speedup may be achieved e.g. for faster than real-time playback.
  • Sub-layer access pictures, such as TSA and/or STSA pictures of HEVC, may be used to restart encoding, requesting, transmitting, receiving, and/or decoding sub-layers.
  • a TSA sub-picture or alike can be encoded into the lowest sub-layer of a sub-picture sequence not predicted from other sub-picture sequences.
  • This TSA sub-picture indicates that all sub-layers of this sub-picture sequence can be predicted starting from this TSA picture.
  • a TSA sub-picture or alike is decoded from the lowest sub-layer of a sub-picture sequence not predicted from other sub-picture sequences. In an embodiment, it is concluded that requesting, transmission, reception, and/or decoding of any sub-layers above the lowest sub-layer can start from this TSA sub-picture, and consequently such requesting, transmission, reception, and/or decoding takes place.
  • the present embodiments also provide other advantages in addition to those already discussed above. For example, loop filtering across sub-picture boundaries is disabled. Thus, a very low delay operation may be achieved by processing the decoded sub-pictures output by the decoding process immediately (e.g., through color space conversion from YUV to RGB, etc.). This enables pipelining of the processes involved in playing (e.g. receiving VCL NAL units, decoding VCL NAL units, post-processing decoded sub-pictures). A similar benefit may also be achieved at the encoding end. Moreover, filtering over borders of non-contiguous image content, such as filtering across disjoint projection surfaces, may cause visible artefacts, which is avoided when such filtering is disabled.
  • a sequence of patches of point cloud or volumetric video can be indicated to be of the same or similar source (e.g. the same projection surface) by indicating them to belong to the same sub-picture sequence. Consequently, patches of the same source can be inter-predicted from each other.
  • Conventionally, patches of point cloud or volumetric video have been packed onto a 2D picture, and patches of the same or similar source should have been positioned spatially to the same location on the 2D picture.
  • Since the number and size of patches may vary, such temporal alignment of corresponding patches might not be straightforward.
  • the number or pixel count of sub-pictures per picture does not have to stay constant. This makes 360-degree and 6DoF streaming applications that are based on "late binding" and adaptation based on viewing orientation and/or viewing position easier to implement.
  • the number of received sub-pictures can be chosen based on the viewport size and/or the decoding capacity. If a sub-picture is not received in time, the picture can be decoded without it.
  • the picture width and height may be allowed not to be aligned with a CTU boundary (or alike), and since sub-picture decoding operates like conventional picture decoding, flexibility in defining sub-picture sizes is achieved. For example, sub-picture sizes used for 360-degree video need not be multiples of the CTU width and height. Thus, decoding capacity in terms of pixels/second can be utilized more flexibly.
  • sub-picture coding can improve intra coding at face boundaries by not using neighboring face pixels for prediction.
  • In the following, the reference sub-picture manipulation process will be described in more detail, in accordance with an embodiment.
  • An encoder selects which of the sub-pictures could be used as a source of a manipulated reference sub-picture.
  • the encoder generates the set of manipulated reference sub-pictures from the set of decoded sub-pictures using the identified reference sub-picture manipulation process; and includes at least one of the manipulated reference sub-pictures in a reference picture list for prediction.
  • the encoder includes in or along the bitstream an identification of the reference sub-picture manipulation process and may also include in the bitstream information indicative of or infers a set of decoded sub-pictures to be manipulated, and/or a set of manipulated reference sub-pictures to be generated.
  • a decoder decodes from or along the bitstream the identification of the reference sub-picture manipulation process.
  • the decoder also decodes from the bitstream information indicative of or infers a set of decoded sub-pictures to be manipulated, and/or a set of manipulated reference sub-pictures to be generated.
  • the decoder may also generate the set of manipulated reference sub-pictures from the set of decoded sub-pictures using the identified reference sub-picture manipulation process; and include at least one of the manipulated reference sub-pictures in a reference picture list for prediction.
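  • For illustration, a high-level sketch of the decoder-side flow described above: decode the identification of the manipulation process, determine the input set, generate the manipulated reference sub-pictures, and include them in a reference picture list. All object, method and process names are placeholders, not an actual decoder API.

```python
def handle_reference_subpic_manipulation(bitstream, dpb, ref_pic_list, processes):
    """processes: registry mapping a decoded process identifier to a callable
    implementing that manipulation (e.g. resampling, packing, blending).
    dpb holds already reconstructed sub-pictures; everything here is illustrative."""
    process_id = bitstream.decode_manipulation_process_id()   # from or along the bitstream
    manipulate = processes[process_id]
    # Set of decoded sub-pictures to be manipulated: decoded from the bitstream or inferred.
    input_ids = bitstream.decode_input_subpic_set() or dpb.infer_input_set()
    # Generate the set of manipulated reference sub-pictures.
    manipulated = [manipulate(dpb.get(subpic_id)) for subpic_id in input_ids]
    # Include the manipulated reference sub-pictures in a reference picture list for prediction.
    ref_pic_list.extend(manipulated)
    return manipulated
```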
  • an encoder indicates in or along the bitstream and/or a decoder decodes from or along the bitstream and/or it is inferred by an encoder and/or a decoder that a reference sub-picture manipulation operation is to be carried out when the reference sub-picture(s) used as input in the reference sub-picture manipulation become available.
  • an encoder encodes into or along the bitstream and/or a decoder decodes from or along the bitstream a control signal if a reference sub-picture is to be provided for reference sub-picture manipulation when it becomes available (e.g., right after it has been decoded).
  • the control signal may be included for example in a sequence parameter set, a picture parameter set, a header parameter set, a picture header, a sub-picture delimiter or header, and/or an image segment header (e.g. a tile group header).
  • the control signal may apply to each sub-picture referring to the parameter set.
  • the control signal may be specific to a sub-picture sequence (and may be accompanied by a sub-picture sequence identifier) or may apply to all sub-picture sequences that are decoded.
  • the control signal may apply to the spatiotemporal unit wherein the header is applied.
  • the control signal is applicable in the first header and may be repeated in subsequent headers applying to the same spatiotemporal units.
  • the control signal may be included in an image segment header (e.g. a tile group header) of a sub-picture, indicating that the decoded sub-picture is provided to the reference sub-picture manipulation.
  • an encoder indicates in or along the bitstream and/or a decoder decodes from or along the bitstream and/or it is inferred by an encoder and/or a decoder that a reference sub-picture manipulation operation is to be carried out when the manipulated reference sub-picture is referenced in encoding and/or decoding or is about to be referenced in encoding and/or decoding.
  • the reference sub-picture manipulation process may be carried out when the manipulated reference sub-picture is included in a reference picture list among "active" reference sub-pictures that may be used as reference for prediction in the current sub-picture.
  • Decoded picture buffering is performed on picture basis rather than on sub-picture basis.
  • An encoder and/or a decoder generates a reference picture from decoded sub-pictures of the same access unit or time instance using the picture composition data.
  • the generation of a reference picture is performed identically or similarly to what is described in other embodiments for generating output pictures.
  • An embodiment, in which decoded picture buffering is performed on picture basis, comprises the following: A reference sub-picture to be used as input to the reference sub-picture manipulation process is generated by extracting an area from a reference picture in the decoded picture buffer. The extraction may be done as a part of the decoded picture buffering process or a part of the reference sub-picture manipulation process or be operationally connected to the decoded picture buffering process and/or the reference sub-picture manipulation process.
  • the area is the area that collocates with the current sub-picture being encoded or decoded.
  • the area is provided through spatial relationship information.
  • the reference sub-picture manipulation process gets reference sub-picture(s) from the decoded picture buffering process similarly to other embodiments, and the reference sub-picture manipulation process may operate similarly to other embodiments.
  • the above-mentioned sub-picture packing may involve indicating packing information for sub-picture sequences, sub-pictures, or regions within sub-pictures that may be used as source for the sub-picture packing.
  • the packing information is indicated similarly to but separately from the picture composition data.
  • an encoder indicates in or along the bitstream that the picture composition data is reused as packing information, and/or likewise a decoder decodes from or along the bitstream that the picture composition data is reused as packing information.
  • the packing information is indicated similarly to the region-wise packing SEI message or region-wise packing metadata of OMAF.
  • packing information may be indicated for a set of reconstructed sub-pictures (e.g. all sub-pictures to be used for output picture compositing), but a manipulated reference sub-picture may be generated from those reconstructed sub-pictures that are available at the time when the manipulated reference sub-picture is created.
  • a manipulated reference sub-picture that is used as a reference for a third sub-picture of a first time instance may be generated from a first reconstructed sub-picture and a second reconstructed sub-picture (also of the first time instance) that precede the third sub-picture in decoding order, while the packing information used in generating the manipulated reference sub-picture may comprise the information for the first, second, and third sub-pictures.
  • the blending as part of generating the manipulated reference sub-picture may be performed either so that each sample value for a sample position is calculated as the average of all samples of the reference sub-pictures positioned onto this sample position, or so that each sample may be calculated using a weighted average according to the location of the sample with respect to the locations of available and unavailable samples.
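  • A sketch of the two blending alternatives mentioned above for sample positions covered by several reference sub-pictures: a plain average over the available samples, or a weighted average. The numpy representation and the per-source availability masks/weights are assumptions made for the illustration.

```python
import numpy as np

def blend_average(layers, masks):
    """layers: list of 2D arrays already positioned on a common sample grid.
    masks: matching boolean arrays, True where a layer has an available sample.
    Each output sample is the average of all available samples at that position."""
    stack = np.stack(layers).astype(np.float64)
    valid = np.stack(masks)
    count = np.maximum(valid.sum(axis=0), 1)        # avoid division by zero
    return (stack * valid).sum(axis=0) / count

def blend_weighted(layers, weights):
    """weights: per-layer arrays, e.g. decreasing towards unavailable samples,
    so that samples close to available content dominate the blend."""
    stack = np.stack(layers).astype(np.float64)
    w = np.stack(weights).astype(np.float64)
    return (stack * w).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-9)
```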
  • An adaptive resolution change refers to dynamically changing the resolution within the video bitstream or video session, for example in video-conferencing use-cases.
  • Adaptive resolution change may be used e.g. for better network adaptation and error resilience against transmission errors or losses.
  • it may be desired to be able to change both the temporal/spatial resolution in addition to quality.
  • ARC may also enable a fast start of a session or after seeking to a new time position, wherein the start-up time may be reduced by first sending a low-resolution frame and then increasing the resolution.
  • ARC may further be used in composing a conference. For example, when a person starts speaking, his/her corresponding resolution may be increased.
  • ARC may be conventionally carried out by encoding a random-access picture (e.g. an HEVC IRAP picture) at the position where the resolution change takes place.
  • intra coding applied in random-access pictures makes them less efficient than inter-coded pictures in rate-distortion performance. Consequently, one possibility is to encode a random-access picture at a relatively low quality to keep the bit count close to that of inter-coded pictures so that the delay is not significantly increased.
  • However, a low-quality picture may be subjectively noticeable and may also negatively affect the rate-distortion performance of pictures predicted from it.
  • Another possibility is to encode a random-access picture at a relatively high quality, but then the relatively high bit count may cause higher delay. In low-delay conversational applications, it might not be possible to compensate the high delay with initial buffering, which might cause noticeable picture rate fluctuation or motion discontinuity.
  • The use of the reference sub-picture manipulation process for adaptive resolution changes is illustrated with Figure 17a, in accordance with an embodiment.
  • empty squares illustrate coded sub-pictures (e.g. 200, 201) and squares with the letter M illustrate manipulated reference sub-pictures (e.g. 210, 211).
  • the arrows 220, 221 illustrate generation of the manipulated reference sub-pictures.
  • Although inter prediction is not illustrated in Figure 17a, inter prediction may be used. Any preceding reference sub-picture of the same sub-picture sequence, in decoding order, may be used as a reference for prediction. Moreover, the illustrated manipulated reference sub-pictures may be used as reference for prediction.
  • the example illustrates that the last reconstructed sub-picture of a certain resolution is resampled to generate a manipulated reference sub-picture 210 for a new resolution.
  • Such an arrangement may suit low-delay applications, where the decoding and output order of (sub-)pictures are the same. It needs to be understood that this is not the only possible arrangement, but any reconstructed sub-picture(s) may be resampled to generate manipulated reference sub-picture(s) to be used as a reference for prediction of sub-pictures of a new resolution.
  • Resampling ratio and the corresponding area in the current sub-picture and the reference sub-picture may be indicated by determining a scaling window for the current and the reference sub-picture.
  • The scaling ratio in the horizontal/vertical direction may be derived by dividing the width/height of the scaling window in the current sub-picture by that of the reference sub-picture.
  • the top-left corner of the scaling window in the current sub-picture may be matched to the top-left corner of the scaling window in the reference sub-picture.
  • The resampling operation may be performed by resampling the whole sub-picture and creating a new (temporary) sub-picture to be used as a reference frame in the inter prediction process.
  • the resampling operation may be integrated into the motion compensation process.
  • the top-left corner of the current block in the current sub-picture is mapped to the reference sub-picture using the parameters of the scaling windows (i.e. scaling ratio and corresponding area).
  • the sub-sample position of the horizontal and vertical filtering process of the motion compensation operation is updated for each row and each column. Based on the scaling ratio, different resampling filters with different frequency characteristics may be used for interpolation filtering, for example to satisfy the Nyquist criterion for the given resampling ratio.
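  • A sketch of the scaling-window based mapping described above: the horizontal and vertical scaling ratios are derived from the scaling-window dimensions, and a location in the current sub-picture (e.g. the top-left corner of the current block) is mapped into the reference sub-picture with the windows' top-left corners matched. Integer/fractional sample handling and the actual interpolation filters are omitted; window parameters are given as (x, y, width, height) tuples for the sketch.

```python
def scaling_ratios(cur_win, ref_win):
    """cur_win / ref_win: (x, y, width, height) scaling windows of the current
    and reference sub-pictures. Ratio = current window size / reference window size."""
    hor_ratio = cur_win[2] / ref_win[2]
    ver_ratio = cur_win[3] / ref_win[3]
    return hor_ratio, ver_ratio

def map_to_reference(x, y, cur_win, ref_win):
    """Maps a sample location of the current sub-picture to the reference
    sub-picture; the top-left corners of the two scaling windows are matched."""
    hor_ratio, ver_ratio = scaling_ratios(cur_win, ref_win)
    ref_x = ref_win[0] + (x - cur_win[0]) / hor_ratio
    ref_y = ref_win[1] + (y - cur_win[1]) / ver_ratio
    return ref_x, ref_y   # possibly fractional; handled by the interpolation filtering
```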
  • Sub-picture sequences may be formed so that the sub-pictures of the same resolution are in the same sub-picture sequence. Consequently, there are two sub-picture sequences in this example.
  • Another option for forming sub-picture sequences is such that the sub-pictures of the same resolution starting from a resolution switch point are in the same sub-picture sequence. Consequently, there are three sub-picture sequences in this example.
  • the example above illustrates a possible operation for live encoding adapted e.g. to network throughput and/or decoding capability.
  • the example above may also illustrate the decoding operation, where the decoded sub-pictures are a subset of sub-pictures that are available for decoding, e.g. in a container file or as a part of received streams.
  • An adaptive resolution change may be facilitated in streaming (for multiple players) for example as described in the following.
  • a possible encoding arrangement is illustrated in Figure 17b.
  • sub-picture sequence(s) for switching from high resolution to low resolution may be encoded.
  • sub-picture sequences of more than two resolutions or for several qualities or bitrates are encoded and switching between those may be enabled by encoding sub-picture sequences using manipulated reference sub-pictures.
  • Selected sub-picture sequences may be encoded for relatively infrequent random-access interval.
  • a low-resolution sub-picture sequence and a high-resolution sub-picture sequence are generated for random-access period of every third (sub)segment.
  • These sub-picture sequences may be received e.g. at a stable reception condition, when the receiver buffer occupancy is sufficiently high and network throughput is sufficient and stable for the bitrate of the sub-picture sequence.
  • Selected sub-picture sequences are encoded for switching between resolutions using manipulated reference sub-pictures created through resampling.
  • one sub-picture sequence is encoded for resolution change from low to high resolution at any (sub)segment boundary.
  • the sub-pictures of each (sub)segment in this sub-picture sequence are encoded in a manner that they only depend on each other or on the low-resolution sub-picture sequence.
  • the sub-picture sequences are made available separately for streaming. For example, they may be announced as separate Representations in DASH MPD.
  • the client chooses on a (sub)segment basis which sub-picture sequence is received; an example of this is illustrated in Figure 17c.
  • empty squares illustrate coded non-random-access sub-pictures 200
  • squares with the letter M illustrate manipulated reference sub-pictures 210
  • squares with the letter I illustrate coded random-access sub-pictures 230.
  • the vertical dotted lines 222 illustrate (sub)segment boundaries and the thick line 224 illustrates received/decoded/generated sub-pictures.
  • the vertical arrows 220, 221 illustrate generation of the manipulated reference sub-pictures and the diagonal arrows 223 illustrate inter prediction.
  • the client first receives one (sub)segment of the low-resolution sub-picture sequence 240 (of the infrequent random-access interval). The client then decides to switch up to a higher resolution 250 and receives two (sub)segments of the sub-picture sequence 245 that uses manipulated reference sub-pictures generated from the low-resolution sub-pictures as a reference for prediction. However, since the latter manipulated reference sub-picture requires the second low-resolution (sub)segment to be decoded, the second (sub)segment of the low-resolution sub-picture sequence is also received. The client then switches to the high-resolution sub-picture sequence of the infrequent random-access interval.
  • the manipulated reference sub-pictures are generated from specific temporal sub-layers only (e.g. the lowest temporal sub-layer, e.g. Temporalld equal to 0 in HEVC).
  • Those specific temporal sub-layers may be made available for streaming separately from the other temporal sub-layers of the same sub-picture sequence. For example, those specific temporal sub-layers may be announced as a first Representation, and the other sub-layers of the same sub-picture sequence may be made available as a second Representation.
  • only the specific sub-layers need to be received from the second (sub)segment of the low- resolution sub-picture sequence.
  • the specific sub-layers may be made available as a separate Representation.
  • RWMR 360° video streaming offers an increased effective spatial resolution on the viewport.
  • a scheme where subpictures covering the viewport originate from a cubemap (CMP) equivalent to 6K (6144x3072) equirectangular projection (ERP) resolution is described in the following.
  • the achieved resolution on the viewport may be suitable e.g. for head-mounted displays using quad-HD (2560x1440) display panel.
  • the merged bitstream can be decoded with "4K" decoding capacity (e.g. like specified for HEVC Level 5.1).
  • the content is encoded at two spatial resolutions with cube face sizes 1536x1536 (high resolution, HR) and 768x768 (low resolution, LR).
  • a 6x4 subpicture grid is used, and subpicture boundaries are treated like picture boundaries.
  • Two bitstreams are encoded for each resolution, i.e. "normal” (N), which may have a random-access picture interval suitable for seeking or random accessing (e.g. 49 pictures), and "switching" (S), which may provide switching capability from the other resolution at an interval suitable for responding to viewing orientation changes (e.g. 8 pictures).
  • a subpicture in the S bitstream is (de)coded using only one or more respective manipulated reference subpictures of the N bitstream as a reference in inter prediction.
  • the S bitstream comprises dependent random access point (DRAP) pictures that use only a preceding IRAP picture as reference in inter prediction, and the IRAP picture is identical to the corresponding IRAP picture in the N bitstream.
  • An arrow in the figure implies both reference sub-picture manipulation (by upsampling or downsampling) and inter prediction (or motion compensation).
  • an IDR picture is used as an IRAP picture, but any other type of an IRAP picture could likewise be used.
  • "GOP" in Figure 20 comprises a sequence of one or more sub-pictures and the sub-pictures within“GOP” are of the resolution of the preceding IRAP/DRAP picture.
  • the sub-pictures within a "GOP” may have any inter prediction structure but their reference pictures may be constrained not to include any pictures preceding, in decoding order, the latest previous IRAP or DRAP picture. It needs to be understood that the embodiment is not limited to having a DRAP picture in the S-bitstream, but a picture that provides the switching capability from the N bitstream to the S bitstream may be (de)coded using any pictures of the N bitstream (or identically coded pictures of the S bitstream) as reference.
  • the picture immediately preceding, in output order, the picture that provides switching capability may be used as reference in inter prediction. It also needs to be understood that in some embodiments the pictures in the S bitstream from an IDR picture (inclusive) to the first DRAP picture (exclusive) following the IDR picture need not be (de)coded or made available for streaming, since they may be identical to the time-aligned pictures in N bitstream.
  • a DRAP picture may be defined to have one or more of the following properties:
  • the DRAP picture is a trailing picture (as defined in HEVC or VVC).
  • the DRAP picture has a temporal sublayer identifier equal to 0.
  • the DRAP picture is predicted only from the previous IRAP picture in decoding order (a.k.a. the associated IRAP picture). Consequently, in some coding formats, such as VVC, the DRAP picture does not include any pictures in the active entries of its reference picture lists except the associated IRAP picture of the DRAP picture.
  • any picture that follows the DRAP picture in both decoding order and output order does not use any picture that precedes the DRAP picture in decoding order or output order as reference in inter prediction, with the exception of the associated IRAP picture of the DRAP picture. Consequently, in some coding formats, such as VVC, any picture that follows the DRAP picture in both decoding order and output order does not include, in the active entries of its reference picture lists, any picture that precedes the DRAP picture in decoding order or output order, with the exception of the associated IRAP picture of the DRAP picture.
  • a dependent random access point (DRAP) indication SEI message is associated with a DRAP picture.
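The DRAP properties listed above can be illustrated with a simple bitstream-order check. The sketch below is non-normative: the picture records (type, POC, temporal id, reference POCs) are a hypothetical representation, the trailing-picture property is not checked, and output order is approximated by POC.

```python
# Illustrative, non-normative check of the DRAP constraints listed above.
# 'pictures' is a hypothetical list, in decoding order, of dicts with keys
# 'type' ("IRAP"/"DRAP"/"TRAIL"), 'poc', 'tid' and 'refs' (POCs of active refs).

def check_drap_constraints(pictures):
    last_irap_poc = None
    for idx, pic in enumerate(pictures):
        if pic["type"] == "IRAP":
            last_irap_poc = pic["poc"]
        elif pic["type"] == "DRAP":
            if pic["tid"] != 0:
                return False                      # sublayer identifier must be 0
            if last_irap_poc is None or pic["refs"] != [last_irap_poc]:
                return False                      # only the associated IRAP may be referenced
            drap_poc, irap_poc = pic["poc"], last_irap_poc
            # Pictures following the DRAP in decoding order and in output order
            # (output order approximated here by POC) must not reference anything
            # preceding the DRAP, except the associated IRAP picture.
            for later in pictures[idx + 1:]:
                if later["poc"] > drap_poc:
                    if any(r < drap_poc and r != irap_poc for r in later["refs"]):
                        return False
    return True
```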
  • each subpicture sequence may be encapsulated as an ISO base media file format track (e.g. a sub-picture track) and may be made available as a Representation in DASH.
  • a single (Sub)segment may comprise a sequence of pictures from an IDR or DRAP picture (inclusive) to the next IDR or DRAP picture (exclusive).
  • a player or alike may for example select 12 subpictures from the high-resolution bitstream and the complementary 12 subpictures may be selected from the low-resolution bitstream.
  • a hemi-sphere (180°x180°) of the streamed content originates from the high resolution.
  • the subpictures of the normal bitstreams may be streamed (the left half of Figure 19 shows an example). If the viewing orientation changes, the subpictures that need to be updated may be selected from the "switching" bitstreams until the next IRAP picture of the respective "normal" bitstreams.
  • the right half of Figure 19 presents an example of which subpictures are updated and streamed from the "switching" bitstreams after a viewing orientation change.
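As a rough illustration of this kind of viewport-dependent selection (not part of the described embodiments), the sketch below decides, for each subpicture location, whether to stream it from the high-resolution or the low-resolution bitstream based on how well its centre direction aligns with the viewing orientation; the per-subpicture direction vectors are assumed to be available from the cubemap layout.

```python
import numpy as np

def select_high_res(sub_dirs, view_dir, n_high=12):
    """Pick the n_high subpicture locations whose centre directions are best
    aligned with the viewing orientation; the rest are taken at low resolution.

    sub_dirs: (N, 3) array of unit vectors, one per subpicture location.
    view_dir: (3,) unit vector of the current viewing orientation.
    """
    scores = np.asarray(sub_dirs, dtype=float) @ np.asarray(view_dir, dtype=float)
    order = np.argsort(-scores)                  # most aligned first
    return order[:n_high], order[n_high:]        # (high-res ids, low-res ids)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(24, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    hi, lo = select_high_res(dirs, view_dir=[1.0, 0.0, 0.0])
    print("stream in high resolution:", sorted(hi.tolist()))
    print("stream in low resolution:", sorted(lo.tolist()))
```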
  • Figure 21 presents how the received subpictures of a single time instance are merged into a coded picture of 3840x2304 luma samples, which conforms to "4K" decoding capability (e.g. like HEVC Level 5.1).
  • a coded picture comprises side-by-side a 4x3 grid of subpictures originating from the high-resolution version and a 2x6 grid of subpictures originating from a low-resolution version.
  • Subpictures are classified into two types (type1 or type2), which are explained below. In Figure 22, the prediction structure is presented only for one of the bitstreams; it is the same for all the other bitstreams.
  • for each subpicture, the first frame of each GOP is downsampled and duplicated at the beginning of the GOP.
  • these shared coded subpictures (SCPs) are denoted e.g. a'0, d'0, a'8, d'8, a'16, d'16.
  • the frames between two SCPs (indicated by "GOP") are coded using only the preceding SCP.
  • SCPs are always transmitted to the receiver, so the IDR period may be set to a large value, e.g. 10 seconds.
  • SCPs provide switching capability between different resolutions (i.e. HR and LR) at an interval suitable for responding to viewing orientation changes (e.g. 8 pictures).
  • in the HR bitstream, high-resolution subpictures may be coded from a low-resolution SCP using the reference picture resampling (RPR) method in VVC.
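A minimal sketch of how such a shared coded subpicture could be derived is given below; the block-average downsampling filter and the list-based GOP representation are assumptions of this sketch only, not part of the described embodiments.

```python
import numpy as np

def make_scp(plane, factor=2):
    """Derive a low-resolution shared coded subpicture (SCP) from the first
    picture of a GOP by block averaging (any downsampling filter could be
    used instead; 'plane' is a single 2D sample plane)."""
    h, w = plane.shape
    h, w = h - h % factor, w - w % factor
    blocks = plane[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def gop_with_scp(gop_planes):
    """Duplicate the downsampled first picture at the beginning of the GOP,
    mirroring the structure described above (e.g. a'0 preceding a0..a7)."""
    return [make_scp(gop_planes[0])] + list(gop_planes)
```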
  • the received subpictures of a single time instance are merged into a coded picture of 3840x2304 luma samples similar to the layout shown in Figure 21. However, to explain the layout of the merged bitstream, it is assumed that the whole 360-degree content is divided into four subpictures, A, B, C, and D. Given the condition that the subpicture layout and picture size shall remain unchanged in the whole bitstream in VVC, the example layout of the merged stream is presented in Figure 23. In this example layout, half of the subpictures are transmitted in high resolution, and half of them are transmitted in low resolution. The switched subpictures are highlighted in yellow to demonstrate a switching example. The location of subpictures in an SCP is always fixed, to enable inter prediction.
  • the location of the subpicture in other pictures may change based on the viewport orientation.
  • half of the low-resolution subpictures (e.g., a'0 and b'0) cover only part of their subpicture area, while the rest of the subpicture area (i.e. the grey area) is a dummy area.
  • The scaling window (to be used by RPR in VVC) is set to the actual content. This means that the scaling window (shown as a dashed red rectangle) covers only the actual content (not the dummy area) in the case of a'0 and b'0, for example. For other cases, the scaling window (which has not been shown) covers the whole subpicture area.
  • Dummy area may be coded with the lower bitrate.
  • the dummy area may be coded in intra horizontal and vertical modes to realize the padding of pixels at picture boundaries. To achieve this, the blocks on the right/bottom/right-bottom side of the actual content are coded in horizontal/vertical/DC intra mode, respectively.
  • the subpicture that is partially covered by actual content may be divided into different tiles in a way that the whole actual content corresponds to one tile, or the content boundaries (on the left and bottom side) match tile boundaries. This may help coding the actual content more efficiently.
  • the first subpicture after an SCP (e.g. c0, d0, a8, d8, b16, d16) may be coded identically to the SCP picture if they have the same resolution. This may be realized by using skip coding for all the blocks of that subpicture.
  • Type1 includes subpictures (e.g. subpictures A and B) whose low-resolution SCPs (e.g. a' and b') are packed into a larger SCP.
  • Type2 includes subpictures (e.g. subpictures C and D) whose low-resolution SCPs (e.g. c' and d') are packed into an SCP of the same size.
  • some of the subpictures may have low-resolution SCP and the other subpicture may have normal (i.e. high) resolution SCP (e.g. C and D).
  • sub-pictures may be encoded onto and/or decoded from more than one layer of scalable video coding, and inter-layer prediction may be performed from a reconstructed sub-picture of a reference layer to a sub-picture of an enhancement layer. Further embodiments related to coding and/or decoding sub-pictures in more than one layer are described in the next paragraphs.
  • a spatial correspondence between a first sub-picture in a reference layer and a second sub-picture of an enhancement layer is concluded e.g. by a decoder based on the sub-picture sequence identifiers of the first and second sub-pictures being the same.
  • sub-picture-wise ARC is performed to realize inter-layer prediction as a part of encoding and/or decoding.
  • a RWMR 360° method is realized as follows.
  • cubemap projected content is encoded in the independent layer (layer 0) having 768x768 resolution per cube face (but likewise any other resolution could be used).
  • a 6x4 subpicture grid is used, and subpicture boundaries are treated like picture boundaries, but likewise any other subpicture layout could be used.
  • An enhancement layer (e.g. layer 2) predicted from the independent layer is encoded at high resolution.
  • the high-resolution enhancement layer (layer 2) has 1536x1536 resolution per cube face (but likewise any other resolution could be used).
  • Subpicture-wise inter-layer prediction is applied between the subpictures having spatial correspondence, e.g. between subpictures having the same sub-picture sequence identifier value.
  • the independent layer may have a lower picture frequency than the enhancement layer and/or only some pictures of the independent layer may be used as reference for inter-layer prediction.
  • horizontal arrows (within a layer) indicate temporal inter prediction (within the same layer) and arrows from one layer to another indicate subpicture-wise inter-layer prediction.
  • Zero or more other enhancement layers predicted from the independent layer may be encoded.
  • an enhancement layer (layer 1) having the same resolution as the independent layer is encoded. Pictures in layer 1 may but need not have improved picture quality compared to respective pictures in the independent layer.
  • "GOP" in Figure 26 indicates a sequence of one or more pictures with any inter prediction hierarchy. Pictures within a "GOP” do not use inter-layer prediction.
  • a client receives the independent layer. Additionally, the client selects a number of subpictures from the enhancement layer(s) and arranges the selected subpictures into coded pictures. The client then decodes the generated bitstream. As part of the decoding, the client applies subpicture-wise inter-layer prediction, wherein a spatial correspondence between subpictures in different layers is concluded (e.g. based on the same sub-picture sequence identifier values). As part of the subpicture-wise inter-layer prediction, subpicture-wise ARC is applied if the subpictures in the current and reference layers are of different width and/or height.
  • Figure 27 presents an example embodiment continuing the example illustrated in Figure 26.
  • the client first selects 12 subpictures from layer 2 and the remaining 12 subpictures from layer 1, as presented in the left side of Figure 19. The same selection of subpictures is followed in the layer-1 and layer-2 GOPs following the initial picture.
  • the client makes a new selection of 12 subpictures from layer 2 and the remaining 12 subpictures from layer 1, as presented in the right side of Figure 19, and that selection of subpictures is then followed in the subsequent layer-1 and layer-2 GOPs. Since subpicture-wise inter-layer prediction is applied, the resulting multi-layer bitstream is valid and can be decoded.
  • random access point pictures may be encoded at the segment boundaries.
  • random-access pictures starting a so-called closed group of pictures (GOP) prediction structure have been used at segment boundaries of DASH representations. It has been found that open-GOP random-access pictures improve rate-distortion performance compared to closed-GOP random-access pictures. Moreover, open-GOP random-access pictures have been found to reduce observable picture quality fluctuation when compared to closed-GOP random-access pictures.
  • an open-GOP random-access picture may be associated with RASL (random access skipped leading) pictures, which may reference, in decoding order, pictures preceding the open-GOP random-access picture.
  • Seamless representation switching may be enabled when representations use open GOP structures and share the same resolution and other characteristics, i.e. when a decoded picture of the source representation can be used as such as a reference picture for predicting pictures of a target representation.
  • representations may not share the same characteristics, e.g., they may be of different spatial resolution, wherein seamless representation switching may need some further considerations.
  • an encoder indicates in or along the bitstream that reference sub-picture manipulation is applied for those reference sub-pictures of leading sub-pictures or alike that precede, in decoding order, the open-GOP random-access sub-picture associated with the leading sub-pictures.
  • a decoder decodes from or along the bitstream or infers that reference sub-picture manipulation is applied for those reference sub-pictures of leading sub-pictures or alike that precede, in decoding order, the open-GOP random-access sub-picture associated with the leading sub-pictures.
  • a decoder may infer reference sub-picture manipulation, e.g. as described in the following.
  • the reference sub-picture manipulation may be indicated (by an encoder), decoded (by a decoder), or inferred (by an encoder and/or a decoder) to be resampling to match the resolution of the reference sub-pictures to that of the leading sub-pictures using the reference sub-pictures as reference for prediction.
  • multiple versions of the content can be coded at different random-access picture intervals (or SAP intervals).
  • The use of open GOP prediction structures in Representations of RWMR 360° streaming is desirable to improve rate-distortion performance and to avoid visible picture quality pumping caused by closed GOP prediction structures.
  • Adaptive resolution change may also be used when there are multiple sub-pictures per access unit.
  • cubemap projection may be used, and each cube face may be coded as one or more sub-pictures.
  • the sub-pictures that cover the viewport may be streamed and decoded at a higher resolution than the other sub-pictures.
  • switching from one resolution to another may be performed as described above.
  • Adaptive resolution change and/or stream switching at open GOP random-access pictures may also be used when there are multiple sub-pictures per access unit.
  • multiple versions of sub-picture sequences for each sub-picture location have been encoded. For example, a separate version is coded for each combination among two resolutions and two random access intervals (here referred to as "short" and "long") for each sub-picture location.
  • An open GOP prediction structure has been used in at least one of the sub-picture sequences.
  • Sub-picture sequences have been encapsulated into sub-picture tracks and made available as sub-picture Representations in DASH. At least some of the (Sub)segments formed from the coded sub-picture sequences start with an open GOP prediction structure.
  • a client selects for a first range of (Sub)segments a first set of sub-picture locations to be received at a first resolution and a second set of sub-picture locations to be received at a second resolution.
  • a viewing orientation change is handled by the client by selecting for a second range of (Sub)segments a third set of sub-picture locations to be received at a first resolution and a fourth set of sub-picture locations to be received at a second resolution.
  • the first and third sets are not identical, and the intersection of the first and third sets is non-empty.
  • the second and fourth sets are not identical, and the intersection of the second and fourth sets is non-empty.
  • the client requests (Sub)segments of the short-random-access sub-picture Representations for sub-picture locations for which the resolution needs to change (i.e. that are within the third set but outside the intersection of the first and third sets, or within the fourth set but outside of the intersection of the second and fourth sets).
  • the reference sub-picture(s) for RASL sub-picture(s) of (Sub)segments of a changed resolution and starting with an open-GOP random-access picture are processed by reference sub-picture manipulation as described in other embodiments.
  • the reference sub-picture(s) may be resampled to the resolution of the RASL sub-picture(s).
  • cubemap projection may be used, and each cube face may be coded as one or more sub-pictures.
  • the sub-pictures that cover the viewport may be streamed and decoded at a higher resolution than the other sub-pictures.
  • when a viewing orientation changes in a manner that new sub-pictures would need to be streamed at a higher resolution while they were earlier streamed at a lower resolution, or vice versa, switching from one resolution to another may be performed as described above.
  • reference sub-picture manipulation happens in-place.
  • the manipulated reference sub-picture modifies, overwrites or replaces the reference sub-picture. No other codec or bitstream changes beyond indicating reference sub-picture manipulation might be needed.
  • An encoder and/or a decoder may conclude in-place manipulation taking place through, but not limited to, one or more of the following means:
  • - In-place manipulation may be pre-defined, e.g. in a coding standard, to apply always when a manipulated reference sub-picture is generated.
  • - In-place manipulation may be specified, e.g. in a coding standard, to apply for a pre-defined subset of manipulation processes.
  • - An encoder indicates in or along the bitstream, e.g. in a sequence parameter set, and/or a decoder decodes from or along the bitstream that in-place manipulation takes place.
  • in-place manipulation may be understood to comprise the following:
  • reference sub-picture manipulation happens as a part of motion compensation process.
  • the manipulated reference sub-picture is not stored in the DPB but rather a prediction block is formed as a part of the motion compensation process by manipulating the reference sub-picture.
  • the manipulation may for example be upsampling or downsampling.
  • the manipulation may be performed for example by adjusting the step size of the interpolation filter used in the motion compensation process in a pixel-wise manner when accessing the reference sub-picture. It needs to be understood that the embodiments described in relation to storing a manipulated reference sub-picture into the DPB can likewise be applied to block-level reference sub-picture manipulation.
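As a rough, non-normative illustration of such block-level manipulation, the sketch below forms a prediction block directly from a lower- or higher-resolution reference plane by scaling the sampling positions (i.e. adjusting the interpolation step size) with bilinear interpolation; the function and parameter names are made up for this sketch, a zero motion vector is assumed, and phase and boundary handling are simplified compared to a real codec.

```python
import numpy as np

def predict_block(ref, x0, y0, bw, bh, scale_x, scale_y):
    """Form a bw x bh prediction block from a reference plane of a different
    resolution by stepping through it with scaled step sizes and bilinear
    interpolation, instead of storing a resampled picture in the DPB."""
    h, w = ref.shape
    xs = (x0 + np.arange(bw)) * scale_x          # scaled horizontal positions
    ys = (y0 + np.arange(bh)) * scale_y          # scaled vertical positions
    xi = np.clip(np.floor(xs).astype(int), 0, w - 2)
    yi = np.clip(np.floor(ys).astype(int), 0, h - 2)
    fx = (xs - xi)[None, :]                      # horizontal interpolation phase
    fy = (ys - yi)[:, None]                      # vertical interpolation phase
    tl = ref[yi[:, None], xi[None, :]]
    tr = ref[yi[:, None], xi[None, :] + 1]
    bl = ref[yi[:, None] + 1, xi[None, :]]
    br = ref[yi[:, None] + 1, xi[None, :] + 1]
    return (tl * (1 - fx) + tr * fx) * (1 - fy) + (bl * (1 - fx) + br * fx) * fy
```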
  • Implicit resampling
  • the identification of the reference sub-picture manipulation process identifies resampling.
  • the identification may, for example, be a sequence-level indication that reference sub-pictures may need to be resampled.
  • the identification is a profile indicator or alike, whereby the feature of resampling of reference sub-pictures is included.
  • the set of decoded sub-pictures to be manipulated may be inferred as follows: if a reference sub-picture has a different resolution than the current sub-picture, it is resampled to the resolution of the current sub-picture. In an embodiment, the resampling takes place only if the reference sub-picture is among active pictures in any reference picture list.
  • reference sub-picture manipulation involves implicit upsampling or downsampling.
  • the horizontal and vertical scaling factors for upsampling or downsampling are derived from the width and height ratios of the respective sub-pictures (with the same sub-picture sequence identifier) in the current picture and in the reference picture.
  • reference sub-picture manipulation for resampling is realized as a block-level reference sub-picture manipulation described above.
  • the sample locations of the current coding subblock are taken relative to the top-left corner of the sub-picture for the scaling by the scaling factors.
  • the top-left position of the subpicture within the reference picture is added to the scaled sub-picture-relative sample location in order to obtain the sample locations within the reference picture.
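Purely as an illustration of the two items above, a sketch of the coordinate mapping might look as follows; the dictionary-based sub-picture description is a hypothetical convenience, not a defined data structure.

```python
def ref_sample_location(x, y, cur_sub, ref_sub):
    """Map a sample location (x, y), given relative to the top-left corner of
    the current sub-picture, to a location inside the reference picture.
    cur_sub / ref_sub are hypothetical dicts holding each sub-picture's width,
    height and top-left position ('left', 'top') within its picture."""
    scale_x = ref_sub["width"] / cur_sub["width"]    # horizontal scaling factor
    scale_y = ref_sub["height"] / cur_sub["height"]  # vertical scaling factor
    # Scale the sub-picture-relative location, then add the top-left position
    # of the sub-picture within the reference picture.
    return ref_sub["left"] + x * scale_x, ref_sub["top"] + y * scale_y
```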
  • resampling may be accompanied or replaced by any other operations for generating manipulated reference sub-picture, as described above.
  • the identification of the reference sub-picture manipulation process identifies which operations are to be carried out when the reference sub-picture has a different resolution or format (e.g. chroma format or bit depth) than the current sub-picture.
  • an encoder encodes in the bitstream and/or a decoder decodes from the bitstream a control operation to generate a manipulated reference sub-picture.
  • the control operation is included in the coded video data of the sub-picture that is used as a source for generating the manipulated reference sub-picture.
  • the control operation is included in the coded video data of the sub-picture that is using the manipulated reference sub-picture as a reference for prediction.
  • the control operation is included in the coded video data of any sub-picture at or subsequent to (in decoding order) the sub-picture used as a source for generating the manipulated reference sub-picture.
  • the manipulated reference sub-picture is paired with the corresponding "source” reference sub-picture in its marking as “used for reference” or "unused for reference” (e.g. in a reference picture set). I.e., when a "source” reference sub-picture is marked as “unused for reference”, the corresponding manipulated reference sub-picture is also marked as "unused for reference”.
  • an encoder encodes in the bitstream and/or a decoder decodes from the bitstream a control operation to mark a manipulated reference sub-picture as "used for reference” or "unused for reference".
  • the control operation may, for example, be a specific reference picture set for manipulated reference sub-pictures only.
  • a reference picture list is initialized to contain manipulated reference sub-pictures that are marked as "used for reference”. In an embodiment, a reference picture list is initialized to contain manipulated reference sub-pictures that are indicated to be active references for the current sub-picture.
  • the decoding process provides an interface for inputting an "external reference sub-picture".
  • the reference sub-picture manipulation process may provide the manipulated reference sub-picture to the decoding process through the interface.
  • the external reference sub-picture may have pre-defined properties and/or may be inferred and/or properties may be provided through the interface. These properties may include but are not limited to one or more of the following:
  • Picture order count (POC)
  • POC least significant bits (LSBs)
  • POC most significant bits (MSBs)
  • an external reference sub-picture is treated as a long-term reference picture and/or has picture order count equal to 0.
  • an encoder encodes into or along the bitstream and/or a decoder decodes from or along the bitstream a control signal if an external reference sub-picture is to be obtained for decoding.
  • the control signal may be included for example in a sequence parameter set, a picture parameter set, a header parameter set, a picture header, a sub-picture delimiter or header, and/or an image segment header (e.g. a tile group header).
  • the control signal may cause the decoding to obtain an external reference sub-picture when the parameter set is activated.
  • the control signal may be specific to a sub-picture sequence (and may be
  • the control signal may cause the decoding to obtain an external reference sub-picture e.g. when the header is decoded or at the start of decoding the spatiotemporal unit wherein the header is applied. For example, if the control signal is included in an image segment header (e.g. a tile group header), fetching of the external reference sub-picture may be carried out only for the first image segment header of a sub-picture.
  • the external reference sub-picture may only be given for the first sub-picture of a coded sub-picture sequence that is independent of other coded sub-picture sequences.
  • each manipulated reference sub-picture may start a coded sub-picture sequence. If only one sub-picture per coded picture, access unit or time instance is in use, a manipulated reference sub-picture may start a coded video sequence.
  • the external reference sub-picture is inferred to have properties that are the same as in the sub-pictures used as source for generating the external reference sub-picture.
  • the marking of external reference sub-pictures is controlled synchronously with the sub-picture(s) used as input for the reference sub-picture manipulation.
  • external reference sub-pictures are included in the initial reference picture lists like other reference sub-pictures.
  • External reference sub-pictures may be accompanied by an identifier (e.g. ExtRefId) that is passed through the interface or inferred.
  • Memory management of the external reference sub-pictures (e.g. which ExtRefId indices are kept in the decoded picture buffer)
  • an encoder encodes into a bitstream and/or a decoder decodes from a bitstream an end of sequence (EOS) syntax structure and/or a start of sequence (SOS) syntax structure comprising but not limited to one or more of the following:
  • Identifier of the sub-picture sequence which the EOS and/or SOS syntax structure concerns.
  • Identifiers of parameter set(s) that are activated by the SOS syntax structure.
  • a reference sub-picture manipulation operation e.g. by implicit resampling
  • obtaining an external reference sub-picture.
  • a SOS syntax structure, when present, may imply that an external reference sub-picture is to be obtained.
  • an end of sequence (EOS) syntax structure and/or a start of sequence (SOS) syntax structure is included in a NAL unit whose NAL unit type indicates the end of sequence and/or the start of sequence, respectively.
  • an encoder encodes into a bitstream and/or a decoder decodes from a bitstream a start-of-bitstream indication, a start-of-coded- video-sequence indication, and/or a start-of-sub-picture-sequence indication.
  • the indication(s) may be included in and/or decoded from e.g. a parameter set syntax structure, a picture header, and/or a sub-picture delimiter. When present in a parameter set, the indication(s) may apply to the picture or sub-picture that activates the parameter set.
  • the indication(s) may apply in the bitstream order, i.e. indicate that the syntax structure or the access unit or coded picture containing the syntax structure starts a bitstream, a coded video sequence, or a sub-picture sequence.
  • bitstream or CVS properties are indicated in two levels, namely per sub-picture sequence excluding the generation of the manipulated reference sub-pictures and per sub-picture sequence including the generation of the manipulated reference sub-pictures.
  • the properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), constraints that have been applied in encoding.
  • Properties per sub-picture sequence excluding the generation of the manipulated reference sub-pictures may be indicated in a syntax structure that applies to the core decoding process, such as a sequence parameter set.
  • Properties per sub-picture sequence including the generation of the manipulated reference sub-pictures may be indicated in a syntax structure that applies to the generation of the manipulated reference sub-pictures instead of or in addition to the core decoding process.
  • Reference sub-picture manipulation may happen outside the core decoding specification and may be specified e.g. in an application-specific standard or annex.
  • the encoder indicates, and/or the decoder decodes a bitstream property data structure including a first-shell profile indicator and a second-shell profile indicator, wherein the first-shell profile indicator indicates properties excluding reference sub-picture manipulation and the second-shell profile indicator indicates properties including reference sub-picture manipulation.
  • bitstream or CVS properties are indicated collectively to all sub-picture sequences (i.e. all coded video data).
  • the properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), constraints that have been applied in encoding.
  • a separate set of properties may be indicated and/or decoded for sub-picture sequences excluding the generation of the manipulated reference sub-pictures; and for sub-picture sequences including the generation of the manipulated reference sub-pictures.
  • a manipulated reference sub-picture is generated by unfolding entire or partial projection surfaces onto a 2D plane.
  • the unfolding is performed through knowledge on the geometrical relations of the projection surfaces and knowledge on how the projection surfaces are mapped onto sub-pictures.
  • sub-picture packing is used for realizing the unfolding operation.
  • An example embodiment is described in relation to cubemap projection, but it needs to be understood that embodiments can be realized similarly for other projection formats.
  • cube faces that are adjacent to the "main" cube face (subject to being predicted) are unfolded onto a 2D plane next to the "main" cube face.
  • Figures 16a-16d provide an example.
  • the hatched cube face 261 is encoded or decoded as a sub-picture within the current access unit.
  • the hatched cube face 261 corresponds to the cube face marked by "Face" on the illustration of the cubemap 260.
  • the picture composition data may be authored by an encoder and/or decoded by a decoder to generate an output arranged as in Figure 16b from the reconstructed sub-pictures corresponding to cube faces.
  • the cube faces with vertical stripes in the cube may be arranged as the cube faces D and B in the 2D cubemap. It is remarked that the viewpoint for observing the cube may be in the middle of the cube and hence a cubemap may represent the inner surface of the cube.
  • the top and bottom cube faces may be arranged as cube faces C and A in the 2D cubemap.
  • the reconstructed sub-pictures of an access unit used as a reference for prediction are used in generating a manipulated reference sub-picture for the hatched cube face 261 of the current access unit by unfolding the cube faces of the cube as illustrated in Figure 16c and described as follows:
  • the unfolded cube faces are adjacent to the hatched cube face, i.e. share a common edge with the hatched cube face.
  • Subsequent to unfolding, the picture area of the manipulated reference sub-picture may be cropped as illustrated in Figure 16d.
  • an encoder indicates information indicative of the cropping area in or along the bitstream and a decoder decodes information indicative of the cropping area from or along the bitstream.
  • an encoder and/or a decoder infers the cropping area, e.g. to be proportional to the maximum size of a prediction unit for inter prediction, which may be additionally appended proportionally to the maximum number of samples needed for interpolating samples at non-integer sample locations.
  • the corners of the unfolded area 263 may be handled e.g. in one of the following ways: the corners may be left unoccupied or the corners may be padded e.g. with the adjacent corner sample of the cross-hatched cube face.
  • the corners may be interpolated from the unfolded cube faces 262.
  • interpolation may be performed but is not limited to either of the following:
  • a sample row and a sample column from the adjacent unfolded cube faces may be rescaled to cover the corner area and blended (e.g. averaged).
  • the interpolation can be done as a weighted average proportional to the inverse of the distance to the border sample.
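A minimal sketch of the inverse-distance corner interpolation mentioned above is given below; the assumed corner orientation (column samples on its left border, row samples on its top border) and the use of floating-point samples are assumptions of this sketch.

```python
import numpy as np

def fill_corner(border_col, border_row):
    """Fill a corner area of the manipulated reference sub-picture from the
    nearest sample column of the horizontally adjacent unfolded face and the
    nearest sample row of the vertically adjacent unfolded face, weighting
    each contribution by the inverse of the distance to its border."""
    h, w = len(border_col), len(border_row)
    corner = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            w_col = 1.0 / (j + 1)   # closer to the left border -> larger weight
            w_row = 1.0 / (i + 1)   # closer to the top border  -> larger weight
            corner[i, j] = (w_col * border_col[i] + w_row * border_row[j]) / (w_col + w_row)
    return corner
```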
  • Spatial relationship information may be used to indicate that the hatched cube face 261 in the current access unit corresponds to the central area of the manipulated reference sub-picture.
  • a manipulated reference sub-picture is generated in two steps. First, entire or partial projection surfaces are unfolded onto a 2D plane as described in the previous embodiment. Second, since the unfolding may cause unoccupied sample locations in the manipulated reference sub-picture, the sample lines or columns of the unfolded projection surface, such as an entire or partial unfolded cube face, may be extended by resampling to cover up to 45 degrees of the corner, as shown in Figure 16e for just two of the sides.
  • the projection structure (such as the sphere) may be rotated prior to deriving the 2D picture.
  • One reason for such rotation may be to adjust the 2D version of the content to suit coding tools better for improved rate-distortion performance.
  • only certain intra prediction directions may be available, and hence rotation could be applied to match the 2D version of the content with intra prediction directions. This may be done for example by computing localized gradients and statistically improving the match between the gradients and intra prediction directions by rotating the projection structure.
  • a reference sub-picture is associated with a first rotation and a current sub-picture is associated with a second rotation.
  • a manipulated reference sub-picture is generated wherein essentially the second rotation is used.
  • the reference sub-picture manipulation may for example comprise the following steps: First, the reference sub-picture may be projected onto a projection structure, such as a sphere, using the first rotation. The image data on the projection structure may be projected onto a manipulated reference sub-picture using the second rotation. For example, the second rotation may be applied to rotate the sphere image and the sphere image may then be projected onto a projection structure (e.g. a cube or a cylinder) which is then unfolded to form a 2D sub-picture.
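As a simplified illustration of the rotation step for an equirectangular reference sub-picture, the sketch below resamples the reference under a relative rotation. The relative rotation would be derived from the first and second rotations (e.g. as R_reference @ R_current.T under one common convention); that convention, the nearest-neighbour sampling and the ERP-only case are assumptions of this sketch.

```python
import numpy as np

def erp_directions(h, w):
    """Unit direction vector for the centre of each ERP sample position."""
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

def rotate_erp(ref, r_rel):
    """Resample an ERP reference plane under a relative rotation r_rel (3x3
    matrix mapping current-picture directions to reference directions),
    using nearest-neighbour sampling."""
    h, w = ref.shape
    d = erp_directions(h, w) @ r_rel.T
    lon = np.arctan2(d[..., 1], d[..., 0])
    lat = np.arcsin(np.clip(d[..., 2], -1.0, 1.0))
    x = ((lon + np.pi) / (2 * np.pi) * w).astype(int) % w
    y = np.clip(((np.pi / 2 - lat) / np.pi * h).astype(int), 0, h - 1)
    return ref[y, x]
```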
  • point cloud sequences may be coded as video when point clouds are projected onto one or more projection surfaces.
  • An encoder could adapt properties of the projection surfaces to the content in a time-varying manner. Properties of the projection surfaces may comprise but are not limited to one or more of the following: 3D location, 3D orientation, shape, size, projection format (e.g. orthographic projection or a geometric projection with a projection center), and sampling resolution.
  • reference sub-picture manipulation comprises inter-projection prediction.
  • One or more patches of one or more sub-pictures from one projection may be used as a source for generating a manipulated reference sub-picture comprising one or more reference patches.
  • the manipulated reference sub-picture may essentially represent the properties of the projection surface(s) of a current sub-picture being encoded or decoded.
  • a point cloud may be generated from the reconstructed texture and geometry sub-pictures, using the properties of the projection surface(s) applying to the reconstructed texture and geometry sub-pictures.
  • the point cloud may be projected onto a second set of projection surface(s) that may have the same or similar properties as the projection surfaces applying to the current texture sub-picture and/or the current geometry sub-picture being encoded or decoded and the respective texture and geometry prediction pictures are formed from this projection.
  • reference sub-picture manipulation is regarded as a part of decoded picture buffering rather than a process separate from the decoded picture buffering.
  • reference sub-picture manipulation accesses sub-pictures from a first bitstream and a second bitstream to generate manipulated reference sub-picture(s).
  • the first bitstream may represent texture video of a first viewpoint
  • the second bitstream may represent depth or geometry video for the first viewpoint
  • the manipulated reference sub-picture may represent texture video for a second viewpoint.
  • the above described embodiments provide a mechanism and an architecture to use core video (de)coding process and bitstream format in a versatile manner for many video-based purposes, including video-based point cloud coding, patch-based volumetric video coding, and 360-degree video coding with multiple projection surfaces. Compression efficiency may be improved compared to plain 2D video coding by enabling sophisticated application-tailored prediction.
  • the above described embodiments are suitable for interfacing a single-layer 2D video codec with additional functionality.
  • FIG. 9 is a flowchart illustrating a method according to an embodiment.
  • a method comprises obtaining coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence (block 190 in Figure 9). It is then determined 192 whether the sub-picture would be used as a source for a manipulated reference sub-picture. If the determination 192 indicates that the sub-picture would be used as a source for a manipulated reference sub-picture, that sub-picture is used as a basis for a manipulated reference sub-picture. In other words, the manipulated reference sub-picture is generated 196 from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence.
  • the manipulation may comprise, for example, rotating the sub-picture, mirroring the sub-picture, resampling the sub-picture, positioning within the area of the manipulated reference sub-picture, overlaying over or blending with the samples already present within the indicated area of the manipulated reference sub-picture, or some other form of manipulation. It may also be possible to use more than one of the above-mentioned and/or other manipulation principles to generate the manipulated reference sub-picture.
  • An apparatus comprises at least one processor and at least one memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
  • An example of an apparatus, e.g. an apparatus for encoding and/or decoding, is illustrated in Figure 18.
  • the generalized structure of the apparatus will be explained in accordance with the functional blocks of the system. Several functionalities can be carried out with a single physical device, e.g. all calculation procedures can be performed in a single processor if desired.
  • a data processing system of an apparatus according to an example of Figure 18 comprises a main processing unit 100, a memory 102, a storage device 104, an input device 106, an output device 108, and a graphics subsystem 110, which are all connected to each other via a data bus 112.
  • the main processing unit 100 may be a conventional processing unit arranged to process data within the data processing system.
  • the main processing unit 100 may comprise or be implemented as one or more processors or processor circuitry.
  • the memory 102, the storage device 104, the input device 106, and the output device 108 may include conventional components as recognized by those skilled in the art.
  • the memory 102 and storage device 104 store data in the data processing system 100.
  • Computer program code resides in the memory 102 for implementing, for example, the methods according to embodiments.
  • the input device 106 inputs data into the system while the output device 108 receives data from the data processing system and forwards the data, for example to a display.
  • the data bus 112 is a conventional data bus and while shown as a single line it may be any combination of the following: a processor bus, a PCI bus, a graphical bus, an ISA bus. Accordingly, a skilled person readily recognizes that the apparatus may be any data processing device, such as a computer device, a personal computer, a server computer, a mobile phone, a smart phone or an Internet access device, for example Internet tablet computer.
  • a device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment.
  • a network device like a server may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of an embodiment.
  • the computer program code comprises one or more operational characteristics.
  • Said operational characteristics are being defined through configuration by said computer based on the type of said processor, wherein a system is connectable to said processor by a bus, wherein a programmable operational characteristic of the system comprises obtaining coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence; determining whether to use the sub-picture as a source for a manipulated reference sub-picture; and, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture, generating the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence.


Abstract

There is disclosed a method, an apparatus and a computer program product for video encoding and decoding. In accordance with an embodiment the method comprises obtaining coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence, and determining whether to use the sub-picture as a source for a manipulated reference sub-picture. If the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture, the manipulated reference sub-picture is generated from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence.

Description

AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VIDEO CODING
AND DECODING
TECHNICAL FIELD
[0001 ] The present invention relates to an apparatus, a method and a computer program for video coding and decoding.
BACKGROUND
[0002] This section is intended to provide a background or context to the invention that is recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
[0003] A video coding system may comprise an encoder that transforms an input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form. The encoder may discard some information in the original video sequence in order to represent the video in a more compact form, for example, to enable the storage/transmission of the video information at a lower bitrate than otherwise might be needed.
[0004] Various technologies for providing three-dimensional (3D) video content are currently investigated and developed. Especially, intense studies have been focused on various multiview applications wherein a viewer is able to see only one pair of stereo video from a specific viewpoint and another pair of stereo video from a different viewpoint. One of the most feasible approaches for such multiview applications has turned out to be such wherein only a limited number of input views, e.g. a mono or a stereo video plus some supplementary data, is provided to a decoder side and all required views are then rendered (i.e. synthesized) locally by the decoder to be displayed on a display.
[0005] In the encoding of 3D video content, video compression systems, such as Advanced Video Coding standard (H.264/AVC), the Multiview Video Coding (MVC) extension of H.264/AVC or scalable extensions of HEVC (High Efficiency Video Coding) can be used.
[0006] Two-dimensional (2D) video codecs can be used as basis for novel usage scenarios, such as point cloud coding and 360-degree video. The following challenges have been faced. It may be needed to make a trade-off between selecting projection surfaces optimally for a single time instance vs. keeping projection surfaces constant for a time period in order to facilitate inter prediction. Also, motion over a projection surface boundary might not be handled optimally. When projection surfaces are packed onto a 2D picture, techniques like motion-constrained tile sets have to be used to avoid unintentional prediction leaks from one surface to another. In 360-degree video coding, geometry padding has been shown to improve compression but would require changes in the core (de)coding process.
SUMMARY
[0007] Now in order to at least alleviate the above problems, an enhanced encoding method is introduced herein. In some embodiments there is provided a method, apparatus and computer program product for video coding as well as decoding utilizing a reference sub-picture manipulation process.
[0008] In some embodiments, a reference sub-picture manipulation processing block may be considered to be outside of the core coding decoding process or specification.
[0009] In an embodiment, the manipulated reference sub-picture is stored in the decoded picture buffer. Its marking status (e.g. marking as "used for reference" and "unused for reference") can be controlled as described in embodiments further below.
[0010] In an embodiment, the reference sub-picture manipulation provides the manipulated reference sub-picture directly to the decoding process rather than to the decoded sub-picture buffering. In this embodiment, the manipulated reference sub-picture may be temporary in a sense that it may be required only for decoding one coded sub-picture after which it may be discarded.
[0011] An encoder may include in or along the bitstream an identification of the reference sub-picture manipulation process. The encoder may also include in the bitstream information indicative of, or infer, a set of decoded sub-pictures to be manipulated, and/or a set of manipulated reference sub-pictures to be generated.
[0012] The encoder may generate the set of manipulated reference sub-pictures from the set of decoded sub-pictures using the identified reference sub-picture manipulation process; and include at least one of the manipulated reference sub-pictures in a reference picture list for prediction.
[0013] A decoder may decode from or along the bitstream an identification of the reference sub-picture manipulation process. The decoder may also decode from the bitstream information indicative of, or infer: a set of decoded sub-pictures to be manipulated, and/or a set of manipulated reference sub-pictures to be generated.
[0014] The decoder may also generate the set of manipulated reference sub-pictures from the set of decoded sub-pictures using the identified reference sub-picture manipulation process; and include at least one of the manipulated reference sub-pictures in a reference picture list for prediction.
[0015] The identification of the reference sub-picture manipulation process may for example be a uniform resource identifier (URI) or a registered type value.
[0016] One or more reference sub-pictures may be used as a source for generating a manipulated reference sub-picture.
[0017] The generation of the set of manipulated reference sub-pictures may comprise one or more of sub-picture packing, geometry packing, padding, reference patch reprojection, view synthesis, resampling, color gamut conversion, dynamic range conversion, color mapping conversion, bit depth conversion, chroma format conversion, projection conversion and/or frame rate conversion.
[0018] Sub-picture packing of one or more reference sub-pictures or regions therein may comprise but is not limited to one or more of the following (as indicated by the encoder as part of the information):
rotating e.g. by 0, 90, 180, or 270 degrees;
mirroring e.g. horizontally or vertically;
resampling (e.g. rescaling the width and/or height);
positioning within the area of the manipulated reference sub-picture;
overlaying over (i.e. overwriting) or blending with the samples already present within the indicated area of the manipulated reference sub-picture (e.g., occupied by sub-pictures or regions arranged previously onto the manipulated reference sub-picture). The overwriting may be useful e.g. in the case that one or some of the sub-pictures are coded with higher quality.
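As a rough illustration only, the packing operations listed above could be sketched as follows; the function and parameter names, the rotation direction, the nearest-neighbour resampling and the blending convention are assumptions of this sketch (sample arrays are assumed to be floating point), not definitions of the embodiments.

```python
import numpy as np

def pack_region(canvas, region, rotate=0, mirror=None, size=None,
                position=(0, 0), blend=None):
    """Arrange one reference region onto a manipulated reference sub-picture:
    rotate by a multiple of 90 degrees (counter-clockwise here), mirror
    ('h'/'v'), resample to 'size' (nearest neighbour), place at (top, left),
    and either overwrite or blend with the samples already on the canvas."""
    r = np.rot90(region, k=(rotate // 90) % 4)
    if mirror == 'h':
        r = r[:, ::-1]
    elif mirror == 'v':
        r = r[::-1, :]
    if size is not None:
        th, tw = size
        ys = np.arange(th) * r.shape[0] // th
        xs = np.arange(tw) * r.shape[1] // tw
        r = r[ys[:, None], xs[None, :]]
    top, left = position
    target = canvas[top:top + r.shape[0], left:left + r.shape[1]]
    if blend is None:
        target[...] = r                                  # overlay (overwrite)
    else:
        target[...] = blend * r + (1 - blend) * target   # blend with existing samples
    return canvas
```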
[0019] Geometry padding for 360° video may comprise, for example, cube face padding from neighboring cube faces projected onto the same plane as the cube face in the sub-picture.
[0020] A geometry image and/or a texture image may be padded by an image padding element. Padding aims at filling the empty space between patches in order to generate a piecewise smooth image suited for video compression. The image padding element may aim to keep the compression efficiency high as well as to enable estimation of the occupancy map (EOM) with sufficient accuracy compared to the original occupancy map (OOM).
[0021] According to an approach, the following padding strategy may be used:
[0022] Each block of TxT (e.g., 16x16) pixels is processed independently. If the block is empty (i.e., all its pixels belong to an empty space), then the pixels of the block are filled by copying either the last row or column of the previous TxT block in raster order. If the block is full (i.e., no empty pixels), nothing is done. If the block has both empty and filled pixels, then the empty pixels are iteratively filled with the average value of their non-empty neighbors.
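A sketch of this padding strategy, assuming a single sample plane and an occupancy mask, might look as follows; copying from the block to the left (or above) approximates "the previous TxT block in raster order", and blocks with no previously processed neighbour are simply left as zeros in this sketch.

```python
import numpy as np

def pad_image(image, occupied, t=16):
    """TxT block padding as described above: empty blocks copy the last column
    (or row) of the neighbouring previously processed block, full blocks are
    kept, and mixed blocks are filled iteratively from non-empty neighbours."""
    img = image.astype(float).copy()
    occ = occupied.astype(bool).copy()
    h, w = img.shape
    for by in range(0, h, t):
        for bx in range(0, w, t):
            blk = (slice(by, by + t), slice(bx, bx + t))
            if not occ[blk].any():
                # Empty block: copy from the block to the left, else from above.
                if bx >= t:
                    img[blk] = img[by:by + t, bx - 1:bx]
                elif by >= t:
                    img[blk] = img[by - 1:by, bx:bx + t]
                occ[blk] = True
            elif not occ[blk].all():
                # Mixed block: iteratively average the non-empty 4-neighbours.
                sub_img, sub_occ = img[blk], occ[blk]
                while not sub_occ.all():
                    fills = []
                    for y, x in zip(*np.where(~sub_occ)):
                        vals = [sub_img[ny, nx]
                                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                                if 0 <= ny < sub_occ.shape[0]
                                and 0 <= nx < sub_occ.shape[1] and sub_occ[ny, nx]]
                        if vals:
                            fills.append((y, x, sum(vals) / len(vals)))
                    for y, x, v in fills:
                        sub_img[y, x], sub_occ[y, x] = v, True
    return img
```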
[0023] The generated images/layers may be stored as video frames and compressed. For example, the padded geometry image and the padded texture image are provided to a video compression element for compressing the padded geometry image and the padded texture image, from which the compressed geometry and texture images are provided, for example, to a multiplexer which multiplexes the input data to a compressed bitstream(s).
[0024] The compressed geometry and texture images are also provided, for example, to an occupancy map estimator which generates an estimated occupancy map.
[0025] In this step, an algorithm may be used to find the borders of geometry and/or texture images. It is noted that the borders are generally aligned with each other prior to encoding.
However, after encoding, the edges may be slightly misaligned, which can be corrected based on the original occupancy map in the following steps.
[0026] The occupancy map may consist of a binary map that indicates for each cell of the grid whether it belongs to the empty space or to the point cloud. One cell of the 2D grid would produce a pixel during the image generation process.
[0027] In the estimated occupancy generation step, based on the embodiment used in the padding step, different processes between respective padded geometry, Y, U, and/or V components may be considered. Based on such processes, an estimation of edges (i.e. contours defining the occupancy map) will be created. Such estimation may be fine-tuned in the cases where more than one component/image are to be used for estimating the occupancy map.
[0028] An example of an edge detection algorithm is a multiscale edge detection algorithm, which is based on wavelet domain vector hidden Markov tree model. However, some other algorithm may be applied in this context.
[0029 ] In padding the content of the padding area of the manipulated reference sub-picture may be generated from other sub-pictures. For example, in region of interest coding, if a first sub-picture may represent a bigger area than a second sub-picture, the manipulated reference for the second sub-picture may be padded using the content in the first sub-picture.
[0030] In reference patch reprojection reference sub-picture(s) may be interpreted as 3D point cloud patches and the 3D point cloud patches may be re-projected onto a plane suitable for 2D inter prediction.
[0031] For the MPEG standard, there has been developed a test model for point cloud compression. MPEG W17248 discloses a test model for MPEG point cloud coding to provide a standardized way of dynamic point cloud compression. In the MPEG W17248 test model, the 2D-projected 3D volume surfaces are determined in terms of three image data: motion images, texture images and depth/attribute images.
[0032] In a point cloud re-sampling block, the input 3D point cloud frame is resampled on the basis of a reference point cloud frame. A 3D motion compensation block is used during the inter-frame encoding/decoding processes. It computes the difference between the positions of the reference point cloud and its deformed version. The obtained motion field consists of 3D motion vectors {MV_i(dx, dy, dz)}_i associated with the points of the reference frame. The 3D to 2D mapping of the reference frame is used to convert the motion field into a 2D image by storing dx as Y, dy as U and dz as V, where this 2D image may be referred to as a motion image. A scale map providing the scaling factor for each block of the motion image is also encoded.
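Purely as an illustration, the conversion of the motion field into a 2D motion image could be sketched as follows; the mapping structure is hypothetical, and the scale map and 10-bit quantization mentioned elsewhere are omitted.

```python
import numpy as np

def motion_field_to_image(motion_vectors, mapping, height, width):
    """Store per-point 3D motion vectors (dx, dy, dz) as a 2D motion image
    with dx in the Y plane, dy in U and dz in V, using the 3D-to-2D mapping
    of the reference frame (point index -> (row, col))."""
    y = np.zeros((height, width), dtype=float)
    u = np.zeros((height, width), dtype=float)
    v = np.zeros((height, width), dtype=float)
    for point_idx, (row, col) in mapping.items():
        dx, dy, dz = motion_vectors[point_idx]
        y[row, col], u[row, col], v[row, col] = dx, dy, dz
    return y, u, v
```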
[0033] The image generation process exploits the 3D to 2D mapping computed during the packing process to store the geometry/texture/motion of the point cloud as images. These images are stored as video frames and compressed with a video encoder, such as an HEVC encoder. The generated videos may have the following characteristics:
[0034] Geometry: WxH YUV420-8bit,
[0035] Texture: WxH YUV420-8bit,
[0036] Motion: WxH YUV444-10bit.
[0037] View synthesis (a.k.a. depth-image-based rendering) may be performed from sub-pictures representing one or more texture and depth views.
[0038] Depth-image-based rendering (DIBR) or view synthesis refers to generation of a novel view based on one or more existing/received views. Depth images may be used to assist in correct synthesis of the virtual views. Although differing in details, most of the view synthesis algorithms utilize 3D warping based on explicit geometry, i.e. depth images, where typically each texture pixel is associated with a depth pixel indicating the distance or the z-value from the camera to the physical object from which the texture pixel was sampled. One known approach uses a non-Euclidean formulation of the 3D warping, which is efficient under the condition that the camera parameters are unknown or the camera calibration is poor. Another known approach strictly follows the Euclidean formulation, assuming the camera parameters for the acquisition and view interpolation are known. In yet another approach, the target of view synthesis is not to estimate a view as if a camera was used to shoot it but rather to provide a subjectively pleasing representation of the content, which may include non-linear disparity adjustment for different objects.
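A minimal sketch of such 3D warping for a horizontally rectified camera pair is given below; the depth-to-disparity conversion, the camera parameters (focal length, baseline, z_near, z_far) and the z-buffer handling of competing pixels are assumptions of this illustration rather than a description of any particular view synthesis algorithm.

import numpy as np

def synthesize_view(texture, depth, focal, baseline, z_near, z_far):
    """Forward-warp a texture view to a virtual view using its depth map.

    texture: HxWx3 array; depth: HxW array of 8-bit inverse-depth samples.
    """
    h, w = depth.shape
    # Convert quantized depth samples to metric z values (common MVD convention).
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(focal * baseline / z).astype(np.int32)
    virtual = np.zeros_like(texture)
    z_buffer = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]
            if 0 <= xv < w and z[y, x] < z_buffer[y, xv]:
                # Keep the sample closest to the camera when several pixels map
                # to the same location (occlusion handling).
                z_buffer[y, xv] = z[y, x]
                virtual[y, xv] = texture[y, x]
    return virtual  # remaining zero samples are holes to be filled separately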
[0039] Occlusions, pinholes and reconstruction errors are the most common artifacts introduced in the 3D warping process. These artifacts occur more frequently in the object edges, where pixels with different depth levels may be mapped to the same pixel location of the virtual image. When those pixels are averaged to reconstruct the final pixel value for the pixel location in the virtual image, an artifact might be generated, because pixels with different depth levels usually belong to different objects.
[0040] A number of approaches have been proposed for representing depth picture sequences, including the use of auxiliary depth map video streams, multiview video plus depth (MVD) and layered depth video (LDV). The depth map video stream for a single view can be regarded as a regular monochromatic video stream and coded with any video codec. Some characteristics of the depth map stream, such as the minimum and maximum depth in world coordinates, can be indicated in messages formatted according to the MPEG-C Part 3 standard, for example. In the MVD representation, the depth picture sequence for each texture view is coded with any video codec, such as MVC. In the LDV representation, the texture and depth of the central view are coded conventionally, while the texture and depth of the other view are partially represented and cover only the dis-occluded areas required for correct view synthesis of intermediate views.
[0041] The detailed operation of view synthesis algorithms depends on which representation format has been used for texture views and depth picture sequences.
[0042] The resampling may be either upsampling (for switching to a higher resolution) or downsampling (for switching to a lower resolution). The resampling may be used for, but is not limited to, one or more of the following use cases:
Adaptive resolution change, where a picture would typically comprise one sub-picture only. Mixed-resolution multiview video or image coding, where a sub-picture sequence corresponds to a view. Inter-view prediction may be performed by enabling prediction from a first sub-picture (of a first sub-picture sequence) to a second sub-picture (of a second sub-picture sequence), where the first and second sub-pictures may be of the same time instance. In some cases, it may be beneficial to rotate one of the views (e.g. for arranging the sub-pictures side-by-side or top-bottom in the output picture compositing). Hence, resampling may be accompanied by rotation (e.g. by 90, 180, or 270 degrees).
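The following sketch shows how a reconstructed sub-picture could be resampled to a new resolution and rotated in 90-degree steps before being used as a manipulated reference; nearest-neighbour resampling is assumed for brevity, whereas an actual codec would typically use a specified resampling filter.

import numpy as np

def resample_and_rotate(sub_picture, out_width, out_height, rotation=0):
    """Resample a 2D sample array and rotate it by 0, 90, 180 or 270 degrees."""
    in_h, in_w = sub_picture.shape
    # Nearest-neighbour resampling (illustrative only).
    ys = (np.arange(out_height) * in_h) // out_height
    xs = (np.arange(out_width) * in_w) // out_width
    resampled = sub_picture[ys[:, None], xs]
    # Optional rotation accompanying the resampling.
    return np.rot90(resampled, k=(rotation // 90) % 4)

# Example: upsample a 480x270 sub-picture to 960x540 and rotate it by 90 degrees.
reference = np.random.randint(0, 256, (270, 480), dtype=np.uint8)
manipulated = resample_and_rotate(reference, 960, 540, rotation=90)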
[0043] Color gamut conversion: For example, if one sub-picture used as a source is represented by a first color gamut or format, such as ITU-R BT.709, and the manipulated reference sub-picture is represented by a second color gamut or format, such as ITU-R BT.2020, the sub-picture used as a source may be converted to the second color gamut or format as part of the process.
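Such a gamut conversion can be sketched as a 3x3 matrix applied to linear-light RGB values; the coefficients below are approximate values commonly used for converting BT.709 primaries to BT.2020 primaries (cf. ITU-R BT.2087), and the conversions between the coded YUV representation and linear RGB are omitted for brevity.

import numpy as np

# Approximate linear-light RGB conversion matrix from BT.709 to BT.2020 primaries.
BT709_TO_BT2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def convert_gamut(linear_rgb_709):
    """Convert an HxWx3 array of linear-light BT.709 RGB samples to BT.2020 RGB."""
    converted = linear_rgb_709 @ BT709_TO_BT2020.T
    return np.clip(converted, 0.0, 1.0)  # clip rounding noise to the valid range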
[0044 ] Dynamic range conversion and/or color mapping conversion: Color mapping may refer to the mapping of sample values to linear light representation. The reconstructed sub-picture(s) used as a source for generating the manipulated reference sub-picture may be converted to the target dynamic range and color mapping.
[0045] In bit depth conversion the reconstructed sub-picture(s) used as source for generating the manipulated reference sub-picture may be converted to the bit-depth of the manipulated reference sub picture.
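Bit depth conversion can be sketched as a simple rescaling of sample values; the left shift by the bit-depth difference used below is the usual convention and is shown here only as an assumption of this sketch.

import numpy as np

def convert_bit_depth(samples, source_bits=8, target_bits=10):
    """Scale sample values from source_bits to target_bits."""
    if target_bits >= source_bits:
        return samples.astype(np.uint16) << (target_bits - source_bits)
    shift = source_bits - target_bits
    # Downscaling with rounding when the target bit depth is lower.
    return ((samples.astype(np.uint32) + (1 << (shift - 1))) >> shift).astype(np.uint16)

# Example: an 8-bit sample value of 235 becomes 940 in a 10-bit representation.
print(convert_bit_depth(np.array([235], dtype=np.uint8))[0])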
[0046] Chroma format conversion: For example, a manipulated reference sub-picture may have YUV 4:4:4 chroma format while at least some reconstructed sub-pictures used as source for generating the manipulated reference sub-picture may have chroma format 4:2:0. The sub-pictures used as source may be upsampled to YUV 4:4:4 as part of the process, in this example.
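The chroma upsampling in this example could be sketched as follows; nearest-neighbour replication of each 4:2:0 chroma sample into a 2x2 block is assumed for simplicity, whereas a real implementation would normally apply an interpolation filter aligned with the signalled chroma sample positions.

import numpy as np

def chroma_420_to_444(y_plane, cb_plane, cr_plane):
    """Upsample 4:2:0 chroma planes to the luma resolution (YUV 4:4:4)."""
    # Replicate every chroma sample into a 2x2 block (nearest-neighbour upsampling).
    cb_444 = np.repeat(np.repeat(cb_plane, 2, axis=0), 2, axis=1)
    cr_444 = np.repeat(np.repeat(cr_plane, 2, axis=0), 2, axis=1)
    h, w = y_plane.shape
    # Crop in case the luma dimensions are odd.
    return y_plane, cb_444[:h, :w], cr_444[:h, :w]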
[0047] Projection conversion: For example, if one sub-picture is in a first projection, such as ERP, and the manipulated sub-picture is in a second projection, such as CMP, the sub-picture used as reference may be converted to the second projection. As a use case, the whole 360-degree content may be coded at lower resolution in ERP format, and the viewport content may be coded at higher resolution in CMP format.
[0048] Frame rate conversion: For example, if one sub-picture is coded with a first frame rate and a second sub-picture is coded with a second frame rate, the sub-picture used as reference may be interpolated in the temporal domain to the time instance of the second sub-picture. As a use case, in stereoscopic streaming the dominant view may be transmitted at a higher frame rate, and the auxiliary view may be transmitted at a lower frame rate.
[0049] A method according to a first aspect comprises:
obtaining coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence;
determining whether to use the sub-picture as a source for a manipulated reference sub-picture; if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture, the method further comprises
generating the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence.
[0050] An apparatus according to a second aspect comprises at least one processor and at least one memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
obtain coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence;
determine whether to use the sub-picture as a source for a manipulated reference sub-picture; generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
[0051 ] A computer program product according to a third aspect comprises computer program code configured to, when executed on at least one processor, cause an apparatus or a system to:
obtain coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence;
determine whether to use the sub-picture as a source for a manipulated reference sub-picture; generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
[0052] An encoder according to a fourth aspect comprises:
an input for obtaining coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence;
a determinator configured to determine whether to use the sub-picture as a source for a manipulated reference sub-picture;
a manipulator configured to generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
[0053] A decoder according to a fifth aspect comprises:
an input for receiving coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence;
a determinator configured to determine whether to use the sub-picture as a source for a manipulated reference sub-picture;
a manipulator configured to generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
[0054] The further aspects relate to apparatuses and computer readable storage media having code stored thereon, which are arranged to carry out the above methods and one or more of the embodiments related thereto.
BRIEF DESCRIPTION OF THE DRAWINGS
[0055] For better understanding of the present invention, reference will now be made by way of example to the accompanying drawings in which:
[0056] Figure 1 shows an example of MPEG Omnidirectional Media Format (OMAF);
[0057] Figure 2 shows an example of image stitching, projection and region- wise packing;
[0058] Figure 3 shows another example of image stitching, projection and region-wise packing;
[0059] Figure 4 shows an example of a process of forming a monoscopic equirectangular panorama picture;
[0060] Figure 5 shows an example of tile-based omnidirectional video streaming;
[0061 ] Figure 6 shows an example of a decoding process;
[0062] Figure 7 shows a sub-picture-sequence- wise buffering according to an embodiment;
[0063] Figure 8 shows a decoding process with a reference sub-picture manipulation process, in accordance with an embodiment;
[0064] Figure 9 illustrates a flowchart of a method according to an embodiment;
[0065] Figure 10 shows an example of a picture that has been divided into four sub-pictures;
[0066] Figure 11 shows predictions applicable in an encoding process and/or in a decoding process according to an embodiment;
[0067] Figure 12 shows an example of using a shared coded sub-picture for multi-resolution viewport independent 360-degree video streaming;
[0068] Figure 13 shows an example of a sub-picture using a part of another sub-picture as a reference frame;
[0069] Figure 14 shows another example of a sub-picture using a part of another sub-picture as a reference frame;
[0070] Figure 15 shows an example of a patch generation according to an embodiment;
[0071] Figures 16a-16d illustrate generation of a set of manipulated reference sub-pictures by unfolding projection surfaces, in accordance with an embodiment;
[0072] Figure 16e illustrates generation of a set of manipulated reference sub-pictures by unfolding projection surfaces and sample-line-wise resampling, in accordance with an embodiment.
[0073] Figure 17a illustrates a use of a reference sub-picture manipulation process for adaptive resolution change, in accordance with an embodiment;
[0074] Figure 17b illustrates a possible encoding arrangement for an adaptive resolution change, in accordance with an embodiment;
[0075 ] Figure 17c illustrates an example situation for an adaptive resolution change, in accordance with an embodiment;
[0076] Figure 18 shows an apparatus according to an embodiment;
[0077] Figure 19 illustrates, on the left, initial selection of subpictures to be streamed, and on the right, subpictures to be streamed after viewing orientation change;
[0078] Figure 20 shows an example of encoding of "switching" subpicture sequences using DRAP pictures;
[0079] Figure 21 illustrates an example of a merged bitstream;
[0080] Figure 22 illustrates encoding of "switching" subpicture sequences using DRAP pictures in RWMR+SCP method;
[0081 ] Figure 23 illustrates merged bitstream in RWMR+SCP method;
[0082] Figure 24 illustrates encoding of "switching" subpicture sequences using DRAP pictures in RWMR+SCP method, mixed resolution SCP;
[0083] Figure 25 illustrates a merged bitstream in RWMR+SCP method, mixed resolution SCP;
[0084] Figure 26 illustrates an example embodiment of a RWMR 360° method; and
[0085] Figure 27 presents an example embodiment continuing the example illustrated in Figure 26.
DETAILED DESCRIPTION OF SOME EXAMPLE EMBODIMENTS
[0086] In the following, several embodiments will be described in the context of one video coding arrangement. It is to be noted, however, that the invention is not limited to this particular arrangement. For example, the invention may be applicable to video coding systems such as streaming systems, DVD (Digital Versatile Disc) players, digital television receivers, personal video recorders, systems and computer programs on personal computers, handheld computers and communication devices, as well as network elements such as transcoders and cloud computing arrangements where video data is handled.
[0087] In the following, several embodiments are described using the convention of referring to (de)coding, which indicates that the embodiments may apply to decoding and/or encoding.
[0088] The Advanced Video Coding standard (which may be abbreviated AVC or H.264/AVC) was developed by the Joint Video Team (JVT) of the Video Coding Experts Group (VCEG) of the Telecommunications Standardization Sector of International Telecommunication Union (ITU-T) and the Moving Picture Experts Group (MPEG) of International Organization for Standardization (ISO) / International Electrotechnical Commission (IEC). The H.264/AVC standard is published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.264 and ISO/IEC International Standard 14496-10, also known as MPEG-4 Part 10 Advanced Video Coding (AVC). There have been multiple versions of the H.264/AVC standard, each integrating new extensions or features to the specification. These extensions include Scalable Video Coding (SVC) and Multiview Video Coding (MVC).
[0089] The High Efficiency Video Coding standard (which may be abbreviated HEVC or H.265/HEVC) was developed by the Joint Collaborative Team - Video Coding (JCT-VC) of VCEG and MPEG. The standard is published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.265 and ISO/IEC International Standard 23008-2, also known as MPEG-H Part 2 High Efficiency Video Coding (HEVC). Extensions to H.265/HEVC include scalable, multiview, three-dimensional, and fidelity range extensions, which may be referred to as SHVC, MV- HEVC, 3D-HEVC, and REXT, respectively. The references in this description to H.265/HEVC, SHVC, MV-HEVC, 3D-HEVC and REXT that have been made for the purpose of understanding definitions, structures or concepts of these standard specifications are to be understood to be references to the latest versions of these standards that were available before the date of this application, unless otherwise indicated.
[0090] The Versatile Video Coding standard (VVC, H.266, or H.266/VVC) is presently under development by the Joint Video Experts Team (JVET), which is a collaboration between the ISO/IEC MPEG and ITU-T VCEG.
[0091] Some key definitions, bitstream and coding structures, and concepts of H.264/AVC and HEVC and some of their extensions are described in this section as an example of a video encoder, decoder, encoding method, decoding method, and a bitstream structure, wherein the embodiments may be implemented. Some of the key definitions, bitstream and coding structures, and concepts of H.264/AVC are the same as in HEVC standard - hence, they are described below jointly. The aspects of various embodiments are not limited to H.264/AVC or HEVC or their extensions, but rather the description is given for one possible basis on top of which the present embodiments may be partly or fully realized.
[0092] Video codec may comprise an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form. The compressed representation may be referred to as a bitstream or a video bitstream. A video encoder and/or a video decoder may also be separate from each other, i.e. need not form a codec. The encoder may discard some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate).
[0093] Hybrid video codecs, for example ITU-T H.264, may encode the video information in two phases. At first, pixel values in a certain picture area (or "block") are predicted, for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Then, the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This may be done by transforming the difference in pixel values using a specified transform (e.g. Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients and entropy coding the quantized coefficients. By varying the fidelity of the quantization process, the encoder can control the balance between the accuracy of the pixel representation (picture quality) and the size of the resulting coded video representation (file size or transmission bitrate).
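The two phases can be illustrated with the sketch below, which forms a motion-compensated prediction for one block, computes the prediction error, and transforms and quantizes it with a DCT; the block size, the flat quantization step and the use of SciPy's DCT are assumptions of this illustration and do not describe any particular standard.

import numpy as np
from scipy.fft import dctn, idctn

def encode_block(current_block, reference_picture, mv, qstep=16):
    """Predict, transform and quantize one block (schematic hybrid coding)."""
    y, x = mv  # displacement into a previously decoded reference picture
    n = current_block.shape[0]
    prediction = reference_picture[y:y + n, x:x + n].astype(np.float64)
    residual = current_block.astype(np.float64) - prediction
    coefficients = dctn(residual, norm="ortho")              # transform the prediction error
    quantized = np.round(coefficients / qstep).astype(int)   # lossy quantization
    return quantized, prediction

def decode_block(quantized, prediction, qstep=16):
    """Inverse-quantize, inverse-transform and add the prediction back."""
    residual = idctn(quantized * qstep, norm="ortho")
    return np.clip(np.round(prediction + residual), 0, 255).astype(np.uint8)

A larger quantization step reduces the bitrate at the cost of a less accurate pixel representation, which is exactly the trade-off described above.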
[0094] In temporal prediction, the sources of prediction are previously decoded pictures (a.k.a. reference pictures). In intra block copy (IBC; a.k.a. intra-block-copy prediction or current picture referencing), prediction is applied similarly to temporal prediction, but the reference picture is the current picture and only previously decoded samples can be referred to in the prediction process. Inter-layer or inter-view prediction may be applied similarly to temporal prediction, but the reference picture is a decoded picture from another scalable layer or from another view, respectively. In some cases, inter prediction may refer to temporal prediction only, while in other cases inter prediction may refer collectively to temporal prediction and any of intra block copy, inter-layer prediction, and inter-view prediction provided that they are performed with the same or a similar process as temporal prediction. Inter prediction or temporal prediction may sometimes be referred to as motion compensation or motion-compensated prediction.
[0095] Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
[0096] One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy-coded more efficiently if they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
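The median motion vector predictor and the differential coding mentioned above can be sketched as follows; the choice of the left, above and above-right neighbours is a common convention, and the simplified availability handling is an assumption of this sketch.

def median_mv_predictor(mv_left, mv_above, mv_above_right):
    """Component-wise median of three neighbouring motion vectors."""
    def median3(a, b, c):
        return a + b + c - min(a, b, c) - max(a, b, c)
    return (median3(mv_left[0], mv_above[0], mv_above_right[0]),
            median3(mv_left[1], mv_above[1], mv_above_right[1]))

def mv_difference(mv, predictor):
    """Only this difference to the predictor needs to be entropy-coded."""
    return (mv[0] - predictor[0], mv[1] - predictor[1])

# Example: the predictor of (4, 1), (3, 2) and (8, 0) is (4, 1), so an actual
# motion vector of (5, 1) is coded as the difference (1, 0).
predictor = median_mv_predictor((4, 1), (3, 2), (8, 0))
print(predictor, mv_difference((5, 1), predictor))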
[0097] Entropy coding/decoding may be performed in many ways. For example, context-based coding/decoding may be applied, wherein both the encoder and the decoder modify the context state of a coding parameter based on previously coded/decoded coding parameters. Context-based coding may for example be context adaptive binary arithmetic coding (CABAC) or context-based variable length coding (CAVLC) or any similar entropy coding. Entropy coding/decoding may alternatively or additionally be performed using a variable length coding scheme, such as Huffman coding/decoding or Exp-Golomb coding/decoding. Decoding of coding parameters from an entropy-coded bitstream or codewords may be referred to as parsing.
[0098] Video coding standards may specify the bitstream syntax and semantics as well as the decoding process for error-free bitstreams, whereas the encoding process might not be specified, but encoders may just be required to generate conforming bitstreams. Bitstream and decoder conformance can be verified with the Hypothetical Reference Decoder (HRD). The standards may contain coding tools that help in coping with transmission errors and losses, but the use of the tools in encoding may be optional and the decoding process for erroneous bitstreams might not have been specified.
[0099] A syntax element may be defined as an element of data represented in the bitstream. A syntax structure may be defined as zero or more syntax elements present together in the bitstream in a specified order.
[0100] An elementary unit for the input to an encoder and the output of a decoder, respectively, is typically a picture. A picture given as an input to an encoder may also be referred to as a source picture, and a picture decoded by a decoder may be referred to as a decoded picture or a reconstructed picture.
[0101 ] The source and decoded pictures are each comprised of one or more sample arrays, such as one of the following sets of sample arrays:
- Luma (Y) only (monochrome).
- Luma and two chroma (YCbCr or YCgCo).
- Green, Blue and Red (GBR, also known as RGB).
- Arrays representing other unspecified monochrome or tri-stimulus color samplings (for
example, YZX, also known as XYZ).
[0102] In the following, these arrays may be referred to as luma (or L or Y) and chroma, where the two chroma arrays may be referred to as Cb and Cr, regardless of the actual color representation method in use. The actual color representation method in use can be indicated e.g. in a coded bitstream e.g. using the Video Usability Information (VUI) syntax of HEVC or alike. A component may be defined as an array or single sample from one of the three sample arrays (luma and two chroma) or the array or a single sample of the array that compose a picture in monochrome format.
[0103] A picture may be defined to be either a frame or a field. A frame comprises a matrix of luma samples and possibly the corresponding chroma samples. A field is a set of alternate sample rows of a frame and may be used as encoder input, when the source signal is interlaced. Chroma sample arrays may be absent (and hence monochrome sampling may be in use) or chroma sample arrays may be subsampled when compared to luma sample arrays.
[0104] Some chroma formats may be summarized as follows:
- In monochrome sampling there is only one sample array, which may be nominally considered the luma array.
- In 4:2:0 sampling, each of the two chroma arrays has half the height and half the width of the luma array.
- In 4:2:2 sampling, each of the two chroma arrays has the same height and half the width of the luma array.
- In 4:4:4 sampling, when no separate color planes are in use, each of the two chroma arrays has the same height and width as the luma array.
[0105] Coding formats or standards may allow to code sample arrays as separate color planes into the bitstream and respectively decode separately coded color planes from the bitstream. When separate color planes are in use, each one of them is separately processed (by the encoder and/or the decoder) as a picture with monochrome sampling.
[0106] When chroma subsampling is in use (e.g. 4:2:0 or 4:2:2 chroma sampling), the location of chroma samples with respect to luma samples may be determined in the encoder side (e.g. as a pre-processing step or as part of encoding). The chroma sample positions with respect to luma sample positions may be pre-defined for example in a coding standard, such as H.264/AVC or HEVC, or may be indicated in the bitstream for example as part of the VUI of H.264/AVC or HEVC.
[0107 ] Generally, the source video sequence(s) provided as input for encoding may either represent interlaced source content or progressive source content. Fields of opposite parity have been captured at different times for interlaced source content. Progressive source content contains captured frames. An encoder may encode fields of interlaced source content in two ways: a pair of interlaced fields may be coded into a coded frame or a field may be coded as a coded field. Likewise, an encoder may encode frames of progressive source content in two ways: a frame of progressive source content may be coded into a coded frame or a pair of coded fields. A field pair or a complementary field pair may be defined as two fields next to each other in decoding and/or output order, having opposite parity (i.e. one being a top field and another being a bottom field) and neither belonging to any other complementary field pair. Some video coding standards or schemes allow mixing of coded frames and coded fields in the same coded video sequence. Moreover, predicting a coded field from a field in a coded frame and/or predicting a coded frame for a complementary field pair (coded as fields) may be enabled in encoding and/or decoding.
[0108] Partitioning may be defined as a division of a set into subsets such that each element of the set is in exactly one of the subsets.
[0109] In H.264/AVC, a macroblock is a 16x16 block of luma samples and the corresponding blocks of chroma samples. For example, in the 4:2:0 sampling pattern, a macroblock contains one 8x8 block of chroma samples per each chroma component. In H.264/AVC, a picture is partitioned to one or more slice groups, and a slice group contains one or more slices. In H.264/AVC, a slice consists of an integer number of macroblocks ordered consecutively in the raster scan within a particular slice group.
[0110] When describing the operation of HEVC encoding and/or decoding, the following terms may be used. A coding block may be defined as an NxN block of samples for some value of N such that the division of a coding tree block into coding blocks is a partitioning. A coding tree block (CTB) may be defined as an NxN block of samples for some value of N such that the division of a component into coding tree blocks is a partitioning. A coding tree unit (CTU) may be defined as a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples of a picture that has three sample arrays, or a coding tree block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples. A coding unit (CU) may be defined as a coding block of luma samples, two corresponding coding blocks of chroma samples of a picture that has three sample arrays, or a coding block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples.
[0111] In some video codecs, such as High Efficiency Video Coding (HEVC) codec, video pictures may be divided into coding units (CU) covering the area of the picture. A CU consists of one or more prediction units (PU) defining the prediction process for the samples within the CU and one or more transform units (TU) defining the prediction error coding process for the samples in the said CU. The CU may consist of a square block of samples with a size selectable from a predefined set of possible CU sizes. A CU with the maximum allowed size may be named as LCU (largest coding unit) or coding tree unit (CTU) and the video picture is divided into non-overlapping LCUs. An LCU can be further split into a combination of smaller CUs, e.g. by recursively splitting the LCU and resultant CUs. Each resulting CU may have at least one PU and at least one TU associated with it. Each PU and TU can be further split into smaller PUs and TUs in order to increase granularity of the prediction and prediction error coding processes, respectively. Each PU has prediction information associated with it defining what kind of a prediction is to be applied for the pixels within that PU (e.g. motion vector information for inter predicted PUs and intra prediction directionality information for intra predicted PUs).
[0112] Each TU can be associated with information describing the prediction error decoding process for the samples within the said TU (including e.g. DCT coefficient information). It may be signalled at CU level whether prediction error coding is applied or not for each CU. In the case there is no prediction error residual associated with the CU, it can be considered there are no TUs for the said CU. The division of the image into CUs, and division of CUs into PUs and TUs may be signalled in the bitstream allowing the decoder to reproduce the intended structure of these units.
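The recursive splitting of an LCU/CTU into smaller CUs can be sketched as the quadtree traversal below; the splitting criterion (a sample-variance threshold) and the minimum CU size are illustrative assumptions standing in for the encoder's rate-distortion decision.

import numpy as np

def split_ctu(samples, x, y, size, min_cu=8, variance_threshold=100.0):
    """Recursively partition a CTU, returning (x, y, size) tuples of the leaf CUs.

    A real encoder decides the split by rate-distortion optimization; a variance
    threshold is used here purely for illustration.
    """
    block = samples[y:y + size, x:x + size]
    if size <= min_cu or block.var() < variance_threshold:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):      # quaternary (quadtree) split into four sub-blocks
        for dx in (0, half):
            leaves += split_ctu(samples, x + dx, y + dy, half, min_cu, variance_threshold)
    return leaves

# Example: partition a 64x64 CTU of random content.
ctu = np.random.randint(0, 256, (64, 64)).astype(np.float64)
print(len(split_ctu(ctu, 0, 0, 64)), "coding units")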
[0113] In a draft version of H.266/VVC, the following partitioning applies. It is noted that what is described here might still evolve in later draft versions of H.266/VVC until the standard is finalized. Pictures are partitioned into CTUs similarly to HEVC, although the maximum CTU size has been increased to 128x128. A coding tree unit (CTU) is first partitioned by a quaternary tree (a.k.a.
quadtree) structure. Then the quaternary tree leaf nodes can be further partitioned by a multi-type tree structure. There are four splitting types in multi-type tree structure, vertical binary splitting, horizontal binary splitting, vertical ternary splitting, and horizontal ternary splitting. The multi-type tree leaf nodes are called coding units (CUs). CU, PU and TU have the same block size, unless the CU is too large for the maximum transform length. A segmentation structure for a CTU is a quadtree with nested multi-type tree using binary and ternary splits, i.e. no separate CU, PU and TU concepts are in use except when needed for CUs that have a size too large for the maximum transform length. A CU can have either a square or rectangular shape.
[0114] The decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (inverse operation of the prediction error coding recovering the quantized prediction error signal in spatial pixel domain). After applying prediction and prediction error decoding means the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame. The decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as prediction reference for the forthcoming frames in the video sequence.
[0115] The filtering may for example include one or more of the following: deblocking, sample adaptive offset (SAO), and/or adaptive loop filtering (ALF).
[0116] The deblocking loop filter may include multiple filtering modes or strengths, which may be adaptively selected based on the features of the blocks adjacent to the boundary, such as the quantization parameter value, and/or signaling included by the encoder in the bitstream. For example, the deblocking loop filter may comprise a normal filtering mode and a strong filtering mode, which may differ in terms of the number of filter taps (i.e. number of samples being filtered on both sides of the boundary) and/or the filter tap values. For example, filtering of two samples along both sides of the boundary may be performed with a filter having the impulse response of (3 7 9 -3)/16, when omitting the potential impact of a clipping operation.
[0117] The motion information may be indicated with motion vectors associated with each motion compensated image block in video codecs. Each of these motion vectors represents the displacement of the image block in the picture to be coded (in the encoder side) or decoded (in the decoder side) and the prediction source block in one of the previously coded or decoded pictures. In order to represent motion vectors efficiently, they may be coded differentially with respect to block-specific predicted motion vectors. The predicted motion vectors may be created in a predefined way, for example by calculating the median of the encoded or decoded motion vectors of the adjacent blocks. Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signaling the chosen candidate as the motion vector predictor. In addition to predicting the motion vector values, the reference index of a previously coded/decoded picture can be predicted. The reference index may be predicted from adjacent blocks and/or co-located blocks in a temporal reference picture. Moreover, high efficiency video codecs may employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes a motion vector and a corresponding reference picture index for each available reference picture list, is predicted and used without any modification/correction. Similarly, predicting the motion field information is carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures, and the used motion field information is signaled among a list of motion field candidates filled with motion field information of available adjacent/co-located blocks.
[0118] Video codecs may support motion compensated prediction from one source image (uni-prediction) and two sources (bi-prediction). In the case of uni-prediction a single motion vector is applied whereas in the case of bi-prediction two motion vectors are signaled and the motion compensated predictions from two sources are averaged to create the final sample prediction. In the case of weighted prediction, the relative weights of the two predictions can be adjusted, or a signaled offset can be added to the prediction signal, as illustrated in the sketch following this paragraph.
In addition to applying motion compensation for inter picture prediction, a similar approach can be applied to intra picture prediction. In this case the displacement vector indicates from where in the same picture a block of samples can be copied to form a prediction of the block to be coded or decoded.
This kind of intra block copying method can improve the coding efficiency substantially in the presence of repeating structures within the frame, such as text or other graphics.
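The following is a minimal sketch of the sample-level combination used in bi-prediction and weighted prediction; the floating-point weights and the 8-bit clipping range are assumptions of this sketch, whereas real codecs use integer arithmetic with defined weight precisions.

import numpy as np

def combine_predictions(pred0, pred1, w0=0.5, w1=0.5, offset=0):
    """Combine two motion-compensated predictions into the final sample prediction.

    With w0 = w1 = 0.5 and offset = 0 this is plain bi-prediction averaging;
    other weights and offsets correspond to weighted prediction.
    """
    combined = w0 * pred0.astype(np.float64) + w1 * pred1.astype(np.float64) + offset
    return np.clip(np.round(combined), 0, 255).astype(np.uint8)

# Example: averaging predictions of 100 and 120 gives 110; uni-prediction simply
# uses one source block as such.
p0 = np.full((4, 4), 100, dtype=np.uint8)
p1 = np.full((4, 4), 120, dtype=np.uint8)
print(combine_predictions(p0, p1)[0, 0])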
[0119] The prediction residual after motion compensation or intra prediction may be first transformed with a transform kernel (like DCT) and then coded. The reason for this is that often there still exists some correlation among the residual and transform can in many cases help reduce this correlation and provide more efficient coding.
[0120] Video encoders may utilize Lagrangian cost functions to find optimal coding modes, e.g. the desired macroblock mode and associated motion vectors. This kind of cost function uses a weighting factor λ to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:
[0121] C = D + λR (Eq. 1), where C is the Lagrangian cost to be minimized, D is the image distortion (e.g. Mean Squared Error) with the mode and motion vectors considered, and R is the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors).
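As a worked illustration of Eq. 1, the sketch below evaluates the Lagrangian cost of candidate coding modes and selects the cheapest one; the SSE distortion metric, the lambda value and the candidate structure are assumptions of this example.

import numpy as np

def rd_cost(distortion, rate_bits, lam):
    """Lagrangian cost C = D + lambda * R from Eq. 1."""
    return distortion + lam * rate_bits

def choose_mode(original, candidates, lam=10.0):
    """Pick the (reconstruction, rate) candidate with the lowest RD cost."""
    best_mode, best_cost = None, float("inf")
    for name, (reconstruction, rate_bits) in candidates.items():
        sse = float(np.sum((original.astype(np.int64) - reconstruction) ** 2))
        cost = rd_cost(sse, rate_bits, lam)
        if cost < best_cost:
            best_mode, best_cost = name, cost
    return best_mode, best_cost

# Example: a cheap but distorted mode versus a more expensive, accurate mode.
original = np.full((4, 4), 128, dtype=np.uint8)
candidates = {
    "skip":  (np.full((4, 4), 120, dtype=np.int64), 2),   # D = 1024, R = 2 bits
    "intra": (np.full((4, 4), 127, dtype=np.int64), 60),  # D = 16,   R = 60 bits
}
print(choose_mode(original, candidates))  # with lambda = 10, "intra" wins (616 < 1044)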
[0122] Some codecs use a concept of picture order count (POC). A value of POC is derived for each picture and is non-decreasing with increasing picture position in output order. POC therefore indicates the output order of pictures. POC may be used in the decoding process for example for implicit scaling of motion vectors and for reference picture list initialization. Furthermore, POC may be used in the verification of output order conformance.
[0123] In video coding standards, a compliant bit stream must be able to be decoded by a hypothetical reference decoder that may be conceptually connected to the output of an encoder and consists of at least a pre-decoder buffer, a decoder and an output/display unit. This virtual decoder may be known as the hypothetical reference decoder (HRD) or the video buffering verifier (VBV). A stream is compliant if it can be decoded by the HRD without buffer overflow or, in some cases, underflow. Buffer overflow happens if more bits are to be placed into the buffer when it is full. Buffer underflow happens if some bits are not in the buffer when said bits are to be fetched from the buffer for decoding/playback. One of the motivations for the HRD is to avoid so-called evil bitstreams, which would consume such a large quantity of resources that practical decoder implementations would not be able to handle.
[0124] HRD models typically include instantaneous decoding, while the input bitrate to the coded picture buffer (CPB) of HRD may be regarded as a constraint for the encoder and the bitstream on decoding rate of coded data and a requirement for decoders for the processing rate. An encoder may include a CPB as specified in the HRD for verifying and controlling that buffering constraints are obeyed in the encoding. A decoder implementation may also have a CPB that may but does not necessarily operate similarly or identically to the CPB specified for HRD.
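The buffering constraint that the HRD verifies can be illustrated with the simple leaky-bucket CPB model below; the constant input bitrate, the fixed frame interval and the instantaneous removal of each coded picture are simplifying assumptions of this sketch, not the full HRD timing model.

def check_cpb(picture_sizes_bits, bitrate, cpb_size_bits, frame_rate):
    """Simulate a constant-bitrate coded picture buffer (leaky bucket).

    Returns a list of (picture_index, "overflow" or "underflow") violations.
    """
    violations = []
    fullness = 0.0
    bits_per_interval = bitrate / frame_rate
    for i, picture_bits in enumerate(picture_sizes_bits):
        fullness += bits_per_interval       # bits arriving during one frame interval
        if fullness > cpb_size_bits:
            violations.append((i, "overflow"))
            fullness = cpb_size_bits
        if fullness < picture_bits:         # picture not yet fully in the buffer
            violations.append((i, "underflow"))
        fullness -= picture_bits            # instantaneous removal at decoding time
    return violations

# Example: 1 Mbit/s, 0.5 Mbit CPB, 25 fps, with one overly large coded picture.
print(check_cpb([30_000, 30_000, 200_000, 30_000], 1_000_000, 500_000, 25))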
[0125] A Decoded Picture Buffer (DPB) may be used in the encoder and/or in the decoder. There may be two reasons to buffer decoded pictures, for references in inter prediction and for reordering decoded pictures into output order. Since some coding formats, such as HEVC, provide a great deal of flexibility for both reference picture marking and output reordering, separate buffers for reference picture buffering and output picture buffering may waste memory resources. Hence, the DPB may include a unified decoded picture buffering process for reference pictures and output reordering. A decoded picture may be removed from the DPB when it is no longer used as a reference and is not needed for output. An HRD may also include a DPB. DPBs of an HRD and a decoder implementation may but do not need to operate identically.
[0126] Output order may be defined as the order in which the decoded pictures are output from the decoded picture buffer (for the decoded pictures that are to be output from the decoded picture buffer).
[0127] A decoder and/or an HRD may comprise a picture output process. The output process may be considered to be a process in which the decoder provides decoded and cropped pictures as the output of the decoding process. The output process is typically a part of video coding standards, typically as a part of the hypothetical reference decoder specification. In output cropping, lines and/or columns of samples may be removed from decoded pictures according to a cropping rectangle to form output pictures. A cropped decoded picture may be defined as the result of cropping a decoded picture based on the conformance cropping window specified e.g. in the sequence parameter set that is referred to by the corresponding coded picture.
[0128] One or more syntax structures for (decoded) reference picture marking may exist in a video coding system. An encoder generates an instance of a syntax structure e.g. in each coded picture, and a decoder decodes an instance of the syntax structure e.g. from each coded picture. For example, the decoding of the syntax structure may cause pictures to be adaptively marked as "used for reference" or "unused for reference".
[0129 ] A reference picture set (RPS) syntax structure of HEVC is an example of a syntax structure for reference picture marking. A reference picture set valid or active for a picture includes all the reference pictures that may be used as reference for the picture and all the reference pictures that are kept marked as "used for reference" for any subsequent pictures in decoding order. The reference pictures that are kept marked as "used for reference" for any subsequent pictures in decoding order but that are not used as reference picture for the current picture or image segment may be considered inactive. For example, they might not be included in the initial reference picture list(s).
[0130] In some coding formats and codecs, a distinction is made between so-called short-term and long-term reference pictures. This distinction may affect some decoding processes such as motion vector scaling. Syntax structure(s) for marking reference pictures may be indicative of marking a picture as "used for long-term reference" or "used for short-term reference".
[0131 ] In some coding formats, reference picture for inter prediction may be indicated with an index to a reference picture list. In some codecs, two reference picture lists (reference picture list 0 and reference picture list 1) are generated for each bi-predictive (B) slice, and one reference picture list (reference picture list 0) is formed for each inter-coded (P) slice.
[0132] A reference picture list, such as the reference picture list 0 and the reference picture list 1, may be constructed in two steps: First, an initial reference picture list is generated. The initial reference picture list may be generated using an algorithm pre-defined in a standard. Such an algorithm may use e.g. POC and/or temporal sub-layer, as the basis. The algorithm may process reference pictures with particular marking(s), such as "used for reference", and omit other reference pictures, i.e. avoid inserting other reference pictures into the initial reference picture list. An example of such other reference picture is a reference picture marked as "unused for reference" but still residing in the decoded picture buffer waiting to be output from the decoder. Second, the initial reference picture list may be reordered through a specific syntax structure, such as reference picture list reordering (RPLR) commands of H.264/AVC or the reference picture list modification syntax structure of HEVC or anything alike. Furthermore, the number of active reference pictures may be indicated for each list, and the use of the pictures beyond the active ones in the list as reference for inter prediction is disabled. One or both of the reference picture list initialization and reference picture list modification may process only active reference pictures among those reference pictures that are marked as "used for reference" or alike.
[0133] Scalable video coding refers to coding structure where one bitstream can contain multiple representations of the content at different bitrates, resolutions or frame rates. In these cases, the receiver can extract the desired representation depending on its characteristics (e.g. resolution that matches best the display device). Alternatively, a server or a network element can extract the portions of the bitstream to be transmitted to the receiver depending on e.g. the network characteristics or processing capabilities of the receiver. A scalable bitstream may include a "base layer" providing the lowest quality video available and one or more enhancement layers that enhance the video quality when received and decoded together with the lower layers. In order to improve coding efficiency for the enhancement layers, the coded representation of that layer may depend on the lower layers. E.g. the motion and mode information of the enhancement layer can be predicted from lower layers.
Similarly, the pixel data of the lower layers can be used to create prediction for the enhancement layer.
[0134] A scalable video codec for quality scalability (also known as Signal-to-Noise or SNR) and/or spatial scalability may be implemented as follows. For a base layer, a conventional non-scalable video encoder and decoder is used. The reconstructed/decoded pictures of the base layer are included in the reference picture buffer for an enhancement layer. In H.264/AVC, HEVC, and similar codecs using reference picture list(s) for inter prediction, the base layer decoded pictures may be inserted into a reference picture list(s) for coding/decoding of an enhancement layer picture similarly to the decoded reference pictures of the enhancement layer. Consequently, the encoder may choose a base-layer reference picture as inter prediction reference and indicate its use e.g. with a reference picture index in the coded bitstream. The decoder decodes from the bitstream, for example from a reference picture index, that a base-layer picture is used as inter prediction reference for the enhancement layer. When a decoded base-layer picture is used as prediction reference for an enhancement layer, it is referred to as an inter-layer reference picture.
[0135] Scalability modes or scalability dimensions may include but are not limited to the following:
• Quality scalability: Base layer pictures are coded at a lower quality than enhancement layer pictures, which may be achieved for example using a greater quantization parameter value (i.e., a greater quantization step size for transform coefficient quantization) in the base layer than in the enhancement layer.
• Spatial scalability: Base layer pictures are coded at a lower resolution (i.e. have fewer
samples) than enhancement layer pictures. Spatial scalability and quality scalability may sometimes be considered the same type of scalability.
• Bit-depth scalability: Base layer pictures are coded at lower bit-depth (e.g. 8 bits) than
enhancement layer pictures (e.g. 10 or 12 bits).
• Dynamic range scalability: Scalable layers represent a different dynamic range and/or images obtained using a different tone mapping function and/or a different optical transfer function.
• Chroma format scalability: Base layer pictures provide lower spatial resolution in chroma sample arrays (e.g. coded in 4:2:0 chroma format) than enhancement layer pictures (e.g. 4:4:4 format).
• Color gamut scalability: enhancement layer pictures have a richer/broader color representation range than that of the base layer pictures - for example the enhancement layer may have UHDTV (ITU-R BT.2020) color gamut and the base layer may have the ITU-R BT.709 color gamut.
• Region-of-interest (ROI) scalability: An enhancement layer represents a spatial subset of the base layer. ROI scalability may be used together with other types of scalability, e.g. quality or spatial scalability, so that the enhancement layer provides higher subjective quality for the spatial subset.
• View scalability, which may also be referred to as multiview coding. The base layer represents a first view, whereas an enhancement layer represents a second view.
• Depth scalability, which may also be referred to as depth-enhanced coding. A layer or some layers of a bitstream may represent texture view(s), while other layer or layers may represent depth view(s).
[0136] In all of the above scalability cases, base layer information could be used to code enhancement layer to minimize the additional bitrate overhead.
[0137] Scalability can be enabled in two basic ways. Either by introducing new coding modes for performing prediction of pixel values or syntax from lower layers of the scalable representation or by placing the lower layer pictures to the reference picture buffer (decoded picture buffer, DPB) of the higher layer. The first approach is more flexible and thus can provide better coding efficiency in most cases. However, the second, reference frame-based scalability approach can be implemented very efficiently with minimal changes to single-layer codecs while still achieving the majority of the coding efficiency gains available. Essentially, a reference frame-based scalability codec can be implemented by utilizing the same hardware or software implementation for all the layers, just taking care of the DPB management by external means.
[0138] An elementary unit for the output of encoders of some coding formats, such as HEVC, and the input of decoders of some coding formats, such as HEVC, is a Network Abstraction Layer (NAL) unit. For transport over packet-oriented networks or storage into structured files, NAL units may be encapsulated into packets or similar structures.
[0139] NAL units consist of a header and payload. In HEVC, a two-byte NAL unit header is used for all specified NAL unit types, while in other codecs NAL unit header may be similar to that in HEVC.
[0140] In HEVC, the NAL unit header contains one reserved bit, a six-bit NAL unit type indication, a three-bit temporal_id_plus1 indication for temporal level or sub-layer (may be required to be greater than or equal to 1) and a six-bit nuh_layer_id syntax element. The temporal_id_plus1 syntax element may be regarded as a temporal identifier for the NAL unit, and a zero-based TemporalId variable may be derived as follows: TemporalId = temporal_id_plus1 - 1. The abbreviation TID may be used interchangeably with the TemporalId variable. TemporalId equal to 0 corresponds to the lowest temporal level. The value of temporal_id_plus1 is required to be non-zero in order to avoid start code emulation involving the two NAL unit header bytes. The bitstream created by excluding all VCL NAL units having a TemporalId greater than or equal to a selected value and including all other VCL NAL units remains conforming. Consequently, a picture having TemporalId equal to tid_value does not use any picture having a TemporalId greater than tid_value as inter prediction reference. A sub-layer or a temporal sub-layer may be defined to be a temporal scalable layer (or a temporal layer, TL) of a temporal scalable bitstream. Such a temporal scalable layer may comprise VCL NAL units with a particular value of the TemporalId variable and the associated non-VCL NAL units. nuh_layer_id can be understood as a scalability layer identifier.
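The two-byte HEVC NAL unit header described above can be parsed as sketched below; the bit layout (one forbidden_zero_bit, six-bit nal_unit_type, six-bit nuh_layer_id and three-bit temporal_id_plus1) follows the HEVC specification, and TemporalId is derived as temporal_id_plus1 - 1.

def parse_hevc_nal_header(byte0: int, byte1: int) -> dict:
    """Parse the two-byte HEVC NAL unit header."""
    forbidden_zero_bit = byte0 >> 7
    nal_unit_type = (byte0 >> 1) & 0x3F                   # six-bit NAL unit type
    nuh_layer_id = ((byte0 & 0x01) << 5) | (byte1 >> 3)   # six-bit layer identifier
    temporal_id_plus1 = byte1 & 0x07                      # required to be non-zero
    return {
        "forbidden_zero_bit": forbidden_zero_bit,
        "nal_unit_type": nal_unit_type,
        "nuh_layer_id": nuh_layer_id,
        "TemporalId": temporal_id_plus1 - 1,              # zero-based temporal level
    }

# Example: bytes 0x40 0x01 indicate NAL unit type 32 on layer 0 with TemporalId 0.
print(parse_hevc_nal_header(0x40, 0x01))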
[0141 ] NAL units can be categorized into Video Coding Layer (VCL) NAL units and non-VCL NAL units. VCL NAL units are typically coded slice NAL units. In HEVC, VCL NAL units contain syntax elements representing one or more CU. In HEVC, the NAL unit type within a certain range indicates a VCL NAL unit, and the VCL NAL unit type indicates a picture type.
[0142] Images can be split into independently codable and decodable image segments (e.g. slices or tiles or tile groups). Such image segments may enable parallel processing. "Slices" in this description may refer to image segments constructed of a certain number of basic coding units that are processed in default coding or decoding order, while "tiles" may refer to image segments that have been defined as rectangular image regions. A tile group may be defined as a group of one or more tiles. Image segments may be coded as separate units in the bitstream, such as VCL NAL units in H.264/AVC and HEVC. Coded image segments may comprise a header and a payload, wherein the header contains parameter values needed for decoding the payload.
[0143 ] In the HEVC standard, a picture can be partitioned in tiles, which are rectangular and contain an integer number of CTUs. In the HEVC standard, the partitioning to tiles forms a grid that may be characterized by a list of tile column widths (in CTUs) and a list of tile row heights (in CTUs). Tiles are ordered in the bitstream consecutively in the raster scan order of the tile grid. A tile may contain an integer number of slices.
[0144] In the HEVC, a slice consists of an integer number of CTUs. The CTUs are scanned in the raster scan order of CTUs within tiles or within a picture, if tiles are not in use. A slice may contain an integer number of tiles or a slice can be contained in a tile. Within a CTU, the CUs have a specific scan order.
[0145] In HEVC, a slice is defined to be an integer number of coding tree units contained in one independent slice segment and all subsequent dependent slice segments (if any) that precede the next independent slice segment (if any) within the same access unit. In HEVC, a slice segment is defined to be an integer number of coding tree units ordered consecutively in the tile scan and contained in a single NAL (Network Abstraction Layer) unit. The division of each picture into slice segments is a partitioning. In HEVC, an independent slice segment is defined to be a slice segment for which the values of the syntax elements of the slice segment header are not inferred from the values for a preceding slice segment, and a dependent slice segment is defined to be a slice segment for which the values of some syntax elements of the slice segment header are inferred from the values for the preceding independent slice segment in decoding order. In HEVC, a slice header is defined to be the slice segment header of the independent slice segment that is a current slice segment or is the independent slice segment that precedes a current dependent slice segment, and a slice segment header is defined to be a part of a coded slice segment containing the data elements pertaining to the first or all coding tree units represented in the slice segment. The CUs are scanned in the raster scan order of LCUs within tiles or within a picture, if tiles are not in use. Within an LCU, the CUs have a specific scan order.
[0146] In a draft version of H.266/VVC, pictures are partitioned into tiles along a tile grid (similarly to HEVC). Tiles are ordered in the bitstream in tile raster scan order within a picture, and CTUs are ordered in the bitstream in raster scan order within a tile. A tile group contains one or more entire tiles in bitstream order (i.e. tile raster scan order within a picture), and a VCL NAL unit contains one tile group. Slices have not been included in the draft version of H.266/VVC. It is noted that what was described in this paragraph might still evolve in later draft versions of H.266/VVC until the standard is finalized.
[0147] A motion-constrained tile set (MCTS) is such that the inter prediction process is constrained in encoding such that no sample value outside the motion-constrained tile set, and no sample value at a fractional sample position that is derived using one or more sample values outside the motion-constrained tile set, is used for inter prediction of any sample within the motion-constrained tile set. Additionally, the encoding of an MCTS is constrained in a manner that motion vector candidates are not derived from blocks outside the MCTS. This may be enforced by turning off temporal motion vector prediction of HEVC, or by disallowing the encoder to use the TMVP candidate or any motion vector prediction candidate following the TMVP candidate in the merge or AMVP candidate list for PUs located directly left of the right tile boundary of the MCTS except the last one at the bottom right of the MCTS. In general, an MCTS may be defined to be a tile set that is independent of any sample values and coded data, such as motion vectors, that are outside the MCTS. An MCTS sequence may be defined as a sequence of respective MCTSs in one or more coded video sequences or alike. In some cases, an MCTS may be required to form a rectangular area. It should be understood that depending on the context, an MCTS may refer to the tile set within a picture or to the respective tile set in a sequence of pictures. The respective tile set may be, but in general need not be, collocated in the sequence of pictures. A motion-constrained tile set may be regarded as an independently coded tile set, since it may be decoded without the other tile sets.
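An encoder can enforce the MCTS motion constraint with a check like the one sketched below, which rejects a candidate motion vector if the reference samples it needs (including those required for fractional-sample interpolation) would fall outside the tile set; the quarter-sample motion vector precision and the fixed interpolation margin are assumptions of this sketch, since the actual margin depends on the interpolation filter length.

def mv_allowed_in_mcts(block_x, block_y, block_w, block_h, mv_x, mv_y,
                       mcts_left, mcts_top, mcts_right, mcts_bottom,
                       mv_precision=4, interp_margin=4):
    """Return True if a motion vector keeps all referenced samples inside the MCTS.

    mv_x and mv_y are in quarter-sample units (mv_precision = 4); interp_margin is
    the number of extra integer samples needed per side for fractional interpolation.
    """
    frac_x = mv_x % mv_precision != 0
    frac_y = mv_y % mv_precision != 0
    ref_left = block_x + mv_x // mv_precision - (interp_margin if frac_x else 0)
    ref_top = block_y + mv_y // mv_precision - (interp_margin if frac_y else 0)
    ref_right = (block_x + block_w - 1 + (mv_x + mv_precision - 1) // mv_precision
                 + (interp_margin if frac_x else 0))
    ref_bottom = (block_y + block_h - 1 + (mv_y + mv_precision - 1) // mv_precision
                  + (interp_margin if frac_y else 0))
    return (ref_left >= mcts_left and ref_top >= mcts_top and
            ref_right <= mcts_right and ref_bottom <= mcts_bottom)

# Example: a 16x16 block at (64, 64) with a fractional MV inside a 256x256 tile set.
print(mv_allowed_in_mcts(64, 64, 16, 16, 18, -6, 0, 0, 255, 255))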
[0148] It is appreciated that sample locations used in inter prediction may be saturated so that a location that would be outside the picture otherwise is saturated to point to the corresponding boundary sample of the picture. Hence, in some use cases, if a tile boundary is also a picture boundary, motion vectors may effectively cross that boundary or a motion vector may effectively cause fractional sample interpolation that would refer to a location outside that boundary, since the sample locations are saturated onto the boundary. In other use cases, specifically if a coded tile may be extracted from a bitstream where it is located on a position adjacent to a picture boundary to another bitstream where the tile is located on a position that is not adjacent to a picture boundary, encoders may constrain the motion vectors on picture boundaries similarly to any MCTS boundaries.
[0149] The temporal motion-constrained tile sets SEI (Supplemental Enhancement Information) message of HEVC can be used to indicate the presence of motion-constrained tile sets in the bitstream.
[0150] A non-VCL NAL unit may be for example one of the following types: a sequence parameter set, a picture parameter set, a supplemental enhancement information (SEI) NAL unit, an access unit delimiter, an end of sequence NAL unit, an end of bitstream NAL unit, or a filler data NAL unit. Parameter sets may be needed for the reconstruction of decoded pictures, whereas many of the other non-VCL NAL units are not necessary for the reconstruction of decoded sample values.
[0151] Some coding formats specify parameter sets that may carry parameter values needed for the decoding or reconstruction of decoded pictures. Parameters that remain unchanged through a coded video sequence may be included in a sequence parameter set (SPS). In addition to the parameters that may be needed by the decoding process, the sequence parameter set may optionally contain video usability information (VUI), which includes parameters that may be important for buffering, picture output timing, rendering, and resource reservation. A picture parameter set (PPS) contains such parameters that are likely to be unchanged in several coded pictures. A picture parameter set may include parameters that can be referred to by the coded image segments of one or more coded pictures. A header parameter set (HPS) has been proposed to contain such parameters that may change on a picture basis.
[0152] A parameter set may be activated when it is referenced e.g. through its identifier. For example, a header of an image segment, such as a slice header, may contain an identifier of the PPS that is activated for decoding the coded picture containing the image segment. A PPS may contain an identifier of the SPS that is activated, when the PPS is activated. An activation of a parameter set of a particular type may cause the deactivation of the previously active parameter set of the same type.
[0153] Instead of or in addition to parameter sets at different hierarchy levels (e.g. sequence and picture), video coding formats may include header syntax structures, such as a sequence header or a picture header. A sequence header may precede any other data of the coded video sequence in the bitstream order. A picture header may precede any coded video data for the picture in the bitstream order.
[0154] The phrase along the bitstream (e.g. indicating along the bitstream) or along a coded unit of a bitstream (e.g. indicating along a coded tile) may be used in claims and described embodiments to refer to transmission, signaling, or storage in a manner that the "out-of-band" data is associated with but not included within the bitstream or the coded unit, respectively. The phrase decoding along the bitstream or along a coded unit of a bitstream or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the bitstream or the coded unit, respectively. For example, the phrase along the bitstream may be used when the bitstream is contained in a container file, such as a file conforming to the ISO Base Media File Format, and certain file metadata is stored in the file in a manner that associates the metadata to the bitstream, such as boxes in the sample entry for a track containing the bitstream, a sample group for the track containing the bitstream, or a timed metadata track associated with the track containing the bitstream.
[0155] A coded picture is a coded representation of a picture.
[0156] A Random Access Point (RAP) picture, which may also be referred to as an intra random access point (IRAP) picture, may comprise only intra-coded image segments. Furthermore, a RAP picture may constrain subsequent pictures in output order to be such that they can be correctly decoded without performing the decoding process of any pictures that precede the RAP picture in decoding order.
[0157] An access unit may comprise coded video data for a single time instance and associated other data. In HEVC, an access unit (AU) may be defined as a set of NAL units that are associated with each other according to a specified classification rule, are consecutive in decoding order, and contain at most one picture with any specific value of nuh_layer_id. In addition to containing the VCL NAL units of the coded picture, an access unit may also contain non-VCL NAL units. Said specified classification rule may for example associate pictures with the same output time or picture output count value into the same access unit.
[0158] It may be required that coded pictures appear in a certain order within an access unit. For example, a coded picture with nuh_layer_id equal to nuhLayerIdA may be required to precede, in decoding order, all coded pictures with nuh_layer_id greater than nuhLayerIdA in the same access unit.
[0159] A bitstream may be defined as a sequence of bits, which may in some coding formats or standards be in the form of a NAL unit stream or a byte stream, that forms the representation of coded pictures and associated data forming one or more coded video sequences. A first bitstream may be followed by a second bitstream in the same logical channel, such as in the same file or in the same connection of a communication protocol. An elementary stream (in the context of video coding) may be defined as a sequence of one or more bitstreams. In some coding formats or standards, the end of the first bitstream may be indicated by a specific NAL unit, which may be referred to as the end of bitstream (EOB) NAL unit and which is the last NAL unit of the bitstream.
[0160] A coded video sequence (CVS) may be defined as such a sequence of coded pictures in decoding order that is independently decodable and is followed by another coded video sequence or the end of the bitstream.
[0161] Bitstreams or coded video sequences can be encoded to be temporally scalable as follows. Each picture may be assigned to a particular temporal sub-layer. Temporal sub-layers may be enumerated e.g. from 0 upwards. The lowest temporal sub-layer, sub-layer 0, may be decoded independently. Pictures at temporal sub-layer 1 may be predicted from reconstructed pictures at temporal sub-layers 0 and 1. Pictures at temporal sub-layer 2 may be predicted from reconstructed pictures at temporal sub-layers 0, 1, and 2, and so on. In other words, a picture at temporal sub-layer N does not use any picture at temporal sub-layer greater than N as a reference for inter prediction. The bitstream created by excluding all pictures at temporal sub-layers greater than or equal to a selected sub-layer value and including the remaining pictures remains conforming.
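To make the temporal scalability property concrete, a minimal sketch of sub-layer extraction is given below; the (picture identifier, TemporalId) tuples are an assumed representation used only for this example.

    # Illustrative sketch: temporal sub-layer extraction keeps only pictures whose
    # TemporalId does not exceed the selected target sub-layer; by the prediction
    # constraint described above, the resulting bitstream remains decodable.
    def extract_sub_layers(pictures, target_tid):
        """pictures: iterable of (picture_id, temporal_id) tuples."""
        return [(pid, tid) for (pid, tid) in pictures if tid <= target_tid]

    coded_sequence = [("I0", 0), ("B1", 2), ("B2", 1), ("B3", 2), ("P4", 0)]
    print(extract_sub_layers(coded_sequence, 1))  # keeps sub-layers 0 and 1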
[0162] A sub-layer access picture may be defined as a picture from which the decoding of a sub-layer can be started correctly, i.e. starting from which all pictures of the sub-layer can be correctly decoded. In HEVC there are two picture types, the temporal sub-layer access (TSA) and step-wise temporal sub-layer access (STSA) picture types, that can be used to indicate temporal sub-layer switching points. If temporal sub-layers with TemporalId up to N had been decoded until the TSA or STSA picture (exclusive) and the TSA or STSA picture has TemporalId equal to N+1, the TSA or STSA picture enables decoding of all subsequent pictures (in decoding order) having TemporalId equal to N+1. The TSA picture type may impose restrictions on the TSA picture itself and all pictures in the same sub-layer that follow the TSA picture in decoding order. None of these pictures is allowed to use inter prediction from any picture in the same sub-layer that precedes the TSA picture in decoding order. The TSA definition may further impose restrictions on the pictures in higher sub-layers that follow the TSA picture in decoding order. None of these pictures is allowed to refer to a picture that precedes the TSA picture in decoding order if that picture belongs to the same or a higher sub-layer as the TSA picture. TSA pictures have TemporalId greater than 0. The STSA picture is similar to the TSA picture but does not impose restrictions on the pictures in higher sub-layers that follow the STSA picture in decoding order and hence enables up-switching only onto the sub-layer where the STSA picture resides.
[0163] Available media file format standards include ISO base media file format (ISO/IEC 14496-12, which may be abbreviated ISOBMFF), MPEG-4 file format (ISO/IEC 14496-14, also known as the MP4 format), file format for NAL unit structured video (ISO/IEC 14496-15) and 3GPP file format (3GPP TS 26.244, also known as the 3GP format). The ISO file format is the base for derivation of all the above mentioned file formats (excluding the ISO file format itself). These file formats (including the ISO file format itself) are generally called the ISO family of file formats.
[0164] Some concepts, structures, and specifications of ISOBMFF are described below as an example of a container file format, based on which the embodiments may be implemented. The aspects of the invention are not limited to ISOBMFF, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
[0165] A basic building block in the ISO base media file format is called a box. Each box has a header and a payload. The box header indicates the type of the box and the size of the box in terms of bytes. A box may enclose other boxes, and the ISO file format specifies which box types are allowed within a box of a certain type. Furthermore, the presence of some boxes may be mandatory in each file, while the presence of other boxes may be optional. Additionally, for some box types, it may be allowable to have more than one box present in a file. Thus, the ISO base media file format may be considered to specify a hierarchical structure of boxes.
[0166] According to the ISO family of file formats, a file includes media data and metadata that are encapsulated into boxes. Each box is identified by a four-character code (4CC) and starts with a header which informs about the type and size of the box.
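The box structure described above can be illustrated with the following simplified sketch, which walks over top-level boxes in a byte buffer; it assumes the basic 32-bit size plus four-character type header (with the 64-bit large-size escape) and is not a complete ISOBMFF parser.

    import struct

    # Illustrative sketch: iterate over top-level ISOBMFF boxes in a byte buffer.
    # Each box header carries a 32-bit big-endian size and a four-character type;
    # size == 1 signals a 64-bit large size, size == 0 means the box extends to the end.
    def iter_boxes(data):
        offset = 0
        while offset + 8 <= len(data):
            size, box_type = struct.unpack_from(">I4s", data, offset)
            header = 8
            if size == 1:                    # 64-bit large size follows the type field
                size, = struct.unpack_from(">Q", data, offset + 8)
                header = 16
            elif size == 0:                  # box runs to the end of the buffer
                size = len(data) - offset
            yield box_type.decode("ascii"), data[offset + header:offset + size]
            offset += size

    example = struct.pack(">I4s", 16, b"ftyp") + b"isom" + struct.pack(">I", 0)
    for box_type, payload in iter_boxes(example):
        print(box_type, len(payload))        # prints: ftyp 8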
[0167] In files conforming to the ISO base media file format, the media data may be provided in a media data 'mdat' box and the movie 'moov' box may be used to enclose the metadata. In some cases, for a file to be operable, both of the 'mdat' and 'moov' boxes may be required to be present.
The movie 'moov' box may include one or more tracks, and each track may reside in one corresponding TrackBox ('trak'). A track may be one of the many types, including a media track that refers to samples formatted according to a media compression format (and its encapsulation to the ISO base media file format). A track may be regarded as a logical channel.
[0168] Movie fragments may be used e.g. when recording content to ISO files e.g. in order to avoid losing data if a recording application crashes, runs out of memory space, or some other incident occurs. Without movie fragments, data loss may occur because the file format may require that all metadata, e.g., the movie box, be written in one contiguous area of the file. Furthermore, when recording a file, there may not be sufficient amount of memory space (e.g., random access memory RAM) to buffer a movie box for the size of the storage available, and re-computing the contents of a movie box when the movie is closed may be too slow. Moreover, movie fragments may enable simultaneous recording and playback of a file using a regular ISO file parser. Furthermore, a smaller duration of initial buffering may be required for progressive downloading, e.g., simultaneous reception and playback of a file when movie fragments are used and the initial movie box is smaller compared to a file with the same media content but structured without movie fragments.
[0169] The movie fragment feature may enable splitting the metadata that otherwise might reside in the movie box into multiple pieces. Each piece may correspond to a certain period of time of a track. In other words, the movie fragment feature may enable interleaving file metadata and media data. Consequently, the size of the movie box may be limited and the use cases mentioned above may be realized.
[0170] In some examples, the media samples for the movie fragments may reside in an mdat box, if they are in the same file as the moov box. For the metadata of the movie fragments, however, a moof box may be provided. The moof box may include the information for a certain duration of playback time that would previously have been in the moov box. The moov box may still represent a valid movie on its own, but in addition, it may include an mvex box indicating that movie fragments will follow in the same file. The movie fragments may extend the presentation that is associated to the moov box in time.
[0171] Within the movie fragment there may be a set of track fragments, including anywhere from zero to a plurality per track. The track fragments may in turn include anywhere from zero to a plurality of track runs (a.k.a. track fragment runs), each of which documents a contiguous run of samples for that track. Within these structures, many fields are optional and can be defaulted. The metadata that may be included in the moof box may be limited to a subset of the metadata that may be included in a moov box and may be coded differently in some cases. Details regarding the boxes that can be included in a moof box may be found from the ISO base media file format specification. A self-contained movie fragment may be defined to consist of a moof box and an mdat box that are consecutive in the file order and where the mdat box contains the samples of the movie fragment (for which the moof box provides the metadata) and does not contain samples of any other movie fragment (i.e. any other moof box).
[0172] The track reference mechanism can be used to associate tracks with each other. The TrackReferenceBox includes box(es), each of which provides a reference from the containing track to a set of other tracks. These references are labeled through the box type (i.e. the four-character code of the box) of the contained box(es).
[0173] TrackGroupBox, which is contained in TrackBox, enables indication of groups of tracks where each group shares a particular characteristic or the tracks within a group have a particular relationship. The box contains zero or more boxes, and the particular characteristic or the relationship is indicated by the box type of the contained boxes. The contained boxes include an identifier, which can be used to conclude the tracks belonging to the same track group. The tracks that contain the same type of a contained box within the TrackGroupBox and have the same identifier value within these contained boxes belong to the same track group.
[0174] A uniform resource identifier (URI) may be defined as a string of characters used to identify a name of a resource. Such identification enables interaction with representations of the resource over a network, using specific protocols. A URI is defined through a scheme specifying a concrete syntax and associated protocol for the URI. The uniform resource locator (URL) and the uniform resource name (URN) are forms of URI. A URL may be defined as a URI that identifies a web resource and specifies the means of acting upon or obtaining the representation of the resource, specifying both its primary access mechanism and network location. A URN may be defined as a URI that identifies a resource by name in a particular namespace. A URN may be used for identifying a resource without implying its location or how to access it.
[0175] Recently, Hypertext Transfer Protocol (HTTP) has been widely used for the delivery of real-time multimedia content over the Internet, such as in video streaming applications. Unlike the use of the Real-time Transport Protocol (RTP) over the User Datagram Protocol (UDP), HTTP is easy to configure and is typically granted traversal of firewalls and network address translators (NAT), which makes it attractive for multimedia streaming applications.
[0176] Several commercial solutions for adaptive streaming over HTTP, such as Microsoft® Smooth Streaming, Apple® Adaptive HTTP Live Streaming and Adobe® Dynamic Streaming, have been launched, and standardization projects have been carried out. Adaptive HTTP streaming (AHS) was first standardized in Release 9 of 3rd Generation Partnership Project (3GPP) packet-switched streaming (PSS) service (3GPP TS 26.234 Release 9: “Transparent end-to-end packet-switched streaming service (PSS); protocols and codecs”). MPEG took 3GPP AHS Release 9 as a starting point for the MPEG DASH standard (ISO/IEC 23009-1: “Dynamic adaptive streaming over HTTP (DASH) - Part 1: Media presentation description and segment formats,” International Standard, 2nd Edition, 2014). 3GPP continued to work on adaptive HTTP streaming in communication with MPEG and published 3GP-DASH (Dynamic Adaptive Streaming over HTTP; 3GPP TS 26.247: “Transparent end-to-end packet-switched streaming Service (PSS); Progressive download and dynamic adaptive Streaming over HTTP (3GP-DASH)”). MPEG DASH and 3GP-DASH are technically close to each other and may therefore be collectively referred to as DASH. Some concepts, formats, and operations of DASH are described below as an example of a video streaming system, wherein the embodiments may be implemented. The aspects of the invention are not limited to DASH, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
[0177] In DASH, the multimedia content may be stored on an HTTP server and may be delivered using HTTP. The content may be stored on the server in two parts: Media Presentation Description (MPD), which describes a manifest of the available content, its various alternatives, their URL addresses, and other characteristics; and segments, which contain the actual multimedia bitstreams in the form of chunks, in a single file or multiple files. The MPD provides the necessary information for clients to establish dynamic adaptive streaming over HTTP. The MPD contains information describing the media presentation, such as an HTTP uniform resource locator (URL) of each Segment for making a GET Segment request. To play the content, the DASH client may obtain the MPD e.g. by using HTTP, email, thumb drive, broadcast, or other transport methods. By parsing the MPD, the DASH client may become aware of the program timing, media-content availability, media types, resolutions, minimum and maximum bandwidths, and the existence of various encoded alternatives of multimedia components, accessibility features and required digital rights management (DRM), media-component locations on the network, and other content characteristics. Using this information, the DASH client may select the appropriate encoded alternative and start streaming the content by fetching the segments using e.g. HTTP GET requests. After appropriate buffering to allow for network throughput variations, the client may continue fetching the subsequent segments and also monitor the network bandwidth fluctuations. The client may decide how to adapt to the available bandwidth by fetching segments of different alternatives (with lower or higher bitrates) to maintain an adequate buffer.
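The rate-adaptation behaviour outlined above may be sketched, in a heavily simplified and hypothetical form, as selecting the highest-bitrate alternative that fits the measured throughput; the representation records and the safety factor below are assumptions made for illustration and do not reflect any particular client implementation.

    # Illustrative sketch: pick the highest-bandwidth Representation that fits within a
    # fraction of the measured throughput, falling back to the lowest one otherwise.
    def select_representation(representations, throughput_bps, safety=0.8):
        """representations: list of dicts with 'id' and 'bandwidth' (bits per second)."""
        usable = [r for r in representations if r["bandwidth"] <= safety * throughput_bps]
        if usable:
            return max(usable, key=lambda r: r["bandwidth"])
        return min(representations, key=lambda r: r["bandwidth"])

    representations = [{"id": "360p", "bandwidth": 1_000_000},
                       {"id": "720p", "bandwidth": 3_000_000},
                       {"id": "1080p", "bandwidth": 6_000_000}]
    print(select_representation(representations, throughput_bps=4_000_000)["id"])  # 720p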
[0178] In DASH, a hierarchical data model is used to structure a media presentation as follows. A media presentation consists of a sequence of one or more Periods, each Period contains one or more Groups, each Group contains one or more Adaptation Sets, each Adaptation Set contains one or more Representations, and each Representation consists of one or more Segments. A Representation is one of the alternative choices of the media content or a subset thereof, typically differing by the encoding choice, e.g. by bitrate, resolution, language, codec, etc. A Segment contains a certain duration of media data, and metadata to decode and present the included media content. A Segment is identified by a URI and can typically be requested by an HTTP GET request. A Segment may be defined as a unit of data associated with an HTTP-URL and optionally a byte range that are specified by an MPD.
[0179] The DASH MPD complies with Extensible Markup Language (XML) and is therefore specified through elements and attributes as defined in XML.
[0180] In DASH, all descriptor elements are structured in the same way, namely they contain a @schemeIdUri attribute that provides a URI to identify the scheme and an optional attribute @value and an optional attribute @id. The semantics of the element are specific to the scheme employed. The URI identifying the scheme may be a URN or a URL.
[0181] In DASH, an independent representation may be defined as a representation that can be processed independently of any other representations. An independent representation may be understood to comprise an independent bitstream or an independent layer of a bitstream. A dependent representation may be defined as a representation for which Segments from its complementary representations are necessary for presentation and/or decoding of the contained media content components. A dependent representation may be understood to comprise e.g. a predicted layer of a scalable bitstream. A complementary representation may be defined as a representation which complements at least one dependent representation. A complementary representation may be an independent representation or a dependent representation. Dependent Representations may be described by a Representation element that contains a @dependencyId attribute. Dependent Representations can be regarded as regular Representations except that they depend on a set of complementary Representations for decoding and/or presentation. The @dependencyId contains the values of the @id attribute of all the complementary Representations, i.e. Representations that are necessary to present and/or decode the media content components contained in this dependent Representation.
[0182] Track references of ISOBMFF can be reflected in the list of four-character codes in the @associationType attribute of DASH MPD that is mapped to the list of Representation@id values given in the @associationId in a one to one manner. These attributes may be used for linking media Representations with metadata Representations.
[0183] A DASH service may be provided as an on-demand service or a live service. In the former, the MPD is static and all Segments of a Media Presentation are already available when a content provider publishes an MPD. In the latter, however, the MPD may be static or dynamic depending on the Segment URL construction method employed by an MPD, and Segments are created continuously as the content is produced and published to DASH clients by a content provider. The Segment URL construction method may be either the template-based Segment URL construction method or the Segment list generation method. In the former, a DASH client is able to construct Segment URLs without updating an MPD before requesting a Segment. In the latter, a DASH client has to periodically download the updated MPDs to get Segment URLs. For live service, hence, the template-based Segment URL construction method is superior to the Segment list generation method.
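As a rough illustration of the template-based method, the sketch below expands a SegmentTemplate-style URL pattern for a given Representation and segment number; it handles only the $RepresentationID$ and $Number$ identifiers (optionally with width formatting) and is not a complete implementation of the MPD template syntax.

    import re

    # Illustrative sketch: expanding a DASH SegmentTemplate-style media pattern so that
    # a client can construct Segment URLs without re-fetching the MPD.
    def expand_template(template, representation_id, number):
        def substitute(match):
            identifier, fmt = match.group(1), match.group(2)
            if identifier == "RepresentationID":
                return representation_id
            if identifier == "Number":
                return (fmt % number) if fmt else str(number)
            return match.group(0)            # leave unknown identifiers untouched
        return re.sub(r"\$(\w+)(%0\d+d)?\$", substitute, template)

    template = "video/$RepresentationID$/segment_$Number%05d$.m4s"
    print(expand_template(template, "720p", 42))  # video/720p/segment_00042.m4s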
[0184] An Initialization Segment may be defined as a Segment containing metadata that is necessary to present the media streams encapsulated in Media Segments. In ISOBMFF based segment formats, an Initialization Segment may comprise the Movie Box ('moov') which might not include metadata for any samples, i.e. any metadata for samples is provided in 'moof' boxes.
[0185] A Media Segment contains a certain duration of media data for playback at a normal speed; such duration is referred to as Media Segment duration or Segment duration. The content producer or service provider may select the Segment duration according to the desired characteristics of the service. For example, a relatively short Segment duration may be used in a live service to achieve a short end-to-end latency. The reason is that Segment duration is typically a lower bound on the end-to-end latency perceived by a DASH client since a Segment is a discrete unit of generating media data for DASH. Content generation is typically done in such a manner that a whole Segment of media data is made available for a server. Furthermore, many client implementations use a Segment as the unit for GET requests. Thus, in typical arrangements for live services a Segment can be requested by a DASH client only when the whole duration of the Media Segment is available as well as encoded and encapsulated into a Segment. For on-demand service, different strategies of selecting Segment duration may be used.
[0186] A Segment may be further partitioned into Subsegments e.g. to enable downloading segments in multiple parts. Subsegments may be required to contain complete access units.
Subsegments may be indexed by a Segment Index box, which contains information to map the presentation time range and byte range for each Subsegment. The Segment Index box may also describe subsegments and stream access points in the segment by signaling their durations and byte offsets. A DASH client may use the information obtained from Segment Index box(es) to make an HTTP GET request for a specific Subsegment using a byte range HTTP request. If a relatively long Segment duration is used, then Subsegments may be used to keep the size of HTTP responses reasonable and flexible for bitrate adaptation. The indexing information of a segment may be put in a single box at the beginning of that segment, or spread among many indexing boxes in the segment. Different methods of spreading are possible, such as hierarchical, daisy chain, and hybrid. This technique may avoid adding a large box at the beginning of the segment and therefore may prevent a possible initial download delay.
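The use of Segment Index information for partial requests can be illustrated with the following sketch, which maps a requested presentation time to the byte range of the covering Subsegment; the (duration, size) entries stand in for data that would be parsed from the Segment Index box and are assumed inputs for the example.

    # Illustrative sketch: locate the Subsegment covering a presentation time and
    # return the inclusive byte range that a client could request with an HTTP GET
    # carrying a Range header.
    def subsegment_byte_range(index_entries, target_time, first_offset=0):
        """index_entries: list of (duration_seconds, referenced_size_bytes) tuples."""
        time_cursor, byte_cursor = 0.0, first_offset
        for duration, size in index_entries:
            if time_cursor <= target_time < time_cursor + duration:
                return byte_cursor, byte_cursor + size - 1
            time_cursor += duration
            byte_cursor += size
        return None

    entries = [(2.0, 180_000), (2.0, 165_000), (2.0, 172_000)]
    start, end = subsegment_byte_range(entries, target_time=3.1)
    print(f"Range: bytes={start}-{end}")     # second Subsegment: bytes=180000-344999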
[0187] The notation (Sub)segment refers to either a Segment or a Subsegment. If Segment Index boxes are not present, the notation (Sub)segment refers to a Segment. If Segment Index boxes are present, the notation (Sub)segment may refer to a Segment or a Subsegment, e.g. depending on whether the client issues requests on Segment or Subsegment basis.
[0188] MPEG-DASH defines segment-container formats for both ISO Base Media File Format and MPEG-2 Transport Streams. Other specifications may specify segment formats based on other container formats. For example, a segment format based on Matroska container file format has been proposed.
[0189] DASH supports rate adaptation by dynamically requesting Media Segments from different Representations within an Adaptation Set to match varying network bandwidth. When a DASH client switches up/down between Representations, coding dependencies within a Representation have to be taken into account. A Representation switch may happen at a random access point (RAP), which is typically used in video coding techniques such as H.264/AVC. In DASH, a more general concept named Stream Access Point (SAP) is introduced to provide a codec-independent solution for accessing a Representation and switching between Representations. In DASH, a SAP is specified as a position in a Representation that enables playback of a media stream to be started using only the information contained in Representation data starting from that position onwards (preceded by initialising data in the Initialisation Segment, if any). Hence, Representation switching can be performed at a SAP.
[0190] In DASH, the automated selection between Representations in the same Adaptation Set has been performed based on the width and height (@width and @height); the frame rate (@frameRate); the bitrate (@bandwidth); and the indicated quality ordering between the Representations (@qualityRanking). The semantics of @qualityRanking are specified as follows: it specifies a quality ranking of the Representation relative to other Representations in the same Adaptation Set. Lower values represent higher quality content. If not present, then no ranking is defined.
[0191] Several types of SAP have been specified, including the following. SAP Type 1 corresponds to what is known in some coding schemes as a “Closed GOP random access point” (in which all pictures, in decoding order, can be correctly decoded, resulting in a continuous time sequence of correctly decoded pictures with no gaps) and in addition the first picture in decoding order is also the first picture in presentation order. SAP Type 2 corresponds to what is known in some coding schemes as a “Closed GOP random access point” (in which all pictures, in decoding order, can be correctly decoded, resulting in a continuous time sequence of correctly decoded pictures with no gaps), for which the first picture in decoding order may not be the first picture in presentation order. SAP Type 3 corresponds to what is known in some coding schemes as an “Open GOP random access point”, in which there may be some pictures in decoding order that cannot be correctly decoded and have presentation times less than that of the intra-coded picture associated with the SAP.
[0192] In some video coding standards, such as MPEG-2, each intra picture has been a random access point in a coded sequence. The capability of flexible use of multiple reference pictures for inter prediction in some video coding standards, such as H.264/AVC and H.265/HEVC, has a consequence that an intra picture may not be sufficient for random access. Therefore, pictures may be marked with respect to their random access point functionality rather than inferring such functionality from the coding type; for example an IDR picture as specified in the H.264/AVC standard can be used as a random access point. A closed group of pictures (GOP) is such a group of pictures in which all pictures can be correctly decoded. For example, in H.264/AVC, a closed GOP may start from an IDR access unit.
[0193] An open group of pictures (GOP) is such a group of pictures in which pictures preceding the initial intra picture in output order may not be correctly decodable but pictures following the initial intra picture in output order are correctly decodable. Such an initial intra picture may be indicated in the bitstream and/or concluded from the indications from the bitstream, e.g. by using the CRA NAL unit type in HEVC. The pictures preceding the initial intra picture starting an open GOP in output order and following the initial intra picture in decoding order may be referred to as leading pictures. There are two types of leading pictures: decodable and non-decodable. Decodable leading pictures, such as RADL pictures of HEVC, are such that can be correctly decoded when the decoding is started from the initial intra picture starting the open GOP. In other words, decodable leading pictures use only the initial intra picture or subsequent pictures in decoding order as reference in inter prediction. Non-decodable leading pictures, such as RASL pictures of HEVC, are such that cannot be correctly decoded when the decoding is started from the initial intra picture starting the open GOP.
[0194] A DASH Preselection defines a subset of media components of an MPD that are expected to be consumed jointly by a single decoder instance, wherein consuming may comprise decoding and rendering. The Adaptation Set that contains the main media component for a Preselection is referred to as main Adaptation Set. In addition, each Preselection may include one or multiple partial Adaptation Sets. Partial Adaptation Sets may need to be processed in combination with the main Adaptation Set.
A main Adaptation Set and partial Adaptation Sets may be indicated by one of the two means: a preselection descriptor or a Preselection element.
[0195] Virtual reality is a rapidly developing area of technology in which image or video content, sometimes accompanied by audio, is provided to a user device such as a user headset (a.k.a. head-mounted display). As is known, the user device may be provided with a live or stored feed from a content source, the feed representing a virtual space for immersive output through the user device. Currently, many virtual reality user devices use so-called three degrees of freedom (3DoF), which means that the head movement in the yaw, pitch and roll axes is measured and determines what the user sees, i.e. determines the viewport. It is known that rendering by taking the position of the user device and changes of the position into account can enhance the immersive experience. Thus, an enhancement to 3DoF is a six degrees-of-freedom (6DoF) virtual reality system, where the user may freely move in Euclidean space as well as rotate their head in the yaw, pitch and roll axes. Six degrees-of-freedom virtual reality systems enable the provision and consumption of volumetric content. Volumetric content comprises data representing spaces and/or objects in three dimensions from all angles, enabling the user to move fully around the space and/or objects to view them from any angle. Such content may be defined by data describing the geometry (e.g. shape, size, position in a three-dimensional space) and attributes such as colour, opacity and reflectance. The data may also define temporal changes in the geometry and attributes at given time instances, similar to frames in two-dimensional video.
[0196] Terms 360-degree video or virtual reality (VR) video may sometimes be used
interchangeably. They may generally refer to video content that provides such a large field of view (FOV) that only a part of the video is displayed at a single point of time in displaying arrangements. For example, VR video may be viewed on a head-mounted display (HMD) that may be capable of displaying e.g. about 100-degree field of view. The spatial subset of the VR video content to be displayed may be selected based on the orientation of the HMD. In another example, a flat-panel viewing environment is assumed, wherein e.g. up to 40-degree field-of-view may be displayed. When displaying wide-FOV content (e.g. fisheye) on such a display, it may be preferred to display a spatial subset rather than the entire picture.
[0197] MPEG Omnidirectional Media Format (ISO/IEC 23090-2) is a virtual reality (VR) system standard. OMAF defines a media format (comprising both file format derived from ISOBMFF and streaming formats for DASH and MPEG Media Transport). OMAF version 1 supports 360° video, images, and audio, as well as the associated timed text and facilitates three degrees of freedom (3DoF) content consumption, meaning that a viewport can be selected with any azimuth and elevation range and tilt angle that are covered by the omnidirectional content but the content is not adapted to any translational changes of the viewing position. The viewport-dependent streaming scenarios described further below have also been designed for 3DoF although could potentially be adapted to a different number of degrees of freedom.
[0198] OMAF is discussed with reference to Figure 1. A real-world audio-visual scene (A) may be captured by audio sensors as well as a set of cameras or a camera device with multiple lenses and sensors. The acquisition results in a set of digital image/video (Bi) and audio (Ba) signals. The cameras/lenses may cover all directions around the center point of the camera set or camera device, thus the name of 360-degree video.
[0199] Audio can be captured using many different microphone configurations and stored as several different content formats, including channel-based signals, static or dynamic (i.e. moving through the 3D scene) object signals, and scene-based signals (e.g., Higher Order Ambisonics). The channel-based signals may conform to one of the loudspeaker layouts defined in CICP (Coding-Independent Code-Points). In an omnidirectional media application, the loudspeaker layout signals of the rendered immersive audio program may be binauralized for presentation via headphones.
[0200] The images (Bi) of the same time instance are stitched, projected, and mapped onto a packed picture (D).
[0201] For monoscopic 360-degree video, the input images of one time instance may be stitched to generate a projected picture representing one view. An example of the image stitching, projection, and region-wise packing process for monoscopic content is illustrated with Figure 2. Input images (Bi) are stitched and projected onto a three-dimensional projection structure that may for example be a unit sphere. The projection structure may be considered to comprise one or more surfaces, such as plane(s) or part(s) thereof. A projection structure may be defined as a three-dimensional structure consisting of one or more surface(s) on which the captured VR image/video content is projected, and from which a respective projected picture can be formed. The image data on the projection structure is further arranged onto a two-dimensional projected picture (C). The term projection may be defined as a process by which a set of input images are projected onto a projected picture. There may be a pre-defined set of representation formats of the projected picture, including for example an equirectangular projection (ERP) format and a cube map projection (CMP) format. It may be considered that the projected picture covers the entire sphere.
[0202] Optionally, a region-wise packing is then applied to map the projected picture (C) onto a packed picture (D). If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding. Otherwise, regions of the projected picture (C) are mapped onto a packed picture (D) by indicating the location, shape, and size of each region in the packed picture, and the packed picture (D) is given as input to image/video encoding. The term region-wise packing may be defined as a process by which a projected picture is mapped to a packed picture. The term packed picture may be defined as a picture that results from region-wise packing of a projected picture.
[0203] In the case of stereoscopic 360-degree video, as shown in an example of Figure 3, the input images of one time instance are stitched to generate a projected picture representing two views (CL, CR), one for each eye. Both views (CL, CR) can be mapped onto the same packed picture (D), and encoded by a traditional 2D video encoder. Alternatively, each view of the projected picture can be mapped to its own packed picture, in which case the image stitching, projection, and region-wise packing is performed as illustrated in Figure 2. A sequence of packed pictures of either the left view or the right view can be independently coded or, when using a multiview video encoder, predicted from the other view.
[0204] An example of the image stitching, projection, and region-wise packing process for stereoscopic content where both views are mapped onto the same packed picture, as shown in Figure 3, is described next in a more detailed manner. Input images (Bi) are stitched and projected onto two three-dimensional projection structures, one for each eye. The image data on each projection structure is further arranged onto a two-dimensional projected picture (CL for left eye, CR for right eye), which covers the entire sphere. Frame packing is applied to pack the left view picture and right view picture onto the same projected picture. Optionally, region-wise packing is then applied to pack the projected picture onto a packed picture, and the packed picture (D) is given as input to image/video encoding. If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding.
[0205] The image stitching, projection, and region-wise packing process can be carried out multiple times for the same source images to create different versions of the same content, e.g. for different orientations of the projection structure. Similarly, the region-wise packing process can be performed multiple times from the same projected picture to create more than one sequence of packed pictures to be encoded.
[0206] 360-degree panoramic content (i.e., images and video) covers horizontally the full 360-degree field-of-view around the capturing position of an imaging device. The vertical field-of-view may vary and can be e.g. 180 degrees. A panoramic image covering a 360-degree field-of-view horizontally and a 180-degree field-of-view vertically can be represented by a sphere that has been mapped to a two-dimensional image plane using equirectangular projection (ERP). In this case, the horizontal coordinate may be considered equivalent to a longitude, and the vertical coordinate may be considered equivalent to a latitude, with no transformation or scaling applied. The process of forming a monoscopic equirectangular panorama picture is illustrated in Figure 4. A set of input images, such as fisheye images of a camera array or a camera device with multiple lenses and sensors, is stitched onto a spherical image. The spherical image is further projected onto a cylinder (without the top and bottom faces). The cylinder is unfolded to form a two-dimensional projected picture. In practice one or more of the presented steps may be merged; for example, the input images may be directly projected onto a cylinder without an intermediate projection onto a sphere. The projection structure for equirectangular panorama may be considered to be a cylinder that comprises a single surface.
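To make the equirectangular mapping concrete, the following sketch converts a direction on the unit sphere into ERP picture coordinates under the common convention that the horizontal axis spans longitude and the vertical axis spans latitude; the exact sample-position conventions of a given specification may differ slightly.

    import math

    # Illustrative sketch: map (longitude, latitude) in radians to pixel coordinates of
    # an equirectangular (ERP) picture of size width x height, treating longitude as the
    # horizontal coordinate and latitude as the vertical coordinate, as described above.
    def sphere_to_erp(longitude, latitude, width, height):
        u = (longitude / (2.0 * math.pi) + 0.5) * width   # -pi..pi  -> 0..width
        v = (0.5 - latitude / math.pi) * height           # +pi/2 (top) .. -pi/2 -> 0..height
        return u, v

    print(sphere_to_erp(0.0, 0.0, 4096, 2048))                  # picture centre: (2048.0, 1024.0)
    print(sphere_to_erp(math.pi / 2, math.pi / 4, 4096, 2048))  # (3072.0, 512.0)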
[0207] In general, 360-degree content can be mapped onto different types of solid geometrical structures, such as a polyhedron (i.e. a three-dimensional solid object containing flat polygonal faces, straight edges and sharp corners or vertices, e.g., a cube or a pyramid), a cylinder (by projecting a spherical image onto the cylinder, as described above with the equirectangular projection), a cylinder (directly without projecting onto a sphere first), a cone, etc. and then unwrapped to a two-dimensional image plane.
[0208] In some cases panoramic content with a 360-degree horizontal field-of-view but with less than a 180-degree vertical field-of-view may be considered a special case of equirectangular projection, where the polar areas of the sphere have not been mapped onto the two-dimensional image plane. In some cases a panoramic image may have less than a 360-degree horizontal field-of-view and up to a 180-degree vertical field-of-view, while otherwise having the characteristics of the equirectangular projection format.
[0209] Region-wise packing information may be encoded as metadata in or along the bitstream. For example, the packing information may comprise a region-wise mapping from a pre-defined or indicated source format to the packed picture format, e.g. from a projected picture to a packed picture, as described earlier.
[0210] Rectangular region-wise packing metadata may be described as follows:
[0211] For each region, the metadata defines a rectangle in a projected picture, the respective rectangle in the packed picture, and an optional transformation of rotation by 90, 180, or 270 degrees and/or horizontal and/or vertical mirroring. Rectangles may, for example, be indicated by the locations of the top-left corner and the bottom-right corner. The mapping may comprise resampling. As the sizes of the respective rectangles can differ in the projected and packed pictures, the mechanism infers region-wise resampling.
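A simplified reading of this metadata is sketched below: a sample location inside a packed region is mapped back to the corresponding location in the projected region by scaling between the two rectangles, which also illustrates the implied region-wise resampling; the rotation and mirroring transforms are omitted for brevity.

    # Illustrative sketch: map a sample location in a packed region back to the
    # corresponding location in the projected region using the two rectangles from the
    # region-wise packing metadata. Rectangles are (left, top, width, height); rotation
    # and mirroring are intentionally left out of this simplified example.
    def packed_to_projected(x, y, packed_rect, projected_rect):
        pl, pt, pw, ph = packed_rect
        jl, jt, jw, jh = projected_rect
        scale_x, scale_y = jw / pw, jh / ph               # implies region-wise resampling
        return jl + (x - pl) * scale_x, jt + (y - pt) * scale_y

    # A projected 1024x256 region packed (horizontally down-sampled) into 512x256:
    print(packed_to_projected(100, 50, (0, 0, 512, 256), (0, 0, 1024, 256)))  # (200.0, 50.0)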
[0212] Among other things, region-wise packing provides signalling for the following usage scenarios:
1) Additional compression for viewport-independent projections is achieved by densifying sampling of different regions to achieve more uniformity across the sphere. For example, the top and bottom parts of ERP are oversampled, and region-wise packing can be applied to down-sample them horizontally.
2) Arranging the faces of plane-based projection formats, such as cube map projection, in an adaptive manner.
3) Generating viewport-dependent bitstreams that use viewport-independent projection formats.
For example, regions of ERP or faces of CMP can have different sampling densities and the underlying projection structure can have different orientations.
4) Indicating regions of the packed pictures represented by an extractor track. This is needed when an extractor track collects tiles from bitstreams of different resolutions.
[0213] A guard band may be defined as an area in a packed picture that is not rendered but may be used to improve the rendered part of the packed picture to avoid or mitigate visual artifacts such as seams.
[0214] Referring again to Figure 1, the OMAF allows the omission of image stitching, projection, and region-wise packing and encoding the image/video data in their captured format. In this case, images (D) are considered the same as images (Bi) and a limited number of fisheye images per time instance are encoded.
[0215] For audio, the stitching process is not needed, since the captured signals are inherently immersive and omnidirectional.
[0216] The stitched images (D) are encoded as coded images (Ei) or a coded video bitstream (Ev). The captured audio (Ba) is encoded as an audio bitstream (Ea). The coded images, video, and/or audio are then composed into a media file for file playback (F) or a sequence of an initialization segment and media segments for streaming (Fs), according to a particular media container file format. In this specification, the media container file format is the ISO base media file format. The file encapsulator also includes metadata into the file or the segments, such as projection and region-wise packing information assisting in rendering the decoded packed pictures.
[0217] The metadata in the file may include:
- the projection format of the projected picture,
- fisheye video parameters,
- the area of the spherical surface covered by the packed picture,
- the orientation of the projection structure corresponding to the projected picture relative to the global coordinate axes,
- region-wise packing information, and
- region-wise quality ranking (optional).
[0218] Region-wise packing information may be encoded as metadata in or along the bitstream, for example as region-wise packing SEI message(s) and/or as region-wise packing boxes in a file containing the bitstream. For example, the packing information may comprise a region-wise mapping from a pre-defined or indicated source format to the packed picture format, e.g. from a projected picture to a packed picture, as described earlier. The region-wise mapping information may for example comprise for each mapped region a source rectangle (a.k.a. projected region) in the projected picture and a destination rectangle (a.k.a. packed region) in the packed picture, where samples within the source rectangle are mapped to the destination rectangle and rectangles may for example be indicated by the locations of the top-left corner and the bottom-right corner. The mapping may comprise resampling. Additionally or alternatively, the packing information may comprise one or more of the following: the orientation of the three-dimensional projection structure relative to a coordinate system, an indication of which projection format is used, region-wise quality ranking indicating the picture quality ranking between regions and/or first and second spatial region sequences, one or more transformation operations, such as rotation by 90, 180, or 270 degrees, horizontal mirroring, and vertical mirroring. The semantics of the packing information may be specified in a manner that they are indicative, for each sample location within packed regions of a decoded picture, of the respective spherical coordinate location.
[0219] The segments (Fs) may be delivered using a delivery mechanism to a player.
[0220] The file that the file encapsulator outputs (F) is identical to the file that the file decapsulator inputs (F'). A file decapsulator processes the file (F') or the received segments (F's) and extracts the coded bitstreams (E'a, E'v, and/or E'i) and parses the metadata. The audio, video, and/or images are then decoded into decoded signals (B'a for audio, and D' for images/video). The decoded packed pictures (D') are projected onto the screen of a head-mounted display or any other display device based on the current viewing orientation or viewport and the projection, spherical coverage, projection structure orientation, and region-wise packing metadata parsed from the file. Likewise, decoded audio (B'a) is rendered, e.g. through headphones, according to the current viewing orientation. The current viewing orientation is determined by the head tracking and possibly also eye tracking functionality. Besides being used by the renderer to render the appropriate part of decoded video and audio signals, the current viewing orientation may also be used by the video and audio decoders for decoding optimization.
[0221] The process described above is applicable to both live and on-demand use cases.
[0222] At any point of time, a video rendered by an application on a HMD or on another display device renders a portion of the 360-degree video. This portion may be defined as a viewport. A viewport may be understood as a window on the 360-degree world represented in the omnidirectional video displayed via a rendering display. According to another definition, a viewport may be defined as a part of the spherical video that is currently displayed. A viewport may be characterized by horizontal and vertical field of views (FOV or FoV).
[0223] A viewpoint may be defined as the point or space from which the user views the scene; it usually corresponds to a camera position. Slight head motion does not imply a different viewpoint. A viewing position may be defined as the position within a viewing space from which the user views the scene. A viewing space may be defined as a 3D space of viewing positions within which rendering of image and video is enabled and VR experience is valid.
[0224] Typical representation formats for volumetric content include triangle meshes, point clouds and voxels. Temporal information about the content may comprise individual capture instances, i.e. frames or the position of objects as a function of time.
[0225] Advances in computational resources and in three-dimensional acquisition devices enable reconstruction of highly-detailed volumetric representations. Infrared, laser, time-of-flight and structured light technologies are examples of how such content may be constructed. The representation of volumetric content may depend on how the data is to be used. For example, dense voxel arrays may be used to represent volumetric medical images. In three-dimensional graphics, polygon meshes are extensively used. Point clouds, on the other hand, are well suited to applications such as capturing real-world scenes where the topology of the scene is not necessarily a two-dimensional surface or manifold. Another method is to code three-dimensional data to a set of texture and depth maps. Closely related to this is the use of elevation and multi-level surface maps. For the avoidance of doubt, embodiments herein are applicable to any of the above technologies.
[0226] "Voxel" of a three-dimensional world corresponds to a pixel of a two-dimensional world. Voxels exist in a three-dimensional grid layout. An octree is a tree data structure used to partition a three-dimensional space. Octrees are the three-dimensional analog of quadtrees. A sparse voxel octree (SVO) describes a volume of a space containing a set of solid voxels of varying sizes. Empty areas within the volume are absent from the tree, which is why it is called "sparse".
[0227] A three-dimensional volumetric representation of a scene may be determined as a plurality of voxels on the basis of input streams of at least one multicamera device. Thus, at least one but preferably a plurality (i.e. 2, 3, 4, 5 or more) of multicamera devices may be used to capture 3D video representation of a scene. The multicamera devices are distributed in different locations with respect to the scene, and therefore each multicamera device captures a different 3D video representation of the scene. The 3D video representations captured by each multicamera device may be used as input streams for creating a 3D volumetric representation of the scene, said 3D volumetric representation comprising a plurality of voxels. Voxels may be formed from the captured 3D points e.g. by merging the 3D points into voxels comprising a plurality of 3D points such that for a selected 3D point, all neighbouring 3D points within a predefined threshold from the selected 3D point are merged into a voxel without exceeding a maximum number of 3D points in a voxel.
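The point-merging step described above can be illustrated with a simple grid-based quantisation; the fixed voxel edge length (acting as the predefined threshold), the per-voxel point limit and the centroid representation are assumptions made purely for this example.

    from collections import defaultdict

    # Illustrative sketch: merge captured 3D points into voxels on a regular grid.
    # Points falling into the same grid cell are merged, up to a maximum number of
    # points per voxel, and each voxel is represented by the centroid of its points.
    def voxelize(points, voxel_size, max_points_per_voxel=64):
        voxels = defaultdict(list)
        for x, y, z in points:
            key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
            if len(voxels[key]) < max_points_per_voxel:
                voxels[key].append((x, y, z))
        return {k: tuple(sum(c) / len(v) for c in zip(*v)) for k, v in voxels.items()}

    cloud = [(0.1, 0.2, 0.0), (0.15, 0.22, 0.05), (1.4, 0.1, 0.3)]
    print(voxelize(cloud, voxel_size=0.5))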
[0228] Voxels may also be formed through the construction of the sparse voxel octree. Each leaf of such a tree represents a solid voxel in world space; the root node of the tree represents the bounds of the world. The sparse voxel octree construction may have the following steps: 1) map each input depth map to a world space point cloud, where each pixel of the depth map is mapped to one or more 3D points; 2) determine voxel attributes such as colour and surface normal vector by examining the neighbourhood of the source pixel(s) in the camera images and the depth map; 3) determine the size of the voxel based on the depth value from the depth map and the resolution of the depth map; 4) determine the SVO level for the solid voxel as a function of its size relative to the world bounds; 5) determine the voxel coordinates on that level relative to the world bounds; 6) create new and/or traverse existing SVO nodes until arriving at the determined voxel coordinates; 7) insert the solid voxel as a leaf of the tree, possibly replacing or merging attributes from a previously existing voxel at those coordinates. Nevertheless, the sizes of voxels within the 3D volumetric representation of the scene may differ from each other. The voxels of the 3D volumetric representation thus represent the spatial locations within the scene.
[0229] A volumetric video frame may be regarded as a complete sparse voxel octree that models the world at a specific point in time in a video sequence. Voxel attributes contain information like colour, opacity, surface normal vectors, and surface material properties. These are referenced in the sparse voxel octrees (e.g. colour of a solid voxel), but can also be stored separately.
[0230] Point clouds are commonly used data structures for storing volumetric content. Compared to point clouds, sparse voxel octrees describe a recursive subdivision of a finite volume with solid voxels of varying sizes, while point clouds describe an unorganized set of separate points limited only by the precision of the used coordinate values.
[0231] In technologies such as dense point clouds and voxel arrays, there may be tens or even hundreds of millions of points. In order to store and transport such content between entities, such as between a server and a client over an IP network, compression is usually required.
[0232] User's position can be detected relative to content provided within the volumetric virtual reality content, e.g. so that the user can move freely within a given virtual reality space, around individual objects or groups of objects, and can view the objects from different angles depending on the movement (e.g. rotation and location) of their head in the real world. In some examples, the user may also view and explore a plurality of different virtual reality spaces and move from one virtual reality space to another one.
[0233] The angular extent of the environment observable or hearable through a rendering arrangement, such as with a head-mounted display, may be called the visual field of view (FOV). The actual FOV observed or heard by a user depends on the inter-pupillary distance and on the distance between the lenses of the virtual reality headset and the user's eyes, but the FOV can be considered to be approximately the same for all users of a given display device when the virtual reality headset is being worn by the user.
[0234] When viewing volumetric content from a single viewing position, a portion (often half) of the content may not be seen because it is facing away from the user. This portion is sometimes called "back facing content".
[0235] A volumetric image/video delivery system may comprise providing a plurality of patches representing part of a volumetric scene, and providing, for each patch, patch visibility information indicative of a set of directions from which a forward surface of the patch is visible. A volumetric image/video delivery system may further comprise providing one or more viewing positions associated with a client device, and processing one or more of the patches dependent on whether the patch visibility information indicates that the forward surface of the one or more patches is visible from the one or more viewing positions.
[0236] Patch visibility information is data indicative of where in the volumetric space the forward surface of the patch can be seen. For example, patch visibility information may comprise a visibility cone, which may comprise a visibility cone direction vector (X, Y, Z) and an opening angle (A). The opening angle (A) defines a set of spatial angles from which the forward surface of the patch can be seen. In another example, the patch visibility metadata may comprise a definition of a bounding sphere surface and sphere region metadata, identical or similar to that specified by the omnidirectional media format (OMAF) standard (ISO/IEC 23090-2). The bounding sphere surface may for example be defined by a three-dimensional location of the centre of the sphere, and the radius of the sphere. When the viewing position collocates with the bounding sphere surface, the patch may be considered visible within the indicated sphere region. In general, the geometry of the bounding surface may also be something other than a sphere, such as cylinder, cube, or cuboid. Multiple sets of patch visibility metadata may be defined for the same three-dimensional location of the centre of the bounding surface, but with different radii (or information indicative of the distance of the bounding surface from the three-dimensional location). Indicating several pieces of patch visibility metadata may be beneficial to handle occlusions.
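As a non-normative sketch of how the visibility cone form of patch visibility information could be evaluated at a client, the following Python function tests whether the forward surface of a patch may be visible from a viewing position; the assumption that the cone is anchored at a patch position, and all parameter names, are illustrative only.

import math

def patch_possibly_visible(cone_direction, opening_angle, patch_position, viewing_position):
    # Vector from the patch towards the viewing position.
    v = [viewing_position[i] - patch_position[i] for i in range(3)]
    v_norm = math.sqrt(sum(c * c for c in v))
    d_norm = math.sqrt(sum(c * c for c in cone_direction))
    if v_norm == 0.0 or d_norm == 0.0:
        return True  # degenerate case: treat as visible
    # Angle between the visibility cone direction vector (X, Y, Z) and the view vector.
    cos_angle = sum(cone_direction[i] * v[i] for i in range(3)) / (v_norm * d_norm)
    angle = math.acos(max(-1.0, min(1.0, cos_angle)))
    # The forward surface is considered potentially visible when the viewing
    # position lies within the opening angle (A) of the cone.
    return angle <= opening_angle

A patch culling module, as described next, could for example drop patches for which such a test fails for the current and expected viewing positions.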
[0237] A volumetric image/video delivery system may comprise one or more patch culling modules. One patch culling module may be configured to determine which patches are transmitted to a user device, for example the rendering module of the headset. Another patch culling module may be configured to determine which patches are decoded. A third patch culling module may be configured to determine which decoded patches are passed to rendering. Any combination of patch culling modules may be present or active in a volumetric image/video delivery or playback system. Patch culling may utilize the patch visibility information of patches, the current viewing position, the current viewing orientation, the expected future viewing positions, and/or the expected future viewing orientations. [0238] In some cases, each volumetric patch may be projected to a two-dimensional colour (or other form of texture) image and to a corresponding depth image, also known as a depth map. This conversion enables each patch to be converted back to volumetric form at a client rendering module of the headset using both images.
[0239] In some cases, a source volume of a volumetric image, such as a point cloud frame, may be projected onto one or more projection surfaces. Patches on the projection surfaces may be determined, and those patches may be arranged onto one or more two-dimensional frames. As above, texture and depth patches may be formed similarly; such processing amounts to a projection of a source volume to a projection surface, possibly followed by inpainting of a sparse projection. In other words, a three-dimensional (3D) scene model, comprising geometry primitives such as mesh elements, points, and/or voxels, is projected onto one or more projection surfaces. These projection surface geometries may be "unfolded" onto 2D planes (typically two planes per projected source volume: one for texture, one for depth). The "unfolding" may include determination of patches. 2D planes may then be encoded using standard 2D image or video compression technologies. Relevant projection geometry information may be transmitted alongside the encoded video files to the decoder. The decoder may then decode the coded image/video sequence and perform the inverse projection to regenerate the 3D scene model object in any desired representation format, which may be different from the starting format, e.g. reconstructing a point cloud from original mesh model data.
[0240] In some cases, multiple points of volumetric video or image (e.g. point cloud) are projected to the same pixel position. Such cases may be handled by creating more than one "layer". It is remarked that the concept of layer in volumetric video, such as point cloud compression, may differ from the concept of layer in scalable video coding. Thus, terms such as PCC layer or volumetric video layer may be used to make a distinction from a layer of scalable video coding. Each volumetric (3D) patch may be projected onto more than one 2D patch, representing different layers of visual data, such as points, projected onto the same 2D positions. The patches may be organized for example based on ascending distance to the projection plane. More precisely, the following example process may be used to create two layers but could be generalized to another number of layers too: Let H(u,v) be the set of points of the current patch that get projected to the same pixel (u, v). The first layer, also called the near layer, stores the point of H(u,v) with the lowest depth D0. The second layer, referred to as the far layer, captures the point of H(u,v) with the highest depth within the interval [D0, D0 + Δ], where Δ is a user-defined parameter that describes the surface thickness.
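As an illustration of the two-layer process above, the following Python sketch selects the near-layer and far-layer depths for one pixel; the depth values are assumed to be already available per projected point, and the names are illustrative only.

def two_layer_depths(projected_depths, surface_thickness):
    # projected_depths: depths of the points H(u,v) of the current patch that
    # get projected to the same pixel (u, v).
    d0 = min(projected_depths)                       # near layer: lowest depth D0
    in_interval = [d for d in projected_depths if d <= d0 + surface_thickness]
    d1 = max(in_interval)                            # far layer: highest depth in [D0, D0 + delta]
    return d0, d1

# Example: two_layer_depths([3.0, 3.2, 3.9, 7.5], surface_thickness=1.0) returns (3.0, 3.9).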
[0241 ] It should be understood that volumetric image/video can comprise, additionally or alternatively to texture and depth, other types of patches, such as reflectance, opacity or transparency (e.g. alpha channel patches), surface normal, albedo, and/or other material or surface attribute patches.
[0242] Patches in their two-dimensional form may be packed into one or more atlases. Texture atlases are known in the art, comprising an image consisting of sub-images, the image being treated as a single unit by graphics hardware and which can be compressed and transmitted as a single image for subsequent identification and decompression. Geometry atlases may be constructed similarly to texture atlases. Texture and geometry atlases may be treated as separate pictures (and as separate picture sequences in case of volumetric video), or texture and geometry atlases may be packed onto the same frame, e.g. similarly to how frame packing is conventionally performed. Atlases may be encoded as frames with an image or video encoder.
[0243] The sub-image layout in an atlas may also be organized such that it is possible to encode a patch or a set of patches having similar visibility information into spatiotemporal units that can be decoded independently of other spatiotemporal units. For example, a tile grid, as understood in the context of High Efficiency Video Coding (HEVC), may be selected for encoding and an atlas may be organized in a manner such that a patch or a group of patches having similar visibility information can be encoded as a motion-constrained tile set (MCTS).
[0244] In some cases, one or more (but not the entire set of) spatiotemporal units may be provided and stored as a track, as is understood in the context of the ISO base media file format, or as any similar container file format structure. Such a track may be referred to as a patch track. Patch tracks may for example be sub-picture tracks, as understood in the context of OMAF, or tile tracks, as understood in the context of ISO/IEC 14496-15.
[0245] In some cases, several versions of the one or more atlases are encoded. Different versions may include, but are not limited to, one or more of the following: different bitrate versions of the one or more atlases at the same resolution; different spatial resolutions of the atlases; and different versions for different random access intervals; these may include one or more intra-coded atlases (where every picture can be randomly accessed).
[0246] In some cases, combinations of patches from different versions of the texture atlas may be prescribed and described as metadata, such as extractor tracks, as will be understood in the context of OMAF and/or ISO/IEC 14496-15.
[0247] When the total sample count of a texture atlas and, in some cases, of the respective geometry pictures and/or other auxiliary pictures (if any) exceeds a limit, such as a level limit of a video codec, a prescription may be authored in a manner so that the limit is obeyed. For example, patches may be selected from a lower-resolution texture atlas according to subjective importance. The selection may be performed in a manner that is not related to the viewing position. The prescription may be accompanied by metadata characterizing the obeyed limit(s), e.g. the codec Level that is obeyed.
[0248] A prescription may be made specific to a visibility cone (or generally to a specific visibility) and hence exclude the patches not visible in the visibility cone. The selection of visibility cones for which the prescriptions are generated may be limited to a reasonable number, such that switching from one prescription to another is not expected to occur frequently. The visibility cones of prescriptions may overlap to avoid switching back and forth between two prescriptions. The prescription may be accompanied by metadata indicative of the visibility cone (or generally visibility information).
[0249 ] A prescription may use a specific grid or pattern of independent spatiotemporal units. For example, a prescription may use a certain tile grid, wherein tile boundaries are also MCTS boundaries. The prescription may be accompanied by metadata indicating potential sources (e.g. track groups, tracks, or representations) that are suitable as spatiotemporal units.
[0250] In some cases, a patch track forms a Representation in the context of DASH. Consequently, the Representation element in DASH MPD may provide metadata on the patch, such as patch visibility metadata, related to the patch track. Clients may select patch Representations and request
(Sub)segments from the selected Representations on the basis of patch visibility metadata.
[0251] A collector track may be defined as a track that extracts implicitly or explicitly coded video data, such as coded video data of MCTSs or sub-pictures, from other tracks. When resolved by a file reader or alike, a collector track may result in a bitstream that conforms to a video coding standard or format. A collector track may for example extract MCTSs or sub-pictures to form a coded picture sequence where MCTSs or sub-pictures are arranged to a grid. For example, when a collector track extracts two MCTSs or sub-pictures, they may be arranged into a 2x1 grid of MCTSs or sub-pictures. As discussed subsequently, an extractor track that extracts MCTSs or sub-pictures from other tracks may be regarded as a collector track. A tile base track as discussed subsequently is another example of a collector track. A collector track may also be called a collection track. A track that is a source for extracting to a collector track may be referred to as a collection item track.
[0252] Extractors specified in ISO/IEC 14496-15 for H.264/AVC and HEVC enable compact formation of tracks that extract NAL unit data by reference. An extractor is a NAL-unit-like structure. A NAL-unit-like structure may be specified to comprise a NAL unit header and NAL unit payload like any NAL units, but start code emulation prevention (that is required for a NAL unit) might not be followed in a NAL-unit-like structure. For HEVC, an extractor contains one or more constructors. A sample constructor extracts, by reference, NAL unit data from a sample of another track. An in-line constructor includes NAL unit data. The term in-line may be defined e.g. in relation to a data unit to indicate that a containing syntax structure contains or carries the data unit (as opposed to includes the data unit by reference or through a data pointer). When an extractor is processed by a file reader that requires it, the extractor is logically replaced by the bytes resulting when resolving the contained constructors in their appearance order. Nested extraction may be disallowed, e.g. the bytes referred to by a sample constructor shall not contain extractors; an extractor shall not reference, directly or indirectly, another extractor. An extractor may contain one or more constructors for extracting data from the current track or from another track that is linked to the track in which the extractor resides by means of a track reference of type 'scal'. The bytes of a resolved extractor may represent one or more entire NAL units. A resolved extractor starts with a valid length field and a NAL unit header. The bytes of a sample constructor are copied only from the single identified sample in the track referenced through the indicated 'scal' track reference. The alignment is on decoding time, i.e. using the time-to-sample table only, followed by a counted offset in sample number. Extractors are a media-level concept and hence apply to the destination track before any edit list is considered. (However, one would normally expect that the edit lists in the two tracks would be identical).
[0253] In viewport-dependent streaming, which may be also referred to as viewport-adaptive streaming (VAS) or viewport-specific streaming, a subset of 360-degree video content covering the viewport (i.e., the current view orientation) is transmitted at a better quality and/or higher resolution than the quality and/or resolution for the remainder of the 360-degree video. There are several alternatives to achieve viewport-dependent omnidirectional video streaming. In tile-based viewport-dependent streaming, projected pictures are partitioned into tiles that are coded as motion-constrained tile sets (MCTSs) or alike. Several versions of the content are encoded at different bitrates or qualities using the same MCTS partitioning. Each MCTS sequence is made available for streaming as a DASH Representation or alike. The player selects, on an MCTS basis, which bitrate or quality is received.
[0254] H.264/AVC does not include the concept of tiles, but MCTS-like operation can be achieved by arranging regions vertically as slices and restricting the encoding similarly to encoding of MCTSs. For simplicity, the terms tile and MCTS are used in this document but should be understood to apply to H.264/AVC too in a limited manner. In general, the terms tile and MCTS should be understood to apply to similar concepts in any coding format or specification.
[0255] One possible subdivision of the tile-based viewport-dependent streaming schemes is the following:
- Region-wise mixed quality (RWMQ) 360° video: Several versions of the content are coded with the same resolution, the same tile grid, and different bitrate / picture quality. Players choose high-quality MCTSs for the viewport.
- Viewport + 360° video: One or more bitrate and/or resolution versions of a complete low-resolution/low-quality omnidirectional video are encoded and made available for streaming. In addition, MCTS-based encoding is performed and MCTS sequences are made available for streaming. Players receive a complete low-resolution/low-quality omnidirectional video and select and receive the high-resolution MCTSs covering the viewport.
- Region-wise mixed resolution (RWMR) 360° video: MCTSs are encoded at multiple
resolutions. Players select a combination of high-resolution MCTSs covering the viewport and low-resolution MCTSs for the remaining areas.
[0256] It needs to be understood that there may be other ways to subdivide tile-based viewport-dependent streaming methods to categories than the one described above. Moreover, the above-described subdivision may not be exhaustive, i.e. there may be tile-based viewport-dependent streaming methods that do not belong to any of the described categories. [0257] In all above-described viewport-dependent streaming approaches, tiles or MCTSs (or guard bands of tiles or MCTSs) may overlap in sphere coverage by an amount selected in the pre-processing or encoding.
[0258] All above-described viewport-dependent streaming approaches may be realized with client-driven bitstream rewriting (a.k.a. late binding) or with author-driven MCTS merging (a.k.a. early binding). In late binding, a player selects MCTS sequences to be received, selectively rewrites portions of the received video data as necessary (e.g. parameter sets and slice segment headers may need to be rewritten) for combining the received MCTSs into a single bitstream, and decodes the single bitstream. Early binding refers to the use of author-driven information for rewriting portions of the received video data as necessary, for merging of MCTSs into a single bitstream to be decoded, and in some cases for selection of MCTS sequences to be received. There may be approaches in between early and late binding: for example, it may be possible to let players select MCTS sequences to be received without author guidance, while an author-driven approach is used for MCTS merging and header rewriting. Early binding approaches include an extractor-driven approach and a tile track approach, which are described subsequently.
[0259] In the tile track approach, one or more motion-constrained tile set sequences are extracted from a bitstream, and each extracted motion-constrained tile set sequence is stored as a tile track (e.g. an HEVC tile track) in a file. A tile base track (e.g. an HEVC tile base track) may be generated and stored in a file. The tile base track represents the bitstream by implicitly collecting motion-constrained tile sets from the tile tracks. At the receiver side the tile tracks to be streamed may be selected based on the viewing orientation. The client may receive tile tracks covering the entire omnidirectional content. Better quality or higher resolution tile tracks may be received for the current viewport compared to the quality or resolution covering the remaining 360-degree video. A tile base track may include track references to the tile tracks, and/or tile tracks may include track references to the tile base track. For example, in HEVC, the 'sabt' track reference is used to refer to tile tracks from a tile base track, and the tile ordering is indicated by the order of the tile tracks contained by a 'sabt' track reference. Furthermore, in HEVC, a tile track has a 'tbas' track reference to the tile base track.
[0260] In the extractor-driven approach, one or more motion-constrained tile set sequences are extracted from a bitstream, and each extracted motion-constrained tile set sequence is modified to become a compliant bitstream of its own (e.g. an HEVC bitstream) and stored as a sub-picture track (e.g. with untransformed sample entry type 'hvc1' for HEVC) in a file. One or more extractor tracks (e.g. HEVC extractor tracks) may be generated and stored in a file. The extractor track represents the bitstream by explicitly extracting (e.g. by HEVC extractors) motion-constrained tile sets from the sub-picture tracks. At the receiver side the sub-picture tracks to be streamed may be selected based on the viewing orientation. The client may receive sub-picture tracks covering the entire omnidirectional content. Better quality or higher resolution sub-picture tracks may be received for the current viewport compared to the quality or resolution covering the remaining 360-degree video. [0261] It needs to be understood that even though the tile track approach and extractor-driven approach are described in detail, specifically in the context of HEVC, they apply to other codecs and similar concepts as tile tracks or extractors. Moreover, a combination or a mixture of the tile track and extractor-driven approaches is possible. For example, such a mixture could be based on the tile track approach, but where a tile base track could contain guidance for rewriting operations for the client, e.g. the tile base track could include rewritten slice or tile group headers.
[0262] As an alternative to MCTS-based content encoding, content authoring for tile-based viewport-dependent streaming may be realized with sub-picture-based content authoring, described as follows. The pre-processing (prior to encoding) comprises partitioning uncompressed pictures into sub-pictures. Several sub-picture bitstreams of the same uncompressed sub-picture sequence are encoded, e.g. at the same resolution but different qualities and bitrates. The encoding may be constrained in a manner that merging of coded sub-picture bitstreams into a compliant bitstream representing omnidirectional video is enabled. For example, dependencies on samples outside the decoded picture boundaries may be avoided in the encoding by selecting motion vectors in a manner that sample locations outside the picture would not be referred to in the inter prediction process. Each sub-picture bitstream may be encapsulated as a sub-picture track, and one or more extractor tracks merging the sub-picture tracks of different sub-picture locations may be additionally formed. If a tile track based approach is targeted, each sub-picture bitstream is modified to become an MCTS sequence and stored as a tile track in a file, and one or more tile base tracks are created for the tile tracks.
[0263] Tile-based viewport-dependent streaming approaches may be realized by executing a single decoder instance or one decoder instance per MCTS sequence (or in some cases, something in between, e.g. one decoder instance per MCTSs of the same resolution), e.g. depending on the capability of the device and operating system where the player runs. The use of a single decoder instance may be enabled by late binding or early binding. To facilitate multiple decoder instances, the extractor-driven approach may use sub-picture tracks that are compliant with the coding format or standard without modifications. Other approaches may need either to rewrite image segment headers, parameter sets, and/or alike information in the client side to construct a conforming bitstream or to have a decoder implementation capable of decoding an MCTS sequence without the presence of other coded video data.
[0264] There may be at least two approaches for encapsulating and referencing tile tracks or sub-picture tracks in the tile track approach and the extractor-driven approach, respectively:
- Referencing track identifiers from a tile base track or an extractor track.
- Referencing tile group identifiers from a tile base track or an extractor track, wherein the tile group identified by a tile group identifier contains the collocated tile tracks or the sub-picture tracks that are alternatives for extraction. [0265] In the RWMQ method, one extractor track per each picture size and each tile grid is sufficient. In 360° + viewport video and RWMR video, one extractor track may be needed for each distinct viewing orientation.
[0266] An approach similar to above-described tile-based viewport-dependent streaming approaches, which may be referred to as tile rectangle based encoding and streaming, is described next. This approach may be used with any video codec, even if tiles similar to HEVC were not available in the codec or even if motion-constrained tile sets or alike were not implemented in an encoder. In tile rectangle based encoding, the source content is split into tile rectangle sequences before encoding. Each tile rectangle sequence covers a subset of the spatial area of the source content, such as full panorama content, which may e.g. be of equirectangular projection format. Each tile rectangle sequence is then encoded independently from each other as a single-layer bitstream. Several bitstreams may be encoded from the same tile rectangle sequence, e.g. for different bitrates. Each tile rectangle bitstream may be encapsulated in a file as its own track (or alike) and made available for streaming. At the receiver side the tracks to be streamed may be selected based on the viewing orientation. The client may receive tracks covering the entire omnidirectional content. Better quality or higher resolution tracks may be received for the current viewport compared to the quality or resolution covering the remaining, currently non-visible viewports. In an example, each track may be decoded with a separate decoder instance.
[0267] In viewport-adaptive streaming, the primary viewport (i.e., the current viewing orientation) is transmitted at a good quality/resolution, while the remainder of the 360-degree video is transmitted at a lower quality/resolution. When the viewing orientation changes, e.g. when the user turns his/her head when viewing the content with a head-mounted display, another version of the content needs to be streamed, matching the new viewing orientation. In general, the new version can be requested starting from a stream access point (SAP); SAPs are typically aligned with (Sub)segments. In single-layer video bitstreams, SAPs correspond to random-access pictures, are intra-coded, and are hence costly in terms of rate-distortion performance. Conventionally, relatively long SAP intervals and consequently relatively long (Sub)segment durations in the order of seconds are hence typically used. Thus, the delay (here referred to as the viewport quality update delay) in upgrading the quality after a viewing orientation change (e.g. a head turn) is conventionally in the order of seconds and is therefore clearly noticeable and annoying.
[0268] As explained above, viewport switching in viewport-dependent streaming, which may be compliant with MPEG OMAF, is enabled at stream access points, which involve intra coding and hence a greater bitrate compared to respective inter coded pictures at the same quality. A compromise between the stream access point interval and the rate-distortion performance is hence chosen in an encoding configuration.
[0269] Viewport-adaptive streaming of equal-resolution HEVC bitstreams with MCTSs is described in the following as an example. Several HEVC bitstreams of the same omnidirectional source content may be encoded at the same resolution but different qualities and bitrates using motion-constrained tile sets. The MCTS grid in all bitstreams is identical. In order to enable the client to use the same tile base track for reconstructing a bitstream from MCTSs received from different original bitstreams, each bitstream is encapsulated in its own file, and the same track identifier is used for each tile track of the same tile grid position in all these files. HEVC tile tracks are formed from each motion-constrained tile set sequence, and a tile base track is additionally formed. The client may parse the tile base track to implicitly reconstruct a bitstream from the tile tracks. The reconstructed bitstream can be decoded with a conforming HEVC decoder.
[0270] Clients can choose which version of each MCTS is received. The same tile base track suffices for combining MCTSs from different bitstreams, since the same track identifiers are used in the respective tile tracks.
[0271] Figure 5 illustrates an example of how tile tracks of the same resolution can be used for tile-based omnidirectional video streaming. A 4x2 tile grid has been used in forming the motion-constrained tile sets. Two HEVC bitstreams originating from the same source content are encoded at different picture qualities and bitrates. Each bitstream may be encapsulated in its own file wherein each motion-constrained tile set sequence may be included in one tile track and a tile base track is also included. The client may choose the quality at which each tile track is received based on the viewing orientation. In this example the client receives tile tracks 1, 2, 5, and 6 at a particular quality and tile tracks 3, 4, 7, and 8 at another quality. The tile base track is used to order the received tile track data into a bitstream that can be decoded with an HEVC decoder.
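Purely as an illustration of the selection described for Figure 5, the following Python snippet assigns a quality to each tile track of the 4x2 grid based on whether the tile covers the viewport; the qualities and the viewport coverage test are placeholders.

def choose_tile_track_qualities(tile_ids, viewport_tile_ids, viewport_quality, background_quality):
    # tile_ids: identifiers of the tile tracks, e.g. 1..8 for a 4x2 tile grid.
    return {t: (viewport_quality if t in viewport_tile_ids else background_quality)
            for t in tile_ids}

# With viewport_tile_ids = {1, 2, 5, 6}, tile tracks 1, 2, 5 and 6 are requested at one
# quality and tile tracks 3, 4, 7 and 8 at another, as in the example above.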
[0272] In current video codecs, different parts of the original content need to be packed into a 2D frame to be coded by a conventional 2D video codec. Video coding formats have constraints on spatial partitioning of pictures. For example, HEVC uses a tile grid of picture-wide tile rows and picture-high tile columns specified in units of CTUs with certain minimum width and height constraints for tile columns and tile rows. Different parts may have different sizes, so their optimal packing along spatial partitioning units of 2D video codecs might not be possible. There may also be empty spaces (areas that are not allocated by any parts of the original content but are anyhow coded and decoded) in the packed picture. These empty spaces, although not needed by the receiver, are counted as effective pixels for the codec, and should anyway be encoded and decoded. This leads to inefficient packing. Known solutions for overcoming this drawback have concentrated on the possibility of more flexible and/or finer granularity tiling, e.g. tile granularity of CUs or tile partitioning that need not use a tile grid of picture-wide tile rows and picture-high tile columns.
[0273] As another drawback, in viewport-dependent 360-degree streaming, the corresponding tiles need to be selected and arranged in a coded 2D picture. This also needs some changes to the coded data, since the tile positions in the encoder output differ from those of the merged bitstream that is input to the decoder. Thus, parameter sets and slice headers need to be rewritten for the merged bitstream. Known solutions for overcoming this drawback of extraction for viewport-dependent 360-degree streaming have been related to e.g. a client-side slice header rewriting, which, however, is not part of a standardized decoding operation, and may not be supported by decoder APIs and decoder
implementations; or an extractor track with rewritten slice header, which relates to ISO/IEC 14496-15 including the design for extractors, which can be used for rewriting parameter sets and slice headers in the extractor track, while the tile data is included by reference. Such an approach may require one extractor track per each possible extracted combination, such as one extractor track per each range of 360-degree video viewing orientations that results in a different set of picked tiles.
[0274] Yet another drawback is that there is a rate-distortion penalty in the case that different parts of the content (e.g. different tiles) need to be coded independently (e.g. using the motion-constrained tile set technique in a viewport-adaptive streaming application or an ROI enhancement layer). For example, a 12x8 MCTS grid has been found to have an average Bjontegaard delta bitrate increase of more than 10% over 14 ERP test sequences, peaking at 22.5% when compared to coding without tiles. Known solutions for overcoming this drawback relate to modification of the motion compensation filter near the motion-constrained tile border to reduce the RD (rate-distortion) penalty of the MCTS tool; or to modification of the predicted block and removal of its dependency on other tiles which are coded in MCTS mode, which reduces the RD penalty of the MCTS tool.
[0275] The present embodiments are related to sub-picture-based video codec operation. Visual content at specific time instances is divided into several parts, where each part is represented using a sub-picture (a.k.a. subpicture). Respective sub-pictures at different time instances form a sub-picture sequence, wherein the definition of "respective" may depend on the context, but can be for example the same spatial portion of a picture area in a sequence of pictures or the content acquired with the same settings, such as the same acquisition position, orientation, and projection surface. A picture at a specific time instance may be defined as a collection of all the sub-pictures at the specific time instance. Each sub-picture is coded using a conventional video encoder, and the reconstructed sub-picture is stored in a reconstructed sub-picture memory corresponding to the sub-picture sequence. For predicting a sub-picture at a particular sub-picture sequence, the encoder can use reconstructed sub-pictures of the same sub-picture sequence as a reference for prediction. Coded sub-pictures are included as separate units (e.g. VCL NAL units) in the same bitstream.
[0276] A decoder receives coded video data (e.g. a bitstream). A sub-picture is decoded as a separate unit from other sub-pictures using a conventional video decoder. The decoded sub-picture may be buffered using a decoded picture buffering process. The decoded picture buffering process may provide the decoded sub-picture of a particular sub-picture sequence to the decoder, and the decoder may use the decoded sub-picture as reference for prediction for predicting a sub-picture at the same sub-picture sequence.
[0277] Figure 6 illustrates an example of a decoder. The decoder receives coded video data (e.g. a bitstream). A sub-picture is decoded in a decoding process 610 as a separate unit from other sub-pictures using a conventional video decoder. The decoded sub-picture may be buffered using a decoded picture buffering process 620. The decoded picture buffering process may provide the decoded sub-picture of a particular sub-picture sequence to the decoding process 610, and the decoder may use the decoded sub-picture as a reference for prediction for predicting a sub-picture at the same sub-picture sequence.
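The following Python-style sketch, with assumed attribute and callback names, illustrates the arrangement of Figure 6: each coded sub-picture is decoded as a separate unit, and the decoded picture buffering keeps reconstructed sub-pictures per sub-picture sequence so that only same-sequence sub-pictures are offered as prediction references.

from collections import defaultdict

def decode_sub_picture_bitstream(coded_units, decode_sub_picture):
    # One reference buffer per sub-picture sequence (decoded picture buffering 620).
    buffers = defaultdict(list)
    for unit in coded_units:                       # e.g. VCL NAL units in decoding order
        seq_id = unit.sub_picture_sequence_id      # assumed association of the unit with a sub-picture sequence
        references = buffers[seq_id]               # only same-sequence references
        decoded = decode_sub_picture(unit, references)   # decoding process 610
        references.append(decoded)
        # A complete buffering process would additionally mark sub-pictures as
        # "used for reference" / "unused for reference" and track output status.
    return buffers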
[0278] The decoded picture buffering process 620 may comprise a sub-picture-sequence-wise buffering 730, which may comprise marking of reconstructed sub-pictures as "used for reference" and "unused for reference" as well as keeping track of whether reconstructed sub-pictures have been output from the decoder. The buffering of sub-picture sequences may be independent from each other, or may be synchronized in one or both of the following ways:
- the output of all reconstructed sub-pictures of the same time instance may be performed
synchronously.
- the reference picture marking of reconstructed sub-pictures of the same time instance may be performed synchronously.
[0279] The sub-picture-sequence-wise buffering 730 may be illustrated with Figure 7. The example illustrates decoding of two sub-picture sequences, which have the same height but different width. It needs to be understood that the number of sub-picture sequences and/or the sub-picture dimensions could have been chosen differently and these choices are only meant as possible examples.
[0280] According to an embodiment, output from a decoder comprises a collection of the different and separate decoded sub-pictures.
[0281] According to another example shown in Figure 8, an output picture, which may also or alternatively be referred to as a decoded picture, from a decoding process 810 is a collection of the different and separate sub-pictures. According to another embodiment, the output picture is composed by arranging reconstructed sub-pictures into a two-dimensional (2D) picture. This embodiment keeps a conventional design of a single output picture (per time instance) as the output of a video decoder and hence can be straightforward to integrate into systems. The decoded sub-pictures are provided to a decoded sub-picture buffering 812. The decoding process 810 may then use buffered sub-picture(s) as a reference for decoding succeeding pictures. The decoding process may obtain an indication or infer which of the decoded sub-picture(s) are to be used as a source for generating manipulated sub-picture(s). Those sub-pictures are provided 814 to a reference sub-picture manipulation process 816. Manipulated reference sub-pictures are then provided 818 to the decoded sub-picture buffering 812, where the manipulated reference sub-pictures are buffered. The sub-pictures and the manipulated reference sub-pictures may then be used by the output picture compositing process 820 that takes the picture composition data as input and arranges reconstructed sub-pictures into output pictures. An encoder encodes picture composition data into or along the bitstream, wherein the picture composition data is indicative of how reconstructed sub-pictures are to be arranged into 2D picture(s) forming output picture(s). A decoder decodes picture composition data from or along the bitstream and forms 820 an output picture from reconstructed sub-pictures and/or manipulated reference sub-pictures according to the decoded picture composition data. The decoding of picture composition data may happen as a part of, or operationally connected with, the output picture compositing process 820. Thus, a conventional video decoding process decodes the picture composition data.
[0282] According to an embodiment, the picture composition data is encoded in or along the bitstream and/or decoded from or along the bitstream using the bitstream or decoding order of sub-pictures and the dimensions of sub-pictures. An algorithm for positioning sub-pictures within a picture area is followed in an encoder and/or in a decoder, wherein sub-pictures are input to the algorithm in their bitstream or decoding order. In an embodiment, the algorithm for positioning sub-pictures within a picture area is the following: When a picture comprises multiple sub-pictures and when encoding of a picture and/or decoding of a coded picture is started, each CTU location in the reconstructed or decoded picture is marked as unoccupied. For each sub-picture in bitstream or decoding order, the sub-picture takes the next such unoccupied location in CTU raster scan order within a picture that is large enough to fit the sub-picture within the picture boundaries.
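A minimal Python sketch of the positioning algorithm described above, with sizes expressed in CTUs; it assumes the placed area must be fully unoccupied and fully inside the picture boundaries, and it is only an illustration of the embodiment, not a normative procedure.

def position_sub_pictures(pic_width_ctus, pic_height_ctus, sub_picture_sizes_ctus):
    # Mark each CTU location as unoccupied when encoding/decoding of the picture starts.
    occupied = [[False] * pic_width_ctus for _ in range(pic_height_ctus)]

    def fits(x, y, w, h):
        if x + w > pic_width_ctus or y + h > pic_height_ctus:
            return False
        return all(not occupied[y + j][x + i] for j in range(h) for i in range(w))

    positions = []
    for (w, h) in sub_picture_sizes_ctus:          # sub-pictures in bitstream/decoding order
        for idx in range(pic_width_ctus * pic_height_ctus):   # CTU raster scan order
            x, y = idx % pic_width_ctus, idx // pic_width_ctus
            if not occupied[y][x] and fits(x, y, w, h):
                for j in range(h):
                    for i in range(w):
                        occupied[y + j][x + i] = True
                positions.append((x, y))           # top-left CTU of the sub-picture
                break
        else:
            raise ValueError("sub-picture does not fit the remaining picture area")
    return positions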
[0283] According to an embodiment, an encoder indicates in or along the bitstream if
- the decoder is intended to output a collection of the different and separate decoded sub-pictures; or
- the decoder is intended to generate output pictures according to the picture composition data; or
- the decoder is allowed to perform either of the options above.
[0284] According to an embodiment, a decoder decodes from or along the bitstream if
- the decoder is intended to output a collection of the different and separate decoded sub-pictures; or
- the decoder is intended to generate output pictures according to the picture composition data; or
- the decoder is allowed to perform either of the options above.
[0285] The decoder adapts its operation to conform to the decoded intent or allowance.
[0286] According to an embodiment, a decoder includes an interface for selecting at least among outputting a collection of the different and separate decoded sub-pictures or generating output pictures according to the picture composition data. The decoder adapts its operation to conform to what has been indicated through the interface.
[0287] According to an embodiment, pictures are divided into sub-pictures, tile groups and tiles. A tile may be defined similarly to an HEVC tile, thus a tile may be defined as a sequence of CTUs that cover a rectangular region of a picture. A tile group may be defined as a sequence of tiles in tile raster scan within a sub-picture. It may be specified that a VCL NAL unit contains exactly one tile group, i.e. a tile group is contained in exactly one VCL NAL unit. A sub-picture may be defined as a rectangular set of one or more entire tile groups. In an embodiment, a picture is partitioned into sub-pictures, i.e. the entire picture is occupied by sub-pictures and there are no unoccupied areas within a picture. In another embodiment, a picture comprises sub-pictures and one or more unoccupied areas.
[0288] According to an embodiment, an encoder encodes in or along the bitstream and/or a decoder decodes from or along the bitstream information indicative of one or more tile partitionings for sub-pictures. A tile partitioning may for example be a tile grid specified as widths and heights of tile columns and tile rows, respectively. An encoder encodes in or along a bitstream and/or a decoder decodes from or along the bitstream which tile partitioning applies for a particular sub-picture or sub-picture sequence. In an embodiment, syntax elements describing a tile partitioning are encoded in and/or decoded from a picture parameter set, and a PPS is activated for a sub-picture e.g. through a PPS identifier in a tile group header. Each sub-picture may refer to its own PPS and may hence have its own tile partitioning. For example, Figure 10 illustrates a picture that is divided into 4 sub-pictures. Each sub-picture may have its own tile grid. In this example sub-picture 1 is divided into a grid of 3x2 tiles of equal width and equal height, and sub-picture 2 is divided into a 2x1 grid of tiles that are 3 and 5 CTUs high.
Each of sub-pictures 3 and 4 has only one tile. Sub-picture 1 has 3 tile groups containing 1, 3, and 2 tiles, respectively. Each of sub-pictures 2, 3, and 4 has one tile group.
[0289] Figure 10 also illustrates the above-discussed algorithm for positioning sub-pictures within a picture area. Sub-picture 1 is the first in decoding order and thus placed in the top-left corner of the picture area. Sub-picture 2 is the second in decoding order and thus placed to the next unoccupied location in raster scan order. The algorithm also operates the same way for the third and fourth sub-pictures in decoding order, i.e. sub-pictures 3 and 4, respectively. The sub-picture decoding order is indicated with the number (1, 2, 3, 4) outside the picture boundaries.
[0290] According to an embodiment, an encoder encodes in the bitstream and/or a decoder decodes from the bitstream, e.g. in an image segment header such as a tile group header, information indicative of one or more tile positions within a sub-picture. For example, a tile position of the first tile, in decoding order, of the image segment or tile group may be encoded and/or decoded. In an embodiment, a decoder concludes that the current image segment or tile group is the first image segment or tile group of a sub-picture, when the first tile of an image segment or tile group is the top-left tile of a sub-picture (e.g. having a tile address or tile index equal to 0 in raster scan order of tiles). In an embodiment, in relation to concluding a first image segment or tile group, a decoder concludes if a new access unit is started. In an embodiment, it is concluded that a new access unit is started when the picture order count value or syntax element value(s) related to picture order count (such as least significant bits of picture order count) differ from that of the previous sub-picture.
[0291 1 According to an embodiment, decoded picture buffering is performed on picture-basis rather than on sub-picture basis. An encoder and/or a decoder generates a reference picture from decoded sub-pictures of the same access unit or time instance using the picture composition data. The generation of a reference picture is performed identically or similarly to what is described in other embodiments for generating output pictures. When a reference picture is referenced in encoding and/or decoding of a sub-picture, reference sub-pictures for encoding and/or decoding the sub-picture are generated by extracting the area collocating with the current sub-picture from the reference pictures in the decoded picture buffer. Thus, the decoding process gets reference sub-picture(s) from the decoded picture buffering process similarly to other embodiments, and the decoding process may operate similarly to other embodiments.
[0292] In an embodiment, an encoder selects reference pictures for predicting a current sub-picture in a manner that the reference pictures contain a sub-picture that has the same location as the current sub-picture (within the picture) and the same dimensions (width and height) as the current sub-picture. An encoder avoids selecting reference pictures for predicting a current sub-picture if the reference pictures do not contain a sub-picture that has the same location as the current sub-picture (within the picture) or the same dimensions as the current sub-picture. In an embodiment, sub-pictures of the same access unit or time instance are allowed to have different types, such as random-access sub-picture and non-random-access sub-picture, defined similarly to what has been described earlier in relation to NAL unit types and/or picture types. An encoder encodes a first access unit with both a random-access sub-picture in a first location and size and a non-random-access sub-picture in a second location and size, and a subsequent access unit in decoding order including a sub-picture in the first location and size constrained in a manner that reference pictures preceding the first access unit in decoding order are avoided, and including another sub-picture in the second location and size using a reference picture preceding the first access unit in decoding order as a reference for prediction.
[0293] In an embodiment, for encoding and/or decoding a current sub-picture, an encoder and/or a decoder includes only such reference pictures into the initial reference picture list that contain a sub-picture that has the same location as the current sub-picture (within the picture) and the same dimensions (width and height) as the current sub-picture. Reference pictures that do not contain a sub-picture that has the same location as the current sub-picture (within the picture) or the same dimensions (width and height) as the current sub-picture are skipped or excluded for generating an initial reference picture list for encoding and/or decoding the current sub-picture. In an embodiment, sub-pictures of the same access unit or time instance are allowed to have different types, such as random-access sub-picture and non-random-access sub-picture, defined similarly to what has been described earlier in relation to NAL unit types and/or picture types. The reference picture list initialization process or algorithm in an encoder and/or a decoder only includes the previous random-access sub-picture and subsequent sub-pictures, in decoding order, in an initial reference picture list and skips or excludes sub-pictures preceding, in decoding order, the previous random-access sub-picture.
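A simplified, non-normative Python sketch of the reference picture list constraint above; sub-picture location and dimensions are assumed to be available as attributes, and the ordering of the candidate reference pictures follows the usual initialization order.

def initial_reference_picture_list(candidate_reference_pictures, cur_x, cur_y, cur_width, cur_height):
    ref_list = []
    for ref_pic in candidate_reference_pictures:
        collocated_match = any(
            sp.x == cur_x and sp.y == cur_y and sp.width == cur_width and sp.height == cur_height
            for sp in ref_pic.sub_pictures)
        if collocated_match:
            ref_list.append(ref_pic)   # usable for predicting the current sub-picture
        # otherwise the reference picture is skipped/excluded from the initial list
    return ref_list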
[0294] According to an embodiment, a sub-picture at a second sub-picture sequence is predicted from one or more sub-pictures of a first sub-picture sequence. Spatial relationship of the sub-picture in relation to the one or more sub-pictures of the first sub-picture sequence is either inferred or indicated by an encoder in or along the bitstream and/or decoded by a decoder from or along the bitstream. In the absence of such spatial relationship information in or along the bitstream, an encoder and/or a decoder may infer that the sub-pictures are collocated, i.e. exactly overlapping for spatial correspondence in prediction. The spatial relationship information is independent of the picture composition data. For example, sub-pictures may be composed to be above each other in an output picture (in a top-bottom packing arrangement) while they are considered to be collocated for prediction.
[0295] An embodiment of an encoding process or a decoding process is illustrated in Figure 11, where the arrows from the first sub-picture to the second sub-picture sequence indicate prediction. In the example of Figure 11, the sub-pictures may be inferred to be collocated for prediction.
[0296] According to an embodiment, an encoder indicates a sub-picture sequence identifier or alike in or along the bitstream in a manner that the sub-picture sequence identifier is associated with coded video data units, such as VCL NAL units. According to an embodiment, a decoder decodes a sub-picture sequence identifier or alike from or along the bitstream in a manner that the sub-picture sequence identifier is associated with coded video data units and/or the respective reconstructed sub-pictures. The syntax structure containing the sub-picture sequence identifier and the association mechanism may include but is not limited to one or more of the following:
- A sub-picture sequence identifier included in a NAL unit header and associated with the NAL unit.
- A sub-picture sequence identifier included in a header included in a VCL NAL unit, such as a tile group header or a slice header and associated with the respective image segment (e.g. tile group or slice).
- A sub-picture sequence identifier included in a sub-picture delimiter, a picture header, or alike syntax structure, which is implicitly referenced by coded video data. A sub-picture delimiter may for example be a specific NAL unit that starts a new sub-picture. Implicit referencing may for example mean that the previous syntax structure (e.g. sub-picture delimiter or picture header) in decoding or bitstream order may be referenced.
- A sub-picture sequence identifier included in a header parameter set, a picture parameter set or alike syntax structure, which is explicitly referenced by coded video data. Explicit referencing may for example mean that the identifier of the reference parameter set is included in the coded video data, such as in a tile group header or in a slice header.
[0297] In an embodiment, sub-picture sequence identifier values are valid within a pre-defined subset of a bitstream (which may be called "validity period" or "validity subset"), which may be but is not limited to one of the following:
- A single access unit, i.e. coded video data for a single time instance.
- A coded video sequence.
- From a closed random-access access unit (inclusive) until the next closed random-access access unit (exclusive) or the end of the bitstream. A closed random-access access unit may be defined as an access unit within and after which all present sub-picture sequences start with a closed random-access sub-picture. A closed random-access sub-picture may be defined as an intra-coded sub-picture, which is followed, in decoding order, by no such sub-pictures in the same sub-picture sequence that reference any sub-picture preceding the intra-coded sub-picture, in decoding order, in the same sub-picture sequence. In an embodiment, a closed random-access sub-picture may either be an intra-coded sub-picture or a sub-picture associated with and predicted only from external reference sub-picture(s) (see an embodiment described further below) and is otherwise constrained as described above.
- The entire bitstream.
[0298] In an embodiment, sub-picture sequence identifier values are valid within an indicated subset of a bitstream. An encoder may for example include a specific NAL unit in the bitstream, where the NAL unit indicates a new period for sub-picture sequence identifiers that is unrelated to earlier period(s) of sub-picture sequence identifiers.
[0299] In an embodiment, a sub-picture with a particular sub-picture sequence identifier value is concluded to be within the same sub-picture sequence as a preceding sub-picture in decoding order that has the same sub-picture sequence identifier value, when both sub-pictures are within the same validity period of sub-picture sequence identifiers. When two sub-pictures are in different validity periods of sub-picture sequence identifiers or have different sub-picture sequence identifiers, they are concluded to be in different sub-picture sequences.
[0300] In an embodiment, a sub-picture sequence identifier is a fixed-length codeword. The number of bits in the fixed-length codeword may be encoded into or along the bitstream, e.g. in a video parameter set or a sequence parameter set, and/or may be decoded from or along the bitstream, e.g. from a video parameter set or a sequence parameter set.
[0301 ] In an embodiment, a sub-picture sequence identifier is a variable-length codeword, such as an exponential-Golomb code or alike.
[0302] According to an embodiment, an encoder indicates a mapping of VCL NAL units or image segments, in decoding order, to sub-pictures or sub-picture sequences in or along the bitstream, e.g. in a video parameter set, a sequence parameter set, or a picture parameter set. Likewise, according to an embodiment, a decoder decodes a mapping of VCL NAL units or image segments, in decoding order, to sub-pictures or sub-picture sequences from or along the bitstream. The mapping may concern a single time instance or access unit at a time.
[0303] In an embodiment, several mappings are provided e.g. in a single container syntax structure and each mapping is indexed or explicitly identified e.g. with an identifier value.
[0304] In an embodiment, an encoder indicates in the bitstream, e.g. in an access unit header or delimiter, a picture parameter set, a header parameter set, a picture header, a header of an image segment (e.g. tile group or slice), which mapping applies to a particular access unit or time instance. Likewise, in an embodiment, a decoder decodes from the bitstream which mapping applies to a particular access unit or time instance. In an embodiment, the indication which mapping applies is an index to a list of several mappings (specified e.g. in a sequence parameter set) or an identifier to a set of several mappings (specified e.g. in a sequence parameter set). In another embodiment, the indication which mapping applies comprises the mapping itself e.g. as a list of sub-picture sequence identifiers for VCL NAL units in decoding order included in the access unit associated with the mapping.
[0305] According to an embodiment, the decoder concludes the sub-picture or sub-picture sequence for a VCL NAL unit or image segment as follows:
- The start of an access unit is concluded e.g. as specified in a coding specification, or the start of a new time instance is concluded as specified in a packetization or container file specification.
- The mapping applied to the access unit or time instance is concluded according to any earlier embodiment.
- For each VCL NAL unit or image segment in decoding order, the respective sub-picture sequence or sub-picture is concluded from the mapping.
[0306] An example embodiment is provided below with the following design decisions:
- The mappings are specified in a sequence parameter set.
- The mappings are specified to map VCL NAL units to sub-picture sequences.
- Indicating which mapping applies for a particular access unit or time instance takes place in a tile group header.
[0307] It should be understood that other embodiments could be similarly realized with other design decisions, e.g. container syntax structures, mapping for image segments rather than VCL NAL units, and mapping for sub-pictures rather than sub-picture sequences.
[Syntax table figure: sequence parameter set syntax carrying num_subpic_patterns, subpic_seq_id_len_minus1, num_vcl_nal_units_minus1[ i ] and subpic_seq_id[ i ][ j ].]
[0308] The semantics of syntax elements may be specified as follows: num_subpic_patterns equal to 0 specifies that sub-picture-based decoding is not in use. num_subpic_patterns greater than 0 specifies the number of mappings from VCL NAL units to sub-picture sequence identifiers. subpic_seq_id_len_minus1 plus 1 specifies the length of the subpic_seq_id[ i ][ j ] syntax element in bits. num_vcl_nal_units_minus1[ i ] plus 1 specifies the number of VCL NAL units that are mapped in the i-th mapping. subpic_seq_id[ i ][ j ] specifies the sub-picture sequence identifier of the j-th VCL NAL unit in decoding order in an access unit associated with the i-th mapping.
[Syntax table figure: tile group header syntax carrying subpic_pattern_idx.]
[0309] The semantics of subpic_pattern_idx may be specified as follows. subpic_pattern_idx specifies the index of the mapping from VCL NAL units to sub-picture sequence identifiers that applies for this access unit. It may be required that subpic_pattern_idx has the same value in all tile_group_header( ) syntax structures of the same access unit.
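Under the example design decisions above, a decoder could resolve the mapping roughly as sketched below in Python; sps and the attribute names are assumed container objects mirroring the syntax element names, and the sketch is not a normative decoding process.

def sub_picture_sequence_ids_for_access_unit(sps, subpic_pattern_idx, vcl_nal_units):
    # subpic_pattern_idx is decoded from a tile_group_header( ) of the access unit and
    # selects one of the num_subpic_patterns mappings carried in the sequence parameter set.
    mapping = sps.subpic_seq_id[subpic_pattern_idx]
    assert len(mapping) == sps.num_vcl_nal_units_minus1[subpic_pattern_idx] + 1
    assert len(vcl_nal_units) == len(mapping)
    # The j-th VCL NAL unit in decoding order in the access unit belongs to the
    # sub-picture sequence identified by subpic_seq_id[ i ][ j ].
    return list(zip(vcl_nal_units, mapping))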
[0310] According to an embodiment, a random-access sub-picture of a particular sub-picture sequence may be predicted from one or more reference sub-pictures of other sub-picture sequences (excluding the particular sub-picture sequence). One of the following may be required and may be indicated for a random-access sub-picture:
[0311] It may be required that the random-access sub-picture is constrained so that prediction of any sub-picture at or after the random-access sub-picture in output order does not depend on any reference sub-picture (of the same sub-picture sequence) preceding the random-access sub-picture in decoding order; this case corresponds to an open GOP random-access point.
[0312] It may be required that the random-access sub-picture is constrained so that prediction of any sub-picture at or after the random-access sub-picture in decoding order does not depend on any reference sub-picture (of the same sub-picture sequence) preceding the random-access sub-picture in decoding order; this case corresponds to a closed GOP random-access point.
[0313] Since a random-access sub-picture may be predicted from other sub-picture sequence(s), random-access sub-pictures are more compact than similar random-access sub-pictures realized with intra-coded pictures.
[0314] Stream access points (which may also or alternatively be referred to as sub-picture sequence access points) for sub-picture sequences may be defined as a position in a sub-picture sequence (or alike) that enables playback of the sub-picture sequence to be started using only the information from that position onwards assuming that the referenced sub-picture sequences have already been decoded earlier. Stream access points of sub-picture sequences may coincide or be equivalent with random-access sub-pictures. [0315] According to an embodiment, at the start of the decoding of a bitstream, the decoding of all sub-picture sequences is marked as uninitialized in the decoding process. When a sub-picture is coded as a random-access sub-picture (e.g. like an IRAP picture in HEVC) and prediction across sub-picture sequences is not enabled, the decoding of the corresponding sub-picture sequence is marked as initialized. When a current sub-picture is coded as a random-access sub-picture (e.g. like an IRAP picture in predicted layers in multilayer HEVC) and decoding of all sub-picture sequences used as reference for prediction is marked as initialized, the decoding of the sub-picture sequence of the current sub-picture is marked as initialized. When no sub-picture for a sub-picture sequence of an identifier is present for a time instance (e.g. for an access unit), the decoding of the corresponding sub-picture sequence is marked as uninitialized in the decoding process. When a current sub-picture is not a random-access sub-picture and the decoding of the sub-picture sequence of the current sub-picture is not marked as initialized, the decoding of the current sub-picture may be omitted. Areas that correspond to omitted sub-pictures (e.g. on the basis of picture composition data) can be treated like unoccupied areas in the output picture compositing process, as described in other embodiments.
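The start-up behaviour described in the preceding paragraph may be illustrated with the following Python sketch; the per-sub-picture attributes are assumed ones, and an empty reference_sequence_ids list stands for the case where prediction across sub-picture sequences is not enabled.

def process_time_instance(access_unit, initialized, decode):
    # initialized: sub-picture sequence identifier -> True/False.
    present = {sp.sequence_id for sp in access_unit.sub_pictures}
    for seq_id in list(initialized):
        if seq_id not in present:
            initialized[seq_id] = False   # no sub-picture for this time instance
    for sp in access_unit.sub_pictures:
        if sp.is_random_access and all(initialized.get(ref, False)
                                       for ref in sp.reference_sequence_ids):
            initialized[sp.sequence_id] = True
        if initialized.get(sp.sequence_id, False):
            decode(sp)
        # Otherwise decoding of the sub-picture may be omitted and its area treated
        # like an unoccupied area in the output picture compositing process.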
[0316] As a consequence of the above-described sub-picture-wise decoding start-up, the presence or absence of sub-pictures can be dynamically selected, e.g. depending on application needs.
[0317] Picture composition data may comprise but is not limited to one or more of the following pieces of information per sub-picture:
- The top, left, bottom and right coordinates of an effective area within a sub-picture. Samples outside the effective area are not used in the output picture compositing process. One example of taking advantage of indicating an effective area is to exclude guard bands from the output picture compositing process.
- The top, left, bottom and right coordinates of a composition area within the output picture.
One composition area is indicated per one effective area of a sub-picture. The effective area of a sub-picture is mapped onto the composition area. When the composition area has different dimensions than the effective area, the effective area is rescaled or resampled to match the composition area.
- Rotation, e.g. by 0, 90, 180 or 270 degrees, for mapping the effective area on the composition area.
- Mirroring, e.g. vertically or horizontally, for mapping the effective area on the composition area.
[0318] It is appreciated that other choices for syntax elements than those presented above may be equivalently used. For example, coordinates and dimensions of an effective area and/or a composition area may be indicated by the coordinates of the top-left corner of the area, the width of the area, and the height of the area. It needs to be understood that the units for indicating the coordinates or extents may be inferred or indicated in or along the bitstream and/or decoded from or along the bitstream. For example, coordinates and/or extents may be indicated as integer multiples of coding tree units.

[0319] According to an embodiment, a z-order or an overlaying order may be indicated by the encoder or another entity as part of picture composition data in or along the bitstream. According to an embodiment, a z-order or an overlaying order may be inferred, for example to be an ascending sub-picture identifier order or the same as the decoding order of the sub-pictures of the same output time or the same output order.
[0320] Picture composition data may be associated with a sub-picture sequence identifier or alike. Picture composition data may be encoded into and/or decoded from a video parameter set, a sequence parameter set, or a picture parameter set.
[0321] Picture composition data may describe sub-pictures or sub-picture sequences, which are not encoded, requested, transmitted, received, and/or decoded. This enables selecting a subset of possible or available sub-pictures or sub-picture sequences for encoding, requesting, transmission, receiving, and/or decoding.
[0322] A decoder or a player according to an embodiment may include an output picture compositing process or alike, which may take as input two or more reconstructed sub-pictures that represent the same output time or the same output order. An output picture compositing process may be a part of the decoded picture buffering process or may be connected to the decoded picture buffering process. An output picture compositing process may be invoked when a decoder is triggered to output a picture. Such triggering may for example happen when an output picture at a correct output order can be composed, i.e. when no coded video data preceding the next reconstructed sub-pictures in output order follows the current decoding position within the bitstream. Another example of such triggering is when an indicated buffering time has elapsed.
[0323] In the output picture compositing process, picture composition data is applied to locate said two or more reconstructed sub-pictures on the same coordinates or onto the same output picture area. According to an embodiment, the output picture area that is unoccupied is set to a determined value, which may be separately derived per each color component. The determined value may be a default value (e.g. pre-defined in a coding standard), an arbitrary value determined by the output picture compositing process, or a value indicated by an encoder in or along the bitstream and/or decoded from or along the bitstream. Correspondingly, the output picture area may be initialized to the determined value prior to locating said two or more reconstructed sub-pictures onto it.
[0324] According to an embodiment, a decoder indicates unoccupied areas together with the output picture. The output interface of the decoder or the output picture compositing process may comprise an output picture and information indicative of the unoccupied areas.
[0325] According to an embodiment, the output picture of the output picture compositing process is formed by locating the possibly resampled sample arrays of the two or more reconstructed sub-pictures in the z-order onto the output picture in such a manner that a sample array later in the z-order covers or replaces the sample values in collocated positions of the sample arrays earlier in the z-order.

[0326] According to an embodiment, the output picture compositing process comprises aligning the decoded representations of said two or more reconstructed sub-pictures. For example, if one sub-picture is represented by the YUV 4:2:0 chroma format and the other one, later in the z-order, is represented by the YUV 4:4:4 chroma format, the first one may be upsampled to YUV 4:4:4 as part of the process. Likewise, if one picture is represented by a first color gamut or format, such as ITU-R BT.709, and another one, later in the z-order, is represented by a second color gamut or format, such as ITU-R BT.2020, the first one may be converted to the second color gamut or format as part of the process.
[0327] In addition, the output picture compositing process may include one or more conversions from a color representation format to another (or, equivalently, from one set of primary colors to another set of primary colors). The destination color representation format may be selected for example based on the display in use. For example, the output picture compositing process may include a conversion from YUV to RGB.
[0328] Eventually, when all of said two or more reconstructed sub-pictures are processed as described above, the resulting output picture may form the picture to be displayed or to be used in the displaying process, e.g. for generating content for the viewport.
[0329] It is appreciated that the output picture compositing process may additionally contain other steps than those described above and may lack some steps from those described above. Alternatively, or additionally, the described steps of the output picture compositing process may be performed in another order than that described above.
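One possible realization of the compositing described above is sketched below in Python/NumPy: the output picture is initialized to a determined value and the (possibly resampled) sub-pictures are then placed onto it in ascending z-order so that later sub-pictures overwrite collocated samples. The data structures and field names are illustrative assumptions, not part of the specification.

```python
import numpy as np

def compose_output_picture(sub_pictures, out_height, out_width,
                           unoccupied_value=0):
    """sub_pictures: list of dicts with keys 'samples' (H x W array),
    'top' and 'left' (composition area position) and 'z' (z-order)."""
    # Initialize the whole output picture area to the determined value.
    out = np.full((out_height, out_width), unoccupied_value, dtype=np.uint8)
    occupied = np.zeros((out_height, out_width), dtype=bool)
    for sp in sorted(sub_pictures, key=lambda s: s['z']):
        h, w = sp['samples'].shape
        t, l = sp['top'], sp['left']
        out[t:t + h, l:l + w] = sp['samples']     # later z-order replaces earlier
        occupied[t:t + h, l:l + w] = True
    # The occupancy mask can be used to indicate unoccupied areas with the output.
    return out, occupied
```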
[0330] The spatial correspondence between a current sub-picture and the reference sub-picture (from a different sub-picture sequence) may be indicated by the encoder and/or decoded by the decoder using spatial relationship information described in the following:
[0331] According to an embodiment, in the absence of spatial relationship information, it may be inferred that the current sub-picture and the reference sub-picture are collocated.
[0332] According to an embodiment, the spatial relationship information indicates the location of the top-left sample of the current sub-picture in the reference sub-picture. It is noted that the top-left sample of the current sub-picture may be indicated to correspond to a location outside the reference sub-picture (e.g. have negative horizontal and/or vertical coordinates). Likewise, bottom and/or right-side samples of the current sub-picture may be located outside the reference sub-picture. When the current sub-picture references samples or decoded variable values (e.g. motion vectors) that are outside the reference sub-picture, they may be considered to be unavailable for prediction.
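A sketch, in Python, of how a sample location of the current sub-picture could be mapped into the reference sub-picture using a signalled top-left offset, with locations falling outside the reference sub-picture treated as unavailable; the parameter names are assumptions for illustration only.

```python
def map_to_reference(x, y, offset_x, offset_y, ref_width, ref_height):
    """Map sample location (x, y) of the current sub-picture into the
    reference sub-picture, given the signalled location (offset_x, offset_y)
    of the current sub-picture's top-left sample in the reference sub-picture.

    Returns (ref_x, ref_y), or None when the location is unavailable for prediction.
    """
    ref_x = x + offset_x
    ref_y = y + offset_y
    if 0 <= ref_x < ref_width and 0 <= ref_y < ref_height:
        return ref_x, ref_y
    return None   # outside the reference sub-picture: unavailable for prediction
```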
[0333] According to an embodiment, the spatial relationship information indicates the location of an indicated or inferred sample location of the reference sub-picture (for example the top-left sample location of the reference sub-picture) in the current sub-picture. It is noted that the indicated or inferred sample location of the reference sub-picture may be indicated to correspond to a location outside the current sub-picture (e.g. have negative horizontal and/or vertical coordinates). Likewise, some sample locations, e.g. bottom and/or right-side samples, of the reference sub-picture may be located outside the current sub-picture. When the current sub-picture references samples or decoded variable values (e.g. motion vectors) that are outside the reference sub-picture, they may be considered to be unavailable for prediction. It is noted that sub-pictures of different sub-picture sequences may use the same reference sub-picture as a reference for prediction using the same or different spatial relationship information. It is also noted that the indicated or inferred sample location of the reference sub-picture may be indicated to correspond to a fractional location in the current sub-picture. In this case, the reference sub-picture is generated by resampling the current sub-picture.
[0334] According to an embodiment, the spatial relationship information indicates the locations of the four corner (e.g. top-left, top-right, bottom-left, bottom-right) samples of the current sub-picture in the reference sub-picture. The corresponding location of each sample of the current picture in the reference sub-picture may be calculated using, for example, bilinear interpolation.
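Assuming the locations of the four corners of the current sub-picture in the reference sub-picture are signalled, the corresponding location of an arbitrary sample could be obtained by bilinear interpolation as in the sketch below; the names and conventions are illustrative.

```python
def bilinear_correspondence(x, y, width, height, tl, tr, bl, br):
    """Corners tl, tr, bl, br are (x, y) locations, in the reference
    sub-picture, of the top-left, top-right, bottom-left and bottom-right
    samples of a current sub-picture of the given width and height."""
    u = x / max(width - 1, 1)    # horizontal interpolation weight in [0, 1]
    v = y / max(height - 1, 1)   # vertical interpolation weight in [0, 1]
    top_x = (1 - u) * tl[0] + u * tr[0]
    top_y = (1 - u) * tl[1] + u * tr[1]
    bot_x = (1 - u) * bl[0] + u * br[0]
    bot_y = (1 - u) * bl[1] + u * br[1]
    return ((1 - v) * top_x + v * bot_x,
            (1 - v) * top_y + v * bot_y)
```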
[0335] According to an embodiment, it may be inferred by an encoder and/or a decoder or it may be indicated in or along the bitstream by an encoder and/or it may be decoded from or along the bitstream by a decoder that spatial correspondence is applied in a wrap-around manner horizontally and/or vertically. An encoder may indicate such wrap-around correspondence for example when a sub-picture covers an entire 360-degree picture and sub-picture sequences of both views are present in the bitstream. When wrap-around correspondence is in use and a sample location outside a boundary of the reference sub-picture would be referenced in the decoding process, the referenced sample location may be wrapped around horizontally or vertically (depending on which boundary is crossed) to the other side of the reference sub-picture.
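When wrap-around correspondence is in use, a referenced sample location crossing an enabled boundary can be wrapped to the other side of the reference sub-picture, e.g. with a simple modulo operation as sketched below; this is a non-normative illustration.

```python
def wrap_sample_location(ref_x, ref_y, ref_width, ref_height,
                         wrap_horizontal=True, wrap_vertical=False):
    """Wrap a referenced sample location that crosses an enabled boundary
    to the other side of the reference sub-picture."""
    if wrap_horizontal:
        ref_x %= ref_width    # e.g. a sub-picture covering 360 degrees horizontally
    if wrap_vertical:
        ref_y %= ref_height
    return ref_x, ref_y
```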
[0336] According to an embodiment, an encoder generates and/or a decoder decodes more than one instance of spatial relationship information to indicate spatial correspondence between a current sub-picture and more than one reference sub-picture.
[0337] According to an embodiment, an encoder generates and/or a decoder decodes more than one instance of spatial relationship information to indicate more than one spatial correspondence between a current sub-picture and the reference sub-picture (from a different sub-picture sequence). Any embodiment above may be used for describing an instance of spatial relationship information. For each instance of spatial relationship information, a separate reference picture index in one or more reference picture lists may be generated in an encoder and/or in a decoder. For example, reference picture list initialization may comprise including a reference sub-picture in an initial reference picture list a number of times equal to the number of instances of spatial relationship information concerning the reference sub-picture. An encoder may indicate the use of the reference sub-picture associated with a particular instance of spatial relationship information using the corresponding reference index when indicating a reference for inter prediction. Respectively, a decoder may decode the reference index to be used as a reference for inter prediction, conclude the particular instance of spatial relationship information corresponding to that reference index, and use the associated reference sub-picture with the concluded particular instance of spatial relationship information as a reference for inter prediction. The present embodiment may be used e.g. when the reference sub-picture is bigger than the current sub-picture and object motions at different borders of the current sub-picture are in different directions (especially when they point outside the sub-picture). Thus, for each border a different reference with a different instance of spatial relationship information may be helpful.
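One way the reference picture list initialization implied above could be realized is sketched below: the same reference sub-picture is entered once per instance of spatial relationship information, so that the reference index selects both the sub-picture and the offset to apply. The data structures are illustrative assumptions.

```python
def init_reference_picture_list(reference_sub_pictures, spatial_relations):
    """reference_sub_pictures: dict mapping a sub-picture id to its decoded samples.
    spatial_relations: list of (sub_picture_id, spatial_relationship_info) pairs,
    one per signalled instance.

    Returns a list indexed by reference index; each entry carries the reference
    sub-picture together with the spatial relationship information to use."""
    ref_list = []
    for sub_pic_id, sri in spatial_relations:
        ref_list.append({'sub_picture': reference_sub_pictures[sub_pic_id],
                         'spatial_relationship_info': sri})
    return ref_list

# Example: the same (larger) reference sub-picture with two different offsets,
# e.g. one per border where motion points outside the current sub-picture.
refs = {7: "decoded sub-picture 7"}
rpl = init_reference_picture_list(refs, [(7, {'offset_x': 0,  'offset_y': 0}),
                                         (7, {'offset_x': 64, 'offset_y': 0})])
```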
[0338] According to an embodiment, unavailable samples may be copied from the other side of the sub-picture. This may be useful especially in 360-degree videos.
[0339] According to an embodiment, an access unit contains sub-pictures of the same time instance, and coded video data for a single access unit is contiguous in decoding order and is not interleaved, in decoding order, with any coded data of any other access unit.
[0340] According to another embodiment, sub-pictures of the same time instance need not be contiguous in decoding order. This embodiment may be used for example for retroactive decoding of some sub-layers of sub-picture sequences that were earlier decoded at a reduced picture rate but are now to be decoded at a higher picture rate. Such operation for multiple picture rates or different numbers of sub-layers per sub-picture sequence is described in another embodiment further below.
[0341] According to an embodiment, all sub-picture sequences have sub-pictures of the same time instances present. In other words, when one sub-picture sequence has a sub-picture for any particular time instance, all other sub-picture sequences also have a sub-picture for that time instance. An encoder may indicate in or along the bitstream, e.g. in a VPS (Video Parameter Set), and/or a decoder may decode from or along the bitstream whether all sub-picture sequences have sub-pictures of the same time instances present. According to another embodiment, sub-picture sequences may have sub-pictures present whose time instances are at least partially differing. For example, sub-picture sequences may have different picture rates from each other.
[0342] According to an embodiment, all sub-picture sequences may have the same prediction structure, have sub-pictures of the same time instances present and use sub-pictures of the same time instances as reference. An encoder may indicate in or along the bitstream, e.g. in a VPS, and/or a decoder may decode from or along the bitstream if all sub-picture sequences have the same prediction structure.
[0343] According to an embodiment, reference picture marking for a sub-picture sequence is independent of other sub-picture sequences. This may be realized e.g. by using separate SPSs (Sequence Parameter Set) and PPSs (Picture Parameter Set) for different sub-picture sequences.
[0344] According to another embodiment, reference picture marking for all sub-picture sequences is synchronized. In other words, all sub-pictures of a single time instance are either all marked as "used for reference" or all marked as "unused for reference". In an embodiment, syntax structures affecting reference picture marking are included in and/or referenced by sub-picture-specific data units, such as VCL NAL units for sub-pictures. In another embodiment, syntax structures affecting reference picture marking are included in and/or referenced by picture-specific data units, such as a picture header, a header parameter set, or alike.
[0345] According to an embodiment, bitstream or CVS (Coded Video Sequence) properties are indicated in two levels, namely per sub-picture sequence and collectively to all sub-picture sequences (i.e. all coded video data). The properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), and constraints that have been applied in encoding. Properties per sub-picture sequence may be indicated in a syntax structure that applies to the sub-picture sequence. Properties applying collectively to all sub-picture sequences may be indicated in a syntax structure applying to the entire CVS or bitstream.
[0346] According to an embodiment, two levels of bitstream or CVS (Coded Video Sequence) properties are decoded, namely per sub-picture sequence and collectively to all sub-picture sequences (i.e. all coded video data). The properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), constraints that have been applied in encoding. A decoder or a client may determine from the properties indicated for all sub-picture sequences collectively whether it can process the entire bitstream. A decoder or a client may determine from the properties indicated for individual sub-picture sequences which sub-picture sequences it is able to process.
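A client-side capability check over the two levels of signalled properties could look like the following sketch; the property fields and limit values used here are hypothetical placeholders rather than normative constraints.

```python
def can_decode(properties, max_level, max_luma_sample_rate):
    """properties: dict with 'level' and 'luma_sample_rate' as signalled either
    for the whole bitstream or for one sub-picture sequence."""
    return (properties['level'] <= max_level and
            properties['luma_sample_rate'] <= max_luma_sample_rate)

# Hypothetical signalled properties.
bitstream_props = {'level': 5.1, 'luma_sample_rate': 534_773_760}
per_sequence_props = {'seq0': {'level': 4.0, 'luma_sample_rate': 66_846_720},
                      'seq1': {'level': 4.1, 'luma_sample_rate': 133_693_440}}

# First check whether the entire bitstream can be processed; otherwise pick
# the subset of sub-picture sequences that fits the decoder capability.
if can_decode(bitstream_props, max_level=5.1, max_luma_sample_rate=534_773_760):
    decodable = list(per_sequence_props)
else:
    decodable = [s for s, p in per_sequence_props.items()
                 if can_decode(p, max_level=4.1, max_luma_sample_rate=133_693_440)]
```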
[0347] According to an embodiment, it is indicated in or along the bitstream, e.g. in SPS, and/or decoded from or along the bitstream:
- if motion vectors do not cause references to sample locations over sub-picture boundaries, or
- if motion vectors may cause references to sample locations over sub-picture boundaries.
[0348] According to an embodiment, the properties per sub-picture sequence and/or the properties applying collectively to all sub-picture sequences are informative of the sample count and/or sample rate limit applicable in the sub-picture sequence and/or all sub-picture sequences wherein:
- the sample locations over sub-picture boundaries are excluded provided that motion vectors do not cause references to sample locations over sub-picture boundaries, and
- the sample locations over sub-picture boundaries that may be referenced are included provided that motion vectors may cause references to sample locations over sub-picture boundaries.
[0349] According to an embodiment, parameters related to a sub-picture and/or sub-picture sequence are encoded into and/or decoded from a picture parameter set. Sub-pictures of the same picture, access unit, or time instance are allowed but not necessarily required to refer to different picture parameter sets.
[0350] According to an embodiment, information indicative of sub-picture width and height is indicated in and/or decoded from a picture parameter set. For example, the sub-picture width and height may be indicated and/or decoded in units of CTUs. The picture parameter set syntax structure may comprise the following syntax elements:
[Syntax table shown as a figure in the original: picture parameter set syntax comprising multiple_subpics_enabled_flag and, when present, subpic_width_in_ctus_minus1 and subpic_height_in_ctus_minus1.]
[0351] The semantics of the syntax elements may be specified as follows:
multiple_subpics_enabled_flag equal to 0 specifies that a picture contains exactly one sub-picture and that all VCL NAL units of an access unit reference the same active PPS.
multiple_subpics_enabled_flag equal to 1 specifies that a picture may contain more than one sub-picture and each sub-picture may reference a different active PPS. subpic_width_in_ctus_minus1 plus 1, when present, specifies the width of the sub-picture for which this PPS is the active PPS.
subpic_height_in_ctus_minus1 plus 1, when present, specifies the height of the sub-picture for which this PPS is the active PPS. When subpic_width_in_ctus_minus1 and subpic_height_in_ctus_minus1 are present in a PPS that is activated, variables related to picture dimensions may be derived based on them and may override the respective variables derived from the syntax elements of the SPS.
[0352] It needs to be understood that information indicative of sub-picture width and height may be realized differently than what is described above in detail. In a first example, the PPS may contain the tile row heights and tile column widths of all tile rows and tile columns, respectively, and the sub-picture height and width are the sums of all the tile row heights and tile column widths, respectively. In a second example, sub-picture width and height may be indicated and/or decoded in units of the minimum coding block size. This option would enable finer granularity for the last tile column and the last tile row.
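The derivation alternatives mentioned above can be summarized with the following sketch; the syntax element names follow the text, while the CTU size and the example values are illustrative assumptions.

```python
def subpic_size_from_pps(subpic_width_in_ctus_minus1,
                         subpic_height_in_ctus_minus1, ctu_size=128):
    """Width and height in luma samples when signalled in units of CTUs."""
    width = (subpic_width_in_ctus_minus1 + 1) * ctu_size
    height = (subpic_height_in_ctus_minus1 + 1) * ctu_size
    return width, height

def subpic_size_from_tiles(tile_column_widths, tile_row_heights):
    """First example above: sub-picture width and height as the sums of the
    tile column widths and tile row heights, respectively."""
    return sum(tile_column_widths), sum(tile_row_heights)

print(subpic_size_from_pps(9, 5))                      # (1280, 768) with 128x128 CTUs
print(subpic_size_from_tiles([640, 640], [384, 384]))  # (1280, 768)
```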
[0353] According to an embodiment, parameters related to a sub-picture sequence are encoded into and/or decoded from a sub-picture parameter set. A single sub-picture parameter set may be used by sub-pictures of more than one sub-picture sequence but is not required to be used by all sub-picture sequences. A sub-picture parameter set may for example comprise information similar to that included in a picture parameter set for conventional video coding, such as HEVC. For example, a sub-picture parameter set may indicate which coding tools are enabled in coded image segments of the sub-pictures referring to the sub-picture parameter set. Sub-pictures of the same time instance may refer to different sub-picture parameter sets. A picture parameter set may indicate parameters that apply collectively to more than one sub-picture sequence or across sub-pictures, such as spatial relationship information.
[0354] According to an embodiment, a sub-picture sequence is encapsulated as a track in a container file. A container file may contain multiple tracks of sub-picture sequences. Prediction of a sub-picture sequence from another sub-picture sequence may be indicated through file format metadata, such as a track reference.
[0355] According to an embodiment, selected sub-layer(s) of a sub-picture sequence are encapsulated as a track. For example, sub-layer 0 may be encapsulated as a track. Sub-layer-wise encapsulation may enable requesting, transmission, reception, and/or decoding of a subset of sub-layers for tracks that are not needed for rendering.
[0356] According to an embodiment, one or more collector tracks are generated. A collector track indicates which sub-picture tracks are suitable to be consumed together. Sub-picture tracks may be grouped into groups containing alternatives to be consumed. For example, one sub-picture track per group may be intended to be consumed for any time range. Collector tracks may reference either or both of sub-picture tracks and/or groups of sub-picture tracks. Collector tracks might not contain instructions for modifying coded video content, such as VCL NAL units. In an embodiment, the generation of a collector track comprises but is not limited to authoring and storing one or more of the following pieces of information:
- Parameter sets and/or headers that apply when the collector track is resolved. For example, sequence parameter set(s), picture parameter set(s), header parameter set(s), and/or picture header(s) may be generated. For example, a collector track may contain the picture header that applies for a picture when its sub-pictures may originate from both random-access pictures and non-random-access pictures or be of both random-access sub-picture type and non-random-access sub-picture type.
- Picture composition data.
- Bitstream or CVS (Coded Video Sequence) properties applying collectively to the sub-picture sequences resolved from the collector track. The properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), and constraints that have been applied in encoding.
[0357] In an embodiment, a sample in a collector track pertains to multiple samples of associated sub-picture tracks. For example, by selecting the sample duration of a collector track to pertain to multiple samples of associated sub-picture tracks, it can be indicated that the same parameter sets and/or header, and/or the same picture composition data applies to a period of time in the associated sub-picture tracks.
[0358] According to an embodiment, a client or alike identifies one or more collector tracks being available, wherein:
- a collector track indicates which sub-picture tracks are suitable to be consumed together, and
- collector tracks may reference either or both of sub-picture tracks and/or groups of sub-picture tracks (e.g. a group containing alternative sub-picture tracks out of which one is intended to be selected for consumption for any time range), and
- collector tracks might not contain instructions for modifying coded video content, such as VCL NAL units.
[0359] In an embodiment, the client or alike parses from one or more collector tracks or information accompanying the one or more collector tracks one or more of the following pieces of information:
- Parameter sets and/or headers that apply when the collector track is resolved.
- Picture composition data.
- Bitstream or CVS (Coded Video Sequence) properties applying collectively to the sub-picture sequences resolved from the collector track. The properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), and constraints that have been applied in encoding.
[0360] In an embodiment, the client or alike selects a collector track from the one or more collector tracks to be consumed. The selection may be based on but is not limited to the above-listed pieces of information.
[0361] In an embodiment, the client or alike resolves the collector track to generate a bitstream for decoding. At least a subset of the information included in or accompanying the collector track may be included in the bitstream for decoding. The bitstream may be generated piece-wise, e.g. access unit by access unit. The bitstream may then be decoded, and the decoding may be performed piece-wise, e.g. access unit by access unit.
[0362] It needs to be understood that the embodiments described in relation to collector tracks equally apply to tracks called differently but of essentially the same nature. For example, instead of a collector track, the term parameter set track could be used, since the information included in the track could be considered parameters or parameter sets rather than VCL data.
[0363] According to an embodiment, sub-picture sequence(s) are decapsulated from selected tracks of a container file. Samples of the selected tracks may be arranged into a decoding order that complies with a coding format or a coding standard, and then passed to a decoder. For example, when a second sub-picture is predicted from a first sub-picture, the first sub-picture is arranged prior to the second sub-picture in decoding order.
[0364] According to an embodiment, each track containing a sub-picture sequence forms a Representation in the MPD. An Adaptation Set is generated per each group of sub-picture sequence tracks that is collocated and also otherwise shares the same properties such that switching between the Representations of an Adaptation Set is possible e.g. with a single decoder instance.
[0365] According to an embodiment, prediction of a sub-picture sequence from another sub-picture sequence may be indicated through streaming manifest metadata, such as a @dependencyId in DASH MPD.
[0366] According to an embodiment, an indication of a group of Adaptation Sets is generated into an MPD, wherein the Adaptation Sets contain Representations that carry sub-picture sequences, and the sub-picture sequences are such that can be decoded with a single decoder. According to an embodiment, a client infers from the indicated group that any combination of selected dependent Representations whose complementary Representations are also in the combination and any selected independent or complementary Representations can be decoded.
[0367] According to an embodiment, a client selects, e.g. based on the above-mentioned indicated group, estimated throughput, and use case needs (see e.g. the embodiments on viewport-dependent streaming below), from which Representations (Sub)segments are requested.
[0368] According to an embodiment, sub-pictures are encoded onto and/or decoded from more than one layer of scalable video coding. In an embodiment, a reference picture for inter-layer prediction comprises a picture generated by the output picture compositing process. In another embodiment, inter-layer prediction is performed from a reconstructed sub-picture of a reference layer to a sub-picture of an enhancement layer.
[0369] According to an embodiment, a sub-picture sequence corresponds to a layer of scalable video coding. Embodiments can be used to realize e.g. quality scalability, region-of-interest scalability, or view scalability (i.e. multiview or stereoscopic video coding). Thus, multi-layer coding may be replaced by sub-picture-based coding. Sub-picture-based coding may be more advantageous in many use cases compared to scalable video coding. For example, many described embodiments enable a large number of sub-picture sequences, which may be advantageous e.g. in point cloud coding or volumetric video coding where the generation of patches is dynamically adapted. In contrast, scalable video coding has conventionally assumed a fixed maximum number of layers (e.g. as determined by the number of bits in the nuh_layer_id syntax element in HEVC). Furthermore, many described embodiments enable dynamic selection of the (de)coding order of sub-pictures and reference sub-pictures for prediction, whereas scalable video coding conventionally has a fixed (de)coding order of layers (within an access unit) and a fixed set of allowed inter-layer dependencies within a coded video sequence.
[0370] Embodiments may be used for, but are not limited to, selecting (for encoding) and/or decoding sub-pictures or sub-picture sequences as any of the following:
- whole pictures of a normal single-view 2D video (in this case each picture has only one sub-picture)
- partitions of a spatial partitioning of a video; partitions may correspond to coded image segments
- partitions of a spatiotemporal partitioning of a video; spatiotemporal partitions may be selected similarly to MCTSs in various use cases
- views of stereoscopic or multiview video as discussed above
- layers of a multi-layer (scalable) video as discussed above
- surfaces of a projection structure of 360-degree projection, such as faces of a multi-face 360-degree projection (e.g. cubemap)
- packed regions as indicated by region-wise packing information
- spatially contiguous single-resolution parts of a multi-resolution packing of a video (for example multi-resolution ERP or CMP)
- parts or patches of a point cloud projected onto a surface (texture or depth); a sub-picture sequence may comprise respective patches in subsequent time instances
- one or more regions of interest coded as sub-pictures at a higher resolution than other areas
- aggregation of coded videos from different sources (e.g. different cameras) as sub-picture sequences within one bitstream; this may be used for multi-point video conferencing, for example
[0371] In the following, some example embodiments using sub-picture-based (de)coding are discussed, e.g. from the point of view of viewport-dependent 360-degree video streaming; coding of scalable, multiview and stereoscopic video; coding of multi-face content with overlapping; and coding of point cloud content.
[0372] Viewport-dependent 360-degree video streaming:
[0373] According to an embodiment, a coded sub-picture sequence may be encapsulated in a track of a container file, the track may be partitioned into Segments and/or Subsegments, and a Representation may be created in a streaming manifest (e.g. MPEG-DASH MPD) to make the (Sub)segments available through requests and to announce properties of the coded sub-picture sequence. The process of the previous sentence may be performed for each of the coded sub-picture sequences.
[0374] According to an embodiment, a client apparatus may be configured to parse from a manifest information of a plurality of Representations and to parse from the manifest a spherical region for each of the plurality of Representations. The client apparatus may also parse from the manifest values indicative of the quality of the spherical regions and/or resolution information for the spherical regions or their 2D projections. The client apparatus determines which Representations are suitable for its use. For example, the client apparatus may include means to detect head orientation when using a head-mounted display and select a Representation with a higher quality to cover the viewport than in Representations selected for other regions. As a consequence of the selection, the client apparatus may request (Sub)Segments of the selected Representations.
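As a sketch of such selection logic (not a normative client algorithm), a client could rank the per-region Representations by whether their spherical region overlaps the current viewport and request a higher-quality version for the viewport regions; region/viewport overlap testing and the data layout are abstracted away as assumptions here.

```python
def select_representations(regions, viewport_overlaps, estimated_throughput):
    """regions: dict mapping a region id to its available Representations, each a
    dict with 'id', 'quality' (lower value means higher quality, as in DASH
    quality ranking) and 'bandwidth'.
    viewport_overlaps: set of region ids overlapping the current viewport."""
    selection, budget = {}, estimated_throughput
    for region_id, reps in regions.items():
        reps = sorted(reps, key=lambda r: r['quality'])          # best quality first
        wanted = reps[0] if region_id in viewport_overlaps else reps[-1]
        if wanted['bandwidth'] > budget and wanted is not reps[-1]:
            wanted = reps[-1]                                     # fall back to lowest bitrate
        selection[region_id] = wanted['id']
        budget -= wanted['bandwidth']
    return selection
```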
[0375] According to an embodiment, the same content is coded at multiple resolutions and/or bitrates using sub-picture sequences. For example, different parts of a 360-degree content may be projected to different surfaces, and the projected faces may be downsampled to different resolutions. For example, the faces that are not in the current viewport may be downsampled to lower resolution. Each face may be coded as a sub-picture.
[0376] According to an embodiment, the same content is coded at different random-access intervals using sub-picture sequences.

[0377] According to an embodiment, a change in viewing orientation causes a partly different selection of Representations to be requested than earlier. The new Representations to be requested may be requested, or their decoding may be started, from the next random-access position within the sub-picture sequences carried in the Representations. When sub-picture sequences are made available at several random-access intervals, Representations having more frequent random-access positions may be requested as a response to a viewing orientation change until a next (Sub)segment with a random-access position and of similar quality is available from the respective Representations having less frequent random-access positions. Representations that need not be changed as a response to a viewing orientation change need not have random-access positions. As discussed already earlier, sub-pictures may be allowed to have different sub-picture types or NAL unit types. For example, a sub-picture of a particular access unit or time instance may be of a random-access type while another sub-picture of the same particular access unit or time instance may be of a non-random-access type. Thus, sub-pictures of bitstreams having different random-access intervals can be combined.
[0378] According to an embodiment, shared coded sub-pictures are coded among the sub-picture sequences. Shared coded sub-pictures are identical in respective sub-picture sequences of different bitrates, both in their coded form (e.g. VCL NAL units are identical) and in their reconstructed form (the reconstructed sub-pictures are identical).
[0379] According to an embodiment, shared coded sub-pictures are coded in their own sub-picture sequence.
[0380] In an embodiment, shared coded sub-pictures are indicated in or along the bitstream (e.g. by an encoder) not to be output by a decoder, and/or are decoded from or along the bitstream not to be output by a decoder.
[0381] Shared coded sub-pictures may be made available as separate Representation(s) or may be included in "normal" Representations. When shared coded sub-pictures are made available as separate Representation(s), the client apparatus may constantly request and receive those Representation(s).
[0382] The above-described selection process(es) depending on viewing orientation apply when shared coded sub-pictures are in use, with the difference that, in addition to the capability of switching between Representation(s) at random-access positions, the shared coded sub-pictures also offer that capability.
[0383] Figure 12 illustrates an example of using shared coded sub-pictures for multi-resolution viewport-dependent 360-degree video streaming.
[0384] The cubemap content is resampled before encoding to three resolutions (A, B, C). It needs to be understood that cubemap projection is meant as one possible choice for which the embodiment can be realized, but generally other projection formats can likewise be used. In this example, the content at each resolution is split into sub-pictures of equal dimensions, although generally different dimensions could likewise be used.

[0385] In this example, shared coded sub-pictures (indicated with a rectangle containing the S character) are coded periodically, but it needs to be understood that different strategies of coding shared coded sub-pictures could additionally or alternatively be used. For example, scene cuts could be detected, IRAP pictures or alike could be coded for detected scene cuts, and periods for coding shared coded sub-pictures could be reset at IRAP pictures or alike.
[0386] In this example, shared coded sub-pictures are coded with "normal" sub-pictures (indicated with striped rectangles in the figure) in the same sub-picture sequences. The shared coded sub-picture and the respective "normal" sub-picture represent conceptually different units in the bitstream, e.g. with different decoding times, with different picture order counts, and/or belonging to different access units. In another embodiment, a sequence of shared coded sub-pictures could form its own sub-picture sequence from which the respective "normal" sub-picture sequence could be predicted. If prediction from one sub-picture sequence (the shared coded sub-picture sequence in this example) to another is enabled, the shared coded sub-picture and the respective "normal" sub-picture from the same input picture can belong to the same time instance (e.g. be a part of the same access unit).
[0387] In this example, shared coded sub-pictures have the same dimensions as the respective "normal" sub-pictures. In another embodiment, shared coded sub-pictures could have different dimensions. For example, a shared coded sub-picture could cover an entire cube face or all cube faces of a cubemap, and spatial relationship information could be used to indicate how "normal" sub-pictures spatially relate to shared coded sub-pictures. An advantage of this approach is to enable prediction across a larger area within and between shared coded sub-pictures when compared to "normal" sub-pictures.
[0388] The client apparatus can select, request, receive, and decode:
- shared coded sub-pictures A00 .. A95, B00 .. B23, and C0 .. C5 of all desired resolutions
- any subset of sub-pictures of other coded pictures of any selected bitrate (on sub-picture basis)

[0389] According to an embodiment, a sub-picture sequence representing 360-degree video is coded at a "base" fidelity or quality, and hence the sub-picture sequence may be referred to as the base sub-picture sequence. This sub-picture sequence may be considered to carry shared coded sub-pictures. Additionally, one or more sub-picture sequences representing spatiotemporal subsets of the 360-degree video are coded at a fidelity or quality that is higher than the base fidelity or quality. For example, the projected picture area or the packed picture area may be partitioned into rectangles, and each sequence of rectangles may be coded as a "region-of-interest" sub-picture sequence. An ROI sub-picture sequence may be predicted from the base sub-picture sequence and from reference sub-pictures of the same ROI sub-picture sequence. Spatial relationship information is used to indicate the spatial correspondence of the ROI sub-picture sequence in relation to the base sub-picture sequence. Several ROI sub-picture sequences of the same spatial position can be coded, e.g. for different bitrates or resolutions.

[0390] In an embodiment, the base sub-picture sequence has the same picture rate as the ROI sub-picture sequences and thus ROI sub-picture sequences can be selected to cover a subset of the 360-degree video, e.g. the viewport with a selected margin for viewing orientation changes. In another embodiment, the base sub-picture sequence has a lower picture rate than the ROI sub-picture sequences and thus ROI sub-picture sequences can be selected to cover the entire 360-degree video. The viewport with a selected margin for viewing orientation changes can be selected to be requested, transmitted, received, and/or decoded from ROI sub-picture sequences with higher quality than the ROI sub-picture sequences covering the remainder of the sphere coverage.
[0391] In some solutions, the base sub-picture sequence is always received and decoded.
Additionally, ROI sub-picture sequences selected on the basis of the current viewing orientation are received and decoded.
[0392] Random-access sub-pictures for the ROI sub-picture sequences may be predicted from the base sub-picture sequence. Since the base sub-picture sequence is consistently received and decoded, the random-access sub-picture interval (i.e., the SAP interval) for the base sub-picture sequence can be longer than that for the ROI sub-picture sequences. The encoding method facilitates switching to requesting and/or receiving and/or decoding another ROI sub-picture sequence at a SAP position of that ROI sub-picture sequence. No intra-coded sub-picture at that ROI sub-picture sequence is required to start the decoding of that ROI sub-picture sequence, and consequently compression efficiency is improved compared to a conventional approach.
[0393] The benefits of using the invention in viewport-dependent 360-degree streaming include the following:
- Extractor track(s) or tile base track(s) or alike are not needed for merging of MCTSs in viewport-dependent streaming, since sub-picture sequences can be decoded without modifications regardless of which set of sub-picture sequences are received or passed to decoding. This reduces content authoring burden and simplifies client operation.
- No changes in VCL NAL units are needed in late-binding-based viewport-dependent streaming, since sub-picture sequences can be decoded without modifications regardless of which set of sub-picture sequences are received or passed to decoding. This reduces client implementation complexity.
- Picture size in terms of pixels need not be constant. This advantage becomes apparent when shared coded sub-pictures are used, where a greater number of pixels may be decoded in the time instances including shared coded sub-pictures than in other time instances.
- Flexibility in choosing the number of sub-pictures according to the viewport size and head motion margin. In some prior-art methods, the number of sub-picture tracks was pre-defined when creating an extractor track for merging the content of the sub-picture tracks into a single bitstream.
- Flexibility in choosing the number of sub-pictures according to the decoding capacity and/or availability of received data. The number of decoded sub-pictures can be dynamically chosen depending on available decoding capacity, e.g. on a multi-process or multi-tasking system with resource sharing. The coded data for a particular time instance can be passed to decoding even if some requested sub-pictures for it have not been received. Thus, delivery delays concerning only a subset of sub-picture sequences do not stall the decoding and playback of other sub-picture sequences.
- Switching between bitrates and received sub-pictures can take place at any shared coded sub-picture and/or random-access sub-picture. Several versions of the content can be encoded at different intervals of shared coded sub-pictures and/or random-access sub-pictures. In the decoded bitstreams, shared coded sub-pictures and/or random-access sub-pictures need not be aligned in all sub-picture sequences; thus better rate-distortion efficiency can be achieved when the switching and/or random-access property is only in those sub-picture sequences where it is needed.
[0394] As discussed above, depending on the use case, the term "sub-picture" can refer to various use cases and/or types of projections. Examples relating to the coding of sub-pictures in the context of a few of these use cases are discussed next.
[0395] Coding of multi-face content with overlapping
[0396] According to an embodiment, different parts of a 360-degree content may be projected to different surfaces, and the projected faces may have overlapped content. In another embodiment, a content may be divided into several regions (e.g. tiles) with overlapped content. Each face or region may be coded as a sub-picture. Each sub-picture may use a part of another sub-picture as a reference frame, as is shown in Figures 13 and 14 for two examples, where the non-overlapped contents are shown as white boxes, the overlapped areas are shown in gray, and the corresponding parts in sub-pictures are indicated by a dashed rectangle. Spatial relationship information could be used to indicate how a sub-picture spatially relates to other sub-pictures.
[0397] Coding of point cloud content
[0398] According to an embodiment, each part of a point cloud content is projected to a surface to generate a patch. Each patch may be coded as a sub-picture. Different patches may have redundant data. Each sub-picture may use another sub-picture to compensate for this redundancy. In the example in Figure 15, different parts of a point cloud have been projected to surface 1 and surface 2 to generate patch 1 and patch 2, respectively. Each patch is coded as a sub-picture. In this example, a part of the point cloud content which is indicated by c, d, e is redundantly projected to two surfaces, so the corresponding content is redundant in patch 1 and patch 2. In Figure 15, the part of sub-picture 2 which may be predicted from sub-picture 1 is indicated by a dashed box. The collection of reconstructed sub-pictures may form the output picture. Alternatively, reconstructed sub-pictures may be arranged into a 2D output picture.

[0399] According to an encoding embodiment, a patch of a second PCC layer is coded as a second sub-picture and is predicted from the reconstructed sub-picture of the respective patch of a first PCC layer. Similarly, according to a decoding embodiment, a second sub-picture is decoded, wherein the second sub-picture represents a patch of a second PCC layer, and wherein the decoding comprises prediction from the reconstructed sub-picture that represents the respective patch of a first PCC layer.
[0400] According to an embodiment, sub-picture sequences are intentionally encoded, requested, transmitted, received, and/or decoded at different picture rates and/or with different numbers of sub-layers. This embodiment is applicable e.g. when only a part of the content is needed for rendering at a particular time. For example, in 360-degree video only the viewport is needed for rendering at a particular time, and in point cloud coding and volumetric video the part needed for rendering may depend on the viewing position and viewing orientation. The picture rate and/or the number of sub-layers for sub-picture sequences that are needed for rendering may be selected (in encoding, requesting, transmitting, receiving, and/or decoding) to be higher than for those sub-picture sequences that are not needed for rendering and/or not likely to be needed for rendering soon (e.g. for responding to a viewing orientation change). With the described arrangement, the needed decoding capacity and power consumption may be reduced. Alternatively, delivery and/or decoding speedup may be achieved e.g. for faster than real-time playback. When decoding of a sub-picture sequence at a greater number of sub-layers is desired (e.g. for responding to a viewing orientation change), sub-layer access pictures, such as TSA and/or STSA pictures of HEVC, may be used to restart encoding, requesting, transmitting, receiving, and/or decoding of sub-layers.
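A sketch of such a rate selection is given below: sequences needed (or likely to be needed soon) for rendering get the full number of sub-layers, others a reduced number; the sub-layer counts and data layout are assumptions for illustration, and switching up to more sub-layers would in practice happen at TSA/STSA positions as described above.

```python
def select_sub_layers(sequence_ids, needed_for_rendering,
                      max_sub_layers=3, reduced_sub_layers=1):
    """Return the number of temporal sub-layers to request/decode per
    sub-picture sequence."""
    return {seq: (max_sub_layers if seq in needed_for_rendering
                  else reduced_sub_layers)
            for seq in sequence_ids}

# Example: only sequences covering the viewport are decoded at full picture rate.
print(select_sub_layers(['front', 'left', 'right', 'back'],
                        needed_for_rendering={'front', 'right'}))
```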
[0401] According to an embodiment, a TSA sub-picture or alike can be encoded into the lowest sub-layer of a sub-picture sequence not predicted from other sub-picture sequences. This TSA sub-picture indicates that all sub-layers of this sub-picture sequence can be predicted starting from this TSA picture. According to an embodiment, a TSA sub-picture or alike is decoded from the lowest sub-layer of a sub-picture sequence not predicted from other sub-picture sequences. In an embodiment, it is concluded that requesting, transmission, reception, and/or decoding of any sub-layers above the lowest sub-layer can start from this TSA sub-picture, and consequently such requesting, transmission, reception, and/or decoding takes place.
[0402] The present embodiments provide also other advantages in addition to those already discussed above. For example, loop filtering across sub-picture boundaries is disabled. Thus, a very low delay operation may be achieved by processing the decoded sub-pictures output by the decoding process immediately (e.g., through color space conversion from YUV to RGB, etc.). This enables pipelining of the processes involved in playing (e.g. receiving VCL NAL units, decoding VCL NAL units, post-processing decoded sub-pictures). A similar benefit may also be achieved at the encoding end. Filtering over borders of non-contiguous image content, such as filtering across disjoint projection surfaces, may cause visible artefacts. By disabling loop filtering, visible artifacts at sub-picture borders are reduced and subjective quality is improved.

[0403] As a further advantage, when sub-picture sequences are independent from each other, sub-pictures can be decoded in any order and sub-pictures of different pictures can be decoded in parallel. This provides more flexibility for load balancing between processing cores.
[0404] As a further advantage, a sequence of patches of point cloud or volumetric video can be indicated to be of the same or similar source (e.g. the same projection surface) by indicating them to belong to the same sub-picture sequence. Consequently, patches of the same source can be inter-predicted from each other. Conventionally, patches of point cloud or volumetric video have been packed onto a 2D picture, and patches of the same or similar source should have been positioned spatially at the same location on the 2D picture. However, as the number and size of patches may vary, such temporal alignment of corresponding patches might not be straightforward.
[0405] As a further advantage, only high-level syntax structures, such as the picture composition data, need to be rewritten for extracting a subset of sub-pictures of a bitstream or merging sub-pictures of different bitstreams. Coded data of sub-pictures need not be changed. This makes viewport-dependent 360-degree streaming applications easier to implement. The same applies to viewing-position- and orientation-dependent volumetric video streaming applications.
[0406] In addition, the number or pixel count of sub-pictures per picture does not have to stay constant. This makes 360-degree and 6DoF streaming applications that are based on "late binding" and adaptation based on viewing orientation and/or viewing position easier to implement. The number of received sub-pictures can be chosen based on the viewport size and/or the decoding capacity. If a sub-picture is not received in time, the picture can be decoded without it.
[0407] By allowing motion vectors to reference data outside sub-picture boundaries compression efficiency can be improved compared to motion-constrained tile sets.
[0408] By allowing prediction from one sub-picture sequence to another, compression efficiency can be improved e.g. for:
- Inter-view prediction, when the first sub-picture sequence represents a first view, and the second sub-picture sequence represents a second view.
- Prediction from a "shared sub-picture sequence" can be enabled for adaptive 360 and 6DoF streaming.
[0409] Since the picture width and height may be allowed to be unaligned with a CTU boundary (or alike) and since sub-picture decoding operates as conventional picture decoding, flexibility in defining sub-picture sizes is achieved. For example, sub-picture sizes used for 360-degree video need not be multiples of CTU width and height. Thus, decoding capacity in terms of pixels/second can be utilized more flexibly.
[0410] In multi-face projections like CMP, where there are discontinuities at face boundaries, sub-picture coding can improve intra coding at the face boundaries by not using neighboring face pixels for prediction.

[0411] In the following, the reference sub-picture manipulation process will be described in more detail, in accordance with an embodiment.
[0412] An encoder selects which of the sub-pictures could be used as a source of a manipulated reference sub-picture. The encoder generates the set of manipulated reference sub-pictures from the set of decoded sub-pictures using the identified reference sub-picture manipulation process; and includes at least one of the manipulated reference sub-pictures in a reference picture list for prediction.
[0413] The encoder includes in or along the bitstream an identification of the reference sub-picture manipulation process and may also include in the bitstream information indicative of, or infer, a set of decoded sub-pictures to be manipulated, and/or a set of manipulated reference sub-pictures to be generated.
[0414] A decoder decodes from or along the bitstream the identification of the reference sub-picture manipulation process. The decoder also decodes from the bitstream information indicative of, or infers, a set of decoded sub-pictures to be manipulated, and/or a set of manipulated reference sub-pictures to be generated.
[0415] The decoder may also generate the set of manipulated reference sub-pictures from the set of decoded sub-pictures using the identified reference sub-picture manipulation process; and include at least one of the manipulated reference sub-pictures in a reference picture list for prediction.
[0416] In an embodiment, an encoder indicates in or along the bitstream and/or a decoder decodes from or along the bitstream and/or it is inferred by an encoder and/or a decoder that a reference sub-picture manipulation operation is to be carried out when the reference sub-picture(s) used as input in the reference sub-picture manipulation become available.
[0417] In an embodiment, an encoder encodes into or along the bitstream and/or a decoder decodes from or along the bitstream a control signal indicating whether a reference sub-picture is to be provided for reference sub-picture manipulation when it becomes available (e.g., right after it has been decoded). The control signal may be included for example in a sequence parameter set, a picture parameter set, a header parameter set, a picture header, a sub-picture delimiter or header, and/or an image segment header (e.g. a tile group header). When included in a parameter set, the control signal may apply to each sub-picture referring to the parameter set. The control signal may be specific to a sub-picture sequence (and may be accompanied by a sub-picture sequence identifier) or may apply to all sub-picture sequences that are decoded. When included in a header, the control signal may apply to the spatiotemporal unit wherein the header is applied. In some cases, the control signal is applicable in the first header and may be repeated in subsequent headers applying to the same spatiotemporal units. For example, the control signal may be included in an image segment header (e.g. a tile group header) of a sub-picture, indicating that the decoded sub-picture is provided to the reference sub-picture manipulation.
[0418] In an embodiment, an encoder indicates in or along the bitstream and/or a decoder decodes from or along the bitstream and/or it is inferred by an encoder and/or a decoder that a reference sub-picture manipulation operation is to be carried out when the manipulated reference sub-picture is referenced in encoding and/or decoding or is about to be referenced in encoding and/or decoding. For example, the reference sub-picture manipulation process may be carried out when the manipulated reference sub-picture is included in a reference picture list among "active" reference sub-pictures that may be used as reference for prediction in the current sub-picture.
- Decoded picture buffering is performed on picture basis rather than on sub-picture basis.
- An encoder and/or a decoder generates a reference picture from decoded sub-pictures of the same access unit or time instance using the picture composition data.
- The generation of a reference picture is performed identically or similarly to what is described in other embodiments for generating output pictures.
[0420] An embodiment, in which decoded picture buffering is performed on picture basis, comprises the following: A reference sub-picture to be used as input to the reference sub-picture manipulation process is generated by extracting an area from a reference picture in the decoded picture buffer. The extraction may be done as a part of the decoded picture buffering process or a part of the reference sub-picture manipulation process or be operationally connected to the decoded picture buffering process and/or the reference sub-picture manipulation process. In an embodiment, the area is the area that collocates with the current sub-picture being encoded or decoded. In another
embodiment, the area is provided through spatial relationship information. Thus, the reference sub-picture manipulation process gets reference sub-picture(s) from the decoded picture buffering process similarly to other embodiments, and the reference sub-picture manipulation process may operate similarly to other embodiments.
[0421] Identification of the reference sub-picture manipulation process and signalling accompanying information
[0422] The above-mentioned sub-picture packing may involve indicating packing information for sub-picture sequences, sub-pictures, or regions within sub-pictures that may be used as source for the sub-picture packing. In an embodiment, the packing information is indicated similarly to but separately from the picture composition data. In an embodiment, an encoder indicates in or along the bitstream that the picture composition data is reused as packing information, and/or likewise a decoder decodes from or along the bitstream that the picture composition data is reused as packing information. In an embodiment, the packing information is indicated similarly to the region-wise packing SEI message or the region-wise packing metadata of OMAF.
[0423] It is noted that packing information may be indicated for a set of reconstructed sub-pictures (e.g. all sub-pictures to be used for output picture compositing), but a manipulated reference sub-picture may be generated from those reconstructed sub-pictures that are available at the time when the manipulated reference sub-picture is created. For example, a manipulated reference sub-picture that is used as a reference for a third sub-picture of a first time instance may be generated from a first reconstructed sub-picture and a second reconstructed sub-picture (also of the first time instance) that precede the third sub-picture in decoding order, while the packing information used in generating the manipulated reference sub-picture may comprise the information for the first, second, and third sub-pictures.
[0424] The blending as part of generating the manipulated reference sub-picture may be performed either so that each sample value for a sample position is calculated as the average of all samples of the reference sub-pictures positioned onto this sample position, or so that each sample may be calculated using a weighted average according to the location of the sample with respect to the location of available and unavailable samples.
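As a rough illustration of the two blending alternatives above, the following sketch (not part of the described embodiments; array and function names are assumptions) averages co-located samples of the reference sub-pictures, either uniformly or with caller-supplied weights that may e.g. depend on the distance to the available samples.

```python
import numpy as np

def blend_average(refs, occupancy):
    """Plain averaging: each sample value is the average of all samples of the
    reference sub-pictures positioned onto that sample position.
    refs: list of HxW sample arrays; occupancy: list of HxW boolean masks."""
    acc = np.zeros(refs[0].shape, dtype=np.float64)
    cnt = np.zeros(refs[0].shape, dtype=np.float64)
    for ref, occ in zip(refs, occupancy):
        acc += np.where(occ, ref, 0.0)
        cnt += occ
    return acc / np.maximum(cnt, 1)        # positions with no contribution stay 0

def blend_weighted(refs, weights):
    """Weighted averaging: the weights (one HxW array per reference) may e.g.
    decrease with the distance of a sample from the available samples of the
    corresponding reference sub-picture."""
    acc = sum(w * r for w, r in zip(weights, refs))
    return acc / np.maximum(sum(weights), 1e-9)
```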
[0425] Adaptive resolution change
[0426] An adaptive resolution change (ARC) refers to dynamically changing the resolution within the video bitstream or video session, for example in video-conferencing use-cases. Adaptive resolution change may be used e.g. for better network adaptation and error resilience against transmission errors or losses. For better adaptation to changing network requirements for different content, it may be desired to be able to change the temporal and/or spatial resolution in addition to the quality. ARC may also enable a fast start of a session or after seeking to a new time position, wherein the start-up time may be reduced by first sending a low-resolution frame and then increasing the resolution. ARC may further be used in composing a conference. For example, when a person starts speaking, his/her corresponding resolution may be increased.
[0427] ARC may be conventionally carried out by encoding a random-access picture (e.g. an HEVC IRAP picture) at the position where the resolution change takes place. However, the intra coding applied in random-access pictures makes them less efficient than inter-coded pictures in rate-distortion performance. Consequently, one possibility is to encode a random-access picture at a relatively low quality to keep the bit count close to that of inter-coded pictures so that the delay is not significantly increased. However, a low-quality picture may be subjectively noticeable and also negatively affects the rate-distortion performance of pictures predicted from it. Another possibility is to encode a random-access picture at a relatively high quality, but then the relatively high bit count may cause higher delay. In low-delay conversational applications, it might not be possible to compensate the high delay with initial buffering, which might cause noticeable picture rate fluctuation or motion discontinuity.
[0428] The use of the reference sub-picture manipulation process for adaptive resolution changes is illustrated with Figure 17a, in accordance with an embodiment. In this figure empty squares illustrate coded sub-pictures (e.g. 200, 201) and squares with the letter M illustrate manipulated reference sub-pictures (e.g. 210, 211). The arrows 220, 221 illustrate generation of the manipulated reference sub-pictures.
[0429] While inter prediction is not illustrated in Figure 17a, inter prediction may be used. Any preceding reference sub-picture of the same sub-picture sequence, in decoding order, may be used as a reference for prediction. Moreover, the illustrated manipulated reference sub-pictures may be used as reference for prediction.
[0430] The example illustrates that the last reconstructed sub-picture of a certain resolution is resampled to generate a manipulated reference sub-picture 210 for a new resolution. Such an arrangement may suit low-delay applications, where the decoding and output order of (sub-)pictures are the same. It needs to be understood that this is not the only possible arrangement, but any reconstructed sub-picture(s) may be resampled to generate manipulated reference sub-picture(s) to be used as a reference for prediction of sub-pictures of a new resolution. Moreover, there may be more than one manipulated reference sub-picture that is used as a reference for prediction for sub-pictures of a new resolution.
[0431] The resampling ratio and the corresponding area in the current sub-picture and the reference sub-picture may be indicated by determining a scaling window for the current and the reference sub-picture. The scaling ratio in the horizontal/vertical direction may be derived by dividing the width/height of the scaling window in the current sub-picture by that of the reference sub-picture. To determine the corresponding area, the top-left corner of the scaling window in the current sub-picture may be matched to the top-left corner of the scaling window in the reference sub-picture.
[0432] The resampling operation may be performed by resampling the whole sub-picture and creating a new (temporary) subpicture to be used as a reference frame in the inter prediction process. Alternatively, the resampling operation may be integrated into the motion compensation process. In this method, the top-left corner of the current block in the current sub-picture is mapped to the reference sub-picture using the parameters of the scaling windows (i.e. scaling ratio and corresponding area). Then, during motion compensation, the sub-sample position of the horizontal and vertical filtering process of the motion compensation operation is updated for each row and each column. Based on the scaling ratio, resampling filters with different frequency characteristics may be used for interpolation filtering, for example to avoid aliasing when downsampling.
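As a minimal sketch of the scaling-window mechanism described in the two paragraphs above (the ScalingWindow structure and function names are assumptions made for illustration, not codec syntax), the following derives the scaling ratios and maps a block position in the current sub-picture to the reference sub-picture:

```python
from dataclasses import dataclass

@dataclass
class ScalingWindow:
    left: int    # offset of the window inside the (sub-)picture, in luma samples
    top: int
    width: int
    height: int

def scaling_ratios(cur_win: ScalingWindow, ref_win: ScalingWindow):
    """Horizontal/vertical scaling ratio = width/height of the scaling window in
    the current sub-picture divided by that of the reference sub-picture."""
    return cur_win.width / ref_win.width, cur_win.height / ref_win.height

def map_block_to_reference(block_x, block_y, cur_win, ref_win):
    """Map the top-left corner of the current block to the reference sub-picture
    by aligning the top-left corners of the two scaling windows."""
    sx, sy = scaling_ratios(cur_win, ref_win)
    ref_x = ref_win.left + (block_x - cur_win.left) / sx
    ref_y = ref_win.top + (block_y - cur_win.top) / sy
    return ref_x, ref_y   # fractional parts drive the interpolation filter phase
```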
[0433] Sub-picture sequences may be formed so that the sub-pictures of the same resolution are in the same sub-picture sequence. Consequently, there are two sub-picture sequences in this example. Another option for forming sub-picture sequences is such that the sub-pictures of the same resolution starting from a resolution switch point are in the same sub-picture sequence. Consequently, there are three sub-picture sequences in this example.
[0434] The example above illustrates a possible operation for live encoding adapted e.g. to network throughput and/or decoding capability. Alternatively or additionally, the example above may also illustrate the decoding operation, where the decoded sub-pictures are a subset of sub-pictures that are available for decoding, e.g. in a container file or as a part of received streams.
[0435] An adaptive resolution change may be facilitated in streaming (for multiple players) for example as described in the following. A possible encoding arrangement is illustrated in Figure 17b.
In this figure empty squares illustrate coded non-random-access sub-pictures 200, squares with the letter M illustrate manipulated reference sub-pictures 210, and squares with the letter I illustrate coded random-access sub-pictures 230. The vertical dotted lines 222 illustrate (sub)segment boundaries. The vertical arrows 220, 221 illustrate generation of the manipulated reference sub-pictures and the diagonal arrows 223 illustrate inter prediction. It should be noted that the inter prediction is illustrated in Figure 17b only when it uses a manipulated reference sub-picture as a reference. It needs to be understood that Figure 17b presents only one possible example and other realizations could similarly be implemented. For example, more than one sub-picture sequence for switching between resolutions could be encoded for different switch points. In another example, sub-picture sequence(s) for switching from high resolution to low resolution may be encoded. In yet another example, sub-picture sequences of more than two resolutions or for several qualities or bitrates are encoded and switching between those may be enabled by encoding sub-picture sequences using manipulated reference sub-pictures.
[0436] Selected sub-picture sequences may be encoded for a relatively infrequent random-access interval. In this example, a low-resolution sub-picture sequence and a high-resolution sub-picture sequence are generated for a random-access period of every third (sub)segment. These sub-picture sequences may be received e.g. in a stable reception condition, when the receiver buffer occupancy is sufficiently high and network throughput is sufficient and stable for the bitrate of the sub-picture sequence.
[0437] Selected sub-picture sequences are encoded for switching between resolutions using manipulated reference sub-pictures created through resampling. In this example, one sub-picture sequence is encoded for resolution change from low to high resolution at any (sub)segment boundary. The sub-pictures of each (sub)segment in this sub-picture sequence are encoded in a manner that they only depend on each other or on the low-resolution sub-picture sequence.
[0438] The sub-picture sequences are made available separately for streaming. For example, they may be announced as separate Representations in DASH MPD.
[0439] The client chooses on a (sub)segment basis which sub-picture sequence is received; an example of this is illustrated in Figure 17c. In this figure empty squares illustrate coded non-random-access sub-pictures 200, squares with the letter M illustrate manipulated reference sub-pictures 210, and squares with the letter I illustrate coded random-access sub-pictures 230. The vertical dotted lines 222 illustrate (sub)segment boundaries and the thick line 224 illustrates received/decoded/generated sub-pictures. The vertical arrows 220, 221 illustrate generation of the manipulated reference sub-pictures and the diagonal arrows 223 illustrate inter prediction. In the illustration of Figure 17c, the client first receives one (sub)segment of the low-resolution sub-picture sequence 240 (of the infrequent random-access interval). The client then decides to switch up to a higher resolution 250 and receives two (sub)segments of the sub-picture sequence 245 that uses manipulated reference sub-pictures generated from the low-resolution sub-pictures as a reference for prediction. However, since the latter manipulated reference sub-picture requires the second low-resolution (sub)segment to be decoded, the second (sub)segment of the low-resolution sub-picture sequence is also received. The client then switches to the high-resolution sub-picture sequence of the infrequent random-access interval.
[0440] In an embodiment, the manipulated reference sub-pictures are generated from specific temporal sub-layers only (e.g. the lowest temporal sub-layer, e.g. TemporalId equal to 0 in HEVC). Those specific temporal sub-layers may be made available for streaming separately from the other temporal sub-layers of the same sub-picture sequence. For example, those specific temporal sub-layers may be announced as a first Representation, and the other sub-layers of the same sub-picture sequence may be made available as a second Representation. Continuing the example client operation illustrated above, only the specific sub-layers need to be received from the second (sub)segment of the low-resolution sub-picture sequence. The specific sub-layers may be made available as a separate
Representation or Sub-Representation, hence enabling requesting and receiving them separately from other sub-layers.
[0441] An example of RWMR 360° video streaming taking advantage of subpicture-wise ARC is presented next. It needs to be understood that the chosen parameter values, such as the width and height of the pictures and the subpicture grid, are for illustration only and embodiments can be similarly realized with other choices of parameter values.
[0442] RWMR 360° video streaming offers an increased effective spatial resolution on the viewport. A scheme where subpictures covering the viewport originate from a cubemap (CMP) equivalent to 6K (6144x3072) equirectangular projection (ERP) resolution is described in the following. The achieved resolution on the viewport may be suitable e.g. for head-mounted displays using quad-HD (2560x1440) display panel. The merged bitstream can be decoded with "4K" decoding capacity (e.g. like specified for HEVC Level 5.1).
[0443] The content is encoded at two spatial resolutions with cube face sizes 1536x1536 (high resolution, HR) and 768x768 (low resolution, LR). A 6x4 subpicture grid is used, and subpicture boundaries are treated like picture boundaries. Two bitstreams are encoded for each resolution, i.e. "normal" (N), which may have a random-access picture interval suitable for seeking or random accessing (e.g. 49 pictures), and "switching" (S), which may provide switching capability from the other resolution at an interval suitable for responding to viewing orientation changes (e.g. 8 pictures).
[0444] In an embodiment, a subpicture in the S bitstream is (de)coded using only one or more respective manipulated reference subpictures of the N bitstream as a reference in inter prediction. In one example, illustrated in Figure 20, the S bitstream comprises dependent random access point (DRAP) pictures that use only a preceding IRAP picture as reference in inter prediction, and the IRAP picture is identical to the corresponding IRAP picture in the N bitstream. An arrow in the figure implies both reference sub-picture manipulation (by upsampling or downsampling) and inter prediction (or motion compensation). In the example, an IDR picture is used as an IRAP picture, but any other type of an IRAP picture could likewise be used. In this example, "GOP" in Figure 20 comprises a sequence of one or more sub-pictures and the sub-pictures within "GOP" are of the resolution of the preceding IRAP/DRAP picture. The sub-pictures within a "GOP" may have any inter prediction structure but their reference pictures may be constrained not to include any pictures preceding, in decoding order, the latest previous IRAP or DRAP picture. It needs to be understood that the embodiment is not limited to having a DRAP picture in the S bitstream, but a picture that provides the switching capability from the N bitstream to the S bitstream may be (de)coded using any pictures of the N bitstream (or identically coded pictures of the S bitstream) as reference. For example, the picture immediately preceding, in output order, the picture that provides switching capability may be used as reference in inter prediction. It also needs to be understood that in some embodiments the pictures in the S bitstream from an IDR picture (inclusive) to the first DRAP picture (exclusive) following the IDR picture need not be (de)coded or made available for streaming, since they may be identical to the time-aligned pictures in the N bitstream.
[0445] A DRAP picture may be defined to have one or more of the following properties:
The DRAP picture is a trailing picture (as defined in HEVC or VVC).
The DRAP picture has a temporal sublayer identifier equal to 0.
The DRAP picture is predicted only from the previous IRAP picture in decoding order (a.k.a. the associated IRAP picture). Consequently, in some coding formats, such as VVC, the DRAP picture does not include any pictures in the active entries of its reference picture lists except the associated IRAP picture of the DRAP picture.
Any picture that follows the DRAP picture in both decoding order and output order does not use any picture that precedes the DRAP picture in decoding order or output order as reference in inter prediction, with the exception of the associated IRAP picture of the DRAP picture. Consequently, in some coding formats, such as VVC, any picture that follows the DRAP picture in both decoding order and output order does not include, in the active entries of its reference picture lists, any picture that precedes the DRAP picture in decoding order or output order, with the exception of the associated IRAP picture of the DRAP picture.
A dependent random access point (DRAP) indication SEI message is associated with a DRAP picture.
[0446] Continuing the example above, each subpicture sequence may be encapsulated as an ISO base media file format track (e.g. a sub-picture track) and may be made available as a Representation in DASH. A single (Sub)segment may comprise a sequence of pictures from an IDR or DRAP picture (inclusive) to the next IDR or DRAP picture (exclusive). A player or alike may for example select 12 subpictures from the high-resolution bitstream and the complementary 12 subpictures may be selected from the low-resolution bitstream. Thus, a hemi-sphere (180°x180°) of the streamed content originates from the high resolution. Initially, the subpictures of the normal bitstreams may be streamed (the left half of Figure 19 shows an example). If the viewing orientation changes, the subpictures that need to be updated may be selected from the "switching" bitstreams until the next IRAP picture of the respective "normal" bitstreams. The right half of Figure 19 presents an example of which subpictures are updated and streamed from the "switching" bitstreams after a viewing orientation change.
[0447] Continuing the example above, Figure 21 presents how the received subpictures of a single time instance are merged into a coded picture of 3840x2304 luma samples, which conforms to "4K" decoding capability (e.g. like HEVC Level 5.1). In the example, a coded picture comprises side-by-side a 4x3 grid of subpictures originating from the high-resolution version and a 2x6 grid of subpictures originating from a low-resolution version.
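A minimal sketch of the merged-picture layout arithmetic in this example, assuming 768x768 high-resolution and 384x384 low-resolution subpictures (which follow from the stated cube face sizes and the 6x4 subpicture grid); the helper below merely computes placement rectangles and is not part of any codec or file-format API.

```python
def merged_layout(hr_count=12, lr_count=12, hr_size=768, lr_size=384,
                  hr_cols=4, lr_cols=2):
    """Place the high-resolution subpictures in a 4x3 grid and the low-resolution
    subpictures in a 2x6 grid to its right.
    Returns (picture_width, picture_height, list of (x, y, w, h) rectangles)."""
    rects = []
    for i in range(hr_count):                           # 4x3 grid of 768x768
        rects.append(((i % hr_cols) * hr_size, (i // hr_cols) * hr_size,
                      hr_size, hr_size))
    lr_x0 = hr_cols * hr_size                           # LR grid starts right of the HR grid
    for i in range(lr_count):                           # 2x6 grid of 384x384
        rects.append((lr_x0 + (i % lr_cols) * lr_size, (i // lr_cols) * lr_size,
                      lr_size, lr_size))
    width = lr_x0 + lr_cols * lr_size                   # 3072 + 768 = 3840
    height = max((hr_count // hr_cols) * hr_size,       # 2304
                 (lr_count // lr_cols) * lr_size)
    return width, height, rects

# merged_layout() gives a 3840x2304 picture, matching the "4K" decoding capability.
```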
[0448] In the above example of RWMR 360° video streaming, switching between the low and high resolution bitstream of each subpicture can happen only once in each IDR period. To enable multiple switching, shared coded pictures (SCP) may be combined with the above method. Similarly to RWMR, each subpicture is encoded in different resolutions (i.e. HR and LR) as shown in Figure 22 (for the case of GOP=8) and described below. Subpictures are classified into two types (Type1 or Type2), which is explained below. The prediction structure in Figure 22 is presented only for one of the bitstreams; it is the same for all the other bitstreams.
[0449] For each subpicture, the first frame of each GOP is downsampled and duplicated at the beginning of each GOP. These shared coded subpictures (e.g. a'0, d'0, a'8, d'8, a'16, d'16) are encoded identically in both high and low resolution bitstreams. The frames between two SCPs (which is indicated by GOP) are coded using only the preceding SCP. SCPs are always transmitted to the receiver, so the IDR period may be set to a large value, e.g. 10 seconds. SCPs provide switching capability between different resolutions (i.e. HR and LR) at an interval suitable for responding to viewing orientation changes (e.g. 8 pictures). In the HR bitstream, high resolution subpictures may be coded from the low resolution SCP using the reference picture resampling (RPR) method of VVC.
[0450] The received subpictures of a single time instance are merged into a coded picture of 3840x2304 luma samples similar to the layout shown in Figure 21. However, to explain the layout of the merged bitstream, it is assumed that the whole 360-degree content is divided into four subpictures including A, B, C, and D. Given the condition that the subpicture layout and picture size shall remain unchanged in the whole bitstream in VVC, the sample layout of the merged stream is presented in Figure 23. In this example layout, half of the subpictures are transmitted in high resolution, and half of them are transmitted in low resolution. The switched subpictures are highlighted in yellow to demonstrate a switching example. The location of subpictures in an SCP is always fixed, to enable inter prediction. But the location of the subpictures in other pictures (i.e. non-SCPs) may change based on the viewport orientation. To keep the layout unchanged, half of the low-resolution subpictures (e.g., a'0 and b'0) in the SCP are packed inside larger (i.e. high resolution) subpictures, and the rest of the subpicture area (i.e. the grey area) may be filled with dummy data. The scaling window (to be used by RPR in VVC) is set to the actual content. This means that the scaling window (shown as a dashed red rectangle) covers only the actual content (not the dummy area) in the case of a'0 and b'0, for example. For other cases, the scaling window (which has not been shown) covers the whole subpicture area.
[0451] The dummy area may be coded at a lower bitrate. Alternatively, the dummy area may be coded in Intra horizontal and vertical modes to realize the padding of pixels at picture boundaries. To achieve this, the blocks on the right/bottom/right-bottom side of the actual content are coded in horizontal/vertical/DC Intra mode.
[0452] The subpicture that is partially covered by actual content (e.g., a'0 and b'0) may be divided into different tiles in a way that the whole actual content corresponds to one tile, or the content boundaries (on the left and bottom side) match tile boundaries. This may help coding the actual content more efficiently.
[0453] The first subpicture after an SCP (e.g. c0, d0, a8, d8, b16, d16) may be coded identically to the SCP picture if they have the same resolution. This may be realized by using skip coding for all the blocks of that subpicture.
[0454] Two different types of subpictures may be defined, Type1 and Type2. Type1 includes subpictures (e.g. subpictures A and B) whose low-resolution SCPs (e.g. a' and b') are packed in a larger SCP. Type2 includes subpictures (e.g. subpictures C and D) whose low-resolution SCPs (e.g. c' and d') are packed in an SCP of the same size.
[0455] In another embodiment, as shown in Figure 24, some of the subpictures (e.g. A and B) may have a low-resolution SCP and the other subpictures may have a normal (i.e. high) resolution SCP (e.g. C and D). This simplifies the packing of the merged bitstream as shown in Figure 25. In this case, scaling window for each sequence
[0456] As described earlier, sub-pictures may be encoded onto and/or decoded from more than one layer of scalable video coding, and inter-layer prediction may be performed from a reconstructed sub-picture of a reference layer to a sub-picture of an enhancement layer. Further embodiments related to coding and/or decoding sub-pictures in more than one layer are described in the next paragraphs.
[0457] In an embodiment, a spatial correspondence between a first sub-picture in a reference layer and a second sub-picture of an enhancement layer is concluded e.g. by a decoder based on the sub-picture sequence identifiers of the first and second sub-pictures being the same.
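The correspondence rule above amounts to a simple lookup, sketched below under the assumption that each decoded sub-picture object exposes its sub-picture sequence identifier (the attribute and function names are illustrative only):

```python
def find_interlayer_reference(enh_seq_id, reference_layer_subpics):
    """Return the reference-layer sub-picture whose sub-picture sequence identifier
    equals that of the enhancement-layer sub-picture; spatial correspondence for
    inter-layer prediction is concluded between these two sub-pictures."""
    for subpic in reference_layer_subpics:
        if subpic.seq_id == enh_seq_id:
            return subpic
    return None   # no corresponding sub-picture: inter-layer prediction is not applied
```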
[0458] In an embodiment, sub-picture-wise ARC is performed to realize inter-layer prediction as a part of encoding and/or decoding.
[0459] In an example embodiment, a RWMR 360° method is realized as follows. An independent layer (e.g. layer 0) is encoded in low resolution. In the example presented in Figure 26, cubemap projected content is encoded in the independent layer (layer 0) having 768x768 resolution per cube face (but likewise any other resolution could be used). In the example, a 6x4 subpicture grid is used, and subpicture boundaries are treated like picture boundaries, but likewise any other subpicture layout could be used. An enhancement layer predicted from the independent layer (e.g. layer 2) is encoded in high resolution. In the example presented in Figure 26, the high-resolution enhancement layer (layer 2) has 1536x1536 resolution per cube face (but likewise any other resolution could be used). Subpicture-wise inter-layer prediction is applied between the subpictures having spatial correspondence, e.g. between subpictures having the same sub-picture sequence identifier value. The independent layer may have a lower picture frequency than the enhancement layer and/or only some pictures of the independent layer may be used as reference for inter-layer prediction. In Figure 26, horizontal arrows (within a layer) indicate temporal inter prediction (within the same layer) and arrows from one layer to another indicate subpicture-wise inter-layer prediction.
[0460] Zero or more other enhancement layers predicted from the independent layer may be encoded. In the example of Figure 26, an enhancement layer (layer 1) having the same resolution as the independent layer is encoded. Pictures in layer 1 may but need not have improved picture quality compared to respective pictures in the independent layer.
[0461] "GOP" in Figure 26 indicates a sequence of one or more pictures with any inter prediction hierarchy. Pictures within a "GOP" do not use inter-layer prediction.
[0462] In an embodiment, a client receives the independent layer. Additionally, the client selects a number of subpictures from the enhancement layer(s) and arranges the selected subpictures into coded pictures. The client then decodes the generated bitstream. As part of the decoding, the client applies subpicture-wise inter-layer prediction, wherein a spatial correspondence between subpictures in different layers is concluded (e.g. based on the same sub-picture sequence identifier values). As part of the subpicture-wise inter-layer prediction, subpicture-wise ARC is applied if the subpictures in the current and reference layers are of different width and/or height.
[0463] Figure 27 presents an example embodiment continuing the example illustrated in Figure 26. The client first selects 12 subpictures from layer 2 and the remaining 12 subpictures from layer 1 like presented in the left side of Figure 19. The same selection of subpictures is followed in the layer-1 and layer-2 GOPs following the initial picture. At the access unit containing the second picture of layer 0, the client makes a new selection of 12 subpictures from layer 2 and the remaining 12 subpictures from layer 1 like presented in the right side of Figure 19, and that selection of subpictures is then followed in the subsequent layer-1 and layer-2 GOPs. Since subpicture-wise inter-layer prediction is applied, the resulting multi-layer bitstream is valid and can be decoded.
[0464]
[0465] Stream switching at open GOP random-access pictures
[0466] To support the client switching between different qualities and resolutions during the streaming session of DASH representations, random access point pictures may be encoded at the segment boundaries. Conventionally, random-access pictures starting a so-called closed group of pictures (GOP) prediction structure have been used at segment boundaries of DASH representations. It has been found that open-GOP random-access pictures improve rate-distortion performance compared to closed-GOP random-access pictures. Moreover, open-GOP random-access pictures have been found to reduce observable picture quality fluctuation when compared to closed-GOP random-access pictures. When the decoding starts from an open-GOP random-access picture (e.g. a CRA picture of HEVC), some pictures following the random-access picture in decoding order but preceding the random-access picture in output order may not be decodable. These pictures may be referred to as random access skipped leading (RASL) pictures. Consequently, if open GOPs were used at segment boundaries in DASH, representation switching would result into the inability to decode the RASL pictures and hence a picture rate glitch in the playback.
[0467] Seamless representation switching may be enabled when representations use open GOP structures and share the same resolution and other characteristics, i.e. when a decoded picture of the source representation can be used as such as a reference picture for predicting pictures of a target representation. However, representations may not share the same characteristics, e.g., they may be of different spatial resolution, wherein seamless representation switching may need some further considerations.
[0468] According to an embodiment, an encoder indicates in or along the bitstream that reference sub-picture manipulation is applied for those reference sub-pictures of leading sub-pictures or alike that precede, in decoding order, the open-GOP random-access sub-picture associated with the leading sub-pictures. According to an embodiment, a decoder decodes from or along the bitstream or infers that reference sub-picture manipulation is applied for those reference sub-pictures of leading sub-pictures or alike that precede, in decoding order, the open-GOP random-access sub-picture associated with the leading sub-pictures. A decoder may infer reference sub-picture manipulation e.g. when an open-GOP random-access sub-picture is of different resolution than earlier sub-pictures of the same sub-picture sequence in decoding order and when the open-GOP random-access sub-picture kept one or more preceding (in decoding order) reference sub-pictures marked as "used for reference". The reference sub-picture manipulation may be indicated (by an encoder), decoded (by a decoder), or inferred (by an encoder and/or a decoder) to be resampling to match the resolution of the reference sub-pictures to that of the leading sub-pictures using the reference sub-pictures as reference for prediction.
[0469] Adaptive resolution changing for responding to viewport changes in region-wise mixed-resolution (RWMR) 360° video streaming
[0470] When viewing orientation changes in HEVC-based viewport-dependent 360° streaming, a new selection of sub-picture Representations can take effect at the next IRAP-aligned (Sub)segment boundary. Sub-picture Representations are merged to coded pictures for decoding, and hence the VCL NAL unit types are aligned in all selected sub-picture Representations.
[0471] To provide a trade-off between the response time to react to viewing orientation changes and the rate-distortion performance when the viewing orientation is stable, multiple versions of the content can be coded at different random-access picture intervals (or SAP intervals).
[0472] Since the viewing orientation may often move gradually, the resolution changes in only a subset of the sub-picture locations in RWMR viewport-dependent streaming. However, as discussed above, (Sub)Segments starting with a random-access picture need to be received for all sub-picture locations. Updating all sub-picture locations with (Sub)segments starting with a random-access picture is inefficient in terms of streaming rate-distortion performance.
[0473] In addition, the ability to use open GOP prediction structures with sub-picture
Representations of RWMR 360° streaming is desirable to improve rate-distortion performance and to avoid visible picture quality pumping caused by closed GOP prediction structures.
[0474] Adaptive resolution change may also be used when there are multiple sub-pictures per access unit. For example, cubemap projection may be used, and each cube face may be coded as one or more sub-pictures. The sub-pictures that cover the viewport (potentially with a margin to cover also viewing orientation changes) may be streamed and decoded at a higher resolution than the other sub-pictures. When a viewing orientation changes in a manner that new sub-pictures would need to be streamed at a higher resolution while they were earlier streamed at a lower resolution, or vice versa, switching from one resolution to another may be performed as described above.
[0475] Adaptive resolution change and/or stream switching at open GOP random-access pictures according to embodiments described above may also be used when there are multiple sub-pictures per access unit.
[0476] According to an embodiment, multiple versions of sub-picture sequences for each sub-picture location have been encoded. For example, a separate version is coded for each combination among two resolutions and two random access intervals (here referred to as "short" and "long") for each sub-picture location. An open GOP prediction structure has been used in at least some of the sub-picture sequences. Sub-picture sequences have been encapsulated into sub-picture tracks and made available as sub-picture Representations in DASH. At least some of the (Sub)segments formed from the coded sub-picture sequences start with an open GOP prediction structure. A client selects for a first range of (Sub)segments a first set of sub-picture locations to be received at a first resolution and a second set of sub-picture locations to be received at a second resolution. A viewing orientation change is handled by the client by selecting for a second range of (Sub)segments a third set of sub-picture locations to be received at a first resolution and a fourth set of sub-picture locations to be received at a second resolution. The first and third sets are not identical, and the intersection of the first and third sets is non-empty. Likewise, the second and fourth sets are not identical, and the intersection of the second and fourth sets is non-empty. If the second range of (Sub)segments does not start with a random-access position in the long-random-access versions, the client requests (Sub)segments of the short-random-access sub-picture Representations for sub-picture locations for which the resolution needs to change (i.e. that are within the third set but outside the intersection of the first and third sets, or within the fourth set but outside of the intersection of the second and fourth sets). The reference sub-picture(s) for RASL sub-picture(s) of (Sub)segments of a changed resolution and starting with an open-GOP random-access picture are processed by reference sub-picture manipulation as described in other embodiments. For example, the reference sub-picture(s) may be resampled to the resolution of the RASL sub-picture(s).
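A sketch of the client-side selection logic described above using plain set operations; the function name and the representation of sub-picture locations as set elements are assumptions made for illustration.

```python
def locations_needing_short_random_access(first_set, second_set, third_set, fourth_set):
    """Sub-picture locations whose resolution changes between the first and the
    second range of (Sub)segments: within the third set but outside the
    intersection of the first and third sets, or within the fourth set but
    outside the intersection of the second and fourth sets.  For these locations
    the client requests the short-random-access sub-picture Representations."""
    changed_to_first_resolution = set(third_set) - (set(first_set) & set(third_set))
    changed_to_second_resolution = set(fourth_set) - (set(second_set) & set(fourth_set))
    return changed_to_first_resolution | changed_to_second_resolution
```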
[0477] For example, cubemap projection may be used, and each cube face may be coded as one or more sub-pictures. The sub-pictures that cover the viewport (potentially with a margin to cover also viewing orientation changes) may be streamed and decoded at a higher resolution than the other sub-pictures. When a viewing orientation changes in a manner that new sub-pictures would need to be streamed at a higher resolution while they were earlier streamed at a lower resolution, or vice versa, switching from one resolution to another may be performed as described above.
[0478] Reference sub-picture manipulation in-place
[0479] In some embodiments, reference sub-picture manipulation happens in-place. In other words, the manipulated reference sub-picture modifies, overwrites or replaces the reference sub-picture used as input. No other codec or bitstream changes beyond indicating reference sub-picture manipulation might be needed. An encoder and/or a decoder may conclude in-place manipulation taking place through, but not limited to, one or more of the following means:
- In-place manipulation may be pre-defined, e.g. in a coding standard, to apply always when a manipulated reference sub-picture is generated.
- In-place manipulation may be specified, e.g. in a coding standard, to apply for a pre-defined subset of manipulation processes.
- An encoder indicates in or along the bitstream, e.g. in a sequence parameter set, and/or a decoder decodes from or along the bitstream that in-place manipulation takes place.
[0480] If the dimensions (i.e. width and/or height) and/or other properties affecting memory allocation (e.g. bit depth) of the manipulated reference sub-picture differ from those of the sub-picture(s) used as input to the manipulation process, in-place manipulation may be understood to comprise the following:
- Creating the manipulated reference sub-picture in a picture buffer separate from the sub-picture(s) used as source(s) for the manipulation process.
- Marking the sub-picture(s) used as source(s) for the manipulation process as "unused for reference" and possibly removing them from the decoded picture buffer.
[0481] Block-level reference sub-picture manipulation
[0482] In some embodiments, reference sub-picture manipulation happens as a part of the motion compensation process. The manipulated reference sub-picture is not stored in the DPB, but rather a prediction block is formed as a part of the motion compensation process by manipulating the reference sub-picture. The manipulation may for example be upsampling or downsampling. The manipulation may be performed for example by adjusting the step size of the interpolation filter used in the motion compensation process in a pixel-wise manner when accessing the reference sub-picture. It needs to be understood that the embodiments described in relation to storing a manipulated reference sub-picture into the DPB can likewise be applied to block-level reference sub-picture manipulation.
[0483] Implicit resampling
[0484] In an embodiment the identification of the reference sub-picture manipulation process identifies resampling. The identification may, for example, be a sequence-level indication that reference sub-pictures may need to be resampled. In another example, the identification is a profile indicator or alike, whereby the feature of resampling of reference sub-pictures is included. The set of decoded sub-pictures to be manipulated may be inferred as follows: if a reference sub-picture has a different resolution than the current sub-picture, it is resampled to the resolution of the current sub-picture. In an embodiment, the resampling takes place only if the reference sub-picture is among active pictures in any reference picture list.
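A compact sketch of the implicit resampling rule above; it assumes the sub-picture objects expose width/height and that resampling and active-reference tests are available as callables (all names are illustrative):

```python
def maybe_resample_reference(ref_subpic, cur_subpic, resample, is_active_reference):
    """Implicit resampling: if a reference sub-picture has a different resolution
    than the current sub-picture, resample it to the current resolution; in one
    variant this is done only when the reference is an active reference."""
    if (ref_subpic.width, ref_subpic.height) == (cur_subpic.width, cur_subpic.height):
        return ref_subpic                       # same resolution: use as-is
    if not is_active_reference(ref_subpic, cur_subpic):
        return ref_subpic                       # variant: resample only active references
    return resample(ref_subpic, cur_subpic.width, cur_subpic.height)
```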
[0485] In an embodiment there is exactly one coded sub-picture per any time instance or access unit. Consequently, conventional (de)coding operation and bitstream syntax can be used except the above-described implicit resampling. Such decoding operation suits e.g. adaptive resolution change as described above.
[0486] In an example, reference sub-picture manipulation involves implicit upsampling or downsampling. The horizontal and vertical scaling factors for upsampling or downsampling are derived from the width and height ratios of the respective sub-pictures (with the same sub-picture sequence identifier) in the current picture and in the reference picture.
[0487] In an example, reference sub-picture manipulation for resampling is realized as a block-level reference sub-picture manipulation described above. The sample locations of the current coding subblock are relative to the top-left corner of the sub-picture for the scaling by the scaling factors.
After scaling, the top-left position of the subpicture within the reference picture is added to the scaled sub-picture-relative sample location in order to obtain the sample locations within the reference picture.
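The following sketch combines the scaling-factor derivation and the sample-location mapping described in the two examples above; the variable names and the direction of the scaling factors are assumptions made for illustration.

```python
def map_sample_to_reference(x, y, cur_subpic_size, ref_subpic_size, ref_subpic_pos):
    """Block-level resampling inside motion compensation: scale the
    sub-picture-relative sample location (x, y) by the width/height ratio of the
    corresponding sub-pictures and add the top-left position of the sub-picture
    within the reference picture."""
    scale_x = ref_subpic_size[0] / cur_subpic_size[0]   # width ratio
    scale_y = ref_subpic_size[1] / cur_subpic_size[1]   # height ratio
    ref_x = ref_subpic_pos[0] + x * scale_x
    ref_y = ref_subpic_pos[1] + y * scale_y
    return ref_x, ref_y   # fractional part selects the interpolation filter phase
```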
[0488] In an embodiment, resampling may be accompanied or replaced by any other operations for generating a manipulated reference sub-picture, as described above. The identification of the reference sub-picture manipulation process identifies which operations are to be carried out when the reference sub-picture has a different resolution or format (e.g. chroma format or bit depth) than the current sub-picture.
[0489] Explicit management of manipulated reference sub-pictures
[0490] In an embodiment, an encoder encodes in the bitstream and/or a decoder decodes from the bitstream a control operation to generate a manipulated reference sub-picture. In an embodiment, the control operation is included in the coded video data of the sub-picture that is used as a source for generating the manipulated reference sub-picture. In another embodiment, the control operation is included in the coded video data of the sub-picture that is using the manipulated reference sub-picture as a reference for prediction. In yet another embodiment, the control operation is included in the coded video data of any sub-picture at or subsequent to (in decoding order) the sub-picture used as a source for generating the manipulated reference sub-picture.
[0491] In an embodiment, the manipulated reference sub-picture is paired with the corresponding "source" reference sub-picture in its marking as "used for reference" or "unused for reference" (e.g. in a reference picture set). I.e., when a "source" reference sub-picture is marked as "unused for reference", the corresponding manipulated reference sub-picture is also marked as "unused for reference".
[0492] In an embodiment, an encoder encodes in the bitstream and/or a decoder decodes from the bitstream a control operation to mark a manipulated reference sub-picture as "used for reference" or "unused for reference". The control operation may, for example, be a specific reference picture set for manipulated reference sub-pictures only.
[0493] In an embodiment, a reference picture list is initialized to contain manipulated reference sub-pictures that are marked as "used for reference". In an embodiment, a reference picture list is initialized to contain manipulated reference sub-pictures that are indicated to be active references for the current sub-picture.
[0494] External reference sub-picture
[0495] In an embodiment, the decoding process provides an interface for inputting an "external reference sub-picture". The reference sub-picture manipulation process may provide the manipulated reference sub-picture to the decoding process through the interface.
[0496] Within the decoding process, the external reference sub-picture may have pre-defined properties and/or may be inferred and/or properties may be provided through the interface. These properties may include but are not limited to one or more of the following:
- Picture order count (POC) or certain bits of POC, e.g. POC least significant bits (LSBs) and/or POC most significant bits (MSBs).
- Marking as "used for short-term reference" or "used for long-term reference".
[0497] For example, it may be pre-defined that an external reference sub-picture is treated as a long-term reference picture and/or has picture order count equal to 0.
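A sketch of what the interface for an external reference sub-picture could carry, reflecting the pre-defined defaults mentioned above (treatment as a long-term reference and picture order count equal to 0); the class and field names are purely illustrative and not part of any specified API.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ExternalReferenceSubPicture:
    """Container handed to the decoding process through the external-reference interface."""
    samples: Any                                    # e.g. arrays of luma/chroma sample values
    poc: int = 0                                    # pre-defined default: POC equal to 0
    marking: str = "used for long-term reference"   # pre-defined default marking
    subpic_seq_id: Optional[int] = None             # optional sub-picture sequence identifier
```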
[0498] In an embodiment, an encoder encodes into or along the bitstream and/or a decoder decodes from or along the bitstream a control signal if an external reference sub-picture is to be obtained for decoding. The control signal may be included for example in a sequence parameter set, a picture parameter set, a header parameter set, a picture header, a sub-picture delimiter or header, and/or an image segment header (e.g. a tile group header). When included in a parameter set, the control signal may cause the decoding to obtain an external reference sub-picture when the parameter set is activated. The control signal may be specific to a sub-picture sequence (and may be
accompanied by a sub-picture sequence identifier) or may apply to all sub-picture sequences that are decoded. When included in a header, the control signal may cause the decoding to obtain an external reference sub-picture e.g. when the header is decoded or at the start of decoding the spatiotemporal unit wherein the header is applied. For example, if the control signal is included in an image segment header (e.g. a tile group header), fetching of the external reference sub-picture may be carried out only for the first image segment header of a sub-picture.
[0499] In an embodiment, the external reference sub-picture may only be given for the first sub-picture of a coded sub-picture sequence that is independent of other coded sub-picture sequences. For example, in the example embodiments for adaptive resolution change, each manipulated reference sub-picture may start a coded sub-picture sequence. If only one sub-picture per coded picture, access unit or time instance is in use, a manipulated reference sub-picture may start a coded video sequence.
[0500] In an embodiment, the external reference sub-picture is inferred to have properties that are the same as in the sub-pictures used as source for generating the external reference sub-picture.
[0501] In some embodiments, the marking of external reference sub-pictures (as used or unused for reference) is controlled synchronously with the sub-picture(s) used as input for the reference sub-picture manipulation.
[0502] In an embodiment, external reference sub-pictures are included in the initial reference picture lists like other reference sub-pictures.
[0503] External reference sub-pictures may be accompanied by an identifier (e.g. ExtRefId) that is passed through the interface or inferred. Memory management of the external reference sub-pictures (e.g. which ExtRefId indices are kept in the decoded picture buffer) may be encoded in or decoded from the bitstream or may be controlled through the interface.
[0504] Start of sequence and/or end of sequence indication
[0505] According to an embodiment, an encoder encodes into a bitstream and/or a decoder decodes from a bitstream an end of sequence (EOS) syntax structure and/or a start of sequence (SOS) syntax structure comprising but not limited to one or more of the following:
Identifier of the sub-picture sequence that the EOS and/or SOS syntax structure concerns.
Identifiers of parameter set(s) that are activated by the SOS syntax structure.
Control signal specifying if decoding of the EOS and/or SOS syntax structure is to cause a reference sub-picture manipulation operation (e.g. by implicit resampling) and/or obtaining an external reference sub-picture. For example, a SOS syntax structure, when present, may imply an external reference sub-picture to be obtained.
[0506] In an embodiment, an end of sequence (EOS) syntax structure and/or a start of sequence (SOS) syntax structure is included in a NAL unit whose NAL unit type indicates the end of sequence and/or the start of sequence, respectively.
[0507] Start of bitstream, sequence, and sub-picture sequence indications
[0508] According to an embodiment, an encoder encodes into a bitstream and/or a decoder decodes from a bitstream a start-of-bitstream indication, a start-of-coded-video-sequence indication, and/or a start-of-sub-picture-sequence indication. The indication(s) may be included in and/or decoded from e.g. a parameter set syntax structure, a picture header, and/or a sub-picture delimiter. When present in a parameter set, the indication(s) may apply to the picture or sub-picture that activates the parameter set. When present in a picture header, a sub-picture delimiter, or similar syntax structure, the indication(s) may apply in the bitstream order, i.e. indicate that the syntax structure or the access unit or coded picture containing the syntax structure starts a bitstream, a coded video sequence, or a sub-picture sequence.
[0509] Property indications
[0510] In an embodiment, bitstream or CVS properties are indicated in two levels, namely per sub-picture sequence excluding the generation of the manipulated reference sub-pictures and per sub-picture sequence including the generation of the manipulated reference sub-pictures. The properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), and constraints that have been applied in encoding. Properties per sub-picture sequence excluding the generation of the manipulated reference sub-pictures may be indicated in a syntax structure that applies to the core decoding process, such as a sequence parameter set. Properties per sub-picture sequence including the generation of the manipulated reference sub-pictures may be indicated in a syntax structure that applies to the generation of the manipulated reference sub-pictures instead of or in addition to the core decoding process.
[0511] Reference sub-picture manipulation may happen outside the core decoding specification and may be specified e.g. in an application-specific standard or annex.
[0512] A second shell of video codec profile indications may be generated, e.g.: H.266 first shell profile = Main 10, second shell profile = sub-picture packing, or 360-degree geometry padding, or Point Cloud, or implicit adaptive resolution change.
[0513] In an embodiment, the encoder indicates, and/or the decoder decodes a bitstream property data structure including a first-shell profile indicator and a second-shell profile indicator, wherein the first-shell profile indicator indicates properties excluding reference sub-picture manipulation and the second-shell profile indicator indicates properties including reference sub-picture manipulation.
[0514] In an embodiment, bitstream or CVS properties are indicated collectively to all sub-picture sequences (i.e. all coded video data). The properties may comprise but are not limited to a coding profile, a level, HRD parameters (e.g. CPB and/or DPB size), constraints that have been applied in encoding. As with sub-picture sequence-specific property indications, separate set of properties may be indicated and/or decoded for sub-picture sequences excluding the generation of the manipulated reference sub-pictures; and for sub-picture sequences including the generation of the manipulated reference sub-pictures.
[0515] Generation of the set of manipulated reference sub-picture by unfolding projection surfaces
[0516] In an embodiment, a manipulated reference sub-picture is generated by unfolding entire or partial projection surfaces onto a 2D plane. In some embodiments, the unfolding is performed through knowledge on the geometrical relations of the projection surfaces and knowledge on how the projection surfaces are mapped onto sub-pictures. In other embodiments, sub-picture packing is used for realizing the unfolding operation.
[0517] An example embodiment is described in relation to cubemap projection, but it needs to be understood that embodiments can be realized similarly for other projection formats. In the example embodiment, cube faces that are adjacent to the "main" cube face (subject to being predicted) are unfolded onto a 2D plane next to the "main" cube face. Figures 16a-16d provide an example. Let us assume that the hatched cube face 261 is encoded or decoded as a sub-picture within the current access unit. The hatched cube face 261 corresponds to the cube face marked by "Face" on the illustration of the cubemap 260. The picture composition data may be authored by an encoder and/or decoded by a decoder to generate an output arranged as in Figure 16b from the reconstructed sub-pictures corresponding to cube faces. The cube faces with vertical stripes in the cube may be arranged as the cube faces D and B in the 2D cubemap. It is remarked that the viewpoint for observing the cube may be in the middle of the cube and hence a cubemap may represent the inner surface of the cube. The top and bottom cube faces may be arranged as cube faces C and A in the 2D cubemap.
[0518] The reconstructed sub-pictures of an access unit used as a reference for prediction are used in generating a manipulated reference sub-picture for the hatched cube face 261 of the current access unit by unfolding the cube faces of the cube as illustrated in Figure 16c and described as follows: The unfolded cube faces are adjacent to the hatched cube face, i.e. share a common edge with the hatched cube face. Subsequent to unfolding, the picture area of the manipulated reference sub-picture may be cropped as illustrated in Figure 16d. In an embodiment, an encoder indicates information indicative of the cropping area in or along the bitstream and a decoder decodes information indicative of the cropping area from or along the bitstream. In another embodiment, an encoder and/or a decoder infers the cropping area, e.g. to be proportional to the maximum size of a prediction unit for inter prediction, which may be additionally appended proportionally to the maximum number of samples needed for interpolating samples at non-integer sample locations.
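A rough sketch of the unfolding step described above. It assumes the four adjacent cube faces are already rotated so that the edge shared with the hatched ("main") face points towards it (deriving those rotations from the cubemap geometry is outside the sketch), and it leaves the corner areas unoccupied; the cropping is expressed through the extension width ext.

```python
import numpy as np

def unfold_around_main_face(main, top, bottom, left, right, ext):
    """Compose a manipulated reference sub-picture: the main cube face in the
    centre and strips of width ext from the adjacent (pre-rotated) faces unfolded
    onto the shared edges; corner areas are left unoccupied (zero)."""
    n = main.shape[0]                                # square cube face of n x n samples
    canvas = np.zeros((n + 2 * ext, n + 2 * ext), dtype=main.dtype)
    canvas[ext:ext + n, ext:ext + n] = main
    canvas[:ext, ext:ext + n] = top[-ext:, :]        # strip of the face above
    canvas[ext + n:, ext:ext + n] = bottom[:ext, :]  # strip of the face below
    canvas[ext:ext + n, :ext] = left[:, -ext:]       # strip of the face on the left
    canvas[ext:ext + n, ext + n:] = right[:, :ext]   # strip of the face on the right
    return canvas
```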
[0519] Subsequent or prior to cropping, the corners of the unfolded area 263 may be handled e.g. in one of the following ways: the corners may be left unoccupied or the corners may be padded e.g. with the adjacent corner sample of the cross-hatched cube face.
[0520] The corners may be interpolated from the unfolded cube faces 262. For example, interpolation may be performed as, but is not limited to, either of the following:
- A sample row and a sample column from the adjacent unfolded cube faces may be rescaled to cover the corner area and blended (e.g. averaged).
- Interpolation along each line segment connecting the border sample of the sample row and the border sample of the sample column from the adjacent unfolded cube faces that have the same distance to the corner sample. The interpolation can be done as a weighted average proportional to the inverse of the distance to the border sample.
- Padding from the closest of the border samples of the sample row or the sample column of the adjacent unfolded cube faces.
[0521] Spatial relationship information may be used to indicate that the hatched cube face 261 in the current access unit corresponds to the central area of the manipulated reference sub-picture. An advantage of this arrangement is that motion vectors are allowed to refer to samples outside the central area of the manipulated reference sub-picture and that the picture content is approximately correct in those areas.
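A simplified sketch of the inverse-distance weighted corner interpolation listed above: for each corner sample it blends the nearest border sample of the adjacent unfolded face on the left and of the face below, with weights proportional to the inverse of the distance to each border (a simplification of the line-segment formulation; all names are illustrative).

```python
import numpy as np

def fill_corner(corner_h, corner_w, left_border, bottom_border):
    """left_border[i]: border sample of the adjacent unfolded face to the left
    (one per corner row); bottom_border[j]: border sample of the face below
    (one per corner column).  Each corner sample is a weighted average of the
    two nearest border samples, weighted by inverse distance."""
    corner = np.zeros((corner_h, corner_w))
    for i in range(corner_h):
        for j in range(corner_w):
            d_left, d_bottom = j + 1, corner_h - i       # distances to the two borders
            w_left, w_bottom = 1.0 / d_left, 1.0 / d_bottom
            corner[i, j] = (w_left * left_border[i] +
                            w_bottom * bottom_border[j]) / (w_left + w_bottom)
    return corner
```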
[0522] Generation of the set of manipulated reference sub-picture by unfolding projection surfaces and sample-line-wise resampling
[0523] In an embodiment, a manipulated reference sub-picture is generated in two steps. First, entire or partial projection surfaces are unfolded onto a 2D plane as described in the previous embodiment. Second, since the unfolding may cause unoccupied sample locations in the manipulated reference sub-picture, the sample lines or columns of the unfolded projection surface, such as an entire or partial unfolded cube face, may be extended by resampling to cover up to 45 degrees of the corner, as shown in Figure 16e just for two of the sides.
[0524] Rotation compensation for 360° video
[0525] In 360° video coding, the projection structure, such as the sphere, may be rotated prior to deriving the 2D picture. One reason for such rotation may be to adjust the 2D version of the content to suit coding tools better for improved rate-distortion performance. For example, only certain intra prediction directions may be available, and hence rotation could be applied to match the 2D version of the content with intra prediction directions. This may be done for example by computing localized gradients and statistically improving the match between the gradients and intra prediction directions by rotating the projection structure. However, there would be temporal inconsistency between the 2D pictures of the content generated with different rotation and hence conventional inter prediction between such 2D pictures is not likely to succeed well, causing a penalty in rate-distortion performance.
[0526] In an embodiment, a reference sub-picture is associated with a first rotation and a current sub-picture is associated with a second rotation. A manipulated reference sub-picture is generated wherein essentially the second rotation is used. The reference sub-picture manipulation may for example comprise the following steps: First, the reference sub-picture may be projected onto a projection structure, such as a sphere, using the first rotation. The image data on the projection structure may be projected onto a manipulated reference sub-picture using the second rotation. For example, the second rotation may be applied to rotate the sphere image and the sphere image may then be projected onto a projection structure (e.g. a cube or a cylinder) which is then unfolded to form a 2D sub-picture.
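A high-level sketch of the two-step rotation compensation above. For simplicity it uses an equirectangular representation of the sphere and nearest-neighbour resampling; R1 and R2 are 3x3 rotation matrices for the first and second rotation, and all function names are assumptions made for this illustration.

```python
import numpy as np

def erp_to_dir(u, v, w, h):
    """Equirectangular sample position -> unit direction vector on the sphere."""
    lon = (u / w - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / h) * np.pi
    return np.array([np.cos(lat) * np.cos(lon), np.sin(lat), np.cos(lat) * np.sin(lon)])

def dir_to_erp(d, w, h):
    """Unit direction vector -> equirectangular sample position."""
    lon = np.arctan2(d[2], d[0])
    lat = np.arcsin(np.clip(d[1], -1.0, 1.0))
    return (lon / (2.0 * np.pi) + 0.5) * w, (0.5 - lat / np.pi) * h

def rotate_reference(ref, R1, R2):
    """Generate a manipulated reference sub-picture that essentially uses the
    second rotation R2 from a reference sub-picture generated with the first
    rotation R1."""
    h, w = ref.shape
    out = np.zeros_like(ref)
    M = np.linalg.inv(R1) @ R2          # local(R2) direction -> local(R1) direction
    for v in range(h):
        for u in range(w):
            d = M @ erp_to_dir(u + 0.5, v + 0.5, w, h)
            su, sv = dir_to_erp(d, w, h)
            out[v, u] = ref[int(sv) % h, int(su) % w]
    return out
```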
[0527] Compensation for non-aligned projection surfaces of point cloud video
[0528] As discussed above, point cloud sequences may be coded as video when point clouds are projected onto one or more projection surfaces. An encoder could adapt properties of the projection surfaces to the content in a time-varying manner. Properties of the projection surfaces may comprise but are not limited to one or more of the following: 3D location, 3D orientation, shape, size, projection format (e.g. orthographic projection or a geometric projection with a projection center), and sampling resolution. Thus, conventional inter prediction between patches might not succeed well if any property of the projection surface differs between a reference picture and a current picture being encoded or decoded. Consequently, adapting properties of projection surfaces could cause a penalty in rate-distortion performance for coding a point cloud sequence even if it improved rate-distortion performance for a single time instance.
[0529] In an embodiment, reference sub-picture manipulation comprises inter-projection prediction. One or more patches of one or more sub-pictures from one projection (texture and geometry images) may be used as a source for generating a manipulated reference sub-picture comprising one or more reference patches. The manipulated reference sub-picture may essentially represent the properties of the projection surface(s) of a current sub-picture being encoded or decoded. In the reference sub-picture manipulation process, a point cloud may be generated from the reconstructed texture and geometry sub-pictures, using the properties of the projection surface(s) applying to the reconstructed texture and geometry sub-pictures. The point cloud may be projected onto a second set of projection surface(s) that may have the same or similar properties as the projection surfaces applying to the current texture sub-picture and/or the current geometry sub-picture being encoded or decoded, and the respective texture and geometry prediction pictures are formed from this projection.
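The following sketch illustrates the idea for a single orthographically projected patch (Python/NumPy). The occupancy convention of negative geometry values, the orthonormal surface axes and the nearest-point handling of collisions are assumptions made for illustration only.

import numpy as np

def patch_to_points(geometry, texture, origin, u_axis, v_axis, normal):
    # Back-project an orthographic patch: geometry[v, u] is the distance of a
    # point along the patch normal; negative values mark unoccupied samples.
    vs, us = np.nonzero(geometry >= 0)
    depth = geometry[vs, us]
    points = (origin
              + us[:, None] * u_axis
              + vs[:, None] * v_axis
              + depth[:, None] * normal)
    return points, texture[vs, us]

def points_to_patch(points, colours, origin, u_axis, v_axis, normal, size):
    # Re-project the point set orthographically onto a second projection
    # surface, producing predicted geometry and texture patches and keeping
    # the nearest point at each sample position.
    h, w = size
    rel = points - origin
    u = np.rint(rel @ u_axis).astype(int)
    v = np.rint(rel @ v_axis).astype(int)
    d = rel @ normal
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    order = np.argsort(d[keep])[::-1]          # write far points first ...
    vi, ui = v[keep][order], u[keep][order]
    geo = np.full((h, w), -1.0)
    tex = np.zeros((h, w) + colours.shape[1:], dtype=colours.dtype)
    geo[vi, ui] = d[keep][order]               # ... so near points overwrite
    tex[vi, ui] = colours[keep][order]
    return geo, tex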
[0530] Generalizations
[0531] In an embodiment, reference sub-picture manipulation is regarded as a part of decoded picture buffering rather than a process separate from the decoded picture buffering.
[0532] In an embodiment, reference sub-picture manipulation accesses sub-pictures from a first bitstream and a second bitstream to generate manipulated reference sub-picture(s). For example, the first bitstream may represent texture video of a first viewpoint, and the second bitstream may represent depth or geometry video for the first viewpoint, and the manipulated reference sub-picture may represent texture video for a second viewpoint.
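For the texture-plus-depth example, the manipulation could resemble a simple depth-image-based rendering step such as the following sketch (Python/NumPy). The rectified, horizontally shifted camera pair and the baseline and focal parameters are hypothetical simplifications of a real view synthesis process.

import numpy as np

def synthesize_second_viewpoint(texture, depth, baseline, focal):
    # Forward-warp the first-viewpoint texture to a horizontally shifted
    # second viewpoint; disparity = baseline * focal / depth, and a z-buffer
    # keeps the closest sample where several samples land on the same column.
    h, w = depth.shape
    out = np.zeros_like(texture)
    zbuf = np.full((h, w), np.inf)
    disparity = np.rint(baseline * focal / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            x2 = x + disparity[y, x]
            if 0 <= x2 < w and depth[y, x] < zbuf[y, x2]:
                zbuf[y, x2] = depth[y, x]
                out[y, x2] = texture[y, x]
    return out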
[0533] Embodiments have been described above with reference to the term sub-picture. It needs to be understood that in some cases there is only one sub-picture per time instance or access unit, and thus embodiments could likewise be described with reference to the term picture instead of the term sub-picture.
[0534] The above-described embodiments provide a mechanism and an architecture to use a core video (de)coding process and bitstream format in a versatile manner for many video-based purposes, including video-based point cloud coding, patch-based volumetric video coding, and 360-degree video coding with multiple projection surfaces. Compression efficiency may be improved compared to plain 2D video coding by enabling sophisticated application-tailored prediction.
[0535] The above-described embodiments are suitable for interfacing a single-layer 2D video codec with additional functionality.
[0536] Figure 9 is a flowchart illustrating a method according to an embodiment. A method comprises obtaining coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence (block 190 in Figure 9). It is then determined 192 whether the sub-picture would be used as a source for a manipulated reference sub-picture. If the determination 192 indicates that the sub-picture would be used as a source for a manipulated reference sub-picture, that sub-picture is used as a basis for the manipulated reference sub-picture. In other words, the manipulated reference sub-picture is generated 196 from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence.
[0537] The manipulation may comprise, for example, rotating the sub-picture, mirroring the sub-picture, resampling the sub-picture, positioning within the area of the manipulated reference sub-picture, overlaying over or blending with the samples already present within the indicated area of the manipulated reference sub-picture, or some other form of manipulation. It may also be possible to use more than one of the above-mentioned and/or other manipulation principles to generate the manipulated reference sub-picture.
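A sketch of how such packing-style manipulations could be combined when composing a manipulated reference sub-picture is shown below (Python/NumPy). The region metadata fields (rotation_90, mirror, target position and size, blend weight) are hypothetical names used only for illustration and do not correspond to any syntax defined above.

import numpy as np

def pack_region(canvas, source, region):
    # region is hypothetical metadata: rotation in multiples of 90 degrees,
    # horizontal mirroring, target position/size on the manipulated reference
    # sub-picture, and a blend weight for samples already on the canvas.
    s = np.rot90(source, k=region.get("rotation_90", 0))
    if region.get("mirror", False):
        s = s[:, ::-1]
    th, tw = region["height"], region["width"]
    # Nearest-neighbour resampling of the (rotated, mirrored) source region.
    sy = np.arange(th) * s.shape[0] // th
    sx = np.arange(tw) * s.shape[1] // tw
    s = s[sy][:, sx]
    y0, x0 = region["y"], region["x"]
    wgt = region.get("blend_weight", 1.0)
    target = canvas[y0:y0 + th, x0:x0 + tw]
    canvas[y0:y0 + th, x0:x0 + tw] = (wgt * s + (1.0 - wgt) * target).astype(canvas.dtype)
    return canvas

Each decoded sub-picture or region indicated by the spatial relationship information would be passed through such a step in turn to compose the manipulated reference sub-picture.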
[0538] An apparatus according to an embodiment comprises at least one processor and at least one memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
- obtain coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence;
- determine whether to use the sub-picture as a source for a manipulated reference sub-picture;
- generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
[0539] An example of an apparatus, e.g. an apparatus for encoding and/or decoding, is illustrated in Figure 18. The generalized structure of the apparatus will be explained in accordance with the functional blocks of the system. Several functionalities can be carried out with a single physical device, e.g. all calculation procedures can be performed in a single processor if desired. A data processing system of an apparatus according to an example of Figure 18 comprises a main processing unit 100, a memory 102, a storage device 104, an input device 106, an output device 108, and a graphics subsystem 110, which are all connected to each other via a data bus 112.
[0540] The main processing unit 100 may be a conventional processing unit arranged to process data within the data processing system. The main processing unit 100 may comprise or be implemented as one or more processors or processor circuitry. The memory 102, the storage device 104, the input device 106, and the output device 108 may include conventional components as recognized by those skilled in the art. The memory 102 and storage device 104 store data in the data processing system 100. Computer program code resides in the memory 102 for implementing, for example, the methods according to embodiments. The input device 106 inputs data into the system while the output device 108 receives data from the data processing system and forwards the data, for example to a display. The data bus 112 is a conventional data bus and, while shown as a single line, it may be any combination of the following: a processor bus, a PCI bus, a graphical bus, an ISA bus. Accordingly, a skilled person readily recognizes that the apparatus may be any data processing device, such as a computer device, a personal computer, a server computer, a mobile phone, a smart phone or an Internet access device, for example an Internet tablet computer.
[0541] The various embodiments can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the method. For example, a device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment. Yet further, a network device like a server may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of an embodiment. The computer program code comprises one or more operational characteristics. Said operational characteristics are defined through configuration by said computer based on the type of said processor, wherein a system is connectable to said processor by a bus, wherein a programmable operational characteristic of the system comprises obtaining coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence; determining whether to use the sub-picture as a source for a manipulated reference sub-picture; and, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture, generating the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence.
[0542] If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions and embodiments may be optional or may be combined.
[0543] Although various aspects of the embodiments are set out in the independent claims, other aspects comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.

Claims

CLAIMS:
1. A method comprising:
obtaining coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence;
determining whether to use the sub-picture as a source for a manipulated reference sub-picture; if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture, the method further comprises
generating the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence.
2. The method according to claim 1 further comprising:
- including in or along the bitstream an identification of the reference sub-picture manipulation process.
3. The method according to claim 1 further comprising:
- including in or inferring information of at least one of a set of decoded sub-pictures to be manipulated, and a set of manipulated reference sub-pictures to be generated.
4. The method according to claim 1, wherein generating the manipulated reference sub-picture comprises packing of one or more reference sub-pictures or regions therein, wherein the packing comprises one or more of the following:
- rotating the sub-picture;
- mirroring the sub-picture;
- resampling the sub-picture;
- positioning within the area of the manipulated reference sub-picture;
- overlaying over or blending with the samples already present within the indicated area of the manipulated reference sub-picture.
5. The method according to claim 1, wherein generating the manipulated reference sub-picture comprises one or more of the following:
- geometry padding for 360° video;
- padding, wherein content of a padding area of the manipulated reference sub-picture is generated from other sub-pictures;
- reference patch reprojection, wherein reference sub-pictures are interpreted as 3D point cloud patches and re-projecting the 3D point cloud patches onto a plane suitable for 2D inter prediction;
- view synthesis from sub-pictures representing one or more texture and depth views;
- resampling;
- color gamut conversion;
- dynamic range conversion and/or color mapping conversion;
- bit depth conversion;
- projection conversion;
- rotation conversion;
- frame rate conversion.
6. An apparatus comprising at least one processor and at least one memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
obtain coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence;
determine whether to use the sub-picture as a source for a manipulated reference sub-picture; generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
7. The apparatus according to claim 6, wherein the memory and the computer program code are further configured to cause the apparatus to
- include in or along the bitstream an identification of the reference sub-picture manipulation process.
8. The apparatus according to claim 6, wherein the memory and the computer program code are further configured to cause the apparatus to
- include in or infer information of at least one of a set of decoded sub-pictures to be manipulated, and a set of manipulated reference sub-pictures to be generated.
9. The apparatus according to claim 6, wherein the memory and the computer program code are further configured to cause the apparatus to generate the manipulated reference sub-picture by packing of one or more reference sub-pictures or regions therein.
10. The apparatus according to claim 9, wherein the packing comprises one or more of the following:
- rotating the sub-picture;
- mirroring the sub-picture;
- resampling the sub-picture;
- positioning within the area of the manipulated reference sub-picture;
- overlaying over or blending with the samples already present within the indicated area of the manipulated reference sub-picture.
11. The apparatus according to claim 6, wherein the memory and the computer program code are further configured to cause the apparatus to
- encode into or along the bitstream an indication or infer that a reference sub-picture manipulation operation is to be carried out when the manipulated reference sub-picture is referenced in decoding or is about to be referenced in decoding.
12. The apparatus according to claim 6, wherein the memory and the computer program code are further configured to cause the apparatus to
- decode from or along the bitstream a control signal if a reference sub-picture is to be provided for reference sub-picture manipulation when it becomes available.
13. The apparatus according to claim 6, wherein the memory and the computer program code are further configured to cause the apparatus to
- decode from or along the bitstream an indication or infer that a reference sub-picture manipulation operation is to be carried out when the manipulated reference sub-picture is referenced in decoding or is about to be referenced in decoding.
14. A computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to:
obtain coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence;
determine whether to use the sub-picture as a source for a manipulated reference sub-picture; generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
15. The computer program product according to claim 14, wherein the computer program product is embodied on a non-transitory computer readable medium.
16. An encoder comprising:
an input for obtaining coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence;
a determinator configured to determine whether to use the sub-picture as a source for a manipulated reference sub-picture;
a manipulator configured to generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
17. The encoder according to claim 16 configured to:
encode into or along the bitstream an identification of the reference sub-picture manipulation process.
18. The encoder according to claim 16 configured to:
encode into or along the bitstream a control signal if a reference sub-picture is to be provided for reference sub-picture manipulation when it becomes available.
19. The encoder according to claim 16 configured to:
encode into or along the bitstream an indication or infer that a reference sub-picture manipulation operation is to be carried out when the manipulated reference sub-picture is referenced in decoding or is about to be referenced in decoding.
20. A decoder comprising:
an input for receiving coded data of a sub-picture, the sub-picture belonging to a picture, and the sub-picture belonging to a sub-picture sequence;
a determinator configured to determine whether to use the sub-picture as a source for a manipulated reference sub-picture;
a manipulator configured to generate the manipulated reference sub-picture from the sub-picture to be used as a reference for a subsequent sub-picture of the sub-picture sequence, if the determining reveals that the sub-picture is to be used as the source for the manipulated reference sub-picture.
21. The decoder according to claim 20 configured to:
decode from or along the bitstream an identification of the reference sub-picture manipulation process.
22. The decoder according to claim 20 configured to:
decode from or along the bitstream a control signal if a reference sub-picture is to be provided for reference sub-picture manipulation when it becomes available.
23. The decoder according to claim 20 configured to:
decode from or along the bitstream an indication or infer that a reference sub-picture manipulation operation is to be carried out when the manipulated reference sub-picture is referenced in decoding or is about to be referenced in decoding.
PCT/FI2019/050938 2019-01-02 2019-12-31 An apparatus, a method and a computer program for video coding and decoding WO2020141260A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19908067.2A EP3906675A4 (en) 2019-01-02 2019-12-31 An apparatus, a method and a computer program for video coding and decoding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962787510P 2019-01-02 2019-01-02
US62/787,510 2019-01-02

Publications (1)

Publication Number Publication Date
WO2020141260A1 true WO2020141260A1 (en) 2020-07-09

Family

ID=71407051

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2019/050938 WO2020141260A1 (en) 2019-01-02 2019-12-31 An apparatus, a method and a computer program for video coding and decoding

Country Status (2)

Country Link
EP (1) EP3906675A4 (en)
WO (1) WO2020141260A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100150231A1 (en) * 2008-12-11 2010-06-17 Novatek Microelectronics Corp. Apparatus for reference picture resampling generation and method thereof and video decoding system using the same
US20130202051A1 (en) * 2012-02-02 2013-08-08 Texas Instruments Incorporated Sub-Pictures for Pixel Rate Balancing on Multi-Core Platforms
WO2018127625A1 (en) * 2017-01-03 2018-07-12 Nokia Technologies Oy An apparatus, a method and a computer program for video coding and decoding
US20180262774A1 (en) * 2017-03-09 2018-09-13 Mediatek Inc. Video processing apparatus using one or both of reference frame re-rotation and content-oriented rotation selection and associated video processing method

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220030234A1 (en) * 2019-03-16 2022-01-27 Mediatek Inc. Method and apparatus for signaling adaptive loop filter parameters in video coding
US11882276B2 (en) * 2019-03-16 2024-01-23 Hfi Innovation Inc. Method and apparatus for signaling adaptive loop filter parameters in video coding
US20210044819A1 (en) * 2019-08-06 2021-02-11 Op Solutions, Llc Adaptive resolution management using sub-frames
WO2022045717A1 (en) * 2020-08-24 2022-03-03 현대자동차주식회사 Frame packing method in mpeg immersive video format
WO2022042538A1 (en) * 2020-08-24 2022-03-03 北京大学深圳研究生院 Block-based point cloud geometric inter-frame prediction method and decoding method
CN112087637A (en) * 2020-09-09 2020-12-15 中国电子科技集团公司第五十八研究所 High-pixel bit depth video image data coding and decoding processing method
WO2022191436A1 (en) * 2021-03-08 2022-09-15 엘지전자 주식회사 Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method
WO2023275247A1 (en) * 2021-06-30 2023-01-05 Telefonaktiebolaget Lm Ericsson (Publ) Encoding resolution control
WO2023273551A1 (en) * 2021-07-02 2023-01-05 Beijing Xiaomi Mobile Software Co., Ltd. Method and apparatus of encoding/decoding point cloud captured by a spinning sensors head
WO2023282581A1 (en) * 2021-07-05 2023-01-12 엘지전자 주식회사 Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method
CN113613010A (en) * 2021-07-07 2021-11-05 南京大学 Point cloud geometric lossless compression method based on sparse convolutional neural network
WO2022187754A1 (en) * 2021-07-19 2022-09-09 Innopeak Technology, Inc. Atlas information carriage in coded volumetric content

Also Published As

Publication number Publication date
EP3906675A1 (en) 2021-11-10
EP3906675A4 (en) 2022-11-30

Similar Documents

Publication Publication Date Title
US20230308639A1 (en) Apparatus, a Method and a Computer Program for Video Coding and Decoding
US12088847B2 (en) Apparatus, a method and a computer program for video encoding and decoding
US12022117B2 (en) Apparatus, a method and a computer program for video coding and decoding
US11082719B2 (en) Apparatus, a method and a computer program for omnidirectional video
WO2020008106A1 (en) An apparatus, a method and a computer program for video coding and decoding
EP3906675A1 (en) An apparatus, a method and a computer program for video coding and decoding
EP3741108A1 (en) An apparatus, a method and a computer program for omnidirectional video
GB2555788A (en) An apparatus, a method and a computer program for video coding and decoding
WO2019141907A1 (en) An apparatus, a method and a computer program for omnidirectional video
CN115211131B (en) Apparatus, method and computer program for omni-directional video
US11190802B2 (en) Apparatus, a method and a computer program for omnidirectional video
WO2020201632A1 (en) An apparatus, a method and a computer program for omnidirectional video
WO2019197722A1 (en) An apparatus, a method and a computer program for volumetric video
EP3673665A1 (en) An apparatus, a method and a computer program for omnidirectional video
RU2784900C1 (en) Apparatus and method for encoding and decoding video
EP3804334A1 (en) An apparatus, a method and a computer program for volumetric video

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19908067

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019908067

Country of ref document: EP

Effective date: 20210802