WO2020175908A1 - Method and device for partitioning picture on basis of signaled information - Google Patents
Method and device for partitioning picture on basis of signaled information
- Publication number
- WO2020175908A1 (application PCT/KR2020/002733)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- tile
- picture
- current picture
- tiles
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/53—Multi-resolution motion estimation; Hierarchical motion estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- This disclosure relates to video coding technology and, more specifically, to a picture partitioning method and apparatus based on signaled information.
- Recently, the demand for high-resolution, high-quality images/videos is increasing in various fields.
- The higher the resolution and quality of image/video data, the greater the amount of information or bits to be transmitted relative to existing image/video data.
- Therefore, when image data is transmitted using a medium such as a wired/wireless broadband line, or stored using an existing storage medium, the transmission cost and the storage cost increase.
- Accordingly, a flexible picture partitioning method that can be applied to efficiently compress and reproduce images/videos is required.
- the technical task of this disclosure is to provide a method and apparatus to increase the image coding efficiency.
- Another technical task of this disclosure is to provide a method and apparatus for signaling partitioning information.
- Another technical task of this disclosure is to provide a method and apparatus for partitioning pictures based on signaled information.
- Another technical task of this disclosure is to provide a method and apparatus for partitioning a current picture based on partition information for the current picture.
- Another technical task of this disclosure is to provide a method and apparatus for partitioning the current picture based on at least one of: flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), number information on the number of MCTSs in the current picture, position information of the tile located at the top-left of each of the MCTSs, or position information of the tile located at the bottom-right of each of the MCTSs.
- According to an embodiment of this disclosure, an image decoding method performed by a decoding apparatus includes: acquiring, from a bitstream, image information including partitioning information for a current picture and prediction information for a current block included in the current picture; deriving a partitioning structure of the current picture based on a plurality of tiles, based on the partitioning information for the current picture; deriving prediction samples for the current block included in one of the plurality of tiles, based on the prediction information for the current block; and reconstructing the current picture based on the prediction samples, wherein the partitioning information for the current picture includes at least one of flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), number information on the number of MCTSs in the current picture, location information of a tile positioned at the top-left of each of the MCTSs, or location information of a tile positioned at the bottom-right of each of the MCTSs.
- According to another embodiment of this disclosure, a decoding apparatus for performing image decoding includes: an entropy decoding unit for acquiring, from a bitstream, image information including partitioning information for a current picture and prediction information for a current block included in the current picture, and for deriving a partitioning structure of the current picture based on a plurality of tiles, based on the partitioning information for the current picture; a prediction unit for deriving prediction samples for the current block included in one of the plurality of tiles, based on the prediction information for the current block; and an adder for reconstructing the current picture based on the prediction samples, wherein the partitioning information for the current picture includes at least one of flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), number information on the number of MCTSs in the current picture, location information of a tile positioned at the top-left of each of the MCTSs, or location information of a tile positioned at the bottom-right of each of the MCTSs.
- According to another embodiment of this disclosure, an image encoding method performed by an encoding apparatus includes: dividing a current picture into a plurality of tiles; generating partitioning information for the current picture based on the plurality of tiles; deriving prediction samples for a current block included in one of the plurality of tiles; generating prediction information for the current block based on the prediction samples; and encoding image information including the partitioning information for the current picture and the prediction information for the current block, wherein the partitioning information for the current picture includes at least one of flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), number information on the number of MCTSs in the current picture, location information of a tile located at the top-left of each of the MCTSs, or location information of a tile located at the bottom-right of each of the MCTSs.
- According to another embodiment of this disclosure, an encoding apparatus for performing image encoding is provided.
- The encoding apparatus includes: an image partitioning unit that divides a current picture into a plurality of tiles and generates partitioning information for the current picture based on the plurality of tiles; a prediction unit that derives prediction samples for a current block included in one of the plurality of tiles and generates prediction information for the current block based on the prediction samples; and an entropy encoding unit that encodes image information including the partitioning information for the current picture and the prediction information for the current block, wherein the partitioning information for the current picture includes at least one of flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), number information on the number of MCTSs in the current picture, location information of a tile located at the top-left of each of the MCTSs, or location information of a tile located at the bottom-right of each of the MCTSs.
- According to another embodiment of this disclosure, a decoder-readable storage medium storing image information encoded by an image encoding method is provided. The image encoding method includes: dividing a current picture into a plurality of tiles; generating partitioning information for the current picture based on the plurality of tiles; deriving prediction samples for a current block included in one of the plurality of tiles; generating prediction information for the current block based on the prediction samples; and encoding image information including the partitioning information for the current picture and the prediction information for the current block, wherein the partitioning information for the current picture includes at least one of flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), number information on the number of MCTSs in the current picture, location information of a tile located at the top-left of each of the MCTSs, or location information of a tile located at the bottom-right of each of the MCTSs.
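- To make the signaled fields concrete, the following is a minimal sketch (hypothetical field and helper names, not the actual bitstream syntax of this disclosure) of the partitioning information enumerated above, and of how the two signaled corner tiles delimit each MCTS rectangle:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MctsInfo:
    top_left_tile_idx: int      # tile located at the top-left of this MCTS
    bottom_right_tile_idx: int  # tile located at the bottom-right of this MCTS

@dataclass
class PicturePartitionInfo:
    mcts_flag: bool             # whether the current picture is divided into MCTSs
    num_mcts: int               # number of MCTSs in the current picture
    mcts_list: List[MctsInfo]   # one entry per MCTS

def mcts_tile_indices(info: PicturePartitionInfo, num_tile_cols: int):
    """Expand each MCTS into the set of tile indices it covers, assuming
    tile indices run in raster order over the picture's tile grid."""
    out = []
    for m in info.mcts_list:
        top, left = divmod(m.top_left_tile_idx, num_tile_cols)
        bot, right = divmod(m.bottom_right_tile_idx, num_tile_cols)
        out.append([r * num_tile_cols + c
                    for r in range(top, bot + 1)
                    for c in range(left, right + 1)])
    return out
```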
- According to this disclosure, the overall image/video compression efficiency can be improved.
- FIG. 1 schematically shows an example of a video/video coding system to which this disclosure can be applied.
- Fig. 2 shows the configuration of a video/video encoding device to which this disclosure can be applied.
- Figure 3 shows the configuration of a video/video decoding apparatus to which this disclosure can be applied.
- FIG. 5 is a diagram showing an example of partitioning a picture.
- FIG. 6 is a flowchart illustrating a procedure for encoding a picture based on a tile and/or a tile group according to an embodiment.
- FIG. 7 is a flowchart illustrating a tile and/or tile group-based picture decoding procedure according to an embodiment.
- FIG. 8 is a diagram showing an example of partitioning a picture into a plurality of tiles.
- FIG. 9 is a block diagram showing the configuration of an encoding device according to an embodiment.
- FIG. 10 is a block diagram showing the configuration of a decoding apparatus according to an embodiment.
- FIG. 11 is a diagram showing an example of a tile and a tile group unit constituting the current picture
- Fig. 12 schematically shows an example of the signaling structure of tile group information
- FIG. 13 is a diagram showing an example of a picture in a video conference video program.
- FIG. 14 is a diagram showing an example of partitioning a picture into tiles or tile groups in a video conference video program.
- FIG. 15 is a diagram illustrating an example of partitioning a picture into tiles or tile groups based on MCTS (Motion Constrained Tile Set).
- MCTS Motion Constrained Tile Set
- FIG. 16 is a diagram illustrating an example of dividing a picture based on an ROI area.
- FIG. 17 is a diagram showing an example of partitioning a picture into a plurality of tiles.
- FIG. 18 is a flow chart showing the operation of the decoding apparatus according to an embodiment.
- FIG. 19 is a block diagram showing a configuration of a decoding apparatus according to an embodiment.
- FIG. 20 is a flow chart showing the operation of the encoding device according to an embodiment.
- FIG. 21 is a block diagram showing the configuration of an encoding apparatus according to an embodiment.
- Fig. 22 shows an example of a content streaming system to which the disclosure of this document can be applied.
- Each configuration described in this disclosure may be implemented as separate hardware or separate software; for example, two or more configurations may be combined to form a single configuration, or one configuration may be divided into a plurality of configurations.
- Embodiments in which configurations are combined and/or separated are also included in the scope of this disclosure, as long as they do not depart from the essence of this disclosure.
- In this specification, "A or B" may mean "only A", "only B", or "both A and B". In other words, "A or B" in this specification may be interpreted as "A and/or B".
- "A, B or C" in this specification may mean "only A", "only B", "only C", or "any combination of A, B and C".
- A slash (/) or comma used in this specification may mean "and/or". For example, "A/B" may mean "only A", "only B", or "both A and B".
- "A, B, C" may mean "A, B or C".
- Parentheses used in this specification may mean "for example". Specifically, when indicated as "prediction (intra prediction)", "intra prediction" is proposed as an example of "prediction". In other words, "prediction" in this specification is not limited to "intra prediction", and "intra prediction" may be proposed as an example of "prediction". Likewise, even when indicated as "prediction (i.e., intra prediction)", "intra prediction" may have been proposed as an example of "prediction".
- FIG. 1 schematically shows an example of a video/video coding system to which this disclosure can be applied.
- the video/video coding system may include a first device (source device) and a second device (receive device).
- The source device can transfer encoded video/image information or data to a receiving device via a digital storage medium or network in the form of a file or streaming.
- the source device may include a video source, an encoding device, and a transmission unit.
- the receiving device may include a receiver, a decoding device, and a renderer.
- the encoding device may be referred to as a video/image encoding device, and the decoding device may be referred to as a video/image decoding device.
- the transmitter may be included in the encoding device.
- the receiver may be included in the decoding device.
- the renderer may include a display unit, and the display unit may be composed of separate devices or external components.
- A video source can acquire a video/image through a process of capturing, synthesizing, or generating the video/image.
- A video source may include a video/image capture device and/or a video/image generation device. A video/image capture device may include, for example, one or more cameras and video/image archives containing previously captured videos/images. A video/image generation device may include, for example, computers, tablets, and smartphones, and may (electronically) generate videos/images. For example, a virtual video/image can be generated through a computer or the like, in which case the video/image capture process may be replaced by a process of generating related data.
- The encoding device can encode the input video/image.
- The encoding device can perform a series of procedures such as prediction, transform, and quantization for compression and coding efficiency.
- The encoded data (encoded video/image information) can be output in the form of a bitstream.
- The transmission unit can transfer the encoded video/image information or data output in the form of a bitstream to the receiver of the receiving device via a digital storage medium or network in the form of a file or streaming.
- the digital storage medium can include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD, etc.
- the transmission unit may include an element for generating a media file through a predetermined file format and may include an element for transmission through a broadcasting/communication network.
- The receiving unit may receive/extract the bitstream and transmit it to the decoding device.
- The decoding device can decode the video/image by performing a series of procedures such as inverse quantization, inverse transform, and prediction corresponding to the operation of the encoding device.
- the renderer can render decoded video/video.
- the rendered video/video can be displayed through the display unit.
- This document is about video/image coding.
- For example, the methods/embodiments disclosed in this document may be applied to a method disclosed in the versatile video coding (VVC) standard, the essential video coding (EVC) standard, the AOMedia Video 1 (AV1) standard, the 2nd generation of audio video coding standard (AVS2), or a next-generation video/image coding standard (e.g., H.267 or H.268).
- a video is a series of images over time.
- a picture generally refers to a unit representing one image in a specific time period, and a slice/tile is a unit constituting a part of a picture in coding.
- A tile can contain one or more CTUs (coding tree units); a picture can consist of one or more slices/tiles.
- A tile is a rectangular region of CTUs within a particular tile column and a particular tile row in a picture.
- A tile column is a rectangular region of CTUs having a height equal to the height of the picture and a width specified by syntax elements in the picture parameter set.
- A tile row is a rectangular region of CTUs having a height specified by syntax elements in the picture parameter set and a width equal to the width of the picture.
- A tile scan can represent a specific sequential ordering of CTUs partitioning a picture: the CTUs are ordered consecutively in CTU raster scan within a tile, whereas tiles in a picture are ordered consecutively in a raster scan of the tiles of the picture. A sketch of this ordering is given below.
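- As a concrete illustration of the ordering just described, the following is a minimal sketch (hypothetical helper, not part of the disclosure) that lists CTU raster-scan addresses in tile scan order for a given tile grid:

```python
def ctu_tile_scan_order(pic_w_ctus, tile_col_bounds, tile_row_bounds):
    """List CTU raster-scan addresses in tile scan order.

    tile_col_bounds / tile_row_bounds hold the tile boundary positions in
    CTUs, e.g. [0, 3, 6] for two tile columns of width 3 each (a sketch of
    the ordering described above, not the spec's exact derivation).
    """
    order = []
    # tiles of the picture are traversed in raster scan order of the tiles
    for ty in range(len(tile_row_bounds) - 1):
        for tx in range(len(tile_col_bounds) - 1):
            # within a tile, CTUs are traversed in CTU raster scan order
            for y in range(tile_row_bounds[ty], tile_row_bounds[ty + 1]):
                for x in range(tile_col_bounds[tx], tile_col_bounds[tx + 1]):
                    order.append(y * pic_w_ctus + x)
    return order

# e.g. a picture 6 CTUs wide and 4 CTUs tall, split into a 2x2 tile grid
print(ctu_tile_scan_order(6, [0, 3, 6], [0, 2, 4]))
```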
- A slice can contain an integer number of complete tiles, or an integer number of consecutive complete CTU rows within one tile of a picture, that can be contained in a single NAL unit.
- Tile groups and slices can be used interchangeably in this document. For example, in this document, a tile group/tile group header can be called a slice/slice header.
- a picture can be divided into two or more subpictures.
- A subpicture can be a rectangular region of one or more slices within a picture.
- a pixel or pel may mean the smallest unit constituting a picture (or image).
- 'Sample' may be used as a term corresponding to a pixel.
- A sample can generally represent a pixel or a pixel value; it can represent only the pixel/pixel value of the luma component, or only the pixel/pixel value of the chroma component.
- a unit can represent a basic unit of image processing.
- a unit can contain at least one of a specific area of a picture and information related to that area.
- A unit may contain one luma block and two chroma (e.g., cb, cr) blocks.
- a unit may be used interchangeably with terms such as block or area in some cases.
- the MxN block may include a set (or array) of samples (or sample array) or transform coefficients consisting of M columns and N rows.
- Figure 2 shows the configuration of a video/video encoding apparatus to which this disclosure can be applied.
- the video encoding device may include an image encoding device.
- The encoding device 200 may be configured to include an image partitioner 210, a predictor 220, a residual processor 230, an entropy encoder 240, an adder 250, a filter 260, and a memory 270.
- The prediction unit 220 may include an inter prediction unit 221 and an intra prediction unit 222.
- The residual processing unit 230 may include a transformer 232, a quantizer 233, a dequantizer 234, and an inverse transformer 235.
- the residual processing unit 230 may further include a subtractor 231.
- The addition unit 250 may include a reconstructor or a reconstructed block generator. The above-described image partitioning unit 210, prediction unit 220, residual processing unit 230, entropy encoding unit 240, addition unit 250, and filtering unit 260 may be configured by at least one hardware component (e.g., an encoder chipset or processor) according to an embodiment.
- the memory 270 may include a decoded picture buffer (DPB), and may be configured by a digital storage medium.
- The hardware component may further include the memory 270 as an internal/external component.
- The image partitioning unit 210 can divide an input image (or picture, frame) input to the encoding device 200 into one or more processing units.
- In this case, the processing unit may be referred to as a coding unit (CU). The coding unit can be recursively divided from a coding tree unit (CTU) or a largest coding unit (LCU) according to a QTBTTT (quad-tree binary-tree ternary-tree) structure.
- For example, one coding unit can be divided into a plurality of coding units of deeper depth based on a quad tree structure, a binary tree structure, and/or a ternary tree structure. In this case, for example, the quad tree structure may be applied first, and the binary tree structure and/or the ternary tree structure may be applied later. Alternatively, the binary tree structure may be applied first.
- The coding procedure according to this disclosure may be performed based on the final coding unit that is no longer divided. In this case, based on coding efficiency according to the image characteristics, etc., the largest coding unit can be used directly as the final coding unit, or if necessary, the coding unit can be recursively divided into coding units of deeper depth so that a coding unit of an optimal size can be used as the final coding unit. A toy sketch of this split order follows.
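- To make the split order concrete, the following toy sketch uses size-based placeholder decisions (a real encoder chooses splits by rate-distortion cost): quad splitting is tried first, and binary splitting afterwards, down to leaf coding units.

```python
def split_qt_then_bt(x, y, w, h, min_size, leaves):
    """Toy QTBTTT-style splitting: quad split first, then binary splits."""
    if w > 2 * min_size and h > 2 * min_size:        # quad split first
        hw, hh = w // 2, h // 2
        for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):
            split_qt_then_bt(x + dx, y + dy, hw, hh, min_size, leaves)
    elif w > min_size:                               # vertical binary split
        split_qt_then_bt(x, y, w // 2, h, min_size, leaves)
        split_qt_then_bt(x + w // 2, y, w // 2, h, min_size, leaves)
    elif h > min_size:                               # horizontal binary split
        split_qt_then_bt(x, y, w, h // 2, min_size, leaves)
        split_qt_then_bt(x, y + h // 2, w, h // 2, min_size, leaves)
    else:
        leaves.append((x, y, w, h))                  # final coding unit

leaves = []
split_qt_then_bt(0, 0, 64, 64, 16, leaves)           # one 64x64 CTU
print(len(leaves), "final coding units")
```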
- In this case, the processing unit may further include a prediction unit (PU) or a transform unit (TU).
- In this case, the prediction unit and the transform unit may each be divided or partitioned from the above-described final coding unit.
- The prediction unit may be a unit of sample prediction, and the transform unit may be a unit for deriving transform coefficients and/or a unit for deriving a residual signal from transform coefficients.
- an MxN block can represent a set of samples or transform coefficients consisting of M columns and N rows.
- A sample can typically represent a pixel or a pixel value; it can represent only the pixel/pixel value of the luma component, or only the pixel/pixel value of the chroma component.
- A sample can be used as a term corresponding to a pixel or a pel of one picture (or image).
- The encoding device 200 subtracts the prediction signal (predicted block, prediction sample array) output from the inter prediction unit 221 or the intra prediction unit 222 from the input image signal (original block, original sample array) to generate a residual signal (residual block, residual sample array), and the generated residual signal is transmitted to the transform unit 232.
- In this case, as shown, the unit within the encoding device 200 that subtracts the prediction signal (predicted block, prediction sample array) from the input image signal (original block, original sample array) may be referred to as a subtraction unit 231.
- The prediction unit performs prediction on a block to be processed (hereinafter referred to as the current block), and can generate a predicted block including prediction samples for the current block.
- the prediction unit can determine whether intra prediction is applied or inter prediction is applied in units of the current block or CU.
- the prediction unit may generate various types of information related to prediction, such as prediction mode information, as described later in the description of each prediction mode, and transmit it to the entropy encoding unit 240.
- The information on prediction may be encoded by the entropy encoding unit 240 and output in the form of a bitstream.
- The intra prediction unit 222 may predict the current block by referring to samples in the current picture.
- The referenced samples may be located in the neighborhood of the current block or may be located apart from it, depending on the prediction mode.
- In intra prediction, the prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
- The non-directional modes may include, for example, a DC mode and a planar mode.
- The directional modes may include, for example, 33 directional prediction modes or 65 directional prediction modes. However, this is an example, and more or fewer directional prediction modes may be used depending on the setting.
- the intra prediction unit 222 may determine a prediction mode to be applied to the current block by using the prediction mode applied to the surrounding block.
- The inter prediction unit 221 can derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
- In this case, in order to reduce the amount of motion information transmitted in the inter prediction mode, the motion information can be predicted in units of blocks, sub-blocks, or samples based on the correlation of motion information between the neighboring block and the current block.
- the motion information may include a motion vector and a reference picture index.
- The motion information may further include information on the inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.).
- In the case of inter prediction, the neighboring blocks may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
- The reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different.
- The temporal neighboring block may be called a collocated reference block or a co-located CU (colCU), and the reference picture including the temporal neighboring block may be called a collocated picture (colPic).
- For example, the inter prediction unit 221 may construct a motion information candidate list based on the neighboring blocks, and generate information indicating which candidate is used to derive the motion vector and/or the reference picture index of the current block. Inter prediction may be performed based on various prediction modes. For example, in the case of the skip mode and the merge mode, the inter prediction unit 221 may use the motion information of the neighboring block as the motion information of the current block. In the case of the skip mode, unlike the merge mode, the residual signal may not be transmitted. In the case of the motion vector prediction (MVP) mode, the motion vector of the neighboring block is used as a motion vector predictor, and the motion vector of the current block can be indicated by signaling the motion vector difference.
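- A minimal sketch of the merge/skip and MVP derivations just described (hypothetical candidate list; real codecs build it from spatial/temporal neighbors in a prescribed order):

```python
def derive_mv(mode, neighbor_mvs, cand_idx, mvd=(0, 0)):
    """In merge/skip mode the neighbor's MV is reused as-is (no MVD; skip
    additionally sends no residual); in MVP mode the neighbor's MV serves
    as a predictor and the signaled MVD is added on top."""
    mvp = neighbor_mvs[cand_idx]                # candidate chosen by signaled index
    if mode in ("merge", "skip"):
        return mvp
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])   # MVP mode: mv = mvp + mvd

# usage: MVP mode with a signaled difference of (3, -1)
print(derive_mv("mvp", [(4, 2), (0, 0)], cand_idx=0, mvd=(3, -1)))
```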
- the prediction unit 220 may generate a prediction signal based on various prediction methods to be described later.
- For example, the prediction unit may not only apply intra prediction or inter prediction to predict one block, but may also apply intra prediction and inter prediction at the same time. This may be referred to as combined inter and intra prediction (CIIP).
- In addition, the prediction unit may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of a block.
- The IBC prediction mode or the palette mode can be used for image/video coding of content such as games, for example, SCC (screen content coding).
- IBC basically performs prediction within the current picture, but it can be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC can use at least one of the inter prediction techniques described in this document.
- The palette mode can be seen as an example of intra coding or intra prediction.
- When the palette mode is applied, a sample value in the picture can be signaled based on information about the palette table and the palette index.
- the prediction signal generated through the prediction unit may be used to generate a restoration signal or may be used to generate a residual signal.
- the transform unit 232 may generate transform coefficients by applying a transform method to the residual signal.
- The transform technique may include at least one of DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), KLT (Karhunen-Loève Transform), GBT (Graph-Based Transform), or CNT (Conditionally Non-linear Transform).
- Here, CNT refers to a transform obtained based on a prediction signal generated using all previously reconstructed pixels. In addition, the transform process may be applied to square pixel blocks of the same size, or to blocks of variable size that are not square.
- The quantization unit 233 quantizes the transform coefficients and transmits them to the entropy encoding unit 240.
- the entropy encoding unit 240 encodes the quantized signal (information on quantized transformation coefficients) and outputs it as a bitstream.
- the information on the quantized transformation coefficients may be referred to as residual information.
- The quantization unit 233 can rearrange the block-form quantized transform coefficients into a one-dimensional vector form based on a coefficient scan order, and generate information on the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
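- A minimal sketch of these two steps, assuming a plain scalar quantizer and a simple diagonal scan in place of the spec's exact scaling and scan tables:

```python
import numpy as np

def quantize_and_scan(coeffs_2d, qstep):
    """Quantize a 2-D block of transform coefficients (illustrative scalar
    quantizer), then reorder the block into a 1-D vector for entropy coding
    using a simple diagonal scan as a stand-in for the coefficient scan."""
    q = np.round(coeffs_2d / qstep).astype(int)     # scalar quantization
    h, w = q.shape
    # diagonal scan order: visit positions grouped by x + y
    order = sorted(((y, x) for y in range(h) for x in range(w)),
                   key=lambda p: (p[0] + p[1], p[0]))
    return [q[y, x] for (y, x) in order]

# usage: a 4x4 block with one dominant low-frequency coefficient
block = np.zeros((4, 4)); block[0, 0] = 100.0
print(quantize_and_scan(block, qstep=8.0))
```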
- The entropy encoding unit 240 may perform various encoding methods such as, for example, exponential Golomb, CAVLC (context-adaptive variable length coding), and CABAC (context-adaptive binary arithmetic coding).
- The entropy encoding unit 240 may encode information necessary for video/image restoration (e.g., values of syntax elements) together with or separately from the quantized transform coefficients. Encoded information (e.g., encoded video/image information) may be transmitted or stored in units of network abstraction layer (NAL) units in the form of a bitstream.
- The video/image information may further include information about various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
- the video/video information may further include general constraint information.
- information transmitted/signaled from the encoding device to the decoding device and/or syntax elements may be included in the video/video information.
- the video/video information may be encoded through the above-described encoding procedure and included in the bitstream.
- the bitstream may be transmitted through a network or may be stored in a digital storage medium.
- The network may include a broadcasting network and/or a communication network, and the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- A transmission unit (not shown) that transmits the signal output from the entropy encoding unit 240 and/or a storage unit (not shown) that stores it may be configured as internal/external elements of the encoding device 200, or the transmission unit may be included in the entropy encoding unit 240.
- The quantized transform coefficients output from the quantization unit 233 can be used to generate the prediction signal.
- For example, a residual signal (residual block or residual samples) can be restored by applying inverse quantization and inverse transform to the quantized transform coefficients through the inverse quantization unit 234 and the inverse transform unit 235.
- The addition unit 250 adds the restored residual signal to the prediction signal output from the inter prediction unit 221 or the intra prediction unit 222, so that a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) can be generated.
- If there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block can be used as a reconstructed block.
- The addition unit 250 may be referred to as a restoration unit or a restoration block generation unit.
- The generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, and may also be used for inter prediction of the next picture through filtering as described below.
- Meanwhile, luma mapping with chroma scaling (LMCS) may be applied during the picture encoding and/or restoration process.
- The filtering unit 260 can improve subjective/objective image quality by applying filtering to the reconstructed signal.
- For example, the filtering unit 260 can apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and the modified reconstructed picture can be stored in the memory 270, specifically in the DPB of the memory 270.
- The various filtering methods may include, for example, deblocking filtering, a sample adaptive offset, an adaptive loop filter, a bilateral filter, and the like.
- The filtering unit 260 may generate various kinds of information related to filtering and transmit it to the entropy encoding unit 240, as described later in the description of each filtering method.
- the filtering information may be encoded by the entropy encoding unit 240 and output in the form of a bitstream.
- the modified reconstructed picture transmitted to the memory 270 can be used as a reference picture in the inter prediction unit 221.
- When inter prediction is applied through this, prediction mismatch between the encoding device and the decoding device can be avoided, and coding efficiency can also be improved.
- The DPB of the memory 270 can store the modified reconstructed picture for use as a reference picture in the inter prediction unit 221.
- The memory 270 can store motion information of a block from which motion information in the current picture was derived (or encoded) and/or motion information of blocks in a picture that has already been restored.
- the stored motion information can be transmitted to the inter prediction unit 221 for use as motion information of spatial neighboring blocks or motion information of temporal neighboring blocks.
- the memory 270 may store restoration samples of the restored blocks in the current picture, and may transmit the restoration samples to the intra prediction unit 222.
- FIG. 3 shows the configuration of a video/video decoding apparatus to which this disclosure can be applied.
- The decoding device 300 may be configured to include an entropy decoder 310, a residual processor 320, a predictor 330, an adder 340, a filter 350, and a memory 360.
- The prediction unit 330 may include an intra prediction unit 331 and an inter prediction unit 332.
- The residual processing unit 320 may include a dequantizer 321 and an inverse transformer 322.
- The entropy decoding unit 310, the residual processing unit 320, the prediction unit 330, the addition unit 340, and the filtering unit 350 described above may be configured by one hardware component (for example, a decoder chipset or processor) according to an embodiment.
- the memory 360 may include a decoded picture buffer (DPB). In addition, it may be configured by a digital storage medium.
- The hardware component may further include the memory 360 as an internal/external component.
- When a bitstream including video/image information is input, the decoding device 300 can restore an image in response to the process in which the video/image information was processed in the encoding device of FIG. 2.
- The decoding device 300 may derive units/blocks based on block-partitioning-related information acquired from the bitstream.
- The decoding device 300 may perform decoding using a processing unit applied in the encoding device. Therefore, the processing unit of decoding may be, for example, a coding unit, and the coding unit can be divided from a coding tree unit or a largest coding unit according to a quad tree structure, a binary tree structure, and/or a ternary tree structure.
- One or more transform units can be derived from the coding unit.
- The reconstructed image signal decoded and output through the decoding device 300 can be reproduced through a reproduction device.
- The decoding device 300 may receive the signal output from the encoding device of FIG. 2 in the form of a bitstream, and the received signal can be decoded through the entropy decoding unit 310.
- The entropy decoding unit 310 can parse the bitstream and derive information (e.g., video/image information) required for image restoration (or picture restoration).
- The video/image information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
- The video/image information may further include general constraint information.
- The decoding device may further decode the picture based on the information on the parameter set and/or the general constraint information.
- the signaling/received information and/or syntax elements described later in this document are decoded through the decoding procedure, It can be obtained from the bitstream.
- The entropy decoding unit 310 decodes the information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and can output values of syntax elements required for image restoration and quantized values of transform coefficients for the residual.
- More specifically, the CABAC entropy decoding method receives a bin corresponding to each syntax element in the bitstream, determines a context model using the decoding target syntax element information, the decoding information of the neighboring and decoding target blocks, or the information of the symbol/bin decoded in the previous step, predicts the occurrence probability of a bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
- At this time, after determining the context model, the CABAC entropy decoding method can update the context model using the information of the decoded symbol/bin for the context model of the next symbol/bin.
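- A toy sketch of the context-update loop just described; real CABAC keeps LPS/MPS state and range tables, whereas here the context is just an estimated probability that the next bin equals 1, and arithmetic_decode_bin is a stand-in for the arithmetic decoder:

```python
import random

def decode_bin(ctx, arithmetic_decode_bin):
    """Decode one context-coded bin, then update the context model so the
    next bin for this context uses the refreshed probability estimate."""
    bin_val = arithmetic_decode_bin(ctx["p_one"])   # arithmetic-decode one bin
    # update the context model with the decoded bin for the next symbol/bin
    ctx["p_one"] = 0.95 * ctx["p_one"] + 0.05 * bin_val
    return bin_val

# usage with a dummy decoder that just samples from the modeled probability
ctx = {"p_one": 0.5}
bit = decode_bin(ctx, lambda p: int(random.random() < p))
```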
- Among the information decoded by the entropy decoding unit 310, information about prediction is provided to the prediction unit (the inter prediction unit 332 and the intra prediction unit 331), and the residual values on which entropy decoding was performed by the entropy decoding unit 310, that is, the quantized transform coefficients and related parameter information, may be input to the residual processing unit 320.
- the residual processing unit 320 may derive a residual signal (residual block, residual samples, and residual sample array).
- In addition, among the information decoded by the entropy decoding unit 310, information about filtering may be provided to the filtering unit 350.
- Meanwhile, a receiving unit (not shown) that receives the signal output from the encoding device may be further configured as an internal/external element of the decoding device 300, or the receiving unit may be a component of the entropy decoding unit 310.
- Meanwhile, the decoding device may be called a video/image/picture decoding device, and the decoding device may be divided into an information decoder (video/image/picture information decoder) and a sample decoder (video/image/picture sample decoder). The information decoder may include the entropy decoding unit 310, and the sample decoder may include at least one of the inverse quantization unit 321, the inverse transform unit 322, the addition unit 340, the filtering unit 350, the memory 360, the inter prediction unit 332, and the intra prediction unit 331.
- the inverse quantization unit 321 may inverse quantize the quantized transformation coefficients and output the transformation coefficients.
- The inverse quantization unit 321 may rearrange the quantized transform coefficients in a two-dimensional block form. In this case, the rearrangement may be performed based on the coefficient scan order performed in the encoding device.
- The inverse quantization unit 321 can perform inverse quantization on the quantized transform coefficients using a quantization parameter (for example, quantization step size information) and obtain transform coefficients.
- The inverse transform unit 322 obtains a residual signal (residual block, residual sample array) by inverse transforming the transform coefficients.
- The prediction unit can perform prediction on the current block and generate a predicted block including prediction samples for the current block.
- The prediction unit can determine whether intra prediction or inter prediction is applied to the current block based on the information about prediction output from the entropy decoding unit 310, and can determine a specific intra/inter prediction mode.
- the prediction unit 330 may generate a prediction signal based on various prediction methods to be described later.
- For example, the prediction unit may not only apply intra prediction or inter prediction to predict a single block, but may also apply intra prediction and inter prediction at the same time. This may be referred to as combined inter and intra prediction (CIIP).
- In addition, the prediction unit may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of a block.
- The IBC prediction mode or the palette mode can be used for image/video coding of content such as games, for example, SCC (screen content coding).
- IBC basically performs prediction within the current picture, but it can be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC can use at least one of the inter prediction techniques described in this document.
- Palette mode can be seen as an example of intra coding or intra prediction. When the palette mode is applied, information about the palette table and palette index may be included in the video/video information and signaled.
- the intra prediction unit 331 may predict the current block by referring to samples in the current picture.
- The referenced samples may be located in the neighborhood of the current block or may be located apart from it, depending on the prediction mode.
- In intra prediction, the prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
- The intra prediction unit 331 may determine the prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
- The inter prediction unit 332 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
- At this time, in order to reduce the amount of motion information transmitted in the inter prediction mode, the motion information can be predicted in units of blocks, sub-blocks, or samples based on the correlation of motion information between the neighboring block and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include information on the inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.)
- The neighboring blocks may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
- For example, the inter prediction unit 332 may construct a motion information candidate list based on the neighboring blocks, and derive the motion vector and/or the reference picture index of the current block based on the received candidate selection information.
- Inter prediction may be performed based on various prediction modes, and the information about the prediction may include information indicating the inter prediction mode for the current block.
- The addition unit 340 adds the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the prediction unit (including the inter prediction unit 332 and/or the intra prediction unit 331), so that a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) can be generated.
- If there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block can be used as a reconstructed block.
- the addition unit 340 may be referred to as a restoration unit or a restoration block generation unit.
- The generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, may be output after filtering as described later, or may be used for inter prediction of the next picture.
- Meanwhile, luma mapping with chroma scaling (LMCS) may be applied during the picture decoding process.
- The filtering unit 350 can improve subjective/objective image quality by applying filtering to the reconstructed signal.
- For example, the filtering unit 350 may apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and store the modified reconstructed picture in the memory 360, specifically in the DPB of the memory 360.
- The various filtering methods may include, for example, deblocking filtering, a sample adaptive offset, an adaptive loop filter, a bilateral filter, and the like.
- the (modified) reconstructed picture stored in the DPB of the memory 360 may be used as a reference picture in the inter prediction unit 332.
- The memory 360 can store the motion information of the block from which the motion information in the current picture was derived (or decoded) and/or the motion information of the blocks in the picture that has already been restored.
- The stored motion information can be transmitted to the inter prediction unit 332 so as to be used as the motion information of a spatial neighboring block or the motion information of a temporal neighboring block.
- The memory 360 can store reconstructed samples of the reconstructed blocks in the current picture, and can transmit them to the intra prediction unit 331.
- In this document, the embodiments described for the filtering unit 260, the inter prediction unit 221, and the intra prediction unit 222 of the encoding device 200 may be applied to be the same as, or to respectively correspond to, the filtering unit 350, the inter prediction unit 332, and the intra prediction unit 331 of the decoding device 300.
- In performing video coding, prediction is performed to increase compression efficiency.
- Through this, a predicted block including prediction samples for the current block, which is a block to be coded, can be generated.
- Here, the predicted block includes prediction samples in the spatial domain (or pixel domain).
- The predicted block is derived identically in the encoding device and the decoding device, and the encoding device can increase image coding efficiency by signaling, to the decoding device, information about the residual between the original block and the predicted block (residual information), rather than the original sample values of the original block themselves.
- The decoding device can derive a residual block including residual samples based on the residual information, generate a reconstructed block including reconstructed samples by summing the residual block and the predicted block, and generate a reconstructed picture including the reconstructed blocks.
- the residual information may be generated through transformation and quantization procedures.
- For example, the encoding device derives a residual block between the original block and the predicted block, performs a transform procedure on the residual samples (residual sample array) included in the residual block to derive transform coefficients, performs a quantization procedure on the transform coefficients to derive quantized transform coefficients, and can signal the related residual information to the decoding device (through the bitstream).
- Here, the residual information may include information such as value information of the quantized transform coefficients, position information, a transform technique, a transform kernel, and quantization parameters.
- The decoding device can perform an inverse quantization/inverse transform procedure based on the residual information and derive residual samples (or a residual block).
- the decoding device can generate a reconstructed picture based on the predicted block and the residual block.
- For reference for inter prediction of a later picture, the encoding device can also derive a residual block by inverse quantizing/inverse transforming the quantized transform coefficients, and generate a reconstructed picture based on this.
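- The reconstruction path described above (performed by the decoder, and mirrored by the encoder) can be sketched as follows; the uniform scaling and the pluggable inverse transform are illustrative stand-ins for the spec's exact dequantization and transform:

```python
import numpy as np

def reconstruct_block(pred, levels, qstep, inverse_transform):
    """Dequantize the signaled quantized levels, inverse-transform them into
    residual samples, and add the residual to the predicted block."""
    dequant = levels.astype(float) * qstep   # inverse quantization
    residual = inverse_transform(dequant)    # e.g., an inverse DCT
    return pred + residual                   # reconstructed samples

# usage with an identity "transform" purely for illustration
pred = np.full((4, 4), 128.0)
levels = np.zeros((4, 4), dtype=int); levels[0, 0] = 2
print(reconstruct_block(pred, levels, qstep=8.0, inverse_transform=lambda c: c))
```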
- The coded data can be classified into a video coding layer (VCL) that deals with the video/image coding process and the video/image itself, and a network abstraction layer (NAL) that exists between the VCL and a sub-system that stores and transmits the coded video/image data.
- In the VCL, parameter sets corresponding to headers such as sequences and pictures (a picture parameter set (PPS), a sequence parameter set (SPS), a video parameter set (VPS), etc.) and supplemental enhancement information (SEI) messages additionally required for the video/image coding process can be generated.
- The SEI message is separated from the information about the video/image (slice data).
- The VCL containing the video/image information consists of slice data and a slice header.
- Meanwhile, the slice header may be referred to as a tile group header, and the slice data may be referred to as tile group data.
- In the NAL, a NAL unit can be created by adding header information (a NAL unit header) to an RBSP (raw byte sequence payload) generated in the VCL.
- In this case, the RBSP refers to the slice data, parameter sets, SEI messages, etc. generated in the VCL.
- the NAL unit header may include NAL unit type information specified according to RBSP data included in the corresponding NAL unit.
- The NAL unit, which is the basic unit of the NAL, plays a role of mapping the coded image to the bitstream of sub-systems such as a file format, RTP (Real-time Transport Protocol), and TS (Transport Stream) according to a predetermined standard.
- The NAL unit can be divided into a VCL NAL unit and a Non-VCL NAL unit according to the RBSP generated in the VCL.
- The VCL NAL unit can mean a NAL unit containing information about the image (slice data), and the Non-VCL NAL unit can mean a NAL unit containing information (a parameter set or an SEI message) necessary for decoding the image.
- The above-described VCL NAL unit and Non-VCL NAL unit can be transmitted through a network with header information attached according to the data standard of the sub-system.
- For example, the NAL unit can be transformed into a data format of a predetermined standard, such as the H.266/VVC file format, RTP (Real-time Transport Protocol), or TS (Transport Stream), and transmitted through various networks.
- As described above, the NAL unit type can be specified according to the RBSP data structure included in the NAL unit, and information on the NAL unit type can be stored in the NAL unit header and signaled.
- The VCL NAL unit type can be classified according to the properties and types of the pictures included in the VCL NAL unit, and the Non-VCL NAL unit type can be classified according to the type of parameter set.
- The following is an example of the NAL unit type specified according to the type of parameter set included in the Non-VCL NAL unit type:
- APS (Adaptation Parameter Set) NAL unit: a type for the NAL unit including the APS
- DPS (Decoding Parameter Set) NAL unit: a type for the NAL unit including the DPS
- VPS (Video Parameter Set) NAL unit: a type for the NAL unit including the VPS
- SPS (Sequence Parameter Set) NAL unit: a type for the NAL unit including the SPS
- PPS (Picture Parameter Set) NAL unit: a type for the NAL unit including the PPS
- The above-described NAL unit types have syntax information for the NAL unit type, and the syntax information may be stored in the NAL unit header and signaled.
- For example, the syntax information may be nal_unit_type, and the NAL unit types may be specified by nal_unit_type values.
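- For illustration, here is a sketch of a two-byte NAL unit header carrying nal_unit_type, loosely following the H.266/VVC field layout (forbidden_zero_bit, nuh_reserved_zero_bit, nuh_layer_id(6), nal_unit_type(5), nuh_temporal_id_plus1(3)); treat the bit packing as illustrative rather than bit-exact:

```python
def make_nal_unit_header(nal_unit_type: int, layer_id: int = 0,
                         temporal_id_plus1: int = 1) -> bytes:
    """Pack a sketch of a two-byte NAL unit header."""
    assert 0 <= nal_unit_type < 32 and 0 <= layer_id < 64
    byte0 = layer_id & 0x3F                   # two leading zero bits + layer id
    byte1 = ((nal_unit_type & 0x1F) << 3) | (temporal_id_plus1 & 0x07)
    return bytes([byte0, byte1])

# usage: a header whose nal_unit_type signals (for example) a parameter set
print(make_nal_unit_header(nal_unit_type=16).hex())
```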
- one picture can contain a plurality of slices, and one slice can contain a slice header and slice data.
- One picture header may be added for the multiple slices (slice headers and slice data) within one picture.
- the picture header (picture header syntax) can include information/parameters commonly applicable to the picture.
- The slice header (slice header syntax) may include information/parameters commonly applicable to the slice.
- The above-described APS (APS syntax) or PPS (PPS syntax) may contain information/parameters commonly applicable to one or more slices or pictures.
- The SPS (SPS syntax) may contain information/parameters commonly applicable to one or more sequences.
- the VPS may contain information/parameters that are commonly applicable to multiple layers.
- The DPS (DPS syntax) may include information/parameters commonly applicable to the entire video.
- The DPS may include information/parameters related to the concatenation of CVSs (coded video sequences).
- The high level syntax (HLS) in this document may include at least one of the APS syntax, the PPS syntax, the SPS syntax, the VPS syntax, the DPS syntax, the picture header syntax, and the slice header syntax.
- In this document, the image/video information encoded by the encoding device and signaled to the decoding device in the form of a bitstream includes not only intra-picture partitioning information, intra/inter prediction information, residual information, and in-loop filtering information, but may also include information included in the slice header, information included in the picture header, information included in the APS, information included in the PPS, information included in the SPS, information included in the VPS, and/or information included in the DPS. In addition, the image/video information may further include information of the NAL unit header.
- FIG. 5 is a diagram showing an example of partitioning a picture.
- Pictures can be divided into a sequence of coding tree units (CTUs).
- the CTU can contain a coding tree block of luma samples and two coding tree blocks of chroma samples corresponding thereto.
- The maximum allowable size of a CTU for coding and prediction may be different from the maximum allowable size of a CTU for transform.
- a tile can correspond to a series of CTUs covering a rectangular area of a picture, and a picture can be divided into one or more tile rows and one or more tile columns.
- A slice can consist of an integer number of complete tiles or an integer number of consecutive complete CTU rows.
- Two slice modes, a raster-scan slice mode and a rectangular slice mode, can be supported.
- In the raster-scan slice mode, a slice can contain a sequence of complete tiles in the tile raster scan of the picture.
- In the rectangular slice mode, a slice can contain a number of complete tiles that collectively form a rectangular region of the picture, or a number of consecutive complete CTU rows of a single tile that collectively form a rectangular region of the picture. Tiles within a rectangular slice are scanned in tile raster scan order within the rectangular region corresponding to that slice.
- In FIG. 5, an example of dividing a picture into tiles and raster-scan slices is shown.
- FIG. 5 also shows an example of dividing a picture into tiles and rectangular slices; for example, a picture can be divided into 24 tiles (6 tile columns and 4 tile rows) and 9 rectangular slices.
- FIG. 5 further shows an example of dividing a picture into tiles and rectangular slices; for example, a picture can be divided into 4 tiles (2 tile columns and 2 tile rows) and 4 rectangular slices. A sketch of such a tile grid follows.
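- To make the grid geometry concrete, here is a minimal sketch (hypothetical helper) that computes a uniform tile grid like FIG. 5's 6x4 = 24-tile example; uniform spacing is assumed, whereas the PPS can also signal explicit column widths and row heights:

```python
def tile_grid(pic_w_ctus, pic_h_ctus, num_cols, num_rows):
    """Split a picture's CTU grid into num_cols tile columns and num_rows
    tile rows; returns each tile as (left, top, right, bottom) in CTUs."""
    col_bounds = [i * pic_w_ctus // num_cols for i in range(num_cols + 1)]
    row_bounds = [j * pic_h_ctus // num_rows for j in range(num_rows + 1)]
    tiles = []
    for j in range(num_rows):          # tiles listed in raster scan order
        for i in range(num_cols):
            tiles.append((col_bounds[i], row_bounds[j],
                          col_bounds[i + 1] - 1, row_bounds[j + 1] - 1))
    return tiles

# e.g. a picture 12 CTUs wide and 8 CTUs tall, divided into 6x4 = 24 tiles
print(len(tile_grid(12, 8, 6, 4)))
```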
- FIG. 6 is a flowchart illustrating a tile and/or tile group-based picture encoding procedure according to an embodiment.
- Deriving the tiles/tile groups and generating information on the tiles/tile groups (S610) can be performed by the image partitioner 210 of the encoding apparatus, and the encoding (S620) of video/image information including the information on the tiles/tile groups can be performed by the entropy encoder 240 of the encoding apparatus.
- The encoding apparatus may perform picture partitioning to encode an input picture (S600).
- the picture may include one or more tiles/tile groups.
- The encoding apparatus may partition the picture into various types in consideration of the image characteristics and coding efficiency of the picture, and information indicating the partitioning type with the optimal coding efficiency can be generated and signaled to the decoding apparatus.
- The encoding apparatus may generate information on the tile/tile group applied to the picture (S610).
- The information on the tile/tile group may include information indicating the structure of the tile/tile group for the picture.
- The information on the tile/tile group may be signaled through various parameter sets and/or a tile group header, as described later. A specific example is described below.
- The encoding apparatus may encode video/image information including the information on the tile/tile group and output it in the form of a bitstream (S620).
- the bitstream may be transmitted to a decoding device through a digital storage medium or a network.
- the video/image information may include the HLS and/or tile group header syntax described in this document.
- The video/image information may further include the above-described prediction information, residual information, (in-loop) filtering information, and the like.
- The encoding apparatus may reconstruct the current picture, apply in-loop filtering, and encode parameters related to the in-loop filtering, which can be output in the form of a bitstream.
- FIG. 7 is a flow diagram illustrating a tile and/or tile group-based picture decoding procedure according to an embodiment.
- The step of acquiring information on a tile/tile group from a bitstream (S700) and the step of deriving the tile/tile group within a picture (S710) may be performed by the entropy decoder of the decoding apparatus, and the step of decoding a picture based on the tile/tile group (S720) may be performed by a sample decoder of the decoding apparatus.
- The decoding apparatus may obtain information on the tile/tile group from the received bitstream (S700).
- the information on the tile/tile group can be obtained through various parameter sets and/or tile group headers as described later. A specific example will be described later.
- The decoding apparatus may derive the tile/tile group in the current picture based on the information on the tile/tile group (S710).
- The decoding apparatus may decode the current picture based on the tile/tile group (S720). For example, the decoding apparatus may derive a CTU/CU located in the tile and, based on this, perform inter/intra prediction, residual processing, reconstructed block (picture) generation, and/or in-loop filtering procedures. In this case, for example, the decoding apparatus may manage the context model/information in units of tiles/tile groups. In addition, if a neighboring block or neighboring sample referenced during inter/intra prediction is located in a tile different from the current tile in which the current block is located, the decoding apparatus may treat the neighboring block or neighboring sample as not available.
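- The availability rule above can be illustrated with a small sketch (the CTU-to-tile map and helper are hypothetical, not the normative derivation): a neighboring CTU is usable only if it lies inside the picture and in the same tile as the current CTU.

```python
# Minimal sketch of cross-tile availability, assuming a precomputed
# CTU-address-to-tile-index map (ctu_to_tile) in raster scan order.

def neighbor_available(cur_ctu: int, nbr_ctu: int, ctu_to_tile: list) -> bool:
    if nbr_ctu < 0 or nbr_ctu >= len(ctu_to_tile):
        return False  # neighbor lies outside the picture
    return ctu_to_tile[nbr_ctu] == ctu_to_tile[cur_ctu]

# Assumed example: a 4-CTU-wide, 2-CTU-high picture split into two tiles.
ctu_to_tile = [0, 0, 1, 1,
               0, 0, 1, 1]
print(neighbor_available(2, 1, ctu_to_tile))  # False: left neighbor in tile 0
print(neighbor_available(2, 3, ctu_to_tile))  # True: same tile
```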
- FIG. 8 is a diagram showing an example of partitioning a picture into a plurality of tiles.
- tiles may refer to areas within a picture defined by a set of vertical and/or horizontal boundaries that divide the picture into a plurality of rectangles.
- FIG. 8 shows an example of splitting one picture (800) into multiple tiles based on a plurality of column boundaries (810) and row boundaries (820).
- The picture contains 32 maximum coding units, i.e., Coding Tree Units (CTUs), which are numbered and shown.
- each tile may include an integer number of CTUs processed in a raster scan order within each tile.
- A plurality of tiles within a picture, like the CTUs within each tile, can be processed in raster scan order within the picture.
- The tiles can be grouped to form tile groups, and tiles within a single tile group can be raster scanned. How a picture is split into tiles can be signaled through the PPS, as described below.
- The information about tiles derived from the PPS may be used to check (or read) the following items. First, it can be checked whether only one tile exists in the picture or whether more than one tile exists. If more than one tile is present, it can be checked whether the tiles are uniformly distributed, the dimensions of the tiles can be checked, and whether the loop filter is enabled can be checked.
- the PPS may signal the syntax element single_tile_in_pic_flag first.
- The single_tile_in_pic_flag may indicate whether only one tile exists in the picture or whether a plurality of tiles exist in the picture. When a plurality of tiles are present in the picture, the decoding apparatus can parse information about the number of tile rows and tile columns using the syntax elements num_tile_columns_minus1 and num_tile_rows_minus1.
- num_tile_columns_minus1 and num_tile_rows_minus1 can specify the numbers of tile columns and tile rows into which the picture is divided.
- The heights of tile rows and the widths of tile columns can be expressed in units of CTBs (coding tree blocks).
- An additional flag can be parsed to check whether the tiles in the picture are uniformly spaced. If the tiles are not uniformly spaced, the number of CTBs can be explicitly signaled for each tile column and tile row (i.e., the width in CTBs of each tile column and the height in CTBs of each tile row can be signaled). If the tiles are spaced uniformly, the tiles can have the same width and height.
- Another flag (e.g., the syntax element loop_filter_across_tiles_enabled_flag) can be parsed to determine whether the loop filter is enabled across tile boundaries.
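- A minimal parsing sketch of the order described above is shown below. The BitReader is a simplified stand-in for a real bitstream reader with u(1) and ue(v) reads; the syntax element names follow Table 1, but the surrounding code is illustrative only.

```python
# Minimal sketch of the PPS tile-info parsing order described above.

class BitReader:
    def __init__(self, bits: str):          # bits given as a "0"/"1" string
        self.bits, self.pos = bits, 0
    def u1(self) -> int:                    # read one bit, u(1)
        b = int(self.bits[self.pos]); self.pos += 1; return b
    def ue(self) -> int:                    # unsigned Exp-Golomb, ue(v)
        zeros = 0
        while self.u1() == 0:
            zeros += 1
        suffix = 0
        for _ in range(zeros):
            suffix = (suffix << 1) | self.u1()
        return (1 << zeros) - 1 + suffix

def parse_pps_tile_info(r: BitReader) -> dict:
    info = {"single_tile_in_pic_flag": r.u1()}
    if not info["single_tile_in_pic_flag"]:
        info["num_tile_columns_minus1"] = r.ue()
        info["num_tile_rows_minus1"] = r.ue()
        info["uniform_tile_spacing_flag"] = r.u1()
        if not info["uniform_tile_spacing_flag"]:
            # explicit per-column widths / per-row heights (last one is
            # typically inferred; the counts here are an assumption)
            info["tile_column_width_minus1"] = [
                r.ue() for _ in range(info["num_tile_columns_minus1"])]
            info["tile_row_height_minus1"] = [
                r.ue() for _ in range(info["num_tile_rows_minus1"])]
        info["loop_filter_across_tiles_enabled_flag"] = r.u1()
    return info

# Example: a 2x2 uniformly spaced tile grid with the loop filter enabled.
print(parse_pps_tile_info(BitReader("0" + "010" + "010" + "1" + "1")))
```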
- Table 1 summarizes examples of main information about tiles that can be derived by parsing the PPS.
- Table 1 can represent the PPS RBSP syntax.
- Table 2 below shows an example of semantics for the syntax elements described in Table 1 above.
- FIG. 9 is a block diagram showing a configuration of an encoding apparatus according to an embodiment
- FIG. 10 is a block diagram showing a configuration of a decoding apparatus according to an embodiment.
- the encoding apparatus 900 shown in FIG. 9 includes a partitioning module 910 and an encoding module 920.
- The partitioning module 910 may perform the same and/or similar operations as the image partitioner 210 of the encoding apparatus shown in FIG. 2, and the encoding module 920 may perform the same and/or similar operations as the entropy encoder 240 of the encoding apparatus shown in FIG. 2.
- The input video can be divided in the partitioning module 910 and then encoded in the encoding module 920. After being encoded, the encoded video can be output from the encoding apparatus 900.
- An example of a block diagram of a decoding apparatus is shown in FIG. 10.
- The decoding apparatus 1000 shown in FIG. 10 includes a decoding module 1010 and a deblocking filter 1020.
- The decoding module 1010 can perform the same and/or similar operations as the entropy decoder 310 of the decoding apparatus shown in FIG. 3, and the deblocking filter 1020 can perform the same and/or similar operations as the filter 350 of the decoding apparatus shown in FIG. 3.
- The decoding module 1010 may decode the input bitstream received from the encoding apparatus 900 to derive information about tiles, and may derive a processing unit based on the decoded information.
- the deblocking filter 1020 may apply an in-loop deblocking filter to process the processing unit.
- In-loop filtering may be applied to remove coding artifacts generated during the partitioning process.
- The in-loop filtering operation may include an adaptive loop filter (ALF), a deblocking filter (DF), a sample adaptive offset (SAO), etc. After the filtering, the decoded picture can be output.
- FIG. 11 is a diagram illustrating an example of a tile and tile group unit constituting a current picture.
- tiles can be grouped to form tile groups.
- FIG. 11 shows an example in which one picture is divided into tiles and tile groups. In FIG. 11, the picture includes 9 tiles and 3 tile groups.
- Each tile group has a tile group header. Tile groups can have a meaning similar to slice groups, and each tile group can be independently coded.
- A tile group can contain one or more tiles.
- A tile group header can refer to a PPS, and the PPS can in turn refer to an SPS (Sequence Parameter Set).
- The following information can be determined from the tile group header. First, if more than one tile exists per picture, the tile group address and the number of tiles in the tile group can be determined. Next, the tile group type, such as intra/predictive/bi-directional, can be determined. Next, the LSB (Least Significant Bits) of the picture order count (POC) can be determined. Next, if there is more than one tile in the picture, the offset length and the entry points to the tiles can be determined.
- Table 4 shows an example of the syntax of the tile group header.
- the tile group header (tile_group_header) can be replaced with a slice header.
- Table 5 below shows an example of English semantics for the syntax of the tile group header.
- The tile group header syntax elements tile_group_pic_parameter_set_id and tile_group_pic_order_cnt_lsb shall be the same in all tile group headers of a coded picture.
- tile_group_pic_parameter_set_id specifies the value of pps_pic_parameter_set_id for the PPS in use. The value of tile_group_pic_parameter_set_id shall be in the range of 0 to 63, inclusive.
- The TemporalId of the current picture shall be greater than or equal to the value of the TemporalId of the PPS that has pps_pic_parameter_set_id equal to tile_group_pic_parameter_set_id.
- tile_group_address specifies the tile address of the first tile in the tile group, where the tile address is the tile ID as specified by Equation c-7.
- The length of tile_group_address is Ceil(Log2(NumTilesInPic)) bits.
- The value of tile_group_address shall be in the range of 0 to NumTilesInPic - 1, inclusive. When tile_group_address is not present, it is inferred to be equal to 0.
- num_tiles_in_tile_group_minus1 plus 1 specifies the number of tiles in the tile group. The value of num_tiles_in_tile_group_minus1 shall be in the range of 0 to NumTilesInPic - 1, inclusive.
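- As a small worked example of the address length rule quoted above (Ceil(Log2(NumTilesInPic)) bits):

```python
import math

# tile_group_address is coded with Ceil(Log2(NumTilesInPic)) bits.
# Valid for NumTilesInPic > 1; with a single tile the address is not
# present and is inferred to be 0.
def tile_group_address_bits(num_tiles_in_pic: int) -> int:
    return math.ceil(math.log2(num_tiles_in_pic))

for n in (2, 9, 24):
    print(n, "tiles ->", tile_group_address_bits(n), "bits")
# 2 tiles -> 1 bit, 9 tiles -> 4 bits, 24 tiles -> 5 bits
```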
- tile_group_type specifies the coding type of the tile group according to Table 6.
- When nal_unit_type is equal to IRAP_NUT, i.e., the picture is an IRAP picture, tile_group_type shall be equal to 2.
- tile_group_pic_order_cnt_lsb specifies the picture order count modulo MaxPicOrderCntLsb for the current picture.
- The length of the tile_group_pic_order_cnt_lsb syntax element is log2_max_pic_order_cnt_lsb_minus4 + 4 bits.
- The value of tile_group_pic_order_cnt_lsb shall be in the range of 0 to MaxPicOrderCntLsb - 1, inclusive.
- offset_len_minus1 plus 1 specifies the length, in bits, of the entry_point_offset_minus1[ i ] syntax elements.
- The value of offset_len_minus1 shall be in the range of 0 to 31, inclusive.
- entry_point_offset_minus1[ i ] plus 1 specifies the i-th entry point offset in bytes, and is represented by offset_len_minus1 plus 1 bits.
- The tile group data that follows the tile group header consists of num_tiles_in_tile_group_minus1 + 1 subsets, with subset index values ranging from 0 to num_tiles_in_tile_group_minus1, inclusive.
- the tile group may include a tile group header and tile group data.
- Each CTU in the tile group can be mapped to a location in the picture and decoded.
- Table 7 below shows an example of the syntax of tile group data. In Table 7, tile group data can be replaced with slice data.
- Table 8 below shows an example of English semantics for the syntax of the tile group data.
- The derivation in Table 8 includes, for example, the following fragments:
- RowHeight[ j ] = tile_row_height_minus1[ j ] + 1
- ctbY = ctbAddrRs / PicWidthInCtbsY
- NumCtusInTile[ tileIdx ] = ColWidth[ i ] * RowHeight[ j ]
- tileStartFlag = 0 ... tileStartFlag = 1
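- A minimal sketch of the kind of derivation these fragments come from is shown below (variable names mirror the fragments; the loop structure and example values are illustrative):

```python
# Number of CTUs per tile from per-column widths and per-row heights in
# CTBs, mirroring NumCtusInTile[tileIdx] = ColWidth[i] * RowHeight[j].

def num_ctus_per_tile(col_width_ctbs: list, row_height_ctbs: list) -> dict:
    num_cols = len(col_width_ctbs)
    num_ctus = {}
    for j, row_h in enumerate(row_height_ctbs):
        for i, col_w in enumerate(col_width_ctbs):
            num_ctus[j * num_cols + i] = col_w * row_h
    return num_ctus

# ctbY = ctbAddrRs / PicWidthInCtbsY maps a raster-scan CTB address to its row:
pic_width_in_ctbs_y = 18
ctb_addr_rs = 40
ctb_y = ctb_addr_rs // pic_width_in_ctbs_y  # -> 2

# Uniform 6x4 grid of 3x3-CTB tiles -> 9 CTUs per tile.
print(num_ctus_per_tile([3] * 6, [3] * 4)[0], ctb_y)  # 9 2
```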
- Some implementations running on multi-core CPUs require dividing the source picture into tiles and tile groups, where each tile group can be processed in parallel on a separate core.
- Such parallel processing is useful for high-resolution, real-time encoding of videos.
- The above parallel processing can reduce the sharing of information between tile groups, thereby relaxing memory constraints. Tiles can be distributed to different threads for parallel processing, so parallel architectures can benefit from this partitioning mechanism, as sketched below.
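- A minimal sketch of such tile-group-level parallelism is shown below; decode_tile_group() is a hypothetical per-group worker, and real codecs would additionally share reference pictures and loop-filter state:

```python
# Minimal sketch: process independently decodable tile groups in parallel.
from concurrent.futures import ThreadPoolExecutor

def decode_tile_group(group_id: int, payload: bytes) -> str:
    # Placeholder for entropy decoding + reconstruction of one tile group.
    return f"group {group_id}: {len(payload)} bytes decoded"

tile_group_payloads = [b"\x00" * 100, b"\x00" * 80, b"\x00" * 120]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(decode_tile_group,
                            range(len(tile_group_payloads)),
                            tile_group_payloads))
print(results)
```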
- the maximum transmission unit (MTU) size matching is reviewed.
- Coded pictures transmitted through a network are subject to fragmentation when they are larger than the MTU size. Similarly, if the coded segments are small, the overhead of the IP (Internet Protocol) header can become significant. Packet fragmentation can lead to a loss of error resiliency. To mitigate the effects of packet fragmentation, the picture can be divided into tiles and each tile/tile group can be packed as a separate packet, so that each packet is smaller than the MTU size.
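- The packetization idea can be illustrated as follows (the sizes and packing policy are illustrative; real systems use RTP/NAL-aware packetization):

```python
# Minimal sketch of MTU-aware packetization: each tile/tile group payload
# becomes its own packet when it fits under the MTU; an oversized payload
# would still be fragmented.

MTU = 1500  # bytes, a typical Ethernet MTU (assumed for the example)

def packetize(tile_payload_sizes, mtu=MTU):
    packets = []
    for size in tile_payload_sizes:
        if size <= mtu:
            packets.append([size])          # one tile -> one packet
        else:                               # fall back to fragmentation
            tail = [size % mtu] if size % mtu else []
            packets.append([mtu] * (size // mtu) + tail)
    return packets

print(packetize([900, 1400, 3200]))
# [[900], [1400], [1500, 1500, 200]]
```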
- FIG. 13 is a diagram showing an example of a picture in a video conference video program.
- Fig. 13 shows an example of a picture in a video program for video conferencing when a video conference with multiple participants is held.
- The participants are Speaker 1, Speaker 2, Speaker 3 and Speaker 4.
- The area corresponding to each participant in the picture can correspond to one of the preset areas, and each of the preset areas can be coded as a single tile or a group of tiles.
- the single tile or group of tiles corresponding to the participant may also change.
- FIG. 14 is a diagram showing an example of partitioning a picture into tiles or tile groups in a video conference video program.
- an area assigned to speaker 1 participating in a video conference may be coded as a single tile.
- the areas assigned to each of Speaker 2, Speaker 3, and Speaker 4 can be coded as a single tile.
- a picture can be acquired from 360 degree video data.
- 360-degree video can mean video or image content that is captured or played back in all directions (360 degrees) at the same time, as required to provide VR (Virtual Reality).
- 360-degree video can refer to a video or image that appears in various types of 3D space depending on the 3D model; for example, a 360-degree video can be displayed on a spherical surface.
- A two-dimensional (2D) picture obtained from 360-degree video data can be encoded with at least one spatial resolution.
- A picture can be encoded with a first resolution and a second resolution, and the first resolution may be higher than the second resolution.
- a picture can be encoded in two spatial resolutions, each having a size of 1536x1536 and 768x768, but the spatial resolution is not limited thereto and may correspond to various sizes.
- a 6x4 size tile grid may be used for bitstreams encoded at each of the two spatial resolutions.
- a motion constraint tile set (MCTS) for each position of the tiles may be coded and used.
- each of the MCTSs may include tiles positioned in respective areas set for a picture.
- An MCTS may contain at least one tile, forming a set of rectangular tiles.
- a tile can represent a rectangular area composed of coding tree blocks (CTBs) of a two-dimensional picture.
- A tile can be identified based on a specific tile row and tile column within a picture.
- When inter prediction is performed on blocks within a specific MCTS in the encoding/decoding process, the blocks within the specific MCTS may be restricted to refer only to the corresponding MCTS of the reference picture for motion estimation/motion compensation.
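- A minimal sketch of this motion constraint check is shown below (the (x0, y0, x1, y1) rectangle representation of an MCTS and the helper are illustrative; sub-pel interpolation filters would additionally require a margin inside the boundary):

```python
# Minimal sketch: a motion vector is allowed only if the whole reference
# block stays inside the same MCTS region of the reference picture.
# Units are luma samples.

def mv_allowed(block, mv, mcts_rect) -> bool:
    bx, by, bw, bh = block            # current block position/size
    x0, y0, x1, y1 = mcts_rect        # MCTS bounds in the reference picture
    ref_x, ref_y = bx + mv[0], by + mv[1]
    return (ref_x >= x0 and ref_y >= y0 and
            ref_x + bw <= x1 and ref_y + bh <= y1)

mcts = (0, 0, 512, 512)
print(mv_allowed((500, 500, 16, 16), (-4, -4), mcts))  # True
print(mv_allowed((500, 500, 16, 16), (8, 0), mcts))    # False: crosses MCTS edge
```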
- The 12 first MCTSs (1510) may be extracted from the bitstream encoded at the 1536x1536 resolution.
- The first MCTSs (1510) may correspond to a region having a first resolution in the same picture, and the second MCTSs (1520) may correspond to a region having a second resolution in the same picture.
- the first MCTSs may correspond to the viewport area in the picture.
- the viewport area may refer to the area that the user is viewing in the 360-degree video.
- The first MCTSs may correspond to the ROI (Region of Interest) in the picture.
- The ROI area can refer to the area of interest to users, suggested by the 360 content provider.
- The MCTSs can be merged into a 1920x4708-sized merged picture (1530), and the merged picture (1530) can have four tile groups.
- tile_addr_val[ i ][ j ] specifies the tile_group_address value of the tile of the i-th tile row and the j-th tile column.
- The length of tile_addr_val[ i ][ j ] is tile_addr_len_minus1 + 1 bits.
- tile_addr_val[ i ][ j ] shall not be equal to tile_addr_val[ m ][ n ] when i is not equal to m or j is not equal to n.
- num_mcts_in_pic_minus1 plus 1 specifies the number of MCTSs in the picture.
- A syntax element uniform_tile_spacing_flag indicating whether tiles having the same width and height are to be derived by dividing the picture uniformly may be parsed.
- The uniform_tile_spacing_flag can be used to indicate whether the tiles in the picture are divided in a uniform manner.
- When the syntax element uniform_tile_spacing_flag is not enabled, syntax elements indicating the width of each tile column and the height of each tile row can be parsed.
- A syntax element indicating whether the tiles in the picture form MCTSs can be parsed. The flag may indicate that the tiles or groups of tiles in the picture may or may not form a motion constrained tile set, and that the use of sample values or variables outside the rectangular tile set is restricted or unrestricted. If the flag is 1, it can indicate that the picture is divided into MCTSs.
- The syntax element num_mcts_in_pic_minus1 may represent the number of MCTSs. In one embodiment, when the flag is 1, i.e., when the picture is divided into MCTSs, the syntax element num_mcts_in_pic_minus1 can be parsed.
- For the i-th MCTS, a syntax element may indicate the tile_group_address value, which is the position of the tile located at the top-left of the MCTS.
- The syntax element bottom_right_tile_addr[ i ] may indicate the tile_group_address value, which is the position of the tile located at the bottom-right of the i-th MCTS.
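- Assuming tile addresses are assigned in raster scan order over the tile grid, the tiles belonging to an MCTS can be recovered from its two corner addresses as sketched below (the helper and grid width are illustrative, not the disclosed derivation):

```python
# Minimal sketch: enumerate the tiles of an MCTS from its top-left and
# bottom-right tile addresses on a grid with num_tile_cols columns.

def mcts_tiles(top_left_addr: int, bottom_right_addr: int,
               num_tile_cols: int) -> list:
    r0, c0 = divmod(top_left_addr, num_tile_cols)
    r1, c1 = divmod(bottom_right_addr, num_tile_cols)
    return [r * num_tile_cols + c
            for r in range(r0, r1 + 1)
            for c in range(c0, c1 + 1)]

# Example: 6-column grid; MCTS with corner addresses 8 and 21.
print(mcts_tiles(8, 21, 6))  # [8, 9, 14, 15, 20, 21]
```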
- tile group data can be replaced with slice data.
- Table 12 below shows English semantics for the tile group data syntax.
- FIG. 16 is a diagram showing an example of dividing a picture based on an ROI region.
- For tiling that partitions a picture into a plurality of tiles, flexible tiling based on a region of interest (ROI) can be achieved.
- Referring to FIG. 16, a picture can be divided into multiple tile groups based on ROI regions.
- Table 15 below shows an example of English semantics for the above syntax.
- A syntax element tile_group_info_in_pps_flag indicating whether tile group information related to the tiles included in a tile group exists in the PPS or in a tile group header referring to the PPS may be parsed.
- When tile_group_info_in_pps_flag is 1, it can indicate that the tile group information exists in the PPS; otherwise, it can indicate that the tile group information does not exist in the PPS and exists in the tile group header referring to the PPS.
- num_tile_groups_in_pic_minus1 may indicate the number of tile groups in the picture referring to the PPS.
- The syntax element pps_first_tile_id can represent the tile ID of the first tile of a tile group, and the syntax element pps_last_tile_id can represent the tile ID of the last tile of the tile group.
- FIG. 17 is a diagram showing an example of partitioning a picture into a plurality of tiles.
- For tiling that divides a picture into a plurality of tiles, flexible tiling can be achieved by allowing tiles smaller than the size of the coding tree unit (CTU).
- the tiling structure according to this method can be usefully applied to recent video applications such as video conferencing programs.
- A picture may be partitioned into a plurality of tiles, and at least one of the plurality of tiles may be smaller than the size of the coding tree unit (CTU).
- For example, a picture can be partitioned into Tile 1, Tile 2, Tile 3 and Tile 4, and among them, Tile 1, Tile 2 and Tile 4 may be smaller than the CTU size.
- The syntax element tile_size_unit_idc may represent the unit size of a tile. For example, if tile_size_unit_idc is 0, 1, 2, ..., the unit for the height and width of a tile can be defined as a coding tree block (CTB) of size 4, 8, 16, ..., respectively.
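- A small sketch of this unit-size rule is shown below; the doubling of the unit with each increment of tile_size_unit_idc is an assumption based on the 4, 8, 16 example above:

```python
# Minimal sketch of the tile unit-size rule; the power-of-two scaling is an
# assumption inferred from the 4, 8, 16... example, not the normative rule.

def tile_size_unit(tile_size_unit_idc: int) -> int:
    return 4 << tile_size_unit_idc

for idc in range(4):
    print(idc, "->", tile_size_unit(idc))  # 0->4, 1->8, 2->16, 3->32
```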
- FIG. 18 is a flow chart showing an operation of a decoding device according to an embodiment
- FIG. 19 is a block diagram showing a configuration of a decoding device according to an embodiment.
- Each step disclosed in FIG. 18 may be performed by the decoding apparatus 300 disclosed in FIG. 3. More specifically, S1800 and S1810 may be performed by the entropy decoder 310 disclosed in FIG. 3, S1820 may be performed by the predictor 330 disclosed in FIG. 3, and S1830 may be performed by the adder 340 disclosed in FIG. 3.
- The operations according to S1800 to S1830 are based on some of the contents described above with reference to FIGS. 1 to 17. Therefore, specific contents overlapping with the contents described above with reference to FIGS. 1 to 17 will be omitted or simplified.
- As shown in FIG. 19, the decoding apparatus according to the embodiment may include an entropy decoder 310, a predictor 330, and an adder 340. However, all of the components shown in FIG. 19 may not be essential components of the decoding apparatus, and the decoding apparatus may be implemented by more or fewer components than the components shown in FIG. 19.
- In the decoding apparatus, the entropy decoder 310, the predictor 330, and the adder 340 may each be implemented as a separate chip, or at least two or more components may be implemented through one chip.
- The decoding apparatus may obtain, from the bitstream, image information including partition information for the current picture and prediction information for the current block included in the current picture (S1800). More specifically, the entropy decoder 310 of the decoding apparatus can obtain the image information including the partition information for the current picture and the prediction information for the current block included in the current picture from the bitstream.
- The partition information may include at least one of information for dividing the current picture into a plurality of tiles, information for dividing the current picture into a plurality of tile groups, or information for dividing the current picture into a plurality of slices.
- the prediction information may include at least one of information on intra prediction for the current block, information on inter prediction, or information on CIIP (Combined Inter Intra Prediction).
- The decoding apparatus may derive a partitioning structure of the current picture based on a plurality of tiles (S1810). More specifically, the entropy decoder 310 of the decoding apparatus may derive the partitioning structure of the current picture based on the plurality of tiles, based on the partition information for the current picture.
- the decoding apparatus may derive prediction samples for the current block based on the prediction information for the current block included in one of the plurality of tiles (S1820). More specifically, the prediction unit 330 of the decoding apparatus may derive prediction samples for the current block based on the prediction information for the current block included in one of the plurality of tiles.
- The decoding apparatus may reconstruct the current picture based on the prediction samples (S1830). More specifically, the adder 340 of the decoding apparatus can reconstruct the current picture based on the prediction samples.
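- The S1800 to S1830 flow can be summarized in a short sketch (all four helpers are hypothetical placeholders mirroring only the ordering of the procedure, not a real decoder):

```python
# Minimal sketch of the decoding flow S1800-S1830.

def parse_image_info(bitstream):          # S1800: partition + prediction info
    return {"partition_info": {}, "prediction_info": {}}

def derive_tile_structure(image_info):    # S1810: tile-based partitioning
    return ["tile0", "tile1"]

def derive_prediction_samples(image_info, tiles):  # S1820
    return [[0] * 16 for _ in tiles]

def reconstruct_picture(prediction_samples):       # S1830
    return prediction_samples

def decode_picture(bitstream):
    info = parse_image_info(bitstream)
    tiles = derive_tile_structure(info)
    pred = derive_prediction_samples(info, tiles)
    return reconstruct_picture(pred)

print(len(decode_picture(b"\x00")))  # 2
```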
- The partition information for the current picture may include at least one of flag information on whether the current picture is divided into MCTSs (motion constrained tile sets), count information on the number of MCTSs in the current picture, location information of the tile positioned at the top-left for each of the MCTSs, or location information of the tile positioned at the bottom-right for each of the MCTSs.
- the decoding apparatus may derive MCTSs for the current picture based on the division information.
- The decoding apparatus may merge the MCTSs to form a merged picture, and may decode the merged picture based on the MCTSs.
- The first MCTSs among the MCTSs may correspond to a region having a first resolution, and the second MCTSs among the MCTSs may correspond to a region having a second resolution. The first resolution may be higher than the second resolution.
- The current picture may be acquired from 360-degree video data, and the first MCTSs may correspond to a viewport area in the current picture.
- each of the MCTSs may include tiles located in each of preset regions in the current picture.
- The partition information for the current picture may include information on whether tile group information for the tiles included in at least one tile group exists in the PPS or in a tile group header referring to the PPS.
- the division information on the current picture may include information on unit sizes of the plurality of tiles.
- At least one tile among the plurality of tiles may be smaller than the size of a coding tree unit (CTU).
- According to this disclosure, a picture can be flexibly partitioned into a plurality of tiles. Further, according to this disclosure, the efficiency of picture partitioning can be improved based on the partition information for the current picture.
- FIG. 20 is a flow chart showing the operation of the encoding apparatus according to an embodiment
- FIG. 21 is a block diagram showing the configuration of the encoding apparatus according to the embodiment.
- The encoding apparatus according to FIGS. 20 and 21 can perform operations corresponding to those of the decoding apparatus according to FIGS. 18 and 19. Therefore, the operations of the encoding apparatus described below with reference to FIGS. 20 and 21 can be applied in the same way to the decoding apparatus according to FIGS. 18 and 19.
- Each step disclosed in FIG. 20 may be performed by the encoding apparatus 200 disclosed in FIG. 2. More specifically, S2000 and S2010 may be performed by the image partitioner 210 disclosed in FIG. 2, S2020 and S2030 may be performed by the predictor 220 disclosed in FIG. 2, and S2040 may be performed by the entropy encoder 240 disclosed in FIG. 2. In addition, the operations according to S2000 to S2040 are based on some of the contents described above with reference to FIGS. 1 to 17. Therefore, specific contents overlapping with the contents described above with reference to FIGS. 1 to 17 will be omitted or simplified.
- the encoding apparatus may include an image division unit 210, a prediction unit 220, and an entropy encoding unit 240.
- All of the components shown in FIG. 21 may not be essential components of the encoding apparatus, and the encoding apparatus may be implemented by more or fewer components than the components shown in FIG. 21.
- In the encoding apparatus, the image partitioner 210, the predictor 220, and the entropy encoder 240 may each be implemented as a separate chip, or at least two or more components may be implemented through one chip.
- The encoding apparatus can divide a current picture into a plurality of tiles (S2000). More specifically, the image partitioner 210 of the encoding apparatus may divide the current picture into the plurality of tiles.
- The encoding apparatus may generate partition information for the current picture based on the plurality of tiles (S2010). More specifically, the image partitioner 210 of the encoding apparatus may generate the partition information for the current picture based on the plurality of tiles.
- The encoding apparatus may derive prediction samples for the current block included in one of the plurality of tiles (S2020). More specifically, the predictor 220 of the encoding apparatus can derive the prediction samples for the current block included in one of the plurality of tiles.
- The encoding apparatus may generate prediction information for the current block based on the prediction samples (S2030). More specifically, the predictor 220 of the encoding apparatus can generate the prediction information for the current block based on the prediction samples.
- The encoding apparatus may encode image information including the partition information for the current picture and the prediction information for the current block (S2040). More specifically, the entropy encoder 240 of the encoding apparatus can encode the image information including at least one of the partition information for the current picture or the prediction information for the current block.
- In one embodiment, the partition information for the current picture may include at least one of flag information on whether the current picture is divided into MCTSs (motion constrained tile sets), count information on the number of MCTSs in the current picture, location information of the tile located at the top-left for each of the MCTSs, or location information of the tile located at the bottom-right for each of the MCTSs.
- the encoding apparatus may combine the MCTSs to form a merged picture and encode the merged picture based on the MCTSs.
- The first MCTSs among the MCTSs may correspond to a region having a first resolution, and the second MCTSs may correspond to a region having a second resolution. The first resolution may be higher than the second resolution.
- The current picture may be acquired from 360-degree video data, and the first MCTSs may correspond to a viewport area in the current picture.
- each of the MCTSs may include tiles located in each of preset regions in the current picture.
- The partition information for the current picture may include information on whether tile group information for the tiles included in at least one tile group exists in a PPS or in a tile group header referring to the PPS.
- the division information on the current picture may include information on a unit size of the plurality of tiles.
- At least one tile among the plurality of tiles may be smaller than the size of a coding tree unit (CTU).
- The above-described method according to this disclosure can be implemented in the form of software, and the encoding apparatus and/or decoding apparatus according to this disclosure may be included in a device that performs image processing, such as a TV, a computer, a smartphone, a set-top box, or a display device.
- Modules are stored in memory and can be executed by the processor.
- The memory can be inside or outside the processor and can be connected to the processor by various well-known means.
- The processor may include an ASIC (application-specific integrated circuit), other chipsets, logic circuits and/or data processing devices.
- The memory may include a ROM (read-only memory), a RAM (random access memory), a flash memory, memory cards, storage media and/or other storage devices.
- the embodiments described in the present disclosure may be implemented and performed on a processor, microprocessor, controller, or chip.
- the functional units shown in each diagram may be implemented and performed on a computer, processor, microprocessor, controller or chip.
- information for implementation (ex. information on instructions) or algorithms can be stored on a digital storage medium.
- The decoding apparatus and the encoding apparatus to which this disclosure is applied may be included in a multimedia broadcasting transmission/reception device, a mobile communication terminal, a home cinema video device, a digital cinema video device, a surveillance camera, a video chat device, a real-time communication device such as video communication, a mobile streaming device, a storage medium, a camcorder, a video-on-demand (VoD) service providing device, an OTT (over-the-top) video device, an Internet streaming service providing device, a 3D video device, a VR (virtual reality) device, an AR (augmented reality) device, a video phone device, a transportation terminal (e.g., a vehicle terminal (including an autonomous vehicle), an airplane terminal, a ship terminal, etc.), a medical video device, and the like, and can be used to process video signals or data signals.
- OTT (over-the-top) video devices may include game consoles, Blu-ray players, Internet-access TVs, home theater systems, smartphones, tablet PCs, and DVRs (digital video recorders).
- the processing method to which this disclosure is applied can be produced in the form of a program executed by a computer, and can be stored in a computer-readable recording medium.
- Multimedia data having a data structure according to the present disclosure can also be produced by a computer.
- the computer-readable recording medium includes all kinds of storage devices and distributed storage devices in which computer-readable data is stored.
- The computer-readable recording medium may include, for example, a Blu-ray disc (BD), a universal serial bus (USB), a ROM, a PROM, an EPROM, an EEPROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device.
- The computer-readable recording medium also includes media implemented in the form of a carrier wave (for example, transmission via the Internet).
- In addition, the bitstream generated by the encoding method can be stored in a computer-readable recording medium or transmitted over a wired or wireless communication network.
- An embodiment of the present disclosure may be implemented as a computer program product using program code, and the program code may be executed in a computer according to an embodiment of the present disclosure. The program code may be stored on a computer-readable carrier.
- Figure 22 shows an example of a content streaming system to which the disclosure of this document can be applied.
- The content streaming system to which this disclosure is applied may largely include an encoding server, a streaming server, a web server, a media storage, a user device, and a multimedia input device.
- The encoding server compresses content input from multimedia input devices such as smartphones, cameras, and camcorders into digital data, generates a bitstream, and transmits it to the streaming server.
- As another example, when multimedia input devices such as smartphones, cameras, and camcorders directly generate the bitstream, the encoding server may be omitted.
- the bitstream may be generated by an encoding method or a bitstream generation method to which the present disclosure is applied, and the streaming server may temporarily store the bitstream while transmitting or receiving the bitstream.
- The streaming server transmits multimedia data to a user device based on a user request through the web server, and the web server serves as a medium that informs the user of what services exist. When the user requests a desired service from the web server, the web server delivers the request to the streaming server, and the streaming server transmits multimedia data to the user.
- The content streaming system may include a separate control server, in which case the control server controls the commands/responses between devices in the content streaming system.
- The streaming server can receive content from a media storage and/or an encoding server. For example, when receiving content from the encoding server, it can receive the content in real time. In this case, in order to provide a seamless streaming service, the streaming server may store the bitstream for a predetermined time.
- Examples of the user device may include a computer, a laptop computer, a digital broadcasting terminal, a PDA (personal digital assistant), a PMP (portable multimedia player), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (for example, a smartwatch, smart glasses, or a head mounted display), a digital TV, a desktop computer, digital signage, and the like.
- Each server in the content streaming system can be operated as a distributed server, and in this case, data received from each server can be distributed and processed.
Abstract
A method for decoding an image by a decoding device according to the present disclosure comprises the steps of: obtaining image information including partition information of a current picture and prediction information of a current block included in the current picture from a bitstream; deriving a partitioning structure of the current picture, which is based on a plurality of tiles, on the basis of the partition information of the current picture; deriving prediction samples of the current block on the basis of the prediction information of the current block included in one tile among the plurality of tiles; and reconstructing the current picture on the basis of the prediction samples.
Description
Specification
Title of the invention: Method and apparatus for partitioning a picture based on signaled information
Technical field
[1] This disclosure relates to video coding technology, and more specifically, to a method and apparatus for partitioning a picture based on signaled information in a video coding system.
Background
[2] Recently, demand for high-resolution, high-quality images/videos, such as 4K or 8K or higher UHD (Ultra High Definition) images/videos, is increasing in various fields. As image/video data becomes higher in resolution and quality, the amount of information or bits transmitted increases relative to existing image/video data. Therefore, when image data is transmitted using a medium such as an existing wired/wireless broadband line, or when image/video data is stored using an existing storage medium, transmission and storage costs increase.
[3] In addition, interest in and demand for immersive media such as VR (Virtual Reality) and AR (Augmented Reality) content and holograms are increasing recently, and broadcasting of images/videos with characteristics different from those of real images, such as game images, is increasing.
[4] Accordingly, a flexible picture partitioning method that can be applied to efficiently compress and play back images/videos in image/video applications having various characteristics is required.
Detailed description of the invention
Technical problem
[5] A technical problem of this disclosure is to provide a method and apparatus for increasing image coding efficiency.
[6] Another technical problem of this disclosure is to provide a method and apparatus for signaling partitioning information.
[7] Another technical problem of this disclosure is to provide a method and apparatus for flexibly partitioning a picture based on signaled information.
[8] Another technical problem of this disclosure is to provide a method and apparatus for partitioning a current picture based on partition information for the current picture.
[9] Another technical problem of this disclosure is to provide a method and apparatus for partitioning a current picture based on at least one of flag information on whether the current picture is divided into MCTSs (motion constrained tile sets), count information on the number of MCTSs in the current picture, location information of the tile located at the top-left for each of the MCTSs, or location information of the tile located at the bottom-right for each of the MCTSs.
Technical solution
[10] According to an embodiment of the present disclosure, an image decoding method performed by a decoding apparatus is provided. The method includes: obtaining, from a bitstream, image information including partition information for a current picture and prediction information for a current block included in the current picture; deriving a partitioning structure of the current picture based on a plurality of tiles, based on the partition information for the current picture; deriving prediction samples for the current block based on the prediction information for the current block included in one tile among the plurality of tiles; and reconstructing the current picture based on the prediction samples, wherein the partition information for the current picture includes at least one of flag information on whether the current picture is divided into MCTSs (motion constrained tile sets), count information on the number of MCTSs in the current picture, location information of the tile located at the top-left for each of the MCTSs, or location information of the tile located at the bottom-right for each of the MCTSs.
[11] According to another embodiment of the present disclosure, a decoding apparatus for performing image decoding is provided. The decoding apparatus includes: an entropy decoder that obtains, from a bitstream, image information including partition information for a current picture and prediction information for a current block included in the current picture, and derives a partitioning structure of the current picture based on a plurality of tiles, based on the partition information for the current picture; a predictor that derives prediction samples for the current block based on the prediction information for the current block included in one tile among the plurality of tiles; and an adder that reconstructs the current picture based on the prediction samples, wherein the partition information for the current picture includes at least one of flag information on whether the current picture is divided into MCTSs (motion constrained tile sets), count information on the number of MCTSs in the current picture, location information of the tile located at the top-left for each of the MCTSs, or location information of the tile located at the bottom-right for each of the MCTSs.
[12] According to another embodiment of the present disclosure, an image encoding method performed by an encoding apparatus is provided. The method includes: dividing a current picture into a plurality of tiles; generating partition information for the current picture based on the plurality of tiles; deriving prediction samples for a current block included in one tile among the plurality of tiles; generating prediction information for the current block based on the prediction samples; and encoding image information including the partition information for the current picture and the prediction information for the current block, wherein the partition information for the current picture includes at least one of flag information on whether the current picture is divided into MCTSs (motion constrained tile sets), count information on the number of MCTSs in the current picture, location information of the tile located at the top-left for each of the MCTSs, or location information of the tile located at the bottom-right for each of the MCTSs.
[13] According to another embodiment of the present disclosure, an encoding apparatus for performing image encoding is provided. The encoding apparatus includes: an image partitioner that divides a current picture into a plurality of tiles and generates partition information for the current picture based on the plurality of tiles; a predictor that derives prediction samples for a current block included in one tile among the plurality of tiles and generates prediction information for the current block based on the prediction samples; and an entropy encoder that encodes image information including the partition information for the current picture and the prediction information for the current block, wherein the partition information for the current picture includes at least one of flag information on whether the current picture is divided into MCTSs (motion constrained tile sets), count information on the number of MCTSs in the current picture, location information of the tile located at the top-left for each of the MCTSs, or location information of the tile located at the bottom-right for each of the MCTSs.
[14] According to another embodiment of the present disclosure, a decoder-readable storage medium storing image information encoded by an image encoding method is provided. The encoding method according to the embodiment includes: dividing a current picture into a plurality of tiles; generating partition information for the current picture based on the plurality of tiles; deriving prediction samples for a current block included in one tile among the plurality of tiles; generating prediction information for the current block based on the prediction samples; and encoding image information including the partition information for the current picture and the prediction information for the current block, wherein the partition information for the current picture includes at least one of flag information on whether the current picture is divided into MCTSs (motion constrained tile sets), count information on the number of MCTSs in the current picture, location information of the tile located at the top-left for each of the MCTSs, or location information of the tile located at the bottom-right for each of the MCTSs.
Effects of the invention
[15] According to this disclosure, overall image/video compression efficiency can be increased.
[16] According to this disclosure, the efficiency of picture partitioning can be increased.
[17] According to this disclosure, the flexibility of picture partitioning can be increased based on partition information for the current picture.
[18] According to this disclosure, by dividing the current picture based on at least one of flag information on whether the current picture is divided into MCTSs (motion constrained tile sets), count information on the number of MCTSs in the current picture, location information of the tile located at the top-left for each of the MCTSs, or location information of the tile located at the bottom-right for each of the MCTSs, the efficiency of signaling for picture partitioning can be increased.
Brief description of the drawings
[19] FIG. 1 schematically shows an example of a video/image coding system to which this disclosure can be applied.
[20] FIG. 2 is a diagram schematically describing the configuration of a video/image encoding apparatus to which this disclosure can be applied.
[21] FIG. 3 is a diagram schematically describing the configuration of a video/image decoding apparatus to which this disclosure can be applied.
[22] FIG. 4 exemplarily shows a hierarchical structure for coded data.
[23] FIG. 5 is a diagram showing an example of partitioning a picture.
[24] FIG. 6 is a flowchart illustrating a tile and/or tile group-based picture encoding procedure according to an embodiment.
[25] FIG. 7 is a flowchart illustrating a tile and/or tile group-based picture decoding procedure according to an embodiment.
[26] FIG. 8 is a diagram showing an example of partitioning a picture into a plurality of tiles.
[27] FIG. 9 is a block diagram showing the configuration of an encoding apparatus according to an embodiment.
[28] FIG. 10 is a block diagram showing the configuration of a decoding apparatus according to an embodiment.
[29] FIG. 11 is a diagram showing an example of tile and tile group units constituting a current picture.
[30] FIG. 12 is a diagram schematically showing an example of a signaling structure of tile group information.
[31] FIG. 13 is a diagram showing an example of a picture in a video conference video program.
[32] FIG. 14 is a diagram showing an example of partitioning a picture into tiles or tile groups in a video conference video program.
[33] FIG. 15 is a diagram showing an example of partitioning a picture into tiles or tile groups based on MCTS (Motion Constrained Tile Set).
[34] FIG. 16 is a diagram showing an example of dividing a picture based on an ROI region.
[35] FIG. 17 is a diagram showing an example of partitioning a picture into a plurality of tiles.
[36] FIG. 18 is a flowchart showing the operation of a decoding apparatus according to an embodiment.
[37] FIG. 19 is a block diagram showing the configuration of a decoding apparatus according to an embodiment.
[38] FIG. 20 is a flowchart showing the operation of an encoding apparatus according to an embodiment.
[39] FIG. 21 is a block diagram showing the configuration of an encoding apparatus according to an embodiment.
[40] FIG. 22 shows an example of a content streaming system to which the disclosure of this document can be applied.
Modes for carrying out the invention
[41] Since this disclosure can be modified in various ways and can have various embodiments, specific embodiments are illustrated in the drawings and described in detail. However, this is not intended to limit this disclosure to the specific embodiments. The terms used in this specification are only used to describe specific embodiments and are not intended to limit the technical idea of this disclosure. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this specification, terms such as "include" or "have" are intended to designate the existence of features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and should be understood as not precluding in advance the existence or possibility of addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
[42] Meanwhile, each component in the drawings described in this disclosure is shown independently for convenience of description of the different characteristic functions, and this does not mean that each component is implemented as separate hardware or separate software. For example, two or more of the components may be combined to form one component, or one component may be divided into a plurality of components. Embodiments in which the components are integrated and/or separated are also included in the scope of this disclosure, as long as they do not depart from the essence of this disclosure.
[43] In this specification, "A or B" may mean "only A", "only B", or "both A and B". In other words, "A or B" in this specification may be interpreted as "A and/or B". For example, in this specification, "A, B or C" may mean "only A", "only B", "only C", or "any combination of A, B and C".
[44] A slash (/) or comma used in this specification may mean "and/or". For example, "A/B" may mean "A and/or B". Accordingly, "A/B" may mean "only A", "only B", or "both A and B". For example, "A, B, C" may mean "A, B or C".
[45] In this specification, "at least one of A or B" or "at least one of A and/or B" may be interpreted the same as "at least one of A and B".
[46] Also, in this specification, "at least one of A, B and C" may mean "only A", "only B", "only C", or "any combination of A, B and C". In addition, "at least one of A, B or C" or "at least one of A, B and/or C" may mean "at least one of A, B and C".
[47] In addition, parentheses used in this specification may mean "for example". Specifically, when "prediction (intra prediction)" is indicated, "intra prediction" may be proposed as an example of "prediction". In other words, "prediction" in this specification is not limited to "intra prediction", and "intra prediction" may be proposed as an example of "prediction". Also, even when "prediction (i.e., intra prediction)" is indicated, "intra prediction" may be proposed as an example of "prediction".
[48] Technical features that are individually described within a single drawing in this specification may be implemented individually or simultaneously.
[49] Hereinafter, preferred embodiments of the present disclosure will be described in more detail with reference to the accompanying drawings. Hereinafter, the same reference numerals are used for the same components in the drawings, and redundant descriptions of the same components may be omitted.
[50] FIG. 1 schematically shows an example of a video/image coding system to which the present disclosure can be applied.
[51] Referring to FIG. 1, a video/image coding system may include a first device (source device) and a second device (receive device). The source device may transmit encoded video/image information or data to the receive device in the form of a file or streaming via a digital storage medium or a network.
[52] The source device may include a video source, an encoding apparatus, and a transmitter. The receive device may include a receiver, a decoding apparatus, and a renderer. The encoding apparatus may be called a video/image encoding apparatus, and the decoding apparatus may be called a video/image decoding apparatus. The transmitter may be included in the encoding apparatus. The receiver may be included in the decoding apparatus. The renderer may include a display unit, and the display unit may be configured as a separate device or an external component.
[53] The video source may acquire a video/image through a process of capturing, synthesizing, or generating a video/image. The video source may include a video/image capture device and/or a video/image generation device. The video/image capture device may include, for example, one or more cameras and a video/image archive containing previously captured video/images. The video/image generation device may include, for example, a computer, a tablet, and a smartphone, and may (electronically) generate a video/image. For example, a virtual video/image may be generated through a computer or the like, in which case the video/image capture process may be replaced by a process of generating related data.
[54] The encoding apparatus may encode the input video/image. The encoding apparatus may perform a series of procedures such as prediction, transform, and quantization for compression and coding efficiency. The encoded data (encoded video/image information) may be output in the form of a bitstream.
[55] The transmitter may transmit the encoded video/image information or data output in the form of a bitstream to the receiver of the receive device in the form of a file or streaming via a digital storage medium or a network. The digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD. The transmitter may include an element for generating a media file through a predetermined file format and may include an element for transmission through a broadcast/communication network. The receiver may receive/extract the bitstream and transmit it to the decoding apparatus.
[56] The decoding apparatus may decode the video/image by performing a series of procedures such as dequantization, inverse transform, and prediction corresponding to the operation of the encoding apparatus.
[57] The renderer may render the decoded video/image. The rendered video/image may be displayed through the display unit.
[58] This document relates to video/image coding. For example, the methods/embodiments disclosed in this document may be applied to methods disclosed in the VVC (versatile video coding) standard, the EVC (essential video coding) standard, the AV1 (AOMedia Video 1) standard, the AVS2 (2nd generation of audio video coding standard), or a next-generation video/image coding standard (e.g., H.267 or H.268).
[59] This document presents various embodiments of video/image coding, and the embodiments may be performed in combination with each other unless otherwise stated.
[60] In this document, a video may mean a set of a series of images over time. A picture generally means a unit representing one image in a specific time period, and a slice/tile is a unit constituting a part of a picture in coding. A slice/tile may include one or more CTUs (coding tree units). One picture may consist of one or more slices/tiles.
[61] A tile is a rectangular region of CTUs within a particular tile column and a particular tile row in a picture. The tile column is a rectangular region of CTUs having a height equal to the height of the picture and a width specified by syntax elements in the picture parameter set. The tile row is a rectangular region of CTUs having a height specified by syntax elements in the picture parameter set and a width equal to the width of the picture. A tile scan is a specific sequential ordering of CTUs partitioning a picture, in which the CTUs are ordered consecutively in CTU raster scan within a tile, whereas tiles in a picture are ordered consecutively in a raster scan of the tiles of the picture. A slice may include multiple complete tiles or multiple consecutive CTU rows within one tile of a picture, which may be contained in one NAL unit. In this document, tile group and slice may be used interchangeably. For example, in this document, a tile group/tile group header may be called a slice/slice header.
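Purely as a non-limiting illustration of the tile scan ordering just described, the following Python sketch enumerates CTU addresses so that CTUs follow a raster scan within each tile while tiles follow a raster scan over the picture. The function name and the representation of tile boundaries (CTU-unit boundary lists, such as would be derived from picture parameter set syntax) are hypothetical and not part of any standard.

    # A minimal sketch of the tile scan described above: CTUs in raster scan
    # within a tile, tiles in raster scan within the picture. Tile boundaries
    # (in CTU units) are assumed inputs; names are illustrative only.
    def tile_scan_order(col_bounds, row_bounds):
        """col_bounds/row_bounds: CTU-unit tile boundaries, e.g. [0, 2, 4]
        for a 4-CTU-wide picture split into two 2-CTU-wide tile columns.
        Returns raster-scan CTU addresses in tile scan order."""
        pic_w = col_bounds[-1]  # picture width in CTUs
        order = []
        for r in range(len(row_bounds) - 1):            # tile rows, top to bottom
            for c in range(len(col_bounds) - 1):        # tile columns, left to right
                for y in range(row_bounds[r], row_bounds[r + 1]):      # CTU rows in tile
                    for x in range(col_bounds[c], col_bounds[c + 1]):  # CTU cols in tile
                        order.append(y * pic_w + x)     # raster-scan CTU address
        return order

    # A 4x2-CTU picture with two tile columns of width 2 and one tile row:
    # the scan visits CTUs 0,1,4,5 (left tile) then 2,3,6,7 (right tile).
    print(tile_scan_order([0, 2, 4], [0, 2]))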
[62] Meanwhile, one picture may be divided into two or more subpictures. A subpicture may be a rectangular region of one or more slices within a picture.
[63] A pixel or a pel may mean the smallest unit constituting one picture (or image). Also, 'sample' may be used as a term corresponding to a pixel. A sample may generally represent a pixel or a pixel value, may represent only a pixel/pixel value of a luma component, or may represent only a pixel/pixel value of a chroma component.
[64] A unit may represent a basic unit of image processing. A unit may include at least one of a specific region of a picture and information related to the region. One unit may include one luma block and two chroma (e.g., cb, cr) blocks. A unit may be used interchangeably with terms such as block or area in some cases. In a general case, an MxN block may include a set (or array) of samples (or a sample array) or transform coefficients consisting of M columns and N rows.
[65] FIG. 2 is a diagram schematically illustrating the configuration of a video/image encoding apparatus to which the present disclosure can be applied. Hereinafter, a video encoding apparatus may include an image encoding apparatus.
[66] Referring to FIG. 2, the encoding apparatus 200 may be configured to include an image partitioner 210, a predictor 220, a residual processor 230, an entropy encoder 240, an adder 250, a filter 260, and a memory 270. The predictor 220 may include an inter predictor 221 and an intra predictor 222. The residual processor 230 may include a transformer 232, a quantizer 233, a dequantizer 234, and an inverse transformer 235. The residual processor 230 may further include a subtractor 231. The adder 250 may be called a reconstructor or a reconstructed block generator. The above-described image partitioner 210, predictor 220, residual processor 230, entropy encoder 240, adder 250, and filter 260 may be configured by one or more hardware components (e.g., an encoder chipset or processor) according to an embodiment. In addition, the memory 270 may include a DPB (decoded picture buffer) and may be configured by a digital storage medium. The hardware component may further include the memory 270 as an internal/external component.
[67] The image partitioner 210 may partition an input image (or picture, frame) input to the encoding apparatus 200 into one or more processing units. As an example, the processing unit may be called a coding unit (CU). In this case, the coding unit may be recursively partitioned from a coding tree unit (CTU) or a largest coding unit (LCU) according to a QTBTTT (quad-tree binary-tree ternary-tree) structure. For example, one coding unit may be partitioned into a plurality of coding units of deeper depth based on a quad-tree structure, a binary-tree structure, and/or a ternary-tree structure. In this case, for example, the quad-tree structure may be applied first, and the binary-tree structure and/or the ternary-tree structure may be applied later. Alternatively, the binary-tree structure may be applied first. The coding procedure according to the present disclosure may be performed based on the final coding unit that is no longer partitioned. In this case, based on coding efficiency according to image characteristics and the like, the largest coding unit may be used directly as the final coding unit, or the coding unit may be recursively partitioned into coding units of deeper depth as needed, so that a coding unit of an optimal size may be used as the final coding unit. Here, the coding procedure may include procedures such as prediction, transform, and reconstruction described later. As another example, the processing unit may further include a prediction unit (PU) or a transform unit (TU). In this case, the prediction unit and the transform unit may each be split or partitioned from the above-described final coding unit. The prediction unit may be a unit of sample prediction, and the transform unit may be a unit for deriving transform coefficients and/or a unit for deriving a residual signal from the transform coefficients.
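As a non-limiting illustration of the recursive QTBTTT partitioning described above, the following Python sketch splits a block into quad, binary, or ternary children until a leaf (final coding unit) is reached. The split decision, which in a real encoder comes from rate-distortion search and in a decoder from signaled syntax, is abstracted into a caller-supplied function; all names and the 1:2:1 ternary ratio shown are illustrative assumptions.

    # Illustrative sketch of recursive QT/BT/TT partitioning; names are
    # hypothetical. choose_split(x, y, w, h) returns None (leaf) or one of
    # 'QT', 'BT_H', 'BT_V', 'TT_H', 'TT_V'.
    def partition(x, y, w, h, choose_split, out):
        mode = choose_split(x, y, w, h)
        if mode is None:
            out.append((x, y, w, h))                      # final coding unit
        elif mode == 'QT':                                # quad-tree: 4 equal squares
            for dx, dy in ((0, 0), (w // 2, 0), (0, h // 2), (w // 2, h // 2)):
                partition(x + dx, y + dy, w // 2, h // 2, choose_split, out)
        elif mode == 'BT_H':                              # binary split, horizontal
            partition(x, y, w, h // 2, choose_split, out)
            partition(x, y + h // 2, w, h // 2, choose_split, out)
        elif mode == 'BT_V':                              # binary split, vertical
            partition(x, y, w // 2, h, choose_split, out)
            partition(x + w // 2, y, w // 2, h, choose_split, out)
        elif mode == 'TT_H':                              # ternary 1:2:1, horizontal
            partition(x, y, w, h // 4, choose_split, out)
            partition(x, y + h // 4, w, h // 2, choose_split, out)
            partition(x, y + 3 * h // 4, w, h // 4, choose_split, out)
        elif mode == 'TT_V':                              # ternary 1:2:1, vertical
            partition(x, y, w // 4, h, choose_split, out)
            partition(x + w // 4, y, w // 2, h, choose_split, out)
            partition(x + 3 * w // 4, y, w // 4, h, choose_split, out)

    # Example: split a 128x128 CTU once by quad-tree, then stop.
    cus = []
    partition(0, 0, 128, 128, lambda x, y, w, h: 'QT' if w == 128 else None, cus)
    print(cus)   # four 64x64 final coding units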
[68] A unit may be used interchangeably with terms such as block or area in some cases. In a general case, an MxN block may represent a set of samples or transform coefficients consisting of M columns and N rows. A sample may generally represent a pixel or a pixel value, may represent only a pixel/pixel value of a luma component, or may represent only a pixel/pixel value of a chroma component. A sample may be used as a term corresponding one picture (or image) to a pixel or a pel.
[69] The encoding apparatus 200 may subtract a prediction signal (predicted block, prediction sample array) output from the inter predictor 221 or the intra predictor 222 from an input image signal (original block, original sample array) to generate a residual signal (residual block, residual sample array), and the generated residual signal is transmitted to the transformer 232. In this case, as illustrated, the unit that subtracts the prediction signal (prediction block, prediction sample array) from the input image signal (original block, original sample array) within the encoding apparatus 200 may be called a subtractor 231. The predictor may perform prediction on a block to be processed (hereinafter referred to as a current block) and generate a predicted block including prediction samples for the current block. The predictor may determine whether intra prediction or inter prediction is applied in units of the current block or CU. The predictor may generate various types of information related to prediction, such as prediction mode information, as described later in the description of each prediction mode, and transmit it to the entropy encoder 240. The information on prediction may be encoded by the entropy encoder 240 and output in the form of a bitstream.
[70] The intra predictor 222 may predict the current block by referring to samples in the current picture. The referenced samples may be located in the neighborhood of the current block or may be located apart from it, depending on the prediction mode. In intra prediction, the prediction modes may include a plurality of non-directional modes and a plurality of directional modes. The non-directional modes may include, for example, a DC mode and a planar mode. The directional modes may include, for example, 33 directional prediction modes or 65 directional prediction modes depending on the granularity of the prediction direction. However, this is an example, and more or fewer directional prediction modes may be used depending on the setting. The intra predictor 222 may determine the prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
[71] The inter predictor 221 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on a reference picture. In this case, in order to reduce the amount of motion information transmitted in the inter prediction mode, the motion information may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between the neighboring block and the current block. The motion information may include a motion vector and a reference picture index. The motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information. In the case of inter prediction, the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture. The reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different. The temporal neighboring block may be called a collocated reference block, a collocated CU (colCU), or the like, and the reference picture including the temporal neighboring block may be called a collocated picture (colPic). For example, the inter predictor 221 may construct a motion information candidate list based on the neighboring blocks and generate information indicating which candidate is used to derive the motion vector and/or reference picture index of the current block. Inter prediction may be performed based on various prediction modes. For example, in the case of the skip mode and the merge mode, the inter predictor 221 may use the motion information of the neighboring block as the motion information of the current block. In the case of the skip mode, unlike the merge mode, the residual signal may not be transmitted. In the case of the MVP (motion vector prediction) mode, the motion vector of the neighboring block is used as a motion vector predictor, and the motion vector of the current block may be indicated by signaling a motion vector difference.
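Purely as a non-limiting illustration of the MVP mechanism just described, the following Python sketch picks a motion vector predictor from a candidate list built from neighboring blocks (selected by a signaled index) and adds the signaled motion vector difference. The candidate values and names are illustrative assumptions, not taken from any standard.

    # A minimal sketch of MVP-mode motion vector derivation: mv = mvp + mvd.
    def derive_motion_vector(candidates, mvp_idx, mvd):
        """candidates: (mvx, mvy) predictors from spatial/temporal neighbors;
        mvp_idx: signaled predictor index; mvd: signaled difference."""
        mvp = candidates[mvp_idx]                   # selected motion vector predictor
        return (mvp[0] + mvd[0], mvp[1] + mvd[1])   # reconstructed motion vector

    # Neighboring blocks suggest two predictors; the bitstream selects
    # index 1 and carries the difference (-2, 3):
    print(derive_motion_vector([(4, 0), (6, -1)], 1, (-2, 3)))  # (4, 2)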
[72] The predictor 220 may generate a prediction signal based on various prediction methods described later. For example, the predictor may apply intra prediction or inter prediction for prediction of one block, and may also apply intra prediction and inter prediction simultaneously. This may be called combined inter and intra prediction (CIIP). In addition, the predictor may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of a block. The IBC prediction mode or the palette mode may be used for content image/video coding of games and the like, such as SCC (screen content coding). IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC may use at least one of the inter prediction techniques described in this document. The palette mode may be regarded as an example of intra coding or intra prediction. When the palette mode is applied, sample values in a picture may be signaled based on information about a palette table and a palette index.
[73] The prediction signal generated through the predictor (including the inter predictor 221 and/or the intra predictor 222) may be used to generate a reconstructed signal or may be used to generate a residual signal. The transformer 232 may generate transform coefficients by applying a transform technique to the residual signal. For example, the transform technique may include at least one of DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), KLT (Karhunen-Loeve Transform), GBT (Graph-Based Transform), and CNT (Conditionally Non-linear Transform). Here, GBT means a transform obtained from a graph when relationship information between pixels is expressed as the graph. CNT means a transform that generates a prediction signal using all previously reconstructed pixels and is obtained based on it. In addition, the transform process may be applied to square pixel blocks of the same size, or may be applied to blocks of variable size that are not square.
[74] The quantizer 233 quantizes the transform coefficients and transmits them to the entropy encoder 240, and the entropy encoder 240 may encode the quantized signal (information on the quantized transform coefficients) and output it as a bitstream. The information on the quantized transform coefficients may be called residual information. The quantizer 233 may rearrange the block-form quantized transform coefficients into a one-dimensional vector form based on a coefficient scan order, and may generate the information on the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form. The entropy encoder 240 may perform various encoding methods such as, for example, exponential Golomb, CAVLC (context-adaptive variable length coding), and CABAC (context-adaptive binary arithmetic coding). The entropy encoder 240 may also encode, together or separately, information necessary for video/image reconstruction (e.g., values of syntax elements) in addition to the quantized transform coefficients. The encoded information (e.g., encoded video/image information) may be transmitted or stored in units of NAL (network abstraction layer) units in the form of a bitstream. The video/image information may further include information on various parameter sets such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS). In addition, the video/image information may further include general constraint information. In this document, information and/or syntax elements transmitted/signaled from the encoding apparatus to the decoding apparatus may be included in the video/image information. The video/image information may be encoded through the above-described encoding procedure and included in the bitstream. The bitstream may be transmitted through a network or may be stored in a digital storage medium. Here, the network may include a broadcast network and/or a communication network, and the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD. A transmitter (not shown) that transmits the signal output from the entropy encoder 240 and/or a storage (not shown) that stores it may be configured as internal/external elements of the encoding apparatus 200, or the transmitter may be included in the entropy encoder 240.
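As a non-limiting illustration of rearranging a block of quantized transform coefficients into a one-dimensional vector along a scan order before entropy coding, the following Python sketch uses an up-right diagonal scan as one plausible order; the actual scan order depends on the codec configuration, and the function name is hypothetical.

    # Illustrative sketch: 2-D quantized coefficients flattened to 1-D
    # along an up-right diagonal scan (one plausible coefficient scan).
    def diagonal_scan(block):
        """block: 2-D list (rows) of quantized coefficients.
        Returns a 1-D list in diagonal scan order."""
        h, w = len(block), len(block[0])
        out = []
        for s in range(h + w - 1):                 # anti-diagonals, top-left first
            for y in range(min(s, h - 1), -1, -1):
                x = s - y
                if x < w:
                    out.append(block[y][x])
        return out

    blk = [[9, 4, 0, 0],
           [3, 1, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
    print(diagonal_scan(blk))   # [9, 3, 4, 0, 1, 0, ...] - low frequencies first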
[75] The quantized transform coefficients output from the quantizer 233 may be used to generate a prediction signal. For example, a residual signal (residual block or residual samples) may be reconstructed by applying dequantization and inverse transform to the quantized transform coefficients through the dequantizer 234 and the inverse transformer 235. The adder 250 may add the reconstructed residual signal to the prediction signal output from the inter predictor 221 or the intra predictor 222, so that a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) may be generated. When there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as the reconstructed block. The adder 250 may be called a reconstructor or a reconstructed block generator. The generated reconstructed signal may be used for intra prediction of the next block to be processed in the current picture, and may also be used for inter prediction of the next picture through filtering as described later.
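Purely as an illustration of the reconstruction step just described, the following Python sketch sums prediction samples and reconstructed residual samples, with clipping to the valid 8-bit sample range shown as an assumption. When the skip mode leaves no residual, the residual is simply all zeros and the predicted block becomes the reconstructed block.

    # A small sketch of reconstruction = prediction + residual, clipped.
    def reconstruct(pred, resid, bit_depth=8):
        max_val = (1 << bit_depth) - 1
        return [[min(max(p + r, 0), max_val)        # clip to [0, 2^bitdepth - 1]
                 for p, r in zip(prow, rrow)]
                for prow, rrow in zip(pred, resid)]

    pred  = [[120, 121], [119, 118]]
    resid = [[  3,  -2], [  0, 140]]                # oversized residual gets clipped
    print(reconstruct(pred, resid))                 # [[123, 119], [119, 255]]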
[76] Meanwhile, LMCS (luma mapping with chroma scaling) may be applied during picture encoding and/or reconstruction.
[77] The filter 260 may improve subjective/objective image quality by applying filtering to the reconstructed signal. For example, the filter 260 may apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and may store the modified reconstructed picture in the memory 270, specifically in the DPB of the memory 270. The various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, and bilateral filter. The filter 260 may generate various types of information on filtering, as described later in the description of each filtering method, and transmit it to the entropy encoder 240. The information on filtering may be encoded by the entropy encoder 240 and output in the form of a bitstream.
[78] The modified reconstructed picture transmitted to the memory 270 may be used as a reference picture in the inter predictor 221. Through this, when inter prediction is applied, the encoding apparatus may avoid a prediction mismatch between the encoding apparatus and the decoding apparatus, and may also improve coding efficiency.
[79] The DPB of the memory 270 may store the modified reconstructed picture for use as a reference picture in the inter predictor 221. The memory 270 may store motion information of a block from which motion information in the current picture is derived (or encoded) and/or motion information of blocks in an already reconstructed picture. The stored motion information may be transmitted to the inter predictor 221 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block. The memory 270 may store reconstructed samples of reconstructed blocks in the current picture and may transmit them to the intra predictor 222.
[80] FIG. 3 is a diagram schematically illustrating the configuration of a video/image decoding apparatus to which the present disclosure can be applied.
[81] Referring to FIG. 3, the decoding apparatus 300 may be configured to include an entropy decoder 310, a residual processor 320, a predictor 330, an adder 340, a filter 350, and a memory 360. The predictor 330 may include an intra predictor 331 and an inter predictor 332. The residual processor 320 may include a dequantizer 321 and an inverse transformer 322. The above-described entropy decoder 310, residual processor 320, predictor 330, adder 340, and filter 350 may be configured by one hardware component (e.g., a decoder chipset or processor) according to an embodiment. In addition, the memory 360 may include a DPB (decoded picture buffer) and may be configured by a digital storage medium. The hardware component may further include the memory 360 as an internal/external component.
[82] When a bitstream including video/image information is input, the decoding apparatus 300 may reconstruct an image in response to the process in which the video/image information is processed in the encoding apparatus of FIG. 2. For example, the decoding apparatus 300 may derive units/blocks based on block partitioning related information obtained from the bitstream. The decoding apparatus 300 may perform decoding using a processing unit applied in the encoding apparatus. Accordingly, the processing unit of decoding may be, for example, a coding unit, and the coding unit may be partitioned from a coding tree unit or a largest coding unit according to a quad-tree structure, a binary-tree structure, and/or a ternary-tree structure. One or more transform units may be derived from the coding unit. Then, the reconstructed image signal decoded and output through the decoding apparatus 300 may be reproduced through a reproduction apparatus.
[83] The decoding apparatus 300 may receive the signal output from the encoding apparatus of FIG. 2 in the form of a bitstream, and the received signal may be decoded through the entropy decoder 310. For example, the entropy decoder 310 may parse the bitstream to derive information (e.g., video/image information) necessary for image reconstruction (or picture reconstruction). The video/image information may further include information on various parameter sets such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS). In addition, the video/image information may further include general constraint information. The decoding apparatus may further decode the picture based on the information on the parameter set and/or the general constraint information. The signaled/received information and/or syntax elements described later in this document may be decoded through the decoding procedure and obtained from the bitstream. For example, the entropy decoder 310 may decode the information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and may output values of syntax elements necessary for image reconstruction and quantized values of transform coefficients for the residual. More specifically, the CABAC entropy decoding method receives a bin corresponding to each syntax element in the bitstream, determines a context model using the syntax element information to be decoded, the decoding information of the neighboring and decoding target block, or the information of the symbol/bin decoded in the previous step, predicts the occurrence probability of a bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element. In this case, the CABAC entropy decoding method may update the context model using the information of the decoded symbol/bin for the context model of the next symbol/bin after the context model is determined. Among the information decoded by the entropy decoder 310, information about prediction is provided to the predictor (the inter predictor 332 and the intra predictor 331), and the residual values on which entropy decoding has been performed by the entropy decoder 310, that is, the quantized transform coefficients and related parameter information, may be input to the residual processor 320. The residual processor 320 may derive a residual signal (residual block, residual samples, residual sample array). In addition, information about filtering among the information decoded by the entropy decoder 310 may be provided to the filter 350. Meanwhile, a receiver (not shown) that receives the signal output from the encoding apparatus may be further configured as an internal/external element of the decoding apparatus 300, or the receiver may be a component of the entropy decoder 310. Meanwhile, the decoding apparatus according to this document may be called a video/image/picture decoding apparatus, and the decoding apparatus may be divided into an information decoder (video/image/picture information decoder) and a sample decoder (video/image/picture sample decoder). The information decoder may include the entropy decoder 310, and the sample decoder may include at least one of the dequantizer 321, the inverse transformer 322, the adder 340, the filter 350, the memory 360, the inter predictor 332, and the intra predictor 331.
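Real CABAC uses table-driven probability states together with binary arithmetic decoding; purely as a non-limiting illustration of the adaptive context-model idea described above, the following Python sketch tracks, per context, a floating-point estimate of the probability that the next bin equals 1 and updates it with each decoded bin. The class name and the update rate are made-up assumptions, not the standard's state machine.

    # A highly simplified sketch of context-model adaptation (not real CABAC).
    class ContextModel:
        def __init__(self, p1=0.5):
            self.p1 = p1                      # estimated probability that bin == 1

        def update(self, bin_val, rate=1 / 16):
            # Move the estimate toward the value just decoded (exponential decay).
            target = 1.0 if bin_val else 0.0
            self.p1 += rate * (target - self.p1)

    ctx = ContextModel()
    for b in (1, 1, 1, 0, 1):                 # bins decoded with this context
        ctx.update(b)
    print(round(ctx.p1, 3))                   # estimate has drifted above 0.5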
[84] The dequantizer 321 may dequantize the quantized transform coefficients and output transform coefficients. The dequantizer 321 may rearrange the quantized transform coefficients in a two-dimensional block form. In this case, the rearrangement may be performed based on the coefficient scan order performed by the encoding apparatus. The dequantizer 321 may perform dequantization on the quantized transform coefficients using a quantization parameter (e.g., quantization step size information) and obtain the transform coefficients.
[85] The inverse transformer 322 inversely transforms the transform coefficients to obtain a residual signal (residual block, residual sample array).
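As a non-limiting illustration of paragraphs [84] and [85], the following Python sketch scales quantized levels back by a quantization step size (a simplified stand-in for deriving the step from the quantization parameter) and then applies a 1-D orthonormal inverse DCT, standing in for the real 2-D block transform. Function names and values are illustrative assumptions.

    # Illustrative sketch of dequantization followed by an inverse transform.
    import math

    def dequantize(levels, qstep):
        return [l * qstep for l in levels]        # coefficient = level * step size

    def inverse_dct(coeffs):
        n = len(coeffs)
        res = []
        for x in range(n):                        # orthonormal DCT-III (inverse of DCT-II)
            s = coeffs[0] / math.sqrt(n)
            for k in range(1, n):
                s += math.sqrt(2 / n) * coeffs[k] * math.cos(math.pi * (x + 0.5) * k / n)
            res.append(s)
        return res

    levels = [12, -3, 0, 1]                       # quantized transform levels
    residual = inverse_dct(dequantize(levels, qstep=2.5))
    print([round(r, 2) for r in residual])        # residual samples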
[86] The predictor may perform prediction on the current block and generate a predicted block including prediction samples for the current block. The predictor may determine whether intra prediction or inter prediction is applied to the current block based on the information about prediction output from the entropy decoder 310, and may determine a specific intra/inter prediction mode.
[87] The predictor 330 may generate a prediction signal based on various prediction methods described later. For example, the predictor may apply intra prediction or inter prediction for prediction of one block, and may also apply intra prediction and inter prediction simultaneously. This may be called combined inter and intra prediction (CIIP). In addition, the predictor may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of a block. The IBC prediction mode or the palette mode may be used for content image/video coding of games and the like, such as SCC (screen content coding). IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC may use at least one of the inter prediction techniques described in this document. The palette mode may be regarded as an example of intra coding or intra prediction. When the palette mode is applied, information about a palette table and a palette index may be included in the video/image information and signaled.
[88] The intra predictor 331 may predict the current block by referring to samples in the current picture. The referenced samples may be located in the neighborhood of the current block or may be located apart from it, depending on the prediction mode. In intra prediction, the prediction modes may include a plurality of non-directional modes and a plurality of directional modes. The intra predictor 331 may determine the prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
[89] The inter predictor 332 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on a reference picture. In this case, in order to reduce the amount of motion information transmitted in the inter prediction mode, the motion information may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between the neighboring block and the current block. The motion information may include a motion vector and a reference picture index. The motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information. In the case of inter prediction, the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture. For example, the inter predictor 332 may construct a motion information candidate list based on the neighboring blocks, and may derive the motion vector and/or reference picture index of the current block based on received candidate selection information. Inter prediction may be performed based on various prediction modes, and the information about prediction may include information indicating the mode of inter prediction for the current block.
[90] The adder 340 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the predictor (including the inter predictor 332 and/or the intra predictor 331). When there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as the reconstructed block.
[91] The adder 340 may be called a reconstructor or a reconstructed block generator. The generated reconstructed signal may be used for intra prediction of the next block to be processed in the current picture, may be output after filtering as described later, or may be used for inter prediction of the next picture.
[92] Meanwhile, LMCS (luma mapping with chroma scaling) may be applied in the picture decoding process.
[93] The filter 350 may improve subjective/objective image quality by applying filtering to the reconstructed signal. For example, the filter 350 may apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and may transmit the modified reconstructed picture to the memory 360, specifically to the DPB of the memory 360. The various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, and bilateral filter.
[94] The (modified) reconstructed picture stored in the DPB of the memory 360 may be used as a reference picture in the inter predictor 332. The memory 360 may store motion information of a block from which motion information in the current picture is derived (or decoded) and/or motion information of blocks in an already reconstructed picture. The stored motion information may be transmitted to the inter predictor 332 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block. The memory 360 may store reconstructed samples of reconstructed blocks in the current picture and may transmit them to the intra predictor 331.
[95] In this specification, the embodiments described for the filter 260, the inter predictor 221, and the intra predictor 222 of the encoding apparatus 200 may be applied identically or correspondingly to the filter 350, the inter predictor 332, and the intra predictor 331 of the decoding apparatus 300, respectively.
[96] As described above, in performing video coding, prediction is performed to increase compression efficiency. Through this, a predicted block including prediction samples for the current block, which is the block to be coded, can be generated. Here, the predicted block includes prediction samples in the spatial domain (or pixel domain). The predicted block is derived identically in the encoding apparatus and the decoding apparatus, and the encoding apparatus can increase image coding efficiency by signaling, to the decoding apparatus, information about the residual between the original block and the predicted block (residual information) rather than the original sample values of the original block themselves. The decoding apparatus may derive a residual block including residual samples based on the residual information, generate a reconstructed block including reconstructed samples by adding the residual block and the predicted block, and generate a reconstructed picture including the reconstructed blocks.
[97] The residual information may be generated through transform and quantization procedures. For example, the encoding apparatus may derive a residual block between the original block and the predicted block, perform a transform procedure on the residual samples (residual sample array) included in the residual block to derive transform coefficients, perform a quantization procedure on the transform coefficients to derive quantized transform coefficients, and signal the related residual information to the decoding apparatus (through a bitstream). Here, the residual information may include information such as value information of the quantized transform coefficients, position information, a transform technique, a transform kernel, and a quantization parameter. The decoding apparatus may perform a dequantization/inverse transform procedure based on the residual information and derive residual samples (or a residual block). The decoding apparatus may generate a reconstructed picture based on the predicted block and the residual block. For reference for inter prediction of a subsequent picture, the encoding apparatus may also dequantize/inverse transform the quantized transform coefficients to derive a residual block, and generate a reconstructed picture based on it.
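Purely as a non-limiting toy illustration of the residual path described in paragraphs [96] and [97], the following Python sketch forms residual = original - prediction at the encoder, quantizes it directly (the transform stage is omitted for brevity, and qstep is a simplified stand-in for the quantization parameter), and reconstructs prediction + dequantized residual at the decoder. Both sides derive the same reconstruction, which is why the encoder also runs the dequantization path for later inter prediction.

    # Toy end-to-end residual round trip (transform stage omitted).
    def encode_residual(orig, pred, qstep):
        return [round((o - p) / qstep) for o, p in zip(orig, pred)]   # quantized levels

    def decode_block(pred, levels, qstep):
        return [p + l * qstep for p, l in zip(pred, levels)]          # pred + residual

    orig = [100, 104, 97, 90]
    pred = [ 98, 100, 99, 93]
    levels = encode_residual(orig, pred, qstep=2)    # signaled residual information
    recon  = decode_block(pred, levels, qstep=2)
    print(levels, recon)   # lossy: recon approximates orig, identically on both ends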
[98] FIG. 4 exemplarily shows a hierarchical structure for coded data.
[99] Referring to FIG. 4, the coded data may be divided into a VCL (video coding layer) that deals with the video/image coding process itself, and a NAL (network abstraction layer) that sits between the VCL and a subsystem that stores and transmits the data of the coded video/image.
[100] The VCL may generate parameter sets corresponding to headers of sequences, pictures, and the like (picture parameter set (PPS), sequence parameter set (SPS), video parameter set (VPS), etc.) and SEI (supplemental enhancement information) messages additionally required for the video/image coding process. The SEI message is separated from the information about the video/image (slice data). The VCL containing the information about the video/image consists of slice data and a slice header. Meanwhile, the slice header may be referred to as a tile group header, and the slice data may be referred to as tile group data.
[101] In the NAL, a NAL unit may be generated by adding header information (a NAL unit header) to the raw byte sequence payload (RBSP) generated in the VCL. Here, the RBSP refers to the slice data, parameter sets, SEI messages, and the like generated in the VCL. The NAL unit header may include NAL unit type information specified according to the RBSP data included in the corresponding NAL unit.
[102] The NAL unit, which is the basic unit of the NAL, serves to map the coded image onto the bit sequences of lower systems such as a file format according to a predetermined standard, the Real-time Transport Protocol (RTP), and the Transport Stream (TS).
[103] As shown in the figure, NAL units may be divided into VCL NAL units and non-VCL NAL units according to the RBSP generated in the VCL. A VCL NAL unit may mean a NAL unit that contains information on the image (slice data), and a non-VCL NAL unit may mean a NAL unit that contains the information (a parameter set or an SEI message) needed to decode the image.
[104] The above-described VCL NAL units and non-VCL NAL units may be transmitted over a network with header information attached according to the data standard of the lower system. For example, the NAL units may be transformed into a data format of a predetermined standard, such as the H.266/VVC file format, RTP, or TS, and transmitted over various networks.
[105] As described above, the NAL unit type of a NAL unit may be specified according to the RBSP data structure included in that NAL unit, and information on the NAL unit type may be stored in the NAL unit header and signaled.
[106] For example, NAL units may be broadly classified into a VCL NAL unit type and a non-VCL NAL unit type depending on whether the NAL unit contains information on the image (slice data). VCL NAL unit types may be classified according to the property and kind of the picture included in the VCL NAL unit, and non-VCL NAL unit types may be classified according to the kind of the parameter set, etc.
[107] The following is an example of NAL unit types specified according to the kind of parameter set included in the non-VCL NAL unit type. The NAL unit type may be specified according to the kind of the parameter set, etc. For example, the NAL unit type may be specified as one of an Adaptation Parameter Set (APS) NAL unit, which is the type for NAL units including an APS; a Decoding Parameter Set (DPS) NAL unit, which is the type for NAL units including a DPS; a Video Parameter Set (VPS) NAL unit, which is the type for NAL units including a VPS; a Sequence Parameter Set (SPS) NAL unit, which is the type for NAL units including an SPS; and a Picture Parameter Set (PPS) NAL unit, which is the type for NAL units including a PPS.
[108] The above-described NAL unit types have syntax information for the NAL unit type, and the syntax information may be stored in the NAL unit header and signaled. For example, the syntax information may be nal_unit_type, and the NAL unit types may be specified by nal_unit_type values.
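For illustration, nal_unit_type can be read from the NAL unit header as sketched below; the two-byte layout used here (forbidden_zero_bit, 6-bit nal_unit_type, 6-bit layer id, 3-bit temporal id) is the HEVC-style layout and is an assumption made for illustration only, since the exact header layout is defined by the applicable standard.

  #include <stdint.h>

  /* Extract nal_unit_type from a two-byte NAL unit header, assuming an
     HEVC-style layout: a 1-bit forbidden_zero_bit followed by a 6-bit
     nal_unit_type in the first byte. */
  static inline int nal_unit_type_of(const uint8_t hdr[2]) {
      (void)hdr[1];                 /* layer id / temporal id, unused here */
      return (hdr[0] >> 1) & 0x3F;  /* 6-bit nal_unit_type */
  }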
[109] Meanwhile, as described above, one picture may include a plurality of slices, and one slice may include a slice header and slice data. In this case, one picture header may further be added for the plurality of slices (the set of slice headers and slice data) in the picture. The picture header (picture header syntax) may include information/parameters commonly applicable to the picture. The slice header (slice header syntax) may include information/parameters commonly applicable to the slice. The APS (APS syntax) or the PPS (PPS syntax) may include information/parameters commonly applicable to one or more slices or pictures. The SPS (SPS syntax) may include information/parameters commonly applicable to one or more sequences. The VPS (VPS syntax) may include information/parameters commonly applicable to multiple layers. The DPS (DPS syntax) may include information/parameters commonly applicable to the entire video. The DPS may include information/parameters related to the concatenation of coded video sequences (CVSs). In this document, high level syntax (HLS) may include at least one of the APS syntax, the PPS syntax, the SPS syntax, the VPS syntax, the DPS syntax, the picture header syntax, and the slice header syntax.
[110] In this document, the image/video information encoded by the encoding apparatus and signaled to the decoding apparatus in the form of a bitstream includes not only intra-picture partitioning-related information, intra/inter prediction information, residual information, in-loop filtering information, and the like, but may also include the information in the slice header, the information in the picture header, the information in the APS, the information in the PPS, the information in the SPS, the information in the VPS, and/or the information in the DPS. In addition, the image/video information may further include the information of the NAL unit header.
[111] FIG. 5 is a diagram showing an example of partitioning a picture.
[112] Pictures may be divided into coding tree units (CTUs), and a CTU may correspond to a coding tree block (CTB). A CTU may include a coding tree block of luma samples and two coding tree blocks of corresponding chroma samples. Meanwhile, the maximum allowed size of a CTU for coding and prediction may differ from the maximum allowed size of a CTU for transform.
[113] A tile may correspond to a series of CTUs covering a rectangular region of a picture, and a picture may be divided into one or more tile rows and one or more tile columns.
[114] Meanwhile, a slice may consist of an integer number of complete tiles or an integer number of consecutive complete CTU rows. Here, two slice modes may be supported, including a raster-scan slice mode and a rectangular slice mode.
[115] In the raster-scan slice mode, a slice may contain a sequence of complete tiles in the tile raster scan of the picture. In the rectangular slice mode, a slice may contain either a number of complete tiles that collectively form a rectangular region of the picture, or a number of consecutive CTU rows within one tile that collectively form a rectangular region of the picture. Tiles within a rectangular slice may be scanned in tile raster scan order within the rectangular region corresponding to that slice.
[116] FIG. 5(a) is a diagram showing an example in which a picture is divided into tiles and raster-scan slices; for example, the picture may be divided into 12 tiles and 3 raster-scan slices.
[117] Further, FIG. 5(b) is a diagram showing an example in which a picture is divided into tiles and rectangular slices; for example, the picture may be divided into 24 tiles (6 tile columns and 4 tile rows) and 9 rectangular slices.
[118] Further, FIG. 5(c) is a diagram showing an example in which a picture is divided into tiles and rectangular slices; for example, the picture may be divided into 4 tiles (2 tile columns and 2 tile rows) and 4 rectangular slices.
[119] FIG. 6 is a flowchart illustrating a picture encoding procedure based on tiles and/or tile groups according to an embodiment.
[120] In an embodiment, the picture partitioning (S600) and the generation of information on the tiles/tile groups (S610) may be performed by the image partitioner (210) of the encoding apparatus, and the encoding of the video/image information including the information on the tiles/tile groups (S620) may be performed by the entropy encoder (240) of the encoding apparatus.
[121] The encoding apparatus according to an embodiment may perform picture partitioning to encode the input picture (S600). The picture may include one or more tiles/tile groups. The encoding apparatus may partition the picture into various forms in consideration of the image characteristics and coding efficiency of the picture, and may generate information indicating the partitioning form with the optimal coding efficiency and signal it to the decoding apparatus.
[122] The encoding apparatus according to an embodiment may determine the tiles/tile groups applied to the picture and generate the information on the tiles/tile groups (S610). The information on the tiles/tile groups may include information indicating the structure of the tiles/tile groups for the picture. The information on the tiles/tile groups may be signaled through various parameter sets and/or a tile group header, as described later. Specific examples are described below.
[123] The encoding apparatus according to an embodiment may encode the video/image information including the information on the tiles/tile groups and output it in the form of a bitstream (S620). The bitstream may be delivered to the decoding apparatus through a digital storage medium or a network. The video/image information may include the HLS and/or tile group header syntax described in this document. In addition, the video/image information may further include the above-described prediction information, residual information, (in-loop) filtering information, and the like. For example, the encoding apparatus may reconstruct the current picture, apply in-loop filtering, encode parameters related to the in-loop filtering, and output them in the form of a bitstream.
[124] FIG. 7 is a flowchart illustrating a picture decoding procedure based on tiles and/or tile groups according to an embodiment.
[125] In an embodiment, the steps of obtaining information on the tiles/tile groups from the bitstream (S700) and deriving the tiles/tile groups in the picture (S710) may be performed by the entropy decoder (310) of the decoding apparatus, and the step of performing picture decoding based on the tiles/tile groups (S720) may be performed by a sample decoder of the decoding apparatus.
[126] The decoding apparatus according to an embodiment may obtain the information on the tiles/tile groups from the received bitstream (S700). The information on the tiles/tile groups may be obtained through various parameter sets and/or a tile group header, as described later. Specific examples are described below.
[127] The decoding apparatus according to an embodiment may derive the tiles/tile groups in the current picture based on the information on the tiles/tile groups (S710).
[128] The decoding apparatus according to an embodiment may decode the current picture based on the tiles/tile groups (S720). For example, the decoding apparatus may derive the CTUs/CUs located in a tile and, based on them, perform inter/intra prediction, residual processing, reconstructed block (picture) generation, and/or in-loop filtering procedures. In this case, for example, the decoding apparatus may initialize the context model/information on a per-tile/tile-group basis. In addition, when a neighboring block or neighboring sample referred to during inter/intra prediction is located in a tile different from the current tile in which the current block is located, the decoding apparatus may treat that neighboring block or neighboring sample as unavailable.
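The availability rule in paragraph [128] can be sketched as follows (an illustrative helper under the assumption that TileId[] and CtbAddrRsToTs[] are derived as in Table 8 below; the function name is hypothetical):

  /* A neighboring CTB is treated as unavailable for intra/inter reference
     when it lies outside the picture or belongs to a different tile than
     the current CTB. */
  int neighbor_ctb_available(int curCtbAddrTs, int nbX, int nbY,
                             int picWInCtbs, int picHInCtbs,
                             const int *CtbAddrRsToTs, const int *TileId) {
      if (nbX < 0 || nbY < 0 || nbX >= picWInCtbs || nbY >= picHInCtbs)
          return 0;                                       /* outside the picture */
      int nbCtbAddrTs = CtbAddrRsToTs[nbY * picWInCtbs + nbX];
      return TileId[nbCtbAddrTs] == TileId[curCtbAddrTs]; /* same tile only */
  }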
[129] FIG. 8 is a diagram showing an example of partitioning a picture into a plurality of tiles.
[130] In an embodiment, tiles may mean regions within a picture defined by a set of vertical and/or horizontal boundaries that divide the picture into a plurality of rectangles. FIG. 8 shows an example in which one picture (800) is divided into a plurality of tiles based on a plurality of column boundaries (810) and row boundaries (820). In FIG. 8, the first 32 largest coding units (or coding tree units (CTUs)) are numbered and shown.
[131] In an embodiment, each tile may include an integer number of CTUs processed in raster scan order within that tile. The plurality of tiles in the picture, including each such tile, may likewise be processed in raster scan order within the picture. The tiles may be grouped to form tile groups, and the tiles within a single tile group may be raster-scanned. The division of a picture into tiles may be defined based on the syntax and semantics of the picture parameter set (PPS).
[132] In an embodiment, the information on the tiles derived from the PPS may be used to check (or read) the following items. First, it may be checked whether one tile or more than one tile exists in the picture. When more than one tile exists, it may be checked whether the tiles are uniformly distributed, the dimensions of the tiles may be checked, and it may be checked whether the loop filter is enabled.
[133] In an embodiment, the PPS may first signal the syntax element single_tile_in_pic_flag. The single_tile_in_pic_flag may indicate whether only one tile exists in the picture or a plurality of tiles exist in the picture. When a plurality of tiles exist in the picture, the decoding apparatus may parse information on the number of tile rows and tile columns using the syntax elements num_tile_columns_minus1 and num_tile_rows_minus1. The syntax elements num_tile_columns_minus1 and num_tile_rows_minus1 may specify the process of dividing the picture into tile rows and columns. The heights of the tile rows and the widths of the tile columns may be expressed in terms of CTBs (that is, in units of CTBs).
[134] In an embodiment, an additional flag may be parsed to check whether the tiles in the picture are uniformly spaced. When the tiles in the picture are not uniformly spaced, the number of CTBs per tile may be explicitly signaled for the boundaries of each tile row and column (that is, the number of CTBs in each tile row and the number of CTBs in each tile column may be signaled). If the tiles are uniformly spaced, the tiles may have the same width and height as one another.
[135] In an embodiment, another flag (for example, the syntax element loop_filter_across_tiles_enabled_flag) may be parsed to determine whether the loop filter is enabled across tile boundaries.
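Paragraphs [133] to [135] amount to the following parsing flow (a sketch under the assumption that the element names and ordering follow the draft syntax summarized in Table 1 below; read_u1() and read_ue() are hypothetical bit-reader helpers for a 1-bit flag and an unsigned Exp-Golomb value, the latter sketched further below):

  extern int read_u1(void);   /* hypothetical: reads one bit */
  extern int read_ue(void);   /* hypothetical: reads ue(v) */

  int tile_column_width_minus1[64], tile_row_height_minus1[64];

  void parse_pps_tile_info(void) {
      if (!read_u1()) {                                 /* single_tile_in_pic_flag */
          int num_tile_columns_minus1 = read_ue();
          int num_tile_rows_minus1    = read_ue();
          int uniform_tile_spacing_flag = read_u1();
          if (!uniform_tile_spacing_flag) {
              /* explicit widths/heights in CTBs; the last column/row is inferred */
              for (int i = 0; i < num_tile_columns_minus1; i++)
                  tile_column_width_minus1[i] = read_ue();
              for (int j = 0; j < num_tile_rows_minus1; j++)
                  tile_row_height_minus1[j] = read_ue();
          }
          int loop_filter_across_tiles_enabled_flag = read_u1();
          (void)loop_filter_across_tiles_enabled_flag;
      }
  }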
[136] Table 1 below summarizes an example of the main information on tiles that can be derived by parsing the PPS. Table 1 may represent the PPS RBSP syntax.
[137] [Table 1]
[138] Table 2 below shows an example of semantics for the syntax elements described in Table 1 above.
[139] [Table 2]
[141] FIG. 9 is a block diagram illustrating a configuration of an encoding apparatus according to an embodiment, and FIG. 10 is a block diagram illustrating a configuration of a decoding apparatus according to an embodiment.
[142] FIG. 9 shows an example of a block diagram of an encoding apparatus. The encoding apparatus (900) shown in FIG. 9 includes a partitioning module (910) and an encoding module (920). The partitioning module (910) may perform the same and/or similar operations as the image partitioner (210) of the encoding apparatus shown in FIG. 2, and the encoding module (920) may perform the same and/or similar operations as the entropy encoder (240) of the encoding apparatus shown in FIG. 2. The input video may be divided by the partitioning module (910) and then encoded by the encoding module (920). After encoding, the encoded video may be output from the encoding apparatus (900).
[143] FIG. 10 shows an example of a block diagram of a decoding apparatus. The decoding apparatus (1000) shown in FIG. 10 includes a decoding module (1010) and a deblocking filter (1020). The decoding module (1010) may perform the same and/or similar operations as the entropy decoder (310) of the decoding apparatus shown in FIG. 3, and the deblocking filter (1020) may perform the same and/or similar operations as the filter (350) of the decoding apparatus shown in FIG. 3. The decoding module (1010) may decode the input received from the encoding apparatus (900) to derive the information on the tiles. A processing unit may be determined based on the decoded information, and the deblocking filter (1020) may process the processing unit by applying an in-loop deblocking filter. The in-loop filtering may be applied to remove coding artifacts generated during the partitioning process. The in-loop filtering operations may include an adaptive loop filter (ALF), a deblocking filter (DF), a sample adaptive offset (SAO), and the like. Thereafter, the decoded picture may be output.
[144] Examples of the descriptors that specify the parsing process of each syntax element are disclosed in Table 3 below.
[145] [Table 3]
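Among these descriptors, ue(v) (the unsigned Exp-Golomb code) is the one most of the tile syntax elements above rely on; the following is a minimal sketch of its decoding process, with read_bit() as a hypothetical one-bit reader.

  extern int read_bit(void);  /* hypothetical: returns the next bit, 0 or 1 */

  /* ue(v): count leading zero bits, then read as many suffix bits.
     Bit patterns 1, 010, 011, 00100, ... decode to 0, 1, 2, 3, ... */
  unsigned read_ue(void) {
      int leadingZeroBits = 0;
      while (read_bit() == 0)
          leadingZeroBits++;
      unsigned suffix = 0;
      for (int i = 0; i < leadingZeroBits; i++)
          suffix = (suffix << 1) | (unsigned)read_bit();
      return (1u << leadingZeroBits) - 1u + suffix;
  }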
[147] FIG. 11 is a diagram illustrating an example of tile and tile group units constituting a current picture.
[148] As described above, tiles may be grouped to form tile groups. FIG. 11 shows an example in which one picture is divided into tiles and tile groups. In FIG. 11, the picture includes 9 tiles and 3 tile groups. Each tile group may be independently coded.
[149] FIG. 12 is a diagram schematically showing an example of the signaling structure of tile group information.
[150] Within a coded video sequence (CVS), each tile group may include a tile group header. Tile groups may carry a meaning similar to slice groups. Each tile group may be independently coded. A tile group may include one or more tiles. The tile group header may refer to a PPS, and the PPS may subsequently refer to a sequence parameter set (SPS).
[151] In FIG. 12, a tile group header may have the PPS index of the PPS to which the tile group header refers. The PPS may in turn refer to the SPS.
[152] In addition to the PPS index, the tile group header according to an embodiment may determine the following information. First, when more than one tile exists per picture, the tile group address and the number of tiles in the tile group may be determined. Next, the tile group type, such as intra/predictive/bi-directional, may be determined. Next, the least significant bits (LSBs) of the picture order count (POC) may be determined. Next, when more than one tile exists in a picture, the offset length and the entry points into the tiles may be determined.
[153] Table 4 below shows an example of the syntax of the tile group header. In Table 4, the tile group header (tile_group_header) may be replaced with a slice header.
[154] [Table 4]
[155] Table 5 below shows an example of English semantics for the syntax of the tile group header.
[156] [Table 5]
When present, the value of the tile group header syntax elements tile_group_pic_parameter_set_id and tile_group_pic_order_cnt_lsb shall be the same in all tile group headers of a coded picture.
tile_group_pic_parameter_set_id specifies the value of pps_pic_parameter_set_id for the PPS in use. The value of tile_group_pic_parameter_set_id shall be in the range of 0 to 63, inclusive.
It is a requirement of bitstream conformance that the value of TemporalId of the current picture shall be greater than or equal to the value of TemporalId of the PPS that has pps_pic_parameter_set_id equal to tile_group_pic_parameter_set_id.
tile_group_address specifies the tile address of the first tile in the tile group, where the tile address is the tile ID. The length of tile_group_address is Ceil( Log2( NumTilesInPic ) ) bits. The value of tile_group_address shall be in the range of 0 to NumTilesInPic - 1, inclusive, and the value of tile_group_address shall not be equal to the value of tile_group_address of any other coded tile group NAL unit of the same coded picture. When tile_group_address is not present, it is inferred to be equal to 0.
[157] num_tiles_in_tile_group_minus1 plus 1 specifies the number of tiles in the tile group. The value of num_tiles_in_tile_group_minus1 shall be in the range of 0 to NumTilesInPic - 1, inclusive. When not present, the value of num_tiles_in_tile_group_minus1 is inferred to be equal to 0.
tile_group_type specifies the coding type of the tile group according to Table 6. When nal_unit_type is equal to IRAP_NUT, i.e., the picture is an IRAP picture, tile_group_type shall be equal to 2.
tile_group_pic_order_cnt_lsb specifies the picture order count modulo MaxPicOrderCntLsb for the current picture. The length of the tile_group_pic_order_cnt_lsb syntax element is log2_max_pic_order_cnt_lsb_minus4 + 4 bits. The value of tile_group_pic_order_cnt_lsb shall be in the range of 0 to MaxPicOrderCntLsb - 1, inclusive.
offset_len_minus1 plus 1 specifies the length, in bits, of the entry_point_offset_minus1[ i ] syntax elements. The value of offset_len_minus1 shall be in the range of 0 to 31, inclusive.
[158] entry_point_offset_minus1[ i ] plus 1 specifies the i-th entry point offset in bytes, and is represented by offset_len_minus1 plus 1 bits. The tile group data that follow the tile group header consist of num_tiles_in_tile_group_minus1 + 1 subsets, with subset index values ranging from 0 to num_tiles_in_tile_group_minus1, inclusive.
[159] [Table 6]
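Putting the Table 5 semantics together, the tile group header can be parsed roughly as sketched below (Table 4 itself is not reproduced above, so the ordering follows the semantics; read_u(), read_ue(), and ceil_log2() are hypothetical helpers):

  extern unsigned read_u(int n);  /* hypothetical: reads n fixed bits */
  extern int read_ue(void);       /* hypothetical: reads ue(v) */
  extern int ceil_log2(int x);    /* hypothetical: Ceil( Log2( x ) ) */

  void parse_tile_group_header(int NumTilesInPic,
                               int log2_max_pic_order_cnt_lsb_minus4) {
      int tile_group_pic_parameter_set_id = read_ue();   /* 0..63 */
      unsigned tile_group_address = 0;
      int num_tiles_in_tile_group_minus1 = 0;
      if (NumTilesInPic > 1) {
          tile_group_address = read_u(ceil_log2(NumTilesInPic));
          num_tiles_in_tile_group_minus1 = read_ue();
      }
      int tile_group_type = read_ue();                   /* 2 for IRAP pictures */
      unsigned tile_group_pic_order_cnt_lsb =
          read_u(log2_max_pic_order_cnt_lsb_minus4 + 4); /* POC LSBs */
      if (num_tiles_in_tile_group_minus1 > 0) {
          int offset_len_minus1 = read_ue();             /* 0..31 */
          for (int i = 0; i < num_tiles_in_tile_group_minus1; i++)
              (void)read_u(offset_len_minus1 + 1);       /* entry_point_offset_minus1[ i ] */
      }
      (void)tile_group_pic_parameter_set_id; (void)tile_group_address;
      (void)tile_group_type; (void)tile_group_pic_order_cnt_lsb;
  }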
[160] In an embodiment, a tile group may include a tile group header and tile group data. Once the tile group address is known, the individual positions of each CTU in the tile group may be mapped and decoded. Table 7 below shows an example of the syntax of the tile group data. In Table 7, the tile group data may be replaced with slice data.
[161] [Table 7]
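Since Table 7 is not reproduced here, the decoding loop it describes can only be sketched; the following assumes that FirstCtbAddrTs[] and NumCtusInTile[] are derived as in Table 8 below, that firstTileIdx is the tile index addressed by tile_group_address, and that decode_ctu() and byte_alignment() are placeholders.

  extern void decode_ctu(int ctbAddrTs);  /* placeholder per-CTU decode */
  extern void byte_alignment(void);       /* placeholder bitstream alignment */

  /* Decode the CTUs of a tile group: tiles are visited in order, and the
     CTUs inside each tile follow the tile-scan order. */
  void decode_tile_group_data(int firstTileIdx,
                              int num_tiles_in_tile_group_minus1,
                              const int *FirstCtbAddrTs,
                              const int *NumCtusInTile) {
      for (int i = 0; i <= num_tiles_in_tile_group_minus1; i++) {
          int tileIdx = firstTileIdx + i;
          int ctbAddrTs = FirstCtbAddrTs[tileIdx];
          for (int c = 0; c < NumCtusInTile[tileIdx]; c++)
              decode_ctu(ctbAddrTs + c);
          if (i < num_tiles_in_tile_group_minus1)
              byte_alignment();  /* next tile starts at a signaled entry point */
      }
  }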
[162] Table 8 below shows an example of English semantics for the syntax of the tile group data.
[163] [Table 8]
If the tiles are uniformly spaced, the height of each tile row is derived as:
  RowHeight[ j ] = ( ( j + 1 ) * PicHeightInCtbsY ) / ( num_tile_rows_minus1 + 1 ) -
                   ( j * PicHeightInCtbsY ) / ( num_tile_rows_minus1 + 1 )
Otherwise, the tile row heights are taken from the signaled values, with the last row inferred:
  RowHeight[ num_tile_rows_minus1 ] = PicHeightInCtbsY
  for( j = 0; j < num_tile_rows_minus1; j++ ) {
    RowHeight[ j ] = tile_row_height_minus1[ j ] + 1
    RowHeight[ num_tile_rows_minus1 ] -= RowHeight[ j ]
  }
[164] The list ColBd[ i ] for i ranging from 0 to num_tile_columns_minus1 + 1, inclusive, specifying the location of the i-th tile column boundary in units of CTBs, is derived as follows:
  for( ColBd[ 0 ] = 0, i = 0; i <= num_tile_columns_minus1; i++ )
    ColBd[ i + 1 ] = ColBd[ i ] + ColWidth[ i ]
The list RowBd[ j ] for j ranging from 0 to num_tile_rows_minus1 + 1, inclusive, specifying the location of the j-th tile row boundary in units of CTBs, is derived as follows:
  for( RowBd[ 0 ] = 0, j = 0; j <= num_tile_rows_minus1; j++ )
    RowBd[ j + 1 ] = RowBd[ j ] + RowHeight[ j ]
[165] The list CtbAddrRsToTs[ ctbAddrRs ] for ctbAddrRs ranging from 0 to PicSizeInCtbsY - 1, inclusive, specifying the conversion from a CTB address in CTB raster scan of a picture to a CTB address in tile scan, is derived as follows:
  for( ctbAddrRs = 0; ctbAddrRs < PicSizeInCtbsY; ctbAddrRs++ ) {
    tbX = ctbAddrRs % PicWidthInCtbsY
    tbY = ctbAddrRs / PicWidthInCtbsY
    for( i = 0; i <= num_tile_columns_minus1; i++ )
      if( tbX >= ColBd[ i ] )
        tileX = i
    for( j = 0; j <= num_tile_rows_minus1; j++ )
      if( tbY >= RowBd[ j ] )
        tileY = j
    CtbAddrRsToTs[ ctbAddrRs ] = 0
    for( i = 0; i < tileX; i++ )
      CtbAddrRsToTs[ ctbAddrRs ] += RowHeight[ tileY ] * ColWidth[ i ]
    for( j = 0; j < tileY; j++ )
      CtbAddrRsToTs[ ctbAddrRs ] += PicWidthInCtbsY * RowHeight[ j ]
    CtbAddrRsToTs[ ctbAddrRs ] += ( tbY - RowBd[ tileY ] ) * ColWidth[ tileX ] + tbX - ColBd[ tileX ]
  }
[166] The list CtbAddrTsToRs[ ctbAddrTs ] for ctbAddrTs ranging from 0 to PicSizeInCtbsY - 1, inclusive, specifying the conversion from a CTB address in tile scan to a CTB address in CTB raster scan of a picture, is derived as follows:
  for( ctbAddrRs = 0; ctbAddrRs < PicSizeInCtbsY; ctbAddrRs++ )
    CtbAddrTsToRs[ CtbAddrRsToTs[ ctbAddrRs ] ] = ctbAddrRs
The list TileId[ ctbAddrTs ] for ctbAddrTs ranging from 0 to PicSizeInCtbsY - 1, inclusive, specifying the conversion from a CTB address in tile scan to a tile ID, is derived as follows:
  for( j = 0, tileIdx = 0; j <= num_tile_rows_minus1; j++ )
    for( i = 0; i <= num_tile_columns_minus1; i++, tileIdx++ )
      for( y = RowBd[ j ]; y < RowBd[ j + 1 ]; y++ )
        for( x = ColBd[ i ]; x < ColBd[ i + 1 ]; x++ )
          TileId[ CtbAddrRsToTs[ y * PicWidthInCtbsY + x ] ] = tileIdx
[167] The list NumCtusInTile[ tileIdx ] for tileIdx ranging from 0 to PicSizeInCtbsY - 1, inclusive, specifying the conversion from a tile index to the number of CTUs in the tile, is derived as follows:
  for( j = 0, tileIdx = 0; j <= num_tile_rows_minus1; j++ )
    for( i = 0; i <= num_tile_columns_minus1; i++, tileIdx++ )
      NumCtusInTile[ tileIdx ] = ColWidth[ i ] * RowHeight[ j ]
The list FirstCtbAddrTs[ tileIdx ] for tileIdx ranging from 0 to NumTilesInPic - 1, inclusive, specifying the conversion from a tile ID to the CTB address in tile scan of the first CTB in the tile, is derived as follows:
  for( ctbAddrTs = 0, tileIdx = 0, tileStartFlag = 1; ctbAddrTs < PicSizeInCtbsY; ctbAddrTs++ ) {
    if( tileStartFlag ) {
      FirstCtbAddrTs[ tileIdx ] = ctbAddrTs
      tileStartFlag = 0
    }
    tileEndFlag = ( ctbAddrTs == PicSizeInCtbsY - 1 ) || ( TileId[ ctbAddrTs + 1 ] != TileId[ ctbAddrTs ] )
    if( tileEndFlag ) {
      tileIdx++
      tileStartFlag = 1
    }
  }
[168]
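As a small worked example of the CtbAddrRsToTs derivation above (the numbers are chosen here for illustration): let PicWidthInCtbsY = 4 and PicHeightInCtbsY = 2, with two tile columns of width 2 CTBs and a single tile row, so that ColWidth[ 0 ] = ColWidth[ 1 ] = 2 and RowHeight[ 0 ] = 2. For ctbAddrRs = 2, tbX = 2 and tbY = 0, giving tileX = 1 and tileY = 0; the tile column to the left contributes RowHeight[ 0 ] * ColWidth[ 0 ] = 4 CTBs, and the offset within the tile is ( 0 - 0 ) * 2 + ( 2 - 2 ) = 0, so CtbAddrRsToTs[ 2 ] = 4. Over the whole picture, the raster-scan addresses 0..7 map to the tile-scan addresses 0, 1, 4, 5, 2, 3, 6, 7.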
[169] There may be various applications that require the division of a picture based on tiles, and the present embodiments may be related to such applications.
[170] In one example, consider parallel processing. Some implementations running on multi-core CPUs need to divide the source picture into tiles and tile groups. Each tile group can then be processed in parallel on a separate core. Such parallel processing can be useful for high-resolution real-time encoding of videos. Additionally, the parallel processing can reduce information sharing between tile groups, thereby reducing memory constraints. Since tiles can be distributed to different threads during parallel processing, parallel architectures can benefit from this partitioning mechanism.
[171] In another example, consider maximum transmission unit (MTU) size matching. Coded pictures transmitted over a network may be subject to fragmentation when the coded pictures are larger than the MTU size. Similarly, when the coded segments are small, the Internet Protocol (IP) header overhead can become significant. Packet fragmentation can lead to a loss of error resiliency. To mitigate the effects of packet fragmentation, when the picture is divided into tiles and each tile/tile group is packed into a separate packet, the packet can be smaller than the MTU size.
[172] In yet another example, consider error resilience. Error resilience can be motivated by the requirements of some applications that apply unequal error protection (UEP) to coded tile groups.
[173] As described above, a method for efficiently signaling the structure of the tiles that partition a picture is needed, and this is described in detail with reference to FIGS. 13 to 21.
[174] FIG. 13 is a diagram showing an example of a picture in a video conferencing video program.
[175] According to the present specification, in tiling that partitions a picture into a plurality of tiles, flexible tiling can be achieved by using predefined rectangular regions.
[176] Conventional tiling has been performed according to the raster scan order, but a tiling structure of this kind has aspects that are not suitable for recent practical applications such as video conferencing video programs.
[177] FIG. 13 may show an example of a picture in a video conferencing video program when a video conference with multiple participants is in progress. Here, the participants may be denoted Speaker 1, Speaker 2, Speaker 3, and Speaker 4. The region corresponding to each participant in the picture may correspond to one of preset regions, and each of the preset regions may be coded as a single tile or a tile group. When a participant in the video conference changes, the single tile or tile group corresponding to that participant may also change.
[178] FIG. 14 is a diagram showing an example of partitioning a picture into tiles or tile groups in a video conferencing video program.
[179] Referring to FIG. 14, the region assigned to Speaker 1 participating in the video conference may be coded as a single tile. Likewise, the regions assigned to each of Speaker 2, Speaker 3, and Speaker 4 may also be coded as single tiles.
[180] When the region assigned to each participant is coded using an individual tile as in FIG. 14, efficient coding can be enabled as spatial dependency is improved. This partitioning method can also be applied to 360 video data, which is described below with reference to FIG. 15.
[181] FIG. 15 is a diagram showing an example of partitioning a picture into tiles or tile groups based on motion constrained tile sets (MCTSs).
[182] In FIG. 15, the picture may be obtained from 360-degree video data. A 360 video may mean video or image content that is captured or played back in all directions (360 degrees) at the same time, as needed to provide virtual reality (VR). A 360 video may mean a video or image represented on various forms of 3D space according to the 3D model; for example, a 360 video may be represented on a spherical surface.
[183] A two-dimensional (2D) picture obtained from 360-degree video data may be encoded at one or more spatial resolutions. For example, the picture may be encoded at a first resolution and a second resolution, and the first resolution may be higher than the second resolution. Referring to FIG. 15, the picture may be encoded at two spatial resolutions with sizes of 1536x1536 and 768x768, respectively, but the spatial resolutions are not limited thereto and may correspond to various sizes.
[184] In this case, a 6x4 tile grid may be used for the bitstreams encoded at each of the two spatial resolutions. In addition, a motion constrained tile set (MCTS) for each position of the tiles may be coded and used. As described above with reference to FIGS. 13 and 14, each of the MCTSs may include the tiles located in each of the preset regions in the picture.
[185] An MCTS may include at least one tile forming a rectangular tile set, and a tile may represent a rectangular region composed of coding tree blocks (CTBs) of the two-dimensional picture. A tile may be distinguished based on specific tile rows and tile columns within the picture. In the encoding/decoding process, when inter prediction is performed on the blocks within a specific MCTS, the blocks within that specific MCTS may be restricted to refer only to the corresponding MCTS of the reference picture for motion estimation/motion compensation.
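The restriction in paragraph [185] can be sketched as a simple bounds check (an illustration only, not normative encoder behavior; the MCTS bounds and the interpolation margin are passed in as assumed parameters):

  /* Returns 1 when the reference block addressed by motion vector (mvX, mvY),
     including an interpolation margin, lies entirely inside the collocated
     MCTS of the reference picture. */
  int mv_stays_inside_mcts(int blkX, int blkY, int blkW, int blkH,
                           int mvX, int mvY, int margin,
                           int mctsLeft, int mctsTop,
                           int mctsRight, int mctsBottom) {
      int left   = blkX + mvX - margin;
      int top    = blkY + mvY - margin;
      int right  = blkX + mvX + blkW - 1 + margin;
      int bottom = blkY + mvY + blkH - 1 + margin;
      return left >= mctsLeft && top >= mctsTop &&
             right <= mctsRight && bottom <= mctsBottom;
  }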
[186] For example, referring to FIG. 15, the 12 first MCTSs (1510) may be derived from the bitstream encoded at the spatial resolution with the 1536x1536 size, and the 12 second MCTSs (1520) may be derived from the bitstream encoded at the spatial resolution with the 768x768 size. That is, the first MCTSs (1510) may correspond to regions having the first resolution in the same picture, and the second MCTSs (1520) may correspond to regions having the second resolution in the same picture.
[187] The first MCTSs may correspond to a viewport region within the picture. The viewport region may mean the region the user is viewing in the 360-degree video. Alternatively, the first MCTSs may correspond to a region of interest (ROI) within the picture. The ROI region may mean a region of interest to users, suggested by the 360 content provider.
[188] In this case, the MCTSs received at a single time instant may be merged to form one merged picture. For example, the first MCTSs (1510) and the second MCTSs (1520) may be combined and merged into a merged picture (1530) of size 1920x4708, and the merged picture (1530) may have four tile groups.
[189] Table 9 below shows an example of the PPS syntax.
[190] [Table 9]
[191] Table 10 below shows an example of English semantics for the above syntax.
[193] [Table 10]
tile_addr_val[ i ][ j ] specifies the tile_group_address value of the tile of the i-th tile row and the j-th tile column. The length of tile_addr_val[ i ][ j ] is tile_addr_len_minus1 + 1 bits.
For any integer m in the range of 0 to num_tile_columns_minus1, inclusive, and any integer n in the range of 0 to num_tile_rows_minus1, inclusive, tile_addr_val[ i ][ j ] shall not be equal to tile_addr_val[ m ][ n ] when i is not equal to m or j is not equal to n.
num_mcts_in_pic_minus1 plus 1 specifies the number of MCTSs in the picture.
[195] In an embodiment, when a plurality of tiles exist in the picture, the syntax element uniform_tile_spacing_flag, indicating whether tiles of equal width and height are derived by uniformly dividing the picture, may be parsed. The syntax element uniform_tile_spacing_flag may be used to indicate whether the tiles in the picture are uniformly divided. When the syntax element uniform_tile_spacing_flag is enabled, the width of the tile columns and the height of the tile rows may be parsed. That is, the syntax element tile_column_width_minus1 indicating the width of the tile columns and the syntax element tile_row_height_minus1 indicating the height of the tile rows may be signaled and/or parsed.
[196] In an embodiment, a syntax element mcts_flag, indicating whether the tiles in the picture form motion constrained tile sets (MCTSs), may be parsed. Depending on its value, the tiles or tile groups in the picture may or may not form rectangular tile sets, and the use of sample values or variables outside a rectangular tile set may or may not be restricted. When mcts_flag is equal to 1, it may indicate that the picture is divided into MCTSs.
[197] In addition, the syntax element num_mcts_in_pic_minus1 plus 1 may indicate the number of MCTSs in the picture. In an embodiment, when mcts_flag is equal to 1, that is, when the picture is divided into MCTSs, the syntax element num_mcts_in_pic_minus1 may be parsed.
[198] In addition, the syntax element top_left_tile_addr[ i ] may indicate the tile_group_address value that is the position of the top-left tile in the i-th MCTS. Likewise, the syntax element bottom_right_tile_addr[ i ] may indicate the tile_group_address value that is the position of the bottom-right tile in the i-th MCTS.
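Paragraphs [195] to [198] can be summarized by the following parsing sketch; since Table 9 is not reproduced above, the ordering and the descriptors are assumptions, the readers are the same hypothetical helpers as before, and the tile-address field width of tile_addr_len_minus1 + 1 bits follows Table 10.

  extern int read_u1(void);       /* hypothetical: one bit */
  extern int read_ue(void);       /* hypothetical: ue(v) */
  extern unsigned read_u(int n);  /* hypothetical: n fixed bits */

  void parse_pps_mcts_info(int tile_addr_len_minus1) {
      if (read_u1()) {                        /* uniform_tile_spacing_flag */
          /* in this scheme, one width/height pair defines the uniform grid */
          int tile_column_width_minus1 = read_ue();
          int tile_row_height_minus1   = read_ue();
          (void)tile_column_width_minus1; (void)tile_row_height_minus1;
      }
      if (read_u1()) {                        /* mcts_flag: picture split into MCTSs */
          int num_mcts_in_pic_minus1 = read_ue();
          for (int i = 0; i <= num_mcts_in_pic_minus1; i++) {
              /* tile_group_address of the top-left and bottom-right tiles
                 of the i-th MCTS, tile_addr_len_minus1 + 1 bits each */
              unsigned top_left_tile_addr     = read_u(tile_addr_len_minus1 + 1);
              unsigned bottom_right_tile_addr = read_u(tile_addr_len_minus1 + 1);
              (void)top_left_tile_addr; (void)bottom_right_tile_addr;
          }
      }
  }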
[199] Table 11 below shows an example of the tile group data syntax. In Table 11, the tile group data may be replaced with slice data.
[200] [Table 11]
[201] Table 12 below shows an example of English semantics for the above tile group data syntax.
Give an example.
[202] [Table 12]
[203] Meanwhile, the scanning process, which is the order in which the tiles in the picture are decoded, may be specified as shown in Table 13 below.
[204] [Table 13]
[205] FIG. 16 is a diagram showing an example of partitioning a picture based on ROI regions.
[206] According to the present specification, in tiling that partitions a picture into a plurality of tiles, flexible tiling based on a region of interest (ROI) can be achieved. Referring to FIG. 16, the picture may be divided into a plurality of tile groups based on ROI regions.
[207] Table 14 below shows an example of the PPS syntax.
[208] [Table 14]
[209] Table 15 below shows an example of English semantics for the above syntax.
[210] [Table 15]
[212] In an embodiment, a syntax element tile_group_info_in_pps_flag, indicating whether the tile group information related to the tiles included in a tile group is present in the PPS or in the tile group headers referring to the PPS, may be parsed. When tile_group_info_in_pps_flag is equal to 1, it may indicate that the tile group information is present in the PPS and is not present in the tile group headers referring to the PPS. When tile_group_info_in_pps_flag is equal to 0, it may indicate that the tile group information is not present in the PPS and is present in the tile group headers referring to the PPS.
[213] In addition, the syntax element num_tile_groups_in_pic_minus1 may indicate the number of tile groups in the picture referring to the PPS.
[214] In addition, the syntax element pps_first_tile_id[ i ] may indicate the tile ID of the first tile of the i-th tile group, and the syntax element pps_last_tile_id[ i ] may indicate the tile ID of the last tile of the i-th tile group.
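A sketch of the ROI-based tile group signaling of paragraphs [212] to [214] (Table 14 is not reproduced above, so the ordering and the descriptors are assumptions):

  extern int read_u1(void);  /* hypothetical: one bit */
  extern int read_ue(void);  /* hypothetical: ue(v) */

  int pps_first_tile_id[64], pps_last_tile_id[64];

  void parse_pps_tile_group_info(void) {
      if (read_u1()) {   /* tile_group_info_in_pps_flag: layout carried in the PPS */
          int num_tile_groups_in_pic_minus1 = read_ue();
          for (int i = 0; i <= num_tile_groups_in_pic_minus1; i++) {
              pps_first_tile_id[i] = read_ue();  /* first tile of the i-th group */
              pps_last_tile_id[i]  = read_ue();  /* last tile of the i-th group */
          }
      }
      /* otherwise the tile group info is parsed from each tile group header */
  }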
[215] FIG. 17 is a diagram showing an example of partitioning a picture into a plurality of tiles.
[216] According to the present specification, in tiling that divides a picture into a plurality of tiles, flexible tiling can be achieved by considering tiles smaller in size than the coding tree unit (CTU). A tiling structure of this kind can be usefully applied to recent video applications such as video conferencing video programs.
[217] 도 17을참조하면,픽처는복수의타일들로파티셔닝될수있으며,복수의 [217] Referring to FIG. 17, a picture may be partitioned into a plurality of tiles, and a plurality of
타일들중적어도하나의타일크기는코딩트리유닛 (0X1)의크기보다작을수 있다.예를들 타일 1(1¾ 1),타일 2(1¾ 2),타일 3(1¾ 3)및타일 4(1116 4)로파 수있고,그중타일 1(1¾ 1),타일 2(1116 2)및타일 4(11노 4)의크기는 기보다작다. At least one of the tiles may be smaller than the size of the coding tree unit (0X1), e.g. tile 1 (1¾ 1), tile 2 (1¾ 2), tile 3 (1¾ 3) and tile 4 (1116). 4) It can be dug, and among them, tile 1 (1¾ 1), tile 2 (1116 2) and tile 4 (11 no 4) are smaller than that.
[219] [Table 16]
[220] Table 17 below shows an example of the English semantics for the PPS syntax.
[221] [Table 17]
[222] In one embodiment, the syntax element tile_size_unit_idc may indicate the unit size of a tile. For example, when tile_size_unit_idc is 0, 1, 2, ..., the height and width of a tile may be defined in units of 4, 8, 16, ... (e.g., in relation to the coding tree block (CTB)).
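Under one plausible reading of this semantics, the unit size doubles with each increment of tile_size_unit_idc, starting from 4. The mapping below is an assumption for illustration only; the table semantics govern the actual definition.

```python
# Assumed mapping from tile_size_unit_idc ([222]) to a tile width/height unit:
# idc = 0, 1, 2, ... -> unit = 4, 8, 16, ...
# Whether the unit is in luma samples or CTB-related is an assumption here.

def tile_size_unit(tile_size_unit_idc: int) -> int:
    return 4 << tile_size_unit_idc  # 4, 8, 16, ...

assert [tile_size_unit(i) for i in range(3)] == [4, 8, 16]
```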
[223] FIG. 18 is a flowchart illustrating an operation of a decoding apparatus according to an embodiment, and FIG. 19 is a block diagram illustrating a configuration of the decoding apparatus according to the embodiment.
[224] Each step disclosed in FIG. 18 may be performed by the decoding apparatus 300 disclosed in FIG. 3. More specifically, S1800 and S1810 may be performed by the entropy decoding unit 310 disclosed in FIG. 3, S1820 may be performed by the prediction unit 330 disclosed in FIG. 3, and S1830 may be performed by the adder 340 disclosed in FIG. 3. In addition, the operations according to S1800 to S1830 are based on some of the contents described above with reference to FIGS. 1 to 17. Accordingly, detailed descriptions overlapping with the contents described above with reference to FIGS. 1 to 17 will be omitted or simplified.
[225] As shown in FIG. 19, the decoding apparatus according to an embodiment may include an entropy decoding unit 310, a prediction unit 330, and an adder 340. However, in some cases, not all of the components shown in FIG. 19 may be essential components of the decoding apparatus, and the decoding apparatus may be implemented with more or fewer components than those shown in FIG. 19.
[226] In the decoding apparatus according to an embodiment, the entropy decoding unit 310, the prediction unit 330, and the adder 340 may each be implemented as a separate chip, or at least two or more of the components may be implemented through a single chip.
[227] The decoding apparatus according to an embodiment may obtain, from a bitstream, image information including partition information on a current picture and prediction information on a current block included in the current picture (S1800). More specifically, the entropy decoding unit 310 of the decoding apparatus may obtain, from the bitstream, the image information including the partition information on the current picture and the prediction information on the current block included in the current picture. In one example, the partition information may include at least one of information for dividing the current picture into a plurality of tiles, information for dividing the current picture into a plurality of tile groups, or information for dividing the current picture into a plurality of slices. The prediction information may include at least one of information on intra prediction for the current block, information on inter prediction, or information on combined inter-intra prediction (CIIP).
[228] The decoding apparatus according to an embodiment may derive a partitioning structure of the current picture based on a plurality of tiles, based on the partition information on the current picture (S1810). More specifically, the entropy decoding unit 310 of the decoding apparatus may derive the partitioning structure of the current picture based on the plurality of tiles, based on the partition information on the current picture.
[229] The decoding apparatus according to an embodiment may derive prediction samples for the current block based on the prediction information on the current block included in one tile among the plurality of tiles (S1820). More specifically, the prediction unit 330 of the decoding apparatus may derive the prediction samples for the current block based on the prediction information on the current block included in the one tile among the plurality of tiles.
[230] The decoding apparatus according to an embodiment may reconstruct the current picture based on the prediction samples (S1830). More specifically, the adder 340 of the decoding apparatus may reconstruct the current picture based on the prediction samples.
[231] In one embodiment, the partition information on the current picture may include at least one of flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), count information on the number of the MCTSs in the current picture, position information of a tile located at a top-left side of each of the MCTSs, or position information of a tile located at a bottom-right side of each of the MCTSs.
[232] The decoding apparatus according to an embodiment may derive the MCTSs for the current picture based on the partition information. In addition, the decoding apparatus may merge the MCTSs to construct a merged picture and may decode the merged picture based on the MCTSs. In this case, first MCTSs among the MCTSs may correspond to a region having a first resolution, and second MCTSs among the MCTSs may correspond to a region having a second resolution. The first resolution may be higher than the second resolution.
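As a hedged illustration of this merged-picture composition, the sketch below marks the viewport MCTSs as coming from the higher-resolution region and the remaining MCTSs from the lower-resolution region; the record layout, the viewport_ids parameter, and the function name are hypothetical.

```python
# Illustrative sketch of composing a merged picture from MCTSs ([232]-[233]):
# first (viewport) MCTSs use the first, higher resolution, while the other
# MCTSs use the second, lower resolution.

def compose_merged_picture(mcts_list, viewport_ids):
    merged = []
    for mcts in mcts_list:
        resolution = "high" if mcts["id"] in viewport_ids else "low"
        merged.append({"id": mcts["id"], "resolution": resolution})
    return merged

mcts_list = [{"id": i} for i in range(4)]
print(compose_merged_picture(mcts_list, viewport_ids={0, 1}))
# MCTSs 0 and 1 (the viewport) are taken at the first, higher resolution.
```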
[233] In one embodiment, the current picture may be obtained from 360-degree video data, and the first MCTSs may correspond to a viewport region in the current picture.
[234] In one embodiment, each of the MCTSs may include tiles located in each of preset regions in the current picture.
[235] In one embodiment, the partition information on the current picture may include information on whether tile group information on the tiles included in at least one tile group is present in the PPS or in a tile group header referring to the PPS.
[236] In one embodiment, the partition information on the current picture may include information on the unit size of the plurality of tiles.
[237] In one embodiment, the size of at least one tile among the plurality of tiles may be smaller than the size of a coding tree unit (CTU).
[238] According to the present disclosure described above, a picture can be flexibly partitioned into a plurality of tiles. In addition, according to the present disclosure, the efficiency of picture partitioning can be improved based on the partition information on the current picture.
[239] FIG. 20 is a flowchart illustrating an operation of an encoding apparatus according to an embodiment, and FIG. 21 is a block diagram illustrating a configuration of the encoding apparatus according to the embodiment.
[240] The encoding apparatus according to FIGS. 20 and 21 may perform operations corresponding to those of the decoding apparatus according to FIGS. 18 and 19. Accordingly, the operations of the encoding apparatus described below with reference to FIGS. 20 and 21 may likewise be applied to the decoding apparatus according to FIGS. 18 and 19.
[241] Each step disclosed in FIG. 20 may be performed by the encoding apparatus 200 disclosed in FIG. 2. More specifically, S2000 and S2010 may be performed by the image partitioning unit 210 disclosed in FIG. 2, S2020 and S2030 may be performed by the prediction unit 220 disclosed in FIG. 2, and S2040 may be performed by the entropy encoding unit 240 disclosed in FIG. 2. In addition, the operations according to S2000 to S2040 are based on some of the contents described above with reference to FIGS. 1 to 17. Accordingly, detailed descriptions overlapping with the contents described above with reference to FIGS. 1 to 17 will be omitted or simplified.
[242] As shown in FIG. 21, the encoding apparatus according to an embodiment may include an image partitioning unit 210, a prediction unit 220, and an entropy encoding unit 240. However, in some cases, not all of the components shown in FIG. 21 may be essential components of the encoding apparatus, and the encoding apparatus may be implemented with more or fewer components than those shown in FIG. 21.
[243] In the encoding apparatus according to an embodiment, the image partitioning unit 210, the prediction unit 220, and the entropy encoding unit 240 may each be implemented as a separate chip, or at least two or more of the components may be implemented through a single chip.
[244] The encoding apparatus according to an embodiment may divide a current picture into a plurality of tiles (S2000). More specifically, the image partitioning unit 210 of the encoding apparatus may divide the current picture into the plurality of tiles.
[245] The encoding apparatus according to an embodiment may generate partition information on the current picture based on the plurality of tiles (S2010). More specifically, the image partitioning unit 210 of the encoding apparatus may generate the partition information on the current picture based on the plurality of tiles.
[246] The encoding apparatus according to an embodiment may derive prediction samples for a current block included in one tile among the plurality of tiles (S2020). More specifically, the prediction unit 220 of the encoding apparatus may derive the prediction samples for the current block included in the one tile among the plurality of tiles.
[247] The encoding apparatus according to an embodiment may generate prediction information on the current block based on the prediction samples (S2030). More specifically, the prediction unit 220 of the encoding apparatus may generate the prediction information on the current block based on the prediction samples.
[248] The encoding apparatus according to an embodiment may encode image information including the partition information on the current picture and the prediction information on the current block (S2040). More specifically, the entropy encoding unit 240 of the encoding apparatus may encode image information including at least one of the partition information on the current picture or the prediction information on the current block.
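Mirroring the decoding sketch above, the encoding flow S2000 to S2040 can be summarized as follows. Each stage is again a trivial stand-in rather than an actual partitioning or prediction process; the dict layouts are illustrative assumptions.

```python
# Hedged, runnable sketch of the S2000-S2040 encoding flow of FIG. 20.

def encode_picture(picture: dict) -> dict:
    # S2000: image partitioning unit divides the current picture into tiles
    tiles = [picture["samples"][i::picture["num_tiles"]]
             for i in range(picture["num_tiles"])]
    # S2010: generate partition information based on the plurality of tiles
    partition_info = {"num_tiles": len(tiles)}
    # S2020: derive prediction samples for a current block in one tile
    pred_samples = tiles[0][:4]
    # S2030: generate prediction information based on the prediction samples
    prediction_info = {"tile_index": 0, "pred_values": pred_samples}
    # S2040: entropy encoding unit encodes the image information
    return {"partition_info": partition_info,
            "prediction_info": prediction_info}

print(encode_picture({"num_tiles": 4, "samples": list(range(32))}))
```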
[249] In one embodiment, the partition information on the current picture may include at least one of flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), count information on the number of the MCTSs in the current picture, position information of a tile located at a top-left side of each of the MCTSs, or position information of a tile located at a bottom-right side of each of the MCTSs.
[250] The encoding apparatus according to an embodiment may derive the MCTSs for the current picture based on the partition information. In addition, the encoding apparatus may merge the MCTSs to construct a merged picture and may encode the merged picture based on the MCTSs. In this case, first MCTSs among the MCTSs may correspond to a region having a first resolution, and second MCTSs among the MCTSs may correspond to a region having a second resolution. The first resolution may be higher than the second resolution.
[251] In one embodiment, the current picture may be obtained from 360-degree video data, and the first MCTSs may correspond to a viewport region in the current picture.
[252] In one embodiment, each of the MCTSs may include tiles located in each of preset regions in the current picture.
[253] In one embodiment, the partition information on the current picture may include information on whether tile group information on the tiles included in at least one tile group is present in the PPS or in a tile group header referring to the PPS.
[254] In one embodiment, the partition information on the current picture may include information on the unit size of the plurality of tiles.
[255] In one embodiment, the size of at least one tile among the plurality of tiles may be smaller than the size of a coding tree unit (CTU).
[256] In the above-described embodiments, the methods are described based on flowcharts as a series of steps or blocks, but the present disclosure is not limited to the order of the steps, and some steps may occur in a different order from, or simultaneously with, other steps described above. In addition, those skilled in the art will understand that the steps shown in the flowcharts are not exclusive, that other steps may be included, or that one or more steps in the flowcharts may be deleted without affecting the scope of the present disclosure.
[257] The above-described method according to the present disclosure may be implemented in the form of software, and the encoding apparatus and/or decoding apparatus according to the present disclosure may be included in a device that performs image processing, such as a TV, a computer, a smartphone, a set-top box, or a display device.
[258] When the embodiments of the present disclosure are implemented as software, the above-described method may be implemented as a module (process, function, etc.) that performs the above-described functions. The module may be stored in a memory and executed by a processor. The memory may be located inside or outside the processor and may be connected to the processor by various well-known means. The processor may include an application-specific integrated circuit (ASIC), another chipset, a logic circuit, and/or a data processing device. The memory may include a read-only memory (ROM), a random access memory (RAM), a flash memory, a memory card, a storage medium, and/or another storage device. That is, the embodiments described in the present disclosure may be implemented and performed on a processor, a microprocessor, a controller, or a chip. For example, the functional units shown in each drawing may be implemented and performed on a computer, a processor, a microprocessor, a controller, or a chip. In this case, information for implementation (e.g., information on instructions) or an algorithm may be stored in a digital storage medium.
[259] In addition, the decoding apparatus and the encoding apparatus to which the present disclosure is applied may be included in a multimedia broadcast transmission/reception device, a mobile communication terminal, a home cinema video device, a digital cinema video device, a surveillance camera, a video chat device, a real-time communication device such as a video communication device, a mobile streaming device, a storage medium, a camcorder, a video-on-demand (VoD) service providing device, an over-the-top (OTT) video device, an Internet streaming service providing device, a three-dimensional (3D) video device, a virtual reality (VR) device, an augmented reality (AR) device, a video telephony device, a transportation terminal (e.g., a vehicle terminal (including an autonomous vehicle), an airplane terminal, or a ship terminal), a medical video device, and the like, and may be used to process a video signal or a data signal. For example, the OTT video devices may include a game console, a Blu-ray player, an Internet-connected TV, a home theater system, a smartphone, a tablet PC, a digital video recorder (DVR), and the like.
[260] In addition, the processing method to which the present disclosure is applied may be produced in the form of a program executed by a computer and may be stored in a computer-readable recording medium. Multimedia data having a data structure according to the present disclosure may also be stored in a computer-readable recording medium. The computer-readable recording medium includes all kinds of storage devices and distributed storage devices in which computer-readable data is stored. The computer-readable recording medium may include, for example, a Blu-ray disc (BD), a universal serial bus (USB), a ROM, a PROM, an EPROM, an EEPROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device. In addition, the computer-readable recording medium includes media implemented in the form of a carrier wave (e.g., transmission via the Internet). In addition, a bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted over a wired or wireless communication network.
[261] In addition, an embodiment of the present disclosure may be implemented as a computer program product using program code, and the program code may be executed on a computer according to an embodiment of the present disclosure. The program code may be stored on a computer-readable carrier.
[262] FIG. 22 shows an example of a content streaming system to which the disclosure of this document may be applied.
[263] Referring to FIG. 22, the content streaming system to which the present disclosure is applied may broadly include an encoding server, a streaming server, a web server, a media storage, a user device, and a multimedia input device.
[264] The encoding server serves to compress content input from multimedia input devices such as a smartphone, a camera, or a camcorder into digital data to generate a bitstream, and to transmit the bitstream to the streaming server. As another example, when multimedia input devices such as a smartphone, a camera, or a camcorder directly generate a bitstream, the encoding server may be omitted.
[265] The bitstream may be generated by an encoding method or a bitstream generation method to which the present disclosure is applied, and the streaming server may temporarily store the bitstream while transmitting or receiving the bitstream.
[266] The streaming server transmits multimedia data to the user device based on a user request through the web server, and the web server serves as a medium informing the user of the available services. When the user requests a desired service from the web server, the web server delivers the request to the streaming server, and the streaming server transmits multimedia data to the user. In this case, the content streaming system may include a separate control server, and the control server then serves to control commands/responses between the devices in the content streaming system.
[267] The streaming server may receive content from the media storage and/or the encoding server. For example, when content is received from the encoding server, the content may be received in real time. In this case, to provide a smooth streaming service, the streaming server may store the bitstream for a predetermined period of time.
[268] Examples of the user device may include a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, smart glasses, or a head mounted display (HMD)), a digital TV, a desktop computer, digital signage, and the like.
[269] Each server in the content streaming system may be operated as a distributed server, in which case data received from each server may be processed in a distributed manner.
[270] The claims described in this specification may be combined in various ways. For example, the technical features of the method claims of this specification may be combined and implemented as an apparatus, and the technical features of the apparatus claims of this specification may be combined and implemented as a method. In addition, the technical features of the method claims and the technical features of the apparatus claims of this specification may be combined and implemented as an apparatus, and the technical features of the method claims and the technical features of the apparatus claims of this specification may be combined and implemented as a method.
Claims
[Claim 1] An image decoding method performed by a decoding apparatus, the method comprising:
obtaining, from a bitstream, image information including partition information on a current picture and prediction information on a current block included in the current picture;
deriving a partitioning structure of the current picture based on a plurality of tiles, based on the partition information on the current picture;
deriving prediction samples for the current block based on the prediction information on the current block included in one tile among the plurality of tiles; and
reconstructing the current picture based on the prediction samples,
wherein the partition information on the current picture includes at least one of flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), count information on the number of the MCTSs in the current picture, position information of a tile located at a top-left side of each of the MCTSs, or position information of a tile located at a bottom-right side of each of the MCTSs.
[Claim 2] The image decoding method of claim 1, further comprising deriving MCTSs for the current picture based on the partition information,
wherein first MCTSs among the MCTSs correspond to a region having a first resolution, second MCTSs among the MCTSs correspond to a region having a second resolution, and the first resolution is higher than the second resolution.
[Claim 3] The image decoding method of claim 2, wherein the current picture is obtained from 360-degree video data, and the first MCTSs correspond to a viewport region in the current picture.
[Claim 4] The image decoding method of claim 1, wherein each of the MCTSs includes tiles located in each of preset regions in the current picture.
[Claim 5] The image decoding method of claim 1, wherein the partition information on the current picture includes flag information on whether tile group information on tiles included in at least one tile group is present in a picture parameter set (PPS) or in a tile group header referring to the PPS.
[Claim 6] The image decoding method of claim 1, wherein the partition information on the current picture includes information on a unit size of the plurality of tiles.
[Claim 7] The image decoding method of claim 1, wherein a size of at least one tile among the plurality of tiles is smaller than a size of a coding tree unit (CTU).
[Claim 8] An image encoding method performed by an encoding apparatus, the method comprising:
dividing a current picture into a plurality of tiles;
generating partition information on the current picture based on the plurality of tiles;
deriving prediction samples for a current block included in one tile among the plurality of tiles;
generating prediction information on the current block based on the prediction samples; and
encoding image information including the partition information on the current picture and the prediction information on the current block,
wherein the partition information on the current picture includes at least one of flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), count information on the number of the MCTSs in the current picture, position information of a tile located at a top-left side of each of the MCTSs, or position information of a tile located at a bottom-right side of each of the MCTSs.
[Claim 9] The image encoding method of claim 8, further comprising deriving MCTSs for the current picture based on the partition information,
wherein first MCTSs among the MCTSs correspond to a region having a first resolution, second MCTSs among the MCTSs correspond to a region having a second resolution, and the first resolution is higher than the second resolution.
[Claim 10] The image encoding method of claim 9, wherein the first MCTSs correspond to a viewport region, within the current picture, for 360-degree video data.
[Claim 11] The image encoding method of claim 8, wherein each of the MCTSs includes tiles located in each of preset regions in the current picture.
[Claim 12] The image encoding method of claim 8, wherein the partition information on the current picture includes flag information on whether tile group information on tiles included in at least one tile group is present in a picture parameter set (PPS) or in a tile group header referring to the PPS.
[Claim 13] The image encoding method of claim 8, wherein the partition information on the current picture includes information on a unit size of the plurality of tiles.
[Claim 14] The image encoding method of claim 8, wherein a size of at least one tile among the plurality of tiles is smaller than a size of a coding tree unit (CTU).
[Claim 15] A decoder-readable storage medium storing image information encoded by an image encoding method, the image encoding method comprising:
dividing a current picture into a plurality of tiles;
generating partition information on the current picture based on the plurality of tiles;
deriving prediction samples for a current block included in one tile among the plurality of tiles;
generating prediction information on the current block based on the prediction samples; and
encoding image information including the partition information on the current picture and the prediction information on the current block,
wherein the partition information on the current picture includes at least one of flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), count information on the number of the MCTSs in the current picture, position information of a tile located at a top-left side of each of the MCTSs, or position information of a tile located at a bottom-right side of each of the MCTSs.
The division information on the current picture is, lag information on whether the current picture is divided into motion constrained tile sets (MCTS), information on the number of MCTSs in the current picture, and information on each of the MCTSs. A storage medium comprising at least one of location information of a tile positioned at the top-left or location information of a tile positioned at a bottom-right with respect to each of the MCTSs.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962810941P | 2019-02-26 | 2019-02-26 | |
US62/810,941 | 2019-02-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020175908A1 true WO2020175908A1 (en) | 2020-09-03 |
Family
ID=72238974
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/002733 WO2020175908A1 (en) | 2019-02-26 | 2020-02-26 | Method and device for partitioning picture on basis of signaled information |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020175908A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023197998A1 (en) * | 2022-04-13 | 2023-10-19 | Mediatek Inc. | Extended block partition types for video coding |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150099496A (en) * | 2013-10-22 | 2015-08-31 | 주식회사 케이티 | A method and an apparatus for encoding and decoding a scalable video signal |
KR20150140360A (en) * | 2013-04-08 | 2015-12-15 | 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 | Motion-constrained tile set for region of interest coding |
KR101835802B1 (en) * | 2012-09-18 | 2018-03-08 | 브이아이디 스케일, 인크. | Region of interest video coding using tiles and tile groups |
US20180255305A1 (en) * | 2017-03-03 | 2018-09-06 | Qualcomm Incorporated | Coding identifiers for motion constrained tile sets |
KR101912485B1 (en) * | 2011-08-25 | 2018-10-26 | 선 페이턴트 트러스트 | Methods and apparatuses for encoding, extracting and decoding video using tiles coding scheme |
- 2020-02-26 WO PCT/KR2020/002733 patent/WO2020175908A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101912485B1 (en) * | 2011-08-25 | 2018-10-26 | 선 페이턴트 트러스트 | Methods and apparatuses for encoding, extracting and decoding video using tiles coding scheme |
KR101835802B1 (en) * | 2012-09-18 | 2018-03-08 | 브이아이디 스케일, 인크. | Region of interest video coding using tiles and tile groups |
KR20150140360A (en) * | 2013-04-08 | 2015-12-15 | 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 | Motion-constrained tile set for region of interest coding |
KR20150099496A (en) * | 2013-10-22 | 2015-08-31 | 주식회사 케이티 | A method and an apparatus for encoding and decoding a scalable video signal |
US20180255305A1 (en) * | 2017-03-03 | 2018-09-06 | Qualcomm Incorporated | Coding identifiers for motion constrained tile sets |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023197998A1 (en) * | 2022-04-13 | 2023-10-19 | Mediatek Inc. | Extended block partition types for video coding |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220182681A1 (en) | Image or video coding based on sub-picture handling structure | |
US11575942B2 (en) | Syntax design method and apparatus for performing coding by using syntax | |
US11825080B2 (en) | Image decoding method and apparatus therefor | |
US12015796B2 (en) | Image coding method on basis of entry point-related information in video or image coding system | |
JP2023060310A (en) | Video encoding and decoding method and apparatus | |
JP2023503070A (en) | Image coding apparatus and method based on filtering | |
US20240333930A1 (en) | Picture partitioning-based coding method and device | |
JP2024144567A (en) | Method and apparatus for signaling picture partition information - Patents.com | |
US20240146920A1 (en) | Method for decoding image by using block partitioning in image coding system, and device therefor | |
WO2020175908A1 (en) | Method and device for partitioning picture on basis of signaled information | |
US20230308674A1 (en) | Method and apparatus for encoding/decoding image on basis of cpi sei message, and recording medium having bitstream stored therein | |
US12058335B2 (en) | Image coding method based on entry point-related information in video or image coding system | |
US20240205424A1 (en) | Image coding method based on information related to tile and information related to slice in video or image coding system | |
JP7536876B2 (en) | Image decoding method and apparatus for coding image information including a picture header | |
US20230028326A1 (en) | Image coding method based on partial entry point-associated information in video or image coding system | |
JP2023526535A (en) | Video coding method and apparatus | |
WO2020175905A1 (en) | Signaled information-based picture partitioning method and apparatus | |
US20240056591A1 (en) | Method for image coding based on signaling of information related to decoder initialization | |
WO2020175904A1 (en) | Method and apparatus for picture partitioning on basis of signaled information | |
US20240214584A1 (en) | Slice and tile configuration for image/video coding | |
US20230156228A1 (en) | Image/video encoding/decoding method and device | |
KR20220082082A (en) | Method and apparatus for signaling image information | |
KR20220083818A (en) | Method and apparatus for signaling slice-related information | |
KR20220085819A (en) | Video decoding method and apparatus | |
CA3162960A1 (en) | Method and device for signaling information related to slice in image/video encoding/decoding system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20763794; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 20763794; Country of ref document: EP; Kind code of ref document: A1 |