WO2024169989A1 - Methods and apparatus of merge list with constrained for cross-component model candidates in video coding - Google Patents
- Publication number: WO2024169989A1 (PCT/CN2024/077432)
- Authority: WIPO (PCT)
- Prior art keywords: candidates, candidate, model, neighbouring, ccm
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/186—using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/196—characterised by the adaptation method, adaptation tool or adaptation type being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
- H04N19/593—using predictive coding involving spatial prediction techniques
Definitions
- The present application is a non-provisional application of, and claims priority to, U.S. Provisional Patent Application No. 63/485,564, filed on February 17, 2023, and U.S. Provisional Patent Application No. 63/491,089, filed on March 20, 2023.
- The U.S. Provisional Patent Applications are hereby incorporated by reference in their entireties.
- The present invention relates to video coding systems.
- the present invention relates to adding cross-component model candidates into a history table or merge list and constraints on the number of candidates in the history table or merge list in a video coding system.
- Versatile Video Coding (VVC) is a video coding standard developed by the Joint Video Experts Team (JVET) of ITU-T VCEG and the ISO/IEC Moving Picture Experts Group (MPEG), published as ISO/IEC 23090-3:2021, Information technology - Coded representation of immersive media - Part 3: Versatile video coding, Feb. 2021.
- VVC is developed based on its predecessor HEVC (High Efficiency Video Coding) by adding more coding tools to improve coding efficiency and also to handle various types of video sources including 3-dimensional (3D) video signals.
- Fig. 1A illustrates an exemplary adaptive Inter/Intra video encoding system incorporating loop processing.
- Intra Prediction 110 the prediction data is derived based on previously coded video data in the current picture.
- Motion Estimation (ME) is performed at the encoder side and Motion Compensation (MC) is performed based on the result of ME to provide prediction data derived from other picture (s) and motion data.
- Switch 114 selects Intra Prediction 110 or Inter-Prediction 112 and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues.
- the prediction error is then processed by Transform (T) 118 followed by Quantization (Q) 120.
- the transformed and quantized residues are then coded by Entropy Encoder 122 to be included in a video bitstream corresponding to the compressed video data.
- the bitstream associated with the transform coefficients is then packed with side information such as motion and coding modes associated with Intra prediction and Inter prediction, and other information such as parameters associated with loop filters applied to underlying image area.
- The side information associated with Intra Prediction 110, Inter Prediction 112 and In-loop Filter 130 is provided to Entropy Encoder 122 as shown in Fig. 1A. When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well.
- the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues.
- the residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct video data.
- the reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.
- incoming video data undergoes a series of processing in the encoding system.
- the reconstructed video data from REC 128 may be subject to various impairments due to a series of processing.
- in-loop filter 130 is often applied to the reconstructed video data before the reconstructed video data are stored in the Reference Picture Buffer 134 in order to improve video quality.
- For example, deblocking filter (DF), Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) may be used.
- the loop filter information may need to be incorporated in the bitstream so that a decoder can properly recover the required information. Therefore, loop filter information is also provided to Entropy Encoder 122 for incorporation into the bitstream.
- Loop filter 130 is applied to the reconstructed video before the reconstructed samples are stored in the reference picture buffer 134.
- The system in Fig. 1A is intended to illustrate an exemplary structure of a typical video encoder. It may correspond to the High Efficiency Video Coding (HEVC) system, VP8, VP9, H.264 or VVC.
- The decoder can use similar or the same functional blocks as the encoder, except for Transform 118 and Quantization 120, since the decoder only needs Inverse Quantization 124 and Inverse Transform 126.
- the decoder uses an Entropy Decoder 140 to decode the video bitstream into quantized transform coefficients and needed coding information (e.g. ILPF information, Intra prediction information and Inter prediction information) .
- the Intra prediction 150 at the decoder side does not need to perform the mode search. Instead, the decoder only needs to generate Intra prediction according to Intra prediction information received from the Entropy Decoder 140.
- the decoder only needs to perform motion compensation (MC 152) according to Inter prediction information received from the Entropy Decoder 140 without the need for motion estimation.
- An input picture is partitioned into non-overlapped square block regions referred to as CTUs (Coding Tree Units), similar to HEVC.
- Each CTU can be partitioned into one or multiple smaller size coding units (CUs) .
- the resulting CU partitions can be in square or rectangular shapes.
- VVC divides a CTU into prediction units (PUs) as a unit to apply prediction processes, such as Inter prediction, Intra prediction, etc.
- the VVC standard incorporates various new coding tools to further improve the coding efficiency over the HEVC standard. Some new tools relevant to the present invention are reviewed as follows.
- a method and apparatus for adding cross-component model candidates to a history table of merge list based on similarity are disclosed.
- input data associated with a current block comprising a first-colour block and a second-colour block are received, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side.
- a merge list or a history table is derived. Whether to add a target CCM candidate to the merge list or the history table is determined based on one or more conditions, wherein said one or more conditions comprise one or more similarities calculated between the target CCM candidate and one or more member candidates respectively, and said one or more member candidates are in the merge list or the history table.
- The second-colour block is encoded or decoded using information comprising the merge list or the history table, wherein when the target CCM candidate is selected for the current block, a predictor for the second-colour block is generated by applying a target cross-component model associated with the target CCM candidate to the reconstructed first-colour block.
- said one or more similarities are calculated based on one or more model parameters. In one embodiment, part of said one or more model parameters are used for calculating said one or more similarities. In one embodiment, the target CCM candidate corresponds to a CCLM candidate with a scale and one or more offset parameters, and said one or more similarities are measured based on the scale or based on said one or more offset parameters. In one embodiment, the target CCM candidate corresponds to a CCCM candidate with c0 to c6 parameters, and said one or more similarities are measured based on only n parameters and n is less than 7.
- said one or more similarities are calculated based on one or more model errors.
- the target cross-component model associated with the CCM candidate is applied to neighbouring reconstructed first-colour samples of the current block to derive a target model error associated with the CCM candidate, and the target model error is compared with one or more member model errors associated with said one or more member candidates.
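The condition-based admission described above can be sketched as follows. This is an illustrative simplification, assuming a candidate is represented as a tuple of model parameters and that similarity is an L1 distance over those parameters; the function names, threshold and distance measure are assumptions, not the claimed method:

```python
def param_similarity(cand_a, cand_b):
    # L1 distance over a chosen subset of model parameters.
    # Hypothetical layout: a CCLM candidate is (scale, offset);
    # a CCCM candidate could use n of its c0..c6 coefficients instead.
    return sum(abs(pa - pb) for pa, pb in zip(cand_a, cand_b))

def maybe_add_candidate(merge_list, target, threshold):
    """Add `target` only if it is not too similar to any member candidate."""
    for member in merge_list:
        if param_similarity(target, member) <= threshold:
            return False          # considered redundant, not added
    merge_list.append(target)
    return True
```

A model-error based condition could be substituted by comparing template errors (computed over neighbouring reconstructed first-colour samples) instead of parameter distances.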
- a maximum number of to-be-added candidates associated with a specific coding mode in the merge list or the history table is constrained to be k, where k is a positive integer.
- the to-be-added candidates associated with the specific coding mode correspond to spatial adjacent candidates or the to-be-added candidates with the specific coding mode correspond to non-spatial adjacent candidates.
- the to-be-added candidates associated with the specific coding mode are from the history table.
- the to-be-added candidates associated with the specific coding mode have a cross-component prediction model derived from neighbouring reconstruction samples of the current block.
- the k depends on current block size, prediction mode of one or more neighbouring blocks, slice type, temporal identifier, maximum allowed number of said one or more member candidates, or a combination thereof.
- whether said one or more member candidates are allowed to be reordered or not depends on a model type or coding mode associated with said one or more member candidates. For example, when said one or more member candidates correspond to spatial adjacent candidates or non-spatial adjacent candidates, said one or more member candidates are allowed to be reordered. For another example, when said one or more member candidates are from the history table, said one or more member candidates are allowed to be reordered. For yet another example, when said one or more member candidates are associated with a cross-component prediction model derived from neighbouring reconstruction samples of the current block, said one or more member candidates are not allowed to be reordered.
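The per-mode constraint on to-be-added candidates can be illustrated with a small sketch. It assumes each candidate is tagged with its coding mode and that the cap k may differ per mode (the mode labels and data layout are hypothetical):

```python
from collections import Counter

def build_constrained_list(candidates, max_size, k_per_mode):
    """candidates: iterable of (mode, model) pairs in insertion order.
    At most k_per_mode[mode] candidates of each coding mode are admitted;
    modes without an entry are only limited by the overall list size."""
    out, counts = [], Counter()
    for mode, model in candidates:
        if len(out) >= max_size:
            break
        if counts[mode] >= k_per_mode.get(mode, max_size):
            continue              # per-mode cap k reached, skip candidate
        out.append((mode, model))
        counts[mode] += 1
    return out
```

In practice k could itself be chosen from the block size, neighbouring prediction modes, slice type or temporal identifier, as the embodiment above describes.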
- Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing.
- Fig. 1B illustrates a corresponding decoder for the encoder in Fig. 1A.
- Fig. 2 illustrates examples of a multi-type tree structure corresponding to vertical binary splitting (SPLIT_BT_VER) , horizontal binary splitting (SPLIT_BT_HOR) , vertical ternary splitting (SPLIT_TT_VER) , and horizontal ternary splitting (SPLIT_TT_HOR) .
- Fig. 3 shows the intra prediction modes as adopted by the VVC video coding standard.
- Fig. 4 shows an example of the location of the left and above samples and the sample of the current block involved in the LM_LA mode.
- Fig. 5 shows an example of classifying the neighbouring samples into two groups.
- Fig. 6A illustrates an example of the CCLM model.
- Fig. 6B illustrates an example of the effect of the slope adjustment parameter “u” for model update.
- Fig. 7 illustrates an example of spatial part of the convolutional filter.
- Fig. 8 illustrates an example of reference area with paddings used to derive the filter coefficients.
- Fig. 9 illustrates the 16 gradient patterns for Gradient Linear Model (GLM) .
- Fig. 10 illustrates the neighbouring blocks used for deriving spatial merge candidates for VVC.
- Fig. 11 illustrates the possible candidate pairs considered for redundancy check in VVC.
- Fig. 12 illustrates an example of temporal candidate derivation, where a scaled motion vector is derived according to POC (Picture Order Count) distances.
- Fig. 13 illustrates the position for the temporal candidate selected between candidates C 0 and C 1 .
- Fig. 14 illustrates an exemplary pattern of the non-adjacent spatial merge candidates.
- Fig. 15 illustrates examples of CCM information propagation, where the blocks with dash line (i.e., A, E, G) are coded in cross-component mode (e.g., CCLM, MMLM, GLM, CCCM) .
- Fig. 16 illustrates an example of neighbouring templates for calculating model error.
- Fig. 17 illustrates a flowchart of an exemplary video coding system that adds a cross-component model candidate to a history table of merge list based on similarity according to an embodiment of the present invention.
- a CTU is split into CUs by using a quaternary-tree (QT) structure denoted as coding tree to adapt to various local characteristics.
- the decision whether to code a picture area using inter-picture (temporal) or intra-picture (spatial) prediction is made at the leaf CU level.
- Each leaf CU can be further split into one, two or four PUs according to the PU splitting type. Inside one PU, the same prediction process is applied and the relevant information is transmitted to the decoder on a PU basis.
- After obtaining the residual block by applying the prediction process based on the PU splitting type, a leaf CU can be partitioned into transform units (TUs) according to another quaternary-tree structure similar to the coding tree for the CU.
- One key feature of the HEVC structure is that it has multiple partition conceptions including CU, PU, and TU.
- In VVC, a quadtree with nested multi-type tree using binary and ternary splits replaces the concepts of multiple partition unit types, i.e. it removes the separation of the CU, PU and TU concepts except as needed for CUs that have a size too large for the maximum transform length, and supports more flexibility for CU partition shapes.
- a CU can have either a square or rectangular shape.
- A coding tree unit (CTU) is first partitioned by a quaternary-tree (a.k.a. quadtree) structure. Then the quaternary-tree leaf nodes can be further partitioned by a multi-type tree structure, as shown in Fig. 2.
- The multi-type tree leaf nodes are called coding units (CUs), and unless the CU is too large for the maximum transform length, this segmentation is used for prediction and transform processing without any further partitioning. This means that, in most cases, the CU, PU and TU have the same block size in the quadtree with nested multi-type tree coding block structure. The exception occurs when the maximum supported transform length is smaller than the width or height of the colour component of the CU.
- the number of directional intra modes in VVC is extended from 33, as used in HEVC, to 65.
- the new directional modes not in HEVC are depicted as red dotted arrows in Fig. 3, and the planar and DC modes remain the same.
- These denser directional intra prediction modes apply for all block sizes and for both luma and chroma intra predictions.
- pred_C(i, j) represents the predicted chroma samples in a CU and rec_L′(i, j) represents the down-sampled reconstructed luma samples of the same CU.
- The CCLM parameters (α and β) are derived with at most four neighbouring chroma samples and their corresponding down-sampled luma samples. Suppose the current chroma block dimensions are W×H; then W' and H' are set as
- The four neighbouring luma samples at the selected positions are down-sampled and compared four times to find two larger values, x0A and x1A, and two smaller values, x0B and x1B.
- Their corresponding chroma sample values are denoted as y0A, y1A, y0B and y1B.
- Fig. 4 shows an example of the location of the left and above samples and the sample of the current block involved in the LM_LA mode.
- Fig. 4 shows the relative sample locations of an N×N chroma block 410, the corresponding 2N×2N luma block 420 and their neighbouring samples (shown as filled circles).
- The division operation to calculate parameter α is implemented with a look-up table, indexed by the diff value (the difference between the maximum and minimum values).
- Besides the LM_LA mode, two more LM modes, LM_A and LM_L, are also supported.
- In LM_A mode, only the above template is used to calculate the linear model coefficients. To get more samples, the above template is extended to (W+H) samples. In LM_L mode, only the left template is used to calculate the linear model coefficients. To get more samples, the left template is extended to (H+W) samples.
- In LM_LA mode, left and above templates are used to calculate the linear model coefficients.
- two types of down-sampling filter are applied to luma samples to achieve 2 to 1 down-sampling ratio in both horizontal and vertical directions.
- the selection of down-sampling filter is specified by a SPS level flag.
- The two down-sampling filters are as follows, corresponding to "type-0" and "type-2" content, respectively:
- rec_L′(i, j) = [rec_L(2i-1, 2j-1) + 2·rec_L(2i, 2j-1) + rec_L(2i+1, 2j-1) + rec_L(2i-1, 2j) + 2·rec_L(2i, 2j) + rec_L(2i+1, 2j) + 4] >> 3   (6)
- rec_L′(i, j) = [rec_L(2i, 2j-1) + rec_L(2i-1, 2j) + 4·rec_L(2i, 2j) + rec_L(2i+1, 2j) + rec_L(2i, 2j+1) + 4] >> 3   (7)
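A direct transcription of the two down-sampling filters, assuming the reconstructed luma array is indexed as rec[row][column] so that rec_L(x, y) maps to rec[y][x]:

```python
def downsample_type0(rec, i, j):
    """6-tap filter of equation (6), used for 'type-0' content."""
    return (rec[2*j - 1][2*i - 1] + 2 * rec[2*j - 1][2*i] + rec[2*j - 1][2*i + 1]
          + rec[2*j][2*i - 1]     + 2 * rec[2*j][2*i]     + rec[2*j][2*i + 1]
          + 4) >> 3

def downsample_type2(rec, i, j):
    """5-tap cross-shaped filter of equation (7), used for 'type-2' content."""
    return (rec[2*j - 1][2*i]
          + rec[2*j][2*i - 1] + 4 * rec[2*j][2*i] + rec[2*j][2*i + 1]
          + rec[2*j + 1][2*i]
          + 4) >> 3
```

Both filters are normalised by the >> 3 shift (division by 8) with a rounding offset of 4, so a constant input is reproduced unchanged.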
- This parameter computation is performed as part of the decoding process, and is not just an encoder search operation. As a result, no syntax is used to convey the α and β values to the decoder.
- For chroma intra mode coding, a total of 8 intra modes are allowed. Those modes include five traditional intra modes and three cross-component linear model modes (LM_LA, LM_A, and LM_L).
- Chroma mode coding directly depends on the intra prediction mode of the corresponding luma block. Since a separate block partitioning structure for luma and chroma components is enabled in I slices, one chroma block may correspond to multiple luma blocks. Therefore, for the chroma DM mode, the intra prediction mode of the corresponding luma block covering the centre position of the current chroma block is directly inherited.
- In the Multiple Model CCLM mode (MMLM), the neighbouring samples are classified into two groups (as illustrated in Fig. 5) and a linear model is derived for each group.
- The samples of the current luma block are also classified based on the same rule for the classification of neighbouring luma samples.
- Three MMLM model modes (MMLM_LA, MMLM_T, and MMLM_L) are allowed for choosing the neighbouring samples from left-side and above-side, above-side only, and left-side only, respectively.
- the MMLM uses two models according to the sample level of the neighbouring samples.
- CCLM uses a model with 2 parameters to map luma values to chroma values as shown in Fig. 6A.
- With slope adjustment, the mapping function is tilted or rotated around the point with luminance value y_r.
- Figs. 6A and 6B illustrate the process.
- Local Illumination Compensation (LIC) is a method for inter prediction using neighbouring samples of the current block and the reference block. It is based on a linear model using a scaling factor a and an offset b, which are derived by referring to the neighbouring samples of the current block and the reference block. Moreover, it is enabled or disabled adaptively for each CU.
- LIC is described in JVET-C1001, Joint Video Exploration Test Model 3, by the Joint Video Exploration Team (JVET).
- In the Convolutional Cross-Component Model (CCCM), a convolutional model is applied to improve the chroma prediction performance.
- The convolutional model has a 7-tap filter consisting of a 5-tap plus-sign-shaped spatial component, a nonlinear term and a bias term.
- the input to the spatial 5-tap component of the filter consists of a centre (C) luma sample which is collocated with the chroma sample to be predicted and its above/north (N) , below/south (S) , left/west (W) and right/east (E) neighbours as shown in Fig. 7.
- the bias term (denoted as B) represents a scalar offset between the input and output (similarly to the offset term in CCLM) and is set to the middle chroma value (512 for 10-bit content) .
- the filter coefficients c i are calculated by minimising MSE between predicted and reconstructed chroma samples in the reference area.
- Fig. 8 illustrates an example of the reference area, which consists of 6 lines of chroma samples above and left of the PU. The reference area extends one PU width to the right and one PU height below the PU boundaries, and is adjusted to include only available samples. The extensions to the area (indicated as "paddings") are needed to support the "side samples" of the plus-shaped spatial filter in Fig. 7 and are padded when in unavailable areas.
- the MSE minimization is performed by calculating autocorrelation matrix for the luma input and a cross-correlation vector between the luma input and chroma output.
- The autocorrelation matrix is LDL-decomposed and the final filter coefficients are calculated using back-substitution. The process roughly follows the calculation of the ALF filter coefficients in ECM; however, LDL decomposition was chosen instead of Cholesky decomposition to avoid using square root operations.
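The coefficient derivation can be sketched as follows. This illustration builds the autocorrelation matrix and cross-correlation vector and solves the normal equations by Gaussian elimination rather than the LDL decomposition the text describes; the exact form of the nonlinear term, (c*c + mid) >> bitdepth, is an assumption taken from common ECM descriptions:

```python
def cccm_features(c, n, s, e, w, bias=512, bitdepth=10):
    """7-entry input vector for one chroma sample: five spatial luma taps
    (centre, north, south, east, west), a nonlinear term and a bias term."""
    mid = 1 << (bitdepth - 1)
    return [c, n, s, e, w, (c * c + mid) >> bitdepth, bias]

def solve_cccm_coeffs(rows, targets):
    """MSE-minimising fit of the 7 filter coefficients over the reference
    area, via the normal equations A^T A x = A^T y."""
    m = len(rows[0])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    aty = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            if ata[col][col] == 0:
                continue
            f = ata[r][col] / ata[col][col]
            for j in range(col, m):
                ata[r][j] -= f * ata[col][j]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):
        if ata[r][r] == 0:
            continue
        coeffs[r] = (aty[r] - sum(ata[r][j] * coeffs[j]
                                  for j in range(r + 1, m))) / ata[r][r]
    return coeffs
```

The prediction for a chroma sample is then the dot product of its feature vector with the fitted coefficients.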
- Multi-model CCCM mode can be selected for PUs which have at least 128 reference samples available.
- the GLM utilizes luma sample gradients to derive the linear model. Specifically, when the GLM is applied, the input to the CCLM process, i.e., the down-sampled luma samples L, are replaced by luma sample gradients G. The other parts of the CCLM (e.g., parameter derivation, prediction sample linear transform) are kept unchanged.
- C = α·G + β
- When the CCLM mode is enabled for the current CU, two flags are signalled separately for the Cb and Cr components to indicate whether GLM is enabled for each component. If the GLM is enabled for one component, one syntax element is further signalled to select one of 16 gradient filters (910-940 in Fig. 9) for the gradient calculation.
- the GLM can be combined with the existing CCLM by signalling one extra flag in bitstream. When such combination is applied, the filter coefficients that are used to derive the input luma samples of the linear model are calculated as the combination of the selected gradient filter of the GLM and the down-sampling filter of the CCLM.
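Applying one gradient filter to the down-sampled luma samples might look like the sketch below. The 3x3 horizontal kernel shown is a plausible member of a gradient-filter set and is an assumption, not necessarily one of the 16 filters in Fig. 9:

```python
def apply_gradient_filter(rec, x, y, kernel):
    """Apply one 3x3 gradient filter to luma samples centred at (x, y);
    rec is indexed [row][column]. The result G replaces the down-sampled
    luma value L as the input to the CCLM parameter derivation."""
    return sum(kernel[dy + 1][dx + 1] * rec[y + dy][x + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

# Example horizontal-gradient kernel (Sobel-like, hypothetical coefficients).
GLM_HORIZONTAL = [[1, 0, -1],
                  [2, 0, -2],
                  [1, 0, -1]]
```

On a horizontal luminance ramp this kernel responds with a constant gradient value, which the GLM then maps to chroma via C = α·G + β.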
- The derivation of spatial merge candidates in VVC is the same as that in HEVC except that the positions of the first two merge candidates are swapped.
- A maximum of four merge candidates (B0, A0, B1 and A1) for the current CU 1010 are selected among candidates located in the positions depicted in Fig. 10.
- The order of derivation is B0, A0, B1, A1 and B2.
- Position B2 is considered only when one or more neighbouring CUs at positions B0, A0, B1, A1 are not available (e.g. belonging to another slice or tile) or are intra coded.
- a scaled motion vector is derived based on the co-located CU 1220 belonging to the collocated reference picture as shown in Fig. 12.
- the reference picture list and the reference index to be used for the derivation of the co-located CU is explicitly signalled in the slice header.
- The scaled motion vector 1230 for the temporal merge candidate is obtained as illustrated by the dotted line in Fig. 12.
- tb is defined to be the POC difference between the reference picture of the current picture and the current picture.
- td is defined to be the POC difference between the reference picture of the co-located picture and the co-located picture.
- the reference picture index of temporal merge candidate is set equal to zero.
- The position for the temporal candidate is selected between candidates C0 and C1, as depicted in Fig. 13. If the CU at position C0 is not available, is intra coded, or is outside of the current row of CTUs, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal merge candidate.
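The POC-distance scaling of the temporal candidate can be sketched as follows (floating-point division is used here for clarity; the standard uses clipped fixed-point arithmetic):

```python
def scale_temporal_mv(mv_col, poc_cur, poc_cur_ref, poc_col, poc_col_ref):
    """Scale the collocated motion vector by the ratio of POC distances.
    tb: current picture to its reference; td: co-located picture to its
    reference. mv_col is an (x, y) pair."""
    tb = poc_cur - poc_cur_ref
    td = poc_col - poc_col_ref
    if td == 0:
        return mv_col
    return tuple(round(v * tb / td) for v in mv_col)
```

For example, if the current POC distance is half the co-located one, the motion vector is halved.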
- Non-Adjacent Motion Vector Prediction (NAMVP)
- In JVET-L0399, a coding tool referred to as Non-Adjacent Motion Vector Prediction (NAMVP) was proposed (Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, 3-12 Oct. 2018, Document: JVET-L0399).
- the non-adjacent spatial merge candidates are inserted after the TMVP (i.e., the temporal MVP) in the regular merge candidate list.
- The pattern of non-adjacent spatial merge candidates is shown in Fig. 14, where each small square corresponds to a NAMVP candidate and the order of the candidates (as shown by the number inside the square) is related to the distance from the current block.
- The line buffer restriction is not applied. In other words, the NAMVP candidates far away from the current block may have to be stored, which may require a large buffer.
- a flag is signalled to indicate whether CCP mode (including the CCLM, CCCM, GLM and their variants) or non-CCP mode (conventional chroma intra prediction mode, fusion of chroma intra prediction mode) is used. If the CCP mode is selected, one more flag is signalled to indicate how to derive the CCP type and parameters, i.e., either from a CCP merge list or signalled/derived on-the-fly.
- a CCP merge candidate list is constructed from the spatial adjacent, temporal, spatial non-adjacent, history-based or shifted temporal candidates. After including these candidates, default models are further included to fill the remaining empty positions in the merge list. In order to remove redundant CCP models in the list, pruning operation is applied. After constructing the list, the CCP models in the list are reordered depending on the SAD costs, which are obtained using the neighbouring template of the current block. More details are described below.
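As a loose illustration of the list construction just described, the candidate groups are added in order with pruning, and default models fill the remaining slots (the candidate representation and pruning-by-equality are simplifying assumptions of mine):

```python
def build_ccp_merge_list(spatial_adj, temporal, spatial_non_adj, history,
                         shifted_temporal, default_models, max_size):
    """Illustrative CCP merge list: pruned insertion, then default filling."""
    merge_list = []
    for group in (spatial_adj, temporal, spatial_non_adj,
                  history, shifted_temporal):
        for cand in group:
            # pruning: skip candidates whose model already appears in the list
            if len(merge_list) < max_size and cand not in merge_list:
                merge_list.append(cand)
    # default models fill the remaining empty positions
    for cand in default_models:
        if len(merge_list) >= max_size:
            break
        if cand not in merge_list:
            merge_list.append(cand)
    return merge_list
```

The subsequent reordering by template SAD cost is shown separately below in the text.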
- the positions and inclusion order of the spatial adjacent and non-adjacent candidates are the same as those defined in ECM for regular inter merge prediction candidates.
- Temporal candidates are selected from the collocated picture.
- the position and inclusion order of the temporal candidates are the same as those defined in ECM for regular inter merge prediction candidates.
- the shifted temporal candidates are also selected from the collocated picture.
- the position of temporal candidates is shifted by a selected motion vector which is derived from motion vectors of neighbouring blocks.
- a history-based table is maintained to include the recently used CCP models, and the table is reset at the beginning of each CTU row. If the current list is not full after including spatial adjacent and non-adjacent candidates, the CCP models in the history-based table are added into the list.
- CCLM candidates with default scaling parameters are considered only when the list is not full after including the spatial adjacent, spatial non-adjacent, or history-based candidates. If the current list has no candidates with the single model CCLM mode, the default scaling parameters are {0, 1/8, -1/8, 2/8, -2/8, 3/8, -3/8, 4/8, -4/8, 5/8, -5/8, 6/8} . Otherwise, the default scaling parameters are {0, the scaling parameter of the first CCLM candidate + {1/8, -1/8, 2/8, -2/8, 3/8, -3/8, 4/8, -4/8, 5/8, -5/8, 6/8} } .
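A minimal sketch of this default-candidate filling rule, keeping the scaling parameters in units of 1/8 as integers (the function name and integer representation are my assumptions):

```python
BASE_DELTAS = [0, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6]  # units of 1/8

def default_cclm_scales(first_cclm_scale=None):
    """Default scaling parameters, per the rule described above."""
    if first_cclm_scale is None:
        # no single-model CCLM candidate in the list yet
        return list(BASE_DELTAS)
    # otherwise: 0, then the first CCLM candidate's scale plus each delta
    return [0] + [first_cclm_scale + d for d in BASE_DELTAS[1:]]
```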
- a flag is signalled to indicate whether the CCP merge mode is applied or not. If CCP merge mode is applied, an index is signalled to indicate which candidate model is used by the current block. In addition, CCP merge mode is not allowed for the current chroma coding block when the current CU is coded by intra sub-partitions (ISP) with single tree, or the current chroma coding block size is less than or equal to 16.
- the guided parameter set is used to refine the derived model parameters by a specified CCLM mode.
- the guided parameter set is explicitly signalled in the bitstream, after deriving the model parameters, the guided parameter set is added to the derived model parameters as the final model parameters.
- the guided parameter set contains at least one of a differential scaling parameter (dA) , a differential offset parameter (dB) , and a differential shift parameter (dS) .
- pred C (i, j) = ( ( ( α′ + dA) ⋅ rec L ′ (i, j) ) >> s) + β .
- pred C (i, j) = ( ( α′ ⋅ rec L ′ (i, j) ) >> s) + ( β + dB) .
- pred C (i, j) = ( ( α′ ⋅ rec L ′ (i, j) ) >> (s + dS) ) + β .
- pred C (i, j) = ( ( ( α′ + dA) ⋅ rec L ′ (i, j) ) >> s) + ( β + dB) .
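Restated as a single expression with dA, dB and dS defaulting to zero, the refinement variants above can be sketched as follows (the combined function is my simplification; the text lists the variants separately):

```python
def refined_cclm_pred(rec_l, alpha, beta, s, dA=0, dB=0, dS=0):
    """pred_C = (((alpha' + dA) * rec_L') >> (s + dS)) + (beta + dB)."""
    return (((alpha + dA) * rec_l) >> (s + dS)) + (beta + dB)
```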
- the guided parameter set can be signalled per colour component.
- one guided parameter set is signalled for Cb component, and another guided parameter set is signalled for Cr component.
- one guided parameter set can be signalled and shared among colour components.
- the signalled dA and dB can be a positive or negative value. When signalling dA, one bin is signalled to indicate the sign of dA. Similarly, when signalling dB, one bin is signalled to indicate the sign of dB.
- dB can be implicitly derived from the average value of neighbouring (e.g. L-shape) reconstructed samples.
- four neighbouring luma and chroma reconstructed samples are selected to derive the model parameters.
- suppose the average values of the neighbouring luma and chroma samples are lumaAvg and chromaAvg
- β is derived by β = chromaAvg − ( ( α′ ⋅ lumaAvg) >> s) . The average value of neighbouring luma samples (i.e., lumaAvg) can be calculated from all selected luma samples, the luma DC mode value of the current luma CB, or the average of the maximum and minimum luma samples.
- the average value of neighbouring chroma samples can be calculated from all selected chroma samples, the chroma DC mode value of the current chroma CB, or the average of the maximum and minimum chroma samples.
- the selected neighbouring luma reconstructed samples can be from the output of CCLM downsampling process.
- the shift parameter s can be a constant value (e.g., s can be 3, 4, 5, 6, 7, or 8) ; in this case, dS is equal to 0 and does not need to be signalled.
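Given a scaling parameter and the neighbouring averages, the offset derivation described above can be sketched as follows (assuming the same shift s as in the prediction formula; the function name is mine):

```python
def derive_offset(alpha, s, luma_avg, chroma_avg):
    # beta = chromaAvg - ((alpha * lumaAvg) >> s): the offset that maps the
    # average neighbouring luma level onto the average neighbouring chroma level
    return chroma_avg - ((alpha * luma_avg) >> s)
```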
- the guided parameter set can also be signalled per model.
- one guided parameter set is signalled for one model and another guided parameter set is signalled for another model.
- one guided parameter set is signalled and shared among linear models.
- only one guided parameter set is signalled for one selected model, and another model is not further refined by guided parameter set.
- the final scaling parameter of the current block is inherited from the neighbouring blocks and further refined by dA (e.g., dA derivation or signalling can be similar or the same as the method in the previous “Guided parameter set for refining the cross-component model parameters” ) .
- the offset parameter (e.g., β in CCLM) is derived based on the inherited scaling parameter and the average value of neighbouring luma and chroma samples of the current block. For example, if the final scaling parameter is inherited from a selected neighbouring block, and the inherited scaling parameter is α′ nei , then the final scaling parameter is ( α′ nei + dA) .
- the final scaling parameter is inherited from a historical list and further refined by dA.
- the historical list records the most recent j entries of final scaling parameters from previous CCLM-coded blocks.
- the scaling parameter is inherited from one selected entry of the historical list, α′ nei , and the final scaling parameter is ( α′ nei + dA) .
- the final scaling parameter is inherited from a historical list or the neighbouring blocks, but only the MSB (Most Significant Bit) part of the inherited scaling parameter is taken, and the LSB (Least Significant Bit) of the final scaling parameter is from dA.
- the final scaling parameter is inherited from a historical list or the neighbouring blocks, but does not further refine by dA.
- the offset can be further refined by dB.
- the final offset parameter is inherited from a selected neighbouring block, and the inherited offset parameter is β′ nei , then the final offset parameter is ( β′ nei + dB) .
- the final offset parameter is inherited from a historical list and further refined by dB.
- the historical list records the most recent j entries of final offset parameters from previous CCLM-coded blocks. Then, the final offset parameter is inherited from one selected entry of the historical list, β′ list , and the final offset parameter is ( β′ list + dB) .
- the filter coefficients (c i ) are inherited.
- the offset parameter (e.g., c 6 ⋅B or c 6 in CCCM) can be re-derived based on the inherited parameters and the average value of neighbouring corresponding position luma and chroma samples of the current block.
- only partial filter coefficients are inherited (e.g., only n out of 6 filter coefficients are inherited, where 1 ≤ n < 6) , and the remaining filter coefficients are re-derived using the neighbouring luma and chroma samples of the current block.
- the current block shall also inherit the GLM gradient pattern of the candidate and apply it to the current luma reconstructed samples.
- the classification threshold is also inherited to classify the neighbouring samples of the current block into multiple groups, and the inherited multiple cross-component model parameters are further assigned to each group.
- the classification threshold is the average value of the neighbouring reconstructed luma samples, and the inherited multiple cross-component model parameters are further assigned to each group.
- the offset parameter of each group is re-derived based on the inherited scaling parameter and the average value of neighbouring luma and chroma samples of each group of the current block.
- the offset parameter (e.g., c 6 ⁇ B or c 6 in CCCM) of each group is re-derived based on the inherited coefficient parameter and the neighbouring luma and chroma samples of each group of the current block.
- inheriting model parameters may depend on the colour component.
- Cb and Cr components may inherit model parameters or model derivation method from the same candidate or different candidates.
- only one of the colour components inherits model parameters, and the other colour component derives model parameters based on the inherited model derivation method (e.g., if the inherited candidate is coded by MMLM or CCCM, the current block also derives model parameters based on MMLM or CCCM using the current neighbouring reconstructed samples) .
- only one of colour components inherits model parameters, and the other colour component derives its model parameters using the current neighbouring reconstructed samples.
- Cb and Cr components can inherit model parameters or model derivation method from different candidates.
- the inherited model of Cr can depend on the inherited model of Cb.
- possible cases include but are not limited to (1) if the inherited model of Cb is CCCM, the inherited model of Cr shall be CCCM; (2) if the inherited model of Cb is CCLM, the inherited model of Cr shall be CCLM; (3) if the inherited model of Cb is MMLM, the inherited model of Cr shall be MMLM; (4) if the inherited model of Cb is CCLM, the inherited model of Cr shall be CCLM or MMLM; (5) if the inherited model of Cb is MMLM, the inherited model of Cr shall be CCLM or MMLM; (6) if the inherited model of Cb is GLM, the inherited model of Cr shall be GLM.
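One possible encoding of such a Cb/Cr dependency is a lookup table; the table below combines cases (1), (4), (5) and (6) above as an illustration only, since other embodiments impose stricter constraints:

```python
# Allowed inherited Cr model types given the inherited Cb model type;
# hypothetical table combining cases (1), (4), (5) and (6) from the text.
ALLOWED_CR = {"CCCM": {"CCCM"},
              "CCLM": {"CCLM", "MMLM"},
              "MMLM": {"CCLM", "MMLM"},
              "GLM":  {"GLM"}}

def cr_model_allowed(cb_model, cr_model):
    """Check whether the inherited Cr model is permitted for this Cb model."""
    return cr_model in ALLOWED_CR.get(cb_model, set())
```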
- the cross-component model (CCM) information of the current block is derived and stored for the later reconstruction process of neighbouring blocks that use inherited neighbouring model parameters.
- the CCM information mentioned in this disclosure includes, but is not limited to, prediction mode (e.g., CCLM, MMLM, CCCM) , GLM pattern index, model parameters, or classification threshold.
- the cross-component model parameters of the current block can be derived by using the current luma and chroma reconstruction or prediction samples. Later, if another block is predicted by using inherited neighbours model parameters, it can inherit the model parameters from the current block.
- the current block is coded by cross-component prediction
- the cross-component model parameters of the current block are re-derived by using the current luma and chroma reconstruction or prediction samples.
- the stored cross-component model can be CCCM, LM_LA (i.e., single model LM using both above and left neighbouring samples to derive model) , or MMLM_LA (multi-model LM using both above and left neighbouring samples to derive model) .
- the cross-component model parameters of the current block are derived by using the current luma and chroma reconstruction or prediction samples.
- the cross-component model parameters of the current block are re-derived by using the current luma and chroma reconstruction or prediction samples. Later, the re-derived model parameters are combined with the original cross-component models, which is used in reconstructing the current block.
- it can use the model combination methods mentioned in the section entitled “Candidate List Construction” , or the section entitled “Inheriting Multiple Cross-Component Models” .
- w is a weighting factor used to combine the re-derived and original model parameters.
- the weighting factor can be predefined or implicitly derived according to the neighbouring template cost.
- a flag can be signalled to indicate/select if the re-derived model is used. If the flag has a value equal to 0, the cross-component model used to encode the neighbour merge candidate is inherited. If the flag has a value equal to 1, the cross-component model re-derived based on the luma and chroma reconstruction or prediction samples of the neighbouring merge candidate is inherited.
- the current slice is a non-intra slice (e.g., P slice or B slice)
- a cross-component model of the current block is derived and stored for later reconstruction process of neighbouring blocks using inherited neighbours model parameter.
- the CCM information of the current inter-coded block is derived by copying the CCM information from its reference block that has CCM information in a reference picture, located by the motion information of the current inter-coded block. For example, as shown in Fig. 15, the block B in a P/B picture 1520 is inter-coded, then the CCM information of block B is obtained by copying CCM information from its referenced block A in an I picture 1510.
- the current block can also copy the CCM information from an intra-coded block in an P/B picture.
- the block D in a P/B picture 1530 is inter-coded, and the CCM information of block D is obtained by copying the CCM information from its referenced block E that is intra-coded in the P/B picture 1520.
- if the reference block in a reference picture is also inter-coded, the CCM information of the reference block is obtained by copying the CCM information from another reference block in another reference picture. For example, as shown in the Fig.
- the current block C in a current P/B picture 1530 is inter-coded and its referenced block B is also inter-coded; since the CCM information of block B was obtained by copying the CCM information from block A, the CCM information of block A is also propagated to the current block C.
- when the current block is inter-coded with bi-directional prediction, if one of its reference blocks is intra-coded and has CCM information, the CCM information of the current block is obtained by copying the CCM information from the intra-coded reference block in a reference picture. For example, suppose block F is inter-coded with bi-prediction and has reference blocks G and H. Block G is intra-coded and has CCM information.
- the CCM information of block F is obtained by copying the CCM information from the block G coded in CCM mode.
- the CCM information of the current block is the combination of the CCM models of its reference blocks (as the method mentioned in section entitled: Inheriting Multiple Cross-Component Models) .
- when deriving cross-component models for the current block by using the current luma and chroma reconstruction or prediction samples, if the current derived model error is greater than a threshold, the current derived model is discarded and not stored.
- the current luma reconstruction samples can be provided to the model and the distortion between the model output and the current chroma reconstruction samples can be calculated.
- the calculated distortion is then normalized by the current block size or the number of samples used in calculating the distortion. If the normalized distortion is greater than or equal to a threshold, the current derived model is discarded and not stored.
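The normalized-distortion gate described above can be sketched as follows (SAD distortion and the function name are my assumptions; the text only requires some distortion normalized by block size or sample count):

```python
def keep_derived_model(pred_chroma, rec_chroma, threshold):
    """Keep the model only if the normalized distortion is below threshold."""
    sad = sum(abs(p - r) for p, r in zip(pred_chroma, rec_chroma))
    # normalize by the number of samples used in the distortion calculation;
    # models at or above the threshold are discarded and not stored
    return sad / len(pred_chroma) < threshold
```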
- Whether to derive cross-component models for the current block or not can depend on the current block size or area. For example, for small blocks (e.g., block width/height less than or equal to a threshold, or block area less than or equal to a threshold) , it is not allowed to derive cross-component models. For another example, for large blocks (e.g., block width/height greater than or equal to a threshold, or block area greater than or equal to a threshold) , it is not allowed to derive cross-component models.
- whether to derive the cross-component models for a block at a neighbouring position of the current block or not can depend on the availability of the reconstruction samples of the current block.
- the availability of the neighbouring reconstruction samples can be defined by the availability of the reconstructed samples inside the k lines of neighbouring samples.
- the k can be defined by the IBC neighbouring search region, or the neighbouring buffer area of other intra coding tools (e.g., multi-reference line intra prediction, CCLM or CCCM) . If the block at a neighbouring position of the current block is outside the k lines of neighbouring samples of the current block, it will not derive cross-component models for the block at a neighbouring position of the current block.
- the candidate list is constructed by adding candidates in a pre-defined order until the maximum candidate number is reached.
- the candidates added may include all or some of the aforementioned candidates, but are not limited to the aforementioned candidates.
- the candidate list may include spatial neighbouring candidates, temporal neighbouring candidates, historical candidates, non-adjacent neighbouring candidates, single model candidates generated based on other inherited models or combined model (as mentioned later in the section entitled: Inheriting multiple cross-component models) .
- the candidate list can include the same candidates as the previous example, but the candidates are added into the list in a different order.
- the default candidates include, but are not limited to, the candidates described below.
- the average value of neighbouring luma samples can be calculated from all selected luma samples, the luma DC mode value of the current luma CB, or the average of the maximum and minimum luma samples.
- the average value of neighbouring chroma samples can be calculated from all selected chroma samples, the chroma DC mode value of the current chroma CB, or the average of the maximum and minimum chroma samples.
- the default candidates include, but are not limited to, the candidates described below.
- the default candidates are α ⋅ G + β , where G is the luma sample gradient instead of the down-sampled luma samples L.
- the 16 GLM filters described in the section, entitled Gradient Linear Model (GLM) are applied.
- the final scaling parameter α is from the set {0, 1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8} .
- the offset parameter β is 1 << (bit_depth − 1) , or is derived based on neighbouring luma and chroma samples.
- a default candidate can be an earlier candidate with a delta scaling parameter refinement.
- the scaling parameter of an earlier candidate is α
- the scaling parameter of a default candidate is ( α + Δ)
- Δ can be a value from the set {1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8}
- the offset parameter of a default candidate would be derived by ( α + Δ) and the average value of neighbouring luma and chroma samples of the current block.
- a default candidate can be a shortcut to indicate a cross-component mode (i.e., using the current neighbouring luma/chroma reconstruction samples to derive cross-component models) rather than inheriting parameters from neighbours.
- the default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, MMLM_A, single model CCCM, multiple models CCCM or cross-component model with a specified GLM pattern.
- a default candidate can be a cross-component mode (i.e., using the current neighbouring luma/chroma reconstruction samples to derive cross-component models) rather than inheriting parameters from neighbours, and also with a scaling parameter update ( Δ) .
- the scaling parameter of a default candidate is ( α + Δ) .
- the default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, or MMLM_A.
- Δ can be a value from the set {1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8} .
- the offset parameter of a default candidate will be derived by ( α + Δ) and the average value of neighbouring luma and chroma samples of the current block.
- the Δ can be different for each colour component.
- a default candidate can be an earlier candidate with partially selected model parameters. For example, suppose an earlier candidate has m parameters; it can choose k out of m parameters from the earlier candidate to be a default candidate, where 0 < k < m and m > 1.
- a default candidate can be the first model of an earlier MMLM candidate (i.e., the model used when the sample value is less than or equal to the classification threshold) .
- a default candidate can be the second model of an earlier MMLM candidate (i.e., the model used when the sample value is greater than the classification threshold) .
- a default candidate can be the combination of two models of an earlier MMLM candidate. For example, if the models of an earlier MMLM candidate are M 1 and M 2 , the model parameters of a default candidate can be p x = w ⋅ p 1, x + (1 − w) ⋅ p 2, x , where w is a weighting factor which can be predefined or implicitly derived by the neighbouring template cost, and p y, x is the x-th parameter of the y-th model.
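The parameter-wise combination of two models can be sketched as follows (the list-of-parameters representation and the weight name `w` are my assumptions):

```python
def combine_models(model1, model2, w):
    # p_x = w * p_{1,x} + (1 - w) * p_{2,x}, applied to each parameter index x
    return [w * p1 + (1 - w) * p2 for p1, p2 in zip(model1, model2)]
```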
- default candidates can be derived from reconstructed samples from non-adjacent neighbouring regions. Let the current block position be at (x, y) and the block size be w ⁇ h. If the reconstructed samples in the MxN region located at (x+dx, y+dy) are “available” , the default candidates can be derived using reconstructed luma and chroma samples in the region.
- MxN can be 8x8.
- MxN can be 16x8.
- MxN can be 16x16.
- MxN can be w ⁇ h.
- the meaning of “available” can be that the reconstructed sample inside the current block is available, or the reconstructed sample inside the k lines of neighbouring samples is available.
- the k can be defined according to the IBC neighbouring search region, or the neighbouring buffer area of other intra coding tools (e.g., multi-reference line intra prediction, CCLM or CCCM) .
- the default candidates can be derived using reconstructed samples in the MxN region located at (x mid +dx, y mid +dy) , if the reconstructed samples in the region are available.
- (x mid , y mid ) = (x + w/2, y + h/2) .
- default candidates derived from reconstructed samples from non-adjacent neighbouring regions can be any type of cross-component model or some particular types of cross-component model.
- the derived model can be CCLM, MMLM, CCCM, CCCM multi-models, or other cross-component models.
- the derived model is CCCM model.
- the derived model is CCLM model.
- the derived model is CCCM or CCCM multi-models.
- (dx, dy) can be ( Δx i ⋅ w, - Δy i ⋅ h) , (- Δx i ⋅ w, Δy i ⋅ h) , (- Δx i ⋅ w, - Δy i ⋅ h) , ( Δx i ⋅ w, 0) , (- Δx i ⋅ w, 0) , (0, Δy i ⋅ h) , (0, y mid - Δy i ⋅ h) , where Δx i and Δy i are positive integers.
- the current block position is at (x, y) and the block size is w ⁇ h.
- let Δx and Δy be two fixed positive numbers; (dx, dy) can be ( Δx i ⋅ Δx, - Δy i ⋅ Δy) , (- Δx i ⋅ Δx, + Δy i ⋅ Δy) , (- Δx i ⋅ Δx, - Δy i ⋅ Δy) , ( Δx i ⋅ Δx, 0) , (- Δx i ⋅ Δx, 0) , (0, Δy i ⋅ Δy) , (0, - Δy i ⋅ Δy) .
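As an illustration only, a generator of such displaced-region offsets in units of the block dimensions might look like this (the step multipliers and function name are hypothetical):

```python
def non_adjacent_offsets(w, h, steps=(1, 2)):
    """Enumerate displaced-region offsets (dx, dy) in units of w and h."""
    offsets = []
    for i in steps:
        for j in steps:
            # diagonal displacements above-right, below-left, above-left
            offsets += [(i * w, -j * h), (-i * w, j * h), (-i * w, -j * h)]
        # purely horizontal and purely vertical displacements
        offsets += [(i * w, 0), (-i * w, 0), (0, i * h), (0, -i * h)]
    return offsets
```

Each offset would then be checked for availability of the reconstructed samples before a default candidate is derived from that region, as described above.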
- candidates are added to the list according to a pre-defined order.
- the pre-defined order can be spatial adjacent candidates, temporal candidates, spatial non-adjacent candidates, historical candidates, and then default candidates.
- the candidate models of non-LM coded blocks are added into the list after candidate models of LM coded blocks are added.
- the candidate models of non-LM coded blocks are added to the list before default candidates are added.
- the candidate models of non-LM coded blocks have lower priority to be added to the list than candidate models for LM coded blocks.
- only the candidate with a certain prediction mode can be added to the list.
- only the candidates derived by CCLM or MMLM modes can be added to the list.
- only the candidates derived by single-model modes (e.g., CCLM, or CCCM with single model) can be added to the list.
- only the candidates derived by multi-model modes (e.g., MMLM, or CCCM with multi-model) can be added to the list.
- only the candidates derived by GLM modes can be added to the list.
- only the candidates derived by a specific mode (e.g., CCLM, MMLM, CCCM, CCCM with multi-model, or GLM) can be added to the list.
- when only the candidates with a certain prediction mode can be added to the list, during prediction mode signalling, it can signal the prediction mode first and then signal whether the proposed cross-component merge mode is used or not. If the proposed cross-component merge mode is used, the candidate index is then signalled.
- a non-CCLM coded intra prediction and a CCLM coded intra prediction are fused together to obtain the final intra prediction.
- the model parameters for obtaining the CCLM coded intra prediction are inherited and further refined.
- the fusion weight, the coding mode of non-CCLM coded intra prediction and the model parameters for obtaining the CCLM coded intra prediction are inherited and further refined.
- the coding mode of non-CCLM coded intra prediction is implicitly derived (e.g., derived as DM or planar mode) , and the fusion weight and the model parameters for obtaining the CCLM coded intra prediction are inherited and further refined.
- the non-CCLM coded intra prediction of the block/position coded by the chroma intra fusion mode can be implicitly derived (e.g., the non-CCLM coded intra prediction is DM or planar mode) , and the fusion weight and the model parameters for obtaining the CCLM coded intra prediction are inherited and further refined.
- the specific coding mode here includes, but is not limited to, the spatial adjacent/non-adjacent candidates or candidates from a history table, the cross-component prediction mode of a candidate (e.g., CCLM, MMLM, CCCM, CCCM with multi-model, or GLM) , the intra/inter prediction mode, the LM/non-LM prediction mode, whether the cross-component prediction model of the candidate is inherited from other blocks, or whether the cross-component prediction model of the candidate is derived from the neighbouring reconstruction samples of the current block.
- the maximum number of to-be-added spatial adjacent candidates in the candidate list is k 1 .
- it can constrain the maximum number of to-be-added spatial non-adjacent candidates in the candidate list to k 2 .
- it can constrain the maximum number of to-be-added candidates from a history table in the candidate list to k 3 .
- it can constrain the maximum number of to-be-added candidates with a specific coding mode in the list to k 4 .
- it can constrain the maximum number of to-be-added candidates that have the cross-component prediction model derived from the neighbouring reconstruction samples of the current block in the list to k 5 .
- each of these numbers is greater than or equal to 1, or depends on the current block size, the prediction mode of neighbouring blocks, the slice type, the temporal identifier, or the maximum allowed number of candidates of the candidate list.
- if the model parameters of a candidate are similar to those of the existing models, the model will not be added to the candidate list. In one embodiment, it can compare the similarity of ( α ⋅ lumaAvg + β ) or α among existing candidates to decide whether to add the model of a candidate or not.
- the model of the candidate is not added.
- the threshold can be adaptive based on coding information (e.g., the current block size or area) .
- when comparing the similarity, if a model from a candidate and an existing model both use CCCM, it can compare the similarity by checking the value of (c 0 C + c 1 N + c 2 S + c 3 E + c 4 W + c 5 P + c 6 B) to decide whether to include the model of a candidate or not.
- if a candidate position points to a CU which is the same as one of the existing candidates, the model of the candidate is not included.
- if the model of a candidate is similar to one of the existing candidate models, it can adjust the inherited model parameters so that the inherited model is different from the existing candidate models.
- the inherited scaling parameter can add a predefined offset (e.g., 1>>S or - (1>>S) , where S is the shift parameter) so that the inherited parameter is different from the existing candidate models.
- if a CCLM candidate has scale and offset parameters, it only compares whether the scale or offset parameters are the same as or similar to those of the existing candidates. If the scale or offset parameter is the same or similar, the model will not be added to the candidate list.
- if a CCCM candidate has c 0 to c 6 parameters, it can compare only n parameters (n < 7) with those of the existing candidates. If the n parameters are the same or similar, the model will not be added to the candidate list.
- it can apply a candidate model to the neighbouring reconstruction samples of the current block, and compare the difference with the corresponding results of the existing candidate models. For example, assume the applied result of the candidate model is P cand and the corresponding results of the existing models in the candidate list are P 0 to P K ; if the difference between P cand and any P i is less than or equal to a threshold, the model will not be added to the candidate list.
- for the neighbouring reconstruction samples, it can choose the neighbouring reconstruction sample with the maximal value, the neighbouring reconstruction sample with the minimal value, the mean/median/mode of the neighbouring reconstruction samples, the left-side neighbouring reconstruction samples, the above-side neighbouring reconstruction samples, or the above-left neighbouring reconstruction samples.
- the number of candidates with the same type is limited when adding the candidates to the list. For example, if the current list has k candidates with MMLM type, it is not allowed to further add candidates with MMLM type to the list. For another example, if the current list has k candidates with CCCM type, it is not allowed to further add candidates with CCCM type to the list. For another example, if the current list has k candidates with GLM type, it is not allowed to further add candidates with GLM type to the list.
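The per-type count limit described above can be sketched as follows (the tuple representation and function name are my assumptions):

```python
def add_with_type_limit(cand_list, cand, cand_type, k, max_size):
    """Append cand unless the list already holds k candidates of its type."""
    same_type = sum(1 for _, t in cand_list if t == cand_type)
    if len(cand_list) < max_size and same_type < k:
        cand_list.append((cand, cand_type))
        return True
    return False   # list full, or type limit k already reached
```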
- default candidates will not compare with the existing models in the candidate list and will be added to the candidate list.
- the candidates in the list can be reordered to reduce the syntax overhead when signalling the selected candidate index.
- the reordering rules can depend on the coding information of neighbouring blocks or the model error. For example, if neighbouring above or left blocks are coded by MMLM, the MMLM candidates in the list can be moved to the head of the current list. Similarly, if neighbouring above or left blocks are coded by single model LM or CCCM, the single model LM or CCCM candidates in the list can be moved to the head of the current list. Similarly, if GLM is used by neighbouring above or left blocks, the GLM related candidates in the list can be moved to the head of the current list.
- the reordering rule is based on the model error by applying the candidate model to the neighbouring templates of the current block, and then comparing the error with the reconstructed samples of the neighbouring template.
- the size of the above neighbouring template 1620 of the current block is w a × h a
- the size of the left neighbouring template 1630 of the current block 1610 is w b × h b .
- K models are in the current candidate list, and α k and β k are the final scale and offset parameters after inheriting the candidate k.
- the model error of candidate k corresponding to the above neighbouring template is:
- the model error of candidate k corresponding to the left neighbouring template is e k b = ∑ (i, j) |recC (i, j) - (α k ·recL′ (i, j) + β k ) | , where the sum is over the positions (i, j) inside the left neighbouring template, and the model error of candidate k is e k = e k a + e k b .
- model error list E = {e 0 , e 1 , e 2 , ..., e k , ..., e K } . Then, it can reorder the candidate index in the inherited candidate list by sorting the model error list in ascending order.
- if the candidate k uses CCCM prediction, the model errors are defined in the same way, with the prediction replaced by c0 k ·C + c1 k ·N + c2 k ·S + c3 k ·E + c4 k ·W + c5 k ·P + c6 k ·B.
- c0 k , c1 k , c2 k , c3 k , c4 k , c5 k , and c6 k are the final filtering coefficients after inheriting the candidate k.
- P and B are the nonlinear term and bias term.
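As a hedged illustration, the CCCM-style 7-tap prediction at one chroma position can be sketched as below. The cross-shaped taps (centre, north, south, east, west), the nonlinear term P and the bias term B follow the description above; the nonlinear-term formula and the fixed-point details are one common definition, not taken verbatim from this specification.

```python
def cccm_predict(luma, x, y, coeffs, bit_depth=10):
    """Cross-shaped convolutional cross-component prediction:
    c0*C + c1*N + c2*S + c3*E + c4*W + c5*P + c6*B."""
    c0, c1, c2, c3, c4, c5, c6 = coeffs
    C = luma[y][x]          # centre luma sample
    N = luma[y - 1][x]      # north neighbour
    S = luma[y + 1][x]      # south neighbour
    E = luma[y][x + 1]      # east neighbour
    W = luma[y][x - 1]      # west neighbour
    mid = 1 << (bit_depth - 1)
    P = (C * C + mid) >> bit_depth  # nonlinear term (one common definition)
    B = mid                         # bias term tied to the middle value
    return c0 * C + c1 * N + c2 * S + c3 * E + c4 * W + c5 * P + c6 * B

luma = [[100, 110, 120], [130, 140, 150], [160, 170, 180]]
# with coefficients (1, 0, 0, 0, 0, 0, 0) the predictor returns the centre sample
pred = cccm_predict(luma, 1, 1, (1, 0, 0, 0, 0, 0, 0))
```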
- not all positions inside the above and left neighbouring template are used in calculating model error. It can choose partial positions inside the above and left neighbouring template to calculate the model error. For example, it can define a first start position and a first subsampling interval depending on the width of the current block to partially select positions inside the above neighbouring template. Similarly, it can define a second start position and a second subsampling interval depending on the height of the current block to partially select positions inside the left neighbouring template.
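The error-based reordering with subsampled template positions can be sketched as follows for linear-model candidates. The start position and interval mechanism is as described above; the concrete sample values are placeholders.

```python
def template_error(alpha, beta, luma_tpl, chroma_tpl, start=0, step=1):
    """Sum of absolute differences between the candidate model applied to
    the template luma samples and the reconstructed template chroma,
    visiting only every `step`-th position from `start`."""
    return sum(abs(chroma_tpl[i] - (alpha * luma_tpl[i] + beta))
               for i in range(start, len(luma_tpl), step))

def reorder_by_model_error(cands, luma_tpl, chroma_tpl):
    """Sort inherited candidates by ascending template model error."""
    return sorted(cands, key=lambda c: template_error(
        c["alpha"], c["beta"], luma_tpl, chroma_tpl))

luma_tpl = [100, 120, 140, 160]
chroma_tpl = [50, 60, 70, 80]  # exactly 0.5 * luma
cands = [{"alpha": 1.0, "beta": 0.0}, {"alpha": 0.5, "beta": 0.0}]
best = reorder_by_model_error(cands, luma_tpl, chroma_tpl)[0]
# best is the alpha = 0.5 candidate, whose template error is zero
```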
- h a or h b can be a constant value (e.g., h a or h b can be 1, 2, 3, 4, 5, or 6) .
- h a or h b can be dependent on the block size. If the current block size is greater than or equal to a threshold, h a or h b is equal to a first value. Otherwise, h a or h b is equal to a second value.
- the candidates of different types are reordered separately before the candidates are added into the final candidate list.
- the candidates are added into a primary candidate list with a pre-defined size N 1 .
- the candidates in the primary list are reordered.
- the N 2 candidates with the smallest costs are then added into the final candidate list, where N 2 ≤ N 1 .
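The two-stage construction in the three bullets above can be sketched as a primary list of size N1 that is reordered by cost and then truncated to the N2 best entries. The cost function is left abstract; the values below are illustrative only.

```python
def build_final_list(all_cands, cost_fn, n1, n2):
    """Two-stage construction: fill a primary list of size N1, reorder it
    by cost, then keep only the N2 smallest-cost candidates (N2 <= N1)."""
    primary = all_cands[:n1]
    primary.sort(key=cost_fn)
    return primary[:n2]

cands = ["a", "b", "c", "d", "e"]
costs = {"a": 30, "b": 10, "c": 20, "d": 5, "e": 1}
final = build_final_list(cands, costs.get, n1=4, n2=2)
# candidate "e" never enters the primary list, despite its low cost
```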
- the candidates are categorized into different types based on the source of the candidates, including but not limited to the spatial neighbouring models, temporal neighbouring models, non-adjacent spatial neighbouring models, and the historical candidates.
- the candidates are categorized into different types based on the cross-component model mode.
- the types can be CCLM, MMLM, CCCM, and CCCM multi-model.
- the types can be GLM-non active or GLM active.
- the redundancy of the candidate can be further checked.
- a candidate is considered to be redundant if the template cost difference between it and its predecessor in the list is less than or equal to a threshold. If a candidate is considered redundant, it can be removed from the list, or it can be moved to the end of the list.
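A sketch of the predecessor-based redundancy check, assuming candidates already ordered by template cost. Whether the predecessor is taken from the original or the pruned list is a design choice; the original list is used here.

```python
def prune_redundant(cands, costs, threshold, move_to_end=False):
    """A candidate is redundant if its template cost exceeds its
    predecessor's by no more than `threshold`; redundant candidates are
    removed or, optionally, moved to the end of the list."""
    kept, redundant = [], []
    for i, cand in enumerate(cands):
        if i > 0 and costs[i] - costs[i - 1] <= threshold:
            redundant.append(cand)  # too close to the predecessor's cost
        else:
            kept.append(cand)
    return kept + redundant if move_to_end else kept

pruned = prune_redundant(["a", "b", "c"], [10, 10, 25], threshold=1)
# "b" costs the same as its predecessor "a" and is dropped
```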
- the candidates allowed or not allowed to do reordering can depend on the model type or coding mode.
- the model type or coding mode specified here includes, but is not limited to, the spatial adjacent/non-adjacent candidates or candidates from a history table, the intra/inter prediction mode, the LM/non-LM prediction mode, the cross-component prediction mode of a candidate (e.g., CCLM, MMLM, CCCM, CCCM with multi-model, or GLM) , whether the cross-component prediction model of the candidate is inherited from other blocks, or whether the cross-component prediction model of the candidate is derived from the neighbouring reconstruction samples of the current block.
- the spatial adjacent candidates of the current block in the candidate list are reordered.
- the spatial non-adjacent candidates of the current block in the candidate list are reordered.
- the candidates from history table in the candidate list are reordered.
- the spatial adjacent candidates and spatial non-adjacent candidates of the current block in the candidate list are reordered.
- the candidates with specific model type or coding mode are self-reordered and not reordered with other model types or coding modes.
- the candidates with the cross-component prediction model derived from the neighbouring reconstruction samples of the current block are not reordered.
- the position of the candidate in the list is not changed before and after the candidate list reordering.
- When adding a cross-component model to a history table, it may further check the similarity between the to-be-added model and the existing models in the history table. If the to-be-added model is similar to one of the existing models, the to-be-added model will not be added to the history table. In one embodiment, it may compare the similarity of (α·lumaAvg+β) or α among existing candidates to decide whether to add the to-be-added model or not. For example, if the (α·lumaAvg+β) or α of the to-be-added model is the same as one of the existing candidates, the to-be-added model is not added.
- if the difference of (α·lumaAvg+β) or α between the to-be-added model and one of the existing candidates is less than or equal to a threshold, the to-be-added model is not added.
- the threshold can be adaptive based on coding information (e.g., the current block size or area) .
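The history-table similarity gate above can be sketched as follows for the (α·lumaAvg+β) comparison. The tuple representation of a model and the default threshold are assumptions; as stated above, the threshold may adapt to coding information such as the block area.

```python
def maybe_add_to_history(table, alpha, beta, luma_avg, threshold=0):
    """Add a CCLM-style model (alpha, beta) to the history table only if
    alpha*luma_avg + beta differs from every existing entry by more than
    `threshold` (which may adapt to, e.g., the current block area)."""
    value = alpha * luma_avg + beta
    for a, b in table:
        if abs(a * luma_avg + b - value) <= threshold:
            return False  # a similar model is already in the table
    table.append((alpha, beta))
    return True

history = []
maybe_add_to_history(history, 0.5, 10, luma_avg=128)
added = maybe_add_to_history(history, 0.5, 10, luma_avg=128)
# added is False: the model evaluates to the same value as an existing entry
```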
- it may compare the similarity by checking the value of (c 0 C + c 1 N + c 2 S + c 3 E + c 4 W + c 5 P + c 6 B) to decide whether to add the to-be-added model or not. For example, if the value of the to-be-added model is the same as that of one of the existing candidates, the to-be-added model parameter is not added.
- it may adjust the inherited model parameters to make the to-be-added model different from the existing candidate models. For example, if the to-be-added scaling parameter is similar to that of one of the existing candidate models, a predefined offset (e.g., 1>>S or - (1>>S) , where S is the shift parameter) may be added to the to-be-added scaling parameter so that the to-be-added model is different from the existing candidate models.
- a CCLM candidate has scale and offset parameters, so it only compares whether the scale or offset parameters are the same as or similar to those of the existing candidates. If the scale or offset parameters are the same or similar, the to-be-added model will not be added to the history table.
- a CCCM candidate has c 0 to c 6 parameters, so it may only compare whether n parameters (n < 7) are the same as or similar to those of the existing candidates. If the compared parameters are the same or similar, the to-be-added model will not be added to the history table.
- a to-be-added model is applied to the neighbouring reconstruction samples of the current block, and the difference with the existing candidate models is compared. If the difference value is less than or equal to a threshold, the to-be-added model will not be added to the history table. For example, assume the applied result of the to-be-added model is r and the corresponding results of the existing models in the history table are r 0 to r M-1 ; if the difference between r and any r m is less than or equal to the threshold, the to-be-added model will not be added to the history table.
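This applied-result comparison can be sketched as below for linear models. The sum-of-absolute-differences metric and the tuple model representation are assumptions for illustration.

```python
def similar_by_applied_result(model, existing_models, neigh_luma, threshold):
    """Apply the to-be-added model and every existing model to the
    neighbouring reconstructed luma samples; the model counts as similar
    if the summed absolute difference to any existing model's result is
    at most `threshold`."""
    a, b = model
    result = [a * s + b for s in neigh_luma]
    for ea, eb in existing_models:
        diff = sum(abs(r - (ea * s + eb))
                   for r, s in zip(result, neigh_luma))
        if diff <= threshold:
            return True  # similar: would not be added to the history table
    return False

neigh = [100, 120, 140]
is_similar = similar_by_applied_result((0.5, 1), [(0.5, 0)], neigh, threshold=5)
# each position differs by exactly 1, so the total difference is 3 <= 5
```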
- for the neighbouring reconstruction samples, it may choose the neighbouring reconstruction sample with the maximal value, the neighbouring reconstruction sample with the minimal value, the mean/median/mode of the neighbouring reconstruction samples, the left-side neighbouring reconstruction samples, the above-side neighbouring reconstruction samples, or the above-left neighbouring reconstruction samples.
- the number of candidates having the same type is limited when adding the candidates to the history table. For example, if the current history table has k candidates with MMLM type, it is not allowed to further add candidates with MMLM type to the history table. For another example, if the current history table has k candidates with CCCM type, it is not allowed to further add candidates with CCCM type to the history table. For another example, if the current history table has k candidates with GLM type, it is not allowed to further add candidates with GLM type to the history table.
- constraints or rules to prevent adding a redundant candidate to a history table can share/be the same as those for preventing adding a redundant candidate into a candidate list (e.g., the constraints or rules mentioned in the section entitled “Remove or Modify Similar Neighbouring Model Parameters”) .
- the method and apparatus for adding cross-component model candidates to a history table of merge list based on similarity can be implemented in an encoder side or a decoder side.
- any of the proposed methods can be implemented in an Intra/Inter coding module (e.g. Intra Pred. 150/MC 152 in Fig. 1B) in a decoder or an Intra/Inter coding module in an encoder (e.g. Intra Pred. 110/Inter Pred. 112 in Fig. 1A) .
- Any of the proposed methods can also be implemented as a circuit coupled to the Intra/Inter coding module at the decoder or the encoder.
- the decoder or encoder may also use an additional processing unit to implement the required processing. While the Intra Pred. 110/Inter Pred. 112 in Fig. 1A and the Intra Pred. 150/MC 152 in Fig. 1B are shown as individual processing units, they may correspond to executable software or firmware codes stored on a media, such as hard disk or flash memory, for a CPU (Central Processing Unit) or programmable devices (e.g. DSP (Digital Signal Processor) or FPGA (Field Programmable Gate Array) ) .
- Fig. 17 illustrates a flowchart of an exemplary video coding system that adds a cross-component model candidate to a history table of merge list based on similarity according to an embodiment of the present invention.
- the steps shown in the flowchart may be implemented as program codes executable on one or more processors (e.g., one or more CPUs) at the encoder side.
- the steps shown in the flowchart may also be implemented based on hardware such as one or more electronic devices or processors arranged to perform the steps in the flowchart.
- input data associated with a current block comprising a first-colour block and a second-colour block are received in step 1710, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side.
- a merge list or a history table is derived in step 1720. Whether to add a target CCM (Cross Component Model) candidate to the merge list or the history table is determined based on one or more conditions in step 1730, wherein said one or more conditions comprise one or more similarities calculated between the target CCM candidate and one or more member candidates respectively, and said one or more member candidates are in the merge list or the history table.
- the second-colour block is encoded or decoded using information comprising the merge list or the history table in step 1740, wherein when the target CCM candidate is selected for the current block, a predictor for the second-colour block is generated by applying a target cross-component model associated with the target CCM candidate to the reconstructed first-colour block.
- Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both.
- an embodiment of the present invention can be one or more circuits integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein.
- An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
- the invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA) .
- These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
- the software code or firmware code may be developed in different programming languages and different formats or styles.
- the software code may also be compiled for different target platforms.
- different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Method and apparatus for adding a CCM candidate to a history table or merge list. According to the method, a merge list or a history table is derived. Whether to add a target CCM candidate to the merge list or the history table is determined based on one or more conditions, wherein said one or more conditions comprise one or more similarities calculated between the target CCM candidate and one or more member candidates respectively, and said one or more member candidates are in the merge list or the history table. The second-colour block is encoded or decoded using information comprising the merge list or the history table, wherein when the target CCM candidate is selected for the current block, a predictor for the second-colour block is generated by applying a target cross-component model associated with the target CCM candidate to the reconstructed first-colour block.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
The present invention is a non-Provisional Application of and claims priority to U.S. Provisional Patent Application No. 63/485,564, filed on February 17, 2023 and U.S. Provisional Patent Application No. 63/491,089, filed on March 20, 2023. The U.S. Provisional Patent Applications are hereby incorporated by reference in their entireties.
The present invention relates to video coding systems. In particular, the present invention relates to adding cross-component model candidates into a history table or merge list and constraints on the number of candidates in the history table or merge list in a video coding system.
Versatile video coding (VVC) is the latest international video coding standard developed by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) . The standard has been published as an ISO standard: ISO/IEC 23090-3:2021, Information technology - Coded representation of immersive media - Part 3: Versatile video coding, published Feb. 2021. VVC is developed based on its predecessor HEVC (High Efficiency Video Coding) by adding more coding tools to improve coding efficiency and also to handle various types of video sources including 3-dimensional (3D) video signals.
Fig. 1A illustrates an exemplary adaptive Inter/Intra video encoding system incorporating loop processing. For Intra Prediction 110, the prediction data is derived based on previously coded video data in the current picture. For Inter Prediction 112, Motion Estimation (ME) is performed at the encoder side and Motion Compensation (MC) is performed based on the result of ME to provide prediction data derived from other picture (s) and motion data. Switch 114 selects Intra Prediction 110 or Inter-Prediction 112 and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues. The prediction error is then processed by Transform (T) 118 followed by Quantization (Q) 120. The transformed and quantized residues are then coded by Entropy Encoder 122 to be included in a video bitstream corresponding to the compressed video data. The bitstream associated with the transform coefficients is then packed with side information such as motion and coding modes associated with Intra prediction and Inter
prediction, and other information such as parameters associated with loop filters applied to underlying image area. The side information associated with Intra Prediction 110, Inter prediction 112 and in-loop filter 130, are provided to Entropy Encoder 122 as shown in Fig. 1A. When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well. Consequently, the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues. The residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct video data. The reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.
As shown in Fig. 1A, incoming video data undergoes a series of processing in the encoding system. The reconstructed video data from REC 128 may be subject to various impairments due to a series of processing. Accordingly, in-loop filter 130 is often applied to the reconstructed video data before the reconstructed video data are stored in the Reference Picture Buffer 134 in order to improve video quality. For example, deblocking filter (DF) , Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) may be used. The loop filter information may need to be incorporated in the bitstream so that a decoder can properly recover the required information. Therefore, loop filter information is also provided to Entropy Encoder 122 for incorporation into the bitstream. In Fig. 1A, Loop filter 130 is applied to the reconstructed video before the reconstructed samples are stored in the reference picture buffer 134. The system in Fig. 1A is intended to illustrate an exemplary structure of a typical video encoder. It may correspond to the High Efficiency Video Coding (HEVC) system, VP8, VP9, H. 264 or VVC.
The decoder, as shown in Fig. 1B, can use similar functional blocks or a portion of the same functional blocks as the encoder except for Transform 118 and Quantization 120 since the decoder only needs Inverse Quantization 124 and Inverse Transform 126. Instead of Entropy Encoder 122, the decoder uses an Entropy Decoder 140 to decode the video bitstream into quantized transform coefficients and needed coding information (e.g. ILPF information, Intra prediction information and Inter prediction information) . The Intra prediction 150 at the decoder side does not need to perform the mode search. Instead, the decoder only needs to generate Intra prediction according to Intra prediction information received from the Entropy Decoder 140. Furthermore, for Inter prediction, the decoder only needs to perform motion compensation (MC 152) according to Inter prediction information received from the Entropy Decoder 140 without the need for motion estimation.
According to VVC, an input picture is partitioned into non-overlapped square block regions referred to as CTUs (Coding Tree Units) , similar to HEVC. Each CTU can be partitioned into one or multiple smaller size coding units (CUs) . The resulting CU partitions can be in square or rectangular shapes. Also, VVC divides a CTU into prediction units (PUs) as a unit to apply
prediction process, such as Inter prediction, Intra prediction, etc.
The VVC standard incorporates various new coding tools to further improve the coding efficiency over the HEVC standard. Some new tools relevant to the present invention are reviewed as follows.
In the present invention, methods and apparatus to improve the way to add a cross-component model candidate to a history table of merge list are disclosed. In addition, methods and apparatus to pose constraints on the maximum number of to-be-added candidates to the history table of merge list are disclosed.
BRIEF SUMMARY OF THE INVENTION
A method and apparatus for adding cross-component model candidates to a history table of merge list based on similarity are disclosed. According to the method, input data associated with a current block comprising a first-colour block and a second-colour block are received, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side. A merge list or a history table is derived. Whether to add a target CCM candidate to the merge list or the history table is determined based on one or more conditions, wherein said one or more conditions comprise one or more similarities calculated between the target CCM candidate and one or more member candidates respectively, and said one or more member candidates are in the merge list or the history table. The second-colour block is encoded or decoded using information comprising the merge list or the history table, wherein when the target CCM candidate is selected for the current block, a predictor for the second-colour block is generated by applying a target cross-component model associated with the target CCM candidate to reconstructed first-colour block.
In one embodiment, said one or more similarities are calculated based on one or more model parameters. In one embodiment, part of said one or more model parameters are used for calculating said one or more similarities. In one embodiment, the target CCM candidate corresponds to a CCLM candidate with a scale and one or more offset parameters, and said one or more similarities are measured based on the scale or based on said one or more offset parameters. In one embodiment, the target CCM candidate corresponds to a CCCM candidate with c0 to c6 parameters, and said one or more similarities are measured based on only n parameters and n is less than 7.
In one embodiment, said one or more similarities are calculated based on one or more model errors. In one embodiment, the target cross-component model associated with the CCM candidate is applied to neighbouring reconstructed first-colour samples of the current block
to derive a target model error associated with the CCM candidate, and the target model error is compared with one or more member model errors associated with said one or more member candidates.
In one embodiment, a maximum number of to-be-added candidates associated with a specific coding mode in the merge list or the history table is constrained to be k, and the k is a positive integer. For example, the to-be-added candidates associated with the specific coding mode correspond to spatial adjacent candidates or the to-be-added candidates with the specific coding mode correspond to non-spatial adjacent candidates. For another example, the to-be-added candidates associated with the specific coding mode are from the history table. For yet another example, the to-be-added candidates associated with the specific coding mode have a cross-component prediction model derived from neighbouring reconstruction samples of the current block. In one embodiment, the k depends on current block size, prediction mode of one or more neighbouring blocks, slice type, temporal identifier, maximum allowed number of said one or more member candidates, or a combination thereof.
In one embodiment, whether said one or more member candidates are allowed to be reordered or not depends on a model type or coding mode associated with said one or more member candidates. For example, when said one or more member candidates correspond to spatial adjacent candidates or said one or more member candidates correspond to non-spatial adjacent candidates, said one or more member candidates are allowed to be reordered. For another example, when said one or more member candidates correspond to spatial adjacent candidates or non-spatial adjacent candidates, said one or more member candidates are allowed to be reordered. For yet another example, when said one or more member candidates are from the history table, said one or more member candidates are allowed to be reordered. For yet another example, when said one or more member candidates are associated with a cross-component prediction model derived from neighbouring reconstruction samples of the current block, said one or more member candidates are not allowed to be reordered.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing.
Fig. 1B illustrates a corresponding decoder for the encoder in Fig. 1A.
Fig. 2 illustrates examples of a multi-type tree structure corresponding to vertical binary splitting (SPLIT_BT_VER) , horizontal binary splitting (SPLIT_BT_HOR) , vertical ternary splitting (SPLIT_TT_VER) , and horizontal ternary splitting (SPLIT_TT_HOR) .
Fig. 3 shows the intra prediction modes as adopted by the VVC video coding
standard.
Fig. 4 shows an example of the location of the left and above samples and the sample of the current block involved in the LM_LA mode.
Fig. 5 shows an example of classifying the neighbouring samples into two groups.
Fig. 6A illustrates an example of the CCLM model.
Fig. 6B illustrates an example of the effect of the slope adjustment parameter “u” for model update.
Fig. 7 illustrates an example of spatial part of the convolutional filter.
Fig. 8 illustrates an example of reference area with paddings used to derive the filter coefficients.
Fig. 9 illustrates the 16 gradient patterns for Gradient Linear Model (GLM) .
Fig. 10 illustrates the neighbouring blocks used for deriving spatial merge candidates for VVC.
Fig. 11 illustrates the possible candidate pairs considered for redundancy check in VVC.
Fig. 12 illustrates an example of temporal candidate derivation, where a scaled motion vector is derived according to POC (Picture Order Count) distances.
Fig. 13 illustrates the position for the temporal candidate selected between candidates C0 and C1.
Fig. 14 illustrates an exemplary pattern of the non-adjacent spatial merge candidates.
Fig. 15 illustrates examples of CCM information propagation, where the blocks with dash line (i.e., A, E, G) are coded in cross-component mode (e.g., CCLM, MMLM, GLM, CCCM) .
Fig. 16 illustrates an example of neighbouring templates for calculating model error.
Fig. 17 illustrates a flowchart of an exemplary video coding system that adds a cross-component model candidate to a history table of merge list based on similarity according to an embodiment of the present invention.
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the systems and methods of the present invention, as represented in the figures,
is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. References throughout this specification to “one embodiment, ” “an embodiment, ” or similar language mean that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, etc. In other instances, well-known structures, or operations are not shown or described in detail to avoid obscuring aspects of the invention. The illustrated embodiments of the invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of apparatus and methods that are consistent with the invention as claimed herein.
Partitioning of the CTUs Using a Tree Structure
In HEVC, a CTU is split into CUs by using a quaternary-tree (QT) structure denoted as coding tree to adapt to various local characteristics. The decision whether to code a picture area using inter-picture (temporal) or intra-picture (spatial) prediction is made at the leaf CU level. Each leaf CU can be further split into one, two or four PUs according to the PU splitting type. Inside one PU, the same prediction process is applied and the relevant information is transmitted to the decoder on a PU basis. After obtaining the residual block by applying the prediction process based on the PU splitting type, a leaf CU can be partitioned into transform units (TUs) according to another quaternary-tree structure similar to the coding tree for the CU. One key feature of the HEVC structure is that it has multiple partition conceptions including CU, PU, and TU.
In VVC, a quadtree with nested multi-type tree using binary and ternary splits segmentation structure replaces the concepts of multiple partition unit types, i.e. it removes the separation of the CU, PU and TU concepts except as needed for CUs that have a size too large for the maximum transform length, and supports more flexibility for CU partition shapes. In the coding tree structure, a CU can have either a square or rectangular shape. A coding tree unit (CTU) is first partitioned by a quaternary tree (a.k.a. quadtree) structure. Then the quaternary tree leaf nodes can be further partitioned by a multi-type tree structure. As shown in Fig. 2, there are four splitting types in multi-type tree structure, vertical binary splitting (SPLIT_BT_VER 210) ,
horizontal binary splitting (SPLIT_BT_HOR 220) , vertical ternary splitting (SPLIT_TT_VER 230) , and horizontal ternary splitting (SPLIT_TT_HOR 240) . The multi-type tree leaf nodes are called coding units (CUs) , and unless the CU is too large for the maximum transform length, this segmentation is used for prediction and transform processing without any further partitioning. This means that, in most cases, the CU, PU and TU have the same block size in the quadtree with nested multi-type tree coding block structure. The exception occurs when maximum supported transform length is smaller than the width or height of the colour component of the CU.
Intra Mode Coding with 67 Intra Prediction Modes
To capture the arbitrary edge directions presented in natural video, the number of directional intra modes in VVC is extended from 33, as used in HEVC, to 65. The new directional modes not in HEVC are depicted as red dotted arrows in Fig. 3, and the planar and DC modes remain the same. These denser directional intra prediction modes apply for all block sizes and for both luma and chroma intra predictions.
In VVC, several conventional angular intra prediction modes are adaptively replaced with wide-angle intra prediction modes for the non-square blocks.
Cross-Component Linear Model (CCLM) Prediction
To reduce the cross-component redundancy, a cross-component linear model (CCLM) prediction mode is used in the VVC, for which the chroma samples are predicted based on the reconstructed luma samples of the same CU by using a linear model as follows:
predC (i, j) =α·recL′ (i, j) + β, (1)
where predC (i, j) represents the predicted chroma samples in a CU and recL′ (i, j) represents the downsampled reconstructed luma samples of the same CU.
The CCLM parameters (α and β) are derived with at most four neighbouring chroma samples and their corresponding down-sampled luma samples. Suppose the current chroma block dimensions are W×H, then W’ and H’ are set as
– W’= W, H’= H when LM_LA mode is applied;
– W’=W + H when LM_A mode is applied;
– H’= H + W when LM_L mode is applied.
The above neighbouring positions are denoted as S [0, -1] …S [W’-1, -1] and the left neighbouring positions are denoted as S [-1, 0] …S [-1, H’-1] . Then the four samples are selected as
- S [W’/4, -1] , S [3 *W’/4, -1] , S [-1, H’/4] , S [-1, 3 *H’/4] when LM_LA mode is applied and both above and left neighbouring samples are available;
- S [W’/8, -1] , S [3 *W’/8, -1] , S [5 *W’/8, -1] , S [7 *W’/8, -1] when LM_A mode is applied or only the above neighbouring samples are available;
- S [-1, H’/8] , S [-1, 3 *H’/8] , S [-1, 5 *H’/8] , S [-1, 7 *H’/8] when LM_L mode is applied or only the left neighbouring samples are available.
The four neighbouring luma samples at the selected positions are down-sampled and compared four times to find the two larger values, x0A and x1A, and the two smaller values, x0B and x1B. Their corresponding chroma sample values are denoted as y0A, y1A, y0B and y1B. Then Xa, Xb, Ya and Yb are derived as:
Xa= (x0A + x1A +1) >>1;
Xb= (x0B + x1B +1) >>1;
Ya= (y0A + y1A +1) >>1;
Yb= (y0B + y1B +1) >>1 (2)
Finally, the linear model parameters α and β are obtained according to the following equations:
α= (Ya-Yb) / (Xa-Xb) (3)
β=Yb-α·Xb (4)
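A minimal sketch of the parameter derivation from the four selected sample pairs, using plain floating-point division instead of the look-up-table division an actual codec uses (function and variable names are illustrative):

```python
def derive_cclm_params(luma4, chroma4):
    # Sort the four down-sampled luma samples; the two largest form group A,
    # the two smallest form group B, averaged with rounding as in eq. (2).
    order = sorted(range(4), key=lambda i: luma4[i])
    xB = (luma4[order[0]] + luma4[order[1]] + 1) >> 1
    xA = (luma4[order[2]] + luma4[order[3]] + 1) >> 1
    yB = (chroma4[order[0]] + chroma4[order[1]] + 1) >> 1
    yA = (chroma4[order[2]] + chroma4[order[3]] + 1) >> 1
    # alpha = (Ya - Yb) / (Xa - Xb);  beta = Yb - alpha * Xb
    alpha = 0.0 if xA == xB else (yA - yB) / (xA - xB)
    beta = yB - alpha * xB
    return alpha, beta
```

The chroma prediction for each sample is then alpha * recL' + beta as in equation (1).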
Fig. 4 shows an example of the locations of the left and above samples and the samples of the current block involved in the LM_LA mode: the relative sample locations of an N × N chroma block 410, the corresponding 2N × 2N luma block 420 and their neighbouring samples (shown as filled circles) .
The division operation to calculate parameter α is implemented with a look-up table. To reduce the memory required for storing the table, the diff value (difference between maximum and minimum values) and the parameter α are expressed by an exponential notation. For example, diff is approximated with a 4-bit significant part and an exponent. Consequently, the table for 1/diff is reduced into 16 elements for 16 values of the significand as follows:
DivTable [] = {0, 7, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1, 1, 1, 0} (5)
This would have a benefit of both reducing the complexity of the calculation as well as the memory size required for storing the needed tables.
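The exponential notation can be sketched as decomposing diff into a 4-bit significand and an exponent, so that only the significand indexes the 16-entry table; this is a simplified illustration, not the exact VVC fixed-point procedure:

```python
def split_diff(diff):
    # Approximate diff as (significand << exponent), keeping a 4-bit
    # significand; only the significand would then be used to index the
    # 16-entry 1/diff table.
    exp = max(0, diff.bit_length() - 4)
    sig = diff >> exp
    return sig, exp
```

For example, diff = 100 is approximated as 12 << 3 = 96, so only 16 table entries are needed regardless of the magnitude of diff.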
Besides using the above template and left template together to calculate the linear model coefficients, the two templates can also be used alternatively in the other 2 LM modes, called LM_A and LM_L modes.
In LM_A mode, only the above template is used to calculate the linear model coefficients. To get more samples, the above template is extended to (W+H) samples. In LM_L mode, only the left template is used to calculate the linear model coefficients. To get more samples, the left template is extended to (H+W) samples.
In LM_LA mode, the left and above templates are used together to calculate the linear model coefficients.
To match the chroma sample locations for 4: 2: 0 video sequences, two types of down-sampling filter are applied to luma samples to achieve 2 to 1 down-sampling ratio in both horizontal and vertical directions. The selection of down-sampling filter is specified by a SPS level flag. The two down-sampling filters are as follows, which are corresponding to “type-0” and “type-2” content, respectively.
RecL′ (i, j) = [recL (2i-1, 2j-1) +2·recL (2i, 2j-1) +recL (2i+1, 2j-1) +
recL (2i-1, 2j) +2·recL (2i, 2j) +recL (2i+1, 2j) +4] >>3 (6)
RecL′ (i, j) = [recL (2i, 2j-1) +recL (2i-1, 2j) +4·recL (2i, 2j) +recL (2i+1, 2j) +
recL (2i, 2j+1) +4] >>3 (7)
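A sketch of the two filters, assuming the standard [1 2 1; 1 2 1] /8 (type-0) and [0 1 0; 1 4 1; 0 1 0] /8 (type-2) kernels, with the reconstruction indexed as rec[row][column] (names are illustrative):

```python
def downsample_type0(rec, i, j):
    # 6-tap filter: weights a 3x2 luma neighbourhood as [1 2 1; 1 2 1],
    # adds a rounding offset, then shifts by 3 (divide by 8).
    return (rec[2*j - 1][2*i - 1] + 2*rec[2*j - 1][2*i] + rec[2*j - 1][2*i + 1] +
            rec[2*j][2*i - 1] + 2*rec[2*j][2*i] + rec[2*j][2*i + 1] + 4) >> 3

def downsample_type2(rec, i, j):
    # 5-tap cross-shaped filter with weights [0 1 0; 1 4 1; 0 1 0].
    return (rec[2*j - 1][2*i] + rec[2*j][2*i - 1] + 4*rec[2*j][2*i] +
            rec[2*j][2*i + 1] + rec[2*j + 1][2*i] + 4) >> 3
```

Both filters leave a constant luma field unchanged, as expected for normalized kernels.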
Note that only one luma line (general line buffer in intra prediction) is used to make the down-sampled luma samples when the upper reference line is at the CTU boundary.
This parameter computation is performed as part of the decoding process, and is not just an encoder search operation. As a result, no syntax is used to convey the α and β values to the decoder.
For chroma intra mode coding, a total of 8 intra modes are allowed for chroma intra mode coding. Those modes include five traditional intra modes and three cross-component linear model modes (LM_LA, LM_A, and LM_L) . In addition, chroma mode coding directly depends on the intra prediction mode of the corresponding luma block. Since separate block partitioning structure for luma and chroma components is enabled in I slices, one chroma block may correspond to multiple luma blocks. Therefore, for Chroma DM mode, the intra prediction mode of the corresponding luma block covering the centre position of the current chroma block is directly inherited.
Multiple Model CCLM (MMLM)
In the JEM (J. Chen, E. Alshina, G.J. Sullivan, J. -R. Ohm, and J. Boyce, Algorithm Description of Joint Exploration Test Model 7, document JVET-G1001, ITU-T/ISO/IEC Joint
Video Exploration Team (JVET) , Jul. 2017) , multiple model CCLM mode (MMLM) is proposed for using two models for predicting the chroma samples from the luma samples for the whole CU. In MMLM, neighbouring luma samples and neighbouring chroma samples of the current block are classified into two groups, each group is used as a training set to derive a linear model (i.e., a particular α and β are derived for a particular group) . Furthermore, the samples of the current luma block are also classified based on the same rule for the classification of neighbouring luma samples. Three MMLM model modes (MMLM_LA, MMLM_T, and MMLM_L) are allowed for choosing the neighbouring samples from left-side and above-side, above-side only, and left-side only, respectively.
Fig. 5 shows an example of classifying the neighbouring samples into two groups. Threshold is calculated as the average value of the neighbouring reconstructed luma samples. A neighbouring sample with Rec′L [x, y] <= Threshold is classified into group 1; while a neighbouring sample with Rec′L [x, y] > Threshold is classified into group 2.
Accordingly, the MMLM uses two models according to the sample level of the neighbouring samples.
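The two-group classification can be sketched as follows; the function name is illustrative and integer averaging for the threshold is an assumption:

```python
def mmlm_classify(neigh_luma, neigh_chroma):
    # Threshold = average of the neighbouring reconstructed luma samples.
    thr = sum(neigh_luma) // len(neigh_luma)
    # Group 1: samples with luma <= threshold; group 2: luma > threshold.
    g1 = [(l, c) for l, c in zip(neigh_luma, neigh_chroma) if l <= thr]
    g2 = [(l, c) for l, c in zip(neigh_luma, neigh_chroma) if l > thr]
    return thr, g1, g2
```

Each returned group would then serve as the training set for its own linear model (its own α and β).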
Slope adjustment of CCLM
CCLM uses a model with 2 parameters to map luma values to chroma values as shown in Fig. 6A. The slope parameter “a” and the bias parameter “b” define the mapping as follows:
chromaVal = a *lumaVal + b
An adjustment “u” to the slope parameter is signalled to update the model to the following form, as shown in Fig. 6B:
chromaVal = a’*lumaVal + b’
where
a’= a + u,
b’= b -u *yr.
With this selection, the mapping function is tilted or rotated around the point with luminance value yr. The average of the reference luma samples used in the model creation is used as yr in order to provide a meaningful modification to the model. Figs. 6A and 6B illustrate the process.
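The update can be sketched as below; note that the point (yr, a·yr + b) is unchanged by the adjustment, which is what makes the modification a rotation around yr (names are illustrative):

```python
def adjust_cclm_model(a, b, u, y_r):
    # Tilt the luma-to-chroma mapping around luma value y_r:
    #   a' = a + u,  b' = b - u * y_r
    # so that a' * y_r + b' == a * y_r + b (the pivot point is fixed).
    return a + u, b - u * y_r
```

Only the signalled adjustment u is transmitted; the updated offset b' follows from the pivot constraint.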
Local illumination compensation (LIC)
Local Illumination Compensation (LIC) is a method for inter prediction that uses the neighbouring samples of the current block and the reference block. It is based on a linear model using a scaling factor a and an offset b, which are derived by referring to the neighbouring samples of the current block and the reference block. Moreover, LIC is enabled or disabled adaptively for each CU.
For more detail for LIC, it can refer to the document JVET-C1001 (Jianle Chen, et al., “Algorithm Description of Joint Exploration Test Model 3” , Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 3rd Meeting: Geneva, CH, 26 May –1 June 2016, Document: JVET-C1001) .
Convolutional cross-component model (CCCM)
In CCCM, a convolutional model is applied to improve the chroma prediction performance. The convolutional model has a 7-tap filter consisting of a 5-tap plus-sign-shape spatial component, a nonlinear term and a bias term. The input to the spatial 5-tap component of the filter consists of a centre (C) luma sample which is collocated with the chroma sample to be predicted and its above/north (N) , below/south (S) , left/west (W) and right/east (E) neighbours as shown in Fig. 7.
The nonlinear term (denoted as P) is represented as power of two of the centre luma sample C and scaled to the sample value range of the content:
P = (C*C + midVal) >> bitDepth.
For example, for 10-bit contents, the nonlinear term is calculated as:
P = (C*C + 512) >> 10
The bias term (denoted as B) represents a scalar offset between the input and output (similarly to the offset term in CCLM) and is set to the middle chroma value (512 for 10-bit content) .
Output of the filter is calculated as a convolution between the filter coefficients ci and the input values and clipped to the range of valid chroma samples:
predChromaVal = c0C + c1N + c2S + c3E + c4W + c5P + c6B
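A sketch of the CCCM prediction for one chroma sample, assuming 10-bit content; the function signature and names are illustrative:

```python
def cccm_predict(coeffs, C, N, S, E, W, bit_depth=10):
    # 7-tap convolutional filter: plus-shaped spatial taps, a nonlinear
    # term P, and a bias term B, combined with coefficients c0..c6.
    mid = 1 << (bit_depth - 1)
    P = (C * C + mid) >> bit_depth   # nonlinear term, scaled to sample range
    B = mid                          # bias term: middle chroma value
    c0, c1, c2, c3, c4, c5, c6 = coeffs
    val = c0*C + c1*N + c2*S + c3*E + c4*W + c5*P + c6*B
    # Clip to the range of valid chroma samples.
    return max(0, min((1 << bit_depth) - 1, int(val)))
```

With c0 = 1 and all other coefficients zero, the output is just the collocated luma sample (clipped), which is a useful sanity check.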
The filter coefficients ci are calculated by minimising the MSE between predicted and reconstructed chroma samples in the reference area. Fig. 8 illustrates an example of the reference area, which consists of 6 lines of chroma samples above and left of the PU. The reference area extends one PU width to the right and one PU height below the PU boundaries, and is adjusted to include only available samples. The extensions to the area (indicated as “paddings” ) are needed to support the “side samples” of the plus-shaped spatial filter in Fig. 7 and are padded when in unavailable areas.
The MSE minimization is performed by calculating autocorrelation matrix for the luma input and a cross-correlation vector between the luma input and chroma output. Autocorrelation matrix is LDL decomposed and the final filter coefficients are calculated using back-substitution. The process follows roughly the calculation of the ALF filter coefficients in ECM, however LDL decomposition was chosen instead of Cholesky decomposition to avoid using square root operations.
Also, similarly to CCLM, there is an option of using a single model or multi-model variant of CCCM. The multi-model variant uses two models, one model derived for samples above the average luma reference value and another model for the rest of the samples (following the spirit of the CCLM design) . Multi-model CCCM mode can be selected for PUs which have at least 128 reference samples available.
Gradient Linear Model (GLM)
Compared with the CCLM, instead of down-sampled luma values, the GLM utilizes luma sample gradients to derive the linear model. Specifically, when the GLM is applied, the input to the CCLM process, i.e., the down-sampled luma samples L, are replaced by luma sample gradients G. The other parts of the CCLM (e.g., parameter derivation, prediction sample linear transform) are kept unchanged.
C=α·G+β
For signalling, when the CCLM mode is enabled for the current CU, two flags are signalled separately for Cb and Cr components to indicate whether GLM is enabled for each component. If the GLM is enabled for one component, one syntax element is further signalled to select one of 16 gradient filters (910-940 in Fig. 9) for the gradient calculation. The GLM can be combined with the existing CCLM by signalling one extra flag in bitstream. When such combination is applied, the filter coefficients that are used to derive the input luma samples of the linear model are calculated as the combination of the selected gradient filter of the GLM and the down-sampling filter of the CCLM.
Spatial Candidate Derivation
The derivation of spatial merge candidates in VVC is the same as that in HEVC except that the positions of the first two merge candidates are swapped. A maximum of four merge candidates (B0, A0, B1 and A1) for the current CU 1010 are selected among candidates located in the positions depicted in Fig. 10. The order of derivation is B0, A0, B1, A1 and B2. Position B2 is considered only when one or more neighbouring CUs at positions B0, A0, B1, A1 are not available (e.g. belonging to another slice or tile) or are intra coded. After the candidate at position A1 is added, the addition of the remaining candidates is subject to a redundancy check which ensures that candidates with the same motion information are excluded from the list, so that coding efficiency is improved. To reduce computational complexity, not all possible candidate pairs are considered in the mentioned redundancy check. Instead, only the pairs linked with an arrow in Fig. 11 are considered, and a candidate is only added to the list if the corresponding candidate used for the redundancy check does not have the same motion information.
Temporal Candidates Derivation
In this step, only one candidate is added to the list. Particularly, in the derivation of this temporal merge candidate for a current CU 1210, a scaled motion vector is derived based on the co-located CU 1220 belonging to the collocated reference picture as shown in Fig. 12. The reference picture list and the reference index to be used for the derivation of the co-located CU is explicitly signalled in the slice header. The scaled motion vector 1230 for the temporal merge candidate is obtained as illustrated by the dotted line in Fig. 12, which is scaled from the motion vector 1240 of the co-located CU using the POC (Picture Order Count) distances, tb and td, where tb is defined to be the POC difference between the reference picture of the current picture and the current picture and td is defined to be the POC difference between the reference picture of the co-located picture and the co-located picture. The reference picture index of temporal merge candidate is set equal to zero.
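The POC-based scaling can be sketched as follows; actual codecs perform this in clipped fixed-point arithmetic, while this illustration uses plain integer division and assumes positive POC distances (names are illustrative):

```python
def scale_temporal_mv(mv_col, poc_cur, poc_cur_ref, poc_col, poc_col_ref):
    # tb: POC distance between the current picture and its reference picture.
    # td: POC distance between the collocated picture and its reference picture.
    tb = poc_cur - poc_cur_ref
    td = poc_col - poc_col_ref
    # Scale the collocated motion vector by tb/td, component-wise.
    return (mv_col[0] * tb // td, mv_col[1] * tb // td)
```

For example, a collocated MV over a POC distance of 4 is halved when the current picture's reference is only 2 POCs away.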
The position for the temporal candidate is selected between candidates C0 and C1, as depicted in Fig. 13. If CU at position C0 is not available, is intra coded, or is outside of the current row of CTUs, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal merge candidate.
Non-adjacent spatial candidate
During the development of the VVC standard, a coding tool referred to as Non-Adjacent Motion Vector Prediction (NAMVP) was proposed in JVET-L0399 (Yu Han, et al., “CE4.4.6: Improvement on Merge/Skip mode” , Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, 3–12 Oct. 2018, Document: JVET-L0399) . According to the NAMVP technique, the non-adjacent spatial merge candidates are inserted after the TMVP (i.e., the temporal MVP) in the regular merge candidate list. The pattern of spatial merge candidates is shown in Fig. 14. The distances between non-adjacent spatial candidates and the current coding block are based on the width and height of the current coding block. In Fig. 14, each small square corresponds to a NAMVP candidate and the order of the candidates (as shown by the number inside the square) is related to the distance. The line buffer restriction is not applied. In other words, the NAMVP candidates far away from a current block may have to be stored, which may require a large buffer.
Cross-Component Prediction (CCP) Merge Mode
For chroma coding, a flag is signalled to indicate whether CCP mode (including the CCLM, CCCM, GLM and their variants) or non-CCP mode (conventional chroma intra prediction mode, fusion of chroma intra prediction mode) is used. If the CCP mode is selected, one more flag is signalled to indicate how to derive the CCP type and parameters, i.e., either from a CCP merge list or signalled/derived on-the-fly. A CCP merge candidate list is constructed from the spatial adjacent, temporal, spatial non-adjacent, history-based or shifted temporal candidates. After including these candidates, default models are further included to fill the remaining empty positions in the merge list. In order to remove redundant CCP models in the list, pruning operation is applied. After constructing the list, the CCP models in the list are reordered depending on the SAD costs, which are obtained using the neighbouring template of the current block. More details are described below.
Spatial Adjacent and Non-Adjacent Candidates
The positions and inclusion order of the spatial adjacent and non-adjacent candidates are the same as those defined in ECM for regular inter merge prediction candidates.
Temporal and Shifted Temporal Candidates
Temporal candidates are selected from the collocated picture. The position and inclusion order of the temporal candidates are the same as those defined in ECM for regular inter merge prediction candidates. The shifted temporal candidates are also selected from the collocated picture. The position of temporal candidates is shifted by a selected motion vector which is derived from motion vectors of neighbouring blocks.
History-based Candidates
A history-based table is maintained to include the recently used CCP models, and the table is reset at the beginning of each CTU row. If the current list is not full after including spatial adjacent and non-adjacent candidates, the CCP models in the history-based table are added into the list.
Default Candidates
CCLM candidates with default scaling parameters are considered only when the list is not full after including the spatial adjacent, spatial non-adjacent, and history-based candidates. If the current list has no candidate with the single-model CCLM mode, the default scaling parameters are {0, 1/8, -1/8, 2/8, -2/8, 3/8, -3/8, 4/8, -4/8, 5/8, -5/8, 6/8} . Otherwise, the default scaling parameters are {0, the scaling parameter of the first CCLM candidate + {1/8, -1/8, 2/8, -2/8, 3/8, -3/8, 4/8, -4/8, 5/8, -5/8, 6/8} } .
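The default scaling-parameter fill can be sketched as follows; the function name is illustrative and the parameters are represented here as plain fractions of 8:

```python
def default_ccp_scaling_params(first_cclm_alpha=None):
    # Offsets applied either as absolute defaults or relative to the
    # scaling parameter of the first single-model CCLM candidate.
    deltas = [1/8, -1/8, 2/8, -2/8, 3/8, -3/8, 4/8, -4/8, 5/8, -5/8, 6/8]
    if first_cclm_alpha is None:
        # No single-model CCLM candidate in the list yet: absolute defaults.
        return [0.0] + deltas
    # Otherwise: 0 plus offsets around the first CCLM candidate's parameter.
    return [0.0] + [first_cclm_alpha + d for d in deltas]
```

Either branch yields 12 default scaling parameters used to fill the remaining empty positions in the merge list.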
A flag is signalled to indicate whether the CCP merge mode is applied or not. If CCP merge mode is applied, an index is signalled to indicate which candidate model is used by the current block. In addition, CCP merge mode is not allowed for the current chroma coding block when the current CU is coded by intra sub-partitions (ISP) with single tree, or the current chroma coding block size is less than or equal to 16.
In order to improve the performance of cross-component prediction, methods and apparatus to add a cross-component model candidate to a history table of a merge list based on similarity are disclosed.
Guided parameter set for refining the cross-component model parameters
According to this method, the guided parameter set is used to refine the model parameters derived by a specified CCLM mode. For example, the guided parameter set is explicitly signalled in the bitstream; after deriving the model parameters, the guided parameter set is added to the derived model parameters to form the final model parameters. The guided parameter set contains at least one of a differential scaling parameter (dA) , a differential offset parameter (dB) , and a differential shift parameter (dS) . For example, equation (1) can be rewritten as:
predC (i, j) = ( (α′·recL′ (i, j) ) >>s) + β,
and if dA is signalled, the final prediction is:
predC (i, j) = ( ( (α′+dA) ·recL′ (i, j) ) >>s) + β.
Similarly, if dB is signalled, then the final prediction is:
predC (i, j) = ( (α′·recL′ (i, j) ) >>s) + (β+dB) .
If dS is signalled, then the final prediction is:
predC (i, j) = ( (α′·recL′ (i, j) ) >> (s+dS) ) + β.
If dA and dB are signalled, then the final prediction is:
predC (i, j) = ( ( (α′+dA) ·recL′ (i, j) ) >>s) + (β+dB) .
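The signalling cases above collapse into one formula, sketched here with illustrative names for one sample:

```python
def refine_cclm_prediction(rec_luma, alpha, beta, s, dA=0, dB=0, dS=0):
    # Final prediction with the signalled guided parameter set applied:
    #   predC = (((alpha + dA) * recL') >> (s + dS)) + (beta + dB)
    # Any of dA, dB, dS may be zero when that differential is not signalled.
    return (((alpha + dA) * rec_luma) >> (s + dS)) + (beta + dB)
```

Setting all three differentials to zero recovers the unrefined prediction, matching the base equation.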
The guided parameter set can be signalled per colour component. For example, one guided parameter set is signalled for the Cb component, and another guided parameter set is signalled for the Cr component. Alternatively, one guided parameter set can be signalled and shared among colour components. The signalled dA and dB can be positive or negative values. When signalling dA, one bin is signalled to indicate the sign of dA. Similarly, when signalling dB, one bin is signalled to indicate the sign of dB.
For another embodiment, if dA is signalled, dB can be implicitly derived from the average value of the neighbouring (e.g. L-shape) reconstructed samples. For example, in VVC, four neighbouring luma and chroma reconstructed samples are selected to derive the model parameters. Suppose the average values of the neighbouring luma and chroma samples are lumaAvg and chromaAvg; then β is derived by β= chromaAvg- ( (α′·lumaAvg) >>s) . The average value of the neighbouring luma samples (i.e., lumaAvg) can be calculated from all selected luma samples, the luma DC mode value of the current luma CB, or the average of the maximum and minimum luma samples. Similarly, the average value of the neighbouring chroma samples (i.e., chromaAvg) can be calculated from all selected chroma samples, the chroma DC mode value of the current chroma CB, or the average of the maximum and minimum chroma samples. Note that for non-4: 4: 4 colour subsampling formats, the selected neighbouring luma reconstructed samples can be from the output of the CCLM downsampling process.
For another embodiment, the shift parameter s can be a constant value (e.g., s can be 3, 4, 5, 6, 7, or 8) ; in that case, dS is equal to 0 and need not be signalled.
For another embodiment, in MMLM, the guided parameter set can also be signalled per model. For example, one guided parameter set is signalled for one model and another guided parameter set is signalled for the other model. Alternatively, one guided parameter set is signalled and shared among the linear models. Or only one guided parameter set is signalled for one selected model, and the other model is not further refined by a guided parameter set.
Inheriting Neighbouring Model Parameters for Refining the Cross-Component Model Parameters
The final scaling parameter of the current block is inherited from the neighbouring blocks and further refined by dA (e.g., dA derivation or signalling can be similar or the same as the method in the previous “Guided parameter set for refining the cross-component model parameters” ) . Once the final scaling parameter is determined, the offset parameter (e.g., β in CCLM) is derived based on the inherited scaling parameter and the average value of neighbouring luma and chroma samples of the current block. For example, if the final scaling parameter is inherited from a selected neighbouring block, and the inherited scaling parameter is α′nei, then the
final scaling parameter is (α′nei+ dA) . For yet another embodiment, the final scaling parameter is inherited from a historical list and further refined by dA. For example, the historical list records the most recent j entries of final scaling parameters from previous CCLM-coded blocks. Then, the final scaling parameter is inherited from one selected entry of the historical list, α′list, and the final scaling parameter is (α′list+ dA) . For yet another embodiment, the final scaling parameter is inherited from a historical list or the neighbouring blocks, but only the MSB (Most Significant Bit) part of the inherited scaling parameter is taken, and the LSB (Least Significant Bit) part of the final scaling parameter is from dA. For yet another embodiment, the final scaling parameter is inherited from a historical list or the neighbouring blocks, but is not further refined by dA.
For yet another embodiment, after inheriting the model parameters, the offset can be further refined by dB. For example, if the final offset parameter is inherited from a selected neighbouring block, and the inherited offset parameter is β′nei, then the final offset parameter is (β′nei + dB) . For still another embodiment, the final offset parameter is inherited from a historical list and further refined by dB. For example, the historical list records the most recent j entries of final offset parameters from previous CCLM-coded blocks. Then, the final offset parameter is inherited from one selected entry of the historical list, β′list, and the final offset parameter is (β′list + dB) .
For yet another embodiment, if the inherited neighbouring block is coded with CCCM, the filter coefficients (ci) are inherited. The offset parameter (e.g., c6×B or c6 in CCCM) can be re-derived based on the inherited parameters and the average values of the neighbouring corresponding-position luma and chroma samples of the current block. For still another embodiment, only partial filter coefficients are inherited (e.g., only n out of 6 filter coefficients are inherited, where 1≤n<6) , and the remaining filter coefficients are re-derived using the neighbouring luma and chroma samples of the current block.
For still another embodiment, if the inherited candidate applies a GLM gradient pattern to its luma reconstructed samples, the current block shall also inherit the GLM gradient pattern of the candidate and apply it to the current luma reconstructed samples.
For still another embodiment, if the inherited neighbour block is coded with multiple cross-component models (e.g., MMLM, or CCCM with multi-model) , the classification threshold is also inherited to classify the neighbouring samples of the current block into multiple groups, and the inherited multiple cross-component model parameters are further assigned to each
group. For yet another embodiment, the classification threshold is the average value of the neighbouring reconstructed luma samples, and the inherited multiple cross-component model parameters are further assigned to each group. Similarly, once the final scaling parameter of each group is determined, the offset parameter of each group is re-derived based on the inherited scaling parameter and the average value of neighbouring luma and chroma samples of each group of the current block. For another example, if CCCM with multi-model is used, once the final coefficient parameter of each group is determined (e.g., c0 to c5 except for c6 in CCCM) , the offset parameter (e.g., c6×B or c6 in CCCM) of each group is re-derived based on the inherited coefficient parameter and the neighbouring luma and chroma samples of each group of the current block.
For still another embodiment, inheriting model parameters may depend on the colour component. For example, Cb and Cr components may inherit model parameters or model derivation method from the same candidate or different candidates. For yet another example, only one of colour components inherits model parameters, and the other colour component derives model parameters based on the inherited model derivation method (e.g., if the inherit candidate is coded by MMLM or CCCM, the current block also derives model parameters based on MMLM or CCCM using the current neighbouring reconstructed samples) . For still another example, only one of colour components inherits model parameters, and the other colour component derives its model parameters using the current neighbouring reconstructed samples.
For still another example, if Cb and Cr components can inherit model parameters or model derivation method from different candidates. The inherited model of Cr can depend on the inherited model of Cb. For example, possible cases include but not limited to (1) if the inherited model of Cb is CCCM, the inherited model of Cr shall be CCCM; (2) if the inherit model of Cb is CCLM, the inherit model of Cr shall be CCLM; (3) if the inherited model of Cb is MMLM, the inherited model of Cr shall be MMLM; (4) if the inherited model of Cb is CCLM, the inherited model of Cr shall be CCLM or MMLM; (5) if the inherited model of Cb is MMLM, the inherited model of Cr shall be CCLM or MMLM; (6) if the inherited model of Cb is GLM, the inherited model of Cr shall be GLM.
For yet another embodiment, after decoding a block, the cross-component model (CCM) information of the current block is derived and stored for the later reconstruction process of neighbouring blocks that use inherited neighbouring model parameters. The CCM information mentioned in this disclosure includes but is not limited to the prediction mode (e.g., CCLM, MMLM, CCCM) , GLM pattern index, model parameters, or classification threshold. For example, even if the current block is coded by inter prediction, the cross-component model parameters of the current block can be derived by using the current luma and chroma reconstruction or prediction samples. Later, if another block is predicted by using inherited neighbouring model parameters, it can inherit the model parameters from the current block. For another example, when the current block is coded by cross-component prediction, the cross-component model parameters of the current block are re-derived by using the current luma and chroma reconstruction or prediction samples. For another example, the stored cross-component model can be CCCM, LM_LA (i.e., single model LM using both above and left neighbouring samples to derive the model) , or MMLM_LA (multi-model LM using both above and left neighbouring samples to derive the model) . For still another example, even if the current block is coded by non-cross-component intra prediction (e.g., DC, planar, intra angular modes, MIP, or ISP) , the cross-component model parameters of the current block are derived by using the current luma and chroma reconstruction or prediction samples.
For another example, even if the current block is coded by cross-component prediction, the cross-component model parameters of the current block are re-derived by using the current luma and chroma reconstruction or prediction samples. Later, the re-derived model parameters are combined with the original cross-component models, which were used in reconstructing the current block. For combining with the original cross-component models, it can use the model combination methods mentioned in the section entitled “Candidate List Construction” , or the section entitled “Inheriting Multiple Cross-Component Models” . For example, if the original cross-component model is Morig and the re-derived cross-component model is Mderived, then the final cross-component model is α·Morig + (1-α) ·Mderived, where α is a weighting factor. The weighting factor can be predefined or implicitly derived according to the neighbouring template cost.
For another embodiment, when inheriting a cross-component model from a neighbouring merge candidate that was coded by a cross-component mode (e.g. CCLM, CCCM, …) , a flag can be signalled to indicate whether the re-derived model is used. If the flag has a value equal to 0, the cross-component model used to encode the neighbouring merge candidate is inherited. If the flag has a value equal to 1, the cross-component model re-derived based on the luma and chroma reconstruction or prediction samples of the neighbouring merge candidate is inherited.
For still another example, when the current slice is a non-intra slice (e.g., P slice or B slice) , a cross-component model of the current block is derived and stored for the later reconstruction process of neighbouring blocks that use inherited neighbouring model parameters. For still another embodiment, when the current block is inter-coded, the CCM information of the current inter-coded block is derived by copying the CCM information from its reference block that has CCM information in a reference picture, located by the motion information of the current inter-coded block. For example, as shown in Fig. 15, the block B in a P/B picture 1520 is inter-coded, so the CCM information of block B is obtained by copying the CCM information from its referenced block A in an I picture 1510. It should be noted that the current block can also copy the CCM information from an intra-coded block in a P/B picture. For example, as shown in Fig. 15, the block D in a P/B picture 1530 is inter-coded, so the CCM information of block D is obtained by copying the CCM information from its referenced block E that is intra-coded in the P/B picture 1520. For still another embodiment, if the reference block in a reference picture is also inter-coded, the CCM information of the reference block is obtained by copying the CCM information from another reference block in another reference picture. For example, as shown in Fig. 15, the current block C in a current P/B picture 1530 is inter-coded and its referenced block B is also inter-coded; since the CCM information of block B was obtained by copying the CCM information from block A, the CCM information of block A is also propagated to the current block C. For still another embodiment, when the current block is inter-coded with bi-directional prediction, if one of its reference blocks is intra-coded and has CCM information, the CCM information of the current block is obtained by copying the CCM information from its intra-coded reference block in a reference picture. For example, suppose block F is inter-coded with bi-prediction and has reference blocks G and H. Block G is intra-coded and has CCM information. The CCM information of block F is obtained by copying the CCM information from block G coded in CCM mode. For still another embodiment, when the current block is inter-coded with bi-directional prediction, the CCM information of the current block is the combination of the CCM models of its reference blocks (as in the method mentioned in the section entitled “Inheriting Multiple Cross-Component Models” ) .
In one embodiment, when deriving cross-component models for the current block by using the current luma and chroma reconstruction or prediction samples, if the current derived model error is greater than a threshold, the current derived model is discarded and not stored. For example, the current luma reconstruction samples can be provided to the model and the distortion between the model output and the current chroma reconstruction samples can be calculated. The calculated distortion is then normalized by the current block size or the number of samples used in calculating the distortion. If the normalized distortion is greater than or equal to a threshold, the current derived model is discarded and not stored.
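The discard rule above can be sketched as follows. The SAD distortion measure, the function name, and the flat-list sample representation are illustrative assumptions; the disclosure does not mandate a particular distortion metric.

```python
def keep_derived_model(pred_chroma, rec_chroma, threshold):
    """Decide whether a derived cross-component model is stored.

    pred_chroma: model output produced from the current luma samples
    rec_chroma:  co-located reconstructed chroma samples
    The distortion is normalized by the number of samples; if the
    normalized distortion reaches the threshold, the model is
    discarded (not stored).
    """
    assert len(pred_chroma) == len(rec_chroma) and pred_chroma
    distortion = sum(abs(p - r) for p, r in zip(pred_chroma, rec_chroma))
    normalized = distortion / len(pred_chroma)
    return normalized < threshold
```

For example, a model whose output deviates on average by less than the threshold is kept, otherwise it is discarded.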
Whether to derive cross-component models for the current block or not can depend on the current block size or area. For example, for small blocks (e.g., block width/height less than or equal to a threshold, or block area less than or equal to a threshold) , it is not allowed to derive cross-component models. For another example, for large blocks (e.g., block width/height greater than or equal to a threshold, or block area greater than or equal to a threshold) , it is not allowed to derive cross-component models.
In still another embodiment, whether to derive the cross-component models for a
block at a neighbouring position of the current block or not can depend on the availability of the reconstruction samples of the current block. For example, the availability of the neighbouring reconstruction samples can be defined by the availability of the reconstructed samples inside the k lines of neighbouring samples. The k can be defined by the IBC neighbouring search region, or the neighbouring buffer area of other intra coding tools (e.g., multi-reference line intra prediction, CCLM or CCCM) . If the block at a neighbouring position of the current block is outside the k lines of neighbouring samples of the current block, it will not derive cross-component models for the block at a neighbouring position of the current block.
Candidate List Construction
In one embodiment, the candidate list is constructed by adding candidates in a pre-defined order until the maximum candidate number is reached. The candidates added may include all or some of the aforementioned candidates, but are not limited to the aforementioned candidates. For example, the candidate list may include spatial neighbouring candidates, temporal neighbouring candidates, historical candidates, non-adjacent neighbouring candidates, single model candidates generated based on other inherited models, or combined models (as mentioned later in the section entitled: Inheriting Multiple Cross-Component Models) . For another example, the candidate list can include the same candidates as the previous example, but the candidates are added into the list in a different order.
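The order-driven construction described above can be sketched as follows; the group names, the simple duplicate check, and the function name are illustrative assumptions rather than a normative part of the disclosure.

```python
def build_candidate_list(candidate_groups, max_candidates):
    """Fill the merge list by visiting candidate groups in a
    pre-defined order (e.g., spatial, temporal, historical,
    non-adjacent, default) until the maximum number is reached."""
    merge_list = []
    for group in candidate_groups:
        for cand in group:
            if len(merge_list) == max_candidates:
                return merge_list
            if cand not in merge_list:  # simple redundancy check
                merge_list.append(cand)
    return merge_list
```

A different pre-defined order is obtained simply by permuting the groups passed in.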
In another embodiment, if all the pre-defined neighbouring and historical candidates are added but the maximum candidate number is not reached, some default candidates are added into the candidate list until the maximum candidate number is reached.
In one sub-embodiment, the default candidates include, but are not limited to, the candidates described below. The final scaling parameter α is from the set {0, +1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8} , and the offset parameter β=1/ (1<<bit_depth) or is derived based on neighbouring luma and chroma samples. For example, if the average values of neighbouring luma and chroma samples are lumaAvg and chromaAvg, then β is derived by β=chromaAvg-α·lumaAvg. The average value of neighbouring luma samples (lumaAvg) can be calculated from all selected luma samples, the luma DC mode value of the current luma CB, or the average of the maximum and minimum luma samples (e.g., (lumaMax+lumaMin) /2 or (lumaMax+lumaMin+1) >>1) . Similarly, the average value of neighbouring chroma samples (chromaAvg) can be calculated from all selected chroma samples, the chroma DC mode value of the current chroma CB, or the average of the maximum and minimum chroma samples (e.g., (chromaMax+chromaMin) /2 or (chromaMax+chromaMin+1) >>1) .
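The offset derivation β = chromaAvg − α·lumaAvg for a default candidate can be illustrated with a small floating-point sketch; an actual codec would use fixed-point arithmetic with α restricted to the set above, and the function name is an assumption for illustration.

```python
def derive_default_offset(alpha, neigh_luma, neigh_chroma):
    """Derive the offset parameter of a default candidate as
    beta = chromaAvg - alpha * lumaAvg, with the averages taken
    over the selected neighbouring samples."""
    luma_avg = sum(neigh_luma) / len(neigh_luma)
    chroma_avg = sum(neigh_chroma) / len(neigh_chroma)
    return chroma_avg - alpha * luma_avg
```

The same helper also serves the delta-refinement embodiments below by passing (α+Δα) as the scaling parameter.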
In another sub-embodiment, the default candidates include, but are not limited to, the candidates described below. The default candidates are α·G+β, where G is the luma sample gradient instead of the down-sampled luma samples L. The 16 GLM filters described in the section entitled Gradient Linear Model (GLM) are applied. The final scaling parameter α is from the set {0, +1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8} . The offset parameter β=1/ (1<<bit_depth) or is derived based on neighbouring luma and chroma samples.
In another embodiment, a default candidate can be an earlier candidate with a delta scaling parameter refinement. For example, if the scaling parameter of an earlier candidate is α, the scaling parameter of a default candidate is (α+Δα) , where Δα can be a value from the set {+1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8} . The offset parameter of a default candidate is then derived from (α+Δα) and the average values of neighbouring luma and chroma samples of the current block.
In another embodiment, a default candidate can be a shortcut to indicate a cross-component mode (i.e., using the current neighbouring luma/chroma reconstruction samples to derive cross-component models) rather than inheriting parameters from neighbours. For example, the default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, MMLM_A, single model CCCM, multiple models CCCM or cross-component model with a specified GLM pattern.
In another embodiment, a default candidate can be a cross-component mode (i.e., using the current neighbouring luma/chroma reconstruction samples to derive cross-component models) rather than inheriting parameters from neighbours, and also with a scaling parameter update (Δα) . Then, the scaling parameter of a default candidate is (α+Δα) . For example, the default candidate can be CCLM_LA, CCLM_L, CCLM_A, MMLM_LA, MMLM_L, or MMLM_A. For another example, Δα can be a value from the set {+1/8, -1/8, +2/8, -2/8, +3/8, -3/8, +4/8, -4/8} . The offset parameter of a default candidate is then derived from (α+Δα) and the average values of neighbouring luma and chroma samples of the current block. For still another example, the Δα can be different for each colour component.
In another embodiment, a default candidate can be an earlier candidate with partially selected model parameters. For example, suppose an earlier candidate has m parameters; k out of the m parameters from the earlier candidate can be chosen to form a default candidate, where 0 < k < m and m > 1.
In another embodiment, a default candidate can be the first model of an earlier MMLM candidate (i.e., the model used when the sample value is less than or equal to the classification threshold) . In still another embodiment, a default candidate can be the second model of an earlier MMLM candidate (i.e., the model used when the sample value is greater than the classification threshold) . In still another embodiment, a default candidate can be the combination of the two models of an earlier MMLM candidate. For example, if the models of an earlier MMLM candidate are {p_x^1} and {p_x^2} , the model parameters of a default candidate can be p_x=α·p_x^1+ (1-α) ·p_x^2, where α is a weighting factor which can be predefined or implicitly derived by the neighbouring template cost, and p_x^y is the x-th parameter of the y-th model.
In another embodiment, default candidates can be derived from reconstructed samples from non-adjacent neighbouring regions. Let the current block position be at (x, y) and the block size be w×h. If the reconstructed samples in the MxN region located at (x+dx, y+dy) are “available” , the default candidates can be derived using reconstructed luma and chroma samples in the region. For example, MxN can be 8x8. For another example, MxN can be 16x8. For another example, MxN can be 16x16. For another example, MxN can be w×h. The meaning of “available” can be that the reconstructed sample inside the current block is available, or the reconstructed sample inside the k lines of neighbouring samples is available. The k can be defined according to the IBC neighbouring search region, or the neighbouring buffer area of other intra coding tools (e.g., multi-reference line intra prediction, CCLM or CCCM) .
In another embodiment, let the current block position be at (x, y) and the block size be w×h. The default candidates can be derived using reconstructed samples in the MxN region located at (xmid+dx, ymid+dy) , if the reconstructed samples in the region are available, where (xmid, ymid) = (x + w/2, y + h/2) .
In another embodiment, default candidates derived from reconstructed samples from non-adjacent neighbouring regions can be any type of cross-component model or some particular types of cross-component model. For example, the derived model can be CCLM, MMLM, CCCM, CCCM multi-models, or other cross-component models. For another example, the derived model is CCCM model. For another example, the derived model is CCLM model. For another example, the derive model is CCCM or CCCM multi-models.
In another embodiment, assume two value sets αx and αy are defined as:
αx= {αx1, αx2, αx3, …, αxn} , αxi<αxj if i<j
αy= {αy1, αy2, αy3, …, αyn} , αyi<αyj if i<j
All values in αx and αy are positive numbers. (dx, dy) can be (αxi×w, -αyi×h) , (-αxi×w, αyi×h) , (-αxi×w, -αyi×h) , (αxi×w, 0) , (-αxi×w, 0) , (0, αyi×h) , (0, -αyi×h) .
In another embodiment, the current block position is at (x, y) and the block size is w×h. Let δx and δy be two fixed positive numbers. (dx, dy) can be (αxi×δx, -αyi×δy) , (-αxi×δx, +αyi×δy) , (-αxi×δx, -αyi×δy) , (αxi×δx, 0) , (-αxi×δx, 0) , (0, αyi×δy) , (0, -αyi×δy) .
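The (dx, dy) displacement patterns listed above can be enumerated as in the following sketch, here scaled by the block width and height as in the w×h variant; the function name and list representation are illustrative assumptions.

```python
def candidate_displacements(alphas_x, alphas_y, w, h):
    """Enumerate (dx, dy) displacements for non-adjacent default
    candidate regions, mirroring the seven patterns in the text,
    scaled by the block width w and height h."""
    out = []
    for ax, ay in zip(alphas_x, alphas_y):
        out += [(ax * w, -ay * h), (-ax * w, ay * h), (-ax * w, -ay * h),
                (ax * w, 0), (-ax * w, 0), (0, ay * h), (0, -ay * h)]
    return out
```

Replacing w and h by fixed positive numbers δx and δy gives the second variant described above.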
When constructing a candidate list, candidates are added to the list according to a pre-defined order. For example, the pre-defined order can be spatial adjacent candidates, temporal candidates, spatial non-adjacent candidates, historical candidates, and then default candidates. In one embodiment, if cross-component models are derived for non-LM coded blocks (e.g., as mentioned in the section entitled “Inherit Neighbouring Model Parameters for Refining the Cross-Component Model Parameters” ) , the candidate models of non-LM coded blocks are added into the list after candidate models of LM coded blocks are added. In another embodiment, if cross-component models are derived for non-LM coded blocks, the candidate models of non-LM coded blocks are added to the list before default candidates are added. In still another embodiment, if cross-component models are derived for non-LM coded blocks, the candidate models of non-LM coded blocks have lower priority to be added to the list than candidate models of LM coded blocks.
When constructing a candidate list, only candidates with a certain prediction mode can be added to the list. For example, only the candidates derived by CCLM or MMLM modes can be added to the list. For another example, only the candidates derived by single-model modes (e.g., CCLM, or CCCM with single model) can be added to the list. For another example, only the candidates derived by multi-model modes (e.g., MMLM, or CCCM with multi-model) can be added to the list. For another example, only the candidates derived by GLM modes can be added to the list. For another example, only the candidates derived by a specific mode (e.g., CCLM, MMLM, CCCM, CCCM with multi-model, or GLM) can be added to the list. In one embodiment, if only candidates with a certain prediction mode can be added to the list, during prediction mode signalling, the prediction mode can be signalled first, followed by a flag indicating whether the proposed cross-component merge mode is used or not. If the proposed cross-component merge mode is used, the candidate index is then signalled.
In the chroma intra fusion mode, a non-CCLM coded intra prediction and a CCLM coded intra prediction are fused together to obtain the final intra prediction. In one embodiment, when inheriting the cross-component model parameters from the block/position coded by a chroma intra fusion mode, the model parameters for obtaining the CCLM coded intra prediction are inherited and further refined. In another embodiment, the fusion weight, the coding mode of non-CCLM coded intra prediction and the model parameters for obtaining the CCLM coded intra
prediction are inherited and further refined. In still another embodiment, the coding mode of non-CCLM coded intra prediction is implicitly derived (e.g., derived as DM or planar mode) , and the fusion weight and the model parameters for obtaining the CCLM coded intra prediction are inherited and further refined. In still another embodiment, if the non-CCLM coded intra prediction of the block/position coded by the chroma intra fusion mode can be implicitly derived (e.g., the non-CCLM coded intra prediction is DM or planar mode) , the fusion weight and the model parameters for obtaining the CCLM coded intra prediction are inherited and further refined.
When constructing a candidate list or a history table, it can further constrain the maximum number of to-be-added candidates with a specific coding mode in the list or table. The specific coding mode here includes, but is not limited to, the spatial adjacent/non-adjacent candidates or candidates from a history table, the cross-component prediction mode of a candidate (e.g., CCLM, MMLM, CCCM, CCCM with multi-model, or GLM) , the intra/inter prediction mode, the LM/non-LM prediction mode, whether the cross-component prediction model of the candidate is inherited from other blocks, or whether the cross-component prediction model of the candidate is derived from the neighbouring reconstruction samples of the current block.
For example, it can constrain the maximum number of to-be-added spatial adjacent candidates in the candidate list to k1. For another example, it can constrain the maximum number of to-be-added spatial non-adjacent candidates in the candidate list to k2. For another example, it can constrain the maximum number of to-be-added candidates from a history table in the candidate list to k3. For another example, it can constrain the maximum number of to-be-added candidates with a specific coding mode in the list to k4. For another example, it can constrain the maximum number of to-be-added candidates whose cross-component prediction model is derived from the neighbouring reconstruction samples of the current block to k5. For the setting of k1 to k5, the number is greater than or equal to 1, or depends on the current block size, the prediction mode of neighbouring blocks, the slice type, the temporal identifier, or the maximum allowed number of candidates of the candidate list.
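The per-category constraints k1 to k5 can be sketched as follows; the category labels, cap dictionary, and function name are illustrative assumptions.

```python
def add_with_caps(merge_list, candidates, caps, max_total):
    """Add (candidate, category) pairs to the merge list while
    constraining how many candidates each category may contribute
    (e.g., spatial-adjacent k1, non-adjacent k2, history k3)."""
    counts = {}
    for cand, category in candidates:
        if len(merge_list) == max_total:
            break
        if counts.get(category, 0) < caps.get(category, max_total):
            merge_list.append(cand)
            counts[category] = counts.get(category, 0) + 1
    return merge_list
```

Categories absent from the cap dictionary are limited only by the overall list size, matching the case where a cap depends on coding conditions and may be disabled.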
Removing or Modifying Similar Neighbouring Model Parameters
When inheriting cross-component model parameters from other blocks, it can further check the similarity between the inherited model and the existing models in the candidate list, or those model candidates derived from the neighbouring reconstructed samples of the current block (e.g., models derived by CCLM, MMLM, or CCCM using the neighbouring reconstructed samples of the current block) . If the model of a candidate is similar to the existing models, the model will not be added to the candidate list. In one embodiment, it can compare the similarity of (α×lumaAvg+β) or α among existing candidates to decide whether to add the model of a candidate or not. For example, if the (α×lumaAvg+β) or α of the candidate is the same as that of one of the existing candidates, the model of the candidate is not added. For another
example, if the difference of (α×lumaAvg+β) or α between the candidate and one of the existing candidates is less than a threshold, the model of the candidate is not added. Besides, the threshold can be adaptive based on coding information (e.g., the current block size or area) . For another example, when comparing the similarity, if a model from a candidate and the existing model both use CCCM, it can compare similarity by checking the value of (c0C + c1N + c2S + c3E + c4W + c5P + c6B) to decide whether to include the model of a candidate or not. In another embodiment, if a candidate position points to a CU which is the same as that of one of the existing candidates, the model of the candidate is not included. In still another embodiment, if the model of a candidate is similar to one of the existing candidate models, it can adjust the inherited model parameters so that the inherited model is different from the existing candidate models. For example, if the inherited scaling parameter is similar to that of one of the existing candidate models, a predefined offset (e.g., 1>>S or - (1>>S) , where S is the shift parameter) can be added to the inherited scaling parameter so that the inherited parameter is different from the existing candidate models.
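A minimal sketch of the (α×lumaAvg+β) similarity test described above, assuming floating-point parameters and a threshold comparison; the function name and parameter representation are illustrative.

```python
def is_redundant(alpha, beta, luma_avg, existing, threshold):
    """Treat a candidate as redundant when the value of
    (alpha * lumaAvg + beta) is within `threshold` of the
    corresponding value of any existing candidate (a, b)."""
    value = alpha * luma_avg + beta
    return any(abs(value - (a * luma_avg + b)) < threshold
               for a, b in existing)
```

A redundant candidate is either skipped, or its scaling parameter is perturbed by a predefined offset before insertion, as described above.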
In another embodiment, it only compares partial model parameters with the existing models in the candidate list. For example, a CCLM candidate has scale and offset parameters; it only compares whether the scale or offset parameter is the same as or similar to that of existing candidates. If the scale or offset parameter is the same or similar, the model will not be added to the candidate list. For another example, a CCCM candidate has parameters c0 to c6; it can compare only whether n parameters (n < 7) are the same as or similar to those of existing candidates. If the compared parameters are the same or similar, the model will not be added to the candidate list.
In another embodiment, it can apply a candidate model to the neighbouring reconstruction samples of the current block, and compare the difference with the results of the existing candidate models. If the difference value is less than or equal to a threshold, the model will not be added to the candidate list. For example, assume the applied result is v and the corresponding results of the existing models in the candidate list are v_1 to v_K; if |v-v_i| is less than or equal to a threshold for any i, the model will not be added to the candidate list. For the selection of the neighbouring reconstruction samples, it can choose the neighbouring reconstruction sample with the maximal value, the neighbouring reconstruction sample with the minimal value, the mean/median/mode of the neighbouring reconstruction samples, the left-side neighbouring reconstruction samples, the above-side neighbouring reconstruction samples, or the above-left neighbouring reconstruction samples.
In another embodiment, the number of candidates with the same type (e.g., MMLM, CCCM, or GLM) is limited when adding the candidates to the list. For example, if the current list has k candidates with MMLM type, it is not allowed to further add candidates with
MMLM type to the list. For another example, if the current list has k candidates with CCCM type, it is not allowed to further add candidates with CCCM type to the list. For another example, if the current list has k candidates with GLM type, it is not allowed to further add candidates with GLM type to the list.
In another embodiment, default candidates will not compare with the existing models in the candidate list and will be added to the candidate list.
Reordering the Candidates in the List
The candidates in the list can be reordered to reduce the syntax overhead when signalling the selected candidate index. The reordering rules can depend on the coding information of neighbouring blocks or the model error. For example, if neighbouring above or left blocks are coded by MMLM, the MMLM candidates in the list can be moved to the head of the current list. Similarly, if neighbouring above or left blocks are coded by single model LM or CCCM, the single model LM or CCCM candidates in the list can be moved to the head of the current list. Similarly, if GLM is used by neighbouring above or left blocks, the GLM related candidates in the list can be moved to the head of the current list.
In still another embodiment, the reordering rule is based on the model error by applying the candidate model to the neighbouring templates of the current block, and then comparing the error with the reconstructed samples of the neighbouring template. For example, as shown in Fig. 16, the size of the above neighbouring template 1620 of the current block is wa×ha, and the size of left neighbouring template 1630 of the current block 1610 is wb×hb. Suppose K models are in the current candidate list, and αk and βk are the final scale and offset parameters after inheriting the candidate k. The model error of candidate k corresponding to the above neighbouring template is:
e_k^A=∑_{i, j} |α_k·recL′ (i, j) +β_k-recC (i, j) |, where recL′ (i, j) and recC (i, j) are the reconstructed samples of luma (e.g., after the downsampling process or after applying the GLM pattern) and the reconstructed samples of chroma at position (i, j) in the above template, and 0≤i<wa and 0≤j<ha.
Similarly, the model error of candidate k by the left neighbouring template is:
e_k^L=∑_{m, n} |α_k·recL′ (m, n) +β_k-recC (m, n) |, where recL′ (m, n) and recC (m, n) are the reconstructed samples of luma (e.g., after applying the downsampling process or the GLM pattern) and the reconstructed samples of chroma at position (m, n) in the left template, and 0≤m<wb and 0≤n<hb.
Then the model error of candidate k is e_k=e_k^A+e_k^L.
After calculating the model error for all candidates, a model error list E= {e_0, e_1, e_2, …, e_k, …, e_{K-1}} is obtained. Then, the candidate indices in the inherited candidate list can be reordered by sorting the model error list in ascending order.
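The error-based reordering can be sketched as follows for single-model (α, β) candidates; the template samples are given as flat lists concatenating the above and left templates, and SAD is used as the error measure, both of which are illustrative assumptions.

```python
def reorder_by_template_error(candidates, luma_tpl, chroma_tpl):
    """Reorder inherited (alpha, beta) candidates by the SAD between
    the model output on the neighbouring template luma samples and
    the reconstructed chroma template samples (ascending error)."""
    def model_error(cand):
        alpha, beta = cand
        return sum(abs(alpha * l + beta - c)
                   for l, c in zip(luma_tpl, chroma_tpl))
    return sorted(candidates, key=model_error)
```

A CCCM candidate would replace the linear model evaluation by the seven-tap filter output, as described in the corresponding embodiment.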
In still another embodiment, if the candidate k uses CCCM prediction, the predicted chroma values used in the model error calculation are defined as c0_k·C+c1_k·N+c2_k·S+c3_k·E+c4_k·W+c5_k·P+c6_k·B, where c0_k, c1_k, c2_k, c3_k, c4_k, c5_k, and c6_k are the final filtering coefficients after inheriting the candidate k, and P and B are the nonlinear term and the bias term.
In still another embodiment, if the above neighbouring template is not available, the model error corresponding to the above neighbouring template is set to 0. Similarly, if the left neighbouring template is not available, the model error corresponding to the left neighbouring template is set to 0. If both templates are not available, the candidate index reordering method using model error is not applied.
In still another embodiment, not all positions inside the above and left neighbouring template are used in calculating model error. It can choose partial positions inside the above and left neighbouring template to calculate the model error. For example, it can define a first start position and a first subsampling interval depending on the width of the current block to partially select positions inside the above neighbouring template. Similarly, it can define a second start position and a second subsampling interval depending on the height of the current block to partially select positions inside the left neighbouring template. For another example, ha or hb can be a constant value (e.g., ha or hb can be 1, 2, 3, 4, 5, or 6) . For another example, ha or hb can be
dependent on the block size. If the current block size is greater than or equal to a threshold, ha or hb is equal to a first value. Otherwise, ha or hb is equal to a second value.
In still another embodiment, the candidates of different types are reordered separately before the candidates are added into the final candidate list. For each type of the candidates, the candidates are added into a primary candidate list with a pre-defined size N1. The candidates in the primary list are reordered. The candidates (N2) with the smallest costs are then added into the final candidate list, where N2≤N1. In another embodiment, the candidates are categorized into different types based on the source of the candidates, including but not limited to the spatial neighbouring models, temporal neighbouring models, non-adjacent spatial neighbouring models, and the historical candidates. In another embodiment, the candidates are categorized into different types based on the cross-component model mode. For example, the types can be CCLM, MMLM, CCCM, and CCCM multi-model. For another example, the types can be GLM-non active or GLM active.
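The two-stage, per-type reordering can be sketched as follows, assuming template costs are already available for each candidate; the dictionary-based grouping and function name are illustrative assumptions.

```python
def two_stage_reorder(typed_candidates, costs, n1, n2):
    """Reorder each candidate type separately: fill a primary list
    of size N1 per type, sort it by template cost, and forward only
    the N2 best candidates of each type to the final list (N2 <= N1).

    typed_candidates: mapping from type (e.g., 'CCLM', 'MMLM') to the
    candidates of that type, in their original order.
    """
    final = []
    for cands in typed_candidates.values():
        primary = cands[:n1]                   # primary list, size <= N1
        primary.sort(key=lambda c: costs[c])   # reorder within the type
        final += primary[:n2]                  # keep the N2 cheapest
    return final
```

The grouping key can be the candidate source (spatial, temporal, non-adjacent, historical) or the cross-component model mode, as described above.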
In still another embodiment, after the candidates are reordered based on the template cost, the redundancy of the candidate can be further checked. A candidate is considered to be redundant if the template cost difference between it and its predecessor in the list is less than or equal to a threshold. If a candidate is considered redundant, it can be removed from the list, or it can be moved to the end of the list.
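The post-reordering redundancy check can be sketched as follows; the (cost, candidate) pair representation and the choice of removing versus moving to the end are taken from the text, while the names are illustrative.

```python
def prune_redundant(sorted_costs_cands, threshold, move_to_end=False):
    """After reordering, a candidate whose template cost is within
    `threshold` of its (kept) predecessor is redundant; it is either
    removed from the list or moved to the end of the list."""
    kept, tail = [], []
    prev_cost = None
    for cost, cand in sorted_costs_cands:
        if prev_cost is not None and cost - prev_cost <= threshold:
            if move_to_end:
                tail.append((cost, cand))
            continue  # drop the redundant candidate
        kept.append((cost, cand))
        prev_cost = cost
    return kept + tail
```

The input is assumed to be already sorted in ascending template cost, as produced by the reordering step above.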
In still another embodiment, the candidates allowed or not allowed to be reordered can depend on the model type or coding mode. The model type or coding mode specified here includes, but is not limited to, the spatial adjacent/non-adjacent candidates or candidates from a history table, the intra/inter prediction mode, the LM/non-LM prediction mode, the cross-component prediction mode of a candidate (e.g., CCLM, MMLM, CCCM, CCCM with multi-model, or GLM) , whether the cross-component prediction model of the candidate is inherited from other blocks, or whether the cross-component prediction model of the candidate is derived from the neighbouring reconstruction samples of the current block. For example, the spatial adjacent candidates of the current block in the candidate list are reordered. For another example, the spatial non-adjacent candidates of the current block in the candidate list are reordered. For another example, the candidates from a history table in the candidate list are reordered. For another example, the spatial adjacent candidates and spatial non-adjacent candidates of the current block in the candidate list are reordered. For another example, the candidates with a specific model type or coding mode are reordered among themselves and not reordered with other model types or coding modes.
For another example, the candidates with the cross-component prediction model derived from the neighbouring reconstruction samples of the current block are not reordered. For another example, if a candidate is not allowed to do reordering, the position of the candidate in the
list is not changed before and after the candidate list reordering.
Removing or Modifying Similar Model Parameters When Adding Candidates to a History Table
When adding a cross-component model to a history table, it may further check the similarity between the to-be-added model and the existing models in the history table. If the to-be-added model is similar to one of the existing models, the to-be-added model will not be added to the history table. In one embodiment, it may compare the similarity of (α×lumaAvg+β) or α among existing candidates to decide whether to add the to-be-added model or not. For example, if the (α×lumaAvg+β) or α of the to-be-added model is the same as that of one of the existing candidates, the to-be-added model is not added. For another example, if the difference of (α×lumaAvg+β) or α between the to-be-added model and one of the existing models is less than a threshold, the to-be-added model is not added. Besides, the threshold can be adaptive based on coding information (e.g., the current block size or area) . For another example, when comparing the similarity, if a to-be-added model and the existing model both use CCCM, it may compare similarity by checking the value of (c0C + c1N + c2S + c3E + c4W + c5P + c6B) to decide whether to add the to-be-added model or not. In another embodiment, if the CU position of the current to-be-added model is the same as the CU position of one of the existing candidates, the to-be-added model parameter is not added. In still another embodiment, if the to-be-added model is similar to one of the existing candidate models, it may adjust the inherited model parameters so that the to-be-added model is different from the existing candidate models. For example, if the to-be-added scaling parameter is similar to that of one of the existing candidate models, a predefined offset (e.g., 1>>S or - (1>>S) , where S is the shift parameter) may be added to the to-be-added scaling parameter so that the to-be-added model is different from the existing candidate models.
In another embodiment, only partial model parameters are compared with the existing models in the history table. For example, a CCLM candidate has scale and offset parameters; it only compares whether the scale or offset parameter is the same as or similar to that of existing candidates. If the scale or offset parameter is the same or similar, the to-be-added model will not be added to the history table. For another example, a CCCM candidate has parameters c0 to c6; it may compare only whether n parameters (n < 7) are the same as or similar to those of existing candidates. If the compared parameters are the same or similar, the to-be-added model will not be added to the history table.
In another embodiment, a to-be-added model is applied to the neighbouring reconstruction samples of the current block, and the difference with the results of the existing candidate models is compared. If the difference value is less than or equal to a threshold, the to-be-added model will not be added to the history table. For example, assume the applied result is v and the corresponding results of the existing models in the history table are v_1 to v_K; if |v-v_i| is less than or equal to a threshold for any i, the to-be-added model will not be added to the history table. For the selection of the neighbouring reconstruction samples, it may choose the neighbouring reconstruction sample with the maximal value, the neighbouring reconstruction sample with the minimal value, the mean/median/mode of the neighbouring reconstruction samples, the left-side neighbouring reconstruction samples, the above-side neighbouring reconstruction samples, or the above-left neighbouring reconstruction samples.
In another embodiment, the number of candidates having the same type (e.g., MMLM, CCCM, or GLM) is limited when adding the candidates to the history table. For example, if the current history table has k candidates with MMLM type, it is not allowed to further add candidates with MMLM type to the history table. For another example, if the current history table has k candidates with CCCM type, it is not allowed to further add candidates with CCCM type to the history table. For another example, if the current history table has k candidates with GLM type, it is not allowed to further add candidates with GLM type to the history table.
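A sketch of a type-constrained history table update follows; the FIFO eviction policy for a full table, the tuple representation, and the function name are illustrative assumptions not specified by the text.

```python
def add_to_history(table, model, model_type, type_cap, table_size):
    """Append (model, model_type) to the history table unless the
    table already holds `type_cap` models of the same type (e.g.,
    MMLM, CCCM, or GLM); the oldest entry is evicted when the table
    exceeds `table_size` (assumed FIFO behaviour)."""
    same_type = sum(1 for _, t in table if t == model_type)
    if same_type >= type_cap:
        return table  # type limit reached; do not add
    table.append((model, model_type))
    if len(table) > table_size:
        table.pop(0)  # evict the oldest entry
    return table
```

The similarity checks of the preceding embodiments would be applied before this update, consistent with sharing the redundancy rules between the candidate list and the history table.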
In another embodiment, the constraints or rules to prevent adding a redundant candidate to a history table are the same as those for preventing adding a redundant candidate into a candidate list (e.g., the constraints or rules mentioned in the section entitled “Removing or Modifying Similar Neighbouring Model Parameters” ) .
The method and apparatus for adding cross-component model candidates to a history table or merge list based on similarity can be implemented at an encoder side or a decoder side. For example, any of the proposed methods can be implemented in an Intra/Inter coding module (e.g. Intra Pred. 150/MC 152 in Fig. 1B) in a decoder or an Intra/Inter coding module in an encoder (e.g. Intra Pred. 110/Inter Pred. 112 in Fig. 1A) . Any of the proposed methods can also be implemented as a circuit coupled to the intra/inter coding module at the decoder or the encoder. However, the decoder or encoder may also use additional processing units to implement the required processing. While the Intra Pred. units (e.g. unit 110/112 in Fig. 1A and unit 150/152 in Fig. 1B) are shown as individual processing units, they may correspond to executable software or firmware codes stored on a medium, such as a hard disk or flash memory, for a CPU (Central Processing Unit) or programmable devices (e.g. DSP (Digital Signal Processor) or FPGA (Field Programmable Gate Array) ) .
Fig. 17 illustrates a flowchart of an exemplary video coding system that adds a cross-component model candidate to a history table of merge list based on similarity according to an embodiment of the present invention. The steps shown in the flowchart may be implemented as program codes executable on one or more processors (e.g., one or more CPUs) at the encoder
side. The steps shown in the flowchart may also be implemented based on hardware such as one or more electronic devices or processors arranged to perform the steps in the flowchart. According to the method, input data associated with a current block comprising a first-colour block and a second-colour block are received in step 1710, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side. A merge list or a history table is derived in step 1720. Whether to add a target CCM (Cross-Component Model) candidate to the merge list or the history table is determined based on one or more conditions in step 1730, wherein said one or more conditions comprise one or more similarities calculated between the target CCM candidate and one or more member candidates respectively, and said one or more member candidates are in the merge list or the history table. The second-colour block is encoded or decoded using information comprising the merge list or the history table in step 1740, wherein when the target CCM candidate is selected for the current block, a predictor for the second-colour block is generated by applying a target cross-component model associated with the target CCM candidate to the reconstructed first-colour block.
The flowchart shown is intended to illustrate an example of video coding according to the present invention. A person skilled in the art may modify each step, re-arrange the steps, split a step, or combine steps to practice the present invention without departing from the spirit of the present invention. In the disclosure, specific syntax and semantics have been used to illustrate examples to implement embodiments of the present invention. A skilled person may practice the present invention by substituting the syntax and semantics with equivalent syntax and semantics without departing from the spirit of the present invention.
The above description is presented to enable a person of ordinary skill in the art to practice the present invention as provided in the context of a particular application and its requirements. Various modifications to the described embodiments will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. In the above detailed description, various specific details are illustrated in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art that the present invention may be practiced without some of these specific details.
Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be one or more circuits integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a
Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA) . These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims (18)
- A method of coding colour pictures using coding tools including one or more cross-component model related modes, the method comprising:
receiving input data associated with a current block comprising a first-colour block and a second-colour block, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side;
deriving a merge list or a history table;
determining whether to add a target CCM (Cross Component Model) candidate to the merge list or the history table based on one or more conditions, wherein said one or more conditions comprise one or more similarities calculated between the target CCM candidate and one or more member candidates respectively, and said one or more member candidates are in the merge list or the history table; and
encoding or decoding the second-colour block using information comprising the merge list or the history table, wherein when the target CCM candidate is selected for the current block, a predictor for the second-colour block is generated by applying a target cross-component model associated with the target CCM candidate to the reconstructed first-colour block.
- The method of Claim 1, wherein said one or more similarities are calculated based on one or more model parameters.
- The method of Claim 2, wherein part of said one or more model parameters are used for calculating said one or more similarities.
- The method of Claim 3, wherein the target CCM candidate corresponds to a CCLM candidate with a scale and one or more offset parameters, and said one or more similarities are measured based on the scale or based on said one or more offset parameters.
- The method of Claim 3, wherein the target CCM candidate corresponds to a CCCM candidate with c0 to c6 parameters, and said one or more similarities are measured based on only n parameters and n is less than 7.
- The method of Claim 1, wherein said one or more similarities are calculated based on one or more model errors.
- The method of Claim 6, wherein the target cross-component model associated with the target CCM candidate is applied to neighbouring reconstructed first-colour samples of the current block to derive a target model error associated with the target CCM candidate, and the target model error is compared with one or more member model errors associated with said one or more member candidates.
- The method of Claim 1, wherein a maximum number of to-be-added candidates associated with a specific coding mode in the merge list or the history table is constrained to be k, and the k is a positive integer.
- The method of Claim 8, wherein the to-be-added candidates associated with the specific coding mode correspond to spatial adjacent candidates or the to-be-added candidates with the specific coding mode correspond to non-spatial adjacent candidates.
- The method of Claim 8, wherein the to-be-added candidates associated with the specific coding mode are from the history table.
- The method of Claim 8, wherein the to-be-added candidates associated with the specific coding mode have a cross-component prediction model derived from neighbouring reconstruction samples of the current block.
- The method of Claim 8, wherein the k depends on current block size, prediction mode of one or more neighbouring blocks, slice type, temporal identifier, maximum allowed number of said one or more member candidates, or a combination thereof.
- The method of Claim 1, wherein whether said one or more member candidates are allowed to be reordered or not depends on a model type or coding mode associated with said one or more member candidates.
- The method of Claim 13, wherein when said one or more member candidates correspond to spatial adjacent candidates or said one or more member candidates correspond to non-spatial adjacent candidates, said one or more member candidates are allowed to be reordered.
- The method of Claim 13, wherein when said one or more member candidates correspond to spatial adjacent candidates or non-spatial adjacent candidates, said one or more member candidates are allowed to be reordered.
- The method of Claim 13, wherein when said one or more member candidates are from the history table, said one or more member candidates are allowed to be reordered.
- The method of Claim 13, wherein when said one or more member candidates are associated with a cross-component prediction model derived from neighbouring reconstruction samples of the current block, said one or more member candidates are not allowed to be reordered.
- An apparatus for video coding, the apparatus comprising one or more electronics or processors arranged to:
receive input data associated with a current block comprising a first-colour block and a second-colour block, wherein the input data comprise pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side;
derive a merge list or a history table;
determine whether to add a target CCM (Cross Component Model) candidate to the merge list or the history table based on one or more conditions, wherein said one or more conditions comprise one or more similarities calculated between the target CCM candidate and one or more member candidates respectively, and said one or more member candidates are in the merge list or the history table; and
encode or decode the second-colour block using information comprising the merge list or the history table, wherein when the target CCM candidate is selected for the current block, a predictor for the second-colour block is generated by applying a target cross-component model associated with the target CCM candidate to the reconstructed first-colour block.
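Claims 6 to 8 combine a model-error measure with a per-mode cap on added candidates. A minimal sketch follows, assuming a linear scale/offset model applied to neighbouring reconstructed samples and assuming reordering by error is permitted for every candidate; the mode labels, the error metric, and the cap k=2 are hypothetical illustrations:

```python
def model_error(model, luma_samples, chroma_samples):
    """Mean absolute error of a linear model chroma = scale*luma + offset
    over the block's neighbouring reconstructed samples."""
    scale, offset = model
    return sum(abs(scale * l + offset - c)
               for l, c in zip(luma_samples, chroma_samples)) / len(luma_samples)

def prune_by_model_error(candidates, luma, chroma, max_per_mode=2):
    """Rank candidates by model error on the neighbouring samples, then keep
    at most max_per_mode candidates per coding mode (the cap k of Claim 8)."""
    ranked = sorted(candidates,
                    key=lambda c: model_error(c["model"], luma, chroma))
    kept, counts = [], {}
    for c in ranked:
        mode = c["mode"]
        if counts.get(mode, 0) < max_per_mode:
            kept.append(c)
            counts[mode] = counts.get(mode, 0) + 1
    return kept
```

A candidate whose model reproduces the neighbouring chroma samples well is likely to predict the current block well too, which is why ranking by this error tends to place the eventually selected candidate at a small, cheap-to-signal index.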
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202363485564P | 2023-02-17 | 2023-02-17 | |
US63/485564 | 2023-02-17 | ||
US202363491089P | 2023-03-20 | 2023-03-20 | |
US63/491089 | 2023-03-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024169989A1 true WO2024169989A1 (en) | 2024-08-22 |
Family
ID=92422173
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2024/077432 WO2024169989A1 (en) | 2023-02-17 | 2024-02-18 | Methods and apparatus of merge list with constrained for cross-component model candidates in video coding |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024169989A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103392185A (en) * | 2010-12-30 | 2013-11-13 | Pelco, Inc. | Color similarity sorting for video forensics search |
US20170295380A1 (en) * | 2016-04-06 | 2017-10-12 | Mediatek Singapore Pte. Ltd. | Method and apparatus of video coding |
CN113545052A (en) * | 2019-03-08 | 2021-10-22 | 韩国电子通信研究院 | Image encoding/decoding method and apparatus, and recording medium storing bit stream |
CN115239739A (en) * | 2022-07-19 | 2022-10-25 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24756359 Country of ref document: EP Kind code of ref document: A1 |