
WO2020200159A1 - Interactions between adaptive loop filtering and other coding tools - Google Patents

Interactions between adaptive loop filtering and other coding tools

Info

Publication number
WO2020200159A1
Authority
WO
WIPO (PCT)
Prior art keywords
prec
gradient values
off2
off1
coding tool
Prior art date
Application number
PCT/CN2020/082038
Other languages
French (fr)
Inventor
Li Zhang
Kai Zhang
Hsiao Chiang Chuang
Hongbin Liu
Yue Wang
Original Assignee
Beijing Bytedance Network Technology Co., Ltd.
Bytedance Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Bytedance Network Technology Co., Ltd. and Bytedance Inc.
Priority to CN202080025296.6A (published as CN113632480B)
Publication of WO2020200159A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/577Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop

Definitions

  • This patent document relates to video coding techniques, devices and systems.
  • Devices, systems and methods related to digital video coding, and specifically, to interactions between adaptive loop filtering and other coding tools are described.
  • the described methods may be applied to both the existing video coding standards (e.g., High Efficiency Video Coding (HEVC) ) and future video coding standards (e.g., Versatile Video Coding (VVC) ) or codecs.
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes configuring, for a current video block, a first derivation process for deriving a first gradient value used in a first coding tool based on a second derivation process for deriving a second gradient value used in a second coding tool that is different from the first coding tool, and reconstructing, based on the first derivation process and the first coding tool, the current video block from a corresponding bitstream representation, wherein at least one of the first coding tool and the second coding tool relates to a pixel filtering process, and wherein the first gradient value and the second gradient value are indicative of a directional change in an intensity or a color component over a subset of samples in the current video block.
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes configuring, for a current video block, a first padding process used in a first coding tool based on a second padding process used in a second coding tool that is different from the first coding tool, and reconstructing, based on the first padding process and the first coding tool, the current video block from a corresponding bitstream representation, wherein the first padding process and the second padding process comprise adding out-of-range samples in a calculation of a gradient value that is indicative of a directional change in an intensity or a color component over a subset of samples in the current video block.
  • the disclosed technology may be used to provide a method for video processing.
  • This method comprises: configuring, for a conversion between a first block of video and a bitstream representation of the first block, a first derivation process for deriving a first gradient value used in a first coding tool to be aligned with a second derivation process for deriving a second gradient value used in a second coding tool that is different from the first coding tool; and performing the conversion based on the configured first derivation process.
  • the disclosed technology may be used to provide a method for video processing.
  • This method comprises: deriving, for a conversion between a first block of video and a bitstream representation of the first block, gradient values used in one or more coding tools by applying a sub-block level gradient calculation process, wherein the gradient values are derived for partial samples within prediction blocks of the first block; and performing the conversion based on the derived gradient values.
  • the disclosed technology may be used to provide a method for video processing.
  • This method comprises: configuring, for a conversion between a first block of video and a bitstream representation of the first block, a first padding process in a first coding tool to be aligned with a second padding process in a second coding tool that is different from the first coding tool, wherein the first padding process is for padding samples out of range used to derive gradient values used in the first coding tool, and the second padding process is for padding samples out of range used to derive gradient values used in the second coding tool; and performing the conversion based on the configured first padding process.
  • the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
  • a device that is configured or operable to perform the above-described method.
  • the device may include a processor that is programmed to implement this method.
  • a video decoder apparatus may implement a method as described herein.
  • FIG. 1 shows an example of an encoder block diagram for video coding.
  • FIGS. 2A, 2B and 2C show examples of geometry transformation-based adaptive loop filter (GALF) filter shapes.
  • FIG. 3 shows an example of a flow graph for a GALF encoder decision.
  • FIGS. 4A-4D show example subsampled Laplacian calculations for adaptive loop filter (ALF) classification.
  • FIG. 5 shows an example of a luma filter shape.
  • FIG. 6 shows an example of region division of a Wide Video Graphic Array (WVGA) sequence.
  • FIG. 7 shows an example of an optical flow trajectory used by the bi-directional optical flow (BIO) algorithm.
  • FIGS. 8A and 8B show example snapshots of using of the bi-directional optical flow (BIO) algorithm without block extensions.
  • FIG. 9 shows an example of the interpolated samples used in BIO.
  • FIG. 10 shows an example of prediction refinement with optical flow (PROF) .
  • FIGS. 11A and 11B show flowcharts of example methods for interactions between adaptive loop filtering and other coding tools, in accordance with the disclosed technology.
  • FIG. 12 is a block diagram of an example of a hardware platform for implementing a visual media decoding or a visual media encoding technique described in the present document.
  • FIG. 13 shows a flowchart of yet another example method for video processing.
  • FIG. 14 shows a flowchart of yet another example method for video processing.
  • FIG. 15 shows a flowchart of yet another example method for video processing.
  • Video codecs typically include an electronic circuit or software that compresses or decompresses digital video, and are continually being improved to provide higher coding efficiency.
  • a video codec converts uncompressed video to a compressed format or vice versa.
  • the compressed format usually conforms to a standard video compression specification, e.g., the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding (VVC) standard to be finalized, or other current and/or future video coding standards.
  • JEM Joint Exploration Model
  • affine prediction
  • ATMVP alternative temporal motion vector prediction
  • STMVP spatial-temporal motion vector prediction
  • BIO bi-directional optical flow
  • FRUC Frame-Rate Up Conversion
  • LAMVR Locally Adaptive Motion Vector Resolution
  • OBMC Overlapped Block Motion Compensation
  • LIC Local Illumination Compensation
  • DMVR Decoder-side Motion Vector Refinement
  • Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC, H.265) and future standards to improve runtime performance.
  • Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.
  • A color space, also known as a color model (or color system), is an abstract mathematical model which simply describes the range of colors as tuples of numbers, typically as 3 or 4 values or color components (e.g., RGB).
  • Basically speaking, a color space is an elaboration of the coordinate system and sub-space.
  • YCbCr, Y′CbCr, or Y Pb/Cb Pr/Cr also written as YCBCR or Y'CBCR, is a family of color spaces used as a part of the color image pipeline in video and digital photography systems.
  • Y′ is the luma component and CB and CR are the blue-difference and red-difference chroma components.
  • Y′ (with prime) is distinguished from Y, which is luminance, meaning that light intensity is nonlinearly encoded based on gamma corrected RGB primaries.
  • Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance.
  • In 4:4:4, each of the three Y'CbCr components has the same sample rate; thus there is no chroma subsampling. This scheme is sometimes used in high-end film scanners and cinematic post production.
  • In 4:2:2, the two chroma components are sampled at half the sample rate of luma, e.g., the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third with little to no visual difference.
  • In 4:2:0 as used in MPEG-2, Cb and Cr are cosited horizontally, and are sited between pixels in the vertical direction (sited interstitially).
  • In 4:2:0 as used in JPEG/JFIF, H.261 and MPEG-1, Cb and Cr are sited interstitially, halfway between alternate luma samples.
  • In 4:2:0 as used in DV, Cb and Cr are co-sited in the horizontal direction. In the vertical direction, they are co-sited on alternating lines.
  • FIG. 1 shows an example encoder block diagram of VVC, which contains three in-loop filtering blocks: deblocking filter (DF), sample adaptive offset (SAO) and ALF.
  • SAO and ALF utilize the original samples of the current picture to reduce the mean square errors between the original samples and the reconstructed samples by adding an offset and by applying a finite impulse response (FIR) filter, respectively, with coded side information signaling the offsets and filter coefficients.
  • ALF is located at the last processing stage of each picture and can be regarded as a tool trying to catch and fix artifacts created by the previous stages.
  • In the JEM, a geometry transformation-based adaptive loop filter (GALF) with block-based filter adaption is applied.
  • For the luma component, one among 25 filters is selected for each 2×2 block, based on the direction and activity of local gradients.
  • up to three diamond filter shapes (as shown in FIGS. 2A, 2B and 2C for the 5×5 diamond, 7×7 diamond and 9×9 diamond, respectively) can be selected for the luma component.
  • An index is signalled at the picture level to indicate the filter shape used for the luma component.
  • For the chroma components in a picture, the 5×5 diamond shape is always used.
  • Each 2×2 block is categorized into one out of 25 classes.
  • the classification index C is derived based on its directionality D and a quantized value of activity Â as follows: C = 5D + Â. Gradients of the horizontal, vertical and two diagonal directions are first calculated using 1-D Laplacians, e.g. g_v = Σ_{k=i-2..i+3} Σ_{l=j-2..j+3} V_{k,l} with V_{k,l} = |2R (k, l) - R (k, l-1) - R (k, l+1)|, and g_h, g_d1, g_d2 analogously.
  • Indices i and j refer to the coordinates of the upper left sample in the 2×2 block and R (i, j) indicates a reconstructed sample at coordinate (i, j).
  • To determine D, maximum and minimum values of the gradients of the horizontal and vertical directions, and of the two diagonal directions, are set as g_max_hv = max (g_h, g_v), g_min_hv = min (g_h, g_v), g_max_d = max (g_d1, g_d2) and g_min_d = min (g_d1, g_d2).
  • Step 1. If both g_max_hv <= t1 * g_min_hv and g_max_d <= t1 * g_min_d are true, D is set to 0.
  • Step 2. If g_max_hv / g_min_hv > g_max_d / g_min_d, continue from Step 3; otherwise continue from Step 4.
  • Step 3. If g_max_hv > t2 * g_min_hv, D is set to 2; otherwise D is set to 1.
  • Step 4. If g_max_d > t2 * g_min_d, D is set to 4; otherwise D is set to 3.
  • the activity value A is calculated as A = Σ_{k=i-2..i+3} Σ_{l=j-2..j+3} (V_{k,l} + H_{k,l}).
  • A is further quantized to the range of 0 to 4, inclusively, and the quantized value is denoted as Â. For both chroma components in a picture, no classification method is applied, i.e. a single set of ALF coefficients is applied for each chroma component. A compact sketch of this classification is given below.
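For concreteness, the following is a minimal C++ sketch of the 2×2 block classification just described. The 6×6 Laplacian window, the thresholds t1 and t2 passed as parameters, and the simple linear activity quantizer are illustrative assumptions; the normative derivation quantizes the activity with a dedicated mapping.

```cpp
#include <algorithm>
#include <cstdlib>

// Minimal sketch of the 2x2-block classification C = 5*D + A_hat.
// 'rec' is assumed to be padded so the window and Laplacian taps stay in bounds.
int ClassifyBlock(const int* rec, int stride, int i, int j, int t1, int t2) {
    long long gv = 0, gh = 0, gd1 = 0, gd2 = 0, act = 0;
    auto R = [&](int y, int x) { return rec[y * stride + x]; };
    for (int y = i - 2; y <= i + 3; ++y) {
        for (int x = j - 2; x <= j + 3; ++x) {
            int v  = std::abs(2 * R(y, x) - R(y - 1, x) - R(y + 1, x));       // vertical 1-D Laplacian
            int h  = std::abs(2 * R(y, x) - R(y, x - 1) - R(y, x + 1));       // horizontal 1-D Laplacian
            int d1 = std::abs(2 * R(y, x) - R(y - 1, x - 1) - R(y + 1, x + 1));
            int d2 = std::abs(2 * R(y, x) - R(y - 1, x + 1) - R(y + 1, x - 1));
            gv += v; gh += h; gd1 += d1; gd2 += d2;
            act += v + h;                                                     // activity accumulates V + H
        }
    }
    long long maxHV = std::max(gh, gv), minHV = std::min(gh, gv);
    long long maxD  = std::max(gd1, gd2), minD = std::min(gd1, gd2);
    int D;
    if (maxHV <= t1 * minHV && maxD <= t1 * minD)
        D = 0;                                        // Step 1: no dominant direction
    else if (maxHV * minD > maxD * minHV)             // Step 2: ratio test, cross-multiplied
        D = (maxHV > t2 * minHV) ? 2 : 1;             // Step 3
    else
        D = (maxD > t2 * minD) ? 4 : 3;               // Step 4
    int A_hat = (int)std::min<long long>(4, act / 256);  // placeholder quantizer to 0..4
    return 5 * D + A_hat;                             // one of the 25 classes
}
```

With D in 0..4 and the quantized activity in 0..4, the returned C = 5D + Â indexes one of the 25 classes.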
  • K is the size of the filter and 0 ≤ k, l ≤ K-1 are coefficients coordinates, such that location (0, 0) is at the upper left corner and location (K-1, K-1) is at the lower right corner.
  • Geometric transformations, including diagonal, vertical flip and rotation, are applied to the filter coefficients f (k, l) depending on gradient values calculated for that block.
  • The relationship between the transformation and the four gradients of the four directions is summarized in Table 1.
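Since Table 1 itself is not reproduced in this extract, the sketch below only illustrates how the three geometric transformations could be applied to a row-major K×K coefficient array; the index mappings fD(k, l) = f(l, k), fV(k, l) = f(k, K-l-1) and fR(k, l) = f(K-l-1, k) follow the usual GALF conventions and are assumptions here.

```cpp
#include <vector>

enum class Transform { None, Diagonal, VerticalFlip, Rotation };

// Returns the transformed coefficient array; 'f' is stored row-major.
std::vector<int> TransformCoeffs(const std::vector<int>& f, int K, Transform t) {
    std::vector<int> out(K * K);
    for (int k = 0; k < K; ++k) {
        for (int l = 0; l < K; ++l) {
            int src;
            switch (t) {
                case Transform::Diagonal:     src = l * K + k;           break; // f(l, k)
                case Transform::VerticalFlip: src = k * K + (K - l - 1); break; // f(k, K-l-1)
                case Transform::Rotation:     src = (K - l - 1) * K + k; break; // f(K-l-1, k)
                default:                      src = k * K + l;           break; // identity
            }
            out[k * K + l] = f[src];
        }
    }
    return out;
}
```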
  • GALF filter parameters are signaled for the first CTU, i.e., after the slice header and before the SAO parameters of the first CTU. Up to 25 sets of luma filter coefficients could be signaled. To reduce bits overhead, filter coefficients of different classification can be merged. Also, the GALF coefficients of reference pictures are stored and allowed to be reused as GALF coefficients of a current picture. The current picture may choose to use GALF coefficients stored for the reference pictures, and bypass the GALF coefficients signaling. In this case, only an index to one of the reference pictures is signaled, and the stored GALF coefficients of the indicated reference picture are inherited for the current picture.
  • a candidate list of GALF filter sets is maintained. At the beginning of decoding a new sequence, the candidate list is empty. After decoding one picture, the corresponding set of filters may be added to the candidate list. Once the size of the candidate list reaches the maximum allowed value (i.e., 6 in current JEM), a new set of filters overwrites the oldest set in decoding order; that is, a first-in-first-out (FIFO) rule is applied to update the candidate list. To avoid duplications, a set could only be added to the list when the corresponding picture doesn’t use GALF temporal prediction. To support temporal scalability, there are multiple candidate lists of filter sets, and each candidate list is associated with a temporal layer.
  • each array assigned by temporal layer index may compose filter sets of previously decoded pictures with equal or lower TempIdx.
  • the k-th array is assigned to be associated with TempIdx equal to k, and it only contains filter sets from pictures with TempIdx smaller than or equal to k. After coding a certain picture, the filter sets associated with the picture will be used to update those arrays associated with equal or higher TempIdx.
  • Temporal prediction of GALF coefficients is used for inter coded frames to minimize signaling overhead.
  • When temporal prediction is not available (e.g., for intra frames), a set of 16 fixed filters is assigned to each class.
  • a flag for each class is signaled and, if required, the index of the chosen fixed filter.
  • the coefficients of the adaptive filter f (k, l) can still be sent for this class, in which case the coefficients of the filter which will be applied to the reconstructed image are the sum of both sets of coefficients.
  • the filtering process of the luma component can be controlled at the CU level.
  • a flag is signaled to indicate whether GALF is applied to the luma component of a CU.
  • For the chroma component, whether GALF is applied or not is indicated at the picture level only.
  • each sample R (i, j) within the block is filtered, resulting in sample value R′ (i, j) as shown below, where L denotes filter length and f (k, l) denotes the decoded filter coefficients: R′ (i, j) = Σ_{k=-L/2..L/2} Σ_{l=-L/2..L/2} f (k, l) × R (i+k, j+l).
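A hedged sketch of this filtering step is shown below; the rounded fixed-point normalization (the final right shift) and the square, rather than diamond, loop bounds are simplifying assumptions, and 'rec' is assumed to be padded around the block.

```cpp
// Rounded fixed-point sum of f(k, l) * R(i+k, j+l) over a
// (2*half+1) x (2*half+1) support around sample (i, j).
int FilterSample(const int* rec, int stride, int i, int j,
                 const int* f, int half, int shift) {
    const int N = 2 * half + 1;
    long long acc = 0;
    for (int k = -half; k <= half; ++k)
        for (int l = -half; l <= half; ++l)
            acc += (long long)f[(k + half) * N + (l + half)] * rec[(i + k) * stride + (j + l)];
    return (int)((acc + (1LL << (shift - 1))) >> shift);  // round and normalize
}
```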
  • ALF is the last stage of in-loop filtering. There are two stages in this process.
  • the first stage is filter coefficient derivation. To train the filter coefficients, the encoder classifies reconstructed pixels of the luminance component into 16 regions, and one set of filter coefficients is trained for each category using Wiener-Hopf equations to minimize the mean squared error between the original frame and the reconstructed frame. To reduce the redundancy between these 16 sets of filter coefficients, the encoder will adaptively merge them based on the rate-distortion performance. At its maximum, 16 different filter sets can be assigned for the luminance component and only one for the chrominance components.
  • the second stage is a filter decision, which includes both the frame level and LCU level. Firstly, the encoder decides whether frame-level adaptive loop filtering is performed. If frame level ALF is on, then the encoder further decides whether the LCU level ALF is performed.
  • the filter shape adopted in AVS-2 is a 7×7 cross shape superposing a 3×3 square shape, as illustrated in FIG. 5, for both luminance and chroma components.
  • Each square in FIG. 5 corresponds to a sample. Therefore, a total of 17 samples are used to derive a filtered value for the sample of position C8.
  • a point-symmetrical filter is utilized with only nine coefficients left, {C0, C1, ..., C8}, which reduces the number of filter coefficients to half as well as the number of multiplications in filtering.
  • the point-symmetrical filter also cuts the computation for one filtered sample in half, e.g., only 9 multiplications and 14 add operations for one filtered sample.
  • AVS-2 adopts region-based multiple adaptive loop filters for the luminance component.
  • the luminance component is divided into 16 roughly-equal-size basic regions where each basic region is aligned with largest coding unit (LCU) boundaries as shown in FIG. 6, and one Wiener filter is derived for each region.
  • these regions can be merged into fewer larger regions, which share the same filter coefficients.
  • each region is assigned an index according to a modified Hilbert order based on the image prior correlations. Two regions with successive indices can be merged based on rate-distortion cost.
  • mapping information between regions should be signaled to the decoder.
  • In AVS-2, the number of basic regions is used to represent the merge results, and the filter coefficients are compressed sequentially according to region order. For example, when {0, 1}, {2, 3, 4}, {5, 6, 7, 8, 9} and the remaining basic regions are merged into one region respectively, only three integers are coded to represent this merge map, i.e., 2, 3, 5.
  • The sequence switch flag, adaptive_loop_filter_enable, is used to control whether the adaptive loop filter is applied for the whole sequence.
  • the image switch flags, picture_alf_enable [i], control whether ALF is applied for the corresponding i-th image component. Only if picture_alf_enable [i] is enabled will the corresponding LCU-level flags and filter coefficients for that color component be transmitted.
  • the LCU level flags, lcu_alf_enable [k], control whether ALF is enabled for the corresponding k-th LCU, and are interleaved into the slice data.
  • the decision of the different-level regulated flags is all based on the rate-distortion cost. The high flexibility further allows ALF to improve the coding efficiency much more significantly.
  • one set of filter coefficients may be transmitted.
  • In BIO, motion compensation is first performed to generate the first predictions (in each prediction direction) of the current block.
  • the first predictions are used to derive the spatial gradient, the temporal gradient and the optical flow of each sub-block or pixel within the block, which are then used to generate the second prediction, e.g., the final prediction of the sub-block or pixel.
  • the details are described as follows.
  • the bi-directional optical flow (BIO) method is a sample-wise motion refinement performed on top of block-wise motion compensation for bi-prediction.
  • the sample-level motion refinement does not use signaling.
  • Letting I (k) be the luma value from reference k (k = 0, 1) after block motion compensation, the motion vector field (v_x, v_y) is given by the optical flow equation: ∂I (k) /∂t + v_x ∂I (k) /∂x + v_y ∂I (k) /∂y = 0.
  • FIG. 7 shows an example optical flow trajectory in the Bi-directional Optical flow (BIO) method.
  • ⁇ 0 and ⁇ 1 denote the distances to the reference frames.
  • BIO is applied if the prediction is not from the same time moment (e.g., τ0 ≠ τ1).
  • the motion vector field (v x , v y ) is determined by minimizing the difference ⁇ between values in points A and B.
  • FIG. 7 shows an example of the intersection of the motion trajectory and the reference frame planes. The model uses only the first linear term of a local Taylor expansion for Δ: Δ = (I (0) - I (1)) + v_x (τ_1 ∂I (1) /∂x + τ_0 ∂I (0) /∂x) + v_y (τ_1 ∂I (1) /∂y + τ_0 ∂I (0) /∂y).
  • the JEM uses a simplified approach making first a minimization in the vertical direction and then in the horizontal direction. This results in the following:
  • d is bit depth of the video samples.
  • FIG. 8A shows an example of access positions outside of a block 800.
  • When a (2M+1) × (2M+1) square window Ω centered on the currently predicted point lies on a boundary of the predicted block, it needs to access positions outside of the block.
  • values of I (k) outside of the block are set equal to the nearest available value inside the block. For example, this can be implemented as a padding area 801, as shown in FIG. 8B.
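For illustration, this padding can be realized as a clamped accessor; the sketch below assumes the prediction block is stored row-major and simply clamps the coordinates, which is equivalent to replicating the nearest boundary sample.

```cpp
#include <algorithm>

// Accesses to I(k) outside the predicted block return the nearest available
// sample inside it (the padding area 801 of FIG. 8B).
static inline int PaddedSample(const int* pred, int stride, int w, int h, int x, int y) {
    x = std::clamp(x, 0, w - 1);  // clamp horizontal coordinate into the block
    y = std::clamp(y, 0, h - 1);  // clamp vertical coordinate into the block
    return pred[y * stride + x];
}
```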
  • With BIO, it is possible that the motion field can be refined for each sample.
  • a block-based design of BIO is used in the JEM.
  • the motion refinement can be calculated based on a 4x4 block.
  • the values of s_n in Eq. (17) of all samples in a 4×4 block can be aggregated, and then the aggregated values of s_n are used to derive BIO motion vector offsets for the 4×4 block. More specifically, the following formula can be used for block-based BIO derivation:
  • b_k denotes the set of samples belonging to the k-th 4×4 block of the predicted block.
  • s_n in Eq. (15) and Eq. (16) are replaced by ((s_{n,b_k}) >> 4) to derive the associated motion vector offsets.
  • The MV refinement of BIO may be unreliable due to noise or irregular motion. Therefore, in BIO, the magnitude of the MV refinement is clipped to a threshold value.
  • the threshold value is determined based on whether the reference pictures of the current picture are all from one direction. For example, if all the reference pictures of the current picture are from one direction, the value of the threshold is set to 12 × 2^(14-d); otherwise, it is set to 12 × 2^(13-d).
  • Gradients for BIO can be calculated at the same time with motion compensation interpolation using operations consistent with HEVC motion compensation process (e.g., 2D separable Finite Impulse Response (FIR) ) .
  • the input for the 2D separable FIR is the same reference frame sample as for motion compensation process and fractional position (fracX, fracY) according to the fractional part of block motion vector.
  • For the vertical gradient, a gradient filter is applied vertically using BIOfilterG corresponding to the fractional position fracY with de-scaling shift d-8. The signal displacement is then performed using BIOfilterS in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18-d.
  • the length of the interpolation filter for gradient calculation (BIOfilterG) and for signal displacement (BIOfilterS) can be shorter (e.g., 6-tap) in order to maintain reasonable complexity.
  • Table 2 shows example filters that can be used for gradients calculation of different fractional positions of block motion vector in BIO.
  • Table 3 shows example interpolation filters that can be used for prediction signal generation in BIO.
  • Table 2: Example filters for gradient calculation in BIO
    Fractional pel position: Interpolation filter for gradient (BIOfilterG)
    0: {8, -39, -3, 46, -17, 5}
    1/16: {8, -32, -13, 50, -18, 5}
    1/8: {7, -27, -20, 54, -19, 5}
    3/16: {6, -21, -29, 57, -18, 5}
    1/4: {4, -17, -36, 60, -15, 4}
    5/16: {3, -9, -44, 61, -15, 4}
    3/8: {1, -4, -48, 61, -13, 3}
    7/16: {0, 1, -54, 60, -9, 2}
    1/2: {-1, 4, -57, 57, -4, 1}
  • Table 3: Example interpolation filters for prediction signal generation in BIO
    Fractional pel position: Interpolation filter for prediction signal (BIOfilterS)
    0: {0, 0, 64, 0, 0, 0}
    1/16: {1, -3, 64, 4, -2, 0}
    1/8: {1, -6, 62, 9, -3, 1}
    3/16: {2, -8, 60, 14, -5, 1}
    1/4: {2, -9, 57, 19, -7, 2}
    5/16: {3, -10, 53, 24, -8, 2}
    3/8: {3, -11, 50, 29, -9, 2}
    7/16: {3, -11, 44, 35, -10, 3}
    1/2: {3, -10, 35, 44, -11, 3}
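The sketch below shows how a row of Table 2 might be applied as a 6-tap gradient filter with the de-scaling shift d-8 mentioned above. The tap alignment (x-2 .. x+3) and the use of this shift for the horizontal pass are assumptions, and only the table rows reproduced above (fractional positions 0 through 1/2) are included.

```cpp
#include <cstdint>

// One row of Table 2, selected by fracIdx (0, 1/16, ..., 1/2 in 1/16 steps).
static const int kBioFilterG[9][6] = {
    { 8, -39,  -3, 46, -17, 5},  // 0
    { 8, -32, -13, 50, -18, 5},  // 1/16
    { 7, -27, -20, 54, -19, 5},  // 1/8
    { 6, -21, -29, 57, -18, 5},  // 3/16
    { 4, -17, -36, 60, -15, 4},  // 1/4
    { 3,  -9, -44, 61, -15, 4},  // 5/16
    { 1,  -4, -48, 61, -13, 3},  // 3/8
    { 0,   1, -54, 60,  -9, 2},  // 7/16
    {-1,   4, -57, 57,  -4, 1},  // 1/2
};

// Applies BIOfilterG horizontally at reference position x; d is the bit depth.
int GradientFilterH(const int16_t* refRow, int x, int fracIdx, int d) {
    long long acc = 0;
    for (int t = 0; t < 6; ++t)                 // 6-tap pass over the reference row
        acc += (long long)kBioFilterG[fracIdx][t] * refRow[x - 2 + t];
    return (int)(acc >> (d - 8));               // de-scaling shift d-8
}
```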
  • BIO can be applied to all bi-predicted blocks when the two predictions are from different reference pictures.
  • BIO can be disabled.
  • BIO is applied for a block after normal MC process.
  • BIO may not be applied during the OBMC process. This means that BIO is applied in the MC process for a block when using its own MV and is not applied in the MC process when the MV of a neighboring block is used during the OBMC process.
  • Step 1: Judge whether BIO is applicable (W and H are the width and height of the current block)
  • BIO is not applicable if
  • BIO is not used if the total SAD between the two reference blocks (denoted as R0 and R1) is smaller than a threshold, wherein
  • the inner WxH samples are interpolated with the 8-tap interpolation filter as in normal motion compensation.
  • the four side outer lines of samples (black circles in FIG. 9) are interpolated with the bi-linear filter.
  • Step 2: Calculate the gradients and intermediate variables for each sample position (x, y) (the Gx0 and Gx1 expressions, referenced by T2, mirror Gy0 and Gy1):
    Gx0 (x, y) = (R0 (x+1, y) - R0 (x-1, y)) >> 4
    Gy0 (x, y) = (R0 (x, y+1) - R0 (x, y-1)) >> 4
    Gx1 (x, y) = (R1 (x+1, y) - R1 (x-1, y)) >> 4
    Gy1 (x, y) = (R1 (x, y+1) - R1 (x, y-1)) >> 4
    T1 = (R0 (x, y) >> 6) - (R1 (x, y) >> 6)
    T2 = (Gx0 (x, y) + Gx1 (x, y)) >> 3
    T3 = (Gy0 (x, y) + Gy1 (x, y)) >> 3
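The Step 2 quantities transcribe almost directly into code. In the sketch below, the horizontal-gradient expressions for Gx0 and Gx1 are reconstructed by symmetry with Gy0 and Gy1 (their equations are not reproduced in this extract), and the accessor is assumed to cover the one-sample border of FIG. 9.

```cpp
// Per-sample Step 2 quantities; R0 and R1 are the two extended reference
// prediction blocks, stored row-major with the given stride.
struct Step2 { int Gx0, Gy0, Gx1, Gy1, T1, T2, T3; };

Step2 ComputeStep2(const int* R0, const int* R1, int stride, int x, int y) {
    auto at = [stride](const int* p, int px, int py) { return p[py * stride + px]; };
    Step2 s;
    s.Gx0 = (at(R0, x + 1, y) - at(R0, x - 1, y)) >> 4;  // horizontal gradient, list 0 (by symmetry)
    s.Gy0 = (at(R0, x, y + 1) - at(R0, x, y - 1)) >> 4;  // vertical gradient, list 0
    s.Gx1 = (at(R1, x + 1, y) - at(R1, x - 1, y)) >> 4;  // horizontal gradient, list 1 (by symmetry)
    s.Gy1 = (at(R1, x, y + 1) - at(R1, x, y - 1)) >> 4;  // vertical gradient, list 1
    s.T1 = (at(R0, x, y) >> 6) - (at(R1, x, y) >> 6);    // temporal difference
    s.T2 = (s.Gx0 + s.Gx1) >> 3;
    s.T3 = (s.Gy0 + s.Gy1) >> 3;
    return s;
}
```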
  • Step 3: Calculate the prediction for each block
  • BIO is skipped for a 4×4 block if the SAD between the two 4×4 reference blocks is smaller than a threshold.
  • b (x, y) is known as a correction term.
  • the variable shift is set equal to Max (2, 14 - bitDepth).
  • the variables cuLevelAbsDiffThres and subCuLevelAbsDiffThres are set equal to (1 << (bitDepth - 8 + shift)) * cbWidth * cbHeight and 1 << (bitDepth - 3 + shift), respectively.
  • the variable cuLevelSumAbsoluteDiff is set to 0.
  • variable subCuLevelSumAbsoluteDiff [xSbIdx] [ySbIdx] and the bidirectional optical flow utilization flag bioUtilizationFlag [xSbIdx] [ySbIdx] of the current subblock are derived as:
  • sbHeight -1 are derived by invoking the bi-directional optical flow sample prediction process specified in clause 8.3.4.5 with the luma coding subblock width sbWidth, the luma coding subblock height sbHeight and the sample arrays predSamplesL0L and predSamplesL1L, and the variables predFlagL0, predFlagL1, refIdxL0, refIdxL1.
  • a luma location (xSb, ySb ) specifying the top-left sample of the current coding sub-block relative to the top left luma sample of the current picture
  • variable sbWidth specifying the width of the current coding sub-block in luma samples
  • variable sbHeight specifying the height of the current coding sub-block in luma samples
  • an (sbWidth) x (sbHeight) array predSamplesLXL of prediction luma sample values when bioAvailableFlag is FALSE, or an (sbWidth+2) x (sbHeight+2) array predSamplesLXL of prediction luma sample values when bioAvailableFlag is TRUE.
  • bilinearFiltEnabledFlag is derived as follows:
  • the prediction luma sample value predSamplesLXL [xL] [yL] is derived by invoking the process specified in clause 8.3.4.3.2 with (xIntL, yIntL) , (xFracL, yFracL) , refPicLXL and bilinearFiltEnabledFlag as inputs.
  • variable bilinearFiltEnabledFlag is set to FALSE.
  • the prediction luma sample value predSamplesLXL [xL] [yL] is derived by invoking the process specified in clause 8.3.4.3.2 with (xIntL, yIntL) , (xFracL, yFracL) , and refPicLXL and bilinearFiltEnabledFlag as inputs.
  • nCbW and nCbH specifying the width and the height of the current coding block
  • Output of this process is the (nCbW) x (nCbH) array pbSamples of luma prediction sample values.
  • bitDepth is set equal to BitDepthY.
  • the variable shift2 is set equal to Max (3, 15 - bitDepth) and the variable offset2 is set equal to 1 << (shift2 - 1).
  • the variable mvRefineThres is set equal to 1 << (13 - bitDepth).
  • the prediction sample values of the current prediction unit are derived as follows:
  • the location (xSb, ySb) specifying the top-left sample of the current subblock relative to the top-left sample of the prediction sample arrays predSamplesL0 and predSamplesL1 is derived as follows:
  • temp, tempX and tempY are derived as follows:
  • vx = sGx2 > 0 ? Clip3 (-mvRefineThres, mvRefineThres, -(sGxdI << 3) >> Floor (Log2 (sGx2))) : 0
  • sGxGym = sGxGy >> 12;
  • pbSamples [x] [y] = Clip3 (0, (1 << bitDepth) - 1, (predSamplesL0 [x+1] [y+1] + predSamplesL1 [x+1] [y+1] + sampleEnh + offset2) >> shift2)
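A compact sketch of the refinement clip and the final sample derivation follows. The derivations of sGx2 and sGxdI (the sub-block correlation sums) are not reproduced in this extract, so they are taken as inputs; Floor(Log2(·)) is written out as a loop, and an arithmetic right shift of negative values is assumed, as in the specification pseudo-code.

```cpp
#include <algorithm>

static inline int Clip3i(int mn, int mx, int x) { return std::min(std::max(x, mn), mx); }
static inline int FloorLog2(unsigned v) { int n = -1; while (v) { v >>= 1; ++n; } return n; }

// vx = sGx2 > 0 ? Clip3(-mvRefineThres, mvRefineThres,
//                       -(sGxdI << 3) >> Floor(Log2(sGx2))) : 0
// sGxdI * 8 is used instead of (sGxdI << 3) so the expression stays
// well-defined in C++ for negative sGxdI.
int DeriveVx(int sGx2, int sGxdI, int mvRefineThres) {
    if (sGx2 <= 0) return 0;
    return Clip3i(-mvRefineThres, mvRefineThres, -(sGxdI * 8) >> FloorLog2((unsigned)sGx2));
}

// pbSamples[x][y] = Clip3(0, (1 << bitDepth) - 1,
//     (predSamplesL0[x+1][y+1] + predSamplesL1[x+1][y+1] + sampleEnh + offset2) >> shift2)
int FinalBdofSample(int pred0, int pred1, int sampleEnh, int offset2, int shift2, int bitDepth) {
    return Clip3i(0, (1 << bitDepth) - 1, (pred0 + pred1 + sampleEnh + offset2) >> shift2);
}
```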
  • This contribution proposes a method to refine the sub-block based affine motion compensated prediction with optical flow. After the sub-block based affine motion compensation is performed, the prediction sample is refined by adding a difference derived by the optical flow equation, which is referred to as prediction refinement with optical flow (PROF).
  • the proposed method can achieve inter prediction in pixel level granularity without increasing the memory access bandwidth.
  • this contribution proposes a method to refine the sub-block based affine motion compensated prediction with optical flow. After the sub-block based affine motion compensation is performed, the luma prediction sample is refined by adding a difference derived by the optical flow equation.
  • the proposed PROF is described in the following four steps.
  • Step 1) The sub-block-based affine motion compensation is performed to generate the sub-block prediction I (i, j).
  • Step 2) The spatial gradients g_x (i, j) and g_y (i, j) of the sub-block prediction are calculated at each sample location using a 3-tap filter [-1, 0, 1]: g_x (i, j) = I (i+1, j) - I (i-1, j) and g_y (i, j) = I (i, j+1) - I (i, j-1).
  • the sub-block prediction is extended by one pixel on each side for the gradient calculation. To reduce the memory bandwidth and complexity, the pixels on the extended borders are copied from the nearest integer pixel position in the reference picture. Therefore, additional interpolation for padding region is avoided.
  • Step 3) The luma prediction refinement is calculated by the optical flow equation:
  • ΔI (i, j) = g_x (i, j) * Δv_x (i, j) + g_y (i, j) * Δv_y (i, j)
  • Δv (i, j) is the difference between the pixel MV computed for sample location (i, j), denoted by v (i, j), and the sub-block MV of the sub-block to which pixel (i, j) belongs, as shown in FIG. 10.
  • Δv (i, j) can be calculated for the first sub-block, and reused for other sub-blocks in the same CU.
  • Let x and y be the horizontal and vertical offset from the pixel location to the center of the sub-block; Δv (x, y) can be derived by the following equation: Δv_x (x, y) = ((v_1x - v_0x) / w) * x + ((v_2x - v_0x) / h) * y, and Δv_y (x, y) = ((v_1y - v_0y) / w) * x + ((v_2y - v_0y) / h) * y.
  • (v_0x, v_0y), (v_1x, v_1y), (v_2x, v_2y) are the top-left, top-right and bottom-left control point motion vectors, and w and h are the width and height of the CU.
  • Step 4) Finally, the luma prediction refinement is added to the sub-block prediction I (i, j).
  • the final prediction I′ is generated as the following equation: I′ (i, j) = I (i, j) + ΔI (i, j).
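Putting the four steps together for one sub-block gives the following sketch. It assumes the sub-block prediction buffer has already been extended by one sample on each side (per Step 2's border copy) and that the per-sample differences Δv have been precomputed once per CU; the buffer layout and the fixed-point scaling of Δv (omitted here) are simplifications.

```cpp
#include <cstdint>

// PROF Steps 2-4 for one w x h sub-block. 'I' points at the top-left interior
// sample of a prediction buffer extended by one sample on each side, so the
// (i-1, j-1) accesses stay inside the allocation.
void ProfRefine(const int* I, int strideI, const int16_t* dvx, const int16_t* dvy,
                int* out, int strideOut, int w, int h) {
    for (int j = 0; j < h; ++j) {
        for (int i = 0; i < w; ++i) {
            // Step 2: 3-tap [-1, 0, 1] spatial gradients of the sub-block prediction
            int gx = I[j * strideI + (i + 1)] - I[j * strideI + (i - 1)];
            int gy = I[(j + 1) * strideI + i] - I[(j - 1) * strideI + i];
            // Step 3: refinement from the optical flow equation
            int dI = gx * dvx[j * w + i] + gy * dvy[j * w + i];
            // Step 4: I'(i, j) = I(i, j) + DeltaI(i, j)
            out[j * strideOut + i] = I[j * strideI + i] + dI;
        }
    }
}
```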
  • VVC has the following problems:
  • the gradient calculation is done at the sample level, wherein for each sample the gradient is calculated, while the refined motion vectors (such as Vx and Vy) are derived at the 4×4 sub-block level, which depends on gradient values of a 6×6 block covering the sub-block. The per-sample calculation of gradients increases the computational complexity.
  • Embodiments of the presently disclosed technology overcome the drawbacks of existing implementations, thereby providing video coding with higher coding efficiencies.
  • How the interaction of adaptive loop filtering with other coding tools, based on the disclosed technology, may enhance both existing and future video coding standards is elucidated in the following examples described for various implementations.
  • the examples of the disclosed technology provided below explain general concepts, and are not meant to be interpreted as limiting. In an example, unless explicitly indicated to the contrary, the various features described in these examples may be combined.
  • Shift (x, s) is defined as Shift (x, s) = (x + off) >> s.
  • Clip3 (x, Min, Max) is defined as Clip3 (x, Min, Max) = Min if x < Min, Max if x > Max, and x otherwise.
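A minimal C++ rendering of these helpers is given below. The body of SatShift is not spelled out at this point in the text; the sign-aware form used here matches the offset0/offset1 convention that appears later in this document and is otherwise an assumption.

```cpp
#include <algorithm>

// Shift(x, s) = (x + off) >> s, with 'off' a rounding offset such as (1 << (s - 1)).
static inline int Shift(int x, int s, int off) { return (x + off) >> s; }

// Sign-aware variant used elsewhere in this document; the offset0/offset1
// form matches the convention given later and is otherwise an assumption.
static inline int SatShift(int x, int n, int offset0, int offset1) {
    return x >= 0 ? (x + offset0) >> n : -((-x + offset1) >> n);
}

// Clip3(x, Min, Max) clamps x into [Min, Max]. (Spec-style pseudo-code
// elsewhere in this document writes the arguments as Clip3(Min, Max, x).)
static inline int Clip3(int x, int mn, int mx) { return std::min(std::max(x, mn), mx); }
```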
  • the vertical gradient values in BIO (denoted as g_v) are calculated in the same way as that used to calculate the vertical gradient values in ALF.
  • In one example, the vertical gradient calculation is defined as a [-1, 2, -1] filter, such as g_v = Shift (2R (k, l) - R (k, l-off1) - R (k, l+off2), prec), wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j).
  • the horizontal gradient values in BIO (denoted as g_h) are calculated in the same way as that utilized to calculate the horizontal gradient values in ALF.
  • In one example, the horizontal gradient calculation is defined as a [-1, 2, -1] filter, such as g_h = Shift (2R (k, l) - R (k-off1, l) - R (k+off2, l), prec), wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j).
  • the vertical gradient values in PROF (denoted as g_v) are calculated in the same way as that used to calculate the vertical gradient values in ALF.
  • In one example, the vertical gradient calculation is defined as a [-1, 2, -1] filter.
  • g_v = Shift (2R (k, l) - R (k, l-off1) - R (k, l+off2), prec), wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j).
  • the horizontal gradient values in PROF (denoted as g_h) are calculated in the same way as that used to calculate the horizontal gradient values in ALF.
  • In one example, the horizontal gradient calculation is defined as a [-1, 2, -1] filter.
  • g_h = Shift (2R (k, l) - R (k-off1, l) - R (k+off2, l), prec), wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j).
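These aligned [-1, 2, -1] gradients translate directly into the sketch below; off1, off2 and prec stay as parameters as in the bullets, R is a row-major reconstructed/prediction sample buffer, and the rounding offset inside the shift is an assumption.

```cpp
// Rounding shift; the exact rounding offset is an assumption here.
static inline int ShiftR(int x, int s) { return (x + ((1 << s) >> 1)) >> s; }

// g_v = Shift(2R(k, l) - R(k, l - off1) - R(k, l + off2), prec)
int GradV(const int* R, int stride, int k, int l, int off1, int off2, int prec) {
    return ShiftR(2 * R[l * stride + k] - R[(l - off1) * stride + k] - R[(l + off2) * stride + k], prec);
}

// g_h = Shift(2R(k, l) - R(k - off1, l) - R(k + off2, l), prec)
int GradH(const int* R, int stride, int k, int l, int off1, int off2, int prec) {
    return ShiftR(2 * R[l * stride + k] - R[l * stride + (k - off1)] - R[l * stride + (k + off2)], prec);
}
```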
  • For samples where the gradient is not calculated, the associated gradient values may be copied from those associated with neighboring samples where the gradient is calculated.
  • the vertical gradient values in ALF (denoted as g_v) are calculated in the same way as that used to calculate the vertical gradient values in BIO.
  • g_v (x, y) = Shift (R (x, y+off1) - R (x, y-off2), prec),
  • wherein R (x, y) indicates a reconstructed or prediction sample at coordinate (x, y).
  • g_v (x, y) = SatShift (R (x, y+off1) - R (x, y-off2), prec).
  • g_v (x, y) = Shift (…)
  • g_v (x, y) = SatShift (…)
  • the horizontal gradient values in ALF (denoted as g_h) are calculated in the same way as that used to calculate the horizontal gradient values in BIO.
  • g_h (x, y) = Shift (R (x+1, y) - R (x-1, y), prec),
  • wherein R (x, y) indicates a reconstructed or prediction sample at coordinate (x, y).
  • g_h (x, y) = SatShift (R (x+1, y) - R (x-1, y), prec).
  • g_h (x, y) = Shift (…)
  • g_h (x, y) = SatShift (…)
  • the vertical gradient values for all or some pixels in a block are calculated in the same way as the vertical gradient values in BIO and averaged (or processed in other ways) to get the vertical gradient value for a block used in ALF.
  • the horizontal gradient values for all or some pixels in a block are calculated in the same way as the horizontal gradient values in BIO and averaged (or processed in other ways) to get the horizontal gradient value for a block used in ALF.
  • the whole or a part of the padding method used to pad the out-of-range samples used to derive the gradients in BIO is the same as the corresponding padding method used to pad the out-of-range samples used to derive the gradients in ALF.
  • the whole or a part of the padding method used to pad the out-of-range samples used to derive the gradients in BIO is the same as the corresponding padding method used to pad the out-of-range samples used to derive the gradients in PROF.
  • the whole or a part of the padding method used to pad the out-of-range samples used to derive the gradients in ALF is the same as the corresponding padding method used to pad the out-of-range samples used to derive the gradients in PROF.
  • the proposed methods may also be applicable to other coding tools that rely on the calculation of gradients.
  • methods 1100 and 1150 may be implemented at a video decoder or a video encoder.
  • FIG. 11A shows a flowchart of an exemplary method for video processing.
  • the method 1100 includes, at step 1102, configuring, for a current video block, a first derivation process for deriving a first gradient value used in a first coding tool based on a second derivation process for deriving a second gradient value used in a second coding tool that is different from the first coding tool.
  • the method 1100 includes, at step 1104, reconstructing, based on the first derivation process and the first coding tool, the current video block from a corresponding bitstream representation.
  • at least one of the first coding tool and the second coding tool relates to a pixel filtering process, and the first gradient value and the second gradient value are indicative of a directional change in an intensity or a color component over a subset of samples in the current video block.
  • the pixel filtering process is an adaptive loop filtering process.
  • the first coding tool is a bi-directional optical flow (BIO) refinement and the second coding tool is an adaptive loop filtering (ALF) process.
  • the first coding tool is a prediction refinement with optical flow (PROF) process and the second coding tool is an adaptive loop filtering (ALF) process.
  • the first coding tool is an adaptive loop filtering (ALF) process and the second coding tool is a bi-directional optical flow (BIO) refinement.
  • the first and second derivation processes comprise a vertical gradient value calculation or a horizontal gradient value calculation.
  • the vertical gradient value calculation or the horizontal gradient value calculation is based on a [-1, 2, -1] filter.
  • the first derivation process comprises a sub-block level gradient value calculation. In an example, the first derivation process is not applied to each sample of the current video block.
  • FIG. 11B shows a flowchart of an exemplary method for video processing.
  • the method 1150 includes, at step 1152, configuring, for a current video block, a first padding process used in a first coding tool based on a second padding process used in a second coding tool that is different from the first coding tool.
  • the method 1150 includes, at step 1154, reconstructing, based on the first padding process and the first coding tool, the current video block from a corresponding bitstream representation.
  • the first padding process and the second padding process comprise adding out of range samples in a calculation of a gradient value that is indicative of a directional change in an intensity or a color component over a subset of samples in the current video block.
  • the first coding tool is a bi-directional optical flow (BIO) refinement and the second coding tool is an adaptive loop filtering (ALF) process.
  • the first coding tool is an adaptive loop filtering (ALF) process and the second coding tool is a bi-directional optical flow (BIO) refinement.
  • the first coding tool is a bi-directional optical flow (BIO) refinement and the second coding tool is a prediction refinement with optical flow (PROF) process.
  • the first coding tool is a prediction refinement with optical flow (PROF) process and the second coding tool is a bi-directional optical flow (BIO) refinement.
  • the first coding tool is a prediction refinement with optical flow (PROF) process and the second coding tool is an adaptive loop filtering (ALF) process.
  • the first coding tool is an adaptive loop filtering (ALF) process and the second coding tool is a prediction refinement with optical flow (PROF) process.
  • FIG. 12 is a block diagram of a video processing apparatus 1200.
  • the apparatus 1200 may be used to implement one or more of the methods described herein.
  • the apparatus 1200 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on.
  • the apparatus 1200 may include one or more processors 1202, one or more memories 1204 and video processing hardware 1206.
  • the processor (s) 1202 may be configured to implement one or more methods (including, but not limited to, methods 1100 and 1150) described in the present document.
  • the memory (memories) 1204 may be used for storing data and code used for implementing the methods and techniques described herein.
  • the video processing hardware 1206 may be used to implement, in hardware circuitry, some techniques described in the present document.
  • the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 12.
  • FIG. 13 is a flowchart for an example method 1300 of video processing.
  • the method 1300 includes configuring (1302) , for a conversion between a first block of video and a bitstream representation of the first block, a first derivation process for deriving a first gradient value used in a first coding tool to be aligned with a second derivation process for deriving a second gradient value used in a second coding tool that is different from the first coding tool; and performing (1304) the conversion based on the configured first derivation process.
  • the whole or a part of the first derivation process is aligned with the corresponding whole or part of the second derivation process.
  • the first derivation process and the second derivation process comprise the same vertical gradient value calculation for calculating vertical gradient values and/or the same horizontal gradient value calculation for calculating horizontal gradient values.
  • the first coding tool is a bi-directional optical flow (BDOF) refinement
  • the second coding tool is an adaptive loop filtering (ALF) process.
  • the first coding tool is an adaptive loop filtering (ALF) process
  • the second coding tool is a BDOF refinement
  • the vertical gradient value calculation and/or the horizontal gradient value calculation are based on a [-1, 2, -1] filter.
  • the vertical gradient values in BDOF (g_v) and/or the horizontal gradient values in BDOF (g_h) are calculated by using a function Shift (x, n), where Shift (x, n) is defined as (x + offset0) >> n,
  • offset0 is set to (1 << n) >> 1 or (1 << (n-1)), and
  • n is an integer.
  • the vertical gradient values in BDOF (g_v) are calculated as follows: g_v = Shift (2R (k, l) - R (k, l-off1) - R (k, l+off2), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the horizontal gradient values in BDOF (g_h) are calculated as follows: g_h = Shift (2R (k, l) - R (k-off1, l) - R (k+off2, l), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the vertical gradient values in BDOF (g_v) and/or the horizontal gradient values in BDOF (g_h) are calculated by using a function SatShift (x, n), where SatShift (x, n) is defined as (x + offset0) >> n if x >= 0, and as - ((-x + offset1) >> n) if x < 0,
  • offset0 and/or offset1 are set to (1 << n) >> 1 or (1 << (n-1)), and
  • n is an integer.
  • the vertical gradient values in BDOF (g_v) are calculated as follows: g_v = SatShift (2R (k, l) - R (k, l-off1) - R (k, l+off2), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the horizontal gradient values in BDOF (g_h) are calculated as follows: g_h = SatShift (2R (k, l) - R (k-off1, l) - R (k+off2, l), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the first coding tool is a prediction refinement with optical flow (PROF) process which is applied to affine coded blocks
  • the second coding tool is an adaptive loop filtering (ALF) process.
  • the vertical gradient value calculation and/or the horizontal gradient value calculation are based on a [-1, 2, -1] filter.
  • the vertical gradient values in PROF (g_v) and/or the horizontal gradient values in PROF (g_h) are calculated by using a function Shift (x, n), where Shift (x, n) is defined as (x + offset0) >> n,
  • offset0 is set to (1 << n) >> 1 or (1 << (n-1)), and
  • n is an integer.
  • the vertical gradient values in PROF (g_v) are calculated as follows: g_v = Shift (2R (k, l) - R (k, l-off1) - R (k, l+off2), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the horizontal gradient values in PROF (g_h) are calculated as follows: g_h = Shift (2R (k, l) - R (k-off1, l) - R (k+off2, l), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the vertical gradient values in PROF (g_v) and/or the horizontal gradient values in PROF (g_h) are calculated by using a function SatShift (x, n), where SatShift (x, n) is defined as (x + offset0) >> n if x >= 0, and as - ((-x + offset1) >> n) if x < 0,
  • offset0 and/or offset1 are set to (1 << n) >> 1 or (1 << (n-1)), and
  • n is an integer.
  • the vertical gradient values in PROF (g_v) are calculated as follows: g_v = SatShift (2R (k, l) - R (k, l-off1) - R (k, l+off2), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the horizontal gradient values in PROF (g_h) are calculated as follows: g_h = SatShift (2R (k, l) - R (k-off1, l) - R (k+off2, l), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the first coding tool is an adaptive loop filtering (ALF) process
  • the second coding tool is a bi-directional optical flow (BDOF) refinement.
  • the vertical gradient value calculation and/or the horizontal gradient value calculation are based on a [1, -1] filter.
  • the vertical gradient values in ALF (g_v) and/or the horizontal gradient values in ALF (g_h) are calculated by using a function Shift (x, n), where Shift (x, n) is defined as (x + offset0) >> n,
  • offset0 is set to (1 << n) >> 1 or (1 << (n-1)), and
  • n is an integer.
  • the vertical gradient values in ALF (g_v) are calculated as follows:
  • g_v (x, y) = Shift (R (x, y+off1) - R (x, y-off2), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the vertical gradient values in ALF (g_v) are calculated as follows:
  • g_v (x, y) = Shift (…)
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the horizontal gradient values in ALF (g_h) are calculated as follows:
  • g_h (x, y) = Shift (R (x+1, y) - R (x-1, y), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variable prec is an integer.
  • the horizontal gradient values in ALF (g_h) are calculated as follows:
  • g_h (x, y) = Shift (…)
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variable prec is an integer.
  • the vertical gradient values in ALF (g_v) and/or the horizontal gradient values in ALF (g_h) are calculated by using a function SatShift (x, n), where SatShift (x, n) is defined as (x + offset0) >> n if x >= 0, and as - ((-x + offset1) >> n) if x < 0,
  • offset0 and/or offset1 are set to (1 << n) >> 1 or (1 << (n-1)), and
  • n is an integer.
  • the vertical gradient values in ALF (g_v) are calculated as follows:
  • g_v (x, y) = SatShift (R (x, y+off1) - R (x, y-off2), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the vertical gradient values in ALF (g_v) are calculated as follows:
  • g_v (x, y) = SatShift (…)
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variables off1, off2 and prec are integers.
  • the horizontal gradient values in ALF (g_h) are calculated as follows:
  • g_h (x, y) = SatShift (R (x+1, y) - R (x-1, y), prec),
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variable prec is an integer.
  • the horizontal gradient values in ALF (g_h) are calculated as follows:
  • g_h (x, y) = SatShift (…)
  • wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and
  • the variable prec is an integer.
  • the vertical gradient values for all or partial samples in the first block are calculated in the same way as the vertical gradient values in BDOF and averaged to obtain the vertical gradient value for the first block used in ALF.
  • the horizontal gradient values for all or partial samples in the first block are calculated in the same way as the horizontal gradient values in BDOF and averaged to obtain the horizontal gradient value for the first block used in ALF.
  • FIG. 14 is a flowchart for an example method 1400 of video processing.
  • the method 1400 includes deriving (1402) , for a conversion between a first block of video and a bitstream representation of the first block, gradient values used in one or more coding tools by applying a sub-block level gradient calculation process, wherein the gradient values are derived for partial samples within prediction blocks of the first block; and performing (1404) the conversion based on the derived gradient values.
  • the one or more coding tools include at least one of a bi-directional optical flow (BDOF) refinement, a prediction refinement with optical flow (PROF) process and other non-ALF coding tools.
  • only samples at selected coordinates are used to derive the gradient values.
  • the gradient values associated with the samples at certain coordinates are copied from those associated with neighboring samples where the gradient value is calculated.
  • how to copy gradient values from selected samples to the remaining samples may depend on the gradient direction, including at least one of the horizontal gradient or the vertical gradient.
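One possible realization of such a sub-block level calculation is sketched below: gradients are derived only at even coordinates and copied to the remaining samples. The 2×2 granularity, the central-difference gradient and the uniform copy rule are illustrative assumptions; as noted above, the copy rule could also differ per gradient direction.

```cpp
#include <algorithm>

// Computes gradients only at even (x, y) positions of a w x h prediction
// block and copies each value to its 2x2 neighborhood.
void SubsampledGradients(const int* pred, int stride, int w, int h,
                         int* gx, int* gy /* both w*h, row-major */) {
    for (int y = 0; y < h; y += 2) {
        for (int x = 0; x < w; x += 2) {
            int vx = pred[y * stride + std::min(x + 1, w - 1)] -
                     pred[y * stride + std::max(x - 1, 0)];        // horizontal gradient
            int vy = pred[std::min(y + 1, h - 1) * stride + x] -
                     pred[std::max(y - 1, 0) * stride + x];        // vertical gradient
            for (int dy = 0; dy < 2 && y + dy < h; ++dy)
                for (int dx = 0; dx < 2 && x + dx < w; ++dx) {
                    gx[(y + dy) * w + (x + dx)] = vx;              // copy to 2x2 neighborhood
                    gy[(y + dy) * w + (x + dx)] = vy;
                }
        }
    }
}
```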
  • FIG. 15 is a flowchart for an example method 1500 of video processing.
  • the method 1500 includes configuring (1502) , for a conversion between a first block of video and a bitstream representation of the first block, a first padding process in a first coding tool to be aligned with a second padding process in a second coding tool that is different from the first coding tool, wherein the first padding process is for padding samples out of range used to derive gradient values used in the first coding tool, and the second padding process is for padding samples out of range used to derive gradient values used in the second coding tool; and performing (1504) the conversion based on the configured first padding process.
  • the whole or a part of the first padding process is aligned with the corresponding whole or part of the second padding process.
  • the first coding tool is a bi-directional optical flow (BDOF) refinement
  • the second coding tool is an adaptive loop filtering (ALF) process.
  • the first coding tool is an adaptive loop filtering (ALF) process
  • the second coding tool is a bi-directional optical flow (BDOF) refinement.
  • the first coding tool is a bi-directional optical flow (BDOF) refinement
  • the second coding tool is a prediction refinement with optical flow (PROF) process.
  • the first coding tool is a prediction refinement with optical flow (PROF) process
  • the second coding tool is a bi-directional optical flow (BDOF) refinement.
  • the first coding tool is a prediction refinement with optical flow (PROF) process
  • the second coding tool is an adaptive loop filtering (ALF) process.
  • the first coding tool is an adaptive loop filtering (ALF) process
  • the second coding tool is a prediction refinement with optical flow (PROF) process.
  • the first coding tool and/or the second coding tool include coding tools that rely on calculation of gradient values.
  • the conversion generates the first block of video from the bitstream representation.
  • the conversion generates the bitstream representation from the first block of video.
  • Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • The term "data processing unit" or "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) .
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices.
  • semiconductor memory devices e.g., EPROM, EEPROM, and flash memory devices.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Interactions between adaptive loop filtering and other coding tools are described. In an exemplary aspect, a method for video processing includes configuring, for a conversion between a first block of video and a bitstream representation of the first block, a first derivation process for deriving a first gradient value used in a first coding tool to be aligned with a second derivation process for deriving a second gradient value used in a second coding tool that is different from the first coding tool; and performing the conversion based on the configured first derivation process.

Description

INTERACTIONS BETWEEN ADAPTIVE LOOP FILTERING AND OTHER CODING TOOLS
CROSS-REFERENCE TO RELATED APPLICATION
Under the applicable patent law and/or rules pursuant to the Paris Convention, this application is made to timely claim the priority to and benefits of International Patent Application No. PCT/CN2019/080356, filed on March 29, 2019. The entire disclosure of International Patent Application No. PCT/CN2019/080356 is incorporated by reference as part of the disclosure of this application.
TECHNICAL FIELD
This patent document relates to video coding techniques, devices and systems.
BACKGROUND
In spite of the advances in video compression, digital video still accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
SUMMARY
Devices, systems and methods related to digital video coding, and specifically, to interactions between adaptive loop filtering and other coding tools are described. The described methods may be applied to both the existing video coding standards (e.g., High Efficiency Video Coding (HEVC) ) and future video coding standards (e.g., Versatile Video Coding (VVC) ) or codecs.
In one representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes configuring, for a current video block, a first derivation process for deriving a first gradient value used in a first coding tool based on a second derivation process for deriving a second gradient value used in a second coding tool that is different from the first coding tool, and reconstructing, based on the first derivation process and the first coding tool, the current video block from a corresponding bitstream representation, wherein at least one of the first coding tool and the second coding tool relates to a pixel filtering process, and wherein the first gradient value and the second gradient value are indicative of a directional change in an intensity or a color component over a subset of samples in the current video block.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes configuring, for a current video block, a first padding process used in a first coding tool based on a second padding process used in a second coding tool that is different from the first coding tool, and reconstructing, based on the first padding process and the first coding tool, the current video block from a corresponding bitstream representation, wherein the first padding process and the second padding process comprise adding out-of-range samples in a calculation of a gradient value that is indicative of a directional change in an intensity or a color component over a subset of samples in the current video block.
In one representative aspect, the disclosed technology may be used to provide a method for video processing. This method comprises: configuring, for a conversion between a first block of video and a bitstream representation of the first block, a first derivation process for deriving a first gradient value used in a first coding tool to be aligned with a second derivation process for deriving a second gradient value used in a second coding tool that is different from the first coding tool; and performing the conversion based on the configured first derivation process.
In one representative aspect, the disclosed technology may be used to provide a method for video processing. This method comprises: deriving, for a conversion between a first block of video and a bitstream representation of the first block, gradient values used in one or more coding tools by applying a sub-block level gradient calculation process, wherein the gradient values are derived for partial samples within prediction blocks of the first block; and performing the conversion based on the derived gradient values.
In one representative aspect, the disclosed technology may be used to provide a method for video processing. This method comprises: configuring, for a conversion between a first block of video and a bitstream representation of the first block, a first padding process in a first coding tool to be aligned with a second padding process in a second coding tool that is different from the first coding tool, wherein the first padding process is for padding samples out  of range used to derive gradient values used in the first coding tool, and the second padding process is for padding samples out of range used to derive gradient values used in the second coding tool; and performing the conversion based on the configured first padding process.
In yet another representative aspect, the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
In yet another representative aspect, a device that is configured or operable to perform the above-described method is disclosed. The device may include a processor that is programmed to implement this method.
In yet another representative aspect, a video decoder apparatus may implement a method as described herein.
The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an example of an encoder block diagram for video coding.
FIGS. 2A, 2B and 2C show examples of geometry transformation-based adaptive loop filter (GALF) filter shapes.
FIG. 3 shows an example of a flow graph for a GALF encoder decision.
FIGS. 4A-4D show example subsampled Laplacian calculations for adaptive loop filter (ALF) classification.
FIG. 5 shows an example of a luma filter shape.
FIG. 6 shows an example of region division of a Wide Video Graphic Array (WVGA) sequence.
FIG. 7 shows an example of an optical flow trajectory used by the bi-directional optical flow (BIO) algorithm.
FIGS. 8A and 8B show example snapshots of using of the bi-directional optical flow (BIO) algorithm without block extensions.
FIG. 9 shows an example of the interpolated samples used in BIO.
FIG. 10 shows an example of prediction refinement with optical flow (PROF) .
FIGS. 11A and 11B show flowcharts of example methods for interactions between  adaptive loop filtering and other coding tools, in accordance with the disclosed technology.
FIG. 12 is a block diagram of an example of a hardware platform for implementing a visual media decoding or a visual media encoding technique described in the present document.
FIG. 13 shows a flowchart of yet another example method for video processing.
FIG. 14 shows a flowchart of yet another example method for video processing.
FIG. 15 shows a flowchart of yet another example method for video processing.
DETAILED DESCRIPTION
Due to the increasing demand for higher-resolution video, video coding methods and techniques are ubiquitous in modern technology. Video codecs typically include an electronic circuit or software that compresses or decompresses digital video, and they are continually being improved to provide higher coding efficiency. A video codec converts uncompressed video to a compressed format or vice versa. There are complex relationships between the video quality, the amount of data used to represent the video (determined by the bit rate), the complexity of the encoding and decoding algorithms, the sensitivity to data losses and errors, the ease of editing, random access, and end-to-end delay (latency). The compressed format usually conforms to a standard video compression specification, e.g., the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding (VVC) standard to be finalized, or other current and/or future video coding standards.
In some embodiments, future video coding technologies are explored using a reference software known as the Joint Exploration Model (JEM) . In JEM, sub-block based prediction is adopted in several coding tools, such as affine prediction, alternative temporal motion vector prediction (ATMVP) , spatial-temporal motion vector prediction (STMVP) , bi-directional optical flow (BIO) , Frame-Rate Up Conversion (FRUC) , Locally Adaptive Motion Vector Resolution (LAMVR) , Overlapped Block Motion Compensation (OBMC) , Local Illumination Compensation (LIC) , and Decoder-side Motion Vector Refinement (DMVR) .
Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC, H. 265) and future standards to improve runtime performance. Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective  sections only.
1 Examples of color space and chroma subsampling
Color space, also known as the color model (or color system), is an abstract mathematical model which simply describes the range of colors as tuples of numbers, typically as 3 or 4 values or color components (e.g., RGB). In essence, a color space is an elaboration of a coordinate system and a sub-space.
For video compression, the most frequently used color spaces are YCbCr and RGB.
YCbCr, Y′CbCr, or Y Pb/Cb Pr/Cr, also written as YCBCR or Y'CBCR, is a family of color spaces used as a part of the color image pipeline in video and digital photography systems. Y′ is the luma component and CB and CR are the blue-difference and red-difference chroma components. Y′ (with prime) is distinguished from Y, which is luminance, meaning that light intensity is nonlinearly encoded based on gamma corrected RGB primaries.
Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance.
1.1 The 4:4:4 color format
Each of the three Y'CbCr components has the same sample rate; thus there is no chroma subsampling. This scheme is sometimes used in high-end film scanners and cinematic post-production.
1.2 The 4:2:2 color format
The two chroma components are sampled at half the sample rate of luma, i.e., the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third with little to no visual difference.
1.3 The 4:2:0 color format
In 4:2:0, the horizontal sampling is doubled compared to 4:1:1, but as the Cb and Cr channels are only sampled on each alternate line in this scheme, the vertical resolution is halved. The data rate is thus the same. Cb and Cr are each subsampled by a factor of 2 both horizontally and vertically. There are three variants of 4:2:0 schemes, having different horizontal and vertical siting.
○ In MPEG-2, Cb and Cr are cosited horizontally. Cb and Cr are sited between  pixels in the vertical direction (sited interstitially) .
○ In JPEG/JFIF, H. 261, and MPEG-1, Cb and Cr are sited interstitially, halfway between alternate luma samples.
○ In 4:2:0 DV, Cb and Cr are co-sited in the horizontal direction. In the vertical direction, they are co-sited on alternating lines.
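The three formats above differ only in how the chroma planes are scaled relative to luma. As a quick illustrative sketch (the helper name and format strings are our own, not from any standard API), the chroma plane dimensions can be computed as:

```python
def chroma_plane_size(luma_w, luma_h, fmt):
    """Return (width, height) of each chroma plane for a given
    subsampling format, per sections 1.1-1.3."""
    if fmt == "4:4:4":
        return luma_w, luma_h              # same sample rate, no subsampling
    if fmt == "4:2:2":
        return luma_w // 2, luma_h         # horizontal chroma resolution halved
    if fmt == "4:2:0":
        return luma_w // 2, luma_h // 2    # halved horizontally and vertically
    raise ValueError("unknown chroma format: " + fmt)
```

For example, a 1920×1080 4:2:0 picture carries two 960×540 chroma planes.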
2 Examples of the coding flow of a typical video codec
FIG. 1 shows an example of encoder block diagram of VVC, which contains three in-loop filtering blocks: deblocking filter (DF) , sample adaptive offset (SAO) and ALF. Unlike DF, which uses predefined filters, SAO and ALF utilize the original samples of the current picture to reduce the mean square errors between the original samples and the reconstructed samples by adding an offset and by applying a finite impulse response (FIR) filter, respectively, with coded side information signaling the offsets and filter coefficients. ALF is located at the last processing stage of each picture and can be regarded as a tool trying to catch and fix artifacts created by the previous stages.
3 Examples of a geometry transformation-based adaptive loop filter in JEM
In the JEM, a geometry transformation-based adaptive loop filter (GALF) with block-based filter adaptation is applied. For the luma component, one among 25 filters is selected for each 2×2 block, based on the direction and activity of local gradients.
3.1 Examples of filter shape
In the JEM, up to three diamond filter shapes (as shown in FIGS. 2A, 2B and 2C for the 5×5 diamond, 7×7 diamond and 9×9 diamond, respectively) can be selected for the luma component. An index is signalled at the picture level to indicate the filter shape used for the luma component. For chroma components in a picture, the 5×5 diamond shape is always used.
3.1.1 Block classification
Each 2×2 block is categorized into one out of 25 classes. The classification index C is derived based on its directionality D and a quantized value of activity Â, as follows:

C = 5D + Â.     (1)
To calculate D and Â, gradients of the horizontal, vertical and two diagonal directions are first calculated using 1-D Laplacians:

g_v = Σ_{k=i-2}^{i+3} Σ_{l=j-2}^{j+3} V_{k,l},   V_{k,l} = |2R(k, l) - R(k, l-1) - R(k, l+1)|     (2)
g_h = Σ_{k=i-2}^{i+3} Σ_{l=j-2}^{j+3} H_{k,l},   H_{k,l} = |2R(k, l) - R(k-1, l) - R(k+1, l)|     (3)
g_{d1} = Σ_{k=i-2}^{i+3} Σ_{l=j-2}^{j+3} D1_{k,l},   D1_{k,l} = |2R(k, l) - R(k-1, l-1) - R(k+1, l+1)|     (4)
g_{d2} = Σ_{k=i-2}^{i+3} Σ_{l=j-2}^{j+3} D2_{k,l},   D2_{k,l} = |2R(k, l) - R(k-1, l+1) - R(k+1, l-1)|     (5)

Indices i and j refer to the coordinates of the upper-left sample in the 2×2 block, and R(i, j) indicates a reconstructed sample at coordinate (i, j).
Then the maximum and minimum values of the gradients of the horizontal and vertical directions are set as:

g^max_{h,v} = max(g_h, g_v),   g^min_{h,v} = min(g_h, g_v),     (6)

and the maximum and minimum values of the gradients of the two diagonal directions are set as:

g^max_{d0,d1} = max(g_{d1}, g_{d2}),   g^min_{d0,d1} = min(g_{d1}, g_{d2}).     (7)
To derive the value of the directionality D, these values are compared against each other and with two thresholds t_1 and t_2:

Step 1. If both g^max_{h,v} ≤ t_1·g^min_{h,v} and g^max_{d0,d1} ≤ t_1·g^min_{d0,d1} are true, D is set to 0.
Step 2. If g^max_{h,v}/g^min_{h,v} > g^max_{d0,d1}/g^min_{d0,d1}, continue from Step 3; otherwise continue from Step 4.
Step 3. If g^max_{h,v} > t_2·g^min_{h,v}, D is set to 2; otherwise D is set to 1.
Step 4. If g^max_{d0,d1} > t_2·g^min_{d0,d1}, D is set to 4; otherwise D is set to 3.
The activity value A is calculated as:

A = Σ_{k=i-2}^{i+3} Σ_{l=j-2}^{j+3} (V_{k,l} + H_{k,l}).     (8)

A is further quantized to the range of 0 to 4, inclusive, and the quantized value is denoted as Â.
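For concreteness, the classification above can be sketched as below. This is a minimal, unoptimized reference; the thresholds t1 and t2 and the linear activity quantization are illustrative placeholders rather than the exact JEM constants, and R is assumed to be a 2-D array of reconstructed samples with valid neighbors around the block.

```python
def classify_block(R, i, j, t1=2.0, t2=4.5, activity_shift=13):
    """Return the class index C = 5*D + A_hat for the 2x2 block whose
    upper-left sample is at (i, j); constants are placeholders."""
    gv = gh = gd1 = gd2 = 0
    for k in range(i - 2, i + 4):          # k = i-2 .. i+3, per Eq. (2)-(5)
        for l in range(j - 2, j + 4):      # l = j-2 .. j+3
            gv += abs(2 * R[k][l] - R[k][l - 1] - R[k][l + 1])
            gh += abs(2 * R[k][l] - R[k - 1][l] - R[k + 1][l])
            gd1 += abs(2 * R[k][l] - R[k - 1][l - 1] - R[k + 1][l + 1])
            gd2 += abs(2 * R[k][l] - R[k - 1][l + 1] - R[k + 1][l - 1])

    hv_max, hv_min = max(gh, gv), min(gh, gv)
    d_max, d_min = max(gd1, gd2), min(gd1, gd2)

    if hv_max <= t1 * hv_min and d_max <= t1 * d_min:   # Step 1
        D = 0
    elif hv_max * d_min > d_max * hv_min:               # Step 2, ratio test
        D = 2 if hv_max > t2 * hv_min else 1            # Step 3
    else:
        D = 4 if d_max > t2 * d_min else 3              # Step 4

    A = gv + gh                                         # Eq. (8)
    A_hat = min(4, A >> activity_shift)                 # quantize to 0..4
    return 5 * D + A_hat
```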
For both chroma components in a picture, no classification method is applied, i.e. a single set of ALF coefficients is applied for each chroma component.
3.1.2 Geometric transformations of filter coefficients
Before filtering each 2×2 block, geometric transformations such as rotation or diagonal and vertical flipping are applied to the filter coefficients f (k, l) depending on gradient values calculated for that block. This is equivalent to applying these transformations to the samples in the filter support region. The idea is to make different blocks to which ALF is applied more similar by aligning their directionality.
Three geometric transformations, including diagonal, vertical flip and rotation are introduced:
Diagonal: f_D(k, l) = f(l, k),
Vertical flip: f_V(k, l) = f(k, K-l-1),          (9)
Rotation: f_R(k, l) = f(K-l-1, k).
Herein, K is the size of the filter and 0 ≤ k, l ≤ K-1 are coefficient coordinates, such that location (0, 0) is at the upper-left corner and location (K-1, K-1) is at the lower-right corner. The transformations are applied to the filter coefficients f(k, l) depending on gradient values calculated for that block. The relationship between the transformation and the four gradients of the four directions is summarized in Table 1.
Table 1: Mapping of the gradient calculated for one block and the transformations

Gradient values                        Transformation
g_{d2} < g_{d1} and g_h < g_v          No transformation
g_{d2} < g_{d1} and g_v < g_h          Diagonal
g_{d1} < g_{d2} and g_h < g_v          Vertical flip
g_{d1} < g_{d2} and g_v < g_h          Rotation
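A compact sketch of Eq. (9) and Table 1 is given below; it assumes f is a square K×K coefficient array and that the four gradients have already been computed as in Section 3.1.1.

```python
def transform_filter(f, g_d1, g_d2, g_h, g_v):
    """Apply the geometric transformation of Eq. (9) selected by
    Table 1 to a K x K coefficient array f (list of lists)."""
    K = len(f)
    if g_d2 < g_d1 and g_v < g_h:    # Diagonal: f_D(k, l) = f(l, k)
        return [[f[l][k] for l in range(K)] for k in range(K)]
    if g_d1 < g_d2 and g_h < g_v:    # Vertical flip: f_V(k, l) = f(k, K-l-1)
        return [[f[k][K - 1 - l] for l in range(K)] for k in range(K)]
    if g_d1 < g_d2 and g_v < g_h:    # Rotation: f_R(k, l) = f(K-l-1, k)
        return [[f[K - 1 - l][k] for l in range(K)] for k in range(K)]
    return f                         # g_d2 < g_d1 and g_h < g_v: none
```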
3.1.3 Signaling of filter parameters
In the JEM, GALF filter parameters are signaled for the first CTU, i.e., after the slice header and before the SAO parameters of the first CTU. Up to 25 sets of luma filter coefficients could be signaled. To reduce bits overhead, filter coefficients of different classification can be merged. Also, the GALF coefficients of reference pictures are stored and allowed to be reused as  GALF coefficients of a current picture. The current picture may choose to use GALF coefficients stored for the reference pictures, and bypass the GALF coefficients signaling. In this case, only an index to one of the reference pictures is signaled, and the stored GALF coefficients of the indicated reference picture are inherited for the current picture.
To support GALF temporal prediction, a candidate list of GALF filter sets is maintained. At the beginning of decoding a new sequence, the candidate list is empty. After decoding one picture, the corresponding set of filters may be added to the candidate list. Once the size of the candidate list reaches the maximum allowed value (i.e., 6 in the current JEM), a new set of filters overwrites the oldest set in decoding order; that is, a first-in-first-out (FIFO) rule is applied to update the candidate list. To avoid duplications, a set can only be added to the list when the corresponding picture does not use GALF temporal prediction. To support temporal scalability, there are multiple candidate lists of filter sets, and each candidate list is associated with a temporal layer. More specifically, each array assigned by temporal layer index (TempIdx) may contain filter sets of previously decoded pictures with a TempIdx equal to or lower than that index. For example, the k-th array is assigned to be associated with TempIdx equal to k, and it only contains filter sets from pictures with TempIdx smaller than or equal to k. After coding a certain picture, the filter sets associated with the picture will be used to update the arrays associated with equal or higher TempIdx.
Temporal prediction of GALF coefficients is used for inter-coded frames to minimize signaling overhead. For intra frames, temporal prediction is not available, and a set of 16 fixed filters is assigned to each class. To indicate the usage of the fixed filter, a flag for each class is signaled and, if required, the index of the chosen fixed filter. Even when the fixed filter is selected for a given class, the coefficients of the adaptive filter f(k, l) can still be sent for this class, in which case the coefficients of the filter which will be applied to the reconstructed image are the sum of both sets of coefficients.
The filtering process of the luma component can be controlled at the CU level. A flag is signaled to indicate whether GALF is applied to the luma component of a CU. For the chroma components, whether GALF is applied or not is indicated at the picture level only.
3.1.4 Filtering process
At the decoder side, when GALF is enabled for a block, each sample R(i, j) within the block is filtered, resulting in sample value R′(i, j) as shown below, where L denotes the filter length and f(k, l) denotes the decoded filter coefficients:

R′(i, j) = Σ_{k=-L/2}^{L/2} Σ_{l=-L/2}^{L/2} f(k, l) × R(i+k, j+l).
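As a sketch, the filtering of one sample can be written as below; fixed-point normalization and rounding of the decoded coefficients are deliberately omitted, so this is illustrative rather than bit-exact.

```python
def galf_filter_sample(R, i, j, f, L):
    """Compute the filtered value R'(i, j): a 2-D weighted sum of the
    decoded coefficients f over an (L+1) x (L+1) support.
    f is indexed as f[k + L // 2][l + L // 2]."""
    half = L // 2
    acc = 0
    for k in range(-half, half + 1):
        for l in range(-half, half + 1):
            acc += f[k + half][l + half] * R[i + k][j + l]
    return acc
```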
3.1.5 Determination process for encoder side filter parameters
The overall encoder decision process for GALF is illustrated in FIG. 3. For luma samples of each CU, the encoder decides whether or not GALF is applied, and the appropriate signalling flag is included in the slice header. For chroma samples, the decision to apply the filter is made at the picture level rather than the CU level. Furthermore, chroma GALF for a picture is checked only when luma GALF is enabled for the picture.
4 Examples of a geometry transformation-based adaptive loop filter in VVC
The current design of GALF in VVC has the following major changes compared to that in JEM:
1) The adaptive filter shape is removed. Only the 7×7 filter shape is allowed for the luma component and the 5×5 filter shape for the chroma components.
2) Temporal prediction of ALF parameters and prediction from fixed filters are both removed.
3) For each CTU, a one-bit flag is signaled to indicate whether ALF is enabled or disabled.
4) Calculation of the class index is performed at the 4×4 level instead of 2×2. In addition, as proposed, a sub-sampled Laplacian calculation method for ALF classification is utilized. More specifically, there is no need to calculate the horizontal/vertical/45-degree-diagonal/135-degree-diagonal gradients for each sample within one block. Instead, 1:2 subsampling is utilized.
5 Examples of a region-based adaptive loop filter in AVS2
ALF is the last stage of in-loop filtering. There are two stages in this process. The first stage is filter coefficient derivation. To train the filter coefficients, the encoder classifies reconstructed pixels of the luminance component into 16 regions, and one set of filter coefficients is trained for each category using Wiener-Hopf equations to minimize the mean squared error between the original frame and the reconstructed frame. To reduce the redundancy between these 16 sets of filter coefficients, the encoder adaptively merges them based on the rate-distortion performance. At its maximum, 16 different filter sets can be assigned for the luminance component and only one for the chrominance components. The second stage is a filter decision, which includes both the frame level and LCU level. Firstly, the encoder decides whether frame-level adaptive loop filtering is performed. If frame-level ALF is on, then the encoder further decides whether LCU-level ALF is performed.
5.1 Filter shape
The filter shape adopted in AVS-2 is a 7×7 cross shape superposing a 3×3 square shape, as illustrated in FIG. 5, for both luminance and chrominance components. Each square in FIG. 5 corresponds to a sample. Therefore, a total of 17 samples are used to derive a filtered value for the sample at position C8. Considering the overhead of transmitting the coefficients, a point-symmetrical filter is utilized with only nine coefficients left, {C0, C1, ..., C8}, which reduces the number of filter coefficients to half, as well as the number of multiplications in filtering. The point-symmetrical filter can also reduce the computation for one filtered sample by half, e.g., only 9 multiplications and 14 add operations per filtered sample.
5.2 Region-based adaptive merge
In order to adapt to different coding errors, AVS-2 adopts region-based multiple adaptive loop filters for the luminance component. The luminance component is divided into 16 roughly-equal-size basic regions, where each basic region is aligned with largest coding unit (LCU) boundaries as shown in FIG. 6, and one Wiener filter is derived for each region. The more filters are used, the more distortion is reduced, but the bits used to encode these coefficients increase along with the number of filters. In order to achieve the best rate-distortion performance, these regions can be merged into fewer larger regions, which share the same filter coefficients. In order to simplify the merging process, each region is assigned an index according to a modified Hilbert order based on the image's prior correlations. Two regions with successive indices can be merged based on rate-distortion cost.
The mapping information between regions should be signaled to the decoder. In AVS-2, the number of basic regions is used to represent the merge results, and the filter coefficients are compressed sequentially according to region order. For example, when {0, 1}, {2, 3, 4}, {5, 6, 7, 8, 9} and the remaining basic regions are each merged into one region, only three integers are coded to represent this merge map, i.e., 2, 3, 5.
5.3 Signaling of side information
Multiple switch flags are also used. The sequence switch flag, adaptive_loop_filter_enable, is used to control whether the adaptive loop filter is applied for the whole sequence. The image switch flags, picture_alf_enble[i], control whether ALF is applied for the corresponding i-th image component. Only if picture_alf_enble[i] is enabled will the corresponding LCU-level flags and filter coefficients for that color component be transmitted. The LCU-level flags, lcu_alf_enable[k], control whether ALF is enabled for the corresponding k-th LCU, and are interleaved into the slice data. The decisions for the different-level flags are all based on the rate-distortion cost. This high flexibility enables ALF to improve the coding efficiency much more significantly.
In some embodiments, and for a luma component, there could be up to 16 sets of filter coefficients.
In some embodiments, and for each chroma component (Cb and Cr) , one set of filter coefficients may be transmitted.
6 Bi-directional optical flow (BIO)
6.1 Overview and analysis of BIO
In BIO, motion compensation is first performed to generate the first predictions (in each prediction direction) of the current block. The first predictions are used to derive the spatial gradient, the temporal gradient and the optical flow of each sub-block or pixel within the block, which are then used to generate the second prediction, i.e., the final prediction of the sub-block or pixel. The details are described as follows.
The bi-directional optical flow (BIO) method is a sample-wise motion refinement performed on top of block-wise motion compensation for bi-prediction. In some implementations, the sample-level motion refinement does not use signaling.
Let I^(k) be the luma value from reference k (k = 0, 1) after block motion compensation, and denote ∂I^(k)/∂x and ∂I^(k)/∂y as the horizontal and vertical components of the I^(k) gradient, respectively. Assuming the optical flow is valid, the motion vector field (v_x, v_y) is given by:

∂I^(k)/∂t + v_x·∂I^(k)/∂x + v_y·∂I^(k)/∂y = 0.     Eq. (11)

Combining this optical flow equation with Hermite interpolation for the motion trajectory of each sample results in a unique third-order polynomial that matches both the function values I^(k) and derivatives ∂I^(k)/∂x, ∂I^(k)/∂y at the ends. The value of this polynomial at t = 0 is the BIO prediction:

pred_BIO = (1/2)·(I^(0) + I^(1) + (v_x/2)·(τ_1·∂I^(1)/∂x - τ_0·∂I^(0)/∂x) + (v_y/2)·(τ_1·∂I^(1)/∂y - τ_0·∂I^(0)/∂y)).     Eq. (12)
FIG. 7 shows an example optical flow trajectory in the Bi-directional Optical flow (BIO) method. Here, τ_0 and τ_1 denote the distances to the reference frames. Distances τ_0 and τ_1 are calculated based on POC for Ref_0 and Ref_1: τ_0 = POC(current) - POC(Ref_0), τ_1 = POC(Ref_1) - POC(current). If both predictions come from the same time direction (either both from the past or both from the future), then the signs are different (i.e., τ_0·τ_1 < 0). In this case, BIO is applied only if the prediction is not from the same time moment (i.e., τ_0 ≠ τ_1), both referenced regions have non-zero motion (MVx_0, MVy_0, MVx_1, MVy_1 ≠ 0), and the block motion vectors are proportional to the time distances (MVx_0/MVx_1 = MVy_0/MVy_1 = -τ_0/τ_1).
The motion vector field (v_x, v_y) is determined by minimizing the difference Δ between the values in points A and B. FIG. 7 shows an example of the intersection of the motion trajectory and the reference frame planes. The model uses only the first linear term of a local Taylor expansion for Δ:

Δ = (I^(0) - I^(1)) + v_x·(τ_1·∂I^(1)/∂x + τ_0·∂I^(0)/∂x) + v_y·(τ_1·∂I^(1)/∂y + τ_0·∂I^(0)/∂y).     Eq. (13)

All values in the above equation depend on the sample location, denoted as (i′, j′). Assuming the motion is consistent in the local surrounding area, Δ can be minimized inside the (2M+1)×(2M+1) square window Ω centered on the currently predicted point (i, j), where M is equal to 2:

(v_x, v_y) = argmin_{v_x, v_y} Σ_{[i′, j′] ∈ Ω} Δ²[i′, j′].     Eq. (14)
For this optimization problem, the JEM uses a simplified approach, making first a minimization in the vertical direction and then in the horizontal direction. This results in the following:

v_x = (s_1 + r) > m ? clip3(-thBIO, thBIO, -s_3/(s_1 + r)) : 0     Eq. (15)
v_y = (s_5 + r) > m ? clip3(-thBIO, thBIO, -(s_6 - v_x·s_2/2)/(s_5 + r)) : 0     Eq. (16)

where

s_1 = Σ_{[i′,j′]∈Ω} (τ_1·∂I^(1)/∂x + τ_0·∂I^(0)/∂x)²,
s_3 = Σ_{[i′,j′]∈Ω} (I^(1) - I^(0))·(τ_1·∂I^(1)/∂x + τ_0·∂I^(0)/∂x),
s_2 = Σ_{[i′,j′]∈Ω} (τ_1·∂I^(1)/∂x + τ_0·∂I^(0)/∂x)·(τ_1·∂I^(1)/∂y + τ_0·∂I^(0)/∂y),
s_5 = Σ_{[i′,j′]∈Ω} (τ_1·∂I^(1)/∂y + τ_0·∂I^(0)/∂y)²,
s_6 = Σ_{[i′,j′]∈Ω} (I^(1) - I^(0))·(τ_1·∂I^(1)/∂y + τ_0·∂I^(0)/∂y).     Eq. (17)
In order to avoid division by zero or a very small value, regularization parameters r and m can be introduced in Eq. (15) and Eq. (16), where:

r = 500·4^(d-8)                 Eq. (18)
m = 700·4^(d-8)                 Eq. (19)

Here, d is the bit depth of the video samples.
In order to keep the memory access for BIO the same as for regular bi-predictive motion compensation, all prediction and gradient values, I^(k), ∂I^(k)/∂x, ∂I^(k)/∂y, are calculated for positions inside the current block. FIG. 8A shows an example of access positions outside of a block 800. As shown in FIG. 8A, in Eq. (17), a (2M+1)×(2M+1) square window Ω centered on a currently predicted point on a boundary of the predicted block needs to access positions outside of the block. In the JEM, values of I^(k), ∂I^(k)/∂x, ∂I^(k)/∂y outside of the block are set to be equal to the nearest available value inside the block. For example, this can be implemented as a padding area 801, as shown in FIG. 8B.
With BIO, it is possible that the motion field is refined for each sample. To reduce the computational complexity, a block-based design of BIO is used in the JEM. The motion refinement is calculated based on a 4×4 block. In the block-based BIO, the values of s_n in Eq. (17) of all samples in a 4×4 block are aggregated, and then the aggregated values of s_n are used to derive the BIO motion vector offsets for the 4×4 block. More specifically, the following formula is used for block-based BIO derivation:

s_{n,b_k} = Σ_{(x,y)∈b_k} s_n(x, y).     Eq. (20)

Here, b_k denotes the set of samples belonging to the k-th 4×4 block of the predicted block. The s_n in Eq. (15) and Eq. (16) are replaced by (s_{n,b_k} >> 4) to derive the associated motion vector offsets.
In some scenarios, the MV refinement of BIO may be unreliable due to noise or irregular motion. Therefore, in BIO, the magnitude of the MV refinement is clipped to a threshold value. The threshold value is determined based on whether the reference pictures of the current picture are all from one direction. For example, if all the reference pictures of the current picture are from one direction, the value of the threshold is set to 12×2^(14-d); otherwise, it is set to 12×2^(13-d).
Gradients for BIO can be calculated at the same time as the motion compensation interpolation, using operations consistent with the HEVC motion compensation process (e.g., 2D separable Finite Impulse Response (FIR)). In some embodiments, the input for the 2D separable FIR is the same reference frame sample as for the motion compensation process, with fractional position (fracX, fracY) according to the fractional part of the block motion vector. For the horizontal gradient ∂I/∂x, a signal is first interpolated vertically using BIOfilterS corresponding to the fractional position fracY with de-scaling shift d-8. The gradient filter BIOfilterG is then applied in the horizontal direction corresponding to the fractional position fracX with de-scaling shift by 18-d. For the vertical gradient ∂I/∂y, a gradient filter is applied vertically using BIOfilterG corresponding to the fractional position fracY with de-scaling shift d-8. The signal displacement is then performed using BIOfilterS in the horizontal direction corresponding to the fractional position fracX with de-scaling shift by 18-d. The length of the interpolation filter for gradient calculation (BIOfilterG) and signal displacement (BIOfilterS) can be shorter (e.g., 6-tap) in order to maintain reasonable complexity. Table 2 shows example filters that can be used for gradient calculation of different fractional positions of the block motion vector in BIO. Table 3 shows example interpolation filters that can be used for prediction signal generation in BIO.
Table 2: Exemplary filters for gradient calculations in BIO

Fractional pel position    Interpolation filter for gradient (BIOfilterG)
0                          {8, -39, -3, 46, -17, 5}
1/16                       {8, -32, -13, 50, -18, 5}
1/8                        {7, -27, -20, 54, -19, 5}
3/16                       {6, -21, -29, 57, -18, 5}
1/4                        {4, -17, -36, 60, -15, 4}
5/16                       {3, -9, -44, 61, -15, 4}
3/8                        {1, -4, -48, 61, -13, 3}
7/16                       {0, 1, -54, 60, -9, 2}
1/2                        {-1, 4, -57, 57, -4, 1}
Table 3: Exemplary interpolation filters for prediction signal generation in BIO

Fractional pel position    Interpolation filter for prediction signal (BIOfilterS)
0                          {0, 0, 64, 0, 0, 0}
1/16                       {1, -3, 64, 4, -2, 0}
1/8                        {1, -6, 62, 9, -3, 1}
3/16                       {2, -8, 60, 14, -5, 1}
1/4                        {2, -9, 57, 19, -7, 2}
5/16                       {3, -10, 53, 24, -8, 2}
3/8                        {3, -11, 50, 29, -9, 2}
7/16                       {3, -11, 44, 35, -10, 3}
1/2                        {3, -10, 35, 44, -11, 3}
In the JEM, BIO can be applied to all bi-predicted blocks when the two predictions are from different reference pictures. When Local Illumination Compensation (LIC) is enabled for a CU, BIO can be disabled.
In some embodiments, OBMC is applied for a block after normal MC process. To reduce the computational complexity, BIO may not be applied during the OBMC process. This means that BIO is applied in the MC process for a block when using its own MV and is not applied in the MC process when the MV of a neighboring block is used during the OBMC process.
6.2 Examples of BIO in VTM-3.0
Step 1: Judge whether BIO is applicable (W/H are width/height of current block) 
BIO is not applicable if
○ Current video block is affine coded or ATMVP coded
○ (iPOC -iPOC 0) × (iPOC -iPOC 1) ≥ 0
○ H==4 or (W==4 and H==8)
○ with Weighted Prediction
○ GBi weights are not (1, 1)
BIO is not used if the total SAD between the two reference blocks (denoted as R_0 and R_1) is smaller than a threshold, wherein

SAD = Σ_{(x,y)} |R_0(x, y) - R_1(x, y)|.
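Collecting the conditions of Step 1 into a single predicate gives the sketch below; all parameter names are illustrative, and the SAD threshold is passed in rather than derived (see section 6.3 for the bitDepth-dependent derivation).

```python
def bio_applicable(is_affine, is_atmvp, ipoc, ipoc0, ipoc1,
                   W, H, weighted_pred, gbi_weights,
                   sad, sad_threshold):
    """Return True when BIO may be applied, per Step 1 above."""
    if is_affine or is_atmvp:
        return False
    if (ipoc - ipoc0) * (ipoc - ipoc1) >= 0:     # not a true bi-direction
        return False
    if H == 4 or (W == 4 and H == 8):            # small-block exclusion
        return False
    if weighted_pred or gbi_weights != (1, 1):
        return False
    return sad >= sad_threshold                  # SAD early termination
```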
Step 2: Data preparation
For a WxH block, (W+2) x (H+2) samples are interpolated.
The inner WxH samples are interpolated with the 8-tap interpolation filter as in normal motion compensation.
The four side outer lines of samples (black circles in FIG. 9) are interpolated with the bi-linear filter.
For each position, gradients are calculated on the two reference blocks (R 0 and R 1) .
Gx0 (x, y) = (R0 (x+1, y) -R0 (x-1, y) ) >>4
Gy0 (x, y) = (R0 (x, y+1) -R0 (x, y-1) ) >>4
Gx1 (x, y) = (R1 (x+1, y) -R1 (x-1, y) ) >>4
Gy1 (x, y) = (R1 (x, y+1) -R1 (x, y-1) ) >>4
For each position, internal values are calculated as:
T1= (R0 (x, y) >>6) - (R1 (x, y) >>6) , T2= (Gx0 (x, y) +Gx1 (x, y) ) >>3, T3= (Gy0 (x, y) +Gy1 (x, y) ) >>3; and
B1 (x, y) = T2*T2, B2 (x, y) =T2*T3, B3 (x, y) =-T1*T2, B5 (x, y) =T3*T3, B6 (x, y) =-T1*T3
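A sketch of Step 2 for a single position is given below. Array indexing R[x][y] mirrors the notation above, and the outer bilinear-filtered lines from the data-preparation step are assumed to be already in place.

```python
def bio_step2(R0, R1, x, y):
    """Gradients and internal values of Step 2 for one position."""
    Gx0 = (R0[x + 1][y] - R0[x - 1][y]) >> 4
    Gy0 = (R0[x][y + 1] - R0[x][y - 1]) >> 4
    Gx1 = (R1[x + 1][y] - R1[x - 1][y]) >> 4
    Gy1 = (R1[x][y + 1] - R1[x][y - 1]) >> 4

    T1 = (R0[x][y] >> 6) - (R1[x][y] >> 6)
    T2 = (Gx0 + Gx1) >> 3
    T3 = (Gy0 + Gy1) >> 3

    # B-terms, later accumulated over the 4x4 block to solve for Vx, Vy.
    B = (T2 * T2, T2 * T3, -T1 * T2, T3 * T3, -T1 * T3)  # B1,B2,B3,B5,B6
    return (Gx0, Gy0, Gx1, Gy1), B
```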
Step 3: Calculate prediction for each block
BIO is skipped for a 4×4 block if SAD between the two 4×4 reference blocks is smaller than a threshold.
Calculate Vx and Vy.
Calculate the final prediction for each position in the 4×4 block:
b(x, y) = (Vx·(Gx_0(x, y) - Gx_1(x, y)) + Vy·(Gy_0(x, y) - Gy_1(x, y)) + 1) >> 1
P(x, y) = (R_0(x, y) + R_1(x, y) + b(x, y) + offset) >> shift
Herein, b(x, y) is known as the correction term.
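Step 3 can then be sketched as follows for one position, once Vx and Vy are available for the 4×4 block. The shift and offset follow shift2 = max(3, 15 - bitDepth) as in section 6.3, and the default bit depth of 10 is only an example value.

```python
def bio_step3(R0, R1, Gx0, Gy0, Gx1, Gy1, Vx, Vy, x, y, bit_depth=10):
    """Final bi-prediction with the BIO correction term for one position.
    Gx*/Gy* are 2-D gradient arrays from Step 2."""
    shift = max(3, 15 - bit_depth)
    offset = 1 << (shift - 1)
    b = (Vx * (Gx0[x][y] - Gx1[x][y])
         + Vy * (Gy0[x][y] - Gy1[x][y]) + 1) >> 1      # correction term
    p = (R0[x][y] + R1[x][y] + b + offset) >> shift
    return max(0, min((1 << bit_depth) - 1, p))        # clip to sample range
```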
6.3 Alternative examples of BIO in VTM-3.0
8.3.4 Decoding process for inter blocks
-- If predFlagL0 and predFlagL1 are equal to 1, DiffPicOrderCnt (currPic, refPicList0 [refIdx0] ) *DiffPicOrderCnt (currPic, refPicList1 [refIdx1] ) < 0, MotionModelIdc [xCb] [yCb] is equal to 0 and MergeModeList [merge_idx [xCb] [yCb] ] is not equal to SbCol, set the value of bioAvailableFlag to TRUE.
-- Otherwise, set the value of bioAvailableFlag to FALSE.
-- If bioAvailableFlag is equal to TRUE, the following is applied:
-- The variable shift is set equal to Max (2, 14 -bitDepth) .
-- The variables cuLevelAbsDiffThres and subCuLevelAbsDiffThres are set equal to (1<< (bitDepth –8 + shift) ) *cbWidth*cbHeight and 1<< (bitDepth –3 + shift) . The variable cuLevelSumAbsoluteDiff is set to 0.
-- For xSbIdx=0.. (cbWidth>>2) -1 and ySbIdx=0.. (cbHeight>>2) -1, the variable subCuLevelSumAbsoluteDiff [xSbIdx] [ySbIdx] and the bidirectional optical flow utilization flag bioUtilizationFlag [xSbIdx] [ySbIdx] of the current subblock are derived as:
subCuLevelSumAbsoluteDiff [xSbIdx] [ySbIdx] = ∑ ij Abs (predSamplesL0L [ (xSbIdx<<2) +1+i] [ (ySbIdx<<2) +1+j] -predSamplesL1L [ (xSbIdx<<2) +1+i] [ (ySbIdx<<2) +1+j] ) with i, j = 0.. 3
bioUtilizationFlag [xSbIdx] [ySbIdx] = subCuLevelSumAbsoluteDiff [xSbIdx] [ySbIdx] >= subCuLevelAbsDiffThres
cuLevelSumAbsoluteDiff += subCuLevelSumAbsoluteDiff [xSbIdx] [ySbIdx]
-- If cuLevelSumAbsoluteDiff is smaller than cuLevelAbsDiffThres, set bioAvailableFlag to FALSE.
-- If bioAvailableFlag is equal to TRUE, the prediction samples inside the current luma coding subblock, predSamplesL [xL + xSb] [yL + ySb] with xL = 0.. sbWidth -1 and yL = 0.. sbHeight -1, are derived by invoking the bi-directional optical flow sample prediction process specified in clause 8.3.4.5 with the luma coding subblock width sbWidth, the luma  coding subblock height sbHeight and the sample arrays predSamplesL0L and predSamplesL1L, and the variables predFlagL0, predFlagL1, refIdxL0, refIdxL1.
8.3.4.3 Fractional sample interpolation process
8.3.4.3.1 General
Inputs to this process are:
– a luma location (xSb, ySb ) specifying the top-left sample of the current coding sub-block relative to the top left luma sample of the current picture,
– a variable sbWidth specifying the width of the current coding sub-block in luma samples,
– a variable sbHeight specifying the height of the current coding sub-block in luma samples,
– a luma motion vector mvLX given in 1/16-luma-sample units,
– a chroma motion vector mvCLX given in 1/32-chroma-sample units,
– the selected reference picture sample array refPicLXL and the arrays refPicLXCb and refPicLXCr.
– the bidirectional optical flow enabling flag bioAvailableFlag.
Outputs of this process are:
– an (sbWidth) x (sbHeight) array predSamplesLXL of prediction luma sample values when bioAvailableFlag is FALSE, or an (sbWidth+2) x (sbHeight+2) array predSamplesLXL of prediction luma sample values when bioAvailableFlag is TRUE.
– two (sbWidth /2) x (sbHeight /2) arrays predSamplesLXCb and predSamplesLXCr of prediction chroma sample values.
Let (xIntL, yIntL) be a luma location given in full-sample units and (xFracL, yFracL) be an offset given in 1/16-sample units. These variables are used only in this clause for specifying fractional-sample locations inside the reference sample arrays refPicLXL, refPicLXCb and refPicLXCr.
When bioAvailableFlag is equal to TRUE, for each luma sample location (xL = -1..sbWidth, yL = -1.. sbHeight) inside the prediction luma sample array predSamplesLXL, the corresponding prediction luma sample value predSamplesLXL [xL] [yL] is derived as follows:
– The variables xIntL, yIntL, xFracL and yFracL are derived as follows:
xIntL = xSb -1 + (mvLX [0] >> 4) + xL
yIntL = ySb -1 + (mvLX [1] >> 4) + yL
xFracL = mvLX [0] &15
yFracL = mvLX [1] &15
– The value of bilinearFiltEnabledFlag is derived as follows:
– If xL is equal to -1 or sbWidth, or yL is equal to -1 or sbHeight, set the value of bilinearFiltEnabledFlag to TRUE.
– Else, set the value of bilinearFiltEnabledFlag to FALSE
– The prediction luma sample value predSamplesLXL [xL] [yL] is derived by invoking the process specified in clause 8.3.4.3.2 with (xIntL, yIntL) , (xFracL, yFracL) , refPicLXL and bilinearFiltEnabledFlag as inputs.
When bioAvailableFlag is equal to FALSE, for each luma sample location (xL = 0..sbWidth -1, yL = 0.. sbHeight -1) inside the prediction luma sample array predSamplesLXL, the corresponding prediction luma sample value predSamplesLXL [xL] [yL] is derived as follows:
– The variables xIntL, yIntL, xFracL and yFracL are derived as follows:
xIntL = xSb + (mvLX [0] >> 4) + xL
yIntL = ySb + (mvLX [1] >> 4) + yL
xFracL = mvLX [0] &15
yFracL = mvLX [1] &15
– The variable bilinearFiltEnabledFlag is set to FALSE.
– The prediction luma sample value predSamplesLXL [xL] [yL] is derived by invoking the process specified in clause 8.3.4.3.2 with (xIntL, yIntL) , (xFracL, yFracL) , and refPicLXL and bilinearFiltEnabledFlag as inputs.
8.3.4.5 Bi-directional optical flow prediction process
Inputs to this process are:
– two variables nCbW and nCbH specifying the width and the height of the current coding block,
– two (nCbW+2) x (nCbH+2) luma prediction sample arrays predSamplesL0 and predSamplesL1,
– the prediction list utilization flags, predFlagL0 and predFlagL1,
– the reference indices refIdxL0 and refIdxL1,
– the bidirectional optical flow utilization flags bioUtilizationFlag [xSbIdx ] [ySbIdx] with xSbIdx = 0.. (nCbW>>2) -1, ySbIdx = 0 .. (nCbH>>2) -1
Output of this process is the (nCbW) x (nCbH) array pbSamples of luma prediction sample values.
The variable bitDepth is set equal to BitDepthY.
The variable shift2 is set equal to Max (3, 15 -bitDepth) and the variable offset2 is set equal to 1 << (shift2 -1) .
The variable mvRefineThres is set equal to 1 << (13 -bitDepth) .
For xSbIdx = 0.. (nCbW >> 2) -1 and ySbIdx = 0.. (nCbH >> 2) -1,
– If bioUtilizationFlag [xSbIdx] [ySbIdx] is FALSE, for x=xSb.. xSb+3, y=ySb.. ySb+3, the prediction sample values of the current prediction unit are derived as follows:
pbSamples [x] [y] = Clip3 (0, (1 << bitDepth) -1,
(predSamplesL0 [x] [y] + predSamplesL1 [x] [y] + offset2) >> shift2)
– Otherwise, the prediction sample values of the current prediction unit are derived as follows:
– The location (xSb, ySb ) specifying the top-left sample of the current subblock relative to the top left sample of prediction sample arrays predSamplesL0 and predSampleL1 is derived as follows:
xSb = (xSbIdx<<2) + 1
ySb = (ySbIdx<<2) + 1
– For x=xSb–1.. xSb+4, y=ySb-1.. ySb+4, the followings are applied:
– The locations (hx, vy) for each of the corresponding sample (x, y) inside the prediction sample arrays are derived as follows:
hx = Clip3 (1, nCbW, x )
vy = Clip3 (1, nCbH, y)
– The variables gradientHL0 [x] [y] , gradientVL0 [x] [y] , gradientHL1 [x] [y] and gradientVL1 [x] [y] are derived as follows:
gradientHL0 [x] [y] = (predSamplesL0 [hx +1] [vy] –predSampleL0 [hx-1] [vy] ) >>4
gradientVL0 [x] [y] = (predSampleL0 [hx] [vy+1] - predSampleL0 [hx] [vy-1] ) >>4
gradientHL1 [x] [y] = (predSamplesL1 [hx+1] [vy] – predSampleL1 [hx-1] [vy] ) >>4
gradientVL1 [x] [y] = (predSampleL1 [hx] [vy+1] – predSampleL1 [hx] [vy-1] ) >>4
– The variables temp, tempX and tempY are derived as follows:
temp [x] [y] = (predSamplesL0 [hx] [vy] >>6) – (predSamplesL1 [hx] [vy] >>6)
tempX [x] [y] = (gradientHL0 [x] [y] + gradientHL1 [x] [y] ) >>3
tempY [x] [y] = (gradientVL0 [x] [y] + gradientVL1 [x] [y] ) >>3
– The variables sGx2, sGy2, sGxGy, sGxdI and sGydI are derived as follows:
sGx2 = ∑ xy (tempX [xSb+x] [ySb+y] *tempX [xSb+x] [ySb+y] ) with x, y = -1.. 4
sGy2 = ∑ xy (tempY [xSb+x] [ySb+y] *tempY [xSb+x] [ySb+y] ) with x, y = -1.. 4
sGxGy = ∑ xy (tempX [xSb+x] [ySb+y] *tempY [xSb+x] [ySb+y] ) with x, y = -1.. 4
sGxdI = ∑ xy (-tempX [xSb+x] [ySb+y] *temp [xSb+x] [ySb+y] ) with x, y = -1.. 4
sGydI = ∑ xy (-tempY [xSb+x] [ySb+y] *temp [xSb+x] [ySb+y] ) with x, y = -1.. 4
– The horizontal and vertical motion refinements of the current sub-block are derived as:
vx = sGx2 > 0 ? Clip3 (-mvRefineThres, mvRefineThres, -  (sGxdI<<3) >>Floor (Log2 (sGx2) ) ) : 0
vy = sGy2 > 0 ? Clip3 (-mvRefineThres, mvRefineThres, ( (sGydI<<3) - ( (vx*sGxGym) <<12 + vx*sGxGys) >>1) >>Floor (Log2 (sGy2) ) ) : 0
where sGxGym = sGxGy >> 12 and sGxGys = sGxGy & ( (1<<12) -1) .
For x=xSb-1.. xSb+2, y=ySb-1.. ySb+2, the following applies:
sampleEnh = Round ( (vx* (gradientHL1 [x+1] [y+1] -gradientHL0 [x+1] [y+1] ) ) >>1) + Round ( (vy* (gradientVL1 [x+1] [y+1] -gradientVL0 [x+1] [y+1] ) ) >>1)
pbSamples [x] [y] = Clip3 (0, (1 << bitDepth) -1, (predSamplesL0 [x+1] [y+1] + predSamplesL1 [x+1] [y+1] +sampleEnh + offset2) >> shift2)
7 Prediction refinement with optical flow (PROF)
This contribution proposes a method to refine the sub-block based affine motion compensated prediction with optical flow. After the sub-block based affine motion compensation is performed, the prediction sample is refined by adding a difference derived by the optical flow equation, which is referred to as prediction refinement with optical flow (PROF). The proposed method can achieve inter prediction at pixel-level granularity without increasing the memory access bandwidth.
To achieve a finer granularity of motion compensation, this contribution proposes a method to refine the sub-block based affine motion compensated prediction with optical flow. After the sub-block based affine motion compensation is performed, the luma prediction sample is refined by adding a difference derived by the optical flow equation. The proposed PROF is described in the following four steps.
Step 1) The sub-block-based affine motion compensation is performed to generate sub-block prediction I (i, j) .
Step 2) The spatial gradients g_x(i, j) and g_y(i, j) of the sub-block prediction are calculated at each sample location using a 3-tap filter [-1, 0, 1]:

g_x(i, j) = I(i+1, j) - I(i-1, j)
g_y(i, j) = I(i, j+1) - I(i, j-1)
The sub-block prediction is extended by one pixel on each side for the gradient calculation. To reduce the memory bandwidth and complexity, the pixels on the extended borders are copied from the nearest integer pixel position in the reference picture. Therefore, additional interpolation for the padding region is avoided.
Step 3) The luma prediction refinement is calculated by the optical flow equation.
ΔI(i, j) = g_x(i, j)·Δv_x(i, j) + g_y(i, j)·Δv_y(i, j)
Herein, Δv(i, j) is the difference between the pixel MV computed for sample location (i, j), denoted by v(i, j), and the sub-block MV of the sub-block to which pixel (i, j) belongs, as shown in FIG. 10.
Since the affine model parameters and the pixel location relative to the sub-block center are not changed from sub-block to sub-block, Δv(i, j) can be calculated for the first sub-block and reused for other sub-blocks in the same CU. Let x and y be the horizontal and vertical offsets from the pixel location to the center of the sub-block; Δv(x, y) can then be derived by the following equations:

Δv_x(x, y) = c·x + d·y
Δv_y(x, y) = e·x + f·y
For the 4-parameter affine model,

c = f = (v_1x - v_0x) / w
e = -d = (v_1y - v_0y) / w
For the 6-parameter affine model,

c = (v_1x - v_0x) / w
d = (v_2x - v_0x) / h
e = (v_1y - v_0y) / w
f = (v_2y - v_0y) / h
Herein, (v_0x, v_0y), (v_1x, v_1y) and (v_2x, v_2y) are the top-left, top-right and bottom-left control point motion vectors, and w and h are the width and height of the CU.
Step 4) Finally, the luma prediction refinement is added to the sub-block prediction I(i, j). The final prediction I′ is generated by the following equation:

I′(i, j) = I(i, j) + ΔI(i, j)
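Putting Steps 2-4 together for one sub-block gives the sketch below. I_ext is the sub-block prediction extended by one sample on each side (Step 2), dvx/dvy hold the precomputed Δv(i, j) of Step 3, and the fixed-point precision and clipping of the actual design are omitted, so this is a conceptual sketch only.

```python
def prof_refine(I_ext, dvx, dvy):
    """PROF Steps 2-4 for one N x N sub-block.

    I_ext is the (N+2) x (N+2) extended prediction; the interior
    sample (i, j) sits at I_ext[i+1][j+1]."""
    N = len(dvx)
    out = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            gx = I_ext[i + 2][j + 1] - I_ext[i][j + 1]      # g_x(i, j)
            gy = I_ext[i + 1][j + 2] - I_ext[i + 1][j]      # g_y(i, j)
            dI = gx * dvx[i][j] + gy * dvy[i][j]            # optical flow eq.
            out[i][j] = I_ext[i + 1][j + 1] + dI            # I'(i, j)
    return out
```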
8 Drawbacks of existing implementations
The design of VVC has the following problems:
(1) Different gradient calculation methods, such as [1, -1] and [-1, 2, -1], are utilized in BIO (a.k.a. BDOF), ALF and PROF.
(2) In the BIO process, the gradient calculation is done at the sample level, wherein the gradient is calculated for each sample. Meanwhile, the refined motion vectors (such as Vx and Vy) are derived at the 4×4 sub-block level, which depends on the gradient values of a 6×6 block covering the sub-block. The per-sample calculation of gradients increases the computational complexity.
9 Exemplary methods for interactions of ALF with other coding tools
Embodiments of the presently disclosed technology overcome the drawbacks of existing implementations, thereby providing video coding with higher coding efficiencies. The interaction of adaptive loop filtering with other coding tools, based on the disclosed technology, may enhance both existing and future video coding standards, and is elucidated in the following examples described for various implementations. The examples of the disclosed technology provided below explain general concepts and are not meant to be interpreted as limiting. In an example, unless explicitly indicated to the contrary, the various features described in these examples may be combined.
In the following examples, Shift (x, s) is defined as Shift (x, s) = (x + off) >> s.
In the following examples, SatShift (x, n) is defined as

SatShift (x, n) = (x + offset0) >> n,        if x ≥ 0
SatShift (x, n) = -((-x + offset1) >> n),    if x < 0

In an example, offset0 and/or offset1 are set to (1<<n) >>1 or (1<< (n-1) ) . In another example, offset0 and/or offset1 are set to 0. In another example, offset0=offset1= ( (1<<n) >>1) -1 or ( (1<< (n-1) ) ) -1.
In the following examples, Clip3 (x, Min, Max) is defined as

Clip3 (x, Min, Max) = Min,   if x < Min
Clip3 (x, Min, Max) = Max,   if x > Max
Clip3 (x, Min, Max) = x,     otherwise
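These three helper functions translate directly into code; the sketch below uses rounding offsets of 0, which is one of the options stated above.

```python
def Shift(x, s, off=0):
    """Shift(x, s) = (x + off) >> s, as defined above."""
    return (x + off) >> s

def SatShift(x, n, offset0=0, offset1=0):
    """Right shift applied to the magnitude, preserving sign."""
    if x >= 0:
        return (x + offset0) >> n
    return -((-x + offset1) >> n)

def Clip3(x, Min, Max):
    """Clamp x to the inclusive range [Min, Max]."""
    return Min if x < Min else Max if x > Max else x
```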
It is proposed to align the gradient calculation process in ALF/non-local ALF to that used in decoder-side derivation methods, such as decoder-side motion vector refinement/BIO, etc.
1. It is proposed that the whole or a part of the derivation process of the gradient values used in BIO is aligned with the corresponding derivation process of the gradient values used in ALF, as sketched after this item.
a. In one example, the vertical gradient values in BIO (denoted as g v) are calculated in the same way as the vertical gradient values in ALF.
i. In one example, the vertical gradient calculation is defined as a [-1, 2, -1] filter.
ii. In one example,
g v=Shift (2R (k, l) -R (k, l-off1) -R (k, l+off2) , prec)
wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j) , and variables off1, off2 and prec are integers such as off1=off2=1 and prec = 0.
iii. Alternatively,
g v=SatShift (2R (k, l) -R (k, l-off1) -R (k, l+off2) , prec)
wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j) , variables off1, off2 and prec are integers such as off1=off2=1 and prec = 0.
b. In one example, the horizontal gradient values in BIO (denoted as g h) are calculated in the same way as that utilized to calculate the horizontal gradient values in ALF.
i. In one example, the horizontal gradient calculation is defined as [-1, 2, -1] filter.
ii. In one example,
g h=Shift (2R (k, l) -R (k-off1, l) -R (k+off2, l) , prec) wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j) , variables off1, off2 and prec are integers such as off1=off2=1 and prec = 0;
iii. Alternatively,
g h=SatShift (2R (k, l) -R (k-off1, l) -R (k+off2, l) , prec)
wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j) , variables off1, off2 and prec are integers such as off1=off2=1 and prec = 0.
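Item 1 above amounts to reusing ALF's [-1, 2, -1] Laplacian inside BIO. A minimal sketch, with the stated example values off1 = off2 = 1 and prec = 0, and the Shift helper from above repeated for self-containment:

```python
def Shift(x, s, off=0):
    return (x + off) >> s

def bio_gradients_alf_style(R, k, l, off1=1, off2=1, prec=0):
    """Vertical/horizontal BIO gradients via the ALF-style
    [-1, 2, -1] filter, per items 1.a.ii and 1.b.ii."""
    g_v = Shift(2 * R[k][l] - R[k][l - off1] - R[k][l + off2], prec)
    g_h = Shift(2 * R[k][l] - R[k - off1][l] - R[k + off2][l], prec)
    return g_h, g_v
```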
2. It is proposed that the whole or a part of the derivation process of the gradient values used in PROF is the same as the corresponding derivation process of the gradient values used in ALF.
a. In one example, the vertical gradient values in PROF (denoted as g v) are calculated in the same way as the vertical gradient values in ALF.
i. In one example, the vertical gradient calculation is defined as a [-1, 2, -1] filter.
ii. In one example, g v=Shift (2R (k, l) -R (k, l-off1) -R (k, l+ off2) , prec) wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j) . off1, off2 and prec are integers such as off1=off2=1 and prec = 0;
iii. Alternatively, g v=SatShift (2R (k, l) -R (k, l-off1) -R (k, l+off2) , prec)
b. In one example, the horizontal gradient values in PROF (denoted as g h) are calculated in the same way as the horizontal gradient values in ALF.
i. In one example, the horizontal gradient calculation is defined as a [-1, 2, -1] filter.
ii. In one example, g h=Shift (2R (k, l) -R (k-off1, l) -R (k+ off2, l) , prec) wherein R (i, j) indicates a reconstructed or prediction sample at coordinate (i, j) . off1, off2 and prec are integers such as off1=off2=1 and prec = 0;
iii. Alternatively,
g h=SatShift (2R (k, l) -R (k-off1, l) -R (k+off2, l) , prec)
3. It is proposed that a sub-block level gradient calculation method may be applied to BIO/PROF and other non-ALF coding tools wherein the gradient is not calculated for all samples within one block.
a. In one example, only selected coordinates may be utilized to derive gradient values, such as depicted in FIG. 4.
b. In one example, when gradient values are not calculated for certain coordinates, the associated gradient values may be copied from that associated with its neighbors wherein gradient is calculated.
c. How to copy gradient values from selected samples to those remaining samples may depend on the gradient direction, such as horizontal gradient or vertical gradient.
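Item 3 can be sketched as below. The checkerboard selection of coordinates and the copy-from-left/copy-from-above rules are illustrative choices only (the actual selection could follow the subsampled Laplacian patterns of FIG. 4), and border positions with no computed neighbor are simply left at zero for brevity.

```python
def subsampled_gradients(R, width, height, off=1):
    """Compute gradients only at a 1:2-subsampled set of positions
    and copy each value to an ungradiented neighbor (item 3)."""
    gh = [[0] * width for _ in range(height)]
    gv = [[0] * width for _ in range(height)]
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            if (x + y) % 2 == 0:                  # selected coordinates
                gh[y][x] = R[y][x + off] - R[y][x - off]
                gv[y][x] = R[y + off][x] - R[y - off][x]
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            if (x + y) % 2 == 1:                  # copy, per gradient direction
                gh[y][x] = gh[y][x - 1]           # horizontal: from the left
                gv[y][x] = gv[y - 1][x]           # vertical: from above
    return gh, gv
```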
4. It is proposed that the whole or a part of the derivation process of the gradient values used in ALF is the same as the corresponding derivation process of other coding tools, such as the gradient values used in BIO.
a. In one example, the vertical gradient values in ALF (denoted as g v) are calculated in the same way as the vertical gradient values in BIO.
i. For example,
g v (x, y) =Shift (R (x, y+off1) -R (x, y-off2) , prec) ,
where R (x, y) indicates a reconstructed or prediction sample at coordinate (x, y) , variables off1, off2 and prec are integers such as off1=off2=1 and prec = 0;
ii. Alternatively,
g v (x, y) =SatShift (R (x, y+off1) -R (x, y-off2) , prec) .
iii. Alternatively,
g v (x, y) =Shift (|R (x, y+off1) -R (x, y-off2) |, prec) .
iv. Alternatively,
g v (x, y) =SatShift (|R (x, y+off1) -R (x, y-off2) |, prec) .
b. In one example, the horizontal gradient values in ALF (denoted as g h) are calculated in the same way as the horizontal gradient values in BIO.
i. For example,
g h (x, y) =Shift (R (x+off1, y) -R (x-off2, y) , prec) ,
where R (x, y) indicates a reconstructed or prediction sample at coordinate (x, y) , and off1, off2 and prec are integers such as off1=off2=1 and prec = 0;
ii. Alternatively,
g h (x, y) =SatShift (R (x+off1, y) -R (x-off2, y) , prec) .
iii. Alternatively,
g h (x, y) =Shift (|R (x+off1, y) -R (x-off2, y) |, prec) .
iv. Alternatively,
g h (x, y) =SatShift (|R (x+off1, y) -R (x-off2, y) |, prec) .
c. In one example, the vertical gradient values for all or some pixels in a block are calculated in the same way as the vertical gradient values in BIO, and are averaged (or processed in other ways) to obtain the vertical gradient value for a block used in ALF.
d. In one example, the horizontal gradient values for all or some pixels in a block are calculated in the same way as the horizontal gradient values in BIO, and are averaged (or processed in other ways) to obtain the horizontal gradient value for a block used in ALF. A sketch of this BIO-style gradient calculation is given below.
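Item 4 goes in the opposite direction: ALF adopts BIO's 2-tap difference. A sketch covering examples 4.a.i-iv and 4.b.i-iv, again with the stated example values off1 = off2 = 1 and prec = 0:

```python
def Shift(x, s, off=0):
    return (x + off) >> s

def alf_gradients_bio_style(R, x, y, off1=1, off2=1, prec=0):
    """ALF gradients via BIO's 2-tap difference (items 4.a/4.b),
    including the absolute-value variants of examples iii/iv."""
    g_v = Shift(R[x][y + off1] - R[x][y - off2], prec)
    g_h = Shift(R[x + off1][y] - R[x - off2][y], prec)
    g_v_abs = Shift(abs(R[x][y + off1] - R[x][y - off2]), prec)
    g_h_abs = Shift(abs(R[x + off1][y] - R[x - off2][y]), prec)
    return g_h, g_v, g_h_abs, g_v_abs
```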
5. It is proposed that the whole or a part of the padding method used to pad the out-of-range samples used to derive the gradients in ALF is the same as the corresponding padding method used to pad the out-of-range samples used to derive the gradients in BIO.
a. Alternatively, the whole or a part of the padding method used to pad the out-of-range samples used to derive the gradients in BIO is the same as the corresponding padding method used to pad the out-of-range samples used to derive the gradients in ALF.
6. It is proposed that the whole or a part of the padding method used to pad the out-of-range samples used to derive the gradients in PROF is the same as the corresponding padding method used to pad the out-of-range samples used to derive the gradients in BIO.
a. Alternatively, the whole or a part of the padding method used to pad the out-of-range samples used to derive the gradients in BIO is the same as the corresponding padding method used to pad the out-of-range samples used to derive the gradients in PROF.
7. It is proposed that the whole or a part of the padding method used to pad the out-of-range samples used to derive the gradients in PROF is the same as the corresponding padding method used to pad the out-of-range samples used to derive the gradients in ALF.
a. Alternatively, the whole or a part of the padding method used to pad the out-of-range samples used to derive the gradients in ALF is the same as the corresponding padding method used to pad the out-of-range samples used to derive the gradients in PROF. A single shared routine, as sketched below, would satisfy all three items.
8. The proposed methods may also be applicable to other coding tools that rely on the calculation of gradients.
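As an illustration of the sub-block level gradient calculation in item 3, consider the following Python sketch. It is a non-normative example: the choice of even rows as the selected coordinates, the [1, -1]-style differences, the clamping at block borders, and the copy-from-the-row-above rule are all assumptions made for concreteness.

    def subblock_gradients(R, w, h):
        # R[y][x]: reconstructed or prediction samples of a w x h block.
        g_h = [[0] * w for _ in range(h)]
        g_v = [[0] * w for _ in range(h)]
        for y in range(0, h, 2):  # selected coordinates: even rows only
            for x in range(w):
                xm, xp = max(x - 1, 0), min(x + 1, w - 1)  # clamp at borders
                ym, yp = max(y - 1, 0), min(y + 1, h - 1)
                g_h[y][x] = R[y][xp] - R[y][xm]  # horizontal difference
                g_v[y][x] = R[yp][x] - R[ym][x]  # vertical difference
        for y in range(1, h, 2):  # remaining rows: copy along the vertical direction
            for x in range(w):
                g_h[y][x] = g_h[y - 1][x]
                g_v[y][x] = g_v[y - 1][x]
        return g_h, g_v

Roughly half of the per-sample gradient computations are skipped here, which is the intended complexity saving of the sub-block approach.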
The examples described above may be incorporated in the context of the methods described below, e.g., methods 1100 and 1150, which may be implemented at a video decoder or a video encoder.
FIG. 11A shows a flowchart of an exemplary method for video processing. The method 1100 includes, at step 1102, configuring, for a current video block, a first derivation process for deriving a first gradient value used in a first coding tool based on a second derivation process for deriving a second gradient value used in a second coding tool that is different from the first coding tool.
The method 1100 includes, at step 1104, reconstructing, based on the first derivation process and the first coding tool, the current video block from a corresponding bitstream representation. In some embodiments, at least one of the first coding tool and the second coding tool relates to a pixel filtering process, and the first gradient value and the second gradient value are indicative of a directional change in an intensity or a color component over a subset of samples in the current video block.
In some embodiments, the pixel filtering process is an adaptive loop filtering process.
In some embodiments, the first coding tool is a bi-directional optical flow (BIO) refinement and the second coding tool is an adaptive loop filtering (ALF) process.
In some embodiments, the first coding tool is a prediction refinement with optical flow (PROF) process and the second coding tool is an adaptive loop filtering (ALF) process.
In some embodiments, the first coding tool is an adaptive loop filtering (ALF) process and the second coding tool is a bi-directional optical flow (BIO) refinement.
In some embodiments, the first and second derivation processes comprise a vertical gradient value calculation or a horizontal gradient value calculation. In an example, the vertical gradient value calculation or the horizontal gradient value calculation is based on a [-1, 2, -1] filter.
In some embodiments, the first derivation process comprises a sub-block level gradient value calculation. In an example, the first derivation process is not applied to each sample of the current video block.
FIG. 11B shows a flowchart of an exemplary method for video processing. The method 1150 includes, at step 1152, configuring, for a current video block, a first padding process used in a first coding tool based on a second padding process used in a second coding tool that is different from the first coding tool.
The method 1150 includes, at step 1154, reconstructing, based on the first padding process and the first coding tool, the current video block from a corresponding bitstream representation. In some embodiments, the first padding process and the second padding process comprise padding out-of-range samples used in a calculation of a gradient value that is indicative of a directional change in an intensity or a color component over a subset of samples in the current video block.
In some embodiments, the first coding tool is a bi-directional optical flow (BIO) refinement and the second coding tool is an adaptive loop filtering (ALF) process.
In some embodiments, the first coding tool is an adaptive loop filtering (ALF) process and the second coding tool is a bi-directional optical flow (BIO) refinement.
In some embodiments, the first coding tool is a bi-directional optical flow (BIO) refinement and the second coding tool is a prediction refinement with optical flow (PROF) process.
In some embodiments, the first coding tool is a prediction refinement with optical flow (PROF) process and the second coding tool is a bi-directional optical flow (BIO) refinement.
In some embodiments, the first coding tool is a prediction refinement with optical flow (PROF) process and the second coding tool is an adaptive loop filtering (ALF) process.
In some embodiments, the first coding tool is an adaptive loop filtering (ALF) process and the second coding tool is a prediction refinement with optical flow (PROF) process.
10 Example implementations of the disclosed technology
FIG. 12 is a block diagram of a video processing apparatus 1200. The apparatus 1200 may be used to implement one or more of the methods described herein. The apparatus 1200 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on. The apparatus 1200 may include one or more processors 1202, one or more memories 1204 and video processing hardware 1206. The processor (s) 1202 may be configured to implement one or more methods (including, but not limited to, methods 1100 and 1150) described in the present document. The memory (memories) 1204 may be used for storing data and code used for implementing the methods and techniques described herein. The video processing hardware 1206 may be used to implement, in hardware circuitry, some techniques described in the present document.
In some embodiments, the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 12.
FIG. 13 is a flowchart for an example method 1300 of video processing. The method 1300 includes configuring (1302) , for a conversion between a first block of video and a bitstream representation of the first block, a first derivation process for deriving a first gradient value used in a first coding tool to be aligned with a second derivation process for deriving a second gradient value used in a second coding tool that is different from the first coding tool; and performing (1304) the conversion based on the configured first derivation process.
In some embodiments, the whole or a part of the first derivation process is aligned with the corresponding whole or part of the second derivation process.
In some embodiments, the first derivation process and the second derivation process comprise the same vertical gradient value calculation for calculating vertical gradient values and/or the same horizontal gradient value calculation for calculating horizontal gradient values.
In some embodiments, the first coding tool is a bi-directional optical flow (BDOF) refinement, and the second coding tool is an adaptive loop filtering (ALF) process.
In some embodiments, the first coding tool is an ALF process, and the second coding tool is a BDOF refinement.
In some embodiments, the vertical gradient value calculation and/or the horizontal gradient value calculation are based on a [-1, 2, -1] filter.
In some embodiments, the vertical gradient values in BDOF (g_v) and/or the horizontal gradient values in BDOF (g_h) are calculated by using a function Shift(x, n), which is defined as
Shift(x, n) = (x + offset0) >> n,
where x is a variable, offset0 is set to (1 << n) >> 1 or 1 << (n-1), or offset0 is set to 0, or offset0 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
In some embodiments, the vertical gradient values in BDOF (g_v) are calculated as follows:
g_v = Shift(2R(k, l) - R(k, l-off1) - R(k, l+off2), prec),
wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, the horizontal gradient values in BDOF (g_h) are calculated as follows:
g_h = Shift(2R(k, l) - R(k-off1, l) - R(k+off2, l), prec),
wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, off1 = off2 = 1 and prec = 0.
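Purely as an illustration, the Shift-based BDOF-style gradients above can be sketched in Python as follows, with the choices offset0 = 0, off1 = off2 = 1 and prec = 0 taken from the options in the text; the array layout R[l][k] and the restriction to interior samples are assumptions of the sketch.

    def shift(x, n, offset0=0):
        # Shift(x, n) = (x + offset0) >> n; offset0 = 0 is one option above.
        return (x + offset0) >> n

    def bdof_gradients(R, k, l, off1=1, off2=1, prec=0):
        # [-1, 2, -1] gradients at an interior sample (k, l);
        # R[l][k] holds the reconstructed or prediction sample at (k, l).
        g_v = shift(2 * R[l][k] - R[l - off1][k] - R[l + off2][k], prec)
        g_h = shift(2 * R[l][k] - R[l][k - off1] - R[l][k + off2], prec)
        return g_v, g_h

With prec = 0 and offset0 = 0 the shift is the identity, so the sketch reduces to the plain [-1, 2, -1] differences.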
In some embodiments, the vertical gradient values in BDOF (g_v) and/or the horizontal gradient values in BDOF (g_h) are calculated by using a function SatShift(x, n), which is defined as
SatShift(x, n) = (x + offset0) >> n, if x >= 0,
SatShift(x, n) = -((-x + offset1) >> n), if x < 0,
where x is a variable, offset0 and/or offset1 are set to (1 << n) >> 1 or 1 << (n-1), or offset0 and/or offset1 are set to 0, or offset0 = offset1 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
In some embodiments, the vertical gradient values in BDOF (g_v) are calculated as follows:
g_v = SatShift(2R(k, l) - R(k, l-off1) - R(k, l+off2), prec),
wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, the horizontal gradient values in BDOF (g_h) are calculated as follows:
g_h = SatShift(2R(k, l) - R(k-off1, l) - R(k+off2, l), prec),
wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, off1 = off2 = 1 and prec = 0.
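A matching sketch of the SatShift variant, whose two branches carry the separate rounding offsets offset0 and offset1 described above (again non-normative):

    def sat_shift(x, n, offset0=0, offset1=0):
        # Shift the magnitude and restore the sign, with independent
        # rounding offsets for the non-negative and negative branches.
        if x >= 0:
            return (x + offset0) >> n
        return -((-x + offset1) >> n)

    # Example with off1 = off2 = 1 and prec = 0 on a toy 3x3 block R[l][k].
    R = [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
    g_v = sat_shift(2 * R[1][1] - R[0][1] - R[2][1], 0)  # -> 0

Unlike a plain arithmetic right shift, SatShift rounds the negative branch toward zero in the same way as the positive branch.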
In some embodiments, the first coding tool is a prediction refinement with optical flow (PROF) process, which is applied to affine-coded blocks, and the second coding tool is an adaptive loop filtering (ALF) process.
In some embodiments, the vertical gradient value calculation and/or the horizontal gradient value calculation are based on a [-1, 2, -1] filter.
In some embodiments, the vertical gradient values in PROF (g_v) and/or the horizontal gradient values in PROF (g_h) are calculated by using a function Shift(x, n), which is defined as
Shift(x, n) = (x + offset0) >> n,
where x is a variable, offset0 is set to (1 << n) >> 1 or 1 << (n-1), or offset0 is set to 0, or offset0 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
In some embodiments, the vertical gradient values in PROF (g_v) are calculated as follows:
g_v = Shift(2R(k, l) - R(k, l-off1) - R(k, l+off2), prec),
wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, the horizontal gradient values in PROF (g_h) are calculated as follows:
g_h = Shift(2R(k, l) - R(k-off1, l) - R(k+off2, l), prec),
wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, off1 = off2 = 1 and prec = 0.
In some embodiments, the vertical gradient values in PROF (g_v) and/or the horizontal gradient values in PROF (g_h) are calculated by using a function SatShift(x, n), which is defined as
SatShift(x, n) = (x + offset0) >> n, if x >= 0,
SatShift(x, n) = -((-x + offset1) >> n), if x < 0,
where x is a variable, offset0 and/or offset1 are set to (1 << n) >> 1 or 1 << (n-1), or offset0 and/or offset1 are set to 0, or offset0 = offset1 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
In some embodiments, the vertical gradient values in PROF (g_v) are calculated as follows:
g_v = SatShift(2R(k, l) - R(k, l-off1) - R(k, l+off2), prec),
wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, the horizontal gradient values in PROF (g_h) are calculated as follows:
g_h = SatShift(2R(k, l) - R(k-off1, l) - R(k+off2, l), prec),
wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, off1 = off2 = 1 and prec = 0.
In some embodiments, the first coding tool is an adaptive loop filtering (ALF) process, and the second coding tool is a bi-directional optical flow (BDOF) refinement.
In some embodiments, the vertical gradient value calculation and/or the horizontal gradient value calculation are based on a [1, -1] filter.
In some embodiments, the vertical gradient values in ALF (g_v) and/or the horizontal gradient values in ALF (g_h) are calculated by using a function Shift(x, n), which is defined as
Shift(x, n) = (x + offset0) >> n,
where x is a variable, offset0 is set to (1 << n) >> 1 or 1 << (n-1), or offset0 is set to 0, or offset0 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
In some embodiments, the vertical gradient values in ALF (g_v) are calculated as follows:
g_v(x, y) = Shift(R(x, y+off1) - R(x, y-off2), prec),
where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, the vertical gradient values in ALF (g_v) are calculated as follows:
g_v(x, y) = Shift(|R(x, y+off1) - R(x, y-off2)|, prec),
where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, the horizontal gradient values in ALF (g_h) are calculated as follows:
g_h(x, y) = Shift(R(x+1, y) - R(x-1, y), prec),
where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variable prec is an integer.
In some embodiments, the horizontal gradient values in ALF (g_h) are calculated as follows:
g_h(x, y) = Shift(|R(x+1, y) - R(x-1, y)|, prec),
where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variable prec is an integer.
In some embodiments, off1 = off2 = 1 and prec = 0.
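For concreteness, here is a sketch of the [1, -1]-based ALF gradients just listed, covering both the signed and the absolute-value variants; the inline shift helper, the array layout R[y][x], and the restriction to interior samples are assumptions of the sketch.

    def alf_gradients(R, x, y, off1=1, off2=1, prec=0, use_abs=False, offset0=0):
        # [1, -1]-style ALF gradients at an interior sample (x, y);
        # use_abs selects the |R(...) - R(...)| variants above.
        shift = lambda v, n: (v + offset0) >> n  # Shift(x, n) with offset0 = 0
        d_v = R[y + off1][x] - R[y - off2][x]
        d_h = R[y][x + 1] - R[y][x - 1]
        if use_abs:
            d_v, d_h = abs(d_v), abs(d_h)
        return shift(d_v, prec), shift(d_h, prec)

The SatShift variants are obtained by replacing the inline shift with the sat_shift helper sketched earlier.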
In some embodiments, the vertical gradient values in ALF (g_v) and/or the horizontal gradient values in ALF (g_h) are calculated by using a function SatShift(x, n), which is defined as
SatShift(x, n) = (x + offset0) >> n, if x >= 0,
SatShift(x, n) = -((-x + offset1) >> n), if x < 0,
where x is a variable, offset0 and/or offset1 are set to (1 << n) >> 1 or 1 << (n-1), or offset0 and/or offset1 are set to 0, or offset0 = offset1 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
In some embodiments, the vertical gradient values in ALF (g_v) are calculated as follows:
g_v(x, y) = SatShift(R(x, y+off1) - R(x, y-off2), prec),
where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, the vertical gradient values in ALF (g_v) are calculated as follows:
g_v(x, y) = SatShift(|R(x, y+off1) - R(x, y-off2)|, prec),
where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
In some embodiments, the horizontal gradient values in ALF (g_h) are calculated as follows:
g_h(x, y) = SatShift(R(x+1, y) - R(x-1, y), prec),
where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variable prec is an integer.
In some embodiments, the horizontal gradient values in ALF (g_h) are calculated as follows:
g_h(x, y) = SatShift(|R(x+1, y) - R(x-1, y)|, prec),
where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variable prec is an integer.
In some embodiments, off1 = off2 = 1 and prec = 0.
In some embodiments, the vertical gradient values for all or some samples in the first block are calculated in the same way as the vertical gradient values in BDOF and averaged to obtain the vertical gradient value for the first block used in ALF.
In some embodiments, the horizontal gradient values for all or some samples in the first block are calculated in the same way as the horizontal gradient values in BDOF and averaged to obtain the horizontal gradient value for the first block used in ALF.
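One simple way to realize this aggregation, sketched under the assumption that an integer mean is used (the text permits averaging or other processing):

    def block_gradient(g, x0, y0, bw, bh):
        # Average per-sample gradient values g[y][x] over the bw x bh block
        # with top-left corner (x0, y0) to obtain one block-level value for ALF.
        total = sum(g[y][x] for y in range(y0, y0 + bh)
                            for x in range(x0, x0 + bw))
        return total // (bw * bh)

The per-sample values g would come from a BDOF-style [-1, 2, -1] calculation such as the bdof_gradients sketch above.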
FIG. 14 is a flowchart for an example method 1400 of video processing. The method 1400 includes deriving (1402) , for a conversion between a first block of video and a bitstream representation of the first block, gradient values used in one or more coding tools by applying a sub-block level gradient calculation process, wherein the gradient values are derived for partial samples within prediction blocks of the first block; and performing (1404) the conversion based on the derived gradient values.
In some embodiments, the one or more coding tools include at least one of a bi-directional optical flow (BDOF) refinement, a prediction refinement with optical flow (PROF) process, and other non-ALF coding tools.
In some embodiments, only samples at selected coordinates are used to derive the gradient values.
In some embodiments, when gradient values are not calculated for samples at certain coordinates, the gradient values for the samples at those coordinates are copied from those of neighboring samples for which gradient values are calculated.
In some embodiments, how gradient values are copied from the selected samples to the remaining samples depends on the gradient direction, including at least one of the horizontal gradient or the vertical gradient.
FIG. 15 is a flowchart for an example method 1500 of video processing. The method 1500 includes configuring (1502) , for a conversion between a first block of video and a bitstream representation of the first block, a first padding process in a first coding tool to be aligned with a second padding process in a second coding tool that is different from the first coding tool, wherein the first padding process is for padding samples out of range used to derive gradient values used in the first coding tool, and the second padding process is for padding samples out of range used to derive gradient values used in the second coding tool; and performing (1504) the conversion based on the configured first padding process.
In some embodiments, the whole or a part of the first padding process is aligned with the corresponding whole or part of the second padding process.
In some embodiments, the first coding tool is a bi-directional optical flow (BDOF) refinement, and the second coding tool is an adaptive loop filtering (ALF) process.
In some embodiments, the first coding tool is an adaptive loop filtering (ALF) process, and the second coding tool is a bi-directional optical flow (BDOF) refinement.
In some embodiments, the first coding tool is a bi-directional optical flow (BDOF) refinement, and the second coding tool is a prediction refinement with optical flow (PROF) process.
In some embodiments, the first coding tool is a prediction refinement with optical flow (PROF) process, and the second coding tool is a bi-directional optical flow (BDOF) refinement.
In some embodiments, the first coding tool is a prediction refinement with optical flow (PROF) process, and the second coding tool is an adaptive loop filtering (ALF) process.
In some embodiments, the first coding tool is an adaptive loop filtering (ALF) process, and the second coding tool is a prediction refinement with optical flow (PROF) process.
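One way to keep the padding aligned across tools is to route every out-of-range access through a single shared helper. The clamp-style (repetitive) padding below is an illustrative assumption; the embodiments only require that the two coding tools use the same method, whatever it is.

    def pad_sample(R, x, y, w, h):
        # Shared out-of-range handling for gradient derivation: clamp the
        # coordinates to the w x h bounds (repetitive padding). Calling this
        # one helper from both coding tools keeps their padding identical.
        return R[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]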
In some embodiments, the first coding tool and/or the second coding tool include coding tools that rely on calculation of gradient values.
In some embodiments, the conversion generates the first block of video from the bitstream representation.
In some embodiments, the conversion generates the bitstream representation from the first block of video.
From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited except as by the appended claims.
Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) . A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. As used herein, the singular forms “a” , “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Additionally, the use of “or” is intended to include “and/or” , unless the context clearly indicates otherwise.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (58)

  1. A method for processing video, comprising:
    configuring, for a conversion between a first block of video and a bitstream representation of the first block, a first derivation process for deriving a first gradient value used in a first coding tool to be aligned with a second derivation process for deriving a second gradient value used in a second coding tool that is different from the first coding tool; and
    performing the conversion based on the configured first derivation process.
  2. The method of claim 1, wherein the whole or a part of the first derivation process is aligned with the corresponding whole or part of the second derivation process.
  3. The method of claim 1 or 2, wherein the first derivation process and the second derivation process comprise the same vertical gradient value calculation for calculating vertical gradient values and/or the same horizontal gradient value calculation for calculating horizontal gradient values.
  4. The method of claim 3, wherein the first coding tool is a bi-directional optical flow (BDOF) refinement, and the second coding tool is an adaptive loop filtering (ALF) process.
  5. The method of claim 3, wherein the first coding tool is an ALF process, and the second coding tool is a BDOF refinement.
  6. The method of claim 4 or 5, wherein the vertical gradient value calculation and/or the horizontal gradient value calculation are based on a [-1, 2, -1] filter.
  7. The method of claim 6, wherein the vertical gradient values in BDOF (g_v) and/or the horizontal gradient values in BDOF (g_h) are calculated by using a function Shift(x, n), which is defined as
    Shift(x, n) = (x + offset0) >> n,
    where x is a variable, offset0 is set to (1 << n) >> 1 or 1 << (n-1), or offset0 is set to 0, or offset0 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
  8. The method of claim 7, wherein the vertical gradient values in BDOF (g_v) are calculated as follows:
    g_v = Shift(2R(k, l) - R(k, l-off1) - R(k, l+off2), prec),
    wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  9. The method of claim 7, wherein the horizontal gradient values in BDOF (g_h) are calculated as follows:
    g_h = Shift(2R(k, l) - R(k-off1, l) - R(k+off2, l), prec),
    wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  10. The method of claim 8 or 9, wherein off1 = off2 = 1 and prec = 0.
  11. The method of claim 6, wherein the vertical gradient values in BDOF (g_v) and/or the horizontal gradient values in BDOF (g_h) are calculated by using a function SatShift(x, n), which is defined as
    SatShift(x, n) = (x + offset0) >> n, if x >= 0,
    SatShift(x, n) = -((-x + offset1) >> n), if x < 0,
    where x is a variable, offset0 and/or offset1 are set to (1 << n) >> 1 or 1 << (n-1), or offset0 and/or offset1 are set to 0, or offset0 = offset1 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
  12. The method of claim 11, wherein the vertical gradient values in BDOF (g_v) are calculated as follows:
    g_v = SatShift(2R(k, l) - R(k, l-off1) - R(k, l+off2), prec),
    wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  13. The method of claim 11, wherein the horizontal gradient values in BDOF (g_h) are calculated as follows:
    g_h = SatShift(2R(k, l) - R(k-off1, l) - R(k+off2, l), prec),
    wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  14. The method of claim 12 or 13, wherein off1 = off2 = 1 and prec = 0.
  15. The method of claim 14, wherein the first coding tool is a prediction refinement with optical flow (PROF) process, which is applied to affine-coded blocks, and the second coding tool is an adaptive loop filtering (ALF) process.
  16. The method of claim 15, wherein the vertical gradient value calculation and/or the horizontal gradient value calculation are based on a [-1, 2, -1] filter.
  17. The method of claim 16, wherein the vertical gradient values in PROF (g_v) and/or the horizontal gradient values in PROF (g_h) are calculated by using a function Shift(x, n), which is defined as
    Shift(x, n) = (x + offset0) >> n,
    where x is a variable, offset0 is set to (1 << n) >> 1 or 1 << (n-1), or offset0 is set to 0, or offset0 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
  18. The method of claim 17, wherein the vertical gradient values in PROF (g_v) are calculated as follows:
    g_v = Shift(2R(k, l) - R(k, l-off1) - R(k, l+off2), prec),
    wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  19. The method of claim 17, wherein the horizontal gradient values in PROF (g_h) are calculated as follows:
    g_h = Shift(2R(k, l) - R(k-off1, l) - R(k+off2, l), prec),
    wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  20. The method of claim 18 or 19, wherein off1 = off2 = 1 and prec = 0.
  21. The method of claim 16, wherein the vertical gradient values in PROF (g_v) and/or the horizontal gradient values in PROF (g_h) are calculated by using a function SatShift(x, n), which is defined as
    SatShift(x, n) = (x + offset0) >> n, if x >= 0,
    SatShift(x, n) = -((-x + offset1) >> n), if x < 0,
    where x is a variable, offset0 and/or offset1 are set to (1 << n) >> 1 or 1 << (n-1), or offset0 and/or offset1 are set to 0, or offset0 = offset1 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
  22. The method of claim 21, wherein the vertical gradient values in PROF (g_v) are calculated as follows:
    g_v = SatShift(2R(k, l) - R(k, l-off1) - R(k, l+off2), prec),
    wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  23. The method of claim 21, wherein the horizontal gradient values in PROF (g_h) are calculated as follows:
    g_h = SatShift(2R(k, l) - R(k-off1, l) - R(k+off2, l), prec),
    wherein R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  24. The method of claim 22 or 23, wherein off1 = off2 = 1 and prec = 0.
  25. The method of claim 3, wherein the first coding tool is an adaptive loop filtering (ALF) process, and the second coding tool is a bi-directional optical flow (BDOF) refinement.
  26. The method of claim 25, wherein the vertical gradient value calculation and/or the horizontal gradient value calculation are based on a [1, -1] filter.
  27. The method of claim 26, wherein the vertical gradient values in ALF (g_v) and/or the horizontal gradient values in ALF (g_h) are calculated by using a function Shift(x, n), which is defined as
    Shift(x, n) = (x + offset0) >> n,
    where x is a variable, offset0 is set to (1 << n) >> 1 or 1 << (n-1), or offset0 is set to 0, or offset0 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
  28. The method of claim 27, wherein the vertical gradient values in ALF (g_v) are calculated as follows:
    g_v(x, y) = Shift(R(x, y+off1) - R(x, y-off2), prec),
    where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  29. The method of claim 27, wherein the vertical gradient values in ALF (g_v) are calculated as follows:
    g_v(x, y) = Shift(|R(x, y+off1) - R(x, y-off2)|, prec),
    where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  30. The method of claim 27, wherein the horizontal gradient values in ALF (g_h) are calculated as follows:
    g_h(x, y) = Shift(R(x+1, y) - R(x-1, y), prec),
    where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variable prec is an integer.
  31. The method of claim 27, wherein the horizontal gradient values in ALF (g_h) are calculated as follows:
    g_h(x, y) = Shift(|R(x+1, y) - R(x-1, y)|, prec),
    where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variable prec is an integer.
  32. The method of any of claims 28-31, wherein off1 = off2 = 1 and prec = 0.
  33. The method of claim 26, wherein the vertical gradient values in ALF (g_v) and/or the horizontal gradient values in ALF (g_h) are calculated by using a function SatShift(x, n), which is defined as
    SatShift(x, n) = (x + offset0) >> n, if x >= 0,
    SatShift(x, n) = -((-x + offset1) >> n), if x < 0,
    where x is a variable, offset0 and/or offset1 are set to (1 << n) >> 1 or 1 << (n-1), or offset0 and/or offset1 are set to 0, or offset0 = offset1 = ((1 << n) >> 1) - 1 or (1 << (n-1)) - 1, and n is an integer.
  34. The method of claim 33, wherein the vertical gradient values in ALF (g_v) are calculated as follows:
    g_v(x, y) = SatShift(R(x, y+off1) - R(x, y-off2), prec),
    where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  35. The method of claim 33, wherein the vertical gradient values in ALF (g_v) are calculated as follows:
    g_v(x, y) = SatShift(|R(x, y+off1) - R(x, y-off2)|, prec),
    where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variables off1, off2 and prec are integers.
  36. The method of claim 33, wherein the horizontal gradient values in ALF (g_h) are calculated as follows:
    g_h(x, y) = SatShift(R(x+1, y) - R(x-1, y), prec),
    where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variable prec is an integer.
  37. The method of claim 33, wherein the horizontal gradient values in ALF (g_h) are calculated as follows:
    g_h(x, y) = SatShift(|R(x+1, y) - R(x-1, y)|, prec),
    where x and y are variables, R(i, j) indicates a reconstructed or prediction sample at coordinate (i, j), and the variable prec is an integer.
  38. The method of any of claims 34-37, wherein off1 = off2 = 1 and prec = 0.
  39. The method of claim 25, wherein the vertical gradient values for all or some samples in the first block are calculated in the same way as the vertical gradient values in BDOF and averaged to obtain the vertical gradient value for the first block used in ALF.
  40. The method of claim 25, wherein the horizontal gradient values for all or some samples in the first block are calculated in the same way as the horizontal gradient values in BDOF and averaged to obtain the horizontal gradient value for the first block used in ALF.
  41. A method for processing video, comprising:
    deriving, for a conversion between a first block of video and a bitstream representation of the first block, gradient values used in one or more coding tools by applying a sub-block level gradient calculation process, wherein the gradient values are derived for partial samples within prediction blocks of the first block; and
    performing the conversion based on the derived gradient values.
  42. The method of claim 41, wherein the one or more coding tools include at least one of a bi-directional optical flow (BDOF) refinement, a prediction refinement with optical flow (PROF) process and other non-ALF coding tools.
  43. The method of claim 41 or 42, wherein only samples at selected coordinates are used to derive the gradient values.
  44. The method of any of claims 41-43, wherein when gradient values are not calculated for samples at certain coordinates, the gradient values for the samples at those coordinates are copied from those of neighboring samples for which gradient values are calculated.
  45. The method of claim 44, wherein how gradient values are copied from the selected samples to the remaining samples depends on the gradient direction, including at least one of the horizontal gradient or the vertical gradient.
  46. A method for processing video, comprising:
    configuring, for a conversion between a first block of video and a bitstream representation of the first block, a first padding process in a first coding tool to be aligned with a second padding process in a second coding tool that is different from the first coding tool, wherein the first padding process is for padding samples out of range used to derive gradient values used in the first coding tool, and the second padding process is for padding samples out of range used to derive gradient values used in the second coding tool; and
    performing the conversion based on the configured first padding process.
  47. The method of claim 46, wherein the whole or a part of the first padding process is aligned with the corresponding whole or part of the second padding process.
  48. The method of claim 47, wherein the first coding tool is a bi-directional optical flow (BDOF) refinement, and the second coding tool is an adaptive loop filtering (ALF) process.
  49. The method of claim 47, wherein the first coding tool is an adaptive loop filtering (ALF) process, and the second coding tool is a bi-directional optical flow (BDOF) refinement.
  50. The method of claim 47, wherein the first coding tool is a bi-directional optical flow (BDOF) refinement, and the second coding tool is a prediction refinement with optical flow (PROF) process.
  51. The method of claim 47, wherein the first coding tool is a prediction refinement with optical flow (PROF) process, and the second coding tool is a bi-directional optical flow (BDOF) refinement.
  52. The method of claim 47, wherein the first coding tool is a prediction refinement with optical flow (PROF) process, and the second coding tool is an adaptive loop filtering (ALF) process.
  53. The method of claim 47, wherein the first coding tool is an adaptive loop filtering (ALF) process, and the second coding tool is a prediction refinement with optical flow (PROF) process.
  54. The method of any of claims 1-53, wherein the first coding tool and/or the second coding tool include coding tools that rely on calculation of gradient values.
  55. The method of any one of claims 1-54, wherein the conversion generates the first block of video from the bitstream representation.
  56. The method of any one of claims 1-54, wherein the conversion generates the bitstream representation from the first block of video.
  57. An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of claims 1 to 56.
  58. A computer program product stored on a non-transitory computer readable media, the computer program product including program code for carrying out the method in any one of claims 1 to 56.
PCT/CN2020/082038 2019-03-29 2020-03-30 Interactions between adaptive loop filtering and other coding tools WO2020200159A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080025296.6A CN113632480B (en) 2019-03-29 2020-03-30 Interaction between adaptive loop filtering and other codec tools

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2019/080356 2019-03-29
CN2019080356 2019-03-29

Publications (1)

Publication Number Publication Date
WO2020200159A1 true WO2020200159A1 (en) 2020-10-08

Family

ID=72664728

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/082038 WO2020200159A1 (en) 2019-03-29 2020-03-30 Interactions between adaptive loop filtering and other coding tools

Country Status (2)

Country Link
CN (1) CN113632480B (en)
WO (1) WO2020200159A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021054886A1 (en) * 2019-09-20 2021-03-25 Telefonaktiebolaget Lm Ericsson (Publ) Methods of video encoding and/or decoding with bidirectional optical flow simplification on shift operations and related apparatus
WO2023025178A1 (en) * 2021-08-24 2023-03-02 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus, and medium for video processing
WO2024039803A1 (en) * 2022-08-18 2024-02-22 Beijing Dajia Internet Information Technology Co., Ltd. Methods and devices for adaptive loop filter

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018166357A1 (en) * 2017-03-16 2018-09-20 Mediatek Inc. Method and apparatus of motion refinement based on bi-directional optical flow for video coding
WO2018221631A1 (en) * 2017-06-02 2018-12-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device, decoding device, encoding method, and decoding method
WO2019010156A1 (en) * 2017-07-03 2019-01-10 Vid Scale, Inc. Motion-compensation prediction based on bi-directional optical flow
CN109417620A (en) * 2016-03-25 2019-03-01 松下知识产权经营株式会社 For using signal dependent form adaptive quantizing by moving image encoding and decoded method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107925775A (en) * 2015-09-02 2018-04-17 联发科技股份有限公司 The motion compensation process and device of coding and decoding video based on bi-directional predicted optic flow technique
CN115278232A (en) * 2015-11-11 2022-11-01 三星电子株式会社 Method for decoding video and method for encoding video
US20170374369A1 (en) * 2016-06-24 2017-12-28 Mediatek Inc. Methods and Apparatuses of Decoder Side Intra Mode Derivation
US10757442B2 (en) * 2017-07-05 2020-08-25 Qualcomm Incorporated Partial reconstruction based template matching for motion vector derivation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109417620A (en) * 2016-03-25 2019-03-01 松下知识产权经营株式会社 For using signal dependent form adaptive quantizing by moving image encoding and decoded method and device
WO2018166357A1 (en) * 2017-03-16 2018-09-20 Mediatek Inc. Method and apparatus of motion refinement based on bi-directional optical flow for video coding
WO2018221631A1 (en) * 2017-06-02 2018-12-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device, decoding device, encoding method, and decoding method
WO2019010156A1 (en) * 2017-07-03 2019-01-10 Vid Scale, Inc. Motion-compensation prediction based on bi-directional optical flow

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHING-YEH CHEN ET AL.: "Description of Core Experiment 5 (CE5): Adaptive Loop Filter", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 13TH MEETING: MARRAKECH, MA, 9–18 JAN. 2019, 18 January 2019 (2019-01-18), XP030202509 *
VADIM SEREGIN ET AL.: "CE5: Summary Report on Adaptive Loop Filter", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 14TH MEETING: GENEVA, CH, 19–27 MARCH 2019, 27 March 2019 (2019-03-27), XP030203579 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021054886A1 (en) * 2019-09-20 2021-03-25 Telefonaktiebolaget Lm Ericsson (Publ) Methods of video encoding and/or decoding with bidirectional optical flow simplification on shift operations and related apparatus
US11936904B2 (en) 2019-09-20 2024-03-19 Telefonaktiebolaget Lm Ericsson (Publ) Methods of video encoding and/or decoding with bidirectional optical flow simplification on shift operations and related apparatus
WO2023025178A1 (en) * 2021-08-24 2023-03-02 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus, and medium for video processing
WO2024039803A1 (en) * 2022-08-18 2024-02-22 Beijing Dajia Internet Information Technology Co., Ltd. Methods and devices for adaptive loop filter

Also Published As

Publication number Publication date
CN113632480B (en) 2024-07-12
CN113632480A (en) 2021-11-09

Similar Documents

Publication Publication Date Title
US11516497B2 (en) Bidirectional optical flow based video coding and decoding
US11611747B2 (en) Adaptive loop filtering for video coding
US11889108B2 (en) Gradient computation in bi-directional optical flow
WO2020084464A1 (en) Decoder side motion vector derivation based on reference pictures
CA3134075A1 (en) Nonlinear adaptive loop filtering in video processing
US12113984B2 (en) Motion vector derivation between color components
CN112997500B (en) Improvements to region-based adaptive loop filters
WO2020200159A1 (en) Interactions between adaptive loop filtering and other coding tools
WO2020003260A1 (en) Boundary enhancement for sub-block
US20230156186A1 (en) Boundary location for adaptive loop filtering
US20240244226A1 (en) Method, apparatus, and medium for video processing
WO2020143826A1 (en) Interaction between interweaved prediction and other coding tools
US20240251108A1 (en) Method, device, and medium for video processing
WO2020140949A1 (en) Usage of interweaved prediction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20783666

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.02.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20783666

Country of ref document: EP

Kind code of ref document: A1