CN113302938B - Integer MV motion compensation - Google Patents
- Publication number: CN113302938B; application number: CN202080008723.XA
- Authority: CN (China)
- Prior art keywords: block, prediction, equal, rounding, video
- Legal status: Active
Classifications
- H04N19/00 — methods or arrangements for coding, decoding, compressing or decompressing digital video signals, including:
- H04N19/70 — characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N19/136 — adaptive coding characterised by incoming video signal characteristics or properties
- H04N19/159 — prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/523 — motion estimation or motion compensation with sub-pixel accuracy
Abstract
Integer MV motion compensation is described. One example method includes: determining a characteristic of a first block of video for a conversion between the first block and a bitstream representation of the first block; performing a rounding process on a motion vector (MV) of the first block based on the characteristic of the first block; and performing the conversion by using the rounded MV.
Description
Cross Reference to Related Applications
The present application timely claims the priority and benefit of International Patent Application No. PCT/CN2019/071396 filed on January 11, 2019, International Patent Application No. PCT/CN2019/071503 filed on January 12, 2019, and International Patent Application No. PCT/CN2019/077171 filed on March 6, 2019, in accordance with applicable patent law and/or rules pursuant to the Paris Convention. The entire disclosures of International Patent Applications No. PCT/CN2019/071396, No. PCT/CN2019/071503, and No. PCT/CN2019/077171 are incorporated by reference as part of the present disclosure.
Technical Field
This document relates to video codec technology.
Background
Digital video accounts for the largest share of bandwidth usage on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, the bandwidth demand for digital video usage is expected to continue to grow.
Disclosure of Invention
The disclosed techniques may be used by video decoder or encoder embodiments in which block shape interpolation ordering techniques are used to improve interpolation.
In one example aspect, a method of video bitstream processing is disclosed. The method comprises the following steps: determining a shape of the first video block; determining an interpolation order based on the shape of the first video block, the interpolation order indicating an order in which horizontal interpolation and vertical interpolation are performed; and performing horizontal interpolation and vertical interpolation on the first video block in order according to the interpolation order to reconstruct a decoded representation of the first video block.
In another example aspect, a method of video bitstream processing includes: determining a characteristic of a motion vector associated with the first video block; determining an interpolation order based on the characteristics of the motion vectors, the interpolation order indicating an order in which horizontal interpolation and vertical interpolation are performed; and performing horizontal interpolation and vertical interpolation on the first video block in order according to the interpolation order to reconstruct a decoded representation of the first video block.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining, by a processor, a size characteristic of a first video block; determining, by the processor, that a first interpolation filter is to be applied to the first video block based on the determination of the size characteristic; and performing further processing of the first video block using the first interpolation filter.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining, by the processor, a first characteristic of the first video block; determining, by the processor, that a first interpolation filter is to be applied to the first video block based on the first characteristic; performing further processing of the first video block using a first interpolation filter; determining, by the processor, a second characteristic of the second video block; determining, by the processor, that a second interpolation filter is to be applied to the second video block based on the second characteristic, the first interpolation filter and the second interpolation filter being different short tap filters; and performing further processing of the second video block using a second interpolation filter.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining, by the processor, a characteristic of the first video block, the characteristic including one or more of: size information of the first video block, prediction direction of the first video block, or motion information of the first video block; rounding a Motion Vector (MV) associated with the first video block to integer-pixel precision or half-pixel precision based on the determination of the characteristic of the first video block; and performing further processing of the first video block using the rounded motion vector.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining, by the processor, that the first video block is encoded in a Merge mode; rounding motion information associated with the first video block to integer precision based on a determination that the first video block is encoded in the Merge mode to generate modified motion information; and performing a motion compensation process on the first video block using the modified motion information.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining a characteristic of the first video block, the characteristic being one or both of: the size of the first video block or the shape of the first video block; modifying a motion vector associated with the first video block to integer-pixel precision or half-pixel precision to generate a modified motion vector; and performing further processing of the first video block using the modified motion vector.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining a characteristic of the first video block, the characteristic being one or both of: the size of the first video block or the prediction direction of the first video block; determining Merge mode with Motion Vector Difference (MMVD) side information based on the determination of the characteristic of the first video block; and performing further processing of the first video block using the MMVD side information.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining a characteristic of the first video block, the characteristic being one or both of: the size of the first video block or the shape of the first video block; determining a threshold number of half-pixel Motion Vector (MV) components or quarter-pixel MV components to be constrained based on the determination of the characteristic of the first video block; and performing further processing of the first video block using the threshold number.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining a characteristic of the first video block, the characteristic comprising a size of the first video block; modifying a Motion Vector (MV) associated with the first video block from a fractional precision to an integer precision based on the determination of the characteristic of the first video block; and performing motion compensation on the first video block using the modified MV.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining a first size of a first video block; determining a first precision of a Motion Vector (MV) associated with the first video block based on the determination of the first size; determining a second size of the second video block, the first size and the second size being different sizes; determining a second precision of the MV associated with the second video block based on the determination of the second size, the first precision and the second precision being different precision; and performing further processing of the first video block using the first size and performing further processing of the second video block using the second size.
In another example aspect, a method of video processing is disclosed. The method comprises the following steps: determining a characteristic of a first block of video for a conversion between the first block and a bitstream representation of the first block; determining a filter having interpolation filter parameters for interpolation of the first block based on the characteristic of the first block; and performing the conversion by using the filter having the interpolation filter parameters.
In another example aspect, a method of video processing is disclosed. The method comprises the following steps: fetching reference pixels of a first reference block from a reference picture for a conversion between a first block of video and a bitstream representation of the first block, wherein the first reference block is smaller than a second reference block required for motion compensation of the first block; padding the first reference block with padding pixels to generate the second reference block; and performing the conversion by using the generated second reference block.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining a characteristic of a first block of video for a conversion between the first block and a bitstream representation of the first block; performing a rounding process on a motion vector (MV) of the first block based on the characteristic of the first block; and performing the conversion by using the rounded MV.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining a characteristic of a first block of video for a conversion between the first block and a bitstream representation of the first block; performing motion compensation on the first block using an MV having a first precision; and storing an MV having a second precision for the first block, wherein the first precision is different from the second precision.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining a codec mode of a first block of video for a conversion between the first block and a bitstream representation of the first block; performing a rounding process on a motion vector (MV) of the first block if the codec mode of the first block satisfies a predetermined rule; and performing motion compensation of the first block by using the rounded MV.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: generating a first Motion Vector (MV) candidate list for the first block for conversion between the first block of video and a bitstream representation of the first block; performing a rounding procedure on MVs of the at least one candidate before adding the at least one candidate to the first MV candidate list; and performing the conversion by using the first MV candidate list.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining a characteristic of a first block of video for a conversion between the first block and a bitstream representation of the first block; determining a constraint parameter to be applied to the first block based on the characteristic of the first block, wherein the constraint parameter constrains a maximum number of fractional motion vector (MV) components of the first block; and performing the conversion by using the constraint parameter.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: acquiring an indication that signaling of at least one of bi-prediction and uni-prediction is not allowed when a characteristic of a block satisfies a predetermined rule; determining a characteristic of a first block of video for a conversion between the first block and a bitstream representation of the first block; and performing the conversion by using the indication when the characteristic of the first block satisfies the predetermined rule.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: signaling an indication that at least one of bi-prediction and uni-prediction is not allowed when a characteristic of a block satisfies a predetermined rule; determining a characteristic of a first block of video for a conversion between the first block and a bitstream representation of the first block; and performing the conversion based on the characteristic of the first block, wherein at least one of bi-prediction and uni-prediction is disabled during the conversion when the characteristic of the first block satisfies the predetermined rule.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining whether fractional motion vector (MV) or motion vector difference (MVD) precision is allowed for a first block for a conversion between the first block of video and a bitstream representation of the first block; signaling an Advanced Motion Vector Resolution (AMVR) parameter for the first block based on the determination; and performing the conversion by using the AMVR parameter.
In another example aspect, a method for video bitstream processing is disclosed. The method comprises the following steps: determining whether fractional motion vector (MV) or motion vector difference (MVD) precision is allowed for a first block for a conversion between the first block of video and a bitstream representation of the first block; obtaining an Advanced Motion Vector Resolution (AMVR) parameter for the first block based on the determination; and performing the conversion by using the AMVR parameter.
In another example aspect, the above-described method may be implemented by a video decoder apparatus including a processor.
In another example aspect, the above-described method may be implemented by a video encoder apparatus comprising a processor for decoding encoded video during a video encoding process.
In yet another example aspect, the methods may be embodied in the form of processor-executable instructions and stored on a computer-readable program medium.
These and other aspects are further described in this document.
Drawings
FIG. 1 is a diagram of a quadtree plus binary tree (QTBT) structure.
Fig. 2 shows an example derivation process of Merge candidate list construction.
FIG. 3 illustrates example positions of spatial Merge candidates.
Fig. 4 shows an example of candidate pairs that consider redundancy checks for spatial Merge candidates.
Fig. 5 (a) and 5 (b) show examples of the position of the second prediction unit (PU) for N×2N and 2N×N partitions.
Fig. 6 is a graphical representation of motion vector scaling of a temporal Merge candidate.
Fig. 7 shows example candidate positions C0 and C1 for the temporal Merge candidate.
Fig. 8 shows an example of a combined bi-prediction Merge candidate.
Fig. 9 shows an example of a derivation process of motion vector prediction candidates.
Fig. 10 is a diagram of motion vector scaling of spatial motion vector candidates.
Fig. 11 shows an example of alternative temporal motion vector prediction (ATMVP) for a coding unit (CU).
FIG. 12 shows an example of one CU with four sub-blocks (A-D) and its neighboring blocks (a-D).
Fig. 13 shows proposed non-adjacent Merge candidates in one example.
Fig. 14 shows proposed non-adjacent Merge candidates in one example.
Fig. 15 shows proposed non-adjacent Merge candidates in one example.
Fig. 16 shows an example of integer-sample and fractional-sample positions for quarter-sample luminance interpolation.
Fig. 17 is a block diagram of an example of a video processing apparatus.
Fig. 18 shows a block diagram of an example embodiment of a video encoder.
Fig. 19 is a flowchart of an example of a video bitstream processing method.
Fig. 20 is a flowchart of an example of a video bitstream processing method.
Fig. 21 shows an example of repeated boundary pixels of a reference block before interpolation.
Fig. 22 is a flowchart of an example of a video bitstream processing method.
Fig. 23 is a flowchart of an example of a video bitstream processing method.
Fig. 24 is a flowchart of an example of a video bitstream processing method.
Fig. 25 is a flowchart of an example of a video bitstream processing method.
Fig. 26 is a flowchart of an example of a video bitstream processing method.
Fig. 27 is a flowchart of an example of a video bitstream processing method.
Fig. 28 is a flowchart of an example of a video bitstream processing method.
Fig. 29 is a flowchart of an example of a video bitstream processing method.
Fig. 30 is a flowchart of an example of a video bitstream processing method.
Fig. 31 is a flowchart of an example of a video bitstream processing method.
Fig. 32 is a flowchart of an example of a video bitstream processing method.
Fig. 33 is a flowchart of an example of a video bitstream processing method.
Fig. 34 is a flowchart of an example of a video bitstream processing method.
Detailed Description
This document provides various techniques that may be used by a decoder of a video bitstream to improve the quality of decompressed or decoded digital video. Furthermore, the video encoder may also implement these techniques during the encoding process in order to reconstruct the decoded frames for further encoding.
Chapter headings are used in this document for ease of understanding and do not limit the embodiments and techniques to the corresponding chapters. As such, embodiments of one section may be combined with embodiments of other sections.
1. Summary of the invention
This patent document relates to video codec technologies. In particular, it relates to interpolation in video codecs. It may be applied to existing video coding standards such as HEVC, or to the standard to be finalized, Versatile Video Coding (VVC). It may also be applicable to future video codec standards or video codecs.
2. Background
Video codec standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, video codec standards have been based on a hybrid video codec structure in which temporal prediction plus transform coding is utilized. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into reference software named the Joint Exploration Model (JEM). In April 2018, the Joint Video Expert Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard, targeting a 50% bitrate reduction compared to HEVC.
Fig. 18 is a block diagram of an example implementation of a video encoder.
2.1 Quadtree plus binary tree (QTBT) block structure with larger CTUs
In HEVC, CTUs are partitioned into CUs by using a quadtree structure denoted as a coding tree to accommodate various local features. At the CU level it is decided whether to use inter-picture (temporal) or intra-picture (spatial) prediction for coding the picture region. Each CU may be further divided into one, two, or four PUs according to PU partition types. Within one PU the same prediction process is applied and the relevant information is sent to the decoder on a PU basis. After obtaining the residual block by applying the prediction process based on the PU partition type, the CU may be partitioned into Transform Units (TUs) according to another quadtree structure similar to the coding tree for the CU. One of the key features of the HEVC structure is that it has multiple partition concepts, including CUs, PUs, and TUs.
The QTBT structure removes the concepts of multiple partition types, i.e., it removes the separation of the CU, PU, and TU concepts and supports more flexibility for CU partition shapes. In the QTBT block structure, a CU can have either a square or rectangular shape. As shown in fig. 1, a coding tree unit (CTU) is first partitioned by a quadtree structure. The quadtree leaf nodes are further partitioned by a binary tree structure. There are two splitting types in binary tree partitioning: symmetric horizontal splitting and symmetric vertical splitting. The binary tree leaf nodes are called coding units (CUs), and this segmentation is used for prediction and transform processing without any further partitioning. This means that the CU, PU, and TU have the same block size in the QTBT codec block structure. In JEM, a CU sometimes consists of codec blocks (CBs) of different color components, e.g., one CU contains one luma CB and two chroma CBs in the case of P and B slices of the 4:2:0 chroma format, and sometimes consists of a CB of a single component, e.g., one CU contains only one luma CB or only two chroma CBs in the case of I slices.
The following parameters are defined for the QTBT partitioning scheme.
- CTU size: the root node size of a quadtree, the same concept as in HEVC
- MinQTSize: the minimum allowed quadtree leaf node size
- MaxBTSize: the maximum allowed binary tree root node size
- MaxBTDepth: the maximum allowed binary tree depth
- MinBTSize: the minimum allowed binary tree leaf node size
In one example of the QTBT partitioning structure, the CTU size is set to 128×128 luma samples with two corresponding 64×64 blocks of chroma samples, MinQTSize is set to 16×16, MaxBTSize is set to 64×64, MinBTSize (for both width and height) is set to 4, and MaxBTDepth is set to 4. The quadtree partitioning is applied to the CTU first to generate quadtree leaf nodes. The quadtree leaf nodes may have a size from 16×16 (i.e., MinQTSize) to 128×128 (i.e., the CTU size). If the leaf quadtree node is 128×128, it will not be further split by the binary tree since the size exceeds MaxBTSize (i.e., 64×64). Otherwise, the leaf quadtree node can be further partitioned by the binary tree. Therefore, the quadtree leaf node is also the root node of the binary tree, with a binary tree depth of 0. When the binary tree depth reaches MaxBTDepth (i.e., 4), no further splitting is considered. When the binary tree node has a width equal to MinBTSize (i.e., 4), no further horizontal splitting is considered. Similarly, when the binary tree node has a height equal to MinBTSize, no further vertical splitting is considered. The leaf nodes of the binary tree are further processed by the prediction and transform processes without any further partitioning. In JEM, the maximum CTU size is 256×256 luma samples.
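As an illustration only, the following sketch encodes the splitting rules above with the example parameter values; the function and variable names are hypothetical and not taken from any reference software.

```python
# QTBT split options under the example parameters (CTU 128, MinQTSize 16,
# MaxBTSize 64, MaxBTDepth 4, MinBTSize 4). Illustrative sketch only.
MIN_QT_SIZE = 16
MAX_BT_SIZE = 64
MAX_BT_DEPTH = 4
MIN_BT_SIZE = 4

def allowed_splits(width, height, bt_depth, is_qt_node):
    """Return the partitioning options available for a block."""
    splits = []
    # Quadtree splitting: only square quadtree nodes larger than MinQTSize.
    if is_qt_node and width == height and width > MIN_QT_SIZE:
        splits.append("QT")
    # Binary splitting: the node must not exceed MaxBTSize or MaxBTDepth.
    if max(width, height) <= MAX_BT_SIZE and bt_depth < MAX_BT_DEPTH:
        if width > MIN_BT_SIZE:
            splits.append("BT_VER")   # symmetric vertical split
        if height > MIN_BT_SIZE:
            splits.append("BT_HOR")   # symmetric horizontal split
    return splits

# A 128x128 quadtree leaf exceeds MaxBTSize, so only quadtree splitting remains:
print(allowed_splits(128, 128, bt_depth=0, is_qt_node=True))   # ['QT']
print(allowed_splits(32, 64, bt_depth=1, is_qt_node=False))    # ['BT_VER', 'BT_HOR']
```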
Fig. 1 (left) shows an example of block partitioning by using QTBT, and fig. 1 (right) shows the corresponding tree representation. The solid lines indicate quadtree splitting and the dashed lines indicate binary tree splitting. In each splitting (i.e., non-leaf) node of the binary tree, one flag is signaled to indicate which splitting type (i.e., horizontal or vertical) is used, where 0 indicates horizontal splitting and 1 indicates vertical splitting. For the quadtree splitting, there is no need to indicate the splitting type, since quadtree splitting always splits a block both horizontally and vertically to produce 4 sub-blocks of equal size.
In addition, the QTBT scheme supports the ability for luminance and chrominance to have separate QTBT structures. Currently, for P and B slices, the luma and chroma CTBs in one CTU share the same QTBT structure. However, for the I-slice, the luma CTB is partitioned into CUs by the QTBT structure, and the chroma CTB is partitioned into chroma CUs by another QTBT structure. This means that a CU in an I slice includes a codec block for a luma component or a codec block for two chroma components, and a CU in a P or B slice includes a codec block for all three color components.
In HEVC, inter prediction for small blocks is restricted to reduce memory access for motion compensation, such that bi-prediction is not supported for 4×8 and 8×4 blocks and inter prediction is not supported for 4×4 blocks. In the QTBT of JEM, these restrictions are removed.
2.2 Inter prediction for HEVC/H.265
Each inter-predicted PU has motion parameters for one or two reference picture lists. The motion parameters include a motion vector and a reference picture index. The use of one of the two reference picture lists may also be signaled using inter_pred_idc. Motion vectors may be explicitly coded as deltas relative to predictors.
When a CU is coded with skip mode, one PU is associated with the CU, and there are no significant residual coefficients, no coded motion vector delta, and no reference picture index. A Merge mode is specified whereby the motion parameters for the current PU are obtained from neighboring PUs, including spatial and temporal candidates. The Merge mode can be applied to any inter-predicted PU, not only to skip mode. The alternative to the Merge mode is the explicit transmission of motion parameters, where the motion vector (more precisely, the motion vector difference compared to a motion vector predictor), the corresponding reference picture index for each reference picture list, and the use of the reference picture lists are signaled explicitly per PU. In this disclosure, such a mode is referred to as advanced motion vector prediction (AMVP).
When signaling indicates that one of the two reference picture lists is to be used, the PU is produced from one block of samples. This is referred to as "uni-prediction". Uni-prediction is available for both P slices and B slices.
When signaling indicates that both of the reference picture lists are to be used, the PU is produced from two blocks of samples. This is referred to as "bi-prediction". Bi-prediction is available for B slices only.
The following text provides detailed information about inter prediction modes specified in HEVC. The description will start from the Merge mode.
2.2.1Merge mode
2.2.1.1 Derivation of candidates for Merge mode
When predicting a PU using the Merge mode, an index to an entry in the Merge candidate list is parsed from the bitstream and used to retrieve motion information. The construction of this list is specified in the HEVC standard and can be summarized in the sequence of the following steps:
Step 1: initial candidate derivation
Step 1.1: spatial candidate derivation
Step 1.2: Redundancy check for spatial candidates
Step 1.3: Temporal candidate derivation
Step 2: Insertion of additional candidates
Step 2.1: creating bi-prediction candidates
Step 2.2: inserting zero motion candidates
These steps are also schematically depicted in fig. 2. For spatial Merge candidate derivation, a maximum of four Merge candidates are selected among candidates located in five different positions. For temporal Merge candidate derivation, a maximum of one Merge candidate is selected among two candidates. Since a constant number of candidates per PU is assumed at the decoder, additional candidates are generated when the number of candidates obtained from step 1 does not reach the maximum number of Merge candidates (MaxNumMergeCand) signaled in the slice header. Since the number of candidates is constant, the index of the best Merge candidate is encoded using truncated unary (TU) binarization. If the size of a CU is equal to 8, all PUs of the current CU share a single Merge candidate list, which is identical to the Merge candidate list of the 2N×2N prediction unit.
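The list construction can be outlined in code as follows. This is a hedged sketch in which the spatial and temporal candidates are assumed to have been derived already (per sections 2.2.1.2 and 2.2.1.3) and a candidate is represented as a simple tuple of motion information.

```python
# Outline of the HEVC Merge-list construction steps listed above.
def build_merge_list(spatial_cands, temporal_cand, slice_type, max_num):
    candidates = []
    # Steps 1.1/1.2: up to four spatial candidates, pruning duplicates.
    for cand in spatial_cands:                     # order: A1, B1, B0, A0, B2
        if cand is not None and cand not in candidates:
            candidates.append(cand)
        if len(candidates) == 4:
            break
    # Step 1.3: at most one temporal candidate (position C0, else C1).
    if temporal_cand is not None and len(candidates) < max_num:
        candidates.append(temporal_cand)
    # Step 2.1: combined bi-predictive candidates (B slices only) would be
    # inserted here by pairing list-0 and list-1 parameters of existing entries.
    # Step 2.2: zero-motion candidates pad the list to its constant size.
    ref_idx = 0
    while len(candidates) < max_num:
        candidates.append(("zero_mv", ref_idx))    # placeholder representation
        ref_idx += 1
    return candidates

# Example: three spatial candidates (one duplicate) plus a temporal candidate.
mv = lambda x, y, ref: ("mv", x, y, ref)
print(build_merge_list([mv(1, 2, 0), mv(1, 2, 0), mv(3, 4, 0)], mv(5, 6, 0), "B", 5))
```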
Hereinafter, the operations associated with the above steps will be described in detail.
2.2.1.2 Spatial candidate derivation
In the derivation of spatial Merge candidates, a maximum of four Merge candidates are selected among candidates located in the positions depicted in fig. 3. The order of derivation is A1, B1, B0, A0, and B2. Position B2 is considered only when any PU of positions A1, B1, B0, A0 is not available (e.g., because it belongs to another slice or tile) or is intra-coded. After the candidate at position A1 is added, the addition of the remaining candidates is subject to a redundancy check, which ensures that candidates with the same motion information are excluded from the list, improving codec efficiency. Fig. 4 shows an example of candidate pairs considered for the redundancy check of spatial Merge candidates. To reduce computational complexity, not all possible candidate pairs are considered in the mentioned redundancy check. Instead, only the pairs linked with an arrow in fig. 4 are considered, and a candidate is added to the list only if the corresponding candidate used for the redundancy check does not have the same motion information. Another source of duplicate motion information is the "second PU" associated with partitions other than 2N×2N. As an example, fig. 5 depicts the second PU for the cases of N×2N and 2N×N. When the current PU is partitioned as N×2N, the candidate at position A1 is not considered for list construction. Indeed, adding this candidate would result in two prediction units having the same motion information, which is redundant to having just one PU in a coding unit. Similarly, position B1 is not considered when the current PU is partitioned as 2N×N.
2.2.1.3 Time-domain candidate derivation
In this step, only one candidate is added to the list. Particularly, in the derivation of this temporal Merge candidate, a scaled motion vector is derived based on a collocated (co-located) PU belonging to the picture having the smallest picture order count (POC) difference from the current picture within the given reference picture list. The reference picture list to be used for derivation of the collocated PU is explicitly signaled in the slice header. As illustrated by the dashed line in fig. 6, the scaled motion vector for the temporal Merge candidate is obtained, scaled from the motion vector of the collocated PU using the POC distances tb and td, where tb is defined as the POC difference between the reference picture of the current picture and the current picture, and td is defined as the POC difference between the reference picture of the collocated picture and the collocated picture. The reference picture index of the temporal Merge candidate is set equal to zero. A practical realization of the scaling process is described in the HEVC specification. For a B slice, two motion vectors (one for reference picture list 0 and the other for reference picture list 1) are obtained and combined to form the bi-predictive Merge candidate.
Fig. 6 is a diagram of motion vector scaling for a temporal Merge candidate.
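The POC-distance scaling of one MV component can be sketched as follows. The fixed-point constants mirror the HEVC-style scaling process; the helper names are illustrative, and the sketch assumes td != 0 and positive POC distances.

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def scale_mv(mv, tb, td):
    """Scale mv by tb/td, where tb = POC(current) - POC(current ref) and
    td = POC(collocated pic) - POC(collocated ref)."""
    if tb == td:
        return mv                               # equal distances: no scaling
    tx = (16384 + (abs(td) >> 1)) // td         # ~= 2^14 / td, rounded
    dist_scale = clip3(-4096, 4095, (tb * tx + 32) >> 6)
    scaled = dist_scale * mv
    sign = 1 if scaled >= 0 else -1
    return clip3(-32768, 32767, sign * ((abs(scaled) + 127) >> 8))

print(scale_mv(64, tb=2, td=4))   # roughly 64 * 2/4 = 32
```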
In the collocated PU (Y) belonging to the reference frame, the position for the temporal candidate is selected between candidates C0 and C1, as depicted in fig. 7. If the PU at position C0 is not available, is intra-coded, or is outside of the current CTU row, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal Merge candidate.
2.2.1.4 Additional candidate insertions
Besides spatial and temporal Merge candidates, there are two additional types of Merge candidates: the combined bi-predictive Merge candidate and the zero Merge candidate. Combined bi-predictive Merge candidates are generated by utilizing spatial and temporal Merge candidates. The combined bi-predictive Merge candidate is used for B slices only. A combined bi-predictive candidate is generated by combining the first-reference-picture-list motion parameters of an initial candidate with the second-reference-picture-list motion parameters of another initial candidate. If these two tuples provide different motion hypotheses, they form a new bi-predictive candidate. As an example, fig. 8 depicts the case when two candidates in the original list (on the left), which have mvL0 and refIdxL0 or mvL1 and refIdxL1, are used to create a combined bi-predictive Merge candidate added to the final list (on the right). There are numerous rules regarding the combinations considered to generate these additional Merge candidates.
Zero motion candidates are inserted to fill the remaining entries in the Merge candidate list and therefore hit the MaxNumMergeCand capacity. These candidates have zero spatial displacement and a reference picture index that starts from zero and increases every time a new zero motion candidate is added to the list. The number of reference frames used by these candidates is one for uni-prediction and two for bi-prediction, respectively. Finally, no redundancy check is performed on these candidates.
2.2.1.5 Motion estimation regions for parallel processing
To speed up the encoding process, motion estimation can be performed in parallel, whereby the motion vectors of all prediction units inside a given region are derived simultaneously. The derivation of Merge candidates from a spatial neighborhood may interfere with parallel processing, as one prediction unit cannot derive motion parameters from an adjacent PU until its associated motion estimation is completed. To mitigate the trade-off between codec efficiency and processing latency, HEVC defines a motion estimation region (MER) whose size is signaled in the picture parameter set using the "log2_parallel_merge_level_minus2" syntax element. When a MER is defined, Merge candidates falling into the same region are marked as unavailable and are therefore not considered in the list construction.
2.2.2AMVP
AMVP exploits the spatio-temporal correlation of the motion vector with neighboring PUs, which is used for explicit transmission of motion parameters. For each reference picture list, a motion vector candidate list is constructed by first checking the availability of left and above temporally neighboring PU positions, removing redundant candidates, and adding a zero vector to make the candidate list a constant length. The encoder can then select the best predictor from the candidate list and transmit the corresponding index indicating the chosen candidate. Similarly to Merge index signaling, the index of the best motion vector candidate is encoded using truncated unary. The maximum value to be encoded in this case is 2 (see fig. 9). In the following sections, details about the derivation process of motion vector prediction candidates are provided.
Derivation of 2.2.2.1AMVP candidates
Fig. 9 outlines the derivation of motion vector prediction candidates.
In motion vector prediction, two types of motion vector candidates are considered: spatial domain motion vector candidates and temporal motion vector candidates. For spatial domain motion vector candidate derivation, two motion vector candidates are ultimately derived based on the motion vector of each PU located at five different locations as depicted in fig. 3.
For temporal motion vector candidate derivation, one motion vector candidate is selected from two candidates, which are derived based on two different collocated positions. After the first list of spatio-temporal candidates is generated, duplicated motion vector candidates in the list are removed. If the number of potential candidates is larger than two, motion vector candidates whose reference picture index within the associated reference picture list is larger than 1 are removed from the list. If the number of spatio-temporal motion vector candidates is smaller than two, additional zero motion vector candidates are added to the list.
2.2.2.2 Spatial motion vector candidates
In the derivation of spatial motion vector candidates, a maximum of two candidates are considered among five potential candidates, which are derived from PUs located in the positions depicted in fig. 3; those positions are the same as those of the motion Merge. The order of derivation for the left side of the current PU is defined as A0, A1, scaled A0, scaled A1. The order of derivation for the above side of the current PU is defined as B0, B1, B2, scaled B0, scaled B1, scaled B2. For each side there are therefore four cases that can be used as motion vector candidates, with two cases not requiring spatial scaling and two cases where spatial scaling is used. The four different cases are summarized as follows:
no spatial scaling
- (1) Identical reference picture list, and identical reference picture index (identical POC)
- (2) Different reference picture lists, but the same reference picture (same POC)
Spatial scaling
- (3) Identical reference picture list, but different reference pictures (different POC)
- (4) Different reference picture list, different reference picture (different POC)
The no-spatial-scaling cases are checked first, followed by the spatial scaling cases. Spatial scaling is considered when the POC differs between the reference picture of the neighboring PU and that of the current PU, regardless of the reference picture list. If all PUs of the left candidates are not available or are intra-coded, scaling for the above motion vector is allowed to help the parallel derivation of the left and above MV candidates. Otherwise, spatial scaling is not allowed for the above motion vector.
Fig. 10 is a diagram of motion vector scaling for spatial motion vector candidates.
As depicted in fig. 10, in the spatial scaling process, the motion vectors of neighboring PUs are scaled in a similar manner as in the temporal scaling. The main difference is that the reference picture list and the index of the current PU are given as inputs; the actual scaling procedure is the same as the time domain scaling procedure.
2.2.2.3 Temporal motion vector candidates
All procedures for deriving temporal Merge candidates are the same as those for deriving spatial motion vector candidates except for reference picture index derivation (see fig. 7). The reference picture index is signaled to the decoder.
2.3 New inter Merge candidates in JEM
2.3.1 Sub-CU based motion vector prediction
In JEM with QTBT, each CU can have at most one set of motion parameters for each prediction direction. Two sub-CU level motion vector prediction methods are considered in the encoder by splitting a large CU into sub-CUs and deriving motion information for all the sub-CUs of the large CU. The alternative temporal motion vector prediction (ATMVP) method allows each CU to fetch multiple sets of motion information from multiple blocks smaller than the current CU in the collocated reference picture. In the spatial-temporal motion vector prediction (STMVP) method, motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and spatial neighboring motion vectors.
In order to maintain a more accurate motion field for sub-CU motion prediction, motion compression of the reference frame is currently disabled.
2.3.1.1 Alternative temporal motion vector prediction
In the alternative temporal motion vector prediction (ATMVP) method, temporal motion vector prediction (TMVP) is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU. As shown in fig. 11, the sub-CUs are square N×N blocks (N is set to 4 by default).
ATMVP predicts the motion vectors of sub-CUs within a CU in two steps. The first step is to identify the corresponding block in the reference picture with a so-called temporal vector. The reference picture is also called a motion source picture. The second step is to divide the current CU into sub-CUs and obtain a motion vector and a reference index of each sub-CU from a block corresponding to each sub-CU, as shown in fig. 11.
In the first step, the reference picture and the corresponding block are determined from the motion information of the spatial neighboring blocks of the current CU. To avoid the repetitive scanning process of neighboring blocks, the first Merge candidate in the Merge candidate list of the current CU is used. The first available motion vector and its associated reference index are set to be the temporal vector and the index to the motion source picture. This way, in ATMVP, the corresponding block (sometimes called the collocated block) may be identified more accurately, compared with TMVP, where the corresponding block is always in a bottom-right or center position relative to the current CU.
In the second step, a corresponding block of each sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinates of the current CU. For each sub-CU, the motion information of its corresponding block (the smallest motion grid that covers the center sample) is used to derive the motion information for the sub-CU. After the motion information of a corresponding N×N block is identified, it is converted into the motion vector and reference index of the current sub-CU in the same way as the TMVP of HEVC, where motion scaling and other procedures apply. For example, the decoder checks whether the low-delay condition is fulfilled (i.e., the POCs of all reference pictures of the current picture are smaller than the POC of the current picture) and possibly uses the motion vector MVx (e.g., the motion vector corresponding to reference picture list X) to predict the motion vector MVy for each sub-CU (e.g., with X equal to 0 or 1 and Y equal to 1−X).
2.3.1.2 Spatio-temporal motion vector prediction (STMVP)
In this method, the motion vectors of the sub-CUs are derived recursively, following raster scan order. Fig. 12 illustrates this concept. Consider an 8×8 CU that contains four 4×4 sub-CUs: A, B, C, and D. The neighboring 4×4 blocks in the current frame are labeled a, b, c, and d.
The motion derivation for sub-CU A starts by identifying its two spatial neighbors. The first neighbor is the N×N block above sub-CU A (block c). If this block c is not available or is intra-coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c). The second neighbor is the block to the left of sub-CU A (block b). If block b is not available or is intra-coded, other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b). The motion information obtained from the neighboring blocks for each list is scaled to the first reference frame of the given list. Next, the temporal motion vector predictor (TMVP) of sub-block A is derived by following the same procedure as the TMVP derivation specified in HEVC. The motion information of the collocated block at position D is fetched and scaled accordingly. Finally, after retrieving and scaling the motion information, all available motion vectors (up to 3) are averaged separately for each reference list. The averaged motion vector is assigned as the motion vector of the current sub-CU.
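A minimal sketch of the final averaging step, assuming the up-to-three candidate MVs (above neighbor, left neighbor, TMVP) have already been fetched and scaled; the names and tuple representation are assumptions for illustration.

```python
def derive_stmvp(above_mv, left_mv, tmvp_mv):
    """Average the available motion vectors (each an (x, y) tuple or None).
    Integer division stands in for the fixed-point averaging of an encoder."""
    available = [mv for mv in (above_mv, left_mv, tmvp_mv) if mv is not None]
    if not available:
        return None
    n = len(available)
    return (sum(mv[0] for mv in available) // n,
            sum(mv[1] for mv in available) // n)

print(derive_stmvp((4, 8), (6, 2), None))   # -> (5, 5)
```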
2.3.1.3 Sub-CU motion prediction mode signaling
The sub-CU modes are enabled as additional Merge candidates and these modes need not be signaled by additional syntax elements. Two additional Merge candidates are added to the Merge candidate list for each CU to represent the ATMVP mode and the STMVP mode. Up to seven Merge candidates may be used if the sequence parameter set indicates that ATMVP and STMVP are enabled. The coding logic of the additional Merge candidate is the same as that of the Merge candidate in the HM, which means that for each CU in the P-slice or B-slice, two more RD checks may be required for the two additional Merge candidates.
In JEM, all binary bits (bins) of the Merge index are context-coded by CABAC. Whereas in HEVC, only the first binary bit is context-coded and the remaining binary bits are context-bypass-coded.
2.3.2 Non-adjacent Merge candidates
Qualcomm proposes deriving additional spatial Merge candidates from non-adjacent neighboring positions, marked 6 to 49 in fig. 13. The derived candidates are added to the Merge candidate list after the TMVP candidate.
Tencent proposes deriving additional spatial Merge candidates from positions in an outer reference region whose offset with respect to the current block is (-96, -96).
As shown in fig. 14, the positions are labeled A(i,j), B(i,j), C(i,j), D(i,j), and E(i,j). Each candidate B(i,j) or C(i,j) has an offset of 16 in the vertical direction compared to its previous B or C candidate. Each candidate A(i,j) or D(i,j) has an offset of 16 in the horizontal direction compared to its previous A or D candidate. Each E(i,j) has an offset of 16 in both the horizontal and vertical directions compared to its previous E candidate. The candidates are checked from inside to outside, in the order A(i,j), B(i,j), C(i,j), D(i,j), and E(i,j). Whether the number of Merge candidates can be further reduced is studied further. The candidates are added to the Merge candidate list after the TMVP candidate.
In some examples, as in fig. 15, extended spatial positions from 6 to 27 may be checked according to their numerical order after the temporal candidate. To save MV line buffers, all the spatial candidates are restricted to within two CTU lines.
2.4 Intra prediction in JEM
2.4.1 Intra mode codec with 67 intra prediction modes
For luma interpolation filtering, an 8-tap separable DCT-based interpolation filter is used for the half-sample (2/4-precision) positions, and a 7-tap separable DCT-based interpolation filter is used for the quarter-sample (1/4-precision) positions, as shown in Table 1.
Table 1: DCT-IF coefficients for 1/4th-precision luma interpolation

| Position | Filter coefficients |
| --- | --- |
| 1/4 | {-1, 4, -10, 58, 17, -5, 1} |
| 2/4 | {-1, 4, -11, 40, 40, -11, 4, -1} |
| 3/4 | {1, -5, 17, 58, -10, 4, -1} |
Similarly, a 4-tap separable DCT-based interpolation filter is used for the chroma interpolation filter, as shown in Table 2.
Table 2: 4-tap DCT-IF coefficients for 1/8th-precision chroma interpolation

| Position | Filter coefficients |
| --- | --- |
| 1/8 | {-2, 58, 10, -2} |
| 2/8 | {-4, 54, 16, -2} |
| 3/8 | {-6, 46, 28, -4} |
| 4/8 | {-4, 36, 36, -4} |
| 5/8 | {-4, 28, 46, -6} |
| 6/8 | {-2, 16, 54, -4} |
| 7/8 | {-2, 10, 58, -2} |
For vertical interpolation in 4:2:2, and for horizontal and vertical interpolation in 4:4:4 chroma channels, the odd positions in Table 2 are not used, resulting in 1/4th-precision chroma interpolation.
For bi-prediction, the bit depth of the output of the interpolation filter is kept to 14-bit precision, regardless of the source bit depth, before averaging the two prediction signals. The actual averaging process is implicitly done using a bit depth reduction process:
predSamples[x,y]=(predSamplesL0[x,y]+predSamplesL1[x,y]+offset)>>shift
where shift = (15 − BitDepth) and offset = 1 << (shift − 1).
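Written out for a single sample, the averaging above becomes the following sketch (sample values are illustrative):

```python
def bi_pred_average(p0, p1, bit_depth):
    """Average two 14-bit intermediate prediction samples (predSamplesL0/L1)
    down to the output bit depth."""
    shift = 15 - bit_depth          # e.g. 7 for 8-bit, 5 for 10-bit output
    offset = 1 << (shift - 1)       # rounding offset
    return (p0 + p1 + offset) >> shift

print(bi_pred_average(8192, 8192, bit_depth=10))   # -> 512
```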
If both the horizontal and vertical components of the motion vector point to sub-pixel positions, horizontal interpolation is always performed first, followed by vertical interpolation. For example, to interpolate the sub-pixel j0,0 shown in fig. 16, b0,k (k = −3, −2, ..., 4) is first interpolated according to equation (2-1), and then j0,0 is interpolated according to equation (2-2). Here, shift1 = min(4, BitDepthY − 8) and shift2 = 6.
b0,k=(-A-3,k+4*A-2,k-11*A-1,k+40*A0,k+40*A1,k-11*A2,k+4*A3,k-A4,k)>>shift1 (2-1)
j0,0=(-b0,-3+4*b0,-2-11*b0,-1+40*b0,0+40*b0,1-11*b0,2+4*b0,3-b0,4)>>shift2 (2-2)
Alternatively, the vertical interpolation can be performed first, followed by the horizontal interpolation. In this case, to interpolate j0,0, hk,0 (k = −3, −2, ..., 4) is first interpolated according to equation (2-3), and then j0,0 is interpolated according to equation (2-4). When BitDepthY is smaller than or equal to 8, shift1 is equal to 0 and nothing is lost in the first interpolation stage, so the final interpolation result does not depend on the interpolation order. However, when BitDepthY is greater than 8, shift1 is greater than 0. In this case, the final interpolation result may be different when different interpolation orders are applied.
hk,0=(-Ak,-3+4*Ak,-2-11*Ak,-1+40*Ak,0+40*Ak,1-11*Ak,2+4*Ak,3-Ak,4)>>shift1 (2-3)
j0,0=(-h-3,0+4*h-2,0-11*h-1,0+40*h0,0+40*h1,0-11*h2,0+4*h3,0-h4,0)>>shift2 (2-4)
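The order dependence can be checked numerically with the sketch below, which applies the 2/4-position (half-pel) filter of Table 1 in both orders on a random 8×8 patch of 10-bit integer samples; the helper names are illustrative.

```python
import random

# The 8x8 patch holds the integer samples A[k][l], k, l = -3..4, stored at
# indices 0..7. For BitDepthY > 8, shift1 > 0 and the two results may differ;
# for BitDepthY <= 8 they are always identical.
TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]   # 2/4-position coefficients

def filt(samples, shift):
    """Apply the 8-tap filter to 8 samples and shift the result down."""
    return sum(c * s for c, s in zip(TAPS, samples)) >> shift

def interp_center(patch, horizontal_first, shift1, shift2=6):
    """Interpolate the half-pel sample j0,0 from an 8x8 integer-sample patch."""
    if horizontal_first:
        b = [filt(row, shift1) for row in patch]                           # eq. (2-1)
        return filt(b, shift2)                                             # eq. (2-2)
    h = [filt([patch[k][l] for k in range(8)], shift1) for l in range(8)]  # eq. (2-3)
    return filt(h, shift2)                                                 # eq. (2-4)

random.seed(1)
patch = [[random.randrange(1 << 10) for _ in range(8)] for _ in range(8)]  # 10-bit
shift1 = min(4, 10 - 8)    # shift1 = min(4, BitDepthY - 8) = 2 for 10-bit video
print(interp_center(patch, True, shift1), interp_center(patch, False, shift1))
```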
3. Example of problems solved by the embodiments
For a luma block of size W×H, if horizontal interpolation is always performed first, the required number of interpolations (per pixel) is as shown in Table 3.
Table 3: interpolation required for W X H luminance component by HEVC/JEM
On the other hand, if vertical interpolation is performed first, the required interpolation is as shown in Table 4. The optimal interpolation order is clearly the one of Tables 3 and 4 requiring the smaller interpolation count.
Table 4: interpolation required for w×h luminance components when the interpolation order is reversed
For the chroma components, if horizontal interpolation is always performed first, the required interpolation is ((H+3)×W + W×H)/(W×H) = 2 + 3/H. If vertical interpolation is always performed first, the required interpolation is ((W+3)×H + W×H)/(W×H) = 2 + 3/W.
As described above, when the bit depth of the input video is greater than 8, different interpolation sequences may result in different interpolation results. Thus, the interpolation order should be implicitly defined in both the encoder and the decoder.
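The per-pixel operation counts above generalize to an N-tap filter: horizontal-first costs ((H+N−1)·W + W·H)/(W·H) = 2 + (N−1)/H operations per pixel, and vertical-first costs 2 + (N−1)/W. A small sketch follows; the generalization to arbitrary N is ours (the text gives the 4-tap chroma case explicitly):

```cpp
// Interpolation operations per pixel for an N-tap separable filter on a
// WxH block; comparing the two orders identifies the cheaper one.
double opsHorizontalFirst(int W, int H, int N) {
    return double((H + N - 1) * W + W * H) / (W * H);  // = 2 + (N-1)/H
}
double opsVerticalFirst(int W, int H, int N) {
    return double((W + N - 1) * H + W * H) / (W * H);  // = 2 + (N-1)/W
}
// E.g., for an 8-tap filter on a 16x4 block: horizontal-first costs
// 2 + 7/4 = 3.75 ops/pixel, while vertical-first costs 2 + 7/16 = 2.4375.
```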
4. Examples of the embodiments
To address these issues and provide other benefits, interpolation orders that depend on the block shape are proposed. Let the interpolation filter length (in motion compensation) be N taps (e.g., 8, 6, 4, or 2) and the current block size be W×H.
Let the number of MVDs allowed in MMVD (such as the number of entries in the distance table) be M. Note that triangle mode is regarded as a bi-prediction mode, and the techniques below that relate to bi-prediction can also be applied to triangle mode.
The following detailed examples should be considered as examples explaining the general concepts. These examples should not be interpreted narrowly. Furthermore, these examples may be combined in any manner.
1. It is proposed that the interpolation order depends on the shape of the current coding block (e.g., the coding block is a CU); see the sketch following this item.
A. In one example, for blocks with width > height (such as CUs, PUs, or the sub-blocks used in sub-block based prediction like affine, ATMVP, or BIO), vertical interpolation is performed first, followed by horizontal interpolation; e.g., pixels dk,0, hk,0, and nk,0 are interpolated first, then e0,0 through r0,0 are interpolated. An example for j0,0 is shown in equations 2-3 and 2-4.
I. Alternatively, for a block of width > =height (such as a CU, PU or sub-block used in sub-block based prediction like affine, ATMVP or BIO), vertical interpolation is performed first, then horizontal interpolation is performed.
B. In one example, for blocks of width < = height (such as CUs, PUs or sub-blocks used in sub-block based prediction like affine, ATMVP or BIO), horizontal interpolation is performed first, then vertical interpolation is performed.
I. Alternatively, for blocks of width < height (such as CU, PU or sub-blocks used in sub-block based prediction like affine, ATMVP or BIO), horizontal interpolation is performed first, then vertical interpolation is performed.
C. In one example, both the luminance component and the chrominance component follow the same interpolation order.
D. Alternatively, when one chroma codec block corresponds to a plurality of luma codec blocks (e.g., one chroma 4 x 4 block may correspond to two 8 x 4 or 4 x 8 luma blocks for a 4:2:0 color format), the luma and chroma may use different interpolation orders.
E. In one example, when different interpolation orders are used, the scaling factors in the two stages (i.e., shift1 and shift2) may also be changed accordingly.
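A sketch of the shape-dependent selection in bullets 1.A/1.B (the tie-breaking variants 1.A.I and 1.B.I would move the square-block case to the other branch):

```cpp
enum class InterpOrder { HorizontalFirst, VerticalFirst };

// Bullet 1: shape-dependent interpolation order. Wider-than-tall blocks
// interpolate vertically first (1.A); all others horizontally first (1.B).
InterpOrder selectInterpOrder(int width, int height) {
    return (width > height) ? InterpOrder::VerticalFirst
                            : InterpOrder::HorizontalFirst;
}
```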
2. Alternatively, or in addition, it is proposed that the interpolation order of the luma component may also depend on the MV; see the sketch following this item.
A. In one example, if the vertical MV component points to a quarter-pixel location and the horizontal MV component points to a half-pixel location, then horizontal interpolation is performed first, followed by vertical interpolation.
B. In one example, if the vertical MV component points to a half-pixel location and the horizontal MV component points to a quarter-pixel location, then vertical interpolation is performed first, followed by horizontal interpolation.
C. In one example, the proposed method is applied only to square coding blocks.
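A sketch of the MV-dependent selection in bullets 2.A/2.B, reusing the InterpOrder enum from the previous sketch and assuming quarter-pel MVs whose two low bits encode the fractional phase (0 = integer, 2 = half-pel, 1 or 3 = quarter-pel):

```cpp
// Bullet 2: MV-dependent interpolation order for the luma component.
// MVs are in quarter-pel units; the two low bits give the fractional phase.
InterpOrder selectInterpOrderByMv(int mvx, int mvy) {
    const int fracX = mvx & 3, fracY = mvy & 3;
    const bool xHalf = (fracX == 2), yHalf = (fracY == 2);
    const bool xQuarter = (fracX & 1) != 0, yQuarter = (fracY & 1) != 0;
    if (yQuarter && xHalf) return InterpOrder::HorizontalFirst;  // bullet 2.A
    if (yHalf && xQuarter) return InterpOrder::VerticalFirst;    // bullet 2.B
    return InterpOrder::HorizontalFirst;  // keep the default order otherwise
}
```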
3. It is proposed that for blocks coded in Merge mode (e.g., regular Merge list, triangle Merge list, affine Merge list, or another non-intra/non-AMVP mode), the associated motion information may be modified to integer precision (e.g., via rounding) before the motion compensation process is invoked; see the sketch following this item.
A. Alternatively, Merge candidates with fractional motion vectors may be excluded from the Merge list.
B. Alternatively, when a Merge candidate derived from a spatial or temporal block or otherwise (such as an HMVP or pairwise bi-predictive Merge candidate) is associated with a fractional motion vector, the fractional motion vector may first be modified to integer precision (e.g., via rounding) before being added to the Merge list.
C. In one example, a separate HMVP table may be maintained dynamically (on-the-fly) to store motion candidates with integer precision.
D. Alternatively, the above method may be applied only when the Merge candidate is a bi-predictive candidate.
E. In one example, the above method may be applied to certain block sizes, such as 4×16, 16×4, 4×8, 8×4, or 4×4.
F. In one example, the above method may be applied to an AMVP codec block, where the Merge candidate may be replaced with an AMVP candidate.
G. In one example, the above method may be applied to certain block modes, such as non-affine modes.
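A sketch of the rounding in bullet 3, assuming quarter-pel MV units and rounding to the nearest integer position with ties away from zero (one of the rounding styles listed later in bullet 7.B; the struct and names are illustrative):

```cpp
#include <cstdint>

struct MotionVector { int16_t x, y; };  // stored in quarter-pel units

// Bullet 3: round a quarter-pel MV to the nearest integer-pel position
// (ties away from zero) before adding the candidate to the Merge list or
// invoking motion compensation.
MotionVector roundToIntegerPel(MotionVector mv) {
    auto roundComp = [](int v) {
        return v >= 0 ? ((v + 2) >> 2) << 2 : -(((-v + 2) >> 2) << 2);
    };
    return { static_cast<int16_t>(roundComp(mv.x)),
             static_cast<int16_t>(roundComp(mv.y)) };
}
```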
4. It is proposed that MMVD side information (such as distance table, direction) may depend on block size and/or prediction direction (e.g., unidirectional prediction or bi-directional prediction).
A. In one example, a distance table in which all entries have integer precision may be defined or signaled.
B. In one example, if the base Merge candidate is associated with a motion vector of fractional precision, it may first be modified (such as via rounding) to integer precision and then used to derive the final motion vector for motion compensation.
5. It is proposed that for certain block sizes or block shapes, MVs in MMVD mode can be constrained to integer-pixel precision or half-pixel precision; see the sketch following this item.
A. In one example, if integer-pixel precision is selected for an MMVD-coded block, the base Merge candidate used in MMVD may first be modified to integer-pixel precision (such as via rounding).
B. In one example, if half-pixel precision is selected for an MMVD-coded block, the base Merge candidate used in MMVD may be modified to half-pixel precision (such as via rounding).
I. In one example, rounding may be performed during base Merge list construction, so that rounded MVs are used in pruning.
II. In one example, rounding may be performed after the base Merge list construction process, so that non-rounded MVs are used in pruning.
C. In one example, if integer-pixel or half-pixel precision is used for MMVD mode, only MVDs with the same or lower (i.e., coarser) precision are allowed.
I. For example, if integer-pixel precision is used for MMVD mode, only MVDs of integer-pixel precision, 2-pixel precision, or N-pixel precision (N >= 1) are allowed.
D. In one example, if K MVDs are not allowed in MMVD mode, the binarization of the MVD index may be modified, since the maximum MVD index becomes M−K−1 instead of M−1. Meanwhile, different contexts may be used in CABAC coding.
E. In one example, rounding may be performed after the MV is derived in MMVD mode.
F. The constraint may be different for bi-prediction and uni-prediction. For example, the constraint may not be applied in uni-prediction.
G. The constraint may be different for different block sizes or block shapes.
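A sketch of the precision constraint in bullets 5.C/5.D, assuming the MMVD distance table is stored in quarter-pel units (an assumption; the actual table layout is codec-specific):

```cpp
#include <vector>

// Bullet 5.C: for a block constrained to integer-pel MMVD, keep only the
// distance-table entries that are a whole number of pixels. Distances are
// assumed to be stored in quarter-pel units here.
std::vector<int> allowedIntegerPelDistances(const std::vector<int>& distTableQpel) {
    std::vector<int> allowed;
    for (int d : distTableQpel)
        if (d % 4 == 0)            // integer-pel or coarser step only
            allowed.push_back(d);
    // If K entries were removed, the maximum codeable MVD index becomes
    // M - K - 1 instead of M - 1 (bullet 5.D), so the binarization shrinks.
    return allowed;
}
```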
6. It is proposed that for certain block sizes or block shapes, the maximum number of half-pixel MV components or/and quarter-pixel MV components (e.g., horizontal MV or vertical MV) may be constrained; see the sketch following this item.
A. In one example, a conformant bitstream should satisfy this constraint.
B. The constraint may be different for bi-prediction and uni-prediction. For example, the constraint may not be applied in uni-prediction.
I. For example, such a constraint may be applied to bi-directionally predicted 4×8 or/and 8×4 or/and 4×16 or/and 16×4 blocks, while it may not be applied to uni-directionally predicted 4×8 or/and 8×4 or/and 4×16 or/and 16×4 blocks.
II. For example, such constraints may be applied to 4×4 blocks under both bi-prediction and uni-prediction.
C. The constraint may be different for different block sizes or block shapes.
D. The constraint may be applied to triangle mode.
I. For example, such constraints may be applied to 4×16 or/and 16×4 blocks coded in triangle mode.
E. In one example, for bi-prediction blocks, up to 3 quarter-pixel MV components may be allowed.
F. In one example, for bi-prediction blocks, up to 2 quarter-pixel MV components may be allowed.
G. In one example, for bi-prediction blocks, up to 1 quarter-pixel MV component may be allowed.
H. In one example, for bi-prediction blocks, up to 0 quarter-pixel MV components may be allowed.
I. In one example, for uni-prediction blocks, up to 1 quarter-pixel MV component may be allowed.
J. In one example, for uni-prediction blocks, up to 0 quarter-pixel MV components may be allowed.
K. In one example, for bi-prediction blocks, up to 3 fractional MV components may be allowed.
L. In one example, for bi-prediction blocks, up to 2 fractional MV components may be allowed.
M. In one example, for bi-prediction blocks, up to 1 fractional MV component may be allowed.
N. In one example, for bi-prediction blocks, up to 0 fractional MV components may be allowed.
O. In one example, for uni-prediction blocks, up to 1 fractional MV component may be allowed.
P. In one example, for uni-prediction blocks, up to 0 fractional MV components may be allowed.
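A sketch of a conformance-style check for bullet 6, reusing the MotionVector struct from the earlier sketch; an MV component counts as fractional when its quarter-pel phase is nonzero:

```cpp
// Bullet 6: count the fractional MV components of a block (two MVs for a
// bi-predicted block, one for uni-prediction). A conformant encoder would
// keep this count at or below the allowed maximum (e.g., at most 2
// fractional components for bi-prediction blocks, bullet 6.L).
int countFractionalComponents(const MotionVector* mvs, int numMvs) {
    int count = 0;
    for (int i = 0; i < numMvs; ++i) {
        if (mvs[i].x & 3) ++count;  // quarter-pel phase nonzero -> fractional
        if (mvs[i].y & 3) ++count;
    }
    return count;
}
```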
7. It is proposed that some components of the MV may be rounded to integer-pixel or half-pixel precision depending on the block size (e.g., width and/or height, or the ratio of width to height) or/and the prediction direction or/and the motion information; see the sketch following this item.
A. In one example, the MV is rounded to the nearest integer-pixel precision MV or/and half-pixel precision MV.
B. In one example, different rounding methods may be used, e.g., rounding down, rounding up, rounding toward zero, or rounding away from zero.
C. In one example, if the size (i.e., width × height) of the block is less than (or greater than) (and/or equal to) a threshold L (e.g., L = 16 or 64), MV rounding may be applied to the horizontal or/and vertical MV components.
D. In one example, if the width (or height) of the block is less than (and/or equal to) a threshold L1 (e.g., L1 = 4 or 8), MV rounding may be applied to the horizontal (or vertical) MV component.
E. In one example, the thresholds L and L1 may be different for bi-prediction and uni-prediction blocks. For example, a smaller threshold may be used for bi-prediction blocks.
F. In one example, MV rounding may be applied if the ratio between width and height is greater than a first threshold or less than a second threshold (such as for narrow blocks like 4×16 or 16×4).
G. In one example, MV rounding may be applied only when both the horizontal and vertical components of the MV are fractional (i.e., they point to fractional pixel locations instead of integer pixel locations).
H. Whether MV rounding is applied may depend on whether the current block is bi-predicted or uni-predicted.
I. For example, MV rounding may be applied only when the current block is bi-predicted.
I. Whether MV rounding is applied may depend on the prediction direction (e.g., from list 0 or list 1) and/or the associated motion vector. In one example, for bi-predictive blocks, whether MV rounding is applied may be different for different prediction directions.
I. In one example, if the MV of prediction direction X (X = 0 or 1) has fractional components in both the horizontal and vertical directions, MV rounding may be applied to N MV components of prediction direction X; otherwise, MV rounding may not be applied. Here, N = 0, 1, or 2.
II. In one example, if N (N >= 0) MV components have fractional precision, MV rounding may be applied to M (0 <= M <= N) of the N MV components.
1. N and M may be different for bi-directional prediction blocks and uni-directional prediction blocks.
2. N and M may be different for different block sizes (width or/and height or/and width × height).
3. For example, for a bi-prediction block, N equals 4 and M equals 4.
4. For example, for a bi-predictive block, N equals 4 and M equals 3.
5. For example, for a bi-prediction block, N equals 4 and M equals 2.
6. For example, for a bi-prediction block, N equals 4 and M equals 1.
7. For example, for a bi-predictive block, N is equal to 3 and M is equal to 3.
8. For example, for a bi-prediction block, N equals 3 and M equals 2.
9. For example, for a bi-prediction block, N equals 3 and M equals 1.
10. For example, for a bi-prediction block, N equals 2 and M equals 2.
11. For example, for a bi-prediction block, N equals 2 and M equals 1.
12. For example, for a bi-prediction block, N is equal to 1 and M is equal to 1.
13. For example, for a unidirectional prediction block, N equals 2 and M equals 2.
14. For example, for a unidirectional prediction block, N equals 2 and M equals 1.
15. For example, for a unidirectional prediction block, N equals 1 and M equals 1.
III. In one example, K of the M MV components are rounded to integer-pixel precision and M−K MV components are rounded to half-pixel precision, where K = 0, 1, …, M−1.
J. Whether MV rounding is applied may be different for different color components (such as Y, Cb, and Cr).
I. For example, whether and how MV rounding is applied may depend on the color format, such as 4:2:0, 4:2:2, or 4:4:4.
K. Whether and/or how MV rounding is applied may depend on the block size (or width, height), block shape, prediction direction, etc.
I. In one example, some MV components of a 4×16 or/and 16×4 bi-predictive or/and uni-predictive luma block may be rounded to half-pixel precision.
II. In one example, some MV components of a 4×16 or/and 16×4 bi-predictive or/and uni-predictive luma block may be rounded to integer-pixel precision.
III. In one example, some MV components of a 4×4 uni-predictive or/and bi-predictive luma block may be rounded to integer-pixel precision.
IV. In one example, some MV components of a 4×8 or/and 8×4 bi-predictive or/and uni-predictive luma block may be rounded to integer-pixel precision.
L. In one example, MV rounding may not be applied to sub-block prediction, such as affine prediction.
I. In an alternative example, MV rounding may be applied to sub-block prediction such as ATMVP prediction. In this case, each sub-block is treated as a coding block to determine whether and how MV rounding is applied.
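A sketch of a size- and prediction-dependent rounding decision for bullet 7; the thresholds follow the example values in bullets 7.C and 7.E and are purely illustrative, not normative:

```cpp
// Bullet 7: decide whether MV rounding applies, based on block area and
// prediction type. The area thresholds (16 for bi-prediction, 64 for
// uni-prediction) follow the example values in 7.C/7.E.
bool shouldRoundMv(int width, int height, bool isBiPredicted) {
    const int threshold = isBiPredicted ? 16 : 64;  // smaller for bi-pred (7.E)
    return width * height <= threshold;
}
```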
8. It is proposed that for certain block sizes, the motion vectors of a block should be modified to integer precision before being used for motion compensation (e.g., if they have fractional precision).
9. In one example, for some block sizes, the stored motion vectors and the motion vectors used for motion compensation may have different precision.
A. In one example, sub-pixel precision (also referred to as fractional precision, such as 1/4-pel or 1/16-pel) may be stored for blocks of certain sizes, but the motion compensation process is based on integer versions of these motion vectors (obtained, e.g., via rounding).
10. It is proposed that an indication that certain block sizes do not allow bi-prediction can be signaled in the sequence parameter set/picture parameter set/sequence header/picture header/slice header/CTU line/region/other high level syntax.
A. Alternatively, an indication that certain block sizes do not allow uni-prediction may be signaled in the sequence parameter set/picture parameter set/sequence header/picture header/slice header/CTU line/region/other high-level syntax.
B. Alternatively, an indication that certain block sizes do not allow bi-prediction and/or uni-prediction may be signaled in the sequence parameter set/picture parameter set/sequence header/picture header/slice header/CTU line/region/other high level syntax.
C. Further, alternatively, such indications may be applied only to certain modes, such as non-affine modes.
D. Further, alternatively, when uni-/bi-prediction is not allowed for the block, the signaling of the AMVR index may be modified accordingly, such as allowing only integer-pixel precision, or, conversely, different MV precisions may be utilized.
E. Furthermore, or alternatively, the above methods (such as bullets 3–9) may also be applied.
11. It is proposed that a conformant bitstream should follow the rule that for certain block sizes, only integer-pixel motion vectors are allowed for bi-predictively coded blocks.
12. Signaling of the AMVR flag may depend on whether fractional motion vectors are allowed for the block.
A. In one example, if fractional (e.g., 1/4-pel) MV/MVD precision is not allowed for a block, the flag indicating whether the MV/MVD precision of the current block is 1/4-pel may be skipped and implicitly inferred to be false.
13. In one example, the block sizes mentioned above are 4×16, 16×4, 4×8, 8×4, or 4×4.
14. It is proposed that different interpolation filters (e.g., with different numbers of filter taps and/or different filter coefficients) may be used in interpolation depending on the block size (e.g., width and/or height, or the ratio of width to height); see the sketch following this item.
A. Different filters may be used for vertical interpolation and horizontal interpolation. For example, a shorter tap filter may be applied to vertical interpolation than a filter used for horizontal interpolation.
B. In one example, an interpolation filter with fewer taps than the interpolation filter in VTM-3.0 may be applied in some cases. These interpolation filters with fewer taps are also referred to as "short tap filters".
C. In one example, if the size (i.e., width × height) of the block is less than (or greater than) (and/or equal to) a threshold L (e.g., L = 16 or 64), a different filter (e.g., a short tap filter) may be used for horizontal or/and vertical interpolation.
D. In one example, if the width (or height) of the block is less than (and/or equal to) a threshold L1 (e.g., L1 = 4 or 8), a different filter (e.g., a short tap filter) may be used for horizontal (or vertical) interpolation.
E. In one example, if the ratio between width and height is greater than a first threshold or less than a second threshold (such as for narrow blocks like 4×16 or 16×4), a different filter (e.g., a short tap filter) than that used for other kinds of blocks may be selected.
F. In one example, a short tap filter may be used only when both the horizontal and vertical components of the MV are fractional (i.e., they point to fractional pixel locations instead of integer pixel locations).
G. Which filter to use (e.g., a short tap filter may or may not be used) may depend on whether the current block is bi-directionally predicted or uni-directionally predicted.
I. For example, a short tap filter may be used only when the current block is bi-directionally predicted.
H. Which filter to use (e.g., short tap filters may or may not be used) may depend on the prediction direction (e.g., from list 0 or list 1) and/or the associated motion vector. In one example, for bi-prediction blocks, whether a short tap filter is used may be different for different prediction directions.
I. In one example, if the MV of prediction direction X (X = 0 or 1) has fractional components in both the horizontal and vertical directions, a short tap filter is used for prediction direction X; otherwise, the short tap filter is not used.
II. In one example, if N (N >= 0) MV components have fractional precision, a short tap filter may be applied to M (0 <= M <= N) of the N MV components.
1. N and M may be different for bi-directional prediction blocks and uni-directional prediction blocks.
2. N and M may be different for different block sizes (width or/and height or/and width × height).
3. For example, for a bi-prediction block, N equals 4 and M equals 4.
4. For example, for a bi-predictive block, N equals 4 and M equals 3.
5. For example, for a bi-prediction block, N equals 4 and M equals 2.
6. For example, for a bi-prediction block, N equals 4 and M equals 1.
7. For example, for a bi-predictive block, N is equal to 3 and M is equal to 3.
8. For example, for a bi-prediction block, N equals 3 and M equals 2.
9. For example, for a bi-prediction block, N equals 3 and M equals 1.
10. For example, for a bi-prediction block, N equals 2 and M equals 2.
11. For example, for a bi-prediction block, N equals 2 and M equals 1.
12. For example, for a bi-prediction block, N is equal to 1 and M is equal to 1.
13. For example, for a unidirectional prediction block, N equals 2 and M equals 2.
14. For example, for a unidirectional prediction block, N equals 2 and M equals 1.
15. For example, for a unidirectional prediction block, N equals 1 and M equals 1.
III. Different short tap filters can be used for the M MV components.
1. In one example, K of the M MV components use an S1-tap filter and M−K MV components use an S2-tap filter, where K = 0, 1, …, M−1. For example, S1 equals 6 and S2 equals 4.
I. In one example, a different filter (e.g., a short tap filter) may be used for only some pixels. For example, such filters are used only for the boundary pixels of a block.
I. For example, they are used only for the N1 right columns or/and N2 left columns or/and N3 top rows or/and N4 bottom rows of the block.
J. Whether a short tap filter is used may be different for a unidirectional prediction block and a bidirectional prediction block.
K. Whether or not a short tap filter is used may be different for different color components, such as Y, Cb, and Cr.
I. For example, whether and how the short tap filter is applied may depend on the color format, such as 4:2:0, 4:2:2, or 4:4:4.
L. Different short tap filters may be used for different blocks. The short tap filter selected may depend on the block size (or width, height), block shape, prediction direction, etc.
I. In one example, a 7-tap filter is used for the horizontal and vertical interpolation of 4×16 or/and 16×4 bi-predictive or/and uni-predictive luma blocks.
II. In one example, a 7-tap filter is used for the horizontal (or vertical) interpolation of 4×4 uni-predictive or/and bi-predictive luma blocks.
III. In one example, a 6-tap filter is used for the horizontal and vertical interpolation of 4×8 or/and 8×4 bi-predictive or/and uni-predictive luma blocks.
1. Alternatively, a 6-tap filter and a 5-tap filter (or a 5-tap filter and a 6-tap filter) are used for the horizontal and vertical interpolation, respectively, of 4×8 or/and 8×4 bi-predictive or/and uni-predictive luma blocks.
M. Different short tap filters can be used for different kinds of motion vectors.
I. In one example, a longer-tap filter may be used for motion vectors having a fractional component in only one direction (i.e., horizontal or vertical), and a shorter-tap filter may be used for motion vectors having fractional components in both the horizontal and vertical directions.
II. For example, an 8-tap filter is used for 4×16 or/and 16×4 or/and 4×8 or/and 8×4 or/and 4×4 bi-prediction or/and uni-prediction blocks having fractional MV components in only one direction, and a short tap filter as described in bullet 3.h is used for 4×16 or/and 16×4 or/and 4×8 or/and 8×4 or/and 4×4 bi-prediction or/and uni-prediction blocks having fractional MV components in both directions.
III. In one example, the interpolation filter for affine motion may be different from the interpolation filter for translational motion vectors.
IV. In one example, a shorter-tap interpolation filter may be used for affine motion compared to the interpolation filter used for translational motion vectors.
N. In one example, the short tap filter may not be applied to sub-block prediction, such as affine prediction.
I. In an alternative example, a short tap filter may be applied to sub-block prediction such as ATMVP prediction. In this case, each sub-block is treated as a coding block to determine whether and how to apply the short tap filter.
O. In one example, whether and/or how to apply the short tap filter may depend on the block size, coded information, etc.
I. In one example, a short tap filter may be applied when a certain mode (such as OBMC or interleaved affine prediction mode) is enabled for the block.
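A sketch of the tap-length selection for bullet 14, following the example mappings in bullets 14.L.I–III; the mapping is illustrative only, and the bi-/uni-prediction distinction from the text is elided for brevity:

```cpp
// Bullet 14.L: choose the luma interpolation filter length from the block
// size, following the example mappings in 14.L.I-III.
int selectLumaFilterTaps(int width, int height) {
    if ((width == 4 && height == 16) || (width == 16 && height == 4))
        return 7;   // 7-tap filter (14.L.I)
    if (width == 4 && height == 4)
        return 7;   // 7-tap filter (14.L.II)
    if ((width == 4 && height == 8) || (width == 8 && height == 4))
        return 6;   // 6-tap filter (14.L.III)
    return 8;       // default 8-tap luma filter otherwise
}
```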
15. It is proposed that (W+N−1−PW)×(H+N−1−PH) reference pixels (instead of (W+N−1)×(H+N−1) reference pixels) may be fetched for the motion compensation of a W×H block, where PW and PH cannot both be equal to 0; see the sketch following this item.
A. Furthermore, in one example, for the remaining reference pixels (not extracted, but required for motion compensation), padding or derivation from the extracted reference samples may be applied.
B. Further, alternatively, pixels at the reference block boundaries (top, left, bottom, and right) are repeated to generate a (W+N−1)×(H+N−1) block, which is used for the final interpolation. Fig. 21 shows an example with W = 8, H = 4, N = 7, PW = 2, and PH = 3.
C. The fetched reference pixels may be identified by (x + MVXInt − N/2 + offSet1, y + MVYInt − N/2 + offSet2), where (x, y) is the top-left position of the current block, (MVXInt, MVYInt) is the integer part of the MV, and offSet1 and offSet2 are integers such as −2, −1, 0, 1, or 2.
D. In one example, PH is zero and only the left boundary or/and the right boundary is repeated.
E. In one example, PW is zero, and only the top boundary or/and the bottom boundary is repeated.
F. In one example, PW and PH are both greater than zero, and the left boundary or/and the right boundary is repeated first, then the top boundary or/and the bottom boundary is repeated.
G. In one example, PW and PH are both greater than zero, and the top boundary or/and the bottom boundary is repeated first, and then the left boundary or/and the right boundary is repeated.
H. In one example, the left boundary is repeated M1 times and the right boundary is repeated PW−M1 times, where M1 is an integer and M1 >= 0.
I. Alternatively, if M1 (or PW−M1) is greater than 1, instead of repeating the first left (or right) column M1 times, multiple columns may be utilized; e.g., the M1 left columns (or PW−M1 right columns) may each be repeated.
I. In one example, the top boundary is repeated M2 times and the bottom boundary is repeated PH−M2 times, where M2 is an integer and M2 >= 0.
I. Alternatively, if M2 (or PH−M2) is greater than 1, instead of repeating the first top (or bottom) row M2 times, multiple rows may be utilized; e.g., the M2 top rows (or PH−M2 bottom rows) may each be repeated.
J. In one example, some default values may be used for boundary filling.
K. In one example, such a boundary pixel repetition method may be used only when both the horizontal and vertical components of the MV are fractional (i.e., they point to fractional pixel locations instead of integer pixel locations).
L. In one example, such a boundary pixel repetition method may be applied to some or all of the reference blocks.
I. In one example, if the MV of prediction direction X (X = 0 or 1) has fractional components in both the horizontal and vertical directions, such a boundary pixel repetition method is used for prediction direction X; otherwise, the method is not used.
II. In one example, if N (N >= 0) MV components have fractional precision, the boundary pixel repetition method may be applied to M (0 <= M <= N) of the N MV components.
1. N and M may be different for bi-directional prediction blocks and uni-directional prediction blocks.
2. N and M may be different for different block sizes (width or/and height or/and width × height).
3. For example, for a bi-prediction block, N equals 4 and M equals 4.
4. For example, for a bi-predictive block, N equals 4 and M equals 3.
5. For example, for a bi-prediction block, N equals 4 and M equals 2.
6. For example, for a bi-prediction block, N equals 4 and M equals 1.
7. For example, for a bi-predictive block, N is equal to 3 and M is equal to 3.
8. For example, for a bi-prediction block, N equals 3 and M equals 2.
9. For example, for a bi-prediction block, N equals 3 and M equals 1.
10. For example, for a bi-prediction block, N equals 2 and M equals 2.
11. For example, for a bi-prediction block, N equals 2 and M equals 1.
12. For example, for a bi-prediction block, N is equal to 1 and M is equal to 1.
13. For example, for a unidirectional prediction block, N equals 2 and M equals 2.
14. For example, for a unidirectional prediction block, N equals 2 and M equals 1.
15. For example, for a unidirectional prediction block, N equals 1 and M equals 1.
III. Different boundary pixel repetition methods can be used for the M MV components.
M. PW and/or PH may be different for different color components (such as Y, Cb, and Cr).
I. For example, whether and how boundary pixel repetition is applied may depend on the color format, such as 4:2:0, 4:2:2, or 4:4:4.
N. In one example, PW and/or PH may be different for different block sizes or shapes.
I. In one example, PW and PH are set equal to 1 for 4×16 or/and 16×4 bi-prediction or/and uni-prediction blocks.
II. In one example, PW and PH are set equal to 0 and 1 (or 1 and 0), respectively, for 4×4 bi-prediction or/and uni-prediction blocks.
III. In one example, PW and PH are set equal to 2 for 4×8 or/and 8×4 bi-prediction or/and uni-prediction blocks.
1. Alternatively, PW and PH are set equal to 2 and 3 (or 3 and 2), respectively, for 4×8 or/and 8×4 bi-prediction or/and uni-prediction blocks.
O. In one example, PW and PH may be different for uni-prediction and bi-prediction.
P. PW and PH may be different for different kinds of motion vectors.
I. In one example, PW and PH may be smaller (even zero) for motion vectors having a fractional component in only one direction (i.e., horizontal or vertical), and PW and PH may be larger for motion vectors having fractional components in both the horizontal and vertical directions.
II. For example, PW and PH are set equal to 0 for 4×16 or/and 16×4 or/and 4×8 or/and 8×4 or/and 4×4 bi-prediction or/and uni-prediction blocks having fractional MV components in only one direction, and the PW and PH described in bullet 4.i are used for 4×16 or/and 16×4 or/and 4×8 or/and 8×4 bi-prediction or/and uni-prediction blocks having fractional MV components in both directions.
Fig. 21 shows an example in which the boundary pixels of the reference block are repeated before interpolation.
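A sketch of bullet 15's reduced fetch plus boundary repetition, assuming the fetched region is aligned with the top-left of the full (W+N−1)×(H+N−1) region so that the repetition extends the right columns and bottom rows (one of the patterns described in bullets 15.D–G; the other patterns are analogous):

```cpp
#include <cstdint>
#include <vector>

// Bullet 15: only (W+N-1-PW) x (H+N-1-PH) reference pixels are fetched
// ('fetched', row-major, fw x fh); the missing PW right columns and PH
// bottom rows are synthesized by repeating the last fetched column/row,
// yielding the full (W+N-1) x (H+N-1) block needed for interpolation.
std::vector<int16_t> padReference(const int16_t* fetched, int fw, int fh,
                                  int PW, int PH) {
    const int fullW = fw + PW, fullH = fh + PH;
    std::vector<int16_t> full(static_cast<size_t>(fullW) * fullH);
    for (int y = 0; y < fullH; ++y) {
        const int sy = (y < fh) ? y : fh - 1;        // clamp to last row
        for (int x = 0; x < fullW; ++x) {
            const int sx = (x < fw) ? x : fw - 1;    // clamp to last column
            full[static_cast<size_t>(y) * fullW + x] = fetched[sy * fw + sx];
        }
    }
    return full;
}
```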
16. The proposed method may be applied to certain modes, block sizes/shapes, and/or certain sub-block sizes.
A. The proposed methods may be applied to certain modes, such as bi-prediction mode.
B. The proposed methods may be applied to certain block sizes.
I. In one example, they are applied only to blocks with W×H <= T, where W and H are the width and height of the current block.
II. In one example, they are applied only to blocks with H <= T.
C. The proposed methods may be applied to certain color components only (such as only the luma component).
17. The rounding operations described above may be defined as follows (see the sketch following this item):
A. Shift(x, s) is defined as
Shift(x, s) = (x + off) >> s
B. SignShift(x, s) is defined as
SignShift(x, s) = (x + off) >> s, if x >= 0; −((−x + off) >> s), otherwise,
where off is an integer such as 0 or 2^(s−1).
C. They may be defined as the operations used for motion vector rounding in the AMVR process, affine process, or other process modules.
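A sketch of the two rounding primitives; the SignShift form matches the reconstruction above (rounding magnitudes symmetrically about zero) and assumes s > 0 so that the rounding offset 2^(s−1) is well defined:

```cpp
// Bullet 17: rounding primitives. Shift() applies a plain offset-and-shift;
// SignShift() rounds magnitudes symmetrically about zero.
int Shift(int x, int s) {
    const int off = 1 << (s - 1);
    return (x + off) >> s;
}
int SignShift(int x, int s) {
    const int off = 1 << (s - 1);
    return x >= 0 ? (x + off) >> s : -((-x + off) >> s);
}
```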
18. In one example, how the MV is rounded may depend on the MV component.
A. For example, the y-component of the MV is rounded to integer-pel precision, but the x-component of the MV is not rounded.
B. In one example, when the color format is 4:2:0, the MV may be rounded to integer-pel precision before motion compensation of the luma component, but rounded to 2-pel precision before motion compensation of the chroma components.
19. A bilinear filter is proposed for the interpolation filtering in one or more specific situations (see the sketch following this list), such as:
a. 4×4 uni-directional prediction;
b. 4×8 bi-directional prediction;
c. 8×4 bi-directional prediction;
d. 4×16 bi-directional prediction;
e. 16×4 bi-directional prediction;
f. 8×8 bi-directional prediction;
g. 8×4 uni-directional prediction;
h. 4×8 uni-directional prediction.
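A sketch of the 2-tap bilinear interpolation suggested above for these block cases, at a quarter-pel phase frac in 0..3; the weights and rounding here are illustrative:

```cpp
// Bullet 19: 2-tap bilinear interpolation between neighboring samples a and
// b at quarter-pel phase frac (0..3), with round-to-nearest. This is the
// kind of filter that could replace the longer DCT-IF for the cases above.
int bilinear(int a, int b, int frac) {
    return (a * (4 - frac) + b * frac + 2) >> 2;
}
```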
20. It is proposed that when multi-hypothesis prediction is applied to a block, a short tap or otherwise different interpolation filter may be applied compared to the one applied in normal prediction mode.
A. In one example, a bilinear filter may be used.
B. The short tap or second interpolation filter may be applied to a reference picture list that involves multiple reference blocks, while for another reference picture list having only one reference block, the same filter as in normal prediction mode may be applied.
C. The proposed method may be applied under certain conditions, such as certain temporal layer(s), or the quantization parameter of the block/slice/picture containing the block being within a given range (such as above a threshold).
Fig. 17 is a block diagram of a video processing apparatus 1700. The apparatus 1700 may be used to implement one or more of the methods described herein. The apparatus 1700 may be embodied in a smart phone, tablet, computer, internet of things (IoT) receiver, or the like. The apparatus 1700 may include one or more processors 1702, one or more memories 1704, and video processing hardware 1706. The processor(s) 1702 may be configured to implement one or more of the methods described in this document. The memory(s) 1704 may be used to store data and code for implementing the methods and techniques described herein. The video processing hardware 1706 may be used to implement some of the techniques described in this document in hardware circuitry.
Fig. 19 is a flow chart of a method 1900 of video bitstream processing. The method 1900 includes: determining (1905) a shape of the video block; determining (1910) an interpolation order based on the video blocks, the interpolation order indicating an order in which horizontal interpolation and vertical interpolation are performed; and performing horizontal interpolation and vertical interpolation according to the interpolation order of the video blocks to reconstruct (1915) a decoded representation of the video blocks.
Fig. 20 is a flow chart of a method 2000 of video bitstream processing. The method 2000 comprises the following steps: determining (2005) a characteristic of a motion vector associated with the video block; determining (2010) an interpolation order of the video blocks based on the characteristics of the motion vectors, the interpolation order indicating an order in which horizontal interpolation and vertical interpolation are performed; and performing horizontal interpolation and vertical interpolation according to the interpolation order of the video blocks to reconstruct (2015) the decoded representation of the video blocks.
Fig. 22 is a flow chart of a method 2200 of video bitstream processing. The method 2200 includes: determining (2205) a size characteristic of the first video block; determining (2210) that a first interpolation filter is to be applied to the first video block based on the determination of the size characteristic; and performing (2215) further processing of the first video block using the first interpolation filter.
Fig. 23 is a flow chart of a method 2300 of video bitstream processing. The method 2300 includes: determining (2305) a first characteristic of a first video block; determining (2310) that a first interpolation filter is to be applied to the first video block based on the determination of the first characteristic; performing (2315) further processing of the first video block using the first interpolation filter; determining (2320) a second characteristic of a second video block; determining (2325) that a second interpolation filter is to be applied to the second video block based on the second characteristic, the first interpolation filter and the second interpolation filter being different short tap filters; and performing (2330) further processing of the second video block using the second interpolation filter.
Some examples of the order in which horizontal and vertical interpolation are performed, and their use, are described in Section 4 of this document with reference to methods 1900, 2000, 2200, and 2300. For example, as described in Section 4, either horizontal or vertical interpolation may be performed first, depending on the shape of the video block. In some embodiments, horizontal interpolation is performed before vertical interpolation, and in some embodiments, vertical interpolation is performed before horizontal interpolation.
Referring to methods 1900, 2000, 2200, and 2300, video blocks may be encoded in a video bitstream in which bit efficiency may be achieved by using a bitstream generation rule related to an interpolation order, which also depends on the shape of the video blocks.
The method may include, wherein rounding the motion vector comprises one or more of: rounding to the nearest integer pixel precision MV or rounding to half pixel precision MV.
The method may include, wherein rounding the MV includes one or more of: rounding down, rounding up, rounding to zero or rounding away from zero.
The method may include wherein the size information indicates that the size of the first video block is less than a threshold, and rounding the MV is applied to one or both of the horizontal MV component or the vertical MV component based on the size information indicating that the size of the first video block is less than the threshold.
The method may include wherein the size information indicates that the width or height of the first video block is less than a threshold, and rounding the MVs is applied to one or both of the horizontal MV component or the vertical MV component based on the size information indicating that the width or height of the first video block is less than the threshold.
The method may include wherein the threshold is different for bi-directional prediction blocks and uni-directional prediction blocks.
The method may include wherein the size information indicates that a ratio between a width and a height of the first video block is greater than a first threshold or less than a second threshold, and wherein rounding the MV is based on a determination of the size information.
The method may include wherein rounding the MV is further based on both the horizontal and vertical components of the MV being fractional.
The method may include wherein rounding the MVs is further based on the first video block being bi-directionally predicted or uni-directionally predicted.
The method may include wherein rounding the MVs is further based on a prediction direction associated with the first video block.
The method may include wherein rounding the MVs is further based on color components of the first video block.
The method may include wherein rounding the MV is further based on a size of the first video block, a shape of the first video block, or a prediction direction of the first video block.
The method may include wherein rounding the MVs is applied to sub-block prediction.
The method may include wherein the short tap filter is applied to the MV component based on the MV component having fractional accuracy.
The method may include wherein the short tap filter is applied based on a size of the first video block or codec information of the first video block.
The method may include wherein the short tap filter is applied based on a mode of the first video block.
The method may include wherein the default value is for boundary padding associated with the first video block.
The method may include, wherein the Merge mode is one or more of: a rule Merge list, a triangle Merge list, an affine Merge list, or other non-intra or non-AMVP modes.
The method may include wherein Merge candidates having fractional motion vectors are excluded from the Merge list.
The method may include wherein rounding the motion information includes rounding the Merge candidate associated with the fractional motion vector to integer precision, and the modified motion information is inserted into the Merge list.
The method may include wherein the motion information is a bi-prediction candidate.
The method may include wherein MMVD is Merge mode with motion vector differences.
The method may include wherein the motion vector is in MMVD mode.
The method may include wherein the first video block is an MMVD-coded block to be associated with integer-pixel precision, and wherein the base Merge candidate used in MMVD is modified to integer-pixel precision via rounding.
The method may include wherein the first video block is an MMVD-coded block to be associated with half-pixel precision, and wherein the base Merge candidate used in MMVD is modified to half-pixel precision via rounding.
The method may include wherein the threshold number is a maximum number of allowed half-pixel MV components or quarter-pixel MV components.
The method may include wherein the threshold number is different between bi-directional prediction and uni-directional prediction.
The method may include wherein the indication that bi-prediction is not allowed is signaled in a sequence parameter set, a picture parameter set, a sequence header, a picture header, a slice group header, CTU lines, regions, or other high level syntax.
The method may include wherein the method complies with a bitstream rule that allows only integer-pixel motion vectors for bi-predictively coded blocks having a particular size.
The method may include wherein the size of the first video block is: 4×16, 16×4, 4×8, 8×4, or 4×4.
The method may include wherein modifying or rounding the motion information includes modifying different MV components differently.
The method may include wherein the y component of the first MV is modified or rounded to integer pixels and the x component of the first MV is not modified or rounded.
The method may include wherein the luma component of the first MV is rounded to integer-pel precision and the chroma components of the first MV are rounded to 2-pel precision.
The method may include wherein the first MV is associated with a video block having a color format that is 4:2:0.
The method may include wherein the bilinear filter is used for 4×4 uni-directional prediction, 4×8 bi-directional prediction, 8×4 bi-directional prediction, 4×16 bi-directional prediction, 16×4 bi-directional prediction, 8×8 bi-directional prediction, 8×4 uni-directional prediction, or 4×8 uni-directional prediction.
Fig. 24 is a flow chart of a method 2400 of video processing. The method 2400 includes: determining (2402) a characteristic of a first block of video, for a conversion between the first block and a bitstream representation of the first block; determining (2404) a filter having interpolation filter parameters for an interpolation of the first block based on the characteristic of the first block; and performing (2406) the conversion by using the filter having the interpolation filter parameters.
In some examples, the interpolation filter parameters include filter taps and/or interpolation filter coefficients, and the interpolation includes at least one of vertical interpolation and horizontal interpolation.
In some examples, the filter includes a short tap filter having fewer taps than a conventional interpolation filter.
In some examples, a conventional interpolation filter has 8 taps.
In some examples, the characteristic of the first block includes a size parameter, wherein the size parameter includes at least one of a width, a height, a ratio of width to height, or a width × height dimension of the first block.
In some examples, the filter used for vertical interpolation is different in the number of taps than the filter used for horizontal interpolation.
In some examples, the filter used for vertical interpolation has fewer taps than the filter used for horizontal interpolation.
In some examples, the filter used for horizontal interpolation has fewer taps than the filter used for vertical interpolation.
In some examples, a short tap filter is used for horizontal interpolation or/and vertical interpolation when the size of the first block is less than and/or equal to a threshold.
In some examples, a short tap filter is used for horizontal interpolation or/and vertical interpolation when the size of the first block is greater than and/or equal to a threshold.
In some examples, the short tap filter is used for horizontal interpolation when the width of the first block is less than and/or equal to a threshold, or for vertical interpolation when the height of the first block is less than and/or equal to a threshold.
In some examples, the short tap filter is used for vertical interpolation and/or horizontal interpolation when the ratio between the width and the height is greater than a first threshold or less than a second threshold.
In some examples, the characteristics of the first block include at least one Motion Vector (MV) associated with the first block.
In some examples, a short tap filter is used for interpolation only when both the horizontal and vertical components of the MV are fractional.
In some examples, the characteristic of the first block includes a prediction parameter indicating whether the first block is bi-directionally predicted or uni-directionally predicted.
In some examples, whether to use a short tap filter depends on the prediction parameters.
In some examples, a short tap filter is used for interpolation only when the first block is bi-directionally predicted.
In some examples, the characteristics of the first block include a prediction direction indicating list 0 or list 1 and/or an associated Motion Vector (MV).
In some examples, whether to use a short tap filter depends on the prediction direction and/or MV of the first block.
In some examples, where the first block is a bi-predictive block, whether a short tap filter is used is different for different prediction directions.
In some examples, if the MV of the prediction direction X (X is 0 or 1) has a fractional component in both the horizontal direction and the vertical direction, a short tap filter is used for the prediction direction X; otherwise, the short tap filter is not used.
In some examples, if N MV components have fractional precision, a short tap filter is used for M MV components of the N MV components, where N and M are integers and 0 <= M <= N.
In some examples, N and M are different for bi-prediction blocks and uni-prediction blocks.
In some examples, for bi-prediction blocks, N equals 4 and M equals 4, or N equals 4 and M equals 3, or N equals 4 and M equals 2, or N equals 4 and M equals 1, or N equals 3 and M equals 3, or N equals 3 and M equals 2, or N equals 3 and M equals 1, or N equals 2 and M equals 2, or N equals 2 and M equals 1, or N equals 1 and M equals 1.
In some examples, for unidirectional prediction blocks, N equals 2 and M equals 2, or N equals 2 and M equals 1, or N equals 1 and M equals 1.
In some examples, the short tap filter includes a first short tap filter having an S1 tap and a second short tap filter having an S2 tap, and wherein K MV components of the M MV components use the first short tap filter and (M-K) MV components of the M MV components use the second short tap filter, wherein K is an integer ranging from 0 to M-1, and S1 and S2 are integers.
In some examples, N and M are different for different size parameters of the block, where the size parameters include width or/and height or/and width x height of the block.
In some examples, the characteristic of the first block includes a location of a pixel of the first block.
In some examples, whether to use a short tap filter depends on the location of the pixel.
In some examples, a short tap filter is used only for boundary pixels of the first block.
In some examples, the short tap filter is used only for the N1 right columns or/and N2 left columns or/and N3 top rows or/and N4 bottom rows of the first block, where N1, N2, N3, and N4 are integers.
In some examples, the characteristic of the first block includes a color component of the first block.
In some examples, whether to use a short tap filter is different for different color components of the first block.
In some examples, the color components include Y, Cb, and Cr.
In some examples, the characteristics of the first block include a color format of the first block.
In some examples, whether and how the short tap filter is applied depends on the color format of the first block.
In some examples, the color format includes 4:2:0, 4:2:2, or 4:4:4.
In some examples, the filters include different short tap filters having different taps, and the selection of the different short tap filters is based on the characteristics of the block.
In some examples, a 7 tap filter is selected for horizontal interpolation and vertical interpolation of 4 x 16 or/and 16 x 4 bi-predictive luminance blocks or/and uni-predictive luminance blocks.
In some examples, a 7 tap filter is selected for horizontal interpolation or vertical interpolation of the 4 x 4 uni-directional predicted luma block or/and the bi-directional predicted luma block.
In some examples, a 6-tap filter is selected for the horizontal and vertical interpolation of 4×8 or/and 8×4 bi-predictive luminance blocks or/and uni-predictive luminance blocks.
In some examples, a 6-tap filter and a 5-tap filter or a 5-tap filter and a 6-tap filter are selected for horizontal interpolation and vertical interpolation for a 4 x 8 or/and an 8 x 4 bi-predictive luminance block or/and a uni-predictive luminance block, respectively.
In some examples, the filters include different short tap filters with different taps, and the different short tap filters are used for different kinds of Motion Vectors (MVs).
In some examples, a longer tap length filter from a different short tap filter is used for MVs having fractional components in only one of the horizontal or vertical directions, and a shorter tap length filter from a different short tap filter is used for MVs having fractional components in both the horizontal and vertical directions.
In some examples, an 8-tap filter is used for 4×16 or/and 16×4 or/and 4×8 or/and 8×4 bi-prediction blocks or/and uni-prediction blocks having fractional MV components in only one of the horizontal or vertical directions, and a short-tap filter is used for 4×16 or/and 16×4 or/and 4×8 or/and 8×4 bi-prediction blocks or/and uni-prediction blocks having fractional MV components in both directions.
In some examples, the filter for affine motion is different from the filter for translational motion vectors.
In some examples, the filter for affine motion has fewer taps than the filter for translational motion vectors.
In some examples, the short tap filter is not applied to sub-block based prediction including affine prediction.
In some examples, a short tap filter is applied to sub-block based prediction including Advanced Temporal Motion Vector Prediction (ATMVP) prediction.
In some examples, each sub-block is used as a codec block to determine whether and how to apply the short tap filter.
In some examples, the characteristics of the first block include a size parameter and codec information of the first block, and whether and how to apply the short tap filter depends on the block size and the codec information of the first block.
In some examples, the short tap filter is applied when a certain mode including at least one of OBMC and interleaved affine prediction modes is enabled for the first block.
In some examples, the conversion generates the first/second blocks of video from the bitstream representation.
In some examples, the conversion generates a bitstream representation from the first/second block of video.
Fig. 25 is a flow chart of a method 2500 of video processing. The method 2500 includes: extracting (2502) reference pixels of a first reference block from a reference picture, for a conversion between a first block of video and a bitstream representation of the first block, wherein the first reference block is smaller than a second reference block required for motion compensation of the first block; filling (2504) the first reference block with filler pixels to generate the second reference block required for motion compensation of the first block; and performing (2506) the conversion by using the generated second reference block.
In some examples, the first block has a size W×H, the first reference block has a size (W+N−1−PW)×(H+N−1−PH), and the second reference block has a size (W+N−1)×(H+N−1), where W is the width of the first block, H is the height of the first block, N is the number of interpolation filter taps for the first block, and PW and PH are integers.
In some examples, the step of filling the first reference block with filler pixels to generate the second reference block includes: pixels at one or more boundaries of the first reference block are repeated as filler pixels to generate a second reference block.
In some examples, the boundaries are a top boundary, a left boundary, a bottom boundary, and a right boundary of the first reference block.
In some examples, W = 8, H = 4, N = 7, PW = 2, and PH = 3.
In some examples, pixels at the top, left, and right boundaries are repeated once, and pixels at the bottom boundary are repeated twice.
In some examples, the extracted reference pixels are identified by (x + MVXInt − N/2 + offSet1, y + MVYInt − N/2 + offSet2), where (x, y) is the top-left position of the first block, (MVXInt, MVYInt) is the integer part of the Motion Vector (MV) of the first block, and offSet1 and offSet2 are integers.
In some examples, when PH is zero, only pixels at the left boundary or/and the right boundary of the first reference block are repeated.
In some examples, when PW is zero, only pixels at the top boundary or/and bottom boundary of the first reference block are repeated.
In some examples, when PW and PH are both greater than zero, pixels at the left or/and right boundary of the first reference block are repeated first, then pixels at the top or/and bottom boundary of the first reference block are repeated, or the top or/and bottom boundary of the first reference block is repeated first, then the left or/and right boundary of the first reference block is repeated.
In some examples, pixels at the left boundary of the first reference block are repeated M1 times and pixels at the right boundary of the first reference block are repeated (PW-M1) times, where M1 is an integer and M1> =0.
In some examples, the pixels of the M1 left columns of the first reference block or the pixels of the (PW-M1) right columns of the first reference block are repeated, where M1>1 or PW-M1>1.
In some examples, the pixels at the top boundary of the first reference block are repeated M2 times and the pixels at the bottom boundary of the first reference block are repeated (PH-M2) times, where M2 is an integer and M2> =0.
In some examples, the pixels of the M2 top rows of the first reference block or the pixels of the (PH−M2) bottom rows of the first reference block are repeated, where M2 > 1 or PH−M2 > 1.
In some examples, when both the horizontal and vertical components of the MV of the first block are fractional, pixels at one or more boundaries of the first reference block are repeated as filler pixels to generate the second reference block.
In some examples, when MVs of the prediction direction X (X is 0 or 1) have fractional components in both the horizontal and vertical directions, pixels at one or more boundaries of the first reference block are repeated as filler pixels to generate the second reference block.
In some examples, the first reference block is any one of a portion or all of the reference blocks of the first block.
In some examples, if the MV of the prediction direction X (X is 0 or 1) has fractional components in both the horizontal and vertical directions, pixels at one or more boundaries of the first reference block are repeated as filler pixels to generate a second reference block of the prediction direction X; otherwise, the pixel is not repeated.
In some examples, if N2 MV components have fractional precision, pixels at one or more boundaries of the first reference block are repeated as filler pixels to generate the second reference block for M MV components of the N2 MV components, where N2 and M are integers and 0 <= M <= N2.
In some examples, N2 and M are different for bi-prediction blocks and uni-prediction blocks.
In some examples, N2 and M are different for different block sizes, the block sizes being associated with a width or/and a height or/and a width x height of the block.
In some examples, for bi-prediction blocks, N2 equals 4 and M equals 4, or N2 equals 4 and M equals 3, or N2 equals 4 and M equals 2, or N2 equals 4 and M equals 1, or N2 equals 3 and M equals 3, or N2 equals 3 and M equals 2, or N2 equals 3 and M equals 1, or N2 equals 2 and M equals 2, or N2 equals 2 and M equals 1, or N2 equals 1 and M equals 1.
In some examples, for unidirectional prediction blocks, N2 is equal to 2 and M is equal to 2, or N2 is equal to 2 and M is equal to 1, or N2 is equal to 1 and M is equal to 1.
In some examples, pixels at different boundaries of the first reference block are repeated as filler pixels in different ways to generate a second reference block of M MV components.
In some examples, when pixel padding is not used for the horizontal MV component, PW is set equal to zero when the first reference block is extracted using the MV.
In some examples, when pixel padding is not used for the vertical MV component, PH is set equal to zero when the first reference block is extracted using MV.
In some examples, PW and/or PH are different for different color components of the first block.
In some examples, the color components include Y, Cb, and Cr.
In some examples, PW and/or PH are different for different block sizes or shapes.
In some examples, PW and PH are set equal to 1 for 4×16 or/and 16×4 bi-prediction blocks or/and uni-prediction blocks.
In some examples, PW and PH are set equal to 0 and 1, or 1 and 0, respectively, for a 4×4 bi-prediction block or/and a uni-prediction block.
In some examples, PW and PH are set equal to 2 for 4×8 or/and 8×4 bi-prediction blocks or/and uni-prediction blocks.
In some examples, PW and PH are set equal to 2 and 3, or 3 and 2, respectively, for 4 x 8 or/and 8 x 4 bi-prediction blocks or/and uni-prediction blocks.
In some examples, PW and PH are different for unidirectional prediction and bi-directional prediction.
In some examples, PW and PH are different for different kinds of motion vectors.
In some examples, PW and PH are set to smaller values or equal to zero for Motion Vectors (MVs) having fractional components in only one of the horizontal direction or the vertical direction, and PW and PH are set to larger values for MVs having fractional components in both the horizontal direction and the vertical direction.
In some examples, PW and PH are set equal to 0 for a 4×16 or/and 16×4 or/and 4×8 or/and 8×4 or/and 4×4 bi-prediction block or/and uni-prediction block having a fractional MV component in only one of the horizontal direction or the vertical direction.
In some examples, PW and PH are used for 4×16 or/and 16×4 or/and 4×8 or/and 8×4 or/and 4×4 bi-prediction blocks or/and uni-prediction blocks having fractional MV components in both the horizontal and vertical directions.
In some examples, whether and how to repeat the pixels at the boundary depends on the color format of the first block.
In some examples, the color format includes 4:2:0, 4:2:2, or 4:4:4.
In some examples, the step of padding the first reference block with filler pixels to generate the second reference block includes: padding default values as the filler pixels to generate the second reference block.
In some examples, the conversion generates a first block of video from the bitstream representation.
In some examples, the conversion generates a bitstream representation from the first/second block of video.
Fig. 26 is a flow chart of a method 2600 of video processing. The method 2600 comprises: determining (2602) characteristics of a first block of the video for a transition between the first block and a bitstream representation of the first block; performing (2604) a rounding procedure on a Motion Vector (MV) of the first block based on a characteristic of the first block; and performing (2606) the conversion by using the rounded MVs.
In some examples, performing the rounding process on the MV includes rounding the MV to integer-pixel precision or half-pixel precision.
In some examples, the MVs are rounded to the nearest integer pixel precision MV or half-pixel precision MV.
In some examples, performing the rounding procedure on the MV includes rounding the MV up, rounding it down, rounding it toward zero, or rounding it away from zero.
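As a concrete illustration, the helpers below sketch these rounding options for a single MV component, assuming (as an illustration only) that MVs are stored in 1/16-pel units, so shift = 4 targets integer-pel precision and shift = 3 targets half-pel precision.

```cpp
#include <cstdint>

enum class RoundMode { Nearest, Up, Down, TowardZero, AwayFromZero };

static int32_t floorDiv(int32_t v, int32_t s) {  // quotient rounded toward -inf
  return (v >= 0) ? v / s : -((-v + s - 1) / s);
}
static int32_t ceilDiv(int32_t v, int32_t s) {   // quotient rounded toward +inf
  return (v >= 0) ? (v + s - 1) / s : -((-v) / s);
}

// shift = 4: 1/16-pel -> integer-pel; shift = 3: 1/16-pel -> half-pel.
int32_t roundMvComponent(int32_t v, int shift, RoundMode mode) {
  const int32_t step = 1 << shift;
  switch (mode) {
    case RoundMode::Nearest:      return floorDiv(v + step / 2, step) * step;
    case RoundMode::Up:           return ceilDiv(v, step) * step;
    case RoundMode::Down:         return floorDiv(v, step) * step;
    case RoundMode::TowardZero:   return (v / step) * step;  // C++ truncation
    case RoundMode::AwayFromZero:
      return ((v >= 0) ? ceilDiv(v, step) : floorDiv(v, step)) * step;
  }
  return v;
}
```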
In some examples, the characteristic of the first block includes a size parameter, wherein the size parameter includes at least one of the width, the height, the ratio of width to height, or the product width x height of the first block.
In some examples, the rounding process is performed on the horizontal or/and vertical components of the MV when the size of the first block is less than and/or equal to the threshold L.
In some examples, the rounding process is performed on the horizontal or/and vertical components of the MV when the size of the first block is greater than and/or equal to the threshold L.
In some examples, the rounding process is performed on the horizontal component of the MV when the width of the first block is less than and/or equal to the second threshold L1, or the rounding process is performed on the vertical component of the MV when the height of the first block is less than and/or equal to the second threshold L1.
In some examples, the thresholds L and L1 are different for bi-prediction blocks and uni-prediction blocks.
In some examples, the rounding process is performed on the MV when the ratio between the width and the height is greater than the third threshold L3 or less than the fourth threshold L4.
In some examples, the rounding process is performed on the MV when both the horizontal and vertical components of the MV are fractional.
In some examples, the characteristic of the first block includes a prediction parameter indicating whether the first block is bi-directionally predicted or uni-directionally predicted.
In some examples, whether to perform the rounding procedure on the MVs depends on the prediction parameters.
In some examples, the rounding process is performed on MVs only when the first block is bi-directionally predicted.
In some examples, the characteristics of the first block include indicating a prediction direction from list 0 or list 1 and/or an associated MV.
In some examples, whether to perform the rounding process on the MVs depends on the prediction direction of the first block and/or the MVs.
In some examples, where the first block is a bi-predictive block, whether to perform the rounding procedure on the MVs is different for different prediction directions.
In some examples, if MVs of the prediction direction X (X is 0 or 1) have fractional components in both the horizontal direction and the vertical direction, performing a rounding procedure on N MV components for the prediction direction X, N being an integer ranging from 0 to 2; otherwise, no rounding procedure is performed.
In some examples, if N1 MV components have fractional precision, a rounding process is performed on M MV components of the N1 MV components, where N1 and M are integers, and 0<=M<=N1.
In some examples, N1 and M are different for bi-prediction blocks and uni-prediction blocks.
In some examples, for bi-prediction blocks,
N1 is equal to 4 and M is equal to 4, or
N1 is equal to 4 and M is equal to 3, or
N1 is equal to 4 and M is equal to 2, or
N1 is equal to 4 and M is equal to 1, or
N1 is equal to 3 and M is equal to 3, or
N1 is equal to 3 and M is equal to 2, or
N1 is equal to 3 and M is equal to 1, or
N1 is equal to 2 and M is equal to 2, or
N1 is equal to 2 and M is equal to 1, or
N1 is equal to 1 and M is equal to 1.
In some examples, for unidirectional prediction blocks,
N1 is equal to 2 and M is equal to 2, or
N1 is equal to 2 and M is equal to 1, or
N1 is equal to 1 and M is equal to 1.
In some examples, N1 and M are different for different size parameters, wherein the size parameters include at least one of the width, the height, the ratio of width to height, or the product width x height of the first block.
In some examples, K of the M MV components are rounded to integer pixel precision and M-K MV components are rounded to half pixel precision, where K is an integer ranging from 0 to M-1.
In some examples, the characteristic of the first block includes a color component of the first block.
In some examples, whether to perform the rounding procedure on the MVs is different for different color components of the first block.
In some examples, the color components include Y, Cb, and Cr.
In some examples, the characteristics of the first block include a color format of the first block.
In some examples, whether to perform the rounding process on the MVs depends on the color format of the first block.
In some examples, the color format includes 4:2:0, 4:2:2, or 4:4:4.
In some examples, whether and/or how to perform the rounding process on MVs depends on the characteristics of the block.
In some examples, one or more MV components of a 4 x 16 or/and 16 x 4 bi-predictive luma block or/and uni-predictive luma block are rounded to half-pixel precision.
In some examples, one or more MV components of a 4 x 16 or/and 16 x 4 bi-predictive luma block or/and uni-predictive luma block are rounded to integer-pixel precision.
In some examples, one or more MV components of a 4 x 4 uni-directional predicted luma block or/and a bi-directional predicted luma block are rounded to integer-pixel precision.
In some examples, one or more MV components of a 4 x 8 or/and 8 x 4 bi-predictive luma block or/and uni-predictive luma block are rounded to integer-pixel precision.
In some examples, the characteristics of the first block include whether the first block is encoded using a sub-block based prediction method including an affine prediction mode and a sub-block based temporal motion vector prediction (SbTMVP) mode.
In some examples, if the first block is encoded in affine prediction mode, then the rounding procedure for MVs is not applied.
In some examples, if the first block is coded in SbTMVP mode, a rounding procedure on the MV is applied and a rounding procedure is performed on each sub-block of the first block.
In some examples, performing a rounding procedure on a Motion Vector (MV) of the first block based on characteristics of the first block includes: determining whether at least one MV of the first block is of fractional precision when the size parameter of the first block satisfies a predetermined rule; and in response to determining that at least one MV of the first block is of fractional precision, performing a rounding procedure on the at least one MV to generate a rounded MV having integer precision.
In some examples, the bitstream representation of the first block follows a rule that depends on a size parameter of the first block, wherein only integer-pixel MVs are allowed for bi-predicted codec blocks.
In some examples, the size parameter of the first block is 4×16, 16×4, 4×8, 8×4, or 4×4.
In some examples, performing the conversion by using the rounded MVs includes: motion compensation is performed on the first block by using the rounded MV.
Fig. 27 is a flow chart of a method 2700 of video processing. The method 2700 includes: determining (2702) a characteristic of a first block of the video for a transition between the first block and a bitstream representation of the first block; performing (2704) motion compensation on the first block using MVs having a first precision; and storing (2706), for the first block, MVs having a second precision; wherein the first precision is different from the second precision.
In some examples, the characteristic of the first block includes a size parameter, wherein the size parameter includes at least one of the width, the height, the ratio of width to height, or the product width x height of the first block.
In some examples, the first precision is an integer precision and the second precision is a fractional precision.
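A minimal sketch of method 2700's precision split, again reusing the earlier MotionVector and roundMvComponent sketches: the MV actually used for interpolation is rounded, while the original fractional MV is what gets written to the motion field for later prediction. The struct and names are illustrative assumptions.

```cpp
struct BlockMotionInfo {
  MotionVector mcMv;      // first precision (integer): used for interpolation
  MotionVector storedMv;  // second precision (fractional): kept in the MV buffer
};

BlockMotionInfo deriveMotionInfo(MotionVector original) {
  BlockMotionInfo info;
  info.mcMv.x = roundMvComponent(original.x, 4, RoundMode::Nearest);
  info.mcMv.y = roundMvComponent(original.y, 4, RoundMode::Nearest);
  info.storedMv = original;  // fractional precision preserved for storage
  return info;
}
```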
Fig. 28 is a flow chart of a method 2800 of video processing. The method 2800 includes: determining (2802) a codec mode for the first block for a transition between the first block of video and a bitstream representation of the first block; performing (2804) a rounding procedure on a Motion Vector (MV) of the first block if a codec mode of the first block satisfies a predetermined rule; and performing (2806) motion compensation of the first block by using the rounded MV.
In some examples, the predetermined rule includes: the first block is encoded and decoded in Merge mode, non-intra mode, or non-Advanced Motion Vector Prediction (AMVP) mode.
Fig. 29 is a flow chart of a method 2900 of video processing. The method 2900 includes: generating (2902) a first Motion Vector (MV) candidate list for the first block for conversion between the first block of video and a bitstream representation of the first block; performing (2904) a rounding procedure on MVs of the at least one candidate before adding the at least one candidate to the first MV candidate list; and performing (2906) the conversion by using the first MV candidate list.
In some examples, the first block is encoded in Merge mode, non-intra mode, or non-Advanced Motion Vector Prediction (AMVP) mode, and the MV candidate list includes a Merge candidate list and a non-Merge candidate list.
In some examples, candidates with fractional MVs are excluded from the first MV candidate list.
In some examples, the at least one candidate includes: a candidate derived from a spatial block, a candidate derived from a temporal block, a candidate derived from a history-based motion vector prediction (HMVP) table, or a pairwise bi-prediction Merge candidate.
In some examples, the method further comprises: a separate HMVP table is provided to store candidates for MVs with integer precision.
In some examples, the method further comprises: the rounding process is performed on MVs or on candidate MVs in the candidate list based on the characteristics of the first block.
In some examples, the characteristic of the first block includes a size parameter, wherein the size parameter includes at least one of the width, the height, the ratio of width to height, or the product width x height of the first block.
In some examples, the size parameter includes at least one of 4×16, 16×4, 4×8, 8×4, 4×4.
In some examples, the characteristics of the first block include prediction parameters indicating whether the first block is bi-directionally predicted or uni-directionally predicted, and performing the rounding procedure on the MV includes: only when the candidate is a bi-prediction candidate, a rounding process is performed on MVs or on MVs of the candidates in the candidate list.
In some examples, the first block is encoded in AMVP mode and the candidate is an AMVP candidate.
In some examples, the first block is coded in a non-affine mode.
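The list-construction side of method 2900 can be sketched as below, reusing the earlier helpers. The MergeCandidate layout and the choice to round every candidate (rather than, say, excluding still-fractional candidates, which the text also allows) are assumptions for illustration.

```cpp
#include <vector>

struct MergeCandidate { MotionVector mv[2]; bool biPredicted; };

// Round a candidate's MVs to integer precision before inserting it into the
// first MV candidate list (spatial, temporal, HMVP, or pairwise candidates).
void addRoundedCandidate(std::vector<MergeCandidate>& list,
                         MergeCandidate cand) {
  for (MotionVector& mv : cand.mv) {
    mv.x = roundMvComponent(mv.x, 4, RoundMode::Nearest);
    mv.y = roundMvComponent(mv.y, 4, RoundMode::Nearest);
  }
  list.push_back(cand);
}
```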
Fig. 30 is a flow chart of a method 3000 of video processing. The method 3000 includes: determining (3002) characteristics of a first block of the video for a transition between the first block and a bitstream representation of the first block; determining (3004) a constraint parameter to be applied to the first block based on a characteristic of the first block, wherein the constraint parameter constrains a maximum number of fractional Motion Vector (MV) components of the first block; and performing (3006) the conversion by using the constraint parameters.
In some examples, the MV components include at least one of horizontal MV components and/or vertical MV components, and the fractional MV components include at least one of half-pixel MV components, quarter-pixel MV components, MV components with finer precision than quarter-pixels.
In some examples, the characteristic of the first block includes a prediction parameter indicating whether the first block is bi-directionally predicted or uni-directionally predicted.
In some examples, the constraint parameters are different for bi-prediction and uni-prediction.
In some examples, constraint parameters are not applied in unidirectional prediction.
In some examples, the constraint parameter is applied when the first block is a bi-directionally predicted 4 x 8, 8 x 4, 4 x 16, or 16 x 4 block.
In some examples, the constraint parameter is not applied when the first block is a uni-directionally predicted 4 x 8, 8 x 4, 4 x 16, or 16 x 4 block.
In some examples, the constraint parameter is applied when the first block is a uni-directionally predicted 4 x 4 block or a bi-directionally predicted 4 x 4 block.
In some examples, the maximum number of fractional MV components is 3, 2, 1, or 0 for a bi-prediction block.
In some examples, the maximum number of fractional MV components is 1 or 0 for a unidirectional prediction block.
In some examples, for bi-prediction blocks, the maximum number of quarter-pixel MV components is 3, 2, 1, or 0.
In some examples, for unidirectional prediction blocks, the maximum number of quarter-pixel MV components is 1 or 0.
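The constraint in method 3000 amounts to counting fractional components, as in this hedged sketch; the 1/16-pel storage and the two-MV bi-prediction layout are assumptions, and the MotionVector type is the one sketched earlier.

```cpp
// Count fractional components over the block's MVs: up to 4 components for a
// bi-prediction block (two MVs), up to 2 for a uni-prediction block (one MV).
int countFractionalComponents(const MotionVector* mvs, int numMvs) {
  int count = 0;
  for (int i = 0; i < numMvs; ++i) {
    if (mvs[i].x & 15) ++count;
    if (mvs[i].y & 15) ++count;
  }
  return count;
}

bool satisfiesConstraint(const MotionVector* mvs, bool biPredicted,
                         int maxFractional) {
  return countFractionalComponents(mvs, biPredicted ? 2 : 1) <= maxFractional;
}
```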
In some examples, the characteristics of the first block include at least one of a shape and a size parameter, wherein the size parameter includes at least one of the width, the height, the ratio of width to height, or the product width x height of the first block.
In some examples, the constraint parameters are different for different sizes or shapes of the first block.
In some examples, the characteristic of the first block includes a mode parameter indicating a codec mode of the first block.
In some examples, the codec mode includes a triangle mode in which the current block is divided into two partitions, where each partition has at least one MV.
In some examples, the constraint parameter is applied when the first block is a 4×16 or 16×4 block that is encoded and decoded in triangle mode.
Fig. 31 is a flow chart of a method 3100 of video processing. The method 3100 includes: obtaining (3102) an indication that signaling of at least one of bi-prediction and uni-prediction is not allowed when a characteristic of the block satisfies a predetermined rule; determining (3104) characteristics of a first block of the video for conversion between the first block and a bitstream representation of the first block; and performing (3106) the conversion by using the indication when the characteristic of the first block satisfies a predetermined rule.
Fig. 32 is a flow chart of a method 3200 of video processing. The method 3200 includes: signaling (3202) an indication that at least one of bi-prediction and uni-prediction is not allowed when a characteristic of the block meets a predetermined rule; determining (3204) characteristics of a first block of the video for a transition between the first block and a bitstream representation of the first block; the conversion is performed (3206) based on characteristics of the first block, wherein at least one of bi-prediction and uni-prediction is disabled during the conversion when the characteristics of the first block satisfy a predetermined rule.
In some examples, the indication is signaled in a sequence parameter set/picture parameter set/sequence header/picture header/slice group header/Codec Tree Unit (CTU) line/region/other high level syntax.
In some examples, the characteristic of the first block includes a size parameter, wherein the size parameter includes at least one of the width, the height, the ratio of width to height, the product width x height, and the shape of the first block.
In some examples, the predetermined rule includes: the first block has some block size.
In some examples, the characteristic of the first block includes a mode parameter indicating a codec mode of the first block.
In some examples, the predetermined rule includes: the first block is encoded in a non-affine mode.
In some examples, when at least one of unidirectional prediction and bi-directional prediction is not allowed for the first block, signaling of the Adaptive Motion Vector Resolution (AMVR) parameters of the first block is modified accordingly.
In some examples, signaling of the Adaptive Motion Vector Resolution (AMVR) parameters is modified such that only integer-pixel precision is allowed for the first block.
In some examples, signaling of the Adaptive Motion Vector Resolution (AMVR) parameters is modified such that different Motion Vector (MV) precisions are utilized.
In some examples, the block size of the first block is at least one of 4 x 16, 16 x 4, 4 x 8, 8 x 4, 4 x 4.
In some examples, the bitstream representation of the first block follows a rule that depends on a size parameter of the first block, wherein only integer-pixel MVs are allowed for bi-predicted codec blocks.
Fig. 33 is a flow chart of a method 3300 of video processing. Method 3300 includes: determining (3302), for a transition between a first block of the video and a bitstream representation of the first block, whether fractional Motion Vector (MV) or Motion Vector Difference (MVD) precision is allowed for the first block; based on the determination, signaling (3304) Adaptive Motion Vector Resolution (AMVR) parameters of the first block; and performing (3306) the conversion by using the AMVR parameters.
Fig. 34 is a flow chart of a method 3400 of video processing. The method 3400 includes: determining (3402), for a transition between a first block of the video and a bitstream representation of the first block, whether fractional Motion Vector (MV) or Motion Vector Difference (MVD) precision is allowed for the first block; based on the determination, obtaining (3404) Adaptive Motion Vector Resolution (AMVR) parameters of the first block; and performing (3406) the conversion by using the AMVR parameters.
In some examples, if the fractional MV or MVD precision is not allowed for the first block, an AMVR parameter indicating whether the MV/MVD precision of the current block is fractional precision is skipped and implicitly derived as false.
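A small parsing sketch of that inference rule follows; the BitReader interface is an assumption for illustration, not a real decoder API.

```cpp
#include <vector>

struct BitReader {
  std::vector<bool> bits;
  size_t pos = 0;
  bool readFlag() { return pos < bits.size() && bits[pos++]; }
};

// If fractional MV/MVD precision is disallowed for the block, the AMVR flag
// is not present in the bitstream and is implicitly derived as false.
bool parseAmvrFractionalFlag(BitReader& br, bool fractionalAllowed) {
  return fractionalAllowed ? br.readFlag() : false;
}
```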
5. Examples
In the following examples, PW and PH are designed for 4×16, 16×4, 4×4, 8×4 and 4×8 blocks.
Assume that the MV of a block in reference list X is MVX, that the horizontal and vertical components of MVX are MVX[0] and MVX[1] respectively, and that the integer parts of MVX[0] and MVX[1] are MVXInt[0] and MVXInt[1] respectively, where X=0 or 1. Let the interpolation filter tap (in motion compensation) be N (e.g., 8, 6, 4, or 2), the current block size be W×H, and the position of the current block (i.e., the position of its upper-left pixel) be (x, y). Row and column indices start from 1, e.g., the H rows are row 1, …, row H.
The following boundary pixel repetition process is performed only when MVX [0] and MVX [1] are both fractional.
5.1 Example
For 4×16 and 16×4 unidirectional prediction blocks and bi-directional prediction blocks, PW and PH are both set equal to 1 for the prediction direction X. First, (W+N-2)×(H+N-2) reference pixels are extracted from the reference picture, where the upper-left position of the reference pixels is identified by (MVXInt[0]+x-N/2+1, MVXInt[1]+y-N/2+1). Then, the (W+N-1)th column is generated by copying the (W+N-2)th column. Finally, the (H+N-1)th row is generated by copying the (H+N-2)th row.
For a 4×4 unidirectional prediction block, PW and PH are set equal to 0 and 1, respectively. First, (W+N-1)×(H+N-2) reference pixels are extracted from the reference picture, where the upper-left position of the reference pixels is identified by (MVXInt[0]+x-N/2+1, MVXInt[1]+y-N/2+1). Then, the (H+N-1)th row is generated by copying the (H+N-2)th row.
For 4×8 and 8×4 unidirectional prediction blocks and bi-directional prediction blocks, PW and PH are set equal to 2 and 3, respectively. First, (W+N-3)×(H+N-4) reference pixels are extracted from the reference picture, where the upper-left position of the reference pixels is identified by (MVXInt[0]+x-N/2+2, MVXInt[1]+y-N/2+2). Then, the 1st column is copied to its left to obtain W+N-2 columns, after which the (W+N-1)th column is generated by copying the (W+N-2)th column. Finally, the 1st row is copied above to obtain H+N-3 rows, after which the (H+N-2)th and (H+N-1)th rows are generated by copying the (H+N-3)th row.
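The 4×16/16×4 case of this example can be written compactly: fetch the (W+N-2)×(H+N-2) region and let reads past its last column/row clamp, which reproduces "copy column (W+N-2) into column (W+N-1)" and likewise for the last row. This is a hedged sketch under the notation above; the refPel accessor, which returns the reference-picture sample at a given position, is an assumption.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <vector>

// Build the (W+N-1) x (H+N-1) padded reference area for a 4x16 / 16x4 block
// with PW = PH = 1, following Example 5.1.
std::vector<uint8_t> buildPaddedRef(const std::function<uint8_t(int, int)>& refPel,
                                    int x, int y, int W, int H, int N,
                                    int mvXInt, int mvYInt) {
  const int fw = W + N - 2, fh = H + N - 2;  // fetched region size
  const int ox = mvXInt + x - N / 2 + 1;     // upper-left fetch position
  const int oy = mvYInt + y - N / 2 + 1;
  std::vector<uint8_t> buf(static_cast<size_t>(fw + 1) * (fh + 1));
  for (int j = 0; j <= fh; ++j)
    for (int i = 0; i <= fw; ++i) {
      const int si = std::min(i, fw - 1);    // column (W+N-1) copies (W+N-2)
      const int sj = std::min(j, fh - 1);    // row (H+N-1) copies (H+N-2)
      buf[static_cast<size_t>(j) * (fw + 1) + i] = refPel(ox + si, oy + sj);
    }
  return buf;
}
```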
5.2 Example
For 4×16 and 16×4 unidirectional prediction blocks and bi-directional prediction blocks, PW and PH are both set equal to 1 for the prediction direction X. First, (W+N-2)×(H+N-2) reference pixels are extracted from the reference picture, where the upper-left position of the reference pixels is identified by (MVXInt[0]+x-N/2+2, MVXInt[1]+y-N/2+2). Then, the 1st column is copied to its left to obtain W+N-1 columns. Finally, the 1st row is copied above to obtain H+N-1 rows.
For a 4×4 unidirectional prediction block, PW and PH are set equal to 0 and 1, respectively. First, (W+N-1)×(H+N-2) reference pixels are extracted from the reference picture, where the upper-left position of the reference pixels is identified by (MVXInt[0]+x-N/2+1, MVXInt[1]+y-N/2+2). Then, the 1st row is copied above to obtain H+N-1 rows.
For 4×8 and 8×4 unidirectional prediction blocks and bi-directional prediction blocks, PW and PH are set equal to 2 and 3, respectively. First, (W+N-3)×(H+N-4) reference pixels are extracted from the reference picture, where the upper-left position of the reference pixels is identified by (MVXInt[0]+x-N/2+2, MVXInt[1]+y-N/2+2). Then, the 1st column is copied to its left to obtain W+N-2 columns, after which the (W+N-1)th column is generated by copying the (W+N-2)th column. Finally, the 1st row is copied above to obtain H+N-3 rows, after which the (H+N-2)th and (H+N-1)th rows are generated by copying the (H+N-3)th row.
It should be appreciated that the disclosed techniques may be embodied in video encoders or decoders to improve compression efficiency when the shape of the compressed codec unit is significantly different from a conventional square block or half square rectangular block. For example, new codec tools that use long or tall codec units, such as 4 x 32 or 32 x 4 sized units, may benefit from the disclosed techniques.
The disclosed and other solutions, examples, embodiments, modules, and functional operations described in this document may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments may be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium, for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a combination of materials embodying a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. In addition to hardware, the apparatus may include code that creates a runtime environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. The computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Typically, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not require such a device. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disk; CD-ROM and DVD-ROM discs. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Furthermore, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few embodiments and examples are described, and other implementations, enhancements, and variations may be made based on what is described and illustrated in this patent document.
Claims (41)
1. A method of video processing, comprising:
determining a characteristic of a first block of video for a transition between the first block and a bitstream of the first block;
performing a rounding process on a motion vector MV of the first block based on the characteristic of the first block; and
performing the conversion by using the rounded MV,
wherein performing the rounding process on the MV includes rounding the MV to integer-pixel precision or half-pixel precision,
wherein the MV is rounded to the nearest integer-pixel-precision MV or half-pixel-precision MV,
wherein the characteristics of the first block include whether the first block is encoded and decoded by a sub-block-based prediction method including an affine prediction mode and a sub-block-based temporal motion vector prediction (SbTMVP) mode, and
wherein, if the first block is encoded and decoded in the affine prediction mode, the rounding process on the MV is not applied, or, if the first block is encoded and decoded in the SbTMVP mode, the rounding process on the MV is applied and the rounding process is performed on each sub-block of the first block.
2. The method of claim 1, wherein performing a rounding procedure on the MV comprises rounding the MV up, rounding it down, rounding it toward zero, or rounding it away from zero.
3. The method of claim 1 or 2, wherein the characteristics of the first block comprise a size parameter, wherein the size parameter comprises at least one of the width, the height, the ratio of width to height, or the product width x height of the first block.
4. A method according to claim 3, wherein a rounding procedure is performed on the horizontal or/and vertical components of the MV when the size of the first block is smaller than and/or equal to a threshold L.
5. A method according to claim 3, wherein a rounding procedure is performed on the horizontal or/and vertical components of the MV when the size of the first block is greater than and/or equal to a threshold L.
6. A method according to claim 3, wherein when the width of the first block is less than and/or equal to a second threshold L1, a rounding process is performed on the horizontal component of the MV, or
When the height of the first block is less than and/or equal to a second threshold L1, a rounding process is performed on the vertical component of the MV.
7. The method of any of claims 4-6, wherein the thresholds L and L1 are different for bi-prediction blocks and uni-prediction blocks.
8. A method according to claim 3, wherein a rounding procedure is performed on the MV when the ratio between width and height is greater than a third threshold L3 or less than a fourth threshold L4.
9. The method of claim 1 or 2, wherein a rounding process is performed on the MV when both the horizontal and vertical components of the MV are fractional.
10. The method of claim 1 or 2, wherein the characteristics of the first block include a prediction parameter indicating whether the first block is bi-directionally predicted or uni-directionally predicted.
11. The method of claim 10, wherein whether to perform a rounding procedure on the MV depends on the prediction parameters.
12. The method of claim 11, wherein a rounding process is performed on the MV only when the first block is bi-directionally predicted.
13. The method of claim 1 or 2, wherein the characteristics of the first block include indicating a prediction direction from list 0 or list 1 and/or an associated MV.
14. The method of claim 13, wherein whether to perform a rounding procedure on the MV depends on a prediction direction of the first block and/or MV.
15. The method of claim 14, wherein, in the case where the first block is a bi-predictive block, whether to perform a rounding procedure on the MV is different for different prediction directions.
16. The method of claim 15, wherein if the MVs of a prediction direction X (X being 0 or 1) have fractional components in both the horizontal and vertical directions, a rounding procedure is performed on N MV components for the prediction direction X, N being an integer ranging from 0 to 2; otherwise, no rounding procedure is performed.
17. The method of claim 14, wherein if N1 MV components have fractional precision, a rounding procedure is performed on M MV components of the N1 MV components, wherein N1 and M are integers, and 0<=M<=N1.
18. The method of claim 17, wherein N1 and M are different for bi-directional prediction blocks and uni-directional prediction blocks.
19. The method of claim 18, wherein, for bi-prediction blocks,
N1 is equal to 4 and M is equal to 4, or
N1 is equal to 4 and M is equal to 3, or
N1 is equal to 4 and M is equal to 2, or
N1 is equal to 4 and M is equal to 1, or
N1 is equal to 3 and M is equal to 3, or
N1 is equal to 3 and M is equal to 2, or
N1 is equal to 3 and M is equal to 1, or
N1 is equal to 2 and M is equal to 2, or
N1 is equal to 2 and M is equal to 1, or
N1 is equal to 1 and M is equal to 1.
20. The method of claim 18, wherein, for a unidirectional prediction block,
N1 is equal to 2 and M is equal to 2, or
N1 is equal to 2 and M is equal to 1, or
N1 is equal to 1 and M is equal to 1.
21. The method of claim 17, wherein N1 and M are different for different size parameters, wherein the size parameters include at least one of the width, the height, the ratio of width to height, or the product width x height of the first block.
22. The method of claim 17, wherein K of the M MV components are rounded to integer pixel precision and M-K MV components are rounded to half pixel precision, where K is an integer ranging from 0 to M-1.
23. The method of claim 1 or 2, wherein the characteristic of the first block comprises a color component of the first block.
24. The method of claim 23, wherein whether to perform a rounding procedure on the MV is different for different color components of the first block.
25. The method of claim 24, wherein the color components comprise Y, Cb, and Cr.
26. The method of claim 1 or 2, wherein the characteristics of the first block include a color format of the first block.
27. The method of claim 26, wherein whether to perform a rounding procedure on the MV depends on a color format of the first block.
28. The method of claim 27, wherein the color format comprises 4:2:0, 4:2:2, or 4:4:4.
29. The method according to claim 1 or 2, wherein whether and/or how to perform a rounding procedure on the MV depends on the characteristics of the first block.
30. The method of claim 29, wherein one or more MV components of the 4 x 16 or/and 16 x 4 bi-predictive luma block or/and uni-predictive luma block are rounded to half-pixel precision.
31. The method of claim 29, wherein one or more MV components of the 4 x 16 or/and 16 x 4 bi-predictive luma block or/and uni-predictive luma block are rounded to integer-pixel precision.
32. The method of claim 29, wherein one or more MV components of the 4 x 4 uni-directional predicted luma block or/and the bi-directional predicted luma block are rounded to integer-pixel precision.
33. The method of claim 29, wherein one or more MV components of the 4 x 8 or/and 8 x 4 bi-predictive luma block or/and uni-predictive luma block are rounded to integer-pixel precision.
34. A method according to claim 3, wherein performing a rounding procedure on the motion vector MV of the first block based on the characteristics of the first block comprises:
determining whether at least one MV of the first block is of fractional precision when the size parameter of the first block satisfies a predetermined rule; and
In response to determining that at least one MV of the first block is of fractional precision, performing a rounding process on the at least one MV to generate a rounded MV having integer precision.
35. A method according to claim 3, wherein the bitstream of the first block follows a rule that depends on a size parameter of the first block, wherein only integer-pixel MVs are allowed for bi-predicted codec blocks.
36. A method according to claim 3, wherein the size parameter of the first block is 4 x 16, 16 x 4, 4 x 8, 8 x 4 or 4 x 4.
37. The method of claim 1 or 2, wherein performing the conversion by using the rounded MVs comprises:
motion compensation is performed on the first block by using the rounded MV.
38. The method of claim 1 or 2, wherein the converting generates a first block of video from the bitstream.
39. The method of claim 1 or 2, wherein the converting generates the bitstream from a first block of video.
40. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of claims 1-39.
41. A non-transitory computer readable medium storing code which, when executed by a processor, causes the processor to perform the method of any one of claims 1 to 39.
Applications Claiming Priority (7)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN2019071396 | 2019-01-11 | | |
| CNPCT/CN2019/071396 | 2019-01-11 | | |
| CNPCT/CN2019/071503 | 2019-01-12 | | |
| CN2019071503 | 2019-01-12 | | |
| CN2019077171 | 2019-03-06 | | |
| CNPCT/CN2019/077171 | 2019-03-06 | | |
| PCT/CN2020/071770 (WO2020143830A1) | 2019-01-11 | 2020-01-13 | Integer MV motion compensation |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN113302938A (en) | 2021-08-24 |
| CN113302938B (en) | 2024-08-16 |