EP3566441B1 - Motion vector reconstructions for bi-directional optical flow (bio) - Google Patents
- Publication number
- EP3566441B1 (application EP18701947.6A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- block
- sub
- predictive
- video data
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
- H04N19/513—Processing of motion vectors
- H04N19/537—Motion estimation other than block-based
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/145—Movement estimation
Definitions
- This disclosure relates to video coding.
- Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones, so-called "smart phones," video teleconferencing devices, video streaming devices, and the like.
- Digital video devices implement video coding techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), ITU-T H.265/High Efficiency Video Coding (HEVC), and extensions of such standards.
- The video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video coding techniques.
- Video coding techniques include spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences.
- A video slice (e.g., a video frame or a portion of a video frame) may be partitioned into video blocks, which may also be referred to as treeblocks, coding units (CUs), and/or coding nodes.
- Video blocks in an intra-coded (I) slice of a picture may be encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture.
- Video blocks in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture or temporal prediction with respect to reference samples in other reference pictures.
- Pictures may be referred to as frames, and reference pictures may be referred to as reference frames.
- Residual data represents pixel differences between the original block to be coded and the predictive block.
- An inter-coded block is encoded according to a motion vector that points to a block of reference samples forming the predictive block, and residual data indicating the difference between the coded block and the predictive block.
- An intra-coded block is encoded according to an intra-coding mode and the residual data.
- The residual data may be transformed from the pixel domain to a transform domain, resulting in residual transform coefficients, which then may be quantized.
- The quantized transform coefficients, initially arranged in a two-dimensional array, may be scanned in order to produce a one-dimensional vector of transform coefficients, and entropy coding may be applied to achieve even more compression.
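The scanning step above can be illustrated with a small sketch. The diagonal (zig-zag) pattern below is one common scan order; it is illustrative only, since real codecs define fixed scan tables per block size and mode.

```python
def zigzag_scan(block):
    """Scan an NxN block along anti-diagonals, alternating direction,
    to turn a 2-D array of quantized coefficients into a 1-D vector."""
    n = len(block)
    out = []
    for s in range(2 * n - 1):  # s = row + col along each anti-diagonal
        coords = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        if s % 2 == 0:          # reverse every other diagonal
            coords.reverse()
        out.extend(block[r][c] for r, c in coords)
    return out

# A typical quantized block: large values cluster at the top-left
# (low frequencies), so the scan groups the trailing zeros together,
# which helps the subsequent entropy coding.
quantized = [
    [9, 3, 1, 0],
    [4, 2, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(zigzag_scan(quantized))  # -> [9, 3, 4, 1, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```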
- This disclosure describes techniques related to bi-directional optical flow (BIO) in video coding.
- The techniques of this disclosure may be used in conjunction with existing video codecs, such as High Efficiency Video Coding (HEVC), or may be an efficient coding tool for future video coding standards.
- BIO may be applied during motion compensation.
- BIO is used to modify predictive sample values for bi-predicted inter coded blocks based on an optical flow trajectory in order to determine better predictive blocks, e.g., predictive blocks that more closely match an original block of video data.
- The various techniques of this disclosure may be applied, alone or in any combination, to determine when and whether to perform BIO when predicting blocks of video data, e.g., during motion compensation.
- In this disclosure, "video coding" generically refers to either video encoding or video decoding.
- The term "video coder" may generically refer to a video encoder or a video decoder.
- Certain techniques described in this disclosure with respect to video decoding may also apply to video encoding, and vice versa. For example, video encoders and video decoders are often configured to perform the same process, or reciprocal processes.
- Video encoders typically perform video decoding as part of the process of determining how to encode video data. Therefore, unless explicitly stated to the contrary, it should not be assumed that a technique described with respect to video decoding cannot also be performed by a video encoder, or vice versa.
- This disclosure may also use terms such as current layer, current block, current picture, current slice, etc.
- The term "current" is intended to identify a block, picture, slice, etc. that is currently being coded, as opposed to, for example, previously coded blocks, pictures, and slices, or blocks, pictures, and slices yet to be coded.
- A picture is divided into blocks, each of which may be predictively coded.
- A video coder may predict a current block using intra-prediction techniques (using data from the picture including the current block), inter-prediction techniques (using data from a previously coded picture relative to the picture including the current block), or other techniques such as intra block copy, palette mode, dictionary mode, etc.
- Inter-prediction includes both uni-directional prediction and bi-directional prediction.
- A video coder may determine a set of motion information.
- The set of motion information may contain motion information for forward and backward prediction directions.
- The forward and backward prediction directions are the two prediction directions of a bi-directional prediction mode.
- The terms "forward" and "backward" do not necessarily have a geometric meaning. Instead, the terms generally correspond to whether the reference pictures are to be displayed before ("backward") or after ("forward") the current picture.
- The "forward" and "backward" prediction directions may correspond to reference picture list 0 (RefPicList0) and reference picture list 1 (RefPicList1) of a current picture, respectively.
- A motion vector, together with a corresponding reference index, may be used in a decoding process.
- Such a motion vector with an associated reference index is denoted as a uni-predictive set of motion information.
- For each prediction direction, the motion information contains a reference index and a motion vector.
- For simplicity, a motion vector itself may be referred to in a way that assumes it has an associated reference index.
- A reference index may be used to identify a reference picture in the current reference picture list (RefPicList0 or RefPicList1).
- A motion vector has a horizontal (x) component and a vertical (y) component.
- The horizontal component indicates a horizontal displacement within a reference picture, relative to the position of a current block in a current picture, needed to locate the x-coordinate of a reference block, and the vertical component indicates a vertical displacement within the reference picture, relative to the position of the current block, needed to locate the y-coordinate of the reference block.
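The displacement described above reduces to simple coordinate arithmetic. The sketch below assumes integer-pel motion for clarity; real codecs also use fractional (e.g., quarter-pel) precision with interpolation filters.

```python
def locate_reference_block(cur_x, cur_y, mv_x, mv_y):
    """Return the top-left (x, y) of the reference block in the
    reference picture, given the current block's position and its
    motion vector's horizontal and vertical components."""
    return cur_x + mv_x, cur_y + mv_y

# A block at (64, 32) with motion vector (-3, 5) is predicted from the
# reference block whose top-left corner is at (61, 37).
print(locate_reference_block(64, 32, -3, 5))  # -> (61, 37)
```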
- Picture order count (POC) values are widely used in video coding standards to identify the display order of pictures. Although there are cases in which two pictures within one bitstream may have the same POC value, this typically does not happen within a single coded video sequence. Thus, POC values of pictures within a coded video sequence are generally unique and can uniquely identify the corresponding pictures. When multiple coded video sequences are present in a bitstream, pictures having the same POC value may be closer to each other in terms of decoding order. POC values of pictures are typically used for reference picture list construction, derivation of reference picture sets as in HEVC, and motion vector scaling.
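The motion vector scaling mentioned above is driven by the ratio of temporal distances measured in POC differences. The sketch below shows the idea in floating point; this is an illustration of the principle, not HEVC's normative procedure, which uses an equivalent fixed-point computation with clipping.

```python
def scale_mv(mv, poc_cur, poc_ref_src, poc_ref_dst):
    """Scale a motion vector component mv (pointing from the current
    picture toward the picture with POC poc_ref_src) so that it points
    toward the picture with POC poc_ref_dst instead, proportionally to
    the temporal (POC) distances."""
    td = poc_cur - poc_ref_src   # temporal distance of the original MV
    tb = poc_cur - poc_ref_dst   # temporal distance of the target reference
    return round(mv * tb / td)

# An MV component of 8 toward a reference 2 pictures away scales to 16
# when retargeted to a reference 4 pictures away.
print(scale_mv(8, poc_cur=10, poc_ref_src=8, poc_ref_dst=6))  # -> 16
```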
- It is assumed that I_t0 is on the motion trajectory of I_t; that is, the motion from I_t0 to I_t is considered in the formula, and its components may be represented by V_x0 and V_y0.
- The gradient difference between the two reference blocks may be written as ∇G_x = G_x0 − G_x1.
- With V_x and V_y, the final prediction of the block is calculated with equation (E). The pair (V_x, V_y) is called the "BIO motion" for convenience.
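This excerpt does not reproduce equation (E) itself. For orientation, the per-sample BIO prediction in the JEM description takes the following general form; the temporal-distance symbols τ₀ and τ₁ and the sign conventions here follow the common JEM description and are assumptions, not this patent's own notation:

```latex
\mathrm{pred}_{\mathrm{BIO}} = \frac{1}{2}\left( I^{(0)} + I^{(1)}
  + \frac{V_x}{2}\left( \tau_1 \frac{\partial I^{(1)}}{\partial x} - \tau_0 \frac{\partial I^{(0)}}{\partial x} \right)
  + \frac{V_y}{2}\left( \tau_1 \frac{\partial I^{(1)}}{\partial y} - \tau_0 \frac{\partial I^{(0)}}{\partial y} \right) \right)
```

where I^(0) and I^(1) are the two motion-compensated predictions and (V_x, V_y) is the BIO motion.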
- A video coder performs BIO during motion compensation. That is, after the video coder determines a motion vector for a current block, the video coder produces a predicted block for the current block using motion compensation with respect to the motion vector.
- The motion vector identifies the location of a reference block with respect to the current block in a reference picture.
- BIO modifies the motion vector on a per-pixel basis for the current block. That is, rather than retrieving each pixel of the reference block as a block unit, according to BIO, the video coder determines per-pixel modifications to the motion vector for the current block, and constructs the reference block such that the reference block includes reference pixels identified by the motion vector and the per-pixel modification for the corresponding pixel of the current block.
- BIO may be used to produce a more accurate reference block for the current block.
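The per-pixel refinement described above can be sketched as follows. The variable names and the exact weighting are illustrative, not the codec's normative formula: the plain bi-predictive average is adjusted by a term built from the BIO motion and the gradient differences of the two predictions.

```python
def bio_predict(i0, i1, gx0, gx1, gy0, gy1, vx, vy):
    """Refine the bi-predictive average of reference samples i0 and i1
    for one pixel, using the per-pixel BIO motion (vx, vy) and the
    horizontal/vertical gradients (gx0, gx1, gy0, gy1) of the two
    motion-compensated predictions."""
    avg = (i0 + i1) / 2.0
    correction = (vx * (gx0 - gx1) + vy * (gy0 - gy1)) / 2.0
    return avg + correction

# With zero BIO motion, the result reduces to the plain bi-predictive
# average of the two reference samples.
print(bio_predict(100, 104, gx0=2, gx1=1, gy0=0, gy1=0, vx=0, vy=0))  # -> 102.0
```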
- FIG. 1 is a block diagram illustrating an example video encoding and decoding system 10 that may utilize techniques for bi-directional optical flow.
- System 10 includes a source device 12 that provides encoded video data to be decoded at a later time by a destination device 14.
- In particular, source device 12 provides the video data to destination device 14 via a computer-readable medium 16.
- Source device 12 and destination device 14 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, so-called "smart" pads, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, or the like.
- In some cases, source device 12 and destination device 14 may be equipped for wireless communication.
- Computer-readable medium 16 may comprise any type of medium or device capable of moving the encoded video data from source device 12 to destination device 14.
- Computer-readable medium 16 may comprise a communication medium to enable source device 12 to transmit encoded video data directly to destination device 14 in real-time.
- The encoded video data may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to destination device 14.
- The communication medium may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines.
- The communication medium may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet.
- The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 12 to destination device 14.
- Encoded data may be output from output interface 22 to a storage device.
- Similarly, encoded data may be accessed from the storage device by the input interface.
- The storage device may include any of a variety of distributed or locally accessed data storage media, such as a hard drive, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data.
- The storage device may correspond to a file server or another intermediate storage device that may store the encoded video generated by source device 12. Destination device 14 may access stored video data from the storage device via streaming or download.
- The file server may be any type of server capable of storing encoded video data and transmitting that encoded video data to destination device 14.
- Example file servers include a web server (e.g., for a website), an FTP server, network attached storage (NAS) devices, or a local disk drive.
- Destination device 14 may access the encoded video data through any standard data connection, including an Internet connection. This may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both that is suitable for accessing encoded video data stored on a file server.
- The transmission of encoded video data from the storage device may be a streaming transmission, a download transmission, or a combination thereof.
- System 10 may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
- Source device 12 includes video source 18, video encoder 20, and output interface 22.
- Destination device 14 includes input interface 28, video decoder 30, and display device 32.
- Video encoder 20 of source device 12 may be configured to apply the techniques for bi-directional optical flow.
- In other examples, a source device and a destination device may include other components or arrangements.
- For example, source device 12 may receive video data from an external video source 18, such as an external camera.
- Likewise, destination device 14 may interface with an external display device, rather than including an integrated display device.
- The illustrated system 10 of FIG. 1 is merely one example.
- Techniques for bi-directional optical flow may be performed by any digital video encoding and/or decoding device.
- Although the techniques of this disclosure are generally performed by a video encoding device, the techniques may also be performed by a video encoder/decoder, typically referred to as a "CODEC."
- The techniques of this disclosure may also be performed by a video preprocessor.
- Source device 12 and destination device 14 are merely examples of such coding devices in which source device 12 generates coded video data for transmission to destination device 14.
- In some examples, devices 12, 14 may operate in a substantially symmetrical manner such that each of devices 12, 14 includes video encoding and decoding components.
- Hence, system 10 may support one-way or two-way video transmission between video devices 12, 14, e.g., for video streaming, video playback, video broadcasting, or video telephony.
- Video source 18 of source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, and/or a video feed interface to receive video from a video content provider.
- As a further alternative, video source 18 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video.
- In some cases, source device 12 and destination device 14 may form so-called camera phones or video phones.
- The techniques described in this disclosure may be applicable to video coding in general, and may be applied to wireless and/or wired applications.
- In each case, the captured, pre-captured, or computer-generated video may be encoded by video encoder 20.
- The encoded video information may then be output by output interface 22 onto computer-readable medium 16.
- Computer-readable medium 16 may include transient media, such as a wireless broadcast or wired network transmission, or storage media (that is, non-transitory storage media), such as a hard disk, flash drive, compact disc, digital video disc, Blu-ray disc, or other computer-readable media.
- For example, a network server (not shown) may receive encoded video data from source device 12 and provide the encoded video data to destination device 14, e.g., via network transmission.
- Similarly, a computing device of a medium production facility, such as a disc stamping facility, may receive encoded video data from source device 12 and produce a disc containing the encoded video data. Therefore, computer-readable medium 16 may be understood to include one or more computer-readable media of various forms, in various examples.
- Input interface 28 of destination device 14 receives information from computer-readable medium 16.
- The information of computer-readable medium 16 may include syntax information defined by video encoder 20, which is also used by video decoder 30, that includes syntax elements that describe characteristics and/or processing of the video data.
- Display device 32 displays the decoded video data to a user, and may comprise any of a variety of display devices such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.
- Video encoder 20 and video decoder 30 may operate according to one or more video coding standards, such as ITU-T H.264/AVC (Advanced Video Coding) or High Efficiency Video Coding (HEVC), also referred to as ITU-T H.265.
- H.264 is described in International Telecommunication Union, "Advanced video coding for generic audiovisual services," SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services - Coding of moving video, H.264, June 2011.
- H.265 is described in International Telecommunication Union, "High efficiency video coding," SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services - Coding of moving video, April 2015.
- The techniques of this disclosure may also be applied to any other previous or future video coding standards as an efficient coding tool.
- The Video Coding Experts Group (VCEG) started a new research project targeting the next generation of video coding standards.
- The reference software for this activity is called HM-KTA.
- ITU-T VCEG (Q6/16) and ISO/IEC MPEG (JTC 1/SC 29/WG 11) formed the Joint Video Exploration Team (JVET) to study coding technologies beyond HEVC.
- The latest version of the reference software, Joint Exploration Model 3 (JEM 3), can be downloaded from: https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/tags/HM-16.6-JEM-4.0/ An algorithm description of Joint Exploration Test Model 3 (JEM3) can be found in JVET-D1001.
- Certain video coding techniques, such as those of H.264 and HEVC, that are related to the techniques of this disclosure are described in this disclosure. Certain techniques of this disclosure may be described with reference to H.264 and/or HEVC to aid in understanding, but the techniques described are not necessarily limited to H.264 or HEVC and can be used in conjunction with other coding standards and other coding tools.
- Video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams.
- MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).
- A video sequence typically includes a series of pictures. Pictures may also be referred to as "frames."
- A picture may include three sample arrays, denoted S_L, S_Cb, and S_Cr.
- S_L is a two-dimensional array (i.e., a block) of luma samples.
- S_Cb is a two-dimensional array of Cb chrominance samples.
- S_Cr is a two-dimensional array of Cr chrominance samples.
- Chrominance samples may also be referred to herein as "chroma" samples.
- A picture may be monochrome and may only include an array of luma samples.
- To generate an encoded representation of a picture, video encoder 20 may generate a set of coding tree units (CTUs).
- Each of the CTUs may comprise a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples, and syntax structures used to code the samples of the coding tree blocks.
- In monochrome pictures or pictures having three separate color planes, a CTU may comprise a single coding tree block and syntax structures used to code the samples of the coding tree block.
- A coding tree block may be an NxN block of samples.
- A CTU may also be referred to as a "tree block" or a "largest coding unit" (LCU).
- The CTUs of HEVC may be broadly analogous to the macroblocks of other standards, such as H.264/AVC.
- However, a CTU is not necessarily limited to a particular size and may include one or more coding units (CUs).
- A slice may include an integer number of CTUs ordered consecutively in a raster scan order.
- a CTB contains a quad-tree the nodes of which are coding units.
- the size of a CTB can be ranges from 16x16 to 64x64 in the HEVC main profile (although technically 8x8 CTB sizes can be supported).
- a coding unit (CU) could be the same size of a CTB although and as small as 8x8.
- Each coding unit is coded with one mode.
- the CU may be further partitioned into 2 or 4 prediction units (PUs), or become just one PU when further partitioning does not apply.
- when two PUs are present in one CU, they can be half-size rectangles, or two rectangles with 1/4 and 3/4 the size of the CU.
- video encoder 20 may recursively perform quad-tree partitioning on the coding tree blocks of a CTU to divide the coding tree blocks into coding blocks, hence the name "coding tree units."
- a coding block may be an NxN block of samples.
- a CU may comprise a coding block of luma samples and two corresponding coding blocks of chroma samples of a picture that has a luma sample array, a Cb sample array, and a Cr sample array, and syntax structures used to code the samples of the coding blocks.
- a CU may comprise a single coding block and syntax structures used to code the samples of the coding block.
- Video encoder 20 may partition a coding block of a CU into one or more prediction blocks.
- a prediction block is a rectangular (i.e., square or non-square) block of samples on which the same prediction is applied.
- a prediction unit (PU) of a CU may comprise a prediction block of luma samples, two corresponding prediction blocks of chroma samples, and syntax structures used to predict the prediction blocks. In monochrome pictures or pictures having three separate color planes, a PU may comprise a single prediction block and syntax structures used to predict the prediction block.
- Video encoder 20 may generate predictive luma, Cb, and Cr blocks for luma, Cb, and Cr prediction blocks of each PU of the CU.
- Video encoder 20 may use intra prediction or inter prediction to generate the predictive blocks for a PU. If video encoder 20 uses intra prediction to generate the predictive blocks of a PU, video encoder 20 may generate the predictive blocks of the PU based on decoded samples of the picture associated with the PU. If video encoder 20 uses inter prediction to generate the predictive blocks of a PU, video encoder 20 may generate the predictive blocks of the PU based on decoded samples of one or more pictures other than the picture associated with the PU. When the CU is inter coded, one set of motion information may be present for each PU. In addition, each PU may be coded with a unique inter-prediction mode to derive the set of motion information.
- video encoder 20 may generate a luma residual block for the CU.
- Each sample in the CU's luma residual block indicates a difference between a luma sample in one of the CU's predictive luma blocks and a corresponding sample in the CU's original luma coding block.
- video encoder 20 may generate a Cb residual block for the CU.
- Each sample in the CU's Cb residual block may indicate a difference between a Cb sample in one of the CU's predictive Cb blocks and a corresponding sample in the CU's original Cb coding block.
- Video encoder 20 may also generate a Cr residual block for the CU.
- Each sample in the CU's Cr residual block may indicate a difference between a Cr sample in one of the CU's predictive Cr blocks and a corresponding sample in the CU's original Cr coding block.
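- the residual formation described above can be sketched as follows; this is an illustrative Python sketch (function and variable names are hypothetical, not from the patent), showing the per-sample difference between an original coding block and a predictive block for one component:

```python
def residual_block(original, predictive):
    # Per-sample difference for one color component (luma, Cb, or Cr):
    # each residual sample is the original sample minus the predictive sample.
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predictive)]

orig = [[120, 130], [110, 100]]
pred = [[118, 133], [110, 96]]
resid = residual_block(orig, pred)  # [[2, -3], [0, 4]]
```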
- video encoder 20 may use quad-tree partitioning to decompose the luma, Cb, and Cr residual blocks of a CU into one or more luma, Cb, and Cr transform blocks.
- a transform block is a rectangular (e.g., square or non-square) block of samples on which the same transform is applied.
- a transform unit (TU) of a CU may comprise a transform block of luma samples, two corresponding transform blocks of chroma samples, and syntax structures used to transform the transform block samples.
- each TU of a CU may be associated with a luma transform block, a Cb transform block, and a Cr transform block.
- the luma transform block associated with the TU may be a sub-block of the CU's luma residual block.
- the Cb transform block may be a sub-block of the CU's Cb residual block.
- the Cr transform block may be a sub-block of the CU's Cr residual block.
- a TU may comprise a single transform block and syntax structures used to transform the samples of the transform block.
- Video encoder 20 may apply one or more transforms to a luma transform block of a TU to generate a luma coefficient block for the TU.
- a coefficient block may be a two-dimensional array of transform coefficients.
- a transform coefficient may be a scalar quantity.
- Video encoder 20 may apply one or more transforms to a Cb transform block of a TU to generate a Cb coefficient block for the TU.
- Video encoder 20 may apply one or more transforms to a Cr transform block of a TU to generate a Cr coefficient block for the TU.
- video encoder 20 may quantize the coefficient block. Quantization generally refers to a process in which transform coefficients are quantized to possibly reduce the amount of data used to represent the transform coefficients, providing further compression.
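- as a hedged illustration of the quantization step, the sketch below uses a simple uniform scalar quantizer with a rounding offset; actual HEVC quantization uses integer arithmetic with QP-derived scaling, so treat the names and the formula as illustrative only:

```python
def quantize(coeff, qstep, offset_frac=0.5):
    # Uniform scalar quantization: divide by the step size and round,
    # preserving the sign of the transform coefficient.
    sign = -1 if coeff < 0 else 1
    return sign * int(abs(coeff) / qstep + offset_frac)

def dequantize(level, qstep):
    # Inverse quantization reconstructs only an approximation of the
    # original coefficient, which is what makes the step lossy.
    return level * qstep
```

for example, quantize(37, 10) yields 4, and dequantize(4, 10) reconstructs 40 rather than 37.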
- video encoder 20 may entropy encode syntax elements indicating the quantized transform coefficients. For example, video encoder 20 may perform Context-Adaptive Binary Arithmetic Coding (CABAC) on the syntax elements indicating the quantized transform coefficients.
- Video encoder 20 may output a bitstream that includes a sequence of bits that forms a representation of coded pictures and associated data.
- the bitstream may comprise a sequence of NAL units.
- a NAL unit is a syntax structure containing an indication of the type of data in the NAL unit and bytes containing that data in the form of a RBSP interspersed as necessary with emulation prevention bits.
- Each of the NAL units includes a NAL unit header and encapsulates a RBSP.
- the NAL unit header may include a syntax element that indicates a NAL unit type code.
- the NAL unit type code specified by the NAL unit header of a NAL unit indicates the type of the NAL unit.
- a RBSP may be a syntax structure containing an integer number of bytes that is encapsulated within a NAL unit. In some instances, an RBSP includes zero bits.
- NAL units may encapsulate different types of RBSPs. For example, a first type of NAL unit may encapsulate an RBSP for a PPS, a second type of NAL unit may encapsulate an RBSP for a coded slice, a third type of NAL unit may encapsulate an RBSP for SEI messages, and so on.
- NAL units that encapsulate RBSPs for video coding data (as opposed to RBSPs for parameter sets and SEI messages) may be referred to as VCL NAL units.
- Video decoder 30 may receive a bitstream generated by video encoder 20.
- video decoder 30 may parse the bitstream to obtain syntax elements from the bitstream.
- Video decoder 30 may reconstruct the pictures of the video data based at least in part on the syntax elements obtained from the bitstream. The process to reconstruct the video data may be generally reciprocal to the process performed by video encoder 20.
- video decoder 30 may inverse quantize coefficient blocks associated with TUs of a current CU.
- Video decoder 30 may perform inverse transforms on the coefficient blocks to reconstruct transform blocks associated with the TUs of the current CU.
- Video decoder 30 may reconstruct the coding blocks of the current CU by adding the samples of the predictive blocks for PUs of the current CU to corresponding samples of the transform blocks of the TUs of the current CU. By reconstructing the coding blocks for each CU of a picture, video decoder 30 may reconstruct the picture.
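- the reciprocal reconstruction at the decoder can be sketched as below (illustrative Python; names are hypothetical): predictive samples are added to reconstructed residual samples and clipped to the valid sample range:

```python
def reconstruct(pred_block, resid_block, bit_depth=8):
    # Add residual samples to predictive samples and clip to the
    # valid sample range for the given bit depth.
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), max_val)
             for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred_block, resid_block)]
```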
- video encoder 20 and/or video decoder 30 may further perform BIO techniques during motion compensation as discussed in greater detail below.
- Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable encoder or decoder circuitry, as applicable, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic circuitry, software, hardware, firmware or any combinations thereof.
- Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined video encoder/decoder (CODEC).
- a device including video encoder 20 and/or video decoder 30 may comprise an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone.
- FIG. 2 is a conceptual diagram illustrating an example of unilateral motion estimation (ME) as a block-matching algorithm (BMA) performed for motion compensated frame-rate up-conversion (MC-FRUC).
- a video coder such as video encoder 20 or video decoder 30 performs unilateral ME to obtain motion vectors (MVs), such as MV 112, by searching for the best matching block (e.g., reference block 108) from reference frame 102 for current block 106 of current frame 100. Then, the video coder interpolates an interpolated block 110 along the motion trajectory of motion vector 112 in interpolated frame 104. That is, in the example of FIG. 2 , motion vector 112 passes through midpoints of current block 106, reference block 108, and interpolated block 110.
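- the unilateral ME described above can be illustrated with a brute-force block-matching sketch (Python, hypothetical names); it returns the MV minimizing the sum of absolute differences (SAD) over a small search range:

```python
def sad(block_a, block_b):
    # Sum of absolute differences between two equally sized blocks.
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def best_match(cur_frame, ref_frame, bx, by, bsize, search_range):
    # Exhaustive block matching: test every integer displacement in the
    # search window and keep the one with the lowest SAD.
    cur = [row[bx:bx + bsize] for row in cur_frame[by:by + bsize]]
    best, best_cost = (0, 0), None
    h, w = len(ref_frame), len(ref_frame[0])
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + bsize > w or y + bsize > h:
                continue  # candidate falls outside the reference frame
            cand = [row[x:x + bsize] for row in ref_frame[y:y + bsize]]
            cost = sad(cur, cand)
            if best_cost is None or cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best
```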
- interpolated block 110 in interpolated frame 104 need not fully belong to a coded block. Consequently, overlapped regions of the blocks and un-filled regions (holes) may occur in interpolated frame 104.
- FRUC algorithms may involve averaging and overwriting the overlapped pixels.
- holes may be covered by the pixel values from a reference or a current frame.
- these algorithms may result in blocking artifacts and blurring.
- motion field segmentation, successive extrapolation using the discrete Hartley transform, and image inpainting may be used to handle holes and overlaps without increasing blocking artifacts and blurring.
- FIG. 3 is a conceptual diagram illustrating an example of bilateral ME as a BMA performed for MC-FRUC.
- Bilateral ME is another solution (in MC-FRUC) that can be used to avoid the problems caused by overlaps and holes.
- a video coder (such as video encoder 20 and/or video decoder 30) performing bilateral ME obtains MVs 132, 134 passing through interpolated block 130 of interpolated frame 124 (which is intermediate to current frame 120 and reference frame 122) using temporal symmetry between current block 126 of current frame 120 and reference block 128 of reference frame 122. As a result, the video coder does not generate overlaps and holes in interpolated frame 124.
- current block 126 is a block that the video coder processes in a certain order, e.g., as in the case of video coding.
- a sequence of such blocks would cover the whole intermediate picture without overlap.
- blocks can be processed in the decoding order. Therefore, such a method may be more suitable if FRUC ideas can be considered in a video coding framework.
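- bilateral ME can be sketched as below (an illustrative Python sketch with hypothetical names): each candidate MV is tested by comparing the block displaced by +mv/2 in the current frame against the block displaced by -mv/2 in the reference frame, exploiting the temporal symmetry around the interpolated block so that no holes or overlaps arise; only even MV components are searched here so the half displacements stay integer, whereas a real codec would interpolate sub-pel positions:

```python
def bilateral_best_mv(cur, ref, bx, by, bsize, search_range):
    # Assumes the search window stays inside both frames.
    def sad(y0, x0, y1, x1):
        return sum(abs(cur[y0 + i][x0 + j] - ref[y1 + i][x1 + j])
                   for i in range(bsize) for j in range(bsize))
    best, best_cost = (0, 0), None
    for dy in range(-search_range, search_range + 1, 2):
        for dx in range(-search_range, search_range + 1, 2):
            # The block at (bx, by) in the interpolated frame lies on the
            # midpoint of the trajectory: +mv/2 in current, -mv/2 in reference.
            cost = sad(by + dy // 2, bx + dx // 2,
                       by - dy // 2, bx - dx // 2)
            if best_cost is None or cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best
```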
- HEVC provides two inter prediction modes for a PU: merge mode and advanced motion vector prediction (AMVP) mode.
- the MV candidate list contains up to five candidates for the merge mode and only two candidates for the AMVP mode.
- Other coding standards may include more or fewer candidates.
- a merge candidate may contain a set of motion information, e.g., motion vectors corresponding to both reference picture lists (list 0 and list 1) and the reference indices.
- a video decoder receives a merge candidate identified by a merge index, and the video decoder predicts a current PU using the identified reference picture(s) and motion vector(s).
- in AMVP mode, a reference index needs to be explicitly signaled, together with an MV predictor (MVP) index into the MV candidate list, since an AMVP candidate contains only a motion vector.
- a merge candidate corresponds to a full set of motion information while an AMVP candidate contains just one motion vector for a specific prediction direction and reference index.
- the candidates for both modes are derived similarly from the same spatial and temporal neighboring blocks.
- FIG. 4A shows spatial neighboring MV candidates for merge mode
- FIG. 4B shows spatial neighboring MV candidates for AMVP modes.
- Spatial MV candidates are derived from the neighboring blocks shown in FIGS. 4A and 4B , for a specific PU (PU 0 ), although the methods generating the candidates from the blocks differ for merge and AMVP modes.
- up to four spatial MV candidates can be derived with the orders shown in FIG. 4A .
- the ordering is as follows: left (0), above (1), above right (2), below left (3), and above left (4), as shown in FIG. 4A . If all of spatial MV candidates 0-3 are available and unique, then the video coder may not include motion information for the above left block in the candidate list. If, however, one or more of spatial MV candidates 0-3 are not available or not unique, then the video coder may include motion information for the above left block in the candidate list.
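- the availability/uniqueness rule above can be sketched as follows (illustrative Python; motion information is modeled as simple tuples, which is an assumption):

```python
def spatial_merge_candidates(neighbors):
    # neighbors maps the FIG. 4A index (0 left, 1 above, 2 above-right,
    # 3 below-left, 4 above-left) to motion info, or None if unavailable.
    cands = []
    for idx in (0, 1, 2, 3):
        mi = neighbors.get(idx)
        if mi is not None and mi not in cands:
            cands.append(mi)
    # Above-left (4) is considered only when one of candidates 0-3 was
    # unavailable or a duplicate.
    if len(cands) < 4:
        mi = neighbors.get(4)
        if mi is not None and mi not in cands:
            cands.append(mi)
    return cands
```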
- for AMVP mode, the neighboring blocks are divided into two groups: a left group consisting of blocks 0 and 1, and an above group consisting of blocks 2, 3, and 4, as shown in FIG. 4B.
- the potential candidate in a neighboring block that refers to the same reference picture as that indicated by the signaled reference index has the highest priority to be chosen as the final candidate of the group. It is possible that no neighboring block contains a motion vector pointing to that reference picture. Therefore, if such a candidate cannot be found, the first available candidate will be scaled to form the final candidate, so that the temporal distance differences can be compensated.
- FIG. 5A shows an example of a TMVP candidate
- FIG. 5B shows an example of MV scaling.
- a temporal motion vector predictor (TMVP) candidate, if enabled and available, is added into the MV candidate list after the spatial motion vector candidates.
- the process of motion vector derivation for a TMVP candidate is the same for both merge and AMVP modes; however, the target reference index for the TMVP candidate in merge mode is always set to 0.
- the primary block location for TMVP candidate derivation is the bottom-right block outside of the collocated PU, shown in FIG. 5A as block "T", to compensate for the bias toward the above and left blocks used to generate the spatial neighboring candidates. However, if that block is located outside of the current CTB row, or its motion information is not available, the block is substituted with a center block of the PU.
- the motion vector for the TMVP candidate is derived from the co-located PU of the co-located picture, indicated at the slice level.
- the motion vector for the co-located PU is called the collocated MV.
- the collocated MV needs to be scaled to compensate for the temporal distance differences, as shown in FIG. 5B.
- HEVC also utilizes motion vector scaling. It is assumed that the value of a motion vector is proportional to the distance between pictures in presentation time.
- a motion vector associates two pictures: the reference picture and the picture containing the motion vector (namely, the containing picture). When a motion vector is utilized to predict another motion vector, the distance between the containing picture and the reference picture is calculated based on picture order count (POC) values.
- for the motion vector to be predicted, both its associated containing picture and reference picture may be different. Therefore, a new distance (based on POC) is calculated, and the motion vector is scaled based on these two POC distances.
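- the POC-based scaling can be sketched with fixed-point arithmetic in the style of HEVC (a sketch, not the normative process; it assumes positive POC distances, and the names are illustrative):

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def scale_mv(mv, tb, td):
    # tb: POC distance between the current picture and the target
    #     reference picture; td: POC distance between the candidate's
    #     containing picture and its reference picture (assumed > 0 here).
    tx = (16384 + (abs(td) >> 1)) // td
    scale = clip3(-4096, 4095, (tb * tx + 32) >> 6)
    prod = scale * mv
    sign = -1 if prod < 0 else 1
    return clip3(-32768, 32767, sign * ((abs(prod) + 127) >> 8))
```

with equal distances the MV is returned essentially unchanged; halving the distance roughly halves the MV.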
- for a spatial neighboring candidate, the containing pictures for the two motion vectors are the same, while the reference pictures are different.
- motion vector scaling applies to both TMVP and AMVP for spatial and temporal neighboring candidates.
- HEVC also utilizes artificial motion vector candidate generation. If a motion vector candidate list is not complete, artificial motion vector candidates are generated and inserted at the end of the list until all available entries in the motion vector candidate list have a candidate.
- in merge mode, there are two types of artificial MV candidates: combined candidates, derived only for B-slices, and zero candidates, used only for AMVP if the first type does not provide enough artificial candidates.
- for each pair of candidates already in the candidate list that have the necessary motion information, bi-directional combined motion vector candidates are derived by combining the motion vector of the first candidate, referring to a picture in list 0, with the motion vector of the second candidate, referring to a picture in list 1.
- HEVC also utilizes a pruning process for candidate insertion. Candidates from different blocks may happen to be the same, which decreases the efficiency of a merge/AMVP candidate list. A pruning process may be applied to solve this problem. A pruning process compares one candidate against the others in the current candidate list to avoid inserting an identical candidate. To reduce complexity, only a limited number of pruning comparisons may be applied, instead of comparing each potential candidate with all the other existing ones. As one example, a video coder may apply a pruning process to spatial and temporal neighboring candidates but not to artificially generated candidates.
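- the pruning-plus-padding behavior can be sketched as follows (illustrative Python; the single (0, 0) zero candidate is a simplification, since HEVC varies reference indices for its zero candidates):

```python
def build_candidate_list(raw_cands, max_size):
    out = []
    for c in raw_cands:
        # Pruning: skip a candidate identical to one already in the list.
        if c not in out and len(out) < max_size:
            out.append(c)
    # Artificial candidates pad the list until it is full.
    while len(out) < max_size:
        out.append((0, 0))
    return out
```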
- FIG. 6 shows an example of optical flow trajectory.
- BIO utilizes pixel-wise motion refinement performed on top of block-wise motion compensation in the case of bi-prediction. As BIO compensates for the fine motion inside the block, enabling BIO effectively results in enlarging the block size for motion compensation.
- Sample-level motion refinement does not require exhaustive searching or signaling, but instead utilizes an explicit equation that gives a fine motion vector for each sample.
- ⁇ I ( k ) / ⁇ x, ⁇ I ( k ) / ⁇ y are horizontal and vertical components of the I ( k ) gradient respectively.
- τ0 and τ1 denote the distances to the reference frames, as shown in FIG. 6.
- the motion vector field ( v x , v y ), also referred to as the amount of BIO motion, is determined by minimizing the difference ⁇ between values in points A and B (intersection of motion trajectory and reference frame planes on FIG. 6 ).
- s1 = Σ_{(i',j') ∈ Ω} (τ1 ∂I(1)/∂x + τ0 ∂I(0)/∂x)^2;
- s3 = Σ_{(i',j') ∈ Ω} (I(1) − I(0)) (τ1 ∂I(1)/∂x + τ0 ∂I(0)/∂x);
- s2 = Σ_{(i',j') ∈ Ω} (τ1 ∂I(1)/∂x + τ0 ∂I(0)/∂x) (τ1 ∂I(1)/∂y + τ0 ∂I(0)/∂y);
- s5 = Σ_{(i',j') ∈ Ω} (τ1 ∂I(1)/∂y + τ0 ∂I(0)/∂y)^2;
- s6 = Σ_{(i',j') ∈ Ω} (I(1) − I(0)) (τ1 ∂I(1)/∂y + τ0 ∂I(0)/∂y)
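- given accumulated sums s1, s2, s3, s5, s6 over a window, (vx, vy) can be obtained by a regularized 2x2 least-squares solve and then clipped; the sketch below is a generic floating-point illustration, not the exact JEM fixed-point recipe, and the regularizer r is an assumed name:

```python
def bio_motion(s1, s2, s3, s5, s6, th_bio, r=1e-6):
    # Solve [s1 s2; s2 s5] [vx vy]^T = [s3 s6]^T with regularizer r,
    # then clip the refinement magnitude to +/- th_bio.
    det = (s1 + r) * (s5 + r) - s2 * s2
    if det == 0:
        return 0.0, 0.0
    vx = (s3 * (s5 + r) - s6 * s2) / det
    vy = (s6 * (s1 + r) - s3 * s2) / det
    clip = lambda v: max(-th_bio, min(th_bio, v))
    return clip(vx), clip(vy)
```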
- MV refinement of BIO might be unreliable due to noise or irregular motion. Therefore, in BIO, the magnitude of the MV refinement is clipped to a certain threshold, thBIO.
- the threshold value is determined based on whether all the reference pictures of the current picture are from one direction. If all the reference pictures of the current picture are from one direction, the value of the threshold may be set to 12 × 2^(14-d); otherwise, the threshold may be set to 12 × 2^(13-d).
- gradients for BIO are calculated concurrently with the motion compensation interpolation, using operations consistent with the HEVC motion compensation process (2D separable FIR).
- the input for this 2D separable FIR is the same reference frame samples as for the motion compensation process, and the fractional position (fracX, fracY) according to the fractional part of the block motion vector.
- for the vertical gradient, a gradient filter is first applied vertically using BIOfilterG corresponding to the fractional position fracY, with de-scaling shift d-8.
- then signal displacement is performed using BIOfilterS in the horizontal direction corresponding to the fractional position fracX, with de-scaling shift 18-d.
- the length of the interpolation filters for gradient calculation (BIOfilterG) and signal displacement (BIOfilterS) is shorter (6-tap) in order to maintain reasonable complexity.
- Table 1 shows the filters used for gradients calculation for different fractional positions of block motion vector in BIO.
- Table 2 shows the interpolation filters used for prediction signal generation in BIO.
- FIG. 7 shows an example of the gradient calculation for an 8x4 block.
- a video coder fetches the motion compensated predictors and calculates the HOR/VER gradients of all the pixels within the current block, as well as the outer two lines of pixels, because solving vx and vy for each pixel needs the HOR/VER gradient values and motion compensated predictors of the pixels within the window Ω centered on each pixel, as shown in equation (4).
- the size of this window is set to 5x5. Therefore, a video coder needs to fetch the motion compensated predictors and calculate the gradients for the outer two lines of pixels.
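- the extended fetch can be sketched as below; central differences stand in for the 6-tap BIOfilterG of Table 1, so this is an approximation with hypothetical names:

```python
def gradients_with_border(pred, h, w, border=2):
    # pred holds the motion compensated predictor for the block plus
    # `border` extra lines on each side: (h + 2*border) x (w + 2*border).
    gx = [[(pred[border + y][border + x + 1]
            - pred[border + y][border + x - 1]) / 2.0
           for x in range(w)] for y in range(h)]
    gy = [[(pred[border + y + 1][border + x]
            - pred[border + y - 1][border + x]) / 2.0
           for x in range(w)] for y in range(h)]
    return gx, gy
```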
- Table 1: Filters for gradient calculation in BIO
  Fractional pel position | Interpolation filter for gradient (BIOfilterG)
  0    | { 8, -39, -3, 46, -17, 5 }
  1/16 | { 8, -32, -13, 50, -18, 5 }
  1/8  | { 7, -27, -20, 54, -19, 5 }
  3/16 | { 6, -21, -29, 57, -18, 5 }
  1/4  | { 4, -17, -36, 60, -15, 4 }
  5/16 | { 3, -9, -44, 61, -15, 4 }
  3/8  | { 1, -4, -48, 61, -13, 3 }
  7/16 | { 0, 1, -54, 60, -9, 2 }
  1/2  | { -1, 4, -57, 57, -4, 1 }
- Table 2: Interpolation filters for prediction signal generation in BIO
  Fractional pel position | Interpolation filter for prediction signal (BIOfilterS)
  0    | { 0, 0, 64, 0, 0, 0 }
  1/16 | { 1, -3, 64, ...
- in JEM, BIO is applied to all bi-directionally predicted blocks when the two predictions are from different reference pictures.
- when LIC (local illumination compensation) is enabled for a CU, BIO is disabled.
- FIG. 8 shows an example of modified BIO for 8x4 block proposed in JVET-D0042.
- a proposal, JVET-D0042 (A. Alshina, E. Alshina, "AHG6: On BIO memory bandwidth", JVET-D0042, October 2016), was submitted to modify the BIO operations and reduce the memory access bandwidth.
- no motion compensated predictors and gradient values are needed for the pixels outside the current block.
- the solving of vx and vy for each pixel is modified to use the motion compensated predictors and the gradient values of all the pixels within the current block, as shown in FIG. 8.
- the square window Ω in equation (4) is modified to a window equal to the current block.
- a weighting factor w(i',j') is considered for deriving vx and vy.
- w(i',j') is a function of the position of the center pixel (i,j) and the positions of the pixels (i',j') within the window.
- s1 = Σ_{(i',j') ∈ Ω} w(i',j') (τ1 ∂I(1)/∂x + τ0 ∂I(0)/∂x)^2;
- s3 = Σ_{(i',j') ∈ Ω} w(i',j') (I(1) − I(0)) (τ1 ∂I(1)/∂x + τ0 ∂I(0)/∂x);
- s2 = Σ_{(i',j') ∈ Ω} w(i',j') (τ1 ∂I(1)/∂x + τ0 ∂I(0)/∂x) (τ1 ∂I(1)/∂y + τ0 ∂I(0)/∂y);
- s5 = Σ_{(i',j') ∈ Ω} w(i',j') (τ1 ∂I(1)/∂y + τ0 ∂I(0)/∂y)^2;
- s6 = Σ_{(i',j') ∈ Ω} w(i',j') (I(1) − I(0)) (τ1 ∂I(1)/∂y + τ0 ∂I(0)/∂y)
- in JEM, overlapped block motion compensation (OBMC) is performed for all motion compensated (MC) block boundaries except the right and bottom boundaries of a CU.
- OBMC may be applied for both luma and chroma components.
- an MC block corresponds to a coding block.
- when a CU is coded with sub-CU mode, each sub-block of the CU is an MC block.
- OBMC is performed at sub-block level for all MC block boundaries, where sub-block size is set equal to 4x4, as illustrated in FIGS. 9A and 9B .
- when OBMC applies to the current sub-block, the motion vectors of four connected neighbouring sub-blocks are also used to derive prediction blocks for the current sub-block.
- These multiple prediction blocks based on multiple motion vectors are combined to generate the final prediction signal of the current sub-block.
- the prediction block based on the motion vectors of a neighbouring sub-block is denoted as P N, with N indicating an index for the neighbouring above, below, left, and right sub-blocks, and the prediction block based on the motion vectors of the current sub-block is denoted as P C.
- when P N is based on the motion information of a neighbouring sub-block that contains the same motion information as the current sub-block, the OBMC is not performed from P N. Otherwise, every pixel of P N is added to the same pixel in P C, i.e., four rows/columns of P N are added to P C.
- the weighting factors ⁇ 1/4, 1/8, 1/16, 1/32 ⁇ are used for P N and the weighting factors ⁇ 3/4, 7/8, 15/16, 31/32 ⁇ are used for P C .
- the exception is small MC blocks (i.e., when the height or width of the coding block is equal to 4, or a CU is coded with sub-CU mode), for which only two rows/columns of P N are added to P C.
- weighting factors ⁇ 1/4, 1/8 ⁇ are used for P N and weighting factors ⁇ 3/4, 7/8 ⁇ are used for P C .
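- the row-wise blending for the above neighbour can be sketched as follows (illustrative Python; only the above-neighbour direction is shown, and the function name is hypothetical):

```python
def obmc_blend_above(pc, pn, rows=4):
    # Row k (from the top boundary) blends the neighbour-based prediction
    # pn with weight 1/4, 1/8, 1/16, 1/32 and pc with the complement.
    weights = [1 / 4, 1 / 8, 1 / 16, 1 / 32]
    out = [row[:] for row in pc]
    for k in range(min(rows, len(pc), len(weights))):
        w = weights[k]
        out[k] = [(1 - w) * c + w * n for c, n in zip(pc[k], pn[k])]
    return out
```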
- BIO may also be applied for the derivation of the prediction block P N .
- a CU level flag is signalled to indicate whether OBMC is applied or not for the current CU.
- OBMC is applied by default.
- the prediction signal formed by using the motion information of the top neighboring block and the left neighboring block is used to compensate the top and left boundaries of the original signal of the current CU, and then the normal motion estimation process is applied.
- BIO potentially provides more than 1% Bjontegaard-Delta bitrate (BD-rate) reduction in JEM4.0
- BIO also potentially introduces significant computational complexity and may necessitate a memory bandwidth increase for both encoder and decoder.
- This disclosure describes techniques that may potentially reduce the computational complexity and required memory bandwidth associated with BIO.
- a video coder may determine an amount of a BIO motion, e.g., the vx and vy values described above, on a sub-block level and use that determined amount of BIO motion to modify sample values of a predictive block on a sample-by-sample basis. Accordingly, the techniques of this disclosure may improve video encoders and video decoders by allowing them to achieve the coding gains of BIO without incurring the substantial processing and memory burdens required for existing implementations of BIO.
- this disclosure introduces techniques for reducing the complexity of BIO by re-defining the window ⁇ .
- Such techniques may, for example be performed by video encoder 20 (e.g., motion estimation unit 42 and/or motion compensation unit 44) or by video decoder 30 (e.g., motion compensation unit 72).
- in one example, the window Ω is defined as any block within the current block covering the current pixel, with size MxN, where M and N are any positive integers.
- in another example, the current block is divided into non-overlapped sub-blocks, and the window Ω is defined as the sub-block which covers the current pixel.
- the sub-block is defined as the smallest block for motion vector storage which covers current pixel. In HEVC and JEM, the smallest block size is 4x4.
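- mapping a pixel to its non-overlapping window is then a simple index computation (an illustrative sketch; the 4x4 default mirrors the HEVC/JEM smallest motion-storage block):

```python
def window_for_pixel(x, y, sub_w=4, sub_h=4):
    # The window for (x, y) is the sub-block containing the pixel,
    # returned as (x0, y0, x1, y1) inclusive corners.
    x0 = (x // sub_w) * sub_w
    y0 = (y // sub_h) * sub_h
    return x0, y0, x0 + sub_w - 1, y0 + sub_h - 1
```

all pixels of a sub-block share one window, so the sums used to derive vx and vy need only be accumulated once per sub-block.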
- the size of the window Ω is adaptive according to coding information, such as the size of the current block and the coding mode.
- in one example, a larger window Ω can be used.
- when the current block is coded in sub-block modes, such as sub-CU merge, Affine, and FRUC modes, the window Ω is set as the sub-block.
- FIG. 11 shows an example of the proposed BIO for an 8x4 block, in accordance with the techniques of this disclosure, with a window Ω for pixels A, B, and C.
- equal weightings may be used for solving vx and vy as shown in equation (7).
- unequal weightings can be used for solving vx and vy as shown in equation (10).
- the unequal weightings can be a function of the distances between the center pixel and the associated pixels.
- the weighting can be calculated using a bi-lateral approach, as for example described at https://en.wikipedia.org/wiki/Bilateral_filter.
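- one possible distance-based weighting, in the spirit of the bilateral filter's spatial term, is sketched below (sigma_d is an assumed illustrative parameter, not from the patent):

```python
import math

def bilateral_weight(i, j, ci, cj, sigma_d=2.0):
    # Gaussian fall-off with the distance between pixel (i, j) and the
    # window's center pixel (ci, cj).
    d2 = (i - ci) ** 2 + (j - cj) ** 2
    return math.exp(-d2 / (2 * sigma_d ** 2))
```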
- look-up tables can be used to store all the weighting factors for each pixel for the window ⁇ in equation (7).
- when deriving P N for OBMC, the BIO is only performed for partial pixels when deriving the predictors using the neighbor motions. In one example, the BIO is totally disabled for all the pixels in deriving P N. In yet another example, the BIO is only applied to the pixels in the outer two lines, as shown in FIGS. 12A-12D.
- for each block, how many lines BIO is applied to can be explicitly signaled at the slice level or in the SPS/PPS. Whether BIO is disabled or partially disabled can also be explicitly signaled at the slice level or in the SPS/PPS.
- how many lines BIO is applied to can also be implicitly derived based on certain coding conditions, such as CU mode (sub-block mode or non-sub-block mode), block size, or the combination with other tools, such as a signaled illumination compensation (IC) flag.
- whether BIO is disabled or partially disabled can also be implicitly derived based on certain conditions, such as CU mode (sub-block mode or non-sub-block mode), block size, or the combination with other tools, such as a signaled IC flag.
- FIGS. 12A-12D show examples of the proposed simplified BIO on OBMC according to the techniques of this disclosure, where x represents the predictor derived without BIO and o represents the predictor derived with BIO.
- the motion vector refinement from BIO can be block-based. Let the block size be M-by-N; a weighting function can be used to provide different scale factors to pixels at different locations during the calculation of the terms in equation (7).
- in equations (5) and (6), interpolated pixels and their gradient values gathered from the entire block can be used to solve vx and vy jointly, instead of solving vx and vy individually for each pixel position.
- the window Ω can be defined as a running window centered at each pixel location, and the averaged value obtained by summing values from all locations is used.
- ⁇ k can be the 5x5 window defined in the current BIO design for each pixel and hence the weighting function can be determined upfront.
- An example of weighting function used for 4x4 sub-block with 5x5 window is shown in FIG. 13.
- FIG. 13 shows an example of a weighting function for a 4x4 sub-block with a 5x5 window.
- the weighting function can be sent in SPS, PPS, or slice header.
- a set of pre-defined weighting functions can be stored and only the indexes of the weighting functions need to be signaled.
- the refined motion vector can be found using pixels lying at the central part of the sub-block.
- the gradient values of the central pixels can be calculated using an interpolation filter, and a window of size M-by-N can be applied to the interpolated pixels to provide different weights to the central pixels, in order to calculate the variables s1-s6 in equation (7).
- the gradient values of the central points can be calculated and the averaged value of the central points can be used (equal-weight window).
- a median filter can be used to select the representative pixels to calculate variables s1-s6 in equation (7).
- the window size for each pixel may be modified to be the whole current block, which potentially adds computational complexity to the current design when a current block is larger than or equal to 8x4.
- in the worst case of the modifications, a 128x128 window is used for the accumulation of gradients and predictors for each pixel within a 128x128 block.
- JEM-4.0 provides the flexibility either to perform the MC and BIO for each sub-block in parallel, or to perform the MC and BIO for the larger block aggregated from the sub-blocks with the same MV in a one-time effort. Either way, JEM-4.0 provides identical coding results.
- the modified BIO in JVET-D0042 utilizes a block-size-dependent gradient calculation and weighting factors, such that performing MC and BIO for two neighboring same-motion blocks jointly or separately may lead to different results. To avoid different results, it has to be specified that the decoder shall perform MC and BIO at either the block level or a certain sub-block level. Such a constraint may be too strict and not desirable for practical codec implementations.
- the complexity of BIO may be further reduced by re-defining the window Ω.
- two types of the window Ω are defined: one is the non-overlapping window and the other is the sliding window.
- current block is divided into non-overlapping sub-blocks and the window ⁇ is defined as the sub-block which covers current pixel as shown in FIG. 11 .
- the window ⁇ is defined as a block centered at a current pixel as shown in FIG. 7 .
- the size of the window ⁇ can be determined using different methods as illustrated below.
- the window ⁇ is a rectangular block with size MxN, where M and N can be any non-negative integer such as (4x4, 8x8, 16x16, 8x4 and so on).
- the window ⁇ is not limited to a rectangular shape and can be any other shape such as a diamond shape.
- the described techniques can also be applied to shapes other than the rectangular shape if applicable.
- the size of the window may be fixed or variable and may be either predetermined or signaled in the bitstream. When signaled, the size may be signaled in the sequence parameter set (SPS), picture parameter set (PPS), slice header, or at the CTU level.
- the window size can be jointly determined by the size of the motion compensated (MC) block according to the equations below.
- Horizontal window size: M = min(M, MC_Size)
- Vertical window size: N = min(N, MC_Size)
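The min() rule above can be sketched in a few lines. This is a hedged illustration: the function name and the symmetric use of a single MC_Size for both dimensions are assumptions for the sketch.

```python
def bio_window_size(m, n, mc_size):
    """Clamp a nominal M x N BIO window to the motion compensated (MC)
    block size, per the min() rule above. Names are illustrative."""
    return min(m, mc_size), min(n, mc_size)

# A nominal 8x8 window inside a 4x4 MC block shrinks to 4x4,
# while a 4x4 window inside a 16x16 MC block is unchanged.
print(bio_window_size(8, 8, 4))    # -> (4, 4)
print(bio_window_size(4, 4, 16))   # -> (4, 4)
```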
- the motion compensated (MC) block depends purely on coding information, such as the size of the current block and the coding modes.
- the motion compensated (MC) block is set as the whole CU when the current CU is coded with non-sub-block modes, i.e., modes other than sub-block modes such as sub-CU merge, Affine, and FRUC mode.
- the motion compensated (MC) block is set as a sub-block when sub-block modes, such as sub-CU merge, Affine, and FRUC mode, are used, regardless of whether the sub-blocks have the same motion information.
- the motion compensated (MC) block is defined as the block of samples within a CU that have the same MVs.
- the motion compensated (MC) block is set as the whole CU when the current CU is coded with non-sub-block modes, i.e., modes other than sub-block modes such as sub-CU merge, Affine, and FRUC mode.
- the sub-blocks with the same motion information are merged into a motion compensated (MC) block according to a certain scanning order of the sub-blocks.
- the size of the window Ω is adaptive according to coding information, such as the size of the current block and the coding modes.
- the window Ω is set as the whole current block or a quarter of the current block when the current block is coded with non-sub-block modes, i.e., modes other than sub-block modes such as sub-CU merge, Affine, and FRUC mode; and the window Ω is set as the sub-block when the current block is coded with sub-block modes.
- the window size should be smaller than or equal to the maximum Transform Unit (TU) size allowed in the video codec system.
- the window size should be larger than or equal to the smallest MC block size, such as 4x4.
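Taken together, the two constraints above bound each window dimension between the smallest MC block and the largest allowed TU. A minimal sketch, where the default bounds of 4 and 64 are illustrative assumptions rather than values fixed by the text:

```python
def clamp_window_dim(size, min_mc=4, max_tu=64):
    """Keep one window dimension within [smallest MC block, largest TU].
    The defaults 4 and 64 are illustrative assumptions."""
    return max(min_mc, min(size, max_tu))

print(clamp_window_dim(128))  # -> 64 (capped at the max TU size)
print(clamp_window_dim(2))    # -> 4  (raised to the smallest MC block)
print(clamp_window_dim(16))   # -> 16 (already within bounds)
```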
- this disclosure introduces techniques for performing the BIO as a post-processing step after all motion compensated prediction is finished.
- OBMC can then be applied to generate better predictors for the current block.
- BIO is then applied using the motion information of the current block to further refine the predictor.
- the motion of the entire block may be used.
- the averaged motion vector from OBMC can be used.
- the median motion vector (for each dimension individually) can be used.
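The averaged and per-dimension median alternatives above can be sketched as follows; representing motion vectors as (x, y) integer pairs is an assumption made for the illustration.

```python
import statistics

def average_mv(mvs):
    """Component-wise average of a list of (x, y) motion vectors."""
    n = len(mvs)
    return (sum(x for x, _ in mvs) / n, sum(y for _, y in mvs) / n)

def median_mv(mvs):
    """Median taken for each dimension individually, as described above."""
    return (statistics.median(x for x, _ in mvs),
            statistics.median(y for _, y in mvs))

mvs = [(2, -1), (4, 0), (3, 7)]
print(average_mv(mvs))  # -> (3.0, 2.0)
print(median_mv(mvs))   # -> (3, 0)
```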
- Weighting functions can be designed differently when considering block-based derivation of the motion vector refinement of BIO. Equal weights can be used for any of the above-mentioned methods. Alternatively, more weight can be placed toward the central part of the window. In one example, the weights can be calculated as the inverse of the distance (including but not limited to the L1-norm or L2-norm) between the center of the window and the pixel.
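As a hedged sketch of the inverse-distance weighting idea above, the function below builds per-pixel weights for an M x N window; the "+1" offset that keeps the center weight finite is an assumption of this sketch, not something the text specifies.

```python
def window_weights(m, n, norm="l1"):
    """Per-pixel weights for an m x n window: inverse distance from the
    window center, so weights are largest toward the middle."""
    cy, cx = (m - 1) / 2.0, (n - 1) / 2.0
    weights = []
    for y in range(m):
        row = []
        for x in range(n):
            if norm == "l1":
                d = abs(y - cy) + abs(x - cx)
            else:  # L2-norm
                d = ((y - cy) ** 2 + (x - cx) ** 2) ** 0.5
            # +1 avoids division by zero at the exact center (assumption).
            row.append(1.0 / (1.0 + d))
        weights.append(row)
    return weights

w = window_weights(3, 3)
# The center pixel receives the largest weight.
print(w[1][1], max(max(row) for row in w))  # -> 1.0 1.0
```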
- FIG. 14 is a block diagram illustrating an example of video encoder 20 that may implement techniques for bi-directional optical flow.
- Video encoder 20 may perform intra- and inter-coding of video blocks within video slices.
- Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame or picture.
- Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames or pictures of a video sequence.
- Intra-mode may refer to any of several spatial based coding modes.
- Inter-modes such as uni-directional prediction (P mode) or bi-prediction (B mode), may refer to any of several temporal-based coding modes.
- video encoder 20 receives video data and stores the received video data in video data memory 38.
- Video data memory 38 may store video data to be encoded by the components of video encoder 20.
- the video data stored in video data memory 38 may be obtained, for example, from video source 18.
- Reference picture memory 64 may store reference video data for use in encoding video data by video encoder 20, e.g., in intra- or inter-coding modes.
- Video data memory 38 and reference picture memory 64 may be formed by any of a variety of memory devices, such as dynamic random-access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices.
- Video data memory 38 and reference picture memory 64 may be provided by the same memory device or separate memory devices.
- video data memory 38 may be on-chip with other components of video encoder 20, or off-chip relative to those components.
- Video encoder 20 receives a current video block within a video frame to be encoded.
- video encoder 20 includes mode select unit 40, reference picture memory 64 (which may also be referred to as a decoded picture buffer (DPB)), summer 50, transform processing unit 52, quantization unit 54, and entropy encoding unit 56.
- Mode select unit 40 includes motion compensation unit 44, motion estimation unit 42, intra-prediction processing unit 46, and partition unit 48.
- video encoder 20 also includes inverse quantization unit 58, inverse transform processing unit 60, and summer 62.
- a deblocking filter (not shown in FIG. 14 ) may also be included to filter block boundaries to remove blockiness artifacts from reconstructed video.
- the deblocking filter would typically filter the output of summer 62. Additional filters (in loop or post loop) may also be used in addition to the deblocking filter. Such filters are not shown for brevity, but if desired, may filter the output of summer 50 (as an in-loop filter).
- video encoder 20 receives a video frame or slice to be coded.
- the frame or slice may be divided into multiple video blocks.
- Motion estimation unit 42 and motion compensation unit 44 perform inter-predictive encoding of the received video block relative to one or more blocks in one or more reference frames to provide temporal prediction.
- Intra-prediction processing unit 46 may alternatively intra-predict the received video block using pixels of one or more neighboring blocks in the same frame or slice as the block to be coded to provide spatial prediction.
- Video encoder 20 may perform multiple coding passes, e.g., to select an appropriate coding mode for each block of video data.
- partition unit 48 may partition blocks of video data into sub-blocks, based on evaluation of previous partitioning schemes in previous coding passes. For example, partition unit 48 may initially partition a frame or slice into LCUs, and partition each of the LCUs into sub-CUs based on rate-distortion analysis (e.g., rate-distortion optimization). Mode select unit 40 may further produce a quadtree data structure indicative of partitioning of an LCU into sub-CUs.
- Leaf-node CUs of the quadtree may include one or more PUs and one or more TUs.
- Mode select unit 40 may select one of the prediction modes, intra or inter, e.g., based on error results, and provides the resulting predicted block to summer 50 to generate residual data and to summer 62 to reconstruct the encoded block for use as a reference frame. Mode select unit 40 also provides syntax elements, such as motion vectors, intra-mode indicators, partition information, and other such syntax information, to entropy encoding unit 56.
- Motion estimation unit 42 and motion compensation unit 44 may be highly integrated, but are illustrated separately for conceptual purposes.
- Motion estimation, performed by motion estimation unit 42, is the process of generating motion vectors, which estimate motion for video blocks.
- a motion vector, for example, may indicate the displacement of a PU of a video block within a current video frame or picture relative to a predictive block within a reference frame (or other coded unit), relative to the current block being coded within the current frame (or other coded unit).
- a predictive block is a block that is found to closely match the block to be coded, in terms of pixel difference, which may be determined by sum of absolute difference (SAD), sum of square difference (SSD), or other difference metrics.
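The SAD and SSD metrics mentioned above are straightforward to state in code; the nested-list block representation here is illustrative.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size sample blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def ssd(block_a, block_b):
    """Sum of squared differences between two equal-size sample blocks."""
    return sum((a - b) ** 2 for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

cur = [[10, 12], [8, 9]]
ref = [[11, 10], [8, 12]]
print(sad(cur, ref))  # -> 1 + 2 + 0 + 3 = 6
print(ssd(cur, ref))  # -> 1 + 4 + 0 + 9 = 14
```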
- video encoder 20 may calculate values for sub-integer pixel positions of reference pictures stored in reference picture memory 64. For example, video encoder 20 may interpolate values of one-quarter pixel positions, one-eighth pixel positions, or other fractional pixel positions of the reference picture. Therefore, motion estimation unit 42 may perform a motion search relative to the full pixel positions and fractional pixel positions and output a motion vector with fractional pixel precision.
- Motion estimation unit 42 calculates a motion vector for a PU of a video block in an inter-coded slice by comparing the position of the PU to the position of a predictive block of a reference picture.
- the reference picture may be selected from a first reference picture list (List 0) or a second reference picture list (List 1), each of which identifies one or more reference pictures stored in reference picture memory 64.
- Motion estimation unit 42 sends the calculated motion vector to entropy encoding unit 56 and motion compensation unit 44.
- Motion compensation, performed by motion compensation unit 44, may involve fetching or generating the predictive block based on the motion vector determined by motion estimation unit 42. Again, motion estimation unit 42 and motion compensation unit 44 may be functionally integrated, in some examples. Upon receiving the motion vector for the PU of the current video block, motion compensation unit 44 may locate the predictive block to which the motion vector points in one of the reference picture lists. Summer 50 forms a residual video block by subtracting pixel values of the predictive block from the pixel values of the current video block being coded, forming pixel difference values, as discussed below. In general, motion estimation unit 42 performs motion estimation relative to luma components, and motion compensation unit 44 uses motion vectors calculated based on the luma components for both chroma components and luma components. Mode select unit 40 may also generate syntax elements associated with the video blocks and the video slice for use by video decoder 30 in decoding the video blocks of the video slice.
- motion compensation unit 44 may be configured to perform any or all of the techniques of this disclosure (alone or in any combination). Although discussed with respect to motion compensation unit 44, it should be understood that mode select unit 40, motion estimation unit 42, partition unit 48, and/or entropy encoding unit 56 may also be configured to perform certain techniques of this disclosure, alone or in combination with motion compensation unit 44. In one example, motion compensation unit 44 may be configured to perform the BIO techniques discussed herein.
- Intra-prediction processing unit 46 may intra-predict a current block, as an alternative to the inter-prediction performed by motion estimation unit 42 and motion compensation unit 44, as described above. In particular, intra-prediction processing unit 46 may determine an intra-prediction mode to use to encode a current block. In some examples, intra-prediction processing unit 46 may encode a current block using various intra-prediction modes, e.g., during separate encoding passes, and intra-prediction processing unit 46 (or mode select unit 40, in some examples) may select an appropriate intra-prediction mode to use from the tested modes.
- intra-prediction processing unit 46 may calculate rate-distortion values using a rate-distortion analysis for the various tested intra-prediction modes, and select the intra-prediction mode having the best rate-distortion characteristics among the tested modes.
- Rate-distortion analysis generally determines an amount of distortion (or error) between an encoded block and an original, unencoded block that was encoded to produce the encoded block, as well as a bitrate (that is, a number of bits) used to produce the encoded block.
- Intra-prediction processing unit 46 may calculate ratios from the distortions and rates for the various encoded blocks to determine which intra-prediction mode exhibits the best rate-distortion value for the block.
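The trade-off between distortion and rate described above is commonly expressed as a Lagrangian cost J = D + lambda * R. A minimal sketch: the mode names, distortions, and rates below are made-up numbers for illustration, not values from the disclosure.

```python
def rd_cost(distortion, rate_bits, lmbda):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lmbda * rate_bits

def best_mode(candidates, lmbda):
    """Pick the candidate with the lowest RD cost.
    candidates: list of (mode_name, distortion, rate_bits)."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lmbda))[0]

modes = [("dc", 100.0, 10), ("planar", 80.0, 30), ("angular_10", 60.0, 60)]
# A small lambda favors low distortion; a large lambda favors low rate.
print(best_mode(modes, lmbda=0.5))  # -> angular_10
print(best_mode(modes, lmbda=2.0))  # -> dc
```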
- intra-prediction processing unit 46 may provide information indicative of the selected intra-prediction mode for the block to entropy encoding unit 56.
- Entropy encoding unit 56 may encode the information indicating the selected intra-prediction mode.
- Video encoder 20 may include in the transmitted bitstream configuration data, which may include a plurality of intra-prediction mode index tables and a plurality of modified intra-prediction mode index tables (also referred to as codeword mapping tables), definitions of encoding contexts for various blocks, and indications of a most probable intra-prediction mode, an intra-prediction mode index table, and a modified intra-prediction mode index table to use for each of the contexts.
- Video encoder 20 forms a residual video block by subtracting the prediction data from mode select unit 40 from the original video block being coded.
- Summer 50 represents the component or components that perform this subtraction operation.
- Transform processing unit 52 applies a transform, such as a discrete cosine transform (DCT) or a conceptually similar transform, to the residual block, producing a video block comprising transform coefficient values. Wavelet transforms, integer transforms, sub-band transforms, discrete sine transforms (DSTs), or other types of transforms could be used instead of a DCT.
- transform processing unit 52 applies the transform to the residual block, producing a block of transform coefficients.
- the transform may convert the residual information from a pixel domain to a transform domain, such as a frequency domain.
- Transform processing unit 52 may send the resulting transform coefficients to quantization unit 54.
- Quantization unit 54 quantizes the transform coefficients to further reduce bit rate.
- the quantization process may reduce the bit depth associated with some or all of the coefficients.
- the degree of quantization may be modified by adjusting a quantization parameter.
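As an illustration of how a quantization parameter controls the degree of quantization, the sketch below uses the HEVC-style rule of thumb that the step size roughly doubles every 6 QP; the 2^((QP-4)/6) formula and the helper names are assumptions for the sketch, not the codec's exact integer arithmetic.

```python
def qstep(qp):
    """Approximate quantization step: doubles every 6 QP
    (HEVC-style rule of thumb, not exact codec arithmetic)."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeffs, qp):
    """Uniform scalar quantization of transform coefficients."""
    step = qstep(qp)
    return [round(c / step) for c in coeffs]

def dequantize(levels, qp):
    """Inverse quantization back to the coefficient domain."""
    step = qstep(qp)
    return [lvl * step for lvl in levels]

# A higher QP gives a larger step, hence coarser levels and fewer bits.
print(quantize([80, -16, 9], qp=22))  # step is 8 -> [10, -2, 1]
```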
- entropy encoding unit 56 entropy codes the quantized transform coefficients.
- entropy encoding unit 56 may perform context adaptive variable length coding (CAVLC), context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding or another entropy coding technique.
- context may be based on neighboring blocks.
- the encoded bitstream may be transmitted to another device (e.g., video decoder 30) or archived for later transmission or retrieval.
- Inverse quantization unit 58 and inverse transform processing unit 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain.
- summer 62 adds the reconstructed residual block to the motion compensated prediction block earlier produced by motion compensation unit 44 or intra-prediction processing unit 46 to produce a reconstructed video block for storage in reference picture memory 64.
- the reconstructed video block may be used by motion estimation unit 42 and motion compensation unit 44 as a reference block to inter-code a block in a subsequent video frame.
- FIG. 15 is a block diagram illustrating an example of video decoder 30 that may implement techniques for bi-directional optical flow.
- video decoder 30 includes an entropy decoding unit 70, motion compensation unit 72, intra-prediction processing unit 74, inverse quantization unit 76, inverse transform processing unit 78, reference picture memory 82 and summer 80.
- Video decoder 30 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 20 ( FIG. 14 ).
- Motion compensation unit 72 may generate prediction data based on motion vectors received from entropy decoding unit 70, while intra-prediction processing unit 74 may generate prediction data based on intra-prediction mode indicators received from entropy decoding unit 70.
- video decoder 30 receives an encoded video bitstream that represents video blocks of an encoded video slice and associated syntax elements from video encoder 20.
- Video decoder 30 stores the received encoded video bitstream in video data memory 68.
- Video data memory 68 may store video data, such as an encoded video bitstream, to be decoded by the components of video decoder 30.
- the video data stored in video data memory 68 may be obtained, for example, via computer-readable medium 16, from storage media, or from a local video source, such as a camera, or by accessing physical data storage media.
- Video data memory 68 may form a coded picture buffer (CPB) that stores encoded video data from an encoded video bitstream.
- Reference picture memory 82 may store reference video data for use in decoding video data by video decoder 30, e.g., in intra- or inter-coding modes.
- Video data memory 68 and reference picture memory 82 may be formed by any of a variety of memory devices, such as DRAM, SDRAM, MRAM, RRAM, or other types of memory devices.
- Video data memory 68 and reference picture memory 82 may be provided by the same memory device or separate memory devices.
- video data memory 68 may be on-chip with other components of video decoder 30, or off-chip relative to those components.
- Entropy decoding unit 70 of video decoder 30 entropy decodes the bitstream to generate quantized coefficients, motion vectors or intra-prediction mode indicators, and other syntax elements.
- Entropy decoding unit 70 forwards the motion vectors and other syntax elements to motion compensation unit 72.
- Video decoder 30 may receive the syntax elements at the video slice level and/or the video block level.
- intra-prediction processing unit 74 may generate prediction data for a video block of the current video slice based on a signaled intra prediction mode and data from previously decoded blocks of the current frame or picture.
- motion compensation unit 72 produces predictive blocks for a video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 70.
- the predictive blocks may be produced from one of the reference pictures within one of the reference picture lists.
- Video decoder 30 may construct the reference frame lists, List 0 and List 1, using default construction techniques based on reference pictures stored in reference picture memory 82.
- Motion compensation unit 72 determines prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and uses the prediction information to produce the predictive blocks for the current video block being decoded. For example, motion compensation unit 72 uses some of the received syntax elements to determine a prediction mode (e.g., intra- or inter-prediction) used to code the video blocks of the video slice, an inter-prediction slice type (e.g., B slice, P slice, or GPB slice), construction information for one or more of the reference picture lists for the slice, motion vectors for each inter-encoded video block of the slice, inter-prediction status for each inter-coded video block of the slice, and other information to decode the video blocks in the current video slice.
- Motion compensation unit 72 may also perform interpolation based on interpolation filters for sub-pixel precision. Motion compensation unit 72 may use interpolation filters as used by video encoder 20 during encoding of the video blocks to calculate interpolated values for sub-integer pixels of reference blocks. In this case, motion compensation unit 72 may determine the interpolation filters used by video encoder 20 from the received syntax elements and use the interpolation filters to produce predictive blocks.
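The sub-integer interpolation described above can be illustrated with a bilinear filter. Real codecs use longer separable filters (e.g., 8-tap for HEVC luma), so this is a simplified stand-in for the idea, not the codec's actual interpolation filter.

```python
def bilinear_sample(ref, x, y):
    """Interpolate a sample at fractional position (x, y) inside a 2-D
    reference block using bilinear weights. (x, y) must leave room for
    the 2x2 integer-pel neighborhood used below."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    a = ref[y0][x0]          # top-left integer-pel sample
    b = ref[y0][x0 + 1]      # top-right
    c = ref[y0 + 1][x0]      # bottom-left
    d = ref[y0 + 1][x0 + 1]  # bottom-right
    top = a * (1 - fx) + b * fx
    bot = c * (1 - fx) + d * fx
    return top * (1 - fy) + bot * fy

ref = [[0, 4], [8, 12]]
print(bilinear_sample(ref, 0.5, 0.5))   # half-pel in both axes -> 6.0
print(bilinear_sample(ref, 0.25, 0.0))  # quarter-pel horizontally -> 1.0
```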
- motion compensation unit 72 may be configured to perform any or all of the techniques of this disclosure (alone or in any combination).
- motion compensation unit 72 may be configured to perform the BIO techniques discussed herein.
- Inverse quantization unit 76 inverse quantizes, i.e., de-quantizes, the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 70.
- the inverse quantization process may include use of a quantization parameter QP_Y calculated by video decoder 30 for each video block in the video slice to determine a degree of quantization and, likewise, a degree of inverse quantization that should be applied.
- Inverse transform processing unit 78 applies an inverse transform, e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to produce residual blocks in the pixel domain.
- video decoder 30 forms a decoded video block by summing the residual blocks from inverse transform processing unit 78 with the corresponding predictive blocks generated by motion compensation unit 72.
- Summer 80 represents the component or components that perform this summation operation.
- a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts.
- Other loop filters may also be used to smooth pixel transitions, or otherwise improve the video quality.
- the decoded video blocks in a given frame or picture are then stored in reference picture memory 82, which stores reference pictures used for subsequent motion compensation.
- Reference picture memory 82 also stores decoded video for later presentation on a display device, such as display device 32 of FIG. 1 .
- reference picture memory 82 may store decoded pictures.
- FIG. 16 is a flowchart illustrating an example operation of a video decoder for decoding video data in accordance with a technique of this disclosure.
- the video decoder described with respect to FIG. 16 may, for example, be a video decoder, such as video decoder 30, for outputting displayable decoded video or may be a video decoder implemented in a video encoder, such as the decoding loop of video encoder 20, which includes inverse quantization unit 58, inverse transform processing unit 60, summer 62, and reference picture memory 64, as well as portions of mode select unit 40.
- the video decoder determines that a block of video data is encoded using a bi-directional inter prediction mode (200).
- the video decoder determines a first motion vector (MV) for the block that points to a first reference picture (202).
- the video decoder determines a second MV for the block that points to a second reference picture, with the first reference picture being different than the second reference picture (204).
- the video decoder uses the first MV to locate a first predictive block in the first reference picture (206).
- the video decoder uses the second MV to locate a second predictive block in the second reference picture (208).
- the video decoder determines a first amount of BIO motion for a first sub-block of the first predictive block (210).
- the first sub-block may be different than a coding unit, a prediction unit, and a transform unit for the block.
- the video decoder may, in some examples, determine the first amount of BIO motion based on samples in the first sub-block and samples outside the first sub-block, and in other examples, determine the first amount of BIO motion based only on samples in the first sub-block.
- the first amount of BIO motion may for example include a motion vector field that includes a horizontal component and a vertical component.
- the video decoder determines a first final predictive sub-block for the block of video data based on the first sub-block of the first predictive block, a first sub-block of the second predictive block, and the first amount of BIO motion (212).
- the video decoder may determine the first final predictive sub-block using, for example, equation (2) above.
- the video decoder determines a second amount of BIO motion for a second sub-block of the first predictive block (214).
- the second sub-block may be different than a coding unit, a prediction unit, and a transform unit for the block.
- the video decoder may, in some examples, determine the second amount of BIO motion based on samples in the second sub-block and samples outside the second sub-block, and in other examples, determine the second amount of BIO motion based only on samples in the second sub-block.
- the second amount of BIO motion may, for example, include a motion vector field that includes a horizontal component and a vertical component.
- the video decoder determines a second final predictive sub-block for the block of video data based on the second sub-block of the first predictive block, a second sub-block of the second predictive block, and the second amount of BIO motion (216).
- the video decoder may determine the second final predictive sub-block using, for example, equation (2) above.
- the video decoder determines a final predictive block for the block of video data based on the first final predictive sub-block and the second final predictive sub-block (218).
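As a hedged sketch of steps (212)-(218), the code below averages co-located sub-blocks from the two predictive blocks, adds a per-sample BIO correction, and stitches the final sub-blocks together. The simple average-plus-offset is a stand-in for equation (2), which is not reproduced here, and all names are illustrative.

```python
def bio_refined_subblock(p0, p1, offsets):
    """Combine co-located sub-blocks from the two predictive blocks and
    add a per-sample BIO correction (stand-in for equation (2))."""
    return [[(a + b) / 2.0 + o for a, b, o in zip(r0, r1, ro)]
            for r0, r1, ro in zip(p0, p1, offsets)]

def assemble(final_subblocks_row):
    """Stitch a horizontal row of final predictive sub-blocks side by
    side into one block of samples."""
    rows = len(final_subblocks_row[0])
    return [sum((sb[r] for sb in final_subblocks_row), [])
            for r in range(rows)]

p0 = [[10, 20], [30, 40]]   # sub-block of the first predictive block
p1 = [[14, 16], [34, 36]]   # co-located sub-block of the second
off = [[1, -1], [0, 0]]     # per-sample BIO correction (illustrative)
print(bio_refined_subblock(p0, p1, off))  # -> [[13.0, 17.0], [32.0, 38.0]]
```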
- the video decoder may, for example, add residual data to the final predictive block to determine a reconstructed block for the block of video data.
- the video decoder may also perform one or more filtering processes on the reconstructed block of video data.
- the video decoder outputs a picture of video data comprising a decoded version of the block of video data (220).
- the video decoder may, for example, output the picture by storing the picture in a reference picture memory, and the video decoder may use the picture as a reference picture in encoding another picture of the video data.
- when the video decoder is a video decoder configured to output displayable decoded video, the video decoder may, for example, output the picture of video data to a display device.
- Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol.
- Computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave.
- Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure.
- a computer program product may include a computer-readable medium.
- such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- any connection is properly termed a computer-readable medium.
- For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
- Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
- the instructions may be executed by one or more processors, such as one or more DSPs, general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry.
- the term "processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein.
- the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec.
- the techniques could be fully implemented in one or more circuits or logic elements.
- the techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set).
- Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Description
- This disclosure relates to video coding.
- Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones, so-called "smart phones," video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video coding techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), ITU-T H.265/High Efficiency Video Coding (HEVC), and extensions of such standards. The video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video coding techniques.
- Video coding techniques include spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video slice (e.g., a video frame or a portion of a video frame) may be partitioned into video blocks, which may also be referred to as treeblocks, coding units (CUs), and/or coding nodes. Video blocks in an intra-coded (I) slice of a picture may be encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture. Video blocks in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture or temporal prediction with respect to reference samples in other reference pictures. Pictures may be referred to as frames, and reference pictures may be referred to as reference frames.
- Spatial or temporal prediction results in a predictive block for a block to be coded. Residual data represents pixel differences between the original block to be coded and the predictive block. An inter-coded block is encoded according to a motion vector that points to a block of reference samples forming the predictive block, and the residual data indicating the difference between the coded block and the predictive block. An intra-coded block is encoded according to an intra-coding mode and the residual data. For further compression, the residual data may be transformed from the pixel domain to a transform domain, resulting in residual transform coefficients, which then may be quantized. The quantized transform coefficients, initially arranged in a two-dimensional array, may be scanned in order to produce a one-dimensional vector of transform coefficients, and entropy coding may be applied to achieve even more compression.
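The residual, transform, quantization, and scan steps described above can be sketched numerically. The following is an illustrative toy pipeline, not any codec's normative process: the orthonormal DCT-II stands in for the integer transforms real codecs use, and the flat quantization step `qstep` is a hypothetical example value.

```python
import numpy as np

# Toy sketch of the pipeline described above: residual -> transform ->
# quantize -> scan. The plain orthonormal DCT-II here stands in for the
# integer transforms used by real codecs; qstep is an arbitrary example.
def dct2(block):
    n = block.shape[0]
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)  # DC basis row
    return c @ block @ c.T

def encode_block(original, predictive, qstep=8.0):
    residual = original - predictive          # pixel differences
    coeffs = dct2(residual)                   # pixel domain -> transform domain
    quantized = np.round(coeffs / qstep)      # lossy quantization
    return quantized.flatten()                # 2-D coefficients scanned to 1-D

# A flat residual of +2 concentrates all energy in the DC coefficient:
vec = encode_block(np.full((4, 4), 120.0), np.full((4, 4), 118.0))
```

With a constant residual, only the DC coefficient survives quantization, which is why entropy coding of the scanned vector compresses so well.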
- In general, this disclosure describes techniques related to bi-directional optical flow (BIO) in video coding. The techniques of this disclosure may be used in conjunction with existing video codecs, such as High Efficiency Video Coding (HEVC), or may serve as an efficient coding tool for future video coding standards.
- The invention is defined in the independent claims, to which reference is directed, with preferred features set out in the dependent claims.
- The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description, drawings, and claims. Enabling disclosure for the protected invention is provided by the embodiments described in relation to figures 11 and 16. The other figures, aspects, and embodiments are provided for illustrative purposes and do not represent embodiments of the invention unless combined with all of the features respectively defined in the independent claims. -
FIG. 1 is a block diagram illustrating an example video encoding and decoding system that may utilize techniques for bi-directional optical flow. -
FIG. 2 is a conceptual diagram illustrating an example of unilateral motion estimation (ME) as a block-matching algorithm (BMA) performed for motion compensated frame-rate up-conversion (MC-FRUC). -
FIG. 3 is a conceptual diagram illustrating an example of bilateral ME as a BMA performed for MC-FRUC. -
FIG. 4A shows spatial neighboring MV candidates for merge mode. -
FIG. 4B shows spatial neighboring MV candidates for AMVP modes. -
FIG. 5A shows an example of a TMVP candidate. -
FIG. 5B shows an example of MV scaling. -
FIG. 6 shows an example of optical flow trajectory. -
FIG. 7 shows an example of BIO for an 8x4 block. -
FIG. 8 shows an example of modified BIO for an 8x4 block. -
FIGS. 9A and 9B show example illustrations of sub-blocks where OBMC applies. -
FIGS. 10A-10D show examples of OBMC weightings. -
FIG. 11 shows an example of the proposed BIO for an 8x4 block. -
FIGS. 12A-12D show examples of the proposed simplified BIO on OBMC. -
FIG. 13 shows an example weighting function for a 4x4 sub-block with a 5x5 window. -
FIG. 14 is a block diagram illustrating an example of a video encoder. -
FIG. 15 is a block diagram illustrating an example of a video decoder that may implement techniques for bi-directional optical flow. -
FIG. 16 is a flowchart illustrating an example operation of a video decoder, in accordance with a technique of this disclosure. - In general, the techniques of this disclosure are related to improvements of bi-directional optical flow (BIO) video coding techniques. BIO may be applied during motion compensation. As originally proposed, BIO is used to modify predictive sample values for bi-predicted inter coded blocks based on an optical flow trajectory in order to determine better predictive blocks, e.g., predictive blocks that more closely match an original block of video data. The various techniques of this disclosure may be applied, alone or in any combination, to determine when and whether to perform BIO when predicting blocks of video data, e.g., during motion compensation.
- As used in this disclosure, the term video coding generically refers to either video encoding or video decoding. Similarly, the term video coder may generically refer to a video encoder or a video decoder. Moreover, certain techniques described in this disclosure with respect to video decoding may also apply to video encoding, and vice versa. For example, often times video encoders and video decoders are configured to perform the same process, or reciprocal processes. Also, video encoders typically perform video decoding as part of the processes of determining how to encode video data. Therefore, unless explicitly stated to the contrary, it should not be assumed that a technique described with respect to video decoding cannot also be performed by a video encoder, or vice versa.
- This disclosure may also use terms such as current layer, current block, current picture, current slice, etc. In the context of this disclosure, the term current is intended to identify a block, picture, slice, etc. that is currently being coded, as opposed to, for example, previously or already coded blocks, pictures, and slices or yet to be coded blocks, pictures, and slices.
- In general, a picture is divided into blocks, each of which may be predictively coded. A video coder may predict a current block using intra-prediction techniques (using data from the picture including the current block), inter-prediction techniques (using data from a previously coded picture relative to the picture including the current block), or other techniques such as intra block copy, palette mode, dictionary mode, etc. Inter-prediction includes both uni-directional prediction and bi-directional prediction.
- For each inter-predicted block, a video coder may determine a set of motion information. The set of motion information may contain motion information for forward and backward prediction directions. Here, the forward and backward prediction directions are the two prediction directions of a bi-directional prediction mode. The terms "forward" and "backward" do not necessarily have a geometric meaning. Instead, the terms generally correspond to whether the reference pictures are to be displayed before ("backward") or after ("forward") the current picture. In some examples, "forward" and "backward" prediction directions may correspond to reference picture list 0 (RefPicList0) and reference picture list 1 (RefPicList1) of a current picture. When only one reference picture list is available for a picture or slice, only RefPicList0 is available and the motion information of each block of a slice always refers to a picture of RefPicList0 (e.g., is forward).
- In some cases, a motion vector together with a corresponding reference index may be used in a decoding process. Such a motion vector with an associated reference index is denoted as a uni-predictive set of motion information.
- For each prediction direction, the motion information contains a reference index and a motion vector. In some cases, for simplicity, a motion vector itself may be referred to in a way that it is assumed that the motion vector has an associated reference index. A reference index may be used to identify a reference picture in the current reference picture list (RefPicList0 or RefPicList1). A motion vector has a horizontal (x) and a vertical (y) component. In general, the horizontal component indicates a horizontal displacement within a reference picture, relative to the position of a current block in a current picture, needed to locate an x-coordinate of a reference block, while the vertical component indicates a vertical displacement within the reference picture, relative to the position of the current block, needed to locate a y-coordinate of the reference block.
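As a concrete illustration of the paragraph above, the integer part of each motion vector component displaces the current block's position to locate the reference block, while the fractional part would select an interpolation phase. Quarter-sample MV precision, common in HEVC, is assumed here for illustration.

```python
# Illustrative sketch (not normative) of how a motion vector's horizontal (x)
# and vertical (y) components locate a reference block relative to the
# current block's position. Quarter-pel MV units are an assumption here.
def locate_reference_block(cur_x, cur_y, mv_x, mv_y, mv_precision=4):
    # Integer part of the displacement selects the reference block position;
    # the fractional part would drive interpolation filtering (not shown).
    ref_x = cur_x + mv_x // mv_precision
    ref_y = cur_y + mv_y // mv_precision
    frac_x = mv_x % mv_precision
    frac_y = mv_y % mv_precision
    return (ref_x, ref_y), (frac_x, frac_y)

# A block at (64, 32) with MV (-10, 6) in quarter-pel units:
pos, frac = locate_reference_block(64, 32, -10, 6)
```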
- Picture order count (POC) values are widely used in video coding standards to identify a display order of a picture. Although there are cases in which two pictures within one coded video sequence may have the same POC value, this typically does not happen within a coded video sequence. Thus, POC values of pictures are generally unique, and thus can uniquely identify corresponding pictures. When multiple coded video sequences are present in a bitstream, pictures having the same POC value may be closer to each other in terms of decoding order. POC values of pictures are typically used for reference picture list construction, derivation of reference picture sets as in HEVC, and motion vector scaling.
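One common use of POC values named above, motion vector scaling, can be sketched as follows. This shows only the general idea of POC-distance-based scaling; the exact HEVC rounding and clipping rules are not reproduced, and the POC values used are hypothetical.

```python
# Hedged sketch of POC-distance-based motion vector scaling: a motion vector
# is stretched by the ratio of the target POC distance to the source POC
# distance. Not the exact HEVC arithmetic, which adds clipping and rounding.
def scale_mv(mv, poc_cur, poc_ref_src, poc_ref_dst):
    td = poc_cur - poc_ref_src   # temporal distance of the source MV
    tb = poc_cur - poc_ref_dst   # temporal distance to the target reference
    return (mv[0] * tb // td, mv[1] * tb // td)

# MV (8, -4) pointing one picture back, rescaled to a reference two pictures back:
scaled = scale_mv((8, -4), poc_cur=4, poc_ref_src=3, poc_ref_dst=2)
```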
- E. Alshina, A. Alshina, J.-H. Min, K. Choi, A. Saxena, M. Budagavi, "Known tools performance investigation for next generation video coding," ITU - Telecommunications Standardization Sector (hereinafter, "Alshina 1"), and A. Alshina, E. Alshina, T. Lee, "Bi-directional optical flow for improving motion compensation," Picture Coding Symposium (PCS), Nagoya, Japan, 2010 (hereinafter, "Alshina 2") described a method called bi-directional optical flow (BIO). BIO is based on pixel level optical flow. According to Alshina 1 and Alshina 2, BIO is only applied to blocks that have both forward and backward prediction. BIO as described in Alshina 1 and Alshina 2 is summarized below: -
- It0 is on the motion trajectory of It. That is, the motion from It0 to It is considered in the formula.
- It is further assumed that Vx0 = Vx1 = Vx and Vy0 = Vy1 = Vy, since the motion is along the trajectory, so that equation (D) reduces to equation (E).
- With the derived Vx and Vy, the final prediction of the block is calculated with equation (E). Vx and Vy are called the "BIO motion" for convenience.
- In general, a video coder performs BIO during motion compensation. That is, after the video coder determines a motion vector for a current block, the video coder produces a predicted block for the current block using motion compensation with respect to the motion vector. In general, the motion vector identifies the location of a reference block with respect to the current block in a reference picture. When performing BIO, a video coder modifies the motion vector on a per-pixel basis for the current block. That is, rather than retrieving each pixel of the reference block as a block unit, according to BIO, the video coder determines per-pixel modifications to the motion vector for the current block, and constructs the reference block such that the reference block includes reference pixels identified by the motion vector and the per-pixel modification for the corresponding pixel of the current block. Thus, BIO may be used to produce a more accurate reference block for the current block.
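The per-pixel, gradient-based correction that BIO applies on top of ordinary bi-prediction can be sketched as below. The weights and symbols follow the commonly published JEM-style formulation and are assumptions here, not necessarily the exact equations (A)-(E) of this disclosure.

```python
import numpy as np

# Hedged sketch of a BIO-style prediction: the bi-predictive average is
# corrected per pixel using horizontal/vertical gradients of the two
# predictive blocks, weighted by the derived BIO motion (vx, vy). The exact
# normative weights differ; this only illustrates the structure.
def bio_prediction(i0, i1, vx, vy):
    gy0, gx0 = np.gradient(i0.astype(np.float64))  # sample-wise gradients, list 0
    gy1, gx1 = np.gradient(i1.astype(np.float64))  # sample-wise gradients, list 1
    # Ordinary bi-prediction plus a gradient-based, per-pixel correction
    return 0.5 * (i0 + i1 + vx * (gx1 - gx0) + vy * (gy1 - gy0))

# With identical predictive blocks the gradients cancel and the result is the
# plain bi-predictive average:
pred = bio_prediction(np.full((4, 4), 100.0), np.full((4, 4), 100.0), 0.5, -0.25)
```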
-
FIG. 1 is a block diagram illustrating an example video encoding and decoding system 10 that may utilize techniques for bi-directional optical flow. As shown in FIG. 1, system 10 includes a source device 12 that provides encoded video data to be decoded at a later time by a destination device 14. In particular, source device 12 provides the video data to destination device 14 via a computer-readable medium 16. Source device 12 and destination device 14 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, so-called "smart" pads, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, or the like. In some cases, source device 12 and destination device 14 may be equipped for wireless communication. -
Destination device 14 may receive the encoded video data to be decoded via computer-readable medium 16. Computer-readable medium 16 may comprise any type of medium or device capable of moving the encoded video data from source device 12 to destination device 14. In one example, computer-readable medium 16 may comprise a communication medium to enable source device 12 to transmit encoded video data directly to destination device 14 in real-time. The encoded video data may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to destination device 14. The communication medium may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet. The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 12 to destination device 14. - In some examples, encoded data may be output from
output interface 22 to a storage device. Similarly, encoded data may be accessed from the storage device by an input interface. The storage device may include any of a variety of distributed or locally accessed data storage media such as a hard drive, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data. In a further example, the storage device may correspond to a file server or another intermediate storage device that may store the encoded video generated by source device 12. Destination device 14 may access stored video data from the storage device via streaming or download. The file server may be any type of server capable of storing encoded video data and transmitting that encoded video data to destination device 14. Example file servers include a web server (e.g., for a website), an FTP server, network attached storage (NAS) devices, or a local disk drive. Destination device 14 may access the encoded video data through any standard data connection, including an Internet connection. This may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both that is suitable for accessing encoded video data stored on a file server. The transmission of encoded video data from the storage device may be a streaming transmission, a download transmission, or a combination thereof. - The techniques of this disclosure are not necessarily limited to wireless applications or settings.
The techniques may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet streaming video transmissions, such as dynamic adaptive streaming over HTTP (DASH), digital video that is encoded onto a data storage medium, decoding of digital video stored on a data storage medium, or other applications. In some examples,
system 10 may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony. - In the example of
FIG. 1, source device 12 includes video source 18, video encoder 20, and output interface 22. Destination device 14 includes input interface 28, video decoder 30, and display device 32. In accordance with this disclosure, video encoder 20 of source device 12 may be configured to apply the techniques for bi-directional optical flow. In other examples, a source device and a destination device may include other components or arrangements. For example, source device 12 may receive video data from an external video source 18, such as an external camera. Likewise, destination device 14 may interface with an external display device, rather than including an integrated display device. - The illustrated
system 10 of FIG. 1 is merely one example. Techniques for bi-directional optical flow may be performed by any digital video encoding and/or decoding device. Although generally the techniques of this disclosure are performed by a video encoding device, the techniques may also be performed by a video encoder/decoder, typically referred to as a "CODEC." Moreover, the techniques of this disclosure may also be performed by a video preprocessor. Source device 12 and destination device 14 are merely examples of such coding devices in which source device 12 generates coded video data for transmission to destination device 14. In some examples, system 10 may support one-way or two-way video transmission between video devices, such as source device 12 and destination device 14. -
Video source 18 of source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, and/or a video feed interface to receive video from a video content provider. As a further alternative, video source 18 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In some cases, if video source 18 is a video camera, source device 12 and destination device 14 may form so-called camera phones or video phones. As mentioned above, however, the techniques described in this disclosure may be applicable to video coding in general, and may be applied to wireless and/or wired applications. In each case, the captured, pre-captured, or computer-generated video may be encoded by video encoder 20. The encoded video information may then be output by output interface 22 onto a computer-readable medium 16. - Computer-readable medium 16 may include transient media, such as a wireless broadcast or wired network transmission, or storage media (that is, non-transitory storage media), such as a hard disk, flash drive, compact disc, digital video disc, Blu-ray disc, or other computer-readable media. In some examples, a network server (not shown) may receive encoded video data from source device 12 and provide the encoded video data to destination device 14, e.g., via network transmission. Similarly, a computing device of a medium production facility, such as a disc stamping facility, may receive encoded video data from source device 12 and produce a disc containing the encoded video data. Therefore, computer-readable medium 16 may be understood to include one or more computer-readable media of various forms, in various examples. -
Input interface 28 of destination device 14 receives information from computer-readable medium 16. The information of computer-readable medium 16 may include syntax information defined by video encoder 20, which is also used by video decoder 30, that includes syntax elements that describe characteristics and/or processing of the video data. Display device 32 displays the decoded video data to a user, and may comprise any of a variety of display devices such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device. -
Video encoder 20 and video decoder 30 may operate according to one or more video coding standards, such as ITU-T H.264/AVC (Advanced Video Coding) or High Efficiency Video Coding (HEVC), also referred to as ITU-T H.265. H.264 is described in International Telecommunication Union, "Advanced video coding for generic audiovisual services," SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services - Coding of moving video, H.264, June 2011. H.265 is described in International Telecommunication Union, "High efficiency video coding," SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services - Coding of moving video, April 2015. The techniques of this disclosure may also be applied to any other previous or future video coding standards as an efficient coding tool. - Other video coding standards include ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual and the Scalable Video Coding (SVC) and Multiview Video Coding (MVC) extensions of H.264, as well as the extensions of HEVC, such as the range extension, multiview extension (MV-HEVC), and scalable extension (SHVC). In April 2015, the Video Coding Experts Group (VCEG) started a new research project which targets a next generation of video coding standard. The reference software is called HM-KTA.
- ITU-T VCEG (Q6/16) and ISO/IEC MPEG (
JTC 1/SC 29/WG 11) are now studying the potential need for standardization of future video coding technology with a compression capability that significantly exceeds that of the current HEVC standard (including current extensions of HEVC and near-term extensions for screen content coding and high-dynamic-range coding). The groups are working together on this exploration activity in a joint collaboration effort known as the Joint Video Exploration Team (JVET) to evaluate compression technology designs proposed by their experts in this area. The JVET first met during 19-21 October 2015. The latest version of the reference software, i.e., Joint Exploration Model 3 (JEM 3), can be downloaded from: https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/tags/HM-16.6-JEM-4.0/ An algorithm description of Joint Exploration Test Model 3 (JEM3) is provided in JVET-D1001. - Certain video coding techniques, such as those of H.264 and HEVC that are related to the techniques of this disclosure, are described in this disclosure. Certain techniques of this disclosure may be described with reference to H.264 and/or HEVC to aid in understanding, but the techniques described are not necessarily limited to H.264 or HEVC and can be used in conjunction with other coding standards and other coding tools.
- Although not shown in
FIG. 1, in some aspects, video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams. If applicable, MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).
- To generate an encoded representation of a picture,
video encoder 20 may generate a set of coding tree units (CTUs). Each of the CTUs may comprise a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples, and syntax structures used to code the samples of the coding tree blocks. In monochrome pictures or pictures having three separate color planes, a CTU may comprise a single coding tree block and syntax structures used to code the samples of the coding tree block. A coding tree block may be an NxN block of samples. A CTU may also be referred to as a "tree block" or a "largest coding unit" (LCU). The CTUs of HEVC may be broadly analogous to the macroblocks of other standards, such as H.264/AVC. However, a CTU is not necessarily limited to a particular size and may include one or more coding units (CUs). A slice may include an integer number of CTUs ordered consecutively in a raster scan order. - A CTB contains a quad-tree the nodes of which are coding units. The size of a CTB can be ranges from 16x16 to 64x64 in the HEVC main profile (although technically 8x8 CTB sizes can be supported). A coding unit (CU) could be the same size of a CTB although and as small as 8x8. Each coding unit is coded with one mode. When a CU is inter coded, the CU may be further partitioned into 2 or 4 prediction units (PUs) or become just one PU when further partition does not apply. When two PUs are present in one CU, the two PUs can be half size rectangles or two rectangle size with ¼ or ¾ size of the CU.
- To generate a coded CTU,
video encoder 20 may recursively perform quad-tree partitioning on the coding tree blocks of a CTU to divide the coding tree blocks into coding blocks, hence the name "coding tree units." A coding block may be an NxN block of samples. A CU may comprise a coding block of luma samples and two corresponding coding blocks of chroma samples of a picture that has a luma sample array, a Cb sample array, and a Cr sample array, and syntax structures used to code the samples of the coding blocks. In monochrome pictures or pictures having three separate color planes, a CU may comprise a single coding block and syntax structures used to code the samples of the coding block. -
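The recursive quad-tree partitioning described above can be sketched as follows. Here `should_split` stands in for the encoder's rate-distortion decision, which this illustration does not model, and the block sizes are hypothetical examples.

```python
# Illustrative sketch (not the normative HEVC process) of recursive quad-tree
# partitioning: a coding tree block is split into four equal quadrants until
# a decision function says stop or the minimum size is reached.
def quad_tree_partition(x, y, size, min_size, should_split):
    if size > min_size and should_split(x, y, size):
        half = size // 2
        blocks = []
        for dy in (0, half):
            for dx in (0, half):
                blocks += quad_tree_partition(x + dx, y + dy, half, min_size, should_split)
        return blocks
    return [(x, y, size)]  # a leaf coding block

# Split a 64x64 CTB once at the top level only, yielding four 32x32 blocks:
leaves = quad_tree_partition(0, 0, 64, 8, lambda x, y, s: s == 64)
```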
Video encoder 20 may partition a coding block of a CU into one or more prediction blocks. A prediction block is a rectangular (i.e., square or non-square) block of samples on which the same prediction is applied. A prediction unit (PU) of a CU may comprise a prediction block of luma samples, two corresponding prediction blocks of chroma samples, and syntax structures used to predict the prediction blocks. In monochrome pictures or pictures having three separate color planes, a PU may comprise a single prediction block and syntax structures used to predict the prediction block.Video encoder 20 may generate predictive luma, Cb, and Cr blocks for luma, Cb, and Cr prediction blocks of each PU of the CU. -
Video encoder 20 may use intra prediction or inter prediction to generate the predictive blocks for a PU. Ifvideo encoder 20 uses intra prediction to generate the predictive blocks of a PU,video encoder 20 may generate the predictive blocks of the PU based on decoded samples of the picture associated with the PU. Ifvideo encoder 20 uses inter prediction to generate the predictive blocks of a PU,video encoder 20 may generate the predictive blocks of the PU based on decoded samples of one or more pictures other than the picture associated with the PU. When the CU is inter coded, one set of motion information may be present for each PU. In addition, each PU may be coded with a unique inter-prediction mode to derive the set of motion information. - After
video encoder 20 generates predictive luma, Cb, and Cr blocks for one or more PUs of a CU, video encoder 20 may generate a luma residual block for the CU. Each sample in the CU's luma residual block indicates a difference between a luma sample in one of the CU's predictive luma blocks and a corresponding sample in the CU's original luma coding block. In addition, video encoder 20 may generate a Cb residual block for the CU. Each sample in the CU's Cb residual block may indicate a difference between a Cb sample in one of the CU's predictive Cb blocks and a corresponding sample in the CU's original Cb coding block. Video encoder 20 may also generate a Cr residual block for the CU. Each sample in the CU's Cr residual block may indicate a difference between a Cr sample in one of the CU's predictive Cr blocks and a corresponding sample in the CU's original Cr coding block. - Furthermore,
video encoder 20 may use quad-tree partitioning to decompose the luma, Cb, and Cr residual blocks of a CU into one or more luma, Cb, and Cr transform blocks. A transform block is a rectangular (e.g., square or non-square) block of samples on which the same transform is applied. A transform unit (TU) of a CU may comprise a transform block of luma samples, two corresponding transform blocks of chroma samples, and syntax structures used to transform the transform block samples. Thus, each TU of a CU may be associated with a luma transform block, a Cb transform block, and a Cr transform block. The luma transform block associated with the TU may be a sub-block of the CU's luma residual block. The Cb transform block may be a sub-block of the CU's Cb residual block. The Cr transform block may be a sub-block of the CU's Cr residual block. In monochrome pictures or pictures having three separate color planes, a TU may comprise a single transform block and syntax structures used to transform the samples of the transform block. -
Video encoder 20 may apply one or more transforms to a luma transform block of a TU to generate a luma coefficient block for the TU. A coefficient block may be a two-dimensional array of transform coefficients. A transform coefficient may be a scalar quantity.Video encoder 20 may apply one or more transforms to a Cb transform block of a TU to generate a Cb coefficient block for the TU.Video encoder 20 may apply one or more transforms to a Cr transform block of a TU to generate a Cr coefficient block for the TU. - After generating a coefficient block (e.g., a luma coefficient block, a Cb coefficient block or a Cr coefficient block),
video encoder 20 may quantize the coefficient block. Quantization generally refers to a process in which transform coefficients are quantized to possibly reduce the amount of data used to represent the transform coefficients, providing further compression. Aftervideo encoder 20 quantizes a coefficient block,video encoder 20 may entropy encode syntax elements indicating the quantized transform coefficients. For example,video encoder 20 may perform Context-Adaptive Binary Arithmetic Coding (CABAC) on the syntax elements indicating the quantized transform coefficients. -
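Numerically, quantization divides each transform coefficient by a step size and rounds, discarding precision to save bits. The QP-to-step mapping sketched below (the step roughly doubling every 6 QP, as is well known from HEVC) is for illustration only; real codecs use integer scaling tables rather than floating point.

```python
# Hedged numeric illustration of quantization: each coefficient is divided by
# a quantization step and rounded. The step-size formula follows the widely
# documented HEVC convention (step doubles every 6 QP) for illustration.
def quantize(coeffs, qp):
    step = 2 ** ((qp - 4) / 6.0)
    return [round(c / step) for c in coeffs]

# At QP 22 the step is 8, so small coefficients are zeroed out entirely:
q = quantize([64.0, -18.0, 3.0, 0.5], qp=22)
```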
Video encoder 20 may output a bitstream that includes a sequence of bits that forms a representation of coded pictures and associated data. The bitstream may comprise a sequence of NAL units. A NAL unit is a syntax structure containing an indication of the type of data in the NAL unit and bytes containing that data in the form of a RBSP interspersed as necessary with emulation prevention bits. Each of the NAL units includes a NAL unit header and encapsulates a RBSP. The NAL unit header may include a syntax element that indicates a NAL unit type code. The NAL unit type code specified by the NAL unit header of a NAL unit indicates the type of the NAL unit. A RBSP may be a syntax structure containing an integer number of bytes that is encapsulated within a NAL unit. In some instances, an RBSP includes zero bits. - Different types of NAL units may encapsulate different types of RBSPs. For example, a first type of NAL unit may encapsulate an RBSP for a PPS, a second type of NAL unit may encapsulate an RBSP for a coded slice, a third type of NAL unit may encapsulate an RBSP for SEI messages, and so on. NAL units that encapsulate RBSPs for video coding data (as opposed to RBSPs for parameter sets and SEI messages) may be referred to as VCL NAL units.
-
Video decoder 30 may receive a bitstream generated by video encoder 20. In addition, video decoder 30 may parse the bitstream to obtain syntax elements from the bitstream. Video decoder 30 may reconstruct the pictures of the video data based at least in part on the syntax elements obtained from the bitstream. The process to reconstruct the video data may be generally reciprocal to the process performed by video encoder 20. In addition, video decoder 30 may inverse quantize coefficient blocks associated with TUs of a current CU. Video decoder 30 may perform inverse transforms on the coefficient blocks to reconstruct transform blocks associated with the TUs of the current CU. Video decoder 30 may reconstruct the coding blocks of the current CU by adding the samples of the predictive blocks for PUs of the current CU to corresponding samples of the transform blocks of the TUs of the current CU. By reconstructing the coding blocks for each CU of a picture, video decoder 30 may reconstruct the picture. - In accordance with the techniques of this disclosure,
video encoder 20 and/or video decoder 30 may further perform BIO techniques during motion compensation as discussed in greater detail below. -
Video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable encoder or decoder circuitry, as applicable, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic circuitry, software, hardware, firmware or any combinations thereof. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined video encoder/decoder (CODEC). A device including video encoder 20 and/or video decoder 30 may comprise an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone. -
FIG. 2 is a conceptual diagram illustrating an example of unilateral motion estimation (ME) as a block-matching algorithm (BMA) performed for motion compensated frame-rate up-conversion (MC-FRUC). In general, a video coder (such as video encoder 20 or video decoder 30) performs unilateral ME to obtain motion vectors (MVs), such as MV 112, by searching for the best matching block (e.g., reference block 108) from reference frame 102 for current block 106 of current frame 100. Then, the video coder interpolates an interpolated block 110 along the motion trajectory of motion vector 112 in interpolated frame 104. That is, in the example of FIG. 2, motion vector 112 passes through the midpoints of current block 106, reference block 108, and interpolated block 110. - As shown in
FIG. 2, three blocks in three frames are involved following the motion trajectory. Although current block 106 in current frame 100 belongs to a coded block, the best matching block in reference frame 102 (that is, reference block 108) need not fully belong to a coded block (that is, the best matching block might not fall on a coded block boundary, but instead, may overlap such a boundary). Likewise, interpolated block 110 in interpolated frame 104 need not fully belong to a coded block. Consequently, overlapped regions of the blocks and un-filled (hole) regions may occur in interpolated frame 104. - To handle overlaps, simple FRUC algorithms may involve averaging and overwriting the overlapped pixels. Moreover, holes may be covered by the pixel values from a reference or a current frame. However, these algorithms may result in blocking artifacts and blurring. Hence, motion field segmentation, successive extrapolation using the discrete Hartley transform, and image inpainting may be used to handle holes and overlaps without increasing blocking artifacts and blurring.
-
FIG. 3 is a conceptual diagram illustrating an example of bilateral ME as a BMA performed for MC-FRUC. Bilateral ME is another solution (in MC-FRUC) that can be used to avoid the problems caused by overlaps and holes. A video coder (such as video encoder 20 and/or video decoder 30) performing bilateral ME obtains MVs for block 130 of interpolated frame 124 (which is intermediate to current frame 120 and reference frame 122) using temporal symmetry between current block 126 of current frame 120 and reference block 128 of reference frame 122. As a result, the video coder does not generate overlaps and holes in interpolated frame 124. Assuming that current block 126 is a block that the video coder processes in a certain order, e.g., as in the case of video coding, a sequence of such blocks would cover the whole intermediate picture without overlap. For example, in the case of video coding, blocks can be processed in the decoding order. Therefore, such a method may be more suitable if FRUC ideas can be considered in a video coding framework. - S.-F. Tu, O. C. Au, Y. Wu, E. Luo and C.-H. Yeun, "A Novel Framework for Frame Rate Up Conversion by Predictive Variable Block-Size Motion Estimated Optical Flow," International Congress on Image Signal Processing (CISP), 2009, described a hybrid block-level motion estimation and pixel-level optical flow method for frame rate up-conversion. Tu stated that the hybrid scheme was better than either individual method.
- In the HEVC standard, there are two inter prediction modes, named merge mode (with skip mode considered as a special case of merge mode) and advanced motion vector prediction (AMVP) mode. In either AMVP or merge mode, a video coder maintains an MV candidate list of multiple motion vector predictors. A video coder determines the motion vector(s) for a particular PU, as well as reference indices in the merge mode, by selecting a candidate from the MV candidate list.
- In HEVC, the MV candidate list contains up to 5 candidates for the merge mode and only two candidates for the AMVP mode. Other coding standards may include more or fewer candidates. A merge candidate may contain a set of motion information, e.g., motion vectors corresponding to both reference picture lists (
list 0 and list 1) and the reference indices. A video decoder receives a merge candidate identified by a merge index, and the video decoder predicts a current PU using the identified reference picture(s) and motion vector(s). However, for AMVP mode, for each potential prediction direction from either list 0 or list 1, a reference index needs to be explicitly signaled, together with an MV predictor (MVP) index to the MV candidate list, since the AMVP candidate contains only a motion vector. In AMVP mode, the predicted motion vectors can be further refined. - A merge candidate corresponds to a full set of motion information while an AMVP candidate contains just one motion vector for a specific prediction direction and reference index. The candidates for both modes are derived similarly from the same spatial and temporal neighboring blocks.
-
FIG. 4A shows spatial neighboring MV candidates for merge mode, and FIG. 4B shows spatial neighboring MV candidates for AMVP modes. Spatial MV candidates are derived from the neighboring blocks shown in FIGS. 4A and 4B for a specific PU (PU0), although the methods generating the candidates from the blocks differ for merge and AMVP modes. - In merge mode, up to four spatial MV candidates can be derived with the orders shown in
FIG. 4A. The ordering is as follows: left (0), above (1), above right (2), below left (3), and above left (4), as shown in FIG. 4A. If all of spatial MV candidates 0-3 are available and unique, then the video coder may not include motion information for the above left block in the candidate list. If, however, one or more of spatial MV candidates 0-3 are not available or not unique, then the video coder may include motion information for the above left block in the candidate list. - In AMVP mode, the neighboring blocks are divided into two groups: a left group consisting of blocks 0 and 1, and an above group consisting of blocks 2, 3, and 4, as shown in FIG. 4B. For each group, the potential candidate in a neighboring block referring to the same reference picture as that indicated by the signaled reference index has the highest priority to be chosen to form a final candidate of the group. It is possible that no neighboring block contains a motion vector pointing to the same reference picture. Therefore, if such a candidate cannot be found, the first available candidate is scaled to form the final candidate, so that the temporal distance differences can be compensated.
FIG. 5A shows an example of a TMVP candidate, and FIG. 5B shows an example of MV scaling. A temporal motion vector predictor (TMVP) candidate, if enabled and available, is added to the MV candidate list after the spatial motion vector candidates. The process of motion vector derivation for the TMVP candidate is the same for both merge and AMVP modes; however, the target reference index for the TMVP candidate in merge mode is always set to 0. - The primary block location for TMVP candidate derivation is the bottom right block outside of the collocated PU as shown in
FIG. 5A as a block "T", to compensate for the bias toward the above and left blocks used to generate the spatial neighboring candidates. However, if that block is located outside of the current CTB row, or its motion information is not available, the block is substituted with a center block of the PU. - The motion vector for the TMVP candidate is derived from the co-located PU of the co-located picture, which is indicated at the slice level. The motion vector for the co-located PU is called the collocated MV. Similar to the temporal direct mode in AVC, to derive the TMVP candidate motion vector, the co-located MV needs to be scaled to compensate for the temporal distance differences, as shown in FIG. 5B. - HEVC also utilizes motion vector scaling. It is assumed that the value of a motion vector is proportional to the distance between pictures in presentation time. A motion vector associates two pictures: the reference picture and the picture containing the motion vector (namely, the containing picture). When a motion vector is utilized to predict another motion vector, the distance between the containing picture and the reference picture is calculated based on POC values.
- For a motion vector to be predicted, both its containing picture and its reference picture may differ from those of the candidate motion vector. Therefore, a new distance (based on POC) is calculated, and the motion vector is scaled based on these two POC distances. For a spatial neighboring candidate, the containing pictures for the two motion vectors are the same, while the reference pictures are different. In HEVC, motion vector scaling applies to both TMVP and AMVP for spatial and temporal neighboring candidates.
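The POC-distance-based scaling just described can be sketched as follows. This is a hedged illustration using the fixed-point constants of the HEVC scaling process, not a normative implementation, and it assumes td > 0 for simplicity (C-style truncating division would be needed for negative distances):

```python
# Hedged sketch of HEVC-style motion vector scaling based on POC distances.
# tb: POC distance between the current picture and the target reference picture;
# td: POC distance between the candidate's containing picture and its reference.
def scale_mv(mv, tb, td):
    tx = (16384 + (abs(td) >> 1)) // td                # ~1/td in Q14 fixed point
    dist_scale = max(-4096, min(4095, (tb * tx + 32) >> 6))
    def scale(v):
        p = dist_scale * v
        # round the magnitude, restore the sign, and clip to 16-bit MV range
        return max(-32768, min(32767, ((abs(p) + 127) >> 8) * (1 if p >= 0 else -1)))
    return scale(mv[0]), scale(mv[1])
```

For example, halving the temporal distance (tb = 1, td = 2) roughly halves the vector, while tb = td leaves it unchanged.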
- HEVC also utilizes artificial motion vector candidate generation. If a motion vector candidate list is not complete, artificial motion vector candidates are generated and inserted at the end of the list until all available entries in the motion vector candidate list have a candidate. In merge mode, there are two types of artificial MV candidates: combined candidates, derived only for B-slices, and zero candidates, used only for AMVP if the first type does not provide enough artificial candidates. For each pair of candidates that are already in the candidate list and have the necessary motion information, bi-directional combined motion vector candidates are derived by a combination of the motion vector of the first candidate referring to a picture in the
list 0 and the motion vector of a second candidate referring to a picture in list 1. - HEVC also utilizes a pruning process for candidate insertion. Candidates from different blocks may happen to be the same, which decreases the efficiency of a merge/AMVP candidate list. A pruning process may be applied to solve this problem. A pruning process compares one candidate against the others in the current candidate list to avoid inserting an identical candidate. To reduce complexity, only a limited number of pruning comparisons may be applied, instead of comparing each potential candidate with all existing ones. As one example, a video coder may apply a pruning process to spatial and temporal neighboring candidates but not to artificially generated candidates.
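The pruning idea can be sketched as follows. This is a hypothetical full-comparison variant; as noted above, HEVC limits the number of comparisons to reduce complexity:

```python
# Hedged sketch of merge/AMVP candidate-list pruning: a candidate is appended
# only if the list is not full and no identical candidate is already present.
def prune_insert(cand_list, cand, max_len):
    if len(cand_list) < max_len and cand not in cand_list:
        cand_list.append(cand)
    return cand_list
```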
- Aspects of bi-directional optical flow in JEM will now be described.
FIG. 6 shows an example of an optical flow trajectory. BIO utilizes pixel-wise motion refinement which is performed on top of block-wise motion compensation in the case of bi-prediction. As BIO compensates for the fine motion inside the block, enabling BIO effectively results in enlarging the block size for motion compensation. Sample-level motion refinement does not require exhaustive searching or signaling, but instead utilizes an explicit equation which gives a fine motion vector for each sample. -
-
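The display equations elided above presumably follow the standard JEM BIO formulation (the per-reference optical flow equation and the resulting bi-directional predictor); a hedged reconstruction under that assumption:

```latex
% Optical flow equation for each reference signal I^{(k)}, k = 0, 1:
\frac{\partial I^{(k)}}{\partial t}
  + v_x \frac{\partial I^{(k)}}{\partial x}
  + v_y \frac{\partial I^{(k)}}{\partial y} = 0
% Combining this with interpolation along the motion trajectory yields the BIO prediction:
\mathrm{pred}_{BIO} = \tfrac{1}{2}\Bigl( I^{(0)} + I^{(1)}
  + \tfrac{v_x}{2}\bigl(\tau_1 \tfrac{\partial I^{(1)}}{\partial x} - \tau_0 \tfrac{\partial I^{(0)}}{\partial x}\bigr)
  + \tfrac{v_y}{2}\bigl(\tau_1 \tfrac{\partial I^{(1)}}{\partial y} - \tau_0 \tfrac{\partial I^{(0)}}{\partial y}\bigr)\Bigr)
```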
- Here τ0 and τ1 denote the distances to the reference frames, as shown in FIG. 6. Distances τ0 and τ1 are calculated based on the POCs of Ref0 and Ref1: τ0 = POC(current) - POC(Ref0), τ1 = POC(Ref1) - POC(current). If both predictions come from the same time direction (both from the past or both from the future), then the signs are different: τ0·τ1 < 0. In this case, BIO is applied only if the predictions do not come from the same time moment (τ0 ≠ τ1), both referenced regions have non-zero motion (MVx0, MVy0, MVx1, MVy1 ≠ 0), and the block motion vectors are proportional to the temporal distances (MVx0/MVx1 = MVy0/MVy1 = -τ0/τ1). - The motion vector field (vx, vy), also referred to as the amount of BIO motion, is determined by minimizing the difference Δ between the values at points A and B (the intersections of the motion trajectory with the reference frame planes in
FIG. 6). The model uses only the first linear term of a local Taylor expansion for Δ: -
-
-
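The elided display equations here presumably define the first-order difference Δ and its minimization over the window Ω, as in the standard JEM description; a hedged reconstruction:

```latex
\Delta = \bigl(I^{(0)} - I^{(1)}\bigr)
  + v_x\bigl(\tau_1 \tfrac{\partial I^{(1)}}{\partial x} + \tau_0 \tfrac{\partial I^{(0)}}{\partial x}\bigr)
  + v_y\bigl(\tau_1 \tfrac{\partial I^{(1)}}{\partial y} + \tau_0 \tfrac{\partial I^{(0)}}{\partial y}\bigr)
% The refinement (v_x, v_y) minimizes the sum of squared differences over the window \Omega:
(v_x, v_y) = \arg\min_{v_x, v_y} \sum_{[i',j'] \in \Omega} \Delta^2[i', j']
```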
- In some cases, MV refinement of BIO might be unreliable due to noise or irregular motion. Therefore, in BIO, the magnitude of the MV refinement is clipped to a certain threshold, thBIO. The threshold value is determined based on whether all the reference pictures of the current picture are from one direction. If all the reference pictures of the current picture are from one direction, the value of the threshold may be set to 12 × 2^(14-d); otherwise, the threshold may be set to 12 × 2^(13-d).
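The clipping rule above can be sketched as follows, assuming d denotes the internal bit depth:

```python
# Hedged sketch of the thBIO clipping described above. The threshold depends on
# whether all reference pictures of the current picture come from one direction.
def clip_bio_refinement(vx, vy, d, all_refs_one_direction):
    th_bio = 12 * 2 ** (14 - d) if all_refs_one_direction else 12 * 2 ** (13 - d)
    clip = lambda v: max(-th_bio, min(th_bio, v))
    return clip(vx), clip(vy)
```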
- Gradients for BIO are calculated at the same time as the motion compensation interpolation, using operations consistent with the HEVC motion compensation process (2D separable FIR). The input for this 2D separable FIR is the same reference frame sample as for the motion compensation process, along with the fractional position (fracX, fracY) given by the fractional part of the block motion vector. For the horizontal gradient ∂I/∂x, the signal is first interpolated vertically using BIOfilterS corresponding to the fractional position fracY with de-scaling shift d-8; then the gradient filter BIOfilterG is applied in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18-d. For the vertical gradient ∂I/∂y, the gradient filter is first applied vertically using BIOfilterG corresponding to the fractional position fracY with de-scaling shift d-8; then signal displacement is performed using BIOfilterS in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18-d. The lengths of the filters for gradient calculation (BIOfilterG) and signal displacement (BIOfilterS) are shorter (6-tap) in order to maintain reasonable complexity. Table 1 shows the filters used for gradient calculation for different fractional positions of the block motion vector in BIO. Table 2 shows the interpolation filters used for prediction signal generation in BIO. -
FIG. 7 shows an example of the gradient calculation for an 8x4 block. For an 8x4 block, a video coder fetches the motion compensated predictors and calculates the HOR/VER gradients of all the pixels within the current block as well as the outer two lines of pixels, because solving vx and vy for each pixel requires the HOR/VER gradient values and motion compensated predictors of the pixels within the window Ω centered on each pixel, as shown in equation (4). In JEM, the size of this window is set to 5x5. Therefore, a video coder needs to fetch the motion compensated predictors and calculate the gradients for the outer two lines of pixels.

Table 1: Filters for gradient calculation in BIO
Fractional pel position | Interpolation filter for gradient (BIOfilterG)
0    | { 8, -39, -3, 46, -17, 5}
1/16 | { 8, -32, -13, 50, -18, 5}
1/8  | { 7, -27, -20, 54, -19, 5}
3/16 | { 6, -21, -29, 57, -18, 5}
1/4  | { 4, -17, -36, 60, -15, 4}
5/16 | { 3, -9, -44, 61, -15, 4}
3/8  | { 1, -4, -48, 61, -13, 3}
7/16 | { 0, 1, -54, 60, -9, 2}
1/2  | { 1, 4, -57, 57, -4, 1}

Table 2: Interpolation filters for prediction signal generation in BIO
Fractional pel position | Interpolation filter for prediction signal (BIOfilterS)
0    | { 0, 0, 64, 0, 0, 0}
1/16 | { 1, -3, 64, 4, -2, 0}
1/8  | { 1, -6, 62, 9, -3, 1}
3/16 | { 2, -8, 60, 14, -5, 1}
1/4  | { 2, -9, 57, 19, -7, 2}
5/16 | { 3, -10, 53, 24, -8, 2}
3/8  | { 3, -11, 50, 29, -9, 2}
7/16 | { 3, -11, 44, 35, -10, 3}
1/2  | { 1, -7, 38, 38, -7, 1}

- In JEM, BIO is applied to all bi-directionally predicted blocks when the two predictions are from different reference pictures. When LIC is enabled for a CU, BIO is disabled.
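As an illustration of Table 1, the 6-tap BIOfilterG at fractional position 0 can be applied as a simple 1-D convolution. The de-scaling shift used here is illustrative, not the normative value:

```python
# Hedged sketch: the 6-tap BIOfilterG from Table 1 at fractional pel position 0,
# applied horizontally around sample x (x-2 .. x+3 must be valid indices).
BIO_FILTER_G_0 = [8, -39, -3, 46, -17, 5]

def horizontal_gradient(row, x, shift=4):
    acc = sum(t * row[x - 2 + k] for k, t in enumerate(BIO_FILTER_G_0))
    return acc >> shift  # illustrative de-scaling shift
```

On a constant signal the taps cancel (they sum to zero), so the gradient is 0; on a ramp the filter responds with a constant value proportional to the slope.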
-
FIG. 8 shows an example of the modified BIO for an 8x4 block proposed in JVET-D0042. At the 4th JVET meeting, a proposal, JVET-D0042 (A. Alshina, E. Alshina, "AHG6: On BIO memory bandwidth", JVET-D0042, October 2016), was submitted to modify the BIO operations and reduce the memory access bandwidth. In this proposal, no motion compensated predictors or gradient values are needed for the pixels outside the current block. Moreover, the solving of vx and vy for each pixel is modified to use the motion compensated predictors and the gradient values of all the pixels within the current block, as shown in FIG. 8. In other words, the square window Ω in equation (4) is modified to a window which is equal to the current block. In addition, a weighting factor w(i',j') is considered for deriving vx and vy. The w(i',j') is a function of the position of the center pixel (i,j) and the positions of the pixels (i',j') within the window. - Aspects of Overlapped Block Motion Compensation (OBMC) in JEM will now be described. OBMC has been used in early generations of video standards, e.g., in H.263. In JEM, OBMC is performed for all Motion Compensated (MC) block boundaries except the right and bottom boundaries of a CU. Moreover, OBMC may be applied to both luma and chroma components. In JEM, an MC block corresponds to a coding block. When a CU is coded with a sub-CU mode (which includes the sub-CU merge, Affine, and FRUC modes, as described in J. Chen, E. Alshina, G. J. Sullivan, J.-R. Ohm, J. Boyce, "Algorithm Description of Joint Exploration Test Model 4," JVET-D1001, October 2016), each sub-block of the CU is an MC block. To process CU boundaries in a uniform fashion, OBMC is performed at the sub-block level for all MC block boundaries, where the sub-block size is set equal to 4x4, as illustrated in
FIGS. 9A and 9B. - When OBMC applies to the current sub-block, besides the current motion vectors, the motion vectors of the four connected neighbouring sub-blocks, if available and not identical to the current motion vector, are also used to derive a prediction block for the current sub-block. These multiple prediction blocks based on multiple motion vectors are combined to generate the final prediction signal of the current sub-block.
- As shown in
FIG. 10, a prediction block based on the motion vectors of a neighbouring sub-block is denoted as PN, with N indicating an index for the above, below, left, or right neighbouring sub-block, and a prediction block based on the motion vectors of the current sub-block is denoted as PC. When PN is based on the motion information of a neighbouring sub-block that contains the same motion information as the current sub-block, OBMC is not performed from PN. Otherwise, every pixel of PN is added to the same pixel in PC, i.e., four rows/columns of PN are added to PC. The weighting factors {1/4, 1/8, 1/16, 1/32} are used for PN and the weighting factors {3/4, 7/8, 15/16, 31/32} are used for PC. The exception is small MC blocks (i.e., when the height or width of the coding block is equal to 4, or a CU is coded with a sub-CU mode), for which only two rows/columns of PN are added to PC. In this case, the weighting factors {1/4, 1/8} are used for PN and the weighting factors {3/4, 7/8} are used for PC. For PN generated based on the motion vectors of a vertically (horizontally) neighbouring sub-block, pixels in the same row (column) of PN are added to PC with the same weighting factor. BIO may also be applied for the derivation of the prediction block PN. - In JEM, for a CU with size less than or equal to 256 luma samples, a CU-level flag is signalled to indicate whether OBMC is applied or not for the current CU. For CUs with size larger than 256 luma samples or not coded with AMVP mode, OBMC is applied by default. At the encoder, when OBMC is applied for a CU, its impact is taken into account during the motion estimation stage. The prediction signal derived using the motion information of the top neighboring block and the left neighboring block is used to compensate the top and left boundaries of the original signal of the current CU, and then the normal motion estimation process is applied.
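The row-wise OBMC blend described above can be sketched as follows for a top-neighbour prediction PN; integer floor division stands in here for the normative rounding:

```python
# Hedged sketch of the OBMC blend: the four boundary rows of the above-neighbour
# prediction P_N are mixed into P_C with weights {1/4, 1/8, 1/16, 1/32},
# leaving {3/4, 7/8, 15/16, 31/32} for P_C.
OBMC_WEIGHTS = [(1, 4), (1, 8), (1, 16), (1, 32)]  # (P_N numerator, denominator)

def obmc_blend_top(pc, pn):
    """Blend the top rows of pn into pc (both lists of equal-length rows)."""
    out = [row[:] for row in pc]
    for r, (num, den) in enumerate(OBMC_WEIGHTS):
        out[r] = [((den - num) * c + num * n) // den for c, n in zip(pc[r], pn[r])]
    return out
```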
- Although BIO potentially provides more than 1% Bjontegaard-Delta bitrate (BD-rate) reduction in JEM4.0, BIO also potentially introduces significant computational complexity and may necessitate a memory bandwidth increase for both encoder and decoder. This disclosure describes techniques that may potentially reduce the computational complexity and required memory bandwidth associated with BIO. As one example, according to the techniques of this disclosure, a video coder may determine an amount of a BIO motion, e.g., the vx and vy values described above, on a sub-block level and use that determined amount of BIO motion to modify sample values of a predictive block on a sample-by-sample basis. Accordingly, the techniques of this disclosure may improve video encoders and video decoders by allowing them to achieve the coding gains of BIO without incurring the substantial processing and memory burdens required for existing implementations of BIO.
- Based on equation (4), this disclosure introduces techniques for reducing the complexity of BIO by re-defining the window Ω. Such techniques may, for example, be performed by video encoder 20 (e.g., motion estimation unit 42 and/or motion compensation unit 44) or by video decoder 30 (e.g., motion compensation unit 72). The window Ω is defined as any block within the current block covering the current pixel, with size MxN, where M and N are any positive integers. In one example, the current block is divided into non-overlapping sub-blocks, and the window Ω is defined as the sub-block which covers the current pixel. In another example, as shown in FIG. 11, the sub-block is defined as the smallest block for motion vector storage which covers the current pixel. In HEVC and JEM, the smallest block size is 4x4. In another example, the size of the window Ω is adaptive according to coding information such as the size of the current block or the coding mode. When the current block size is larger, a larger window Ω can be used. When the current block is coded with a sub-block mode such as sub-CU merge, Affine, or FRUC mode, the window Ω is set as the sub-block. -
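The non-overlapping sub-block window can be sketched as follows, assuming the 4x4 smallest motion-vector storage block:

```python
# Hedged sketch: the window Omega for a pixel, taken as the non-overlapping
# 4x4 sub-block (smallest MV storage block in HEVC/JEM) that covers it.
def window_for_pixel(x, y, win_w=4, win_h=4):
    """Return (x0, y0, x1, y1) of the non-overlapping window covering (x, y)."""
    x0 = (x // win_w) * win_w
    y0 = (y // win_h) * win_h
    return (x0, y0, x0 + win_w - 1, y0 + win_h - 1)
```

All pixels falling in the same sub-block share one window, so vx and vy need only be solved once per sub-block rather than per pixel.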
FIG. 11 shows an example of the proposed BIO for an 8x4 block, in accordance with the techniques of this disclosure, with a window Ω for pixels A, B, and C. According to the techniques of this disclosure, equal weightings may be used for solving vx and vy, as shown in equation (7). In another example, unequal weightings can be used for solving vx and vy, as shown in equation (10). The unequal weightings can be a function of the distances between the center pixel and the associated pixels. In yet another example, the weighting can be calculated using a bilateral approach, as described, for example, at https://en.wikipedia.org/wiki/Bilateral_filter. Moreover, look-up tables can be used to store all the weighting factors for each pixel of the window Ω in equation (7). - In another example, when deriving PN for OBMC, BIO is performed only for a subset of the pixels when deriving the predictors using the neighbor motions. In one example, BIO is disabled entirely for all the pixels in deriving PN. In yet another example, BIO is applied only to the pixels in the outer two lines, as shown in
FIGS. 12A-12D. - Moreover, for each block, the number of lines to which BIO is applied can be explicitly signaled at the slice level or in the SPS/PPS. Whether BIO is disabled or partially disabled can also be explicitly signaled at the slice level or in the SPS/PPS.
- On the other hand, the number of lines to which BIO is applied can be derived implicitly based on certain coding conditions, such as the CU mode (sub-block mode or non-sub-block mode), the block size, or the combination with other tools, such as whether the Illumination Compensation (IC) flag is signaled. Whether BIO is disabled or partially disabled can also be derived implicitly based on certain conditions, such as the CU mode (sub-block mode or non-sub-block mode), the block size, or the combination with other tools, such as whether the IC flag is signaled.
-
FIGS. 12A-12D show examples of the proposed simplified BIO on OBMC according to the techniques of this disclosure, where x represents a predictor derived without BIO and o represents a predictor derived with BIO. The motion vector refinement from BIO can be block-based. Let the block size be M-by-N; a weighting function can be used to provide different scale factors to pixels at different locations during calculation of the terms in equation (7). When solving equations (5) and (6), interpolated pixels and their gradient values gathered from the entire block can be used to solve vx and vy jointly, instead of solving vx and vy individually for each pixel position.
FIG. 13. FIG. 13 shows an example of a weighting function for a 4x4 sub-block with a 5x5 window. - In another example, the weighting function can be sent in SPS, PPS, or slice header. To reduce the signaling costs, a set of pre-defined weighting functions can be stored and only the indexes of the weighting functions need to be signaled.
- In another example, the refined motion vector can be found using pixels lying at the central part of the sub-block. The gradient values of central pixels can be calculated using interpolation filter and a window of size M-by-N can be applied to the interpolated pixels to provide different weights to the central pixels, in order to calculate variables s1-s6 in equation (7). In one example, the gradient values of the central points can be calculated and the averaged value of the central points can be used (equal-weight window). In another example, a median filter can be used to select the representative pixels to calculate variables s1-s6 in equation (7).
- In JVET-D0042, when solving for BIO offset(s), the window size for each pixel may be modified to be the whole current block, which potentially adds computational complexity to the current design when a current block is larger or equal to 8x4. The worst case of the modifications is that a 128x128 window is used for the accumulation of gradients and predictors for each pixel within a 128x128 block.
- Moreover, when sub-blocks within one CU share the same MV or one inter-coded CU is divided into smaller sub-blocks for motion compensation (MC), JEM-4.0 provides the flexibility to either perform the MC and BIO for each sub-block in parallel or perform the MC and BIO for the larger block aggregated of the sub-blocks with the same MV in one-time effort. For either way, JEM-4.0 provides identical coding results. However, the modified BIO in JVET-D0042 utilizes a block size dependent gradient calculation and weighting factors such that performing MC and BIO for two neighboring same-motion blocks jointly or separately may lead to different results. To avoid different results, it has to be specified that decoder shall perform MC and BIO at either block level or a certain sub-block level. Such a constraint may be too strict and not desirable for practical codec implementation
- Based on the equation (4), the complexity of BIO may be further reduced by re-defining the window S2. Two types of the window Ω are defined; one is the non-overlapping window and the other one is sliding window. For the type of non-overlapping window, current block is divided into non-overlapping sub-blocks and the window Ω is defined as the sub-block which covers current pixel as shown in
FIG. 11 . For the type of sliding window, the window Ω is defined as a block centered at a current pixel as shown inFIG. 7 . - For both types of window S2, the size of the window Ω can be determined using different methods as illustrated below. Hereafter, it may be assumed that the window Ω is a rectangular block with size MxN, where M and N can be any non-negative integer such as (4x4, 8x8, 16x16, 8x4 and so on). The window Ω is not limited to a rectangular shape and can be any other shape such as a diamond shape. The described techniques can also be applied to shapes other than the rectangular shape if applicable.
- The size of the window may be fixed or variable and may be either predetermined or signaled in the bitstream. When the size is signaled, the size may be signalled in the sequence parameter set (SPS), picture parameter set (PPS), slice header, or at the CTU level. The window size can be jointly determined by the size of the motion compensated (MC) block by the equation below.
- In one example, the motion compensated (MC) block is purely dependent on the coding information such as the size of current block and coding modes. For example, the motion compensated (MC) block is set as the whole CU when the current CU is coded with non sub-block modes such as sub-CU merge, Affine and FRUC mode. The motion compensated (MC) block is set as sub-block when sub-block modes such as sub-CU merge, Affine and FRUC mode are used, regardless whether the sub-blocks have the same motion information.
- In another example, the motion compensated (MC) block is defined as the block of samples within a CU that have the same MVs. In this case, the motion compensated (MC) block is set as the whole CU when the current CU is coded with non sub-block modes such as sub-CU merge, Affine and FRUC mode. When a CU is coded with sub-block modes such as sub-CU merge, Affine and FRUC mode, the sub-blocks with same motion information are merged as a motion compensated (MC) block with certain scanning order of sub-block.
- Adaptive size: the size of the window Ω is adaptive according to the coding information such as the size of current block, coding modes. In one example, the window Ω is set as the whole current block or quarter of the current block when current block is coded as non sub-block modes such as sub-CU merge, Affine and FRUC mode; and the window Ω is set as sub-block when the current block is coded as sub-block modes. The adaptive window size can be jointly determined by the size of the motion compensated (MC) block by the equation below.
- For the various techniques for determining the size of window S2, a high-level limitation of the size can be included for friendly hardware or software implementation. For example, the window size should be smaller or equal to the maximum Transform Unit (TU) size allowed in the video codec system. In another example, the window size should be larger or equal to the smallest MC block such as 4x4.
- To further simplify the BIO-related operations, this disclosure introduces techniques for performing the BIO as a post-processing after all motion compensated prediction is finished. To be specific, after conventional MC is done, OBMC can then be applied to generate better predictors for current block. Based on the final predictor, BIO is then applied using the motion information of current block to further refine the predictor. For example, for gradient calculation in BIO, the motion of the entire block may be used. In another example, for each sub-block, the averaged motion vector from OBMC can be used. In another example, for each sub-block, the median motion vector (for each dimension individually) can be used.
- Weighting functions can be designed differently when considering block-based derivation of motion vector refinement of BIO. Equal weights can be used for any of the above-mentioned methods. Alternatively, more weights can be placed toward the central part of the window. In one example, the weights can be calculated by the inverse distance (including but not limited to L1-norm or L2-norm) between the center of the window to the pixel.
-
FIG. 14 is a block diagram illustrating an example of video encoder 20 that may implement techniques for bi-directional optical flow. Video encoder 20 may perform intra- and inter-coding of video blocks within video slices. Intra-coding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame or picture. Inter-coding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames or pictures of a video sequence. Intra-mode (I mode) may refer to any of several spatial-based coding modes. Inter-modes, such as uni-directional prediction (P mode) or bi-prediction (B mode), may refer to any of several temporal-based coding modes. - As shown in
FIG. 14, video encoder 20 receives video data and stores the received video data in video data memory 38. Video data memory 38 may store video data to be encoded by the components of video encoder 20. The video data stored in video data memory 38 may be obtained, for example, from video source 18. Reference picture memory 64 may be a reference picture memory that stores reference video data for use in encoding video data by video encoder 20, e.g., in intra- or inter-coding modes. Video data memory 38 and reference picture memory 64 may be formed by any of a variety of memory devices, such as dynamic random-access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. Video data memory 38 and reference picture memory 64 may be provided by the same memory device or separate memory devices. In various examples, video data memory 38 may be on-chip with other components of video encoder 20, or off-chip relative to those components. -
Video encoder 20 receives a current video block within a video frame to be encoded. In the example of FIG. 14, video encoder 20 includes mode select unit 40, reference picture memory 64 (which may also be referred to as a decoded picture buffer (DPB)), summer 50, transform processing unit 52, quantization unit 54, and entropy encoding unit 56. Mode select unit 40, in turn, includes motion compensation unit 44, motion estimation unit 42, intra-prediction processing unit 46, and partition unit 48. For video block reconstruction, video encoder 20 also includes inverse quantization unit 58, inverse transform processing unit 60, and summer 62. A deblocking filter (not shown in FIG. 14) may also be included to filter block boundaries to remove blockiness artifacts from reconstructed video. If desired, the deblocking filter would typically filter the output of summer 62. Additional filters (in loop or post loop) may also be used in addition to the deblocking filter. Such filters are not shown for brevity, but if desired, may filter the output of summer 50 (as an in-loop filter). - During the encoding process,
video encoder 20 receives a video frame or slice to be coded. The frame or slice may be divided into multiple video blocks. Motion estimation unit 42 and motion compensation unit 44 perform inter-predictive encoding of the received video block relative to one or more blocks in one or more reference frames to provide temporal prediction. Intra-prediction processing unit 46 may alternatively intra-predict the received video block using pixels of one or more neighboring blocks in the same frame or slice as the block to be coded to provide spatial prediction. Video encoder 20 may perform multiple coding passes, e.g., to select an appropriate coding mode for each block of video data. - Moreover,
partition unit 48 may partition blocks of video data into sub-blocks, based on evaluation of previous partitioning schemes in previous coding passes. For example, partition unit 48 may initially partition a frame or slice into LCUs, and partition each of the LCUs into sub-CUs based on rate-distortion analysis (e.g., rate-distortion optimization). Mode select unit 40 may further produce a quadtree data structure indicative of partitioning of an LCU into sub-CUs. Leaf-node CUs of the quadtree may include one or more PUs and one or more TUs. - Mode
select unit 40 may select one of the prediction modes, intra or inter, e.g., based on error results, and provide the resulting predicted block to summer 50 to generate residual data and to summer 62 to reconstruct the encoded block for use as a reference frame. Mode select unit 40 also provides syntax elements, such as motion vectors, intra-mode indicators, partition information, and other such syntax information, to entropy encoding unit 56. -
Motion estimation unit 42 and motion compensation unit 44 may be highly integrated, but are illustrated separately for conceptual purposes. Motion estimation, performed by motion estimation unit 42, is the process of generating motion vectors, which estimate motion for video blocks. A motion vector, for example, may indicate the displacement of a PU of a video block within a current video frame or picture relative to a predictive block within a reference frame (or other coded unit), relative to the current block being coded within the current frame (or other coded unit). A predictive block is a block that is found to closely match the block to be coded, in terms of pixel difference, which may be determined by sum of absolute difference (SAD), sum of square difference (SSD), or other difference metrics. In some examples, video encoder 20 may calculate values for sub-integer pixel positions of reference pictures stored in reference picture memory 64. For example, video encoder 20 may interpolate values of one-quarter pixel positions, one-eighth pixel positions, or other fractional pixel positions of the reference picture. Therefore, motion estimation unit 42 may perform a motion search relative to the full pixel positions and fractional pixel positions and output a motion vector with fractional pixel precision. -
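The pixel-difference metrics named above can be written directly; the sketch below takes blocks as 2-D lists of sample values:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def ssd(block_a, block_b):
    """Sum of squared differences between two equally sized blocks."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))
```

During a motion search, the candidate predictive block minimizing the chosen metric would be selected.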
Motion estimation unit 42 calculates a motion vector for a PU of a video block in an inter-coded slice by comparing the position of the PU to the position of a predictive block of a reference picture. The reference picture may be selected from a first reference picture list (List 0) or a second reference picture list (List 1), each of which identifies one or more reference pictures stored in reference picture memory 64. Motion estimation unit 42 sends the calculated motion vector to entropy encoding unit 56 and motion compensation unit 44. - Motion compensation, performed by
motion compensation unit 44, may involve fetching or generating the predictive block based on the motion vector determined by motion estimation unit 42. Again, motion estimation unit 42 and motion compensation unit 44 may be functionally integrated, in some examples. Upon receiving the motion vector for the PU of the current video block, motion compensation unit 44 may locate the predictive block to which the motion vector points in one of the reference picture lists. Summer 50 forms a residual video block by subtracting pixel values of the predictive block from the pixel values of the current video block being coded, forming pixel difference values, as discussed below. In general, motion estimation unit 42 performs motion estimation relative to luma components, and motion compensation unit 44 uses motion vectors calculated based on the luma components for both chroma components and luma components. Mode select unit 40 may also generate syntax elements associated with the video blocks and the video slice for use by video decoder 30 in decoding the video blocks of the video slice. - Furthermore,
motion compensation unit 44 may be configured to perform any or all of the techniques of this disclosure (alone or in any combination). Although discussed with respect to motion compensation unit 44, it should be understood that mode select unit 40, motion estimation unit 42, partition unit 48, and/or entropy encoding unit 56 may also be configured to perform certain techniques of this disclosure, alone or in combination with motion compensation unit 44. In one example, motion compensation unit 44 may be configured to perform the BIO techniques discussed herein. -
Intra-prediction processing unit 46 may intra-predict a current block, as an alternative to the inter-prediction performed by motion estimation unit 42 and motion compensation unit 44, as described above. In particular, intra-prediction processing unit 46 may determine an intra-prediction mode to use to encode a current block. In some examples, intra-prediction processing unit 46 may encode a current block using various intra-prediction modes, e.g., during separate encoding passes, and intra-prediction processing unit 46 (or mode select unit 40, in some examples) may select an appropriate intra-prediction mode to use from the tested modes. - For example,
intra-prediction processing unit 46 may calculate rate-distortion values using a rate-distortion analysis for the various tested intra-prediction modes, and select the intra-prediction mode having the best rate-distortion characteristics among the tested modes. Rate-distortion analysis generally determines an amount of distortion (or error) between an encoded block and an original, unencoded block that was encoded to produce the encoded block, as well as a bitrate (that is, a number of bits) used to produce the encoded block. Intra-prediction processing unit 46 may calculate ratios from the distortions and rates for the various encoded blocks to determine which intra-prediction mode exhibits the best rate-distortion value for the block. - After selecting an intra-prediction mode for a block,
intra-prediction processing unit 46 may provide information indicative of the selected intra-prediction mode for the block to entropy encoding unit 56. Entropy encoding unit 56 may encode the information indicating the selected intra-prediction mode. Video encoder 20 may include, in the transmitted bitstream, configuration data, which may include a plurality of intra-prediction mode index tables and a plurality of modified intra-prediction mode index tables (also referred to as codeword mapping tables), definitions of encoding contexts for various blocks, and indications of a most probable intra-prediction mode, an intra-prediction mode index table, and a modified intra-prediction mode index table to use for each of the contexts. -
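Rate-distortion selection of this kind is commonly written as minimizing a Lagrangian cost J = D + λ·R; the sketch below uses that form as a stand-in for the ratio-based comparison described above (the λ value and candidate tuples are illustrative, not values from this disclosure):

```python
def select_intra_mode(candidates, lam):
    """Pick the mode with the lowest rate-distortion cost J = D + lam * R.

    candidates -- list of (mode_name, distortion, rate_in_bits) tuples
    lam        -- Lagrange multiplier trading rate against distortion
    """
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]
```

A large λ favors cheap-to-signal modes; a small λ favors low-distortion modes.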
Video encoder 20 forms a residual video block by subtracting the prediction data from mode select unit 40 from the original video block being coded. Summer 50 represents the component or components that perform this subtraction operation. Transform processing unit 52 applies a transform, such as a discrete cosine transform (DCT) or a conceptually similar transform, to the residual block, producing a video block comprising transform coefficient values. Wavelet transforms, integer transforms, sub-band transforms, discrete sine transforms (DSTs), or other types of transforms could be used instead of a DCT. In any case, transform processing unit 52 applies the transform to the residual block, producing a block of transform coefficients. The transform may convert the residual information from a pixel domain to a transform domain, such as a frequency domain. Transform processing unit 52 may send the resulting transform coefficients to quantization unit 54. Quantization unit 54 quantizes the transform coefficients to further reduce bit rate. The quantization process may reduce the bit depth associated with some or all of the coefficients. The degree of quantization may be modified by adjusting a quantization parameter. - Following quantization,
entropy encoding unit 56 entropy codes the quantized transform coefficients. For example, entropy encoding unit 56 may perform context adaptive variable length coding (CAVLC), context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy coding technique. In the case of context-based entropy coding, context may be based on neighboring blocks. Following the entropy coding by entropy encoding unit 56, the encoded bitstream may be transmitted to another device (e.g., video decoder 30) or archived for later transmission or retrieval. -
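The bit-depth reduction performed by quantization, mentioned above, can be illustrated with a uniform scalar quantizer; the step size and rounding here are simplifications for illustration, not the HEVC/JEM quantizer:

```python
def quantize(coeffs, qstep):
    """Map transform coefficients to integer levels (lossy).

    Rounds half away from zero; qstep is the quantization step size.
    """
    return [int(c / qstep + (0.5 if c >= 0 else -0.5)) for c in coeffs]

def dequantize(levels, qstep):
    """Reconstruct approximate coefficients from the integer levels."""
    return [level * qstep for level in levels]
```

A larger quantization parameter (larger step) yields smaller levels, fewer bits, and a coarser reconstruction.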
Inverse quantization unit 58 and inverse transform processing unit 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain. In particular, summer 62 adds the reconstructed residual block to the motion compensated prediction block earlier produced by motion compensation unit 44 or intra-prediction processing unit 46 to produce a reconstructed video block for storage in reference picture memory 64. The reconstructed video block may be used by motion estimation unit 42 and motion compensation unit 44 as a reference block to inter-code a block in a subsequent video frame. -
FIG. 15 is a block diagram illustrating an example of video decoder 30 that may implement techniques for bi-directional optical flow. In the example of FIG. 15, video decoder 30 includes an entropy decoding unit 70, motion compensation unit 72, intra-prediction processing unit 74, inverse quantization unit 76, inverse transform processing unit 78, reference picture memory 82, and summer 80. Video decoder 30 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder 20 (FIG. 14). Motion compensation unit 72 may generate prediction data based on motion vectors received from entropy decoding unit 70, while intra-prediction processing unit 74 may generate prediction data based on intra-prediction mode indicators received from entropy decoding unit 70. - During the decoding process,
video decoder 30 receives an encoded video bitstream that represents video blocks of an encoded video slice and associated syntax elements from video encoder 20. Video decoder 30 stores the received encoded video bitstream in video data memory 68. Video data memory 68 may store video data, such as an encoded video bitstream, to be decoded by the components of video decoder 30. The video data stored in video data memory 68 may be obtained, for example, via computer-readable medium 16, from storage media, or from a local video source, such as a camera, or by accessing physical data storage media. Video data memory 68 may form a coded picture buffer (CPB) that stores encoded video data from an encoded video bitstream. Reference picture memory 82 may be a reference picture memory that stores reference video data for use in decoding video data by video decoder 30, e.g., in intra- or inter-coding modes. Video data memory 68 and reference picture memory 82 may be formed by any of a variety of memory devices, such as DRAM, SDRAM, MRAM, RRAM, or other types of memory devices. Video data memory 68 and reference picture memory 82 may be provided by the same memory device or separate memory devices. In various examples, video data memory 68 may be on-chip with other components of video decoder 30, or off-chip relative to those components. - During the decoding process,
entropy decoding unit 70 of video decoder 30 entropy decodes the bitstream to generate quantized coefficients, motion vectors or intra-prediction mode indicators, and other syntax elements. Entropy decoding unit 70 forwards the motion vectors and other syntax elements to motion compensation unit 72. Video decoder 30 may receive the syntax elements at the video slice level and/or the video block level. - When the video slice is coded as an intra-coded (I) slice,
intra-prediction processing unit 74 may generate prediction data for a video block of the current video slice based on a signaled intra-prediction mode and data from previously decoded blocks of the current frame or picture. When the video frame is coded as an inter-coded (i.e., B, P, or GPB) slice, motion compensation unit 72 produces predictive blocks for a video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 70. The predictive blocks may be produced from one of the reference pictures within one of the reference picture lists. Video decoder 30 may construct the reference frame lists, List 0 and List 1, using default construction techniques based on reference pictures stored in reference picture memory 82. -
Motion compensation unit 72 determines prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and uses the prediction information to produce the predictive blocks for the current video block being decoded. For example, motion compensation unit 72 uses some of the received syntax elements to determine a prediction mode (e.g., intra- or inter-prediction) used to code the video blocks of the video slice, an inter-prediction slice type (e.g., B slice, P slice, or GPB slice), construction information for one or more of the reference picture lists for the slice, motion vectors for each inter-encoded video block of the slice, inter-prediction status for each inter-coded video block of the slice, and other information to decode the video blocks in the current video slice. -
Motion compensation unit 72 may also perform interpolation based on interpolation filters for sub-pixel precision. Motion compensation unit 72 may use the interpolation filters used by video encoder 20 during encoding of the video blocks to calculate interpolated values for sub-integer pixels of reference blocks. In this case, motion compensation unit 72 may determine the interpolation filters used by video encoder 20 from the received syntax elements and use the interpolation filters to produce predictive blocks. - Furthermore,
motion compensation unit 72 may be configured to perform any or all of the techniques of this disclosure (alone or in any combination). For example,motion compensation unit 72 may be configured to perform the BIO techniques discussed herein. -
Inverse quantization unit 76 inverse quantizes, i.e., de-quantizes, the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 70. The inverse quantization process may include use of a quantization parameter QPY calculated by video decoder 30 for each video block in the video slice to determine a degree of quantization and, likewise, a degree of inverse quantization that should be applied. - Inverse
transform processing unit 78 applies an inverse transform, e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to produce residual blocks in the pixel domain. - After
motion compensation unit 72 generates the predictive block for the current video block based on the motion vectors and other syntax elements, video decoder 30 forms a decoded video block by summing the residual blocks from inverse transform processing unit 78 with the corresponding predictive blocks generated by motion compensation unit 72. Summer 80 represents the component or components that perform this summation operation. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts. Other loop filters (either in the coding loop or after the coding loop) may also be used to smooth pixel transitions, or otherwise improve the video quality. The decoded video blocks in a given frame or picture are then stored in reference picture memory 82, which stores reference pictures used for subsequent motion compensation. Reference picture memory 82 also stores decoded video for later presentation on a display device, such as display device 32 of FIG. 1. For example, reference picture memory 82 may store decoded pictures. -
FIG. 16 is a flowchart illustrating an example operation of a video decoder for decoding video data in accordance with a technique of this disclosure. The video decoder described with respect to FIG. 16 may, for example, be a video decoder, such as video decoder 30, for outputting displayable decoded video or may be a video decoder implemented in a video encoder, such as the decoding loop of video encoder 20, which includes inverse quantization unit 58, inverse transform processing unit 60, summer 62, and reference picture memory 64, as well as portions of mode select unit 40. - In accordance with the techniques of
FIG. 16, the video decoder determines that a block of video data is encoded using a bi-directional inter prediction mode (200). The video decoder determines a first motion vector (MV) for the block that points to a first reference picture (202). The video decoder determines a second MV for the block that points to a second reference picture, with the first reference picture being different than the second reference picture (204). The video decoder uses the first MV to locate a first predictive block in the first reference picture (206). The video decoder uses the second MV to locate a second predictive block in the second reference picture (208). - The video decoder determines a first amount of BIO motion for a first sub-block of the first predictive block (210). The first sub-block may be different than a coding unit, a prediction unit, and a transform unit for the block. To determine the first amount of BIO motion, the video decoder may, in some examples, determine the first amount of BIO motion based on samples in the first sub-block and samples outside the first sub-block, and, in other examples, determine the first amount of BIO motion based only on samples in the first sub-block. The first amount of BIO motion may, for example, include a motion vector field that includes a horizontal component and a vertical component.
- The video decoder determines a first final predictive sub-block for the block of video data based on the first sub-block of the first predictive block, a first sub-block of the second predictive block, and the first amount of BIO motion (212). To determine the first final predictive sub-block for the block of video data based on the first sub-block of the first predictive block, the first sub-block of the second predictive block, and the first amount of BIO motion, the video decoder may determine the first final predictive sub-block using, for example, equation (2) above.
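Equation (2) appears earlier in this document; per sample, the standard BIO combination, which equation (2) is understood to follow (an assumption of this sketch), can be written as:

```python
def bio_sample(i0, i1, gx0, gy0, gx1, gy1, vx, vy, tau0, tau1):
    """Combine co-located samples of the two predictive sub-blocks.

    i0, i1             -- samples of the first/second predictive sub-block
    gx0, gy0, gx1, gy1 -- horizontal/vertical gradients at those samples
    vx, vy             -- components of the amount of BIO motion
    tau0, tau1         -- temporal distances to the two reference pictures
    """
    return 0.5 * (i0 + i1
                  + (vx / 2.0) * (tau1 * gx1 - tau0 * gx0)
                  + (vy / 2.0) * (tau1 * gy1 - tau0 * gy0))
```

With vx = vy = 0 this reduces to plain bi-prediction, (i0 + i1) / 2.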
- The video decoder determines a second amount of BIO motion for a second sub-block of the first predictive block (214). The second sub-block may be different than a coding unit, a prediction unit, and a transform unit for the block. To determine the second amount of BIO motion, the video decoder may, in some examples, determine the second amount of BIO motion based on samples in the second sub-block and samples outside the second sub-block, and, in other examples, determine the second amount of BIO motion based only on samples in the second sub-block. The second amount of BIO motion may, for example, include a motion vector field that includes a horizontal component and a vertical component.
- The video decoder determines a second final predictive sub-block for the block of video data based on the second sub-block of the first predictive block, a second sub-block of the second predictive block, and the second amount of BIO motion (216). To determine the second final predictive sub-block based on the second sub-block of the first predictive block, the second sub-block of the second predictive block, and the second amount of BIO motion, the video decoder may, for example, determine the second final predictive sub-block using equation (2) above.
- The video decoder determines a final predictive block for the block of video data based on the first final predictive sub-block and the second final predictive sub-block (218). The video decoder may, for example, add residual data to the final predictive block to determine a reconstructed block for the block of video data. The video decoder may also perform one or more filtering processes on the reconstructed block of video data.
- The video decoder outputs a picture of video data comprising a decoded version of the block of video data (220). When the decoding is performed as part of a decoding loop of a video encoding process, then the video decoder may, for example, output the picture by storing the picture in a reference picture memory, and the video decoder may use the picture as a reference picture in encoding another picture of the video data. When the video decoder is a video decoder configured to output displayable decoded video, then the video decoder may, for example, output the picture of video data to a display device.
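The per-sub-block flow of steps (210) through (218) can be sketched end to end. In this sketch the BIO refinement term is stubbed out to zero, so each sample reduces to the plain bi-predictive average; the stub stands in for the gradient-based refinement of equation (2):

```python
def final_predictive_block(pred0, pred1, sub_size=4):
    """Walk the two predictive blocks in sub_size x sub_size sub-blocks
    and build the final predictive block (steps 210-218 of FIG. 16).

    pred0, pred1 -- equally sized 2-D lists of integer samples (the
                    predictive blocks located in steps 206 and 208)
    """
    h, w = len(pred0), len(pred0[0])
    final = [[0] * w for _ in range(h)]
    for by in range(0, h, sub_size):
        for bx in range(0, w, sub_size):
            # Steps 210/214: derive the amount of BIO motion for this
            # sub-block (stubbed to zero in this sketch).
            vx, vy = 0, 0
            # Steps 212/216: combine co-located samples of the two
            # sub-blocks; the refinement term in vx, vy is omitted here.
            for y in range(by, min(by + sub_size, h)):
                for x in range(bx, min(bx + sub_size, w)):
                    final[y][x] = (pred0[y][x] + pred1[y][x] + 1) >> 1
    return final  # step 218: the final predictive block
```

Residual data would then be added to this block to reconstruct the decoded block.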
- It is to be recognized that depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.
- In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
- By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
- Instructions may be executed by one or more processors, such as one or more DSPs, general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
- The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
- Various examples have been described. These and other examples are within the scope of the following claims.
Claims (13)
- A method of decoding video data, the method comprising:determining a block of video data is encoded using a bi-directional inter prediction mode;determining a first motion vector, MV, for the block, wherein the first MV points to a first reference picture;determining a second MV for the block, wherein the second MV points to a second reference picture, the first reference picture being different than the second reference picture;using the first MV, locating a first predictive block in the first reference picture;using the second MV, locating a second predictive block in the second reference picture;for a first 4x4 sub-block of the first predictive block, determining a first amount of bi-directional optical flow, BIO, motion based on gradient values of samples of the first 4x4 sub-block;determining a first final predictive 4x4 sub-block for the block of video data based on the first 4x4 sub-block of the first predictive block, a first 4x4 sub-block of the second predictive block, and the first amount of BIO motion, wherein determining the first final predictive 4x4 sub-block for the block of video data based on the first 4x4 sub-block of the first predictive block, the first 4x4 sub-block of the second predictive block, and the first amount of BIO motion comprises, for each sample of the first final predictive 4x4 sub-block, determining a sample value of the first final predictive 4x4 sub-block based on co-located samples of the first 4x4 sub-block of the first predictive block and the first 4x4 sub-block of the second predictive block using the first amount of BIO motion;for a second 4x4 sub-block of the first predictive block, determining a second amount of BIO motion based on gradient values of samples in the second 4x4 sub-block;determining a second final predictive 4x4 sub-block for the block of video data based on the second 4x4 sub-block of the first predictive block, a second 4x4 sub-block of the second predictive block, and the second amount of BIO
motion, wherein determining the second final predictive 4x4 sub-block for the block of video data based on the second 4x4 sub-block of the first predictive block, the second 4x4 sub-block of the second predictive block, and the second amount of BIO motion comprises, for each sample of the second final predictive 4x4 sub-block, determining a sample value of the second final predictive 4x4 sub-block based on co-located samples of the second 4x4 sub-block of the first predictive block and the second 4x4 sub-block of the second predictive block using the second amount of BIO motion;based on the first final predictive 4x4 sub-block and the second final predictive 4x4 sub-block, determining a final predictive block for the block of video data; andoutputting a picture of video data comprising a decoded version of the block of video data.
- The method of claim 1, wherein determining the first amount of BIO motion comprises determining the first amount of BIO motion based only on samples in the first 4x4 sub-block.
- The method of claim 1, wherein determining the second amount of BIO motion comprises determining the second amount of BIO motion based only on samples in the second 4x4 sub-block.
- The method of claim 1, wherein the first amount of BIO motion comprises a motion vector field comprising a horizontal component and a vertical component.
- The method of claim 1, wherein the first 4x4 sub-block is different than a coding unit, a prediction unit, and a transform unit for the block.
- The method of claim 1, further comprising:
adding residual data to the final predictive block to determine a reconstructed block for the block of video data.
- The method of claim 1, wherein determining the first final predictive 4x4 sub-block for the block of video data based on the first 4x4 sub-block of the first predictive block, the first 4x4 sub-block of the second predictive block, and the first amount of BIO motion comprises determining the first final predictive 4x4 sub-block according to the equation, wherein:
predBIO comprises a sample value of the first final predictive sub-block;
I(0) comprises a sample value of the first 4x4 sub-block of the first predictive block;
I(1) comprises a sample value of the first 4x4 sub-block of the second predictive block;
vx comprises a horizontal component of the first amount of BIO motion;
vy comprises a vertical component of the first amount of BIO motion;
τ0 comprises a distance to the first reference picture; and
τ1 comprises a distance to the second reference picture.
- The method of claim 1, wherein the method of decoding is performed as part of a decoding loop of a video encoding process, and wherein outputting the picture of video data comprising the decoded version of the block of video data comprises storing the picture of video data comprising the decoded version of the block of video data in a reference picture memory, the method further comprising:
using the picture of video data comprising the decoded version of the block of video data as a reference picture in encoding another picture of the video data.
- The method of claim 1, wherein outputting the picture of video data comprising the decoded version of the block of video data comprises outputting the picture of video data comprising the decoded version of the block of video data to a display device.
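The equation referenced in the claim above was published as an image and did not survive text extraction. Based on the variable definitions given in that claim (I(0), I(1), vx, vy, τ0, τ1) and the standard BIO combination used in the JEM exploration model, it plausibly takes the form below; this is a hedged reconstruction for the reader's orientation, not the authoritative claim text:

```latex
\mathrm{pred}_{BIO} = \frac{1}{2}\left(
    I^{(0)} + I^{(1)}
    + \frac{v_x}{2}\left(\tau_1 \frac{\partial I^{(1)}}{\partial x}
                       - \tau_0 \frac{\partial I^{(0)}}{\partial x}\right)
    + \frac{v_y}{2}\left(\tau_1 \frac{\partial I^{(1)}}{\partial y}
                       - \tau_0 \frac{\partial I^{(0)}}{\partial y}\right)
\right)
```

With vx = vy = 0 this degenerates to ordinary bi-prediction, the simple average of the two motion-compensated predictions.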
- A computer readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform the method of any one of the preceding claims.
- An apparatus for decoding video data, the apparatus comprising:
means for determining a block of video data is encoded using a bi-directional inter prediction mode;
means for determining a first motion vector, MV, for the block, wherein the first MV points to a first reference picture;
means for determining a second MV for the block, wherein the second MV points to a second reference picture, the first reference picture being different than the second reference picture;
means for locating a first predictive block in the first reference picture using the first MV;
means for locating a second predictive block in the second reference picture using the second MV;
means for determining a first amount of bi-directional optical flow, BIO, motion for a first 4x4 sub-block of the first predictive block, based on gradient values of samples of the first 4x4 sub-block;
means for determining a first final predictive 4x4 sub-block for the block of video data based on the first 4x4 sub-block of the first predictive block, a first 4x4 sub-block of the second predictive block, and the first amount of BIO motion, wherein determining the first final predictive 4x4 sub-block for the block of video data based on the first 4x4 sub-block of the first predictive block, the first 4x4 sub-block of the second predictive block, and the first amount of BIO motion comprises, for each sample of the first final predictive 4x4 sub-block, determining a sample value of the first final predictive 4x4 sub-block based on co-located samples of the first 4x4 sub-block of the first predictive block and the first 4x4 sub-block of the second predictive block using the first amount of BIO motion;
means for determining a second amount of BIO motion for a second 4x4 sub-block of the first predictive block, based on gradient values of samples in the second 4x4 sub-block;
means for determining a second final predictive 4x4 sub-block for the block of video data based on the second 4x4 sub-block of the first predictive block, a second 4x4 sub-block of the second predictive block, and the second amount of BIO motion, wherein determining the second final predictive 4x4 sub-block for the block of video data based on the second 4x4 sub-block of the first predictive block, the second 4x4 sub-block of the second predictive block, and the second amount of BIO motion comprises, for each sample of the second final predictive 4x4 sub-block, determining a sample value of the second final predictive 4x4 sub-block based on co-located samples of the second 4x4 sub-block of the first predictive block and the second 4x4 sub-block of the second predictive block using the second amount of BIO motion;
means for determining a final predictive block for the block of video data based on the first final predictive 4x4 sub-block and the second final predictive 4x4 sub-block; and
means for outputting a picture of video data comprising a decoded version of the block of video data.
- The apparatus of claim 11, wherein the apparatus comprises a wireless communication device, further comprising a receiver configured to receive encoded video data; and preferably wherein the wireless communication device comprises a telephone handset and wherein the receiver is configured to demodulate, according to a wireless communication standard, a signal comprising the encoded video data.
- The apparatus of claim 11, wherein the apparatus comprises a wireless communication device, further comprising a transmitter configured to transmit encoded video data; and preferably wherein the wireless communication device comprises a telephone handset and wherein the transmitter is configured to modulate, according to a wireless communication standard, a signal comprising the encoded video data.
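The method claims describe a concrete per-sub-block procedure: derive a BIO motion (vx, vy) from gradients computed within each 4x4 sub-block (claims 1 to 3), then refine the per-sample bi-prediction of the two co-located predictive sub-blocks with that motion (claim 7). The sketch below illustrates those two steps in plain Python. The central-difference gradient filter and the JEM-style combination formula are illustrative assumptions, not the patent's normative filters, and the least-squares derivation of (vx, vy) itself is omitted:

```python
def subblock_gradients(sub):
    """Central-difference gradients of one 4x4 sub-block (list of rows).

    Replicate-padding keeps every sample read inside the sub-block,
    mirroring claims 2-3 (BIO motion from the sub-block's own samples).
    The actual JEM gradient filters differ; this is an illustration.
    """
    n = len(sub)

    def at(r, c):  # replicate-pad access at the sub-block border
        r = min(max(r, 0), n - 1)
        c = min(max(c, 0), n - 1)
        return sub[r][c]

    gx = [[(at(r, c + 1) - at(r, c - 1)) / 2.0 for c in range(n)] for r in range(n)]
    gy = [[(at(r + 1, c) - at(r - 1, c)) / 2.0 for c in range(n)] for r in range(n)]
    return gx, gy


def bio_combine(i0, i1, vx, vy, tau0=1.0, tau1=1.0):
    """Per-sample bi-prediction of two co-located 4x4 sub-blocks, refined
    by one per-sub-block BIO motion (vx, vy); tau0/tau1 are the temporal
    distances to the two reference pictures (JEM-style, an assumption).
    """
    gx0, gy0 = subblock_gradients(i0)
    gx1, gy1 = subblock_gradients(i1)
    n = len(i0)
    return [[0.5 * (i0[r][c] + i1[r][c]
                    + (vx / 2.0) * (tau1 * gx1[r][c] - tau0 * gx0[r][c])
                    + (vy / 2.0) * (tau1 * gy1[r][c] - tau0 * gy0[r][c]))
             for c in range(n)] for r in range(n)]


# With zero BIO motion the gradient correction vanishes and the result is
# plain bi-prediction: the average of the two predictive sub-blocks.
i0 = [[100.0] * 4 for _ in range(4)]
i1 = [[110.0] * 4 for _ in range(4)]
out = bio_combine(i0, i1, vx=0.0, vy=0.0)
```

Note that, consistent with claims 2 and 3, each 4x4 sub-block is processed independently: no gradient or motion estimate leaks across sub-block boundaries.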
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762442357P | 2017-01-04 | 2017-01-04 | |
US201762445152P | 2017-01-11 | 2017-01-11 | |
US15/861,515 US10931969B2 (en) | 2017-01-04 | 2018-01-03 | Motion vector reconstructions for bi-directional optical flow (BIO) |
PCT/US2018/012360 WO2018129172A1 (en) | 2017-01-04 | 2018-01-04 | Motion vector reconstructions for bi-directional optical flow (bio) |
Publications (3)
Publication Number | Publication Date |
---|---|
EP3566441A1 EP3566441A1 (en) | 2019-11-13 |
EP3566441C0 EP3566441C0 (en) | 2024-07-31 |
EP3566441B1 true EP3566441B1 (en) | 2024-07-31 |
Family
ID=62711435
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18701947.6A Active EP3566441B1 (en) | 2017-01-04 | 2018-01-04 | Motion vector reconstructions for bi-directional optical flow (bio) |
Country Status (13)
Country | Link |
---|---|
US (1) | US10931969B2 (en) |
EP (1) | EP3566441B1 (en) |
JP (1) | JP7159166B2 (en) |
KR (1) | KR102579523B1 (en) |
CN (1) | CN110036638B (en) |
AU (1) | AU2018205783B2 (en) |
BR (1) | BR112019013684A2 (en) |
CA (1) | CA3043050A1 (en) |
CL (1) | CL2019001393A1 (en) |
CO (1) | CO2019007120A2 (en) |
TW (1) | TWI761415B (en) |
WO (1) | WO2018129172A1 (en) |
ZA (1) | ZA201904373B (en) |
Families Citing this family (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017164645A2 (en) * | 2016-03-24 | 2017-09-28 | 인텔렉추얼디스커버리 주식회사 | Method and apparatus for encoding/decoding video signal |
KR102437110B1 (en) | 2016-07-14 | 2022-08-26 | 삼성전자주식회사 | Video decoding method and device therefor, and video encoding method and device therefor |
CN116708830A (en) * | 2017-04-24 | 2023-09-05 | Sk电信有限公司 | Apparatus for encoding and decoding video data, method for storing encoded video data bit stream |
EP4451678A2 (en) | 2017-04-27 | 2024-10-23 | Panasonic Intellectual Property Corporation of America | Encoding device, decoding device, encoding method and decoding method |
CN116866584A (en) | 2017-05-17 | 2023-10-10 | 株式会社Kt | Method for decoding and encoding video and apparatus for storing compressed video data |
WO2018212111A1 (en) | 2017-05-19 | 2018-11-22 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Encoding device, decoding device, encoding method and decoding method |
KR20240095362A (en) * | 2017-06-26 | 2024-06-25 | 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 | Encoding device, decoding device, encoding method and decoding method |
EP3713236A4 (en) * | 2017-12-14 | 2021-04-21 | LG Electronics Inc. | Method and device for image decoding according to inter-prediction in image coding system |
WO2019155971A1 (en) * | 2018-02-06 | 2019-08-15 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Coding device, decoding device, coding method, and decoding method |
US11109053B2 (en) * | 2018-03-05 | 2021-08-31 | Panasonic Intellectual Property Corporation Of America | Encoding method, decoding method, encoder, and decoder |
EP3777167A1 (en) * | 2018-03-30 | 2021-02-17 | Vid Scale, Inc. | Template-based inter prediction techniques based on encoding and decoding latency reduction |
RU2020135518A (en) * | 2018-04-06 | 2022-04-29 | Вид Скейл, Инк. | BIDIRECTIONAL OPTICAL FLOW METHOD WITH SIMPLIFIED GRADIENT DETECTION |
US10841575B2 (en) * | 2018-04-15 | 2020-11-17 | Arris Enterprises Llc | Unequal weight planar motion vector derivation |
CN116684594A (en) * | 2018-04-30 | 2023-09-01 | 寰发股份有限公司 | Illumination compensation method and corresponding electronic device |
EP3788787A1 (en) | 2018-06-05 | 2021-03-10 | Beijing Bytedance Network Technology Co. Ltd. | Interaction between ibc and atmvp |
WO2019238008A1 (en) | 2018-06-11 | 2019-12-19 | Mediatek Inc. | Method and apparatus of bi-directional optical flow for video coding |
TWI739120B (en) | 2018-06-21 | 2021-09-11 | 大陸商北京字節跳動網絡技術有限公司 | Unified constrains for the merge affine mode and the non-merge affine mode |
EP4307671A3 (en) | 2018-06-21 | 2024-02-07 | Beijing Bytedance Network Technology Co., Ltd. | Sub-block mv inheritance between color components |
CN110809156B (en) | 2018-08-04 | 2022-08-12 | 北京字节跳动网络技术有限公司 | Interaction between different decoder-side motion vector derivation modes |
CN112585972B (en) | 2018-08-17 | 2024-02-09 | 寰发股份有限公司 | Inter-frame prediction method and device for video encoding and decoding |
US11245922B2 (en) | 2018-08-17 | 2022-02-08 | Mediatek Inc. | Shared candidate list |
WO2020035054A1 (en) * | 2018-08-17 | 2020-02-20 | Mediatek Inc. | Methods and apparatuses of video processing with bi-direction predicition in video coding systems |
US11665365B2 (en) | 2018-09-14 | 2023-05-30 | Google Llc | Motion prediction coding with coframe motion vectors |
CN118474345A (en) * | 2018-09-21 | 2024-08-09 | Vid拓展公司 | Complexity reduction and bit width control for bi-directional optical flow |
GB2591906B (en) | 2018-09-24 | 2023-03-08 | Beijing Bytedance Network Tech Co Ltd | Bi-prediction with weights in video coding and decoding |
US11146800B2 (en) * | 2018-09-24 | 2021-10-12 | Tencent America LLC | Low latency local illumination compensation |
TW202029755A (en) * | 2018-09-26 | 2020-08-01 | 美商Vid衡器股份有限公司 | Bi-prediction for video coding |
WO2020070612A1 (en) | 2018-10-06 | 2020-04-09 | Beijing Bytedance Network Technology Co., Ltd. | Improvement for temporal gradient calculating in bio |
CN112889284A (en) * | 2018-10-22 | 2021-06-01 | 北京字节跳动网络技术有限公司 | Subblock-based decoder-side motion vector derivation |
WO2020084474A1 (en) * | 2018-10-22 | 2020-04-30 | Beijing Bytedance Network Technology Co., Ltd. | Gradient computation in bi-directional optical flow |
CN111083484B (en) | 2018-10-22 | 2024-06-28 | 北京字节跳动网络技术有限公司 | Sub-block based prediction |
CN111093080B (en) | 2018-10-24 | 2024-06-04 | 北京字节跳动网络技术有限公司 | Sub-block motion candidates in video coding |
WO2020094000A1 (en) | 2018-11-05 | 2020-05-14 | Beijing Bytedance Network Technology Co., Ltd. | Interpolation for inter prediction with refinement |
CN112970262B (en) | 2018-11-10 | 2024-02-20 | 北京字节跳动网络技术有限公司 | Rounding in trigonometric prediction mode |
JP7334246B2 (en) | 2018-11-12 | 2023-08-28 | 北京字節跳動網絡技術有限公司 | Simplification of inter-intra compound prediction |
WO2020103852A1 (en) | 2018-11-20 | 2020-05-28 | Beijing Bytedance Network Technology Co., Ltd. | Difference calculation based on partial position |
WO2020103877A1 (en) | 2018-11-20 | 2020-05-28 | Beijing Bytedance Network Technology Co., Ltd. | Coding and decoding of video coding modes |
WO2020103944A1 (en) | 2018-11-22 | 2020-05-28 | Beijing Bytedance Network Technology Co., Ltd. | Sub-block based motion candidate selection and signaling |
WO2020113156A1 (en) | 2018-11-30 | 2020-06-04 | Tencent America LLC | Method and apparatus for video coding |
WO2020125804A1 (en) * | 2018-12-21 | 2020-06-25 | Beijing Bytedance Network Technology Co., Ltd. | Inter prediction using polynomial model |
JP2022516433A (en) * | 2018-12-21 | 2022-02-28 | ヴィド スケール インコーポレイテッド | Symmetric motion vector differential coding |
JP2022521554A (en) | 2019-03-06 | 2022-04-08 | 北京字節跳動網絡技術有限公司 | Use of converted one-sided prediction candidates |
JP6867611B2 (en) | 2019-03-11 | 2021-04-28 | Kddi株式会社 | Image decoding device, image decoding method and program |
FI3941060T3 (en) * | 2019-03-12 | 2023-10-02 | Lg Electronics Inc | Inter-prediction method and device based on dmvr and bdof |
CN113574880B (en) | 2019-03-13 | 2023-04-07 | 北京字节跳动网络技术有限公司 | Partitioning with respect to sub-block transform modes |
CN113545081B (en) | 2019-03-14 | 2024-05-31 | 寰发股份有限公司 | Method and apparatus for processing video data in video codec system |
KR20220112864A (en) * | 2019-03-15 | 2022-08-11 | 베이징 다지아 인터넷 인포메이션 테크놀로지 컴퍼니 리미티드 | Methods and devices for bit-width control for bi-directional optical flow |
CN117478876A (en) | 2019-03-17 | 2024-01-30 | 北京字节跳动网络技术有限公司 | Calculation of prediction refinement based on optical flow |
WO2020200269A1 (en) | 2019-04-02 | 2020-10-08 | Beijing Bytedance Network Technology Co., Ltd. | Decoder side motion vector derivation |
KR20220063312A (en) | 2019-04-25 | 2022-05-17 | 베이징 다지아 인터넷 인포메이션 테크놀로지 컴퍼니 리미티드 | Methods and apparatuses for prediction refinement with optical flow |
CN113812155B (en) * | 2019-05-11 | 2023-10-27 | 北京字节跳动网络技术有限公司 | Interaction between multiple inter-frame coding and decoding methods |
JP7377894B2 (en) | 2019-05-21 | 2023-11-10 | 北京字節跳動網絡技術有限公司 | Syntax signaling in subblock merge mode |
CN113411610B (en) * | 2019-06-21 | 2022-05-27 | 杭州海康威视数字技术股份有限公司 | Encoding and decoding method, device and equipment |
CN114128293A (en) | 2019-06-21 | 2022-03-01 | 松下电器(美国)知识产权公司 | Encoding device, decoding device, encoding method, and decoding method |
US11272203B2 (en) * | 2019-07-23 | 2022-03-08 | Tencent America LLC | Method and apparatus for video coding |
WO2021027776A1 (en) | 2019-08-10 | 2021-02-18 | Beijing Bytedance Network Technology Co., Ltd. | Buffer management in subpicture decoding |
JP7481430B2 (en) | 2019-08-13 | 2024-05-10 | 北京字節跳動網絡技術有限公司 | Motion Accuracy in Subblock-Based Inter Prediction |
EP4032300A4 (en) | 2019-09-20 | 2022-11-09 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods of video encoding and/or decoding with bidirectional optical flow simplification on shift operations and related apparatus |
CN114424536A (en) | 2019-09-22 | 2022-04-29 | 北京字节跳动网络技术有限公司 | Combined inter-frame intra prediction based on transform units |
WO2021056217A1 (en) * | 2019-09-24 | 2021-04-01 | 北京大学 | Video processing method and apparatus |
WO2020256601A2 (en) * | 2019-10-03 | 2020-12-24 | Huawei Technologies Co., Ltd. | Method and apparatus of picture-level signaling for bidirectional optical flow and decoder side motion vector refinement |
EP4032290A4 (en) | 2019-10-18 | 2022-11-30 | Beijing Bytedance Network Technology Co., Ltd. | Syntax constraints in parameter set signaling of subpictures |
US20210337192A1 (en) * | 2020-04-24 | 2021-10-28 | Realtek Semiconductor Corp. | Image processing method and associated encoder |
WO2024206162A1 (en) * | 2023-03-24 | 2024-10-03 | Bytedance Inc. | Method, apparatus, and medium for video processing |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003061293A1 (en) * | 2002-01-17 | 2003-07-24 | Koninklijke Philips Electronics N.V. | Unit for and method of estimating a current motion vector |
JP2007179354A (en) * | 2005-12-28 | 2007-07-12 | Fujitsu Ltd | Optical flow calculation device, optical flow calculation method, optical flow calculation program and recording medium |
WO2011126309A2 (en) * | 2010-04-06 | 2011-10-13 | 삼성전자 주식회사 | Method and apparatus for video encoding and method and apparatus for video decoding |
CN103039075B (en) * | 2010-05-21 | 2015-11-25 | Jvc建伍株式会社 | Picture coding device, method for encoding images and picture decoding apparatus, picture decoding method |
JP5686019B2 (en) * | 2010-05-21 | 2015-03-18 | 株式会社Jvcケンウッド | Image decoding apparatus, image decoding method, and image decoding program |
CN103327327B (en) * | 2013-06-03 | 2016-03-30 | 电子科技大学 | For the inter prediction encoding unit selection method of high-performance video coding HEVC |
CN107925775A (en) * | 2015-09-02 | 2018-04-17 | 联发科技股份有限公司 | The motion compensation process and device of coding and decoding video based on bi-directional predicted optic flow technique |
CN105261038B (en) * | 2015-09-30 | 2018-02-27 | 华南理工大学 | Finger tip tracking based on two-way light stream and perception Hash |
CN109496430B (en) * | 2016-05-13 | 2022-06-14 | Vid拓展公司 | System and method for generalized multi-hypothesis prediction for video coding |
WO2017205704A1 (en) * | 2016-05-25 | 2017-11-30 | Arris Enterprises Llc | General block partitioning method |
CN114173117B (en) * | 2016-12-27 | 2023-10-20 | 松下电器(美国)知识产权公司 | Encoding method, decoding method, and transmitting method |
2018
- 2018-01-03 US US15/861,515 patent/US10931969B2/en active Active
- 2018-01-04 JP JP2019536162A patent/JP7159166B2/en active Active
- 2018-01-04 AU AU2018205783A patent/AU2018205783B2/en active Active
- 2018-01-04 EP EP18701947.6A patent/EP3566441B1/en active Active
- 2018-01-04 TW TW107100393A patent/TWI761415B/en active
- 2018-01-04 CN CN201880004708.0A patent/CN110036638B/en active Active
- 2018-01-04 WO PCT/US2018/012360 patent/WO2018129172A1/en active Application Filing
- 2018-01-04 BR BR112019013684A patent/BR112019013684A2/en unknown
- 2018-01-04 CA CA3043050A patent/CA3043050A1/en active Pending
- 2018-01-04 KR KR1020197019234A patent/KR102579523B1/en active IP Right Grant
2019
- 2019-05-23 CL CL2019001393A patent/CL2019001393A1/en unknown
- 2019-07-02 CO CONC2019/0007120A patent/CO2019007120A2/en unknown
- 2019-07-03 ZA ZA2019/04373A patent/ZA201904373B/en unknown
Also Published As
Publication number | Publication date |
---|---|
ZA201904373B (en) | 2023-03-29 |
EP3566441C0 (en) | 2024-07-31 |
CL2019001393A1 (en) | 2019-09-27 |
TW201830966A (en) | 2018-08-16 |
BR112019013684A2 (en) | 2020-01-28 |
KR20190103171A (en) | 2019-09-04 |
JP2020503799A (en) | 2020-01-30 |
AU2018205783A1 (en) | 2019-05-23 |
CA3043050A1 (en) | 2018-07-12 |
US10931969B2 (en) | 2021-02-23 |
CN110036638A (en) | 2019-07-19 |
CO2019007120A2 (en) | 2019-09-18 |
CN110036638B (en) | 2023-06-27 |
JP7159166B2 (en) | 2022-10-24 |
WO2018129172A1 (en) | 2018-07-12 |
TWI761415B (en) | 2022-04-21 |
AU2018205783B2 (en) | 2023-02-02 |
EP3566441A1 (en) | 2019-11-13 |
US20180192072A1 (en) | 2018-07-05 |
KR102579523B1 (en) | 2023-09-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3566441B1 (en) | Motion vector reconstructions for bi-directional optical flow (bio) | |
EP3643066B1 (en) | A memory-bandwidth-efficient design for bi-directional optical flow (bio) | |
US10523964B2 (en) | Inter prediction refinement based on bi-directional optical flow (BIO) | |
US10757442B2 (en) | Partial reconstruction based template matching for motion vector derivation | |
US11265551B2 (en) | Decoder-side motion vector derivation | |
US10595035B2 (en) | Constraining motion vector information derived by decoder-side motion vector derivation |
Legal Events
Date | Code | Title | Description
---|---|---|---
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20190711 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| AX | Request for extension of the european patent | Extension state: BA ME |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20210416 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| GRAP | Despatch of communication of intention to grant a patent | Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: GRANT OF PATENT IS INTENDED |
| INTG | Intention to grant announced | Effective date: 20240226 |
| GRAS | Grant fee paid | Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
| GRAA | (expected) grant | Free format text: ORIGINAL CODE: 0009210 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
| AK | Designated contracting states | Kind code of ref document: B1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| REG | Reference to a national code | Ref country code: CH, Ref legal event code: EP; Ref country code: GB, Ref legal event code: FG4D |
| REG | Reference to a national code | Ref country code: DE, Ref legal event code: R096, Ref document number: 602018072402 |
| REG | Reference to a national code | Ref country code: IE, Ref legal event code: FG4D |
| U01 | Request for unitary effect filed | Effective date: 20240819 |
| U07 | Unitary effect registered | Designated state(s): AT BE BG DE DK EE FI FR IT LT LU LV MT NL PT RO SE SI; Effective date: 20240902 |