
WO2015010317A1 - Multi-hypothesis motion compensation method based on P frames - Google Patents

Multi-hypothesis motion compensation method based on P frames

Info

Publication number
WO2015010317A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
motion vector
image block
current image
prediction block
Prior art date
Application number
PCT/CN2013/080172
Other languages
English (en)
French (fr)
Inventor
王荣刚
陈蕾
王振宇
马思伟
高文
黄铁军
王文敏
董胜富
Original Assignee
北京大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京大学深圳研究生院 filed Critical 北京大学深圳研究生院
Priority to PCT/CN2013/080172 priority Critical patent/WO2015010317A1/zh
Priority to CN201380003167.7A priority patent/CN104488271B/zh
Publication of WO2015010317A1 publication Critical patent/WO2015010317A1/zh
Priority to US15/006,147 priority patent/US10298950B2/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • H04N19/52Processing of motion vectors by encoding by predictive encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/533Motion estimation using multistep search, e.g. 2D-log search or one-at-a-time search [OTS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/55Motion estimation with spatial constraints, e.g. at image or region borders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/553Motion estimation dealing with occlusions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/557Motion estimation characterised by stopping computation or iteration based on certain criteria, e.g. error magnitude being too large or early exit
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/56Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction

Definitions

  • Multi-hypothesis motion compensation method based on P frame
  • the present application relates to the field of video coding technologies, and in particular, to a multi-hypothesis motion compensation method based on a P frame.
  • mainstream video coding standards such as AVS, H.264, and HEVC mostly use a hybrid coding framework. Because motion estimation and motion compensation are used together, the temporal correlation between video frames is well exploited and video compression efficiency is improved.
  • in the conventional P-frame motion compensation method, the prediction block is related only to the single motion vector obtained after motion estimation, so the accuracy of the resulting prediction block is not very high.
  • in a bidirectional motion compensation method such as that of B frames,
  • two motion vectors, forward and backward, are obtained after motion estimation, and two corresponding prediction blocks are obtained.
  • the final prediction block is obtained by a weighted average of the two prediction blocks. This makes the resulting prediction block more accurate, but because two motion vectors must be carried in the code stream, the code rate increases.
  • the present application provides a multi-hypothesis motion compensation method that can improve the accuracy of a P-frame motion compensation prediction block without increasing the code rate.
  • the P-frame based multi-hypothesis motion compensation method includes:
  • a first motion vector of the current image block is obtained from the motion vector of a reference image block, taking an adjacent coded image block of the current image block as the reference image block, the first motion vector pointing to the first prediction block.
  • taking adjacent coded image blocks of the current image block as the reference image blocks means taking three image blocks among the adjacent coded image blocks of the current image block as the reference image blocks.
  • obtaining the first motion vector of the current image block from the motion vectors of the reference image blocks includes:
  • if only one of the three reference image blocks has a motion vector, that motion vector is used as the first motion vector of the current image block; otherwise, when the horizontal component of the motion vector of one of the three reference image blocks is opposite in sign to the horizontal components of the motion vectors of the other two reference image blocks, the average of the horizontal components of the motion vectors of those two reference image blocks is used as the horizontal component of the first motion vector of the current image block (and likewise for the vertical components); otherwise:
  • the distance in the horizontal direction between each pair of reference image blocks is computed, and the average of the horizontal components of the motion vectors of the two reference image blocks with the smallest distance is used as the horizontal component of the first motion vector of the current image block; the distance in the vertical direction between each pair of reference image blocks is computed,
  • and the average of the vertical components of the motion vectors of the two reference image blocks with the smallest distance is used as the vertical component of the first motion vector of the current image block.
  • when the first prediction block and the second prediction block are weighted-averaged, their weights sum to 1.
  • specifically, the weights of the first prediction block and the second prediction block are each 1/2.
  • the method further includes: adding the residual information of the current image block and the final prediction block, and the second motion vector to the coded code stream of the current image block.
  • the final prediction block is related not only to the motion vector obtained after motion estimation but also to the motion vectors of the adjacent coded image blocks:
  • the final prediction block is determined by the first motion vector and the second motion vector, where the first motion vector is determined by the motion vectors of the adjacent coded image blocks and the second motion vector is obtained by joint motion estimation with the first motion vector as a reference value.
  • the final prediction block is obtained by a weighted average of the first and second prediction blocks pointed to by the first and second motion vectors.
  • in this way, the prediction block of the current image block to be coded can be made more accurate without increasing the code rate.
  • FIG. 1 is a schematic diagram of a reference image block in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a reference image block in another embodiment of the present application.
  • Figure 3 is the coding block diagram used by current mainstream video coding standards;
  • FIG. 4 is a flowchart of a P-frame based multi-hypothesis motion compensation method according to an embodiment of the present application;
  • FIG. 5 is a flowchart of a method for deriving a first motion vector according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of obtaining a prediction block of a current image block according to an embodiment of the present application;
  • FIG. 7 is the decoding block diagram of a P-frame based multi-hypothesis motion compensation method according to an embodiment of the present application.
  • the embodiment of the present application provides a multi-hypothesis motion compensation method based on a P frame, which is used in the technical field of video coding.
  • the inventive concept of the present application is to weigh the advantages and disadvantages of the motion compensation methods of B frames and P frames and to propose a P-frame based multi-hypothesis motion compensation method that exploits not only the temporal correlation between video frames but also the spatial correlation, so that the prediction block is more accurate, while only one motion vector needs to be carried in the code stream, without increasing the code stream rate.
  • each frame is usually divided into macroblocks of fixed size, and every image block in the frame is processed in order from left to right and top to bottom, starting from the first image block at the upper left.
  • please refer to FIG. 1: for example, a frame is divided into macroblocks (image blocks) of 16*16 pixels, each macroblock having a size of 16*16 pixels;
  • the image blocks of the first row are processed from left to right, then the second row is processed in turn, and so on until the entire frame has been processed.
  • the first motion vector of the current image block is calculated with the motion vector of the reference image block as a reference value. Since each image block in the frame image has the highest similarity to its adjacent coded image block, in general, the reference image block uses the adjacent coded image block of the current image block. As shown in Fig. 1, the reference image blocks of the current image block P are A, B, C, D.
  • the upper, upper-right, and left image blocks adjacent to the current image block may also be selected as the reference image blocks; for example, the reference image blocks of the current image block P in FIG. 1 are then A, B, C. If the upper-right image block of the current image block does not exist (when the current image block is in the first column from the right) or image block C has no motion vector, it is replaced by the upper-left image block of the current image block,
  • in which case the reference image blocks of the current image block P in FIG. 1 are selected as A, B, D.
  • when an image block is coded, it is further divided into sub-image blocks; for example, a 16*16-pixel image block is subdivided into 4*4-pixel sub-image blocks. Please refer to FIG. 2.
  • obtaining the first motion vector of the current image block is described taking the adjacent coded sub-image blocks as the reference image blocks by way of example.
  • for ease of understanding, in this embodiment the adjacent coded sub-image blocks of the current image block
  • are collectively called the adjacent coded image blocks of the current image block, and A, B, and C as shown in FIG. 2 are selected as the adjacent coded image blocks of the current image block P; when C has no motion vector,
  • D replaces C as an adjacent coded image block of the current image block P.
  • please refer to FIG. 3, the coding block diagram used by current mainstream video coding standards: the input frame is divided into macroblocks (image blocks), then
  • intra prediction (intra-frame coding)
  • or motion compensation (inter-frame coding) is performed on the current image block, and a selector chooses the coding mode with the smallest coding cost to obtain the prediction block.
  • in the motion estimation part, the derivation of the first motion vector MVL1 uses the motion vector information of the adjacent coded image blocks of the current block, and the second motion vector MVL2 is obtained by a joint motion estimation search with reference to MVL1; in the motion compensation part, the final prediction block is obtained as the weighted average of the first and second prediction blocks pointed to by MVL1 and MVL2.
  • during entropy coding, only one motion vector (MVL2)
  • and the residual information between the current image block and the final prediction block need to be transmitted.
  • this embodiment provides a P-frame based multi-hypothesis motion compensation method, including:
  • Step 10: Take the adjacent coded image blocks of the current image block as the reference image blocks and obtain the first motion vector MVL1 of the current image block from the motion vectors of the reference image blocks; the first motion vector MVL1 points to the first prediction block PL1.
  • Step 20: Taking the first motion vector MVL1 as a reference value, perform joint motion estimation on the current image block to obtain the second motion vector MVL2 of the current image block; the second motion vector MVL2 points to the second prediction block PL2.
  • Step 30: Perform a weighted average of the first prediction block PL1 and the second prediction block PL2 to obtain the final prediction block PL of the current image block.
  • in Step 10 of this embodiment, since the reference image blocks are selected as shown in FIG. 2, the description takes three of the adjacent coded image blocks of the current image block as the reference image blocks,
  • that is, A, B, and C in FIG. 2 are the reference image blocks of the current image block.
  • when C has no motion vector,
  • D is selected to replace C.
  • other adjacent coded image blocks around the current image block may also be selected as reference image blocks.
  • the first motion vector MVL1 is obtained from the adjacent coded image blocks A, B, C, D (reference image blocks) of the current image block, as in formula (1): MVL1 = f(MVA, MVB, MVC, MVD).
  • MVA, MVB, MVC, MVD are the motion vectors of the four reference image blocks, and f is a function of these four motion vectors.
  • the method includes:
  • Step 101: Determine whether only one of the three reference image blocks A, B, and C has a motion vector; a block with a motion vector is valid, and a block without one is invalid. If so, go to Step 102; otherwise go to Step 103.
  • reference image block D is selected to replace reference image block C when C is invalid.
  • Step 102: Use the motion vector of the valid reference image block as the first motion vector MVL1 of the current image block.
  • Step 103: Determine whether the horizontal/vertical component of the motion vector of one of the three reference image blocks A, B, and C
  • is opposite in sign to the horizontal/vertical components of the motion vectors of the other two reference image blocks. If so, go to Step 104; otherwise go to Step 105.
  • Step 104: Use the average of the horizontal components of the motion vectors of the two reference image blocks with the same sign as the horizontal component of the first motion vector MVL1 of the current image block.
  • Steps 103 and 104 may be expressed as follows: the horizontal components of the motion vectors of reference image blocks A, B, and C are MVAx, MVBx, MVCx and the vertical components are MVAy, MVBy, MVCy; the horizontal and vertical components of the first motion vector MVL1 are MVL1x and MVL1y. Then:
  • if MVAx<0 and MVBx>0 and MVCx>0, or MVAx>0 and MVBx<0 and MVCx<0, then MVL1x = (MVBx + MVCx)/2.
  • if MVBx<0 and MVAx>0 and MVCx>0, or MVBx>0 and MVAx<0 and MVCx<0, then MVL1x = (MVAx + MVCx)/2.
  • if MVCx<0 and MVAx>0 and MVBx>0, or MVCx>0 and MVAx<0 and MVBx<0, then MVL1x = (MVAx + MVBx)/2.
  • if MVAy<0 and MVBy>0 and MVCy>0, or MVAy>0 and MVBy<0 and MVCy<0, then MVL1y = (MVBy + MVCy)/2.
  • if MVBy<0 and MVAy>0 and MVCy>0, or MVBy>0 and MVAy<0 and MVCy<0, then MVL1y = (MVAy + MVCy)/2.
  • if MVCy<0 and MVAy>0 and MVBy>0, or MVCy>0 and MVAy<0 and MVBy<0, then MVL1y = (MVAy + MVBy)/2.
  • Step 105: Compute the distance in the horizontal/vertical direction between each pair of the reference image blocks A, B, and C.
  • the distances can be expressed as follows: the horizontal/vertical distances between A and B, B and C, and C and A are ABSVABx, ABSVBCx, ABSVCAx, ABSVABy, ABSVBCy, ABSVCAy, i.e.,
  • ABSVABx = |MVAx − MVBx|,
  • ABSVBCx = |MVBx − MVCx|,
  • ABSVCAx = |MVCx − MVAx|,
  • ABSVABy = |MVAy − MVBy|,
  • ABSVBCy = |MVBy − MVCy|,
  • ABSVCAy = |MVCy − MVAy|.
  • Step 106: Use the average of the horizontal/vertical components of the motion vectors of the two reference image blocks with the smallest distance as the horizontal/vertical component of the first motion vector MVL1 of the current image block, i.e.: if ABSVABx < ABSVBCx and ABSVABx < ABSVCAx, then MVL1x = (MVAx + MVBx)/2; if ABSVBCx < ABSVABx and ABSVBCx < ABSVCAx, then MVL1x = (MVBx + MVCx)/2; if ABSVCAx < ABSVABx and ABSVCAx < ABSVBCx, then MVL1x = (MVAx + MVCx)/2;
  • and likewise for the vertical components, e.g., if ABSVCAy < ABSVABy and ABSVCAy < ABSVBCy, then MVL1y = (MVAy + MVCy)/2.
  • the second motion vector MVL2 is derived by joint motion estimation with the first motion vector MVL1 as a reference value, as in formula (2): MVL2 = f(MVL1),
  • where f is a joint motion estimation function related to the first motion vector MVL1.
  • the estimation process of the joint motion estimation used for the second motion vector MVL2 is the same as a conventional motion estimation process (e.g., the conventional B-frame motion estimation process) and is therefore not described again. Since the second motion vector MVL2 is derived by joint motion estimation with reference to the first motion vector MVL1, the motion vector within the search range that minimizes the Lagrangian cost function of formula (3), J(λsad, MVL2) = Dsad(S, MVL2, MVL1) + λsad·R(MVL2 − MVL2pred), is taken as the second motion vector MVL2.
  • MVL2pred is the predicted value of MVL2,
  • R(MVL2 − MVL2pred) represents the number of bits for coding the motion vector residual,
  • λsad is a weight coefficient of R(MVL2 − MVL2pred), and
  • Dsad(S, MVL2, MVL1) represents the residual between the current image block S and the prediction block; it can be further obtained from formula (4): Dsad(S, MVL2, MVL1) = Σ(x,y) |S(x, y) − ((Sref(x + MVL2x, y + MVL2y) + Sref(x + MVL1x, y + MVL1y)) >> 1)|.
  • MVL1x, MVL1y, MVL2x, MVL2y are the horizontal and vertical components of MVL1 and MVL2, respectively, and Sref denotes the reference frame.
  • FIG. 6 is a schematic diagram of obtaining the prediction block of the current image block in this embodiment, where the frame at time t−1 serves as the forward reference frame and the frame at time t is the current coded frame.
  • the method further includes: adding the residual information between the current image block and the final prediction block, and the second motion vector MVL2, to the coded code stream of the current image block. Since the coded code stream contains only one motion vector, MVL2, the P-frame based multi-hypothesis motion compensation method provided by this embodiment can improve the accuracy of the P-frame prediction block without increasing the code stream rate.
  • FIG. 7 is a decoding block diagram used in the embodiment.
  • at the decoding end, after the code stream is input and entropy-decoded, inverse-quantized, and inverse-transformed, a selector chooses intra-frame or inter-frame coding; for inter-frame coding, the prediction block of the current image block is obtained from the decoded information and the reconstructed frame in the reference buffer, and the prediction block is added to the residual block to obtain the reconstructed block.
  • the value of MVL1 can be obtained by derivation; the specific derivation process is the same as the derivation of MVL1 at the encoding end.
  • the value of MVL2 is obtained by entropy decoding, and MVL1 and MVL2 point to the corresponding
  • prediction blocks PL1 and PL2 in the reference frame; the final prediction block PL is obtained by a weighted average of PL1 and PL2.
  • P frames may be coded solely with the multi-hypothesis motion compensation method provided in the embodiments of the present application, or the multi-hypothesis motion compensation method may be added to P-frame coding as a new coding mode.
  • after the mode decision process, the coding mode with the smallest coding cost is finally selected to code the P frame.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A multi-hypothesis motion compensation method based on P frames, including: taking adjacent coded image blocks of the current image block as reference image blocks, and obtaining a first motion vector of the current image block from the motion vectors of the reference image blocks, the first motion vector pointing to a first prediction block; taking the first motion vector as a reference value, performing joint motion estimation on the current image block to obtain a second motion vector of the current image block, the second motion vector pointing to a second prediction block; and performing a weighted average of the first prediction block and the second prediction block to obtain the final prediction block of the current image block. The method makes the prediction block obtained for the current image block more accurate without increasing the bitstream rate.

Description

Multi-hypothesis motion compensation method based on P frames
Technical Field
The present application relates to the field of video coding technologies, and in particular to a P-frame based multi-hypothesis motion compensation method.
Background
At present, mainstream video coding standards such as AVS, H.264, and HEVC mostly use a hybrid coding framework. Because motion estimation and motion compensation are used together, the temporal correlation between video frames is well exploited and video compression efficiency is improved.
In the conventional P-frame motion compensation method, the prediction block is related only to the single motion vector obtained after motion estimation, so the accuracy of the resulting prediction block is not very high. In a bidirectional motion compensation method such as that of B frames, two motion vectors, forward and backward, are obtained after motion estimation, and two corresponding prediction blocks are obtained; the final prediction block is obtained by a weighted average of the two prediction blocks. This makes the resulting prediction block more accurate, but because two motion vectors must be carried in the bitstream, the bit rate increases.
Summary of the Invention
The present application provides a multi-hypothesis motion compensation method that can improve the accuracy of the P-frame motion-compensated prediction block without increasing the bit rate.
The P-frame based multi-hypothesis motion compensation method includes:
Taking adjacent coded image blocks of the current image block as reference image blocks, and obtaining a first motion vector of the current image block from the motion vectors of the reference image blocks, the first motion vector pointing to a first prediction block.
Taking the first motion vector as a reference value, performing joint motion estimation on the current image block to obtain a second motion vector of the current image block, the second motion vector pointing to a second prediction block.
Performing a weighted average of the first prediction block and the second prediction block to obtain the final prediction block of the current image block.
In a specific example, taking adjacent coded image blocks of the current image block as the reference image blocks means: taking three image blocks among the adjacent coded image blocks of the current image block as the reference image blocks.
Further, obtaining the first motion vector of the current image block from the motion vectors of the reference image blocks includes:
If it is determined that only one of the three reference image blocks has a motion vector, that motion vector is used as the first motion vector of the current image block; otherwise, the following steps are performed:
If it is determined that the horizontal component of the motion vector of one of the three reference image blocks is opposite in sign to the horizontal components of the motion vectors of the other two reference image blocks, the average of the horizontal components of the motion vectors of those two reference image blocks is used as the horizontal component of the first motion vector of the current image block; if it is determined that the vertical component of the motion vector of one of the three reference image blocks is opposite in sign to the vertical components of the motion vectors of the other two reference image blocks, the average of the vertical components of the motion vectors of those two reference image blocks is used as the vertical component of the first motion vector of the current image block; otherwise, the following steps are performed:
Compute the distance in the horizontal direction between each pair of reference image blocks, and use the average of the horizontal components of the motion vectors of the two reference image blocks with the smallest distance as the horizontal component of the first motion vector of the current image block; compute the distance in the vertical direction between each pair of reference image blocks, and use the average of the vertical components of the motion vectors of the two reference image blocks with the smallest distance as the vertical component of the first motion vector of the current image block.
In one embodiment, when the first prediction block and the second prediction block are weighted-averaged to obtain the final prediction block of the current image block, the weights of the first and second prediction blocks sum to 1. Specifically, the weights of the first and second prediction blocks are each 1/2.
In one embodiment, after the final prediction block of the current image block is obtained, the method further includes: adding the residual information between the current image block and the final prediction block, and the second motion vector, to the coded bitstream of the current image block.
In the P-frame based multi-hypothesis motion compensation method provided by the present application, for the current image block to be coded, the final prediction block is related not only to the motion vector obtained after motion estimation but also to the motion vectors of its adjacent coded image blocks. The final prediction block is determined by the first motion vector and the second motion vector: the first motion vector is determined by the motion vectors of the adjacent coded image blocks, the second motion vector is obtained by joint motion estimation with the first motion vector as a reference value, and the final prediction block is obtained as the weighted average of the first and second prediction blocks pointed to by the first and second motion vectors. With this multi-hypothesis motion compensation method, the prediction block of the current image block to be coded can be made more accurate without increasing the bit rate.
Brief Description of the Drawings
The following is a further detailed description with reference to the drawings and specific embodiments.
FIG. 1 is a schematic diagram of reference image blocks in one embodiment of the present application;
FIG. 2 is a schematic diagram of reference image blocks in another embodiment of the present application;
FIG. 3 is the coding block diagram used by current mainstream video coding standards;
FIG. 4 is a flowchart of a P-frame based multi-hypothesis motion compensation method in one embodiment of the present application; FIG. 5 is a flowchart of the derivation method of the first motion vector in one embodiment of the present application; FIG. 6 is a schematic diagram of obtaining the prediction block of the current image block in one embodiment of the present application; FIG. 7 is the decoding block diagram corresponding to the P-frame based multi-hypothesis motion compensation method in one embodiment of the present application.
Detailed Description of the Embodiments
The embodiments of the present application provide a P-frame based multi-hypothesis motion compensation method for use in the field of video coding. The inventive concept of the present application is to weigh the advantages and disadvantages of the motion compensation methods of B frames and P frames and to propose a P-frame based multi-hypothesis motion compensation method that exploits not only the temporal correlation between video frames but also the spatial correlation, so that the prediction block is more accurate, while only one motion vector needs to be carried in the bitstream, so the bitstream rate does not increase.
In video coding, each frame is usually divided into macroblocks of fixed size, and every image block in a frame is processed in order from left to right and top to bottom, starting from the first image block at the upper left. Referring to FIG. 1, for example, a frame is divided into macroblocks (image blocks) of 16*16 pixels, each macroblock having a size of 16*16 pixels; the image blocks of the first row are processed from left to right, then the second row is processed in turn, and so on until the entire frame has been processed.
Assume image block P is the current image block. In some embodiments, when motion compensation is performed on the current image block P, the first motion vector of the current image block is computed using the motion vectors of the reference image blocks as reference values. Since each image block in a frame has the highest similarity with its adjacent coded image blocks, the reference image blocks are generally the adjacent coded image blocks of the current image block. As shown in FIG. 1, the reference image blocks of the current image block P are A, B, C, and D.
In some embodiments, the upper, upper-right, and left image blocks adjacent to the current image block may also be selected as the reference image blocks; for example, in FIG. 1 the reference image blocks of the current image block P are A, B, and C. If the upper-right image block of the current image block does not exist (when the current image block is in the first column from the right) or image block C has no motion vector, it is replaced by the upper-left image block of the current image block; for example, in FIG. 1 the reference image blocks of the current image block P are then selected as A, B, and D.
In some embodiments, when an image block is coded it is further divided into sub-image blocks; for example, a 16*16-pixel image block is further divided into 4*4-pixel sub-image blocks, as shown in FIG. 2.
In this embodiment, when the first motion vector of the current image block is obtained, the adjacent coded sub-image blocks are taken as the reference image blocks by way of example. For ease of understanding, in this embodiment the adjacent coded sub-image blocks of the current image block are collectively called the adjacent coded image blocks of the current image block; A, B, and C in FIG. 2 are selected as the adjacent coded image blocks of the current image block P, and when C has no motion vector, D replaces C as an adjacent coded image block of the current image block P.
Please refer to FIG. 3, the coding block diagram used by current mainstream video coding standards. The input frame is divided into macroblocks (image blocks), then intra prediction (intra-frame coding) or motion compensation (inter-frame coding) is performed on the current image block, and a selector chooses the coding mode with the smallest coding cost to obtain the prediction block of the current image block. The difference between the current image block and the prediction block gives the residual, which is transformed, quantized, scanned, and entropy-coded to form the output bitstream.
In the present application, improvements are proposed to the motion estimation and motion compensation parts. In the motion estimation part, the derivation of the first motion vector MVL1 uses the motion vector information of the adjacent coded image blocks of the current block, and the second motion vector MVL2 is obtained by a joint motion estimation search with reference to MVL1. In the motion compensation part, the final prediction block is obtained as the weighted average of the first and second prediction blocks pointed to by MVL1 and MVL2. In this embodiment, only one motion vector (MVL2) and the residual information between the current image block and the final prediction block need to be transmitted during entropy coding.
Referring to FIG. 4, this embodiment provides a P-frame based multi-hypothesis motion compensation method, including:
Step 10: Take the adjacent coded image blocks of the current image block as the reference image blocks and obtain the first motion vector MVL1 of the current image block from the motion vectors of the reference image blocks; the first motion vector MVL1 points to the first prediction block PL1.
Step 20: Taking the first motion vector MVL1 as a reference value, perform joint motion estimation on the current image block to obtain the second motion vector MVL2 of the current image block; the second motion vector MVL2 points to the second prediction block PL2.
Step 30: Perform a weighted average of the first prediction block PL1 and the second prediction block PL2 to obtain the final prediction block PL of the current image block.
In Step 10 of this embodiment, since the reference image blocks are selected as shown in FIG. 2, the description takes three of the adjacent coded image blocks of the current image block as the reference image blocks; that is, A, B, and C in FIG. 2 are the reference image blocks of the current image block, and when C has no motion vector, D is selected to replace C as a reference image block of the current image block. In other embodiments, in order to obtain a first motion vector related to the surrounding image blocks of the current image block, other adjacent coded image blocks around the current image block may also be selected as reference image blocks.
In this embodiment, the first motion vector MVL1 is obtained from the adjacent coded image blocks A, B, C, and D (reference image blocks) of the current image block, as shown in formula (1):
MVL1 = f(MVA, MVB, MVC, MVD) …… (1)
where MVA, MVB, MVC, and MVD are the motion vectors of the four reference image blocks, and f is a function of these four motion vectors.
Referring to FIG. 5, obtaining the first motion vector MVL1 of the current image block from the motion vectors of the reference image blocks in Step 10 includes:
Step 101: Determine whether only one of the three reference image blocks A, B, and C has a motion vector; a block with a motion vector is valid, and a block without one is invalid. If so, go to Step 102; otherwise go to Step 103. When reference image block C is invalid, reference image block D is selected to replace C.
Step 102: Use the motion vector of the valid reference image block as the first motion vector MVL1 of the current image block.
Step 103: Determine whether the horizontal/vertical component of the motion vector of one of the three reference image blocks A, B, and C is opposite in sign to the horizontal/vertical components of the motion vectors of the other two reference image blocks. If so, go to Step 104; otherwise go to Step 105.
Step 104: Use the average of the horizontal components of the motion vectors of the two reference image blocks with the same sign as the horizontal component of the first motion vector MVL1 of the current image block.
Steps 103 and 104 can be expressed as follows. Denote the horizontal components of the motion vectors of reference image blocks A, B, and C as MVAx, MVBx, MVCx and the vertical components as MVAy, MVBy, MVCy; denote the horizontal and vertical components of the first motion vector MVL1 as MVL1x and MVL1y. Then:
If MVAx<0 and MVBx>0 and MVCx>0, or MVAx>0 and MVBx<0 and MVCx<0, then MVL1x = (MVBx + MVCx)/2.
If MVBx<0 and MVAx>0 and MVCx>0, or MVBx>0 and MVAx<0 and MVCx<0, then MVL1x = (MVAx + MVCx)/2.
If MVCx<0 and MVAx>0 and MVBx>0, or MVCx>0 and MVAx<0 and MVBx<0, then MVL1x = (MVAx + MVBx)/2.
If MVAy<0 and MVBy>0 and MVCy>0, or MVAy>0 and MVBy<0 and MVCy<0, then MVL1y = (MVBy + MVCy)/2.
If MVBy<0 and MVAy>0 and MVCy>0, or MVBy>0 and MVAy<0 and MVCy<0, then MVL1y = (MVAy + MVCy)/2.
If MVCy<0 and MVAy>0 and MVBy>0, or MVCy>0 and MVAy<0 and MVBy<0, then MVL1y = (MVAy + MVBy)/2.
Step 105: Compute the horizontal/vertical distance between any two of the three reference image blocks A, B, C. Denoting the horizontal/vertical distances between A and B, B and C, and A and C as ABSVABx, ABSVBCx, ABSVCAx and ABSVABy, ABSVBCy, ABSVCAy respectively, that is:

ABSVABx = |MVAx − MVBx|, ABSVBCx = |MVBx − MVCx|,
ABSVCAx = |MVCx − MVAx|, ABSVABy = |MVAy − MVBy|,
ABSVBCy = |MVBy − MVCy|, ABSVCAy = |MVCy − MVAy|.
Step 106: Take the average of the horizontal/vertical motion vector components of the two reference image blocks with the smallest distance as the horizontal/vertical component of the first motion vector MVL1, that is:

If ABSVABx < ABSVBCx and ABSVABx < ABSVCAx, then MVL1x = (MVAx + MVBx)/2;
if ABSVBCx < ABSVABx and ABSVBCx < ABSVCAx, then MVL1x = (MVBx + MVCx)/2;
if ABSVCAx < ABSVABx and ABSVCAx < ABSVBCx, then MVL1x = (MVAx + MVCx)/2;
if ABSVABy < ABSVBCy and ABSVABy < ABSVCAy, then MVL1y = (MVAy + MVBy)/2;
if ABSVBCy < ABSVABy and ABSVBCy < ABSVCAy, then MVL1y = (MVBy + MVCy)/2;
if ABSVCAy < ABSVABy and ABSVCAy < ABSVBCy, then MVL1y = (MVAy + MVCy)/2.
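As an illustration, the per-component derivation of steps 101-106 can be sketched in code as follows. This is only a sketch: the function names are illustrative (not from the patent), and integer floor division stands in for the /2 in the formulas.

```python
def derive_component(a, b, c):
    """Derive one component (horizontal or vertical) of MVL1 from the
    corresponding components a, b, c of reference blocks A, B, C."""
    comps = (a, b, c)
    # Steps 103/104: if one component is opposite in sign to the other two,
    # average the two same-direction components.
    for i in range(3):
        others = [comps[j] for j in range(3) if j != i]
        if (comps[i] < 0 and all(o > 0 for o in others)) or \
           (comps[i] > 0 and all(o < 0 for o in others)):
            return (others[0] + others[1]) // 2
    # Steps 105/106: otherwise average the two closest components.
    ab, bc, ca = abs(a - b), abs(b - c), abs(c - a)
    if ab < bc and ab < ca:
        return (a + b) // 2
    if bc < ab and bc < ca:
        return (b + c) // 2
    return (c + a) // 2

def derive_mvl1(mva, mvb, mvc):
    """Each argument is an (x, y) motion vector, or None if that block is
    invalid; as in step 101, D is assumed to have already replaced an
    invalid C before this function is called."""
    valid = [mv for mv in (mva, mvb, mvc) if mv is not None]
    if len(valid) == 1:  # steps 101/102: exactly one valid block
        return valid[0]
    # The remaining steps assume all three blocks are valid.
    return (derive_component(mva[0], mvb[0], mvc[0]),
            derive_component(mva[1], mvb[1], mvc[1]))
```

For example, with component values (-2, 4, 6) the first component is opposite in sign to the other two, so the result is the average of 4 and 6.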
The second motion vector MVL2 is derived by joint motion estimation with the first motion vector MVL1 as a reference value, as given by formula (2):

MVL2 = f(MVL1) …… (2)

where f is a joint motion estimation function of the first motion vector MVL1.
In this example, the estimation process of the joint motion estimation used for the second motion vector MVL2 is the same as a conventional motion estimation process (for example, conventional B-frame motion estimation) and is therefore not described again here. Since MVL2 is derived by joint motion estimation with reference to MVL1 in this embodiment, the motion vector within the search range that minimizes the Lagrangian cost function of formula (3) is taken as the second motion vector MVL2.
J(λsad, MVL2) = Dsad(S, MVL2, MVL1) + λsad · R(MVL2 − MVL2pred) …… (3)

where MVL2pred is the predicted value of MVL2, R(MVL2 − MVL2pred) is the number of bits for coding the motion vector residual, λsad is a weighting coefficient of R(MVL2 − MVL2pred), and Dsad(S, MVL2, MVL1) is the residual between the current image block S and the prediction block, obtained from formula (4):
Dsad(S, MVL2, MVL1) = Σ(x,y) | S(x,y) − ((Sref(x+MVL2x, y+MVL2y) + Sref(x+MVL1x, y+MVL1y)) >> 1) | …… (4)

where (x, y) is the relative coordinate, within the current frame, of a pixel of the current image block S; MVL1x, MVL1y and MVL2x, MVL2y are the horizontal and vertical components of MVL1 and MVL2 respectively; and Sref denotes the reference frame.
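Formulas (3) and (4) can be sketched as follows, assuming frames stored as NumPy arrays of 8-bit samples and an externally supplied rate estimate for the motion-vector residual; all names here are illustrative, not from the patent.

```python
import numpy as np

def d_sad(cur, sref, x0, y0, mvl1, mvl2):
    """Formula (4): SAD between the current block and the average (>> 1)
    of the two reference blocks displaced by MVL1 and MVL2."""
    h, w = cur.shape
    p1 = sref[y0 + mvl1[1]:y0 + mvl1[1] + h, x0 + mvl1[0]:x0 + mvl1[0] + w]
    p2 = sref[y0 + mvl2[1]:y0 + mvl2[1] + h, x0 + mvl2[0]:x0 + mvl2[0] + w]
    pred = (p1.astype(np.int32) + p2.astype(np.int32)) >> 1
    return int(np.abs(cur.astype(np.int32) - pred).sum())

def j_cost(cur, sref, x0, y0, mvl1, mvl2, mvl2_pred, lam_sad, rate_bits):
    """Formula (3): Lagrangian cost, where rate_bits(dx, dy) estimates the
    bits needed to code the motion-vector residual MVL2 - MVL2pred."""
    dx = mvl2[0] - mvl2_pred[0]
    dy = mvl2[1] - mvl2_pred[1]
    return d_sad(cur, sref, x0, y0, mvl1, mvl2) + lam_sad * rate_bits(dx, dy)
```

A joint motion estimation search would then evaluate this cost for every candidate MVL2 in the search range and keep the minimizer.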
Referring to Fig. 6, which illustrates how the prediction block of the current image block is obtained in this embodiment: the frame at time t−1 serves as the forward reference frame and the frame at time t is the current frame. In step 30, the final prediction block PL of the current image block S is obtained as the weighted average of the first prediction block PL1 and the second prediction block PL2, i.e. PL = a·PL1 + b·PL2, where a and b are weighting coefficients with a + b = 1. In this embodiment a = b = 1/2, i.e. PL1 and PL2 each have a weight of 1/2.
In this embodiment, after the final prediction block of the current image block is obtained, the method further comprises: adding the residual information between the current image block and the final prediction block, together with the second motion vector MVL2, to the bitstream of the current image block. Since the bitstream contains only one motion vector (MVL2), the P frame-based multi-hypothesis motion compensation method provided in this embodiment improves the accuracy of P-frame prediction blocks without increasing the bitrate.
Referring to Fig. 7, the decoding block diagram adopted in this embodiment: at the decoder, the input bitstream undergoes entropy decoding, inverse quantization and inverse transform, and a selector chooses between intra and inter coding. For inter coding, the prediction block of the current image block is obtained from the decoded information and the reconstructed frames in the reference buffer, and the prediction block is added to the residual block to obtain the reconstructed block. In this application, the value of MVL1 can be derived at the decoder by the same derivation as at the encoder, while the value of MVL2 is obtained by entropy decoding; MVL1 and MVL2 point to the corresponding prediction blocks PL1 and PL2 in the reconstructed reference frame, and the final prediction block PL is obtained as the weighted average of PL1 and PL2.
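The decoder-side reconstruction described above can be sketched as follows (array names are illustrative; the 1/2-weighted average is implemented with the same >> 1 shift as in formula (4)):

```python
import numpy as np

def reconstruct_block(pl1, pl2, residual):
    """Average the two prediction blocks pointed to by the derived MVL1
    and the decoded MVL2 (weights a = b = 1/2, implemented as >> 1),
    then add back the decoded residual block."""
    pred = (pl1.astype(np.int32) + pl2.astype(np.int32)) >> 1
    return pred + residual.astype(np.int32)
```

A real decoder would additionally clip the result back to the valid sample range (e.g. 0-255 for 8-bit video).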
In a concrete encoding process, the multi-hypothesis motion compensation method provided in this embodiment may be used on its own to encode P frames, or it may be added as a new coding mode among the P-frame coding modes, with the mode decision process finally selecting the coding mode with the smallest coding cost for the P frame.
Those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be carried out by a program instructing the relevant hardware, the program being stored in a computer-readable storage medium, which may include read-only memory, random access memory, magnetic disks, optical disks, and the like.
The above is a further detailed description of this application in connection with specific embodiments, and the specific implementation of this application is not to be regarded as limited to these descriptions. Those of ordinary skill in the art to which this application belongs may make several simple deductions or substitutions without departing from the inventive concept of this application.

Claims

1. A P frame-based multi-hypothesis motion compensation method, comprising: using adjacent encoded image blocks of a current image block as reference image blocks, and deriving a first motion vector of the current image block from motion vectors of the reference image blocks, the first motion vector pointing to a first prediction block;

using the first motion vector as a reference value, performing joint motion estimation on the current image block to obtain a second motion vector of the current image block, the second motion vector pointing to a second prediction block; and taking a weighted average of the first prediction block and the second prediction block to obtain a final prediction block of the current image block.
2. The method of claim 1, wherein using adjacent encoded image blocks of the current image block as reference image blocks comprises:

using three of the adjacent encoded image blocks of the current image block as reference image blocks.
3. The method of claim 2, wherein deriving the first motion vector of the current image block from the motion vectors of the reference image blocks comprises:

when it is determined that exactly one of the three reference image blocks has a motion vector, taking that motion vector as the first motion vector of the current image block; otherwise, continuing with the following steps: when it is determined that the horizontal component of the motion vector of one reference image block is opposite in direction to the horizontal components of the motion vectors of the other two reference image blocks, taking the average of the horizontal components of the motion vectors of those two reference image blocks as the horizontal component of the first motion vector of the current image block; when it is determined that the vertical component of the motion vector of one reference image block is opposite in direction to the vertical components of the motion vectors of the other two reference image blocks, taking the average of the vertical components of the motion vectors of those two reference image blocks as the vertical component of the first motion vector of the current image block; otherwise, continuing with the following steps:

computing the horizontal distance between any two reference image blocks, and taking the average of the horizontal components of the motion vectors of the two reference image blocks with the smallest distance as the horizontal component of the first motion vector of the current image block; and computing the vertical distance between any two reference image blocks, and taking the average of the vertical components of the motion vectors of the two reference image blocks with the smallest distance as the vertical component of the first motion vector of the current image block.
4. The method of claim 1, wherein, when taking the weighted average of the first prediction block and the second prediction block to obtain the final prediction block of the current image block, the weights of the first prediction block and the second prediction block sum to 1.
5. The method of claim 4, wherein the weights of the first prediction block and the second prediction block are each 1/2.
6. The method of any one of claims 1-5, further comprising, after obtaining the final prediction block of the current image block: adding residual information between the current image block and the final prediction block, and the second motion vector, to a bitstream of the current image block.
PCT/CN2013/080172 2013-07-26 2013-07-26 P frame-based multi-hypothesis motion compensation method WO2015010317A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2013/080172 WO2015010317A1 (zh) 2013-07-26 2013-07-26 P frame-based multi-hypothesis motion compensation method
CN201380003167.7A CN104488271B (zh) 2013-07-26 2013-07-26 P frame-based multi-hypothesis motion compensation method
US15/006,147 US10298950B2 (en) 2013-07-26 2016-01-26 P frame-based multi-hypothesis motion compensation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2013/080172 WO2015010317A1 (zh) 2013-07-26 2013-07-26 P frame-based multi-hypothesis motion compensation method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/006,147 Continuation-In-Part US10298950B2 (en) 2013-07-26 2016-01-26 P frame-based multi-hypothesis motion compensation method

Publications (1)

Publication Number Publication Date
WO2015010317A1 true WO2015010317A1 (zh) 2015-01-29

Family

ID=52392628

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/080172 WO2015010317A1 (zh) 2013-07-26 2013-07-26 P frame-based multi-hypothesis motion compensation method

Country Status (3)

Country Link
US (1) US10298950B2 (zh)
CN (1) CN104488271B (zh)
WO (1) WO2015010317A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020108560A1 (en) * 2018-11-30 2020-06-04 Mediatek Inc. Video processing methods and apparatuses of determining motion vectors for storage in video coding systems
TWI729402B (zh) * 2018-05-31 2021-06-01 大陸商北京字節跳動網絡技術有限公司 加權交織預測

Families Citing this family (15)

Publication number Priority date Publication date Assignee Title
CN116582679A (zh) 2016-05-24 2023-08-11 Electronics and Telecommunications Research Institute Image encoding/decoding method and recording medium for the method
WO2019000443A1 (zh) * 2017-06-30 2019-01-03 Huawei Technologies Co., Ltd. Method and apparatus for inter-frame prediction
CN112236995B (zh) 2018-02-02 2024-08-06 Apple Inc. Video encoding and decoding methods based on multi-hypothesis motion compensation, and corresponding encoder and decoder
US11924440B2 (en) 2018-02-05 2024-03-05 Apple Inc. Techniques of multi-hypothesis motion compensation
JP7104186B2 (ja) 2018-06-05 2022-07-20 Beijing Bytedance Network Technology Co., Ltd. Interaction between IBC and ATMVP
CN110636298B (zh) 2018-06-21 2022-09-13 Beijing Bytedance Network Technology Co., Ltd. Unified constraints for Merge affine mode and non-Merge affine mode
CN115426497A (zh) 2018-06-21 2022-12-02 Douyin Vision Co., Ltd. Sub-block motion vector inheritance between colour components
WO2020058955A1 (en) 2018-09-23 2020-03-26 Beijing Bytedance Network Technology Co., Ltd. Multiple-hypothesis affine mode
CN110944193B (zh) 2018-09-24 2023-08-11 Beijing Bytedance Network Technology Co., Ltd. Weighted bi-prediction in video encoding and decoding
WO2020084470A1 (en) 2018-10-22 2020-04-30 Beijing Bytedance Network Technology Co., Ltd. Storage of motion parameters with clipping for affine mode
CN112970262B (zh) 2018-11-10 2024-02-20 Beijing Bytedance Network Technology Co., Ltd. Rounding in triangular prediction mode
WO2020098714A1 (en) 2018-11-13 2020-05-22 Beijing Bytedance Network Technology Co., Ltd. Multiple hypothesis for sub-block prediction blocks
CN113597760B (zh) 2019-01-02 2024-08-16 Beijing Bytedance Network Technology Co., Ltd. Video processing method
KR102330781B1 (ko) * 2020-01-17 2021-11-24 주식회사 와이젯 Image processing method in a wireless environment
US11625938B2 (en) 2020-12-29 2023-04-11 Industrial Technology Research Institute Method and device for detecting human skeletons

Citations (6)

Publication number Priority date Publication date Assignee Title
CN1523896A (zh) * 2003-09-12 2004-08-25 Zhejiang University Method and apparatus for motion vector prediction in video encoding and decoding
US20060280253A1 (en) * 2002-07-19 2006-12-14 Microsoft Corporation Timestamp-Independent Motion Vector Prediction for Predictive (P) and Bidirectionally Predictive (B) Pictures
CN101176350A (zh) * 2005-05-26 2008-05-07 NTT Docomo, Inc. Method and apparatus for encoding motion and prediction weighting parameters
CN101272494A (zh) * 2008-01-25 2008-09-24 Zhejiang University Video encoding/decoding method and apparatus using a synthesized reference frame
CN101610413A (zh) * 2009-07-29 2009-12-23 Tsinghua University Video encoding/decoding method and apparatus
CN102668562A (zh) * 2009-10-20 2012-09-12 Thomson Licensing Motion vector prediction and refinement

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
DE50103996D1 (de) * 2000-04-14 2004-11-11 Siemens Ag Method and device for storing and processing image information of temporally successive images
US7003035B2 (en) * 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
EP2039171B1 (en) * 2006-07-07 2016-10-05 Telefonaktiebolaget LM Ericsson (publ) Weighted prediction for video coding
CN101557514B (zh) * 2008-04-11 2011-02-09 Huawei Technologies Co., Ltd. Inter-frame prediction encoding/decoding method, apparatus and system
US8175163B2 (en) * 2009-06-10 2012-05-08 Samsung Electronics Co., Ltd. System and method for motion compensation using a set of candidate motion vectors obtained from digital video
US8873626B2 (en) * 2009-07-02 2014-10-28 Qualcomm Incorporated Template matching for video coding
WO2012096173A1 (ja) * 2011-01-12 2012-07-19 Panasonic Corporation Moving picture encoding method and moving picture decoding method
US9531990B1 (en) * 2012-01-21 2016-12-27 Google Inc. Compound prediction using multiple sources or prediction modes

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US20060280253A1 (en) * 2002-07-19 2006-12-14 Microsoft Corporation Timestamp-Independent Motion Vector Prediction for Predictive (P) and Bidirectionally Predictive (B) Pictures
CN1523896A (zh) * 2003-09-12 2004-08-25 Zhejiang University Method and apparatus for motion vector prediction in video encoding and decoding
CN101176350A (zh) * 2005-05-26 2008-05-07 NTT Docomo, Inc. Method and apparatus for encoding motion and prediction weighting parameters
CN101272494A (zh) * 2008-01-25 2008-09-24 Zhejiang University Video encoding/decoding method and apparatus using a synthesized reference frame
CN101610413A (zh) * 2009-07-29 2009-12-23 Tsinghua University Video encoding/decoding method and apparatus
CN102668562A (zh) * 2009-10-20 2012-09-12 Thomson Licensing Motion vector prediction and refinement

Cited By (7)

Publication number Priority date Publication date Assignee Title
TWI729402B (zh) * 2018-05-31 2021-06-01 Beijing Bytedance Network Technology Co., Ltd. Weighted interleaved prediction
WO2020108560A1 (en) * 2018-11-30 2020-06-04 Mediatek Inc. Video processing methods and apparatuses of determining motion vectors for storage in video coding systems
CN113170174A (zh) * 2018-11-30 2021-07-23 MediaTek Inc. Video processing method and apparatus for determining motion vectors for storage in a video coding system
TWI737055B (zh) * 2018-11-30 2021-08-21 MediaTek Inc. Video processing method and apparatus for determining motion vectors for storage in a video coding system
US11290739B2 (en) 2018-11-30 2022-03-29 Mediatek Inc. Video processing methods and apparatuses of determining motion vectors for storage in video coding systems
US11785242B2 (en) 2018-11-30 2023-10-10 Hfi Innovation Inc. Video processing methods and apparatuses of determining motion vectors for storage in video coding systems
CN113170174B (zh) 2018-11-30 2024-04-12 HFI Innovation Inc. Video processing method and apparatus for determining motion vectors for storage in a video coding system

Also Published As

Publication number Publication date
US20160142728A1 (en) 2016-05-19
CN104488271B (zh) 2019-05-07
US10298950B2 (en) 2019-05-21
CN104488271A (zh) 2015-04-01

Similar Documents

Publication Publication Date Title
WO2015010317A1 (zh) P frame-based multi-hypothesis motion compensation method
WO2015010319A1 (zh) P frame-based multi-hypothesis motion compensation encoding method
CN111385569B (zh) Encoding and decoding method and device
WO2020135034A1 (zh) Video encoding and decoding
TWI738251B (zh) Apparatus for decoding an image
CN103647972B (zh) Moving picture decoding method and moving picture encoding method
US8098731B2 (en) Intraprediction method and apparatus using video symmetry and video encoding and decoding method and apparatus
CN102047665B (zh) Moving picture encoding method and moving picture decoding method
JP5061179B2 (ja) Illumination-change-compensated motion prediction encoding and decoding method and apparatus
JP5310614B2 (ja) Moving picture encoding device, moving picture encoding method, moving picture decoding device and moving picture decoding method
CN112887732B (zh) Method and apparatus for combined inter-intra prediction encoding/decoding with configurable weights
CN112449180B (zh) Encoding and decoding method, apparatus and device
KR101078525B1 (ko) Encoding method for multi-view video
JP4642033B2 (ja) Method for obtaining a reference block of an image in a coding scheme with a fixed number of reference frames
KR20130137558A (ko) Method of performing prediction in multi-view video processing
KR20160087206A (ko) Video transcoder and transcoding method
KR20120079561A (ko) Apparatus and method for adaptive intra prediction encoding/decoding using selective multiple prediction
Singh Adaptive Fast Search Block Motion Estimation In Video Compression
KR20120008271A (ko) Apparatus and method for selecting a motion vector predictor using matching of neighboring pixels

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13890012

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13890012

Country of ref document: EP

Kind code of ref document: A1