
WO2009096721A2 - Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation - Google Patents

Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation

Info

Publication number
WO2009096721A2
Authority
WO
WIPO (PCT)
Prior art keywords
affine
current block
transformation
block
motion compensation
Prior art date
Application number
PCT/KR2009/000441
Other languages
French (fr)
Other versions
WO2009096721A3 (en)
Inventor
Dong Hyung Kim
Se Yoon Jeong
Jin Soo Choi
Won Sik Cheong
Kyung Ae Moon
Jin Woo Hong
Original Assignee
Electronics And Telecommunications Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electronics And Telecommunications Research Institute filed Critical Electronics And Telecommunications Research Institute
Priority to US12/865,069 (granted as US8665958B2)
Priority claimed from KR1020090007038A (KR101003105B1)
Publication of WO2009096721A2
Publication of WO2009096721A3

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103 Selection of coding mode or of prediction mode
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N 19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/537 Motion estimation other than block-based
    • H04N 19/543 Motion estimation other than block-based using regions
    • H04N 19/54 Motion estimation other than block-based using feature points or meshes

Definitions

  • The present invention relates to a video encoding method and apparatus and a video decoding method and apparatus in which a video signal can be encoded through affine transformation-based motion compensation.
  • The present invention is based on research (Project Management No.: 2007-S-004-01, Project Title: Development of Rich Media Broadcasting Technology through Advancement of AV codec) conducted as part of the Information Technology (IT) Growth Power Technology Development Project launched by the Ministry of Information and Communication and the Institute for Information Technology Advancement (IITA).
  • Inter-frame encoding such as H.264 video encoding is similar to other various video encoding methods in terms of predicting a current block through block-oriented motion estimation and encoding the predicted current block.
  • However, inter-frame encoding is differentiated from other various video encoding methods by using various macroblock modes and adopting different block sizes for the various macroblock modes so as to perform motion estimation and motion compensation.
  • Inter-frame encoding generally includes performing motion estimation in each of the various macroblock modes, choosing whichever of the various macroblock modes is determined to be optimal in consideration of rate-distortion performance, and encoding a prediction error in the chosen macroblock mode, i.e., the difference(s) between a current block and a block obtained by performing motion estimation on the current block.
  • In inter-frame encoding, motion estimation and motion compensation are performed only in consideration of horizontal and vertical translational motion components. That is, referring to FIG. 1, motion estimation and motion compensation may be performed on a current block only in consideration of horizontal and vertical motions (mvx and mvy) with respect to a reference frame.
  • If motion estimation and motion compensation are performed only in consideration of horizontal and/or vertical motions, coding complexity may decrease, but high encoding efficiency may not be achieved, especially when an object in a picture to be encoded has an affine transformation such as rotation, enlargement or reduction.
  • On the other hand, if motion estimation and motion compensation are performed in consideration of all possible transformations of an object, encoding efficiency may increase, but coding complexity, and particularly the complexity of motion estimation, may increase considerably.
  • The present invention provides a video encoding method and apparatus and a video decoding method and apparatus which can achieve high encoding efficiency even when a block to be encoded includes an affine-transformation object having an affine transformation such as rotation, enlargement or reduction.
  • According to an aspect of the present invention, there is provided a video encoding method including: determining whether a current block includes an affine-transformation object having an affine transformation; if the current block includes an affine-transformation object, generating a prediction block by performing affine transformation-based motion compensation on the current block in consideration of an affine transformation of the affine-transformation object; and if the current block does not include any affine-transformation object, generating a prediction block by performing motion vector-based motion compensation on the current block using a motion vector of the current block.
  • According to another aspect of the present invention, there is provided a video encoding apparatus including: a motion estimation unit calculating a motion vector of a current block with reference to a reference block; an affine-transformation object calculation unit determining whether a current block to be subjected to motion compensation includes an affine-transformation object having an affine transformation and outputting an affine-transformation object detection signal corresponding to the results of the determination; and a motion compensation unit generating a prediction block by performing either affine transformation-based motion compensation or motion vector-based motion compensation on the current block in response to the affine-transformation object detection signal.
  • According to another aspect of the present invention, there is provided a video decoding method including: determining whether an affine-transformation object exists in a reference block; if an affine-transformation object exists in the reference block, generating a predicted block by performing affine transformation-based motion compensation on the reference block; and if no affine-transformation object exists in the reference block, generating the predicted block by performing motion vector-based motion compensation on the reference block.
  • According to another aspect of the present invention, there is provided a video decoding apparatus including: an affine-transformation object detection unit determining whether an affine-transformation object exists in a reference block and outputting a signal indicating the results of the determination; a motion compensation unit generating a predicted block by performing one of affine transformation-based motion compensation and motion vector-based motion compensation on the reference block in response to the signal output by the affine-transformation object detection unit; and an adding unit which generates a current block by adding the predicted block and a residual signal.
  • According to another aspect of the present invention, there is provided a computer-readable recording medium having recorded thereon a program for executing a video encoding method including: determining whether a current block includes an affine-transformation object having an affine transformation; if the current block includes an affine-transformation object, generating a prediction block by performing affine transformation-based motion compensation on the current block in consideration of an affine transformation of the affine-transformation object; and if the current block does not include any affine-transformation object, generating a prediction block by performing motion vector-based motion compensation on the current block using a motion vector of the current block.
  • According to another aspect of the present invention, there is provided a computer-readable recording medium having recorded thereon a program for executing a video decoding method including: determining whether an affine-transformation object exists in a reference block; if an affine-transformation object exists in the reference block, generating a predicted block by performing affine transformation-based motion compensation on the reference block; and if no affine-transformation object exists in the reference block, generating the predicted block by performing motion vector-based motion compensation on the reference block.
  • According to the present invention, affine transformation-based motion estimation/compensation may be performed on each block including an affine-transformation object having an affine transformation.
  • According to the present invention, it is possible to establish an affine model based only on the motion in a previously-encoded macroblock.
  • Therefore, the present invention can be readily applied to an encoding apparatus (such as an H.264 encoding apparatus) performing encoding in units of macroblocks.
  • FIG. 1 illustrates a diagram for explaining conventional motion estimation and compensation methods in which only horizontal and vertical translational motions are considered;
  • FIG. 2 illustrates a diagram for explaining a typical inter-frame encoding method;
  • FIG. 3 illustrates a block diagram of a video encoding apparatus according to an exemplary embodiment of the present invention;
  • FIG. 4 illustrates a diagram for explaining a video encoding method according to an exemplary embodiment of the present invention;
  • FIG. 5 illustrates a diagram for explaining how to divide an 8×8 block into eight triangular blocks;
  • FIG. 6 illustrates a diagram for explaining motion vectors used to deduce an affine transformation at each of a plurality of triangular blocks in an 8×8 block; and
  • FIGS. 7 and 8 illustrate diagrams of affine transformations that can be used in the present invention.
  • FIG. 2 illustrates a diagram for explaining a typical inter-frame encoding method.
  • The typical inter-frame encoding method may largely involve four phases: phases 1 through 4.
  • Phases 1 and 2 may be phases for estimating motion. More specifically, in phase 1, a motion vector for each of an inter 16×16 block, inter 16×8 blocks, and inter 8×16 blocks may be estimated. In phase 2, a motion vector for each of a plurality of sub-blocks of an inter 8×8 block, i.e., a motion vector for each of an inter 8×8 block, inter 8×4 blocks, inter 4×8 blocks, and inter 4×4 blocks, may be estimated.
  • In phase 3, a sub-macroblock mode may be chosen for a sub-macroblock in an inter 8×8 macroblock by using a rate-distortion function.
  • The rate-distortion function may be represented by Equation (1):
  • In Equation (1), Rate indicates the bitrate used to encode a prediction error (i.e., the differences between the block currently being encoded and a restored block obtained by compensation using a motion vector of the current block) and side information such as a motion vector, and Distortion indicates the sum of the squares of the differences between the current block and the restored block.
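In hybrid coders of this kind, a rate-distortion function of the sort described above typically takes the Lagrangian form J = Distortion + λ·Rate. The following sketch illustrates mode selection with such a cost; the block values, bit counts, and λ below are made-up illustrative numbers, not values from the patent:

```python
def ssd(block_a, block_b):
    # Distortion: sum of squared differences between current and restored block
    return sum((a - b) ** 2 for a, b in zip(block_a, block_b))

def rd_cost(current, restored, rate_bits, lam):
    # Lagrangian rate-distortion cost: J = Distortion + lambda * Rate
    return ssd(current, restored) + lam * rate_bits

# Toy example: choose the macroblock mode with the smallest cost.
current = [10, 12, 14, 16]
candidates = {
    "inter_16x16": ([10, 12, 15, 16], 6),   # (restored block, bits for MV + side info)
    "inter_8x8":   ([10, 12, 14, 16], 20),  # better prediction, but more bits
}
best_mode = min(candidates,
                key=lambda m: rd_cost(current, *candidates[m], lam=0.5))
```

Here the coarser mode wins because its small extra distortion is outweighed by the bits saved, which is exactly the trade-off the rate-distortion function arbitrates.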
  • In phase 4, an optimum macroblock mode may be chosen from all available macroblock modes, including a skip mode and an intra macroblock mode, in consideration of rate-distortion performance.
  • In the video encoding and decoding methods according to the present invention, affine transformation-based motion compensation may be applied only in phases 3 and 4 in consideration of coding complexity. That is, only horizontal and vertical translational motions may be taken into consideration during the estimation of a motion vector, and affine transformation-based motion compensation, in which the rotation, enlargement or reduction of an object is considered, may be performed in the motion compensation phase. Therefore, it is possible to minimize coding complexity and provide high encoding efficiency.
  • In addition, affine transformation-based motion compensation may be performed only on blocks that are believed to include affine transformations such as rotation, enlargement and reduction.
  • The video encoding and decoding methods according to the present invention also suggest ways to skip an inverse matrix calculation process for deducing an affine model from blocks to be subjected to affine transformation-based motion compensation. Therefore, it is possible to achieve high encoding efficiency with less computation.
  • FIG. 3 illustrates a block diagram of a video encoding apparatus according to an exemplary embodiment of the present invention.
  • Referring to FIG. 3, the video encoding apparatus may include a motion estimation unit 110, an affine-transformation object calculation unit 120 and a motion compensation unit 130.
  • The motion estimation unit 110 may calculate a motion vector of a current block based on a reference block.
  • The affine-transformation object calculation unit 120 may determine whether the current block includes an affine-transformation object.
  • The motion compensation unit 130 may generate a prediction block by compensating for the current block based on an affine-object-detection signal provided by the affine-transformation object calculation unit 120 or the motion vector provided by the motion estimation unit 110.
  • The video encoding apparatus may also include an encoding unit (not shown) generating a bitstream by encoding a differential signal generated based on the difference(s) between the current block and the prediction block and a signal including side information such as the motion vector of the current block.
  • FIG. 4 illustrates a diagram for explaining a video encoding method according to an exemplary embodiment of the present invention.
  • The video encoding method may be largely divided into two phases: phases 1 and 2 (200 and 220).
  • In phase 1 (200), the affine-transformation object calculation unit 120 may determine whether a current block includes an affine-transformation object.
  • Phase 2 (220) may involve compensating for the current block through affine transformation-based motion compensation using information such as the motion vectors of blocks adjacent to the current block (221) if it is determined in phase 1 that the current block includes an affine-transformation object; and performing typical motion compensation on the current block (223) if it is determined in phase 1 that the current block does not include any affine-transformation object.
  • Phase 1 (200) may involve determining whether the current block includes an affine-transformation object having an affine transformation, based on the motion vector of the current block, the motion vectors of blocks adjacent to the current block, a reference frame, and macroblock mode information used to encode the current block.
  • Two conditions may be used to determine whether the current block includes any affine-transformation object: first, the maximum of the angles between the motion vector of the current block and the motion vectors of blocks adjacent to the current block must be within a predefined range; and second, a maximum variation obtained by applying affine transformation-based motion compensation must be less than a reference value. If the current block satisfies neither the first nor the second condition, the current block may not be subjected to affine transformation-based motion compensation.
  • The current block may also not be subjected to affine transformation-based motion compensation if at least one of the blocks adjacent to the current block is intra-encoded, if the current block is located at the upper left corner of a corresponding frame, or if the current block references a different reference frame from the blocks adjacent to the current block.
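A minimal sketch of this phase-1 eligibility test follows. The function names, the 45-degree threshold, and the boolean flags are illustrative assumptions, not values or identifiers taken from the patent:

```python
import math

def angle_between(v, w):
    # Angle in degrees between two motion vectors
    dot = v[0] * w[0] + v[1] * w[1]
    norm = math.hypot(v[0], v[1]) * math.hypot(w[0], w[1])
    if norm == 0:
        return 0.0
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def eligible_for_affine(mv, neighbor_mvs, neighbor_intra, at_upper_left,
                        same_ref_frame, max_angle_deg=45.0):
    # Exclusions described above: intra-coded neighbor, upper-left corner,
    # or a different reference frame -> ordinary motion compensation instead
    if neighbor_intra or at_upper_left or not same_ref_frame:
        return False
    # First condition: the largest angle between the current MV and any
    # neighboring MV must fall within a predefined range
    worst = max(angle_between(mv, n) for n in neighbor_mvs)
    return worst <= max_angle_deg
```

The second condition (a maximum variation under affine compensation below a reference value) would be checked analogously once the affine model is available.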
  • Video encoding or decoding may be performed in units of 8×8 blocks. If it is determined in phase 1 that the current block includes an affine-transformation object, affine transformation-based motion compensation may be performed on the current block by using only the motion vectors within a range that establishes causality. Therefore, it is possible to address the problems associated with two-pass coding, such as high coding complexity.
  • Referring to FIG. 5, an 8×8 block may be divided into eight triangular blocks 300 through 307.
  • The triangular blocks 300 through 307 may be motion-compensated using different affine models.
  • FIG. 6 illustrates motion vectors used to deduce an affine model for each of a plurality of triangular blocks in an 8×8 block.
  • The affine model for each of a plurality of triangular blocks (i.e., blocks 0 through 7) in a current (8×8) block may vary according to the macroblock mode of the current block and the macroblock modes of a number of blocks adjacent to the current block. If the current block is located at the lower right corner of a macroblock and the macroblock mode of the current block is a 16×16 mode, the motion vectors of blocks 0 through 7 may all be the same.
  • In this case, the affine models for blocks 0 through 7 may all include translations only and may thus have the same model formula.
  • An affine transformation formula between (x, y) and (x′, y′) may be represented by Equation (2):

        x′ = a·x + b·y + c
        y′ = d·x + e·y + f
  • A total of six equations may be required to determine the values of the parameters a, b, c, d, e and f in Equation (2). For this, at least three displacement values for (x, y) may be required. If there are more than three displacement values, a least-squares solution may be used to determine the values of the parameters a, b, c, d, e and f in Equation (2).
  • An affine model for each of a plurality of triangular blocks in an 8×8 block may be deduced using variations at the apexes of each of the triangular blocks.
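To make the parameter-fitting step concrete, here is a small self-contained sketch that recovers the six parameters of a generic affine model x′ = a·x + b·y + c, y′ = d·x + e·y + f from exactly three displacement points. The patent's own matrix formulation is not reproduced; the generic solver below stands in for it:

```python
def solve3(A, b):
    # Solve a 3x3 linear system by Gauss-Jordan elimination with partial pivoting
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def affine_from_points(src, dst):
    # Each correspondence (x, y) -> (x', y') contributes one row [x, y, 1];
    # the system splits into two 3-unknown systems, one for (a, b, c) and
    # one for (d, e, f).
    A = [[x, y, 1.0] for x, y in src]
    a, b, c = solve3(A, [x for x, _ in dst])
    d, e, f = solve3(A, [y for _, y in dst])
    return a, b, c, d, e, f
```

With more than three correspondences, the same row structure feeds a least-squares fit instead of an exact solve, as noted above.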
  • In Equations (3), there is no need to calculate the inverse matrix of matrix A, i.e., A⁻¹, because the inverse 6×6 matrix A⁻¹ can be easily obtained from eight inverse matrices respectively corresponding to blocks 0 through 7, which are all calculated in advance. Thus, it is possible to reduce coding complexity.
  • FIG. 7 illustrates the case in which a current block to be encoded includes an object which is reduced by 1/2 with respect to the vertical axis of a previous frame and is inclined to the right at an angle of 45 degrees.
  • Three points of displacement for obtaining an affine model for block 0 of a current block are (x0, y0) → (x0+mvx0, y0+mvy0), (x1, y1) → (x1+mvx1, y1+mvy1), and (x2, y2) → (x2+mvx2, y2+mvy2).
  • The minimum size of macroblocks that can have a motion vector is 4×4.
  • Motion vectors mvx0 through mvx2 may be different from one another.
  • All 4×4 blocks in the current block have the same motion vector if the minimum size of macroblocks that can have a motion vector is 4×4 in the blocks adjacent to the current block.
  • An affine model for block 0 may be obtained using the three points of displacement, as indicated by Equations (4):
  • In Equations (4), matrix A includes the coordinates of the current block and the coordinates of each of the blocks adjacent to the current block. If the point (x0, y0) is mapped to the origin (0, 0), matrix A can be commonly applied to blocks 0 through 7 regardless of the position of the current block in a corresponding macroblock.
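The position-independence of matrix A can be checked directly. In this sketch the coordinates and the helper name are illustrative; each apex contributes a row [x, y, 1], with the triangle's first apex mapped to the origin:

```python
def design_matrix(apexes, origin):
    # Rows [x, y, 1] of matrix A, with the triangle's first apex mapped to (0, 0)
    ox, oy = origin
    return [[float(x - ox), float(y - oy), 1.0] for x, y in apexes]

# The same triangular block shape taken from two different macroblock positions
# (coordinates are illustrative).
tri_a = [(0, 0), (32, 0), (0, 32)]
tri_b = [(128, 64), (160, 64), (128, 96)]

A_a = design_matrix(tri_a, tri_a[0])
A_b = design_matrix(tri_b, tri_b[0])
# Identical matrices: their inverses can be precomputed once per triangle shape,
# so no inverse needs to be calculated at encoding time.
same = (A_a == A_b)
```

This is what allows the eight per-triangle inverse matrices to be computed in advance and reused everywhere in the frame.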
  • Equations (4) may be transformed into Equations (5), and Equations (6) may be obtained by applying (x1, y1) to Equations (5).
  • Motion estimation may be performed in units of 1/4 pixels, and thus the distance between a pair of adjacent pixels may be 4. Therefore, if a pixel at a point (4, -12) is determined to have been moved to (4+mvx2+Δx, -12+mvy2+Δy) based on an affine model, the pixel may be determined to have the same displacement (Δx, Δy) at any arbitrary block location. This is very important for the reduction of computation because, according to the present invention, it is possible to easily obtain an affine model simply using eight inverse matrices respectively corresponding to blocks 0 through 7, without the need to calculate the inverse matrix of matrix A.
  • An affine model for each of blocks 1 through 7 may be obtained using the same method used to obtain the affine model for block 0.
  • Motion compensation may then be performed on the current block, as indicated by Equation (7):
  • Affine transformation-based motion compensation may thus be performed on the current block, thereby maintaining high encoding efficiency.
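A hedged sketch of the compensation step itself follows. It assumes the generic six-parameter affine model and nearest-integer sampling with border clamping, all of which are illustrative choices rather than the patent's exact Equation (7):

```python
def affine_compensate(ref, params, coords):
    # Predict pixels by mapping each (x, y) through the affine model and
    # sampling the reference frame at the nearest integer position.
    a, b, c, d, e, f = params
    h, w = len(ref), len(ref[0])
    pred = []
    for x, y in coords:
        xp = int(round(a * x + b * y + c))
        yp = int(round(d * x + e * y + f))
        xp = min(max(xp, 0), w - 1)   # clamp to frame bounds
        yp = min(max(yp, 0), h - 1)
        pred.append(ref[yp][xp])
    return pred
```

A production coder would use sub-pel interpolation at this point rather than rounding, consistent with the quarter-pel motion described above.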
  • A video decoding method according to an exemplary embodiment of the present invention may be performed by inversely performing the above-mentioned video encoding method. That is, it may be determined whether an affine-transformation object exists in a reference block. Thereafter, if an affine-transformation object exists in the reference block, a predicted block may be generated by performing affine transformation-based motion compensation on the reference block. On the other hand, if no affine-transformation object exists in the reference block, a predicted block may be generated by performing motion vector-based motion compensation on the reference block. Thereafter, a current block may be generated using a predicted block and a residual signal included in a video signal to be decoded. Therefore, a video decoding apparatus according to an exemplary embodiment of the present invention, unlike a typical video decoding apparatus, may include an affine-transformation object calculation unit determining whether an affine-transformation object exists in the reference block.
  • The present invention can be realized as computer-readable code written on a computer-readable recording medium.
  • The computer-readable recording medium may be any type of recording device in which data is stored in a computer-readable manner. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disc, an optical data storage, and a carrier wave (e.g., data transmission through the Internet).
  • The computer-readable recording medium can be distributed over a plurality of computer systems connected to a network so that computer-readable code is written thereto and executed therefrom in a decentralized manner. Functional programs, code, and code segments needed for realizing the present invention can be easily construed by one of ordinary skill in the art.
  • The present invention can be effectively applied to the encoding or decoding of a video signal and can thus achieve high efficiency, especially when a block to be encoded includes an affine-transformation object having an affine transformation such as rotation, enlargement or reduction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video encoding method and apparatus are provided. The video encoding method includes determining whether a current block includes an affine-transformation object having an affine transformation; if the current block includes an affine-transformation object, generating a prediction block by performing affine transformation-based motion compensation on the current block in consideration of an affine transformation of the affine-transformation object; and if the current block does not include any affine-transformation object, generating a prediction block by performing motion vector-based motion compensation on the current block using a motion vector of the current block. Therefore, it is possible to achieve high video encoding/decoding efficiency even when a block to be encoded or decoded includes an affine transformation.

Description

METHOD AND APPARATUS FOR ENCODING AND DECODING VIDEO SIGNAL USING MOTION COMPENSATION BASED ON AFFINE TRANSFORMATION
The present invention relates to a video encoding method and apparatus and a video decoding method and apparatus in which a video signal can be encoded through affine transformation-based motion compensation.
The present invention is based on research (Project Management No.: 2007-S-004-01, Project Title: Development of Rich Media Broadcasting Technology through Advancement of AV codec) conducted as part of the Information Technology (IT) Growth Power Technology Development Project launched by the Ministry of Information and Communication and the Institute for Information Technology Advancement (IITA).
Inter-frame encoding such as H.264 video encoding is similar to other various video encoding methods in terms of predicting a current block through block-oriented motion estimation and encoding the predicted current block. However, inter-frame encoding is differentiated from other various video encoding methods by using various macroblock modes and adopting different block sizes for the various macroblock modes so as to perform motion estimation and motion compensation. Inter-frame encoding generally includes performing motion estimation in each of the various macroblock modes, choosing whichever of the various macroblock modes is determined to be optimal in consideration of rate-distortion performance, and encoding a prediction error in the chosen macroblock mode, i.e., the difference(s) between a current block and a block obtained by performing motion estimation on the current block.
In inter-frame encoding, like in other various video encoding methods, motion estimation and motion compensation are performed only in consideration of horizontal and vertical translational motion components. That is, referring to FIG. 1, motion estimation and motion compensation may be performed on a current block only in consideration of horizontal and vertical motions (mvx and mvy) with respect to a reference frame.
If motion estimation and motion compensation are performed only in consideration of horizontal and/or vertical motions, coding complexity may decrease, but high encoding efficiency may not be achieved, especially when an object in a picture to be encoded has an affine transformation such as rotation, enlargement or reduction. On the other hand, if motion estimation and motion compensation are performed in consideration of all possible transformations of an object, encoding efficiency may increase, but coding complexity, and particularly the complexity of motion estimation, may increase considerably.
The present invention provides a video encoding method and apparatus and a video decoding method and apparatus which can achieve high encoding efficiency even when a block to be encoded includes an affine-transformation object having an affine transformation such as rotation, enlargement or reduction.
According to an aspect of the present invention, there is provided a video encoding method including determining whether a current block includes an affine-transformation object having an affine transformation; if the current block includes an affine-transformation object, generating a prediction block by performing affine transformation-based motion compensation on the current block in consideration of an affine transformation of the affine-transformation object; and if the current block does not include any affine-transformation object, generating a prediction block by performing motion vector-based motion compensation on the current block using a motion vector of the current block.
According to another aspect of the present invention, there is provided a video encoding apparatus including a motion estimation unit calculating a motion vector of a current block with reference to a reference block; an affine-transformation object calculation unit determining whether a current block to be subjected to motion compensation includes an affine-transformation object having an affine transformation and outputting an affine-transformation object detection signal corresponding to the results of the determination; and a motion compensation unit generating a prediction block by performing either affine transformation-based motion compensation or motion vector-based motion compensation on the current block in response to the affine-transformation object detection signal.
According to another aspect of the present invention, there is provided a video decoding method including determining whether an affine-transformation object exists in a reference block; if an affine-transformation object exists in the reference block, generating a predicted block by performing affine transformation-based motion compensation on the reference block; and if no affine-transformation object exists in the reference block, generating the predicted block by performing motion vector-based motion compensation on the reference block.
According to another aspect of the present invention, there is provided a video decoding apparatus including an affine-transformation object detection unit determining whether an affine-transformation object exists in a reference block and outputting a signal indicating the results of the determination; a motion compensation unit generating a predicted block by performing one of affine transformation-based motion compensation and motion vector-based motion compensation on the reference block in response to the signal output by the affine-transformation object detection unit; and an adding unit which generates a current block by adding the predicted block and a residual signal.
According to another aspect of the present invention, there is provided a computer-readable recording medium having recorded thereon a program for executing a video encoding method including determining whether a current block includes an affine-transformation object having an affine transformation; if the current block includes an affine-transformation object, generating a prediction block by performing affine transformation-based motion compensation on the current block in consideration of an affine transformation of the affine-transformation object; and if the current block does not include any affine-transformation object, generating a prediction block by performing motion vector-based motion compensation on the current block using a motion vector of the current block.
According to another aspect of the present invention, there is provided a computer-readable recording medium having recorded thereon a program for executing a video decoding method including determining whether an affine-transformation object exists in a reference block; if an affine-transformation object exists in the reference block, generating a predicted block by performing affine transformation-based motion compensation on the reference block; and if no affine-transformation object exists in the reference block, generating the predicted block by performing motion vector-based motion compensation on the reference block.
According to the present invention, affine transformation-based motion estimation/compensation may be performed on each block including an affine-transformation object having an affine transformation. Thus, it is possible to overcome the shortcomings of conventional video encoding and decoding methods, in which motion estimation and motion compensation are performed in units of blocks only in consideration of translational motions. Therefore, it is possible to prevent encoding performance from deteriorating even when an object in a block to be encoded rotates, the size or shape of the object changes, or there is camera movement.
In addition, according to the present invention, it is possible to establish an affine model only based on the motion in a previously-encoded macroblock. Thus, the present invention can be readily applied to an encoding apparatus (such as an H.264 encoding apparatus) performing encoding in units of macroblocks.
FIG. 1 illustrates a diagram for explaining conventional motion estimation and compensation methods in which only horizontal and vertical translational motions are considered;
FIG. 2 illustrates a diagram for explaining a typical inter-frame encoding method;
FIG. 3 illustrates a block diagram of a video encoding apparatus according to an exemplary embodiment of the present invention;
FIG. 4 illustrates a diagram for explaining a video encoding method according to an exemplary embodiment of the present invention;
FIG. 5 illustrates a diagram for explaining how to divide an 8×8 block into eight triangular blocks;
FIG. 6 illustrates a diagram for explaining motion vectors used to deduce an affine transformation at each of a plurality of triangular blocks in an 8×8 block; and
FIGS. 7 and 8 illustrate diagrams for affine transformations that can be used in the present invention.
The present invention will hereinafter be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown.
FIG. 2 illustrates a diagram for explaining a typical inter-frame encoding method. Referring to FIG. 2, the typical inter-frame encoding method may largely involve four phases: Phases 1 through 4.
Phases 1 and 2 may be phases for estimating motion. More specifically, in phase 1, a motion vector for each of an inter 16×16 block, inter 16×8 blocks, and inter 8×16 blocks may be estimated. In phase 2, a motion vector for each of a plurality of sub-blocks of an inter 8×8 block, i.e., a motion vector for each of an inter 8×8 block, inter 8×4 blocks, inter 4×8 blocks, and inter 4×4 blocks, may be estimated.
In phase 3, a sub-macroblock mode may be chosen for a sub-macroblock in an inter 8×8 macroblock by using a rate-distortion function. The rate-distortion function may be represented by Equation (1):
J = Distortion + λ · Rate ......... (1)

where λ is a Lagrange multiplier,
where Rate indicates the bitrate used to encode side information, such as a motion vector and a prediction error (i.e., the difference(s) between the block currently being encoded and a restored block obtained by compensation using the current block's motion vector), and Distortion indicates the sum of the squares of the differences between the current block and the restored block.
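As an illustrative sketch (not part of the patent), the mode decision of phase 3 can be expressed as picking the mode with the lowest rate-distortion cost; the λ value, mode names, and per-mode distortion/rate numbers below are purely assumed for illustration:

```python
def rd_cost(distortion, rate, lam):
    """Rate-distortion cost J = Distortion + lambda * Rate (cf. Equation (1))."""
    return distortion + lam * rate

def choose_mode(candidates, lam):
    """Pick the mode with the lowest rate-distortion cost.

    candidates: dict mapping mode name -> (distortion, rate_in_bits).
    """
    return min(candidates, key=lambda m: rd_cost(*candidates[m], lam))

# Illustrative numbers only: smaller partitions lower the distortion
# but cost more bits of side information (motion vectors).
modes = {
    "inter_8x8": (1200, 48),
    "inter_8x4": (900, 80),
    "inter_4x4": (400, 160),
}
best = choose_mode(modes, lam=5.0)
```

Note how a larger λ penalizes the bit-hungry small partitions, steering the decision toward coarser modes, which is exactly the trade-off the rate-distortion function encodes.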
In phase 4, an optimum macroblock mode may be chosen from all macroblock modes available, including a skip mode and an intra macroblock mode, in consideration of rate-distortion performance.
In video encoding and decoding methods according to the present invention, unlike in the typical inter-frame encoding method, affine transformation-based motion compensation may be applied only to phases 3 and 4 in consideration of coding complexity. That is, in the video encoding and decoding methods according to the present invention, only horizontal and vertical translational motions may be taken into consideration during the estimation of a motion vector, and affine transformation-based motion compensation, in which the rotation, enlargement or reduction of an object is considered, may be performed in the phase of motion compensation. Therefore, it is possible to minimize coding complexity and provide high encoding efficiency.
More specifically, in the video encoding and decoding methods according to the present invention, affine transformation-based motion compensation may be performed only on blocks that are believed to include affine transformations such as rotation, enlargement and reduction. Thus, it is possible to minimize coding complexity. In addition, the video encoding and decoding methods according to the present invention suggest ways to skip an inverse matrix calculation process for deducing an affine model from blocks to be subjected to affine transformation-based motion compensation. Therefore, it is possible to achieve high encoding efficiency with less computation.
FIG. 3 illustrates a block diagram of a video encoding apparatus according to an exemplary embodiment of the present invention. Referring to FIG. 3, the video encoding apparatus may include a motion estimation unit 110, an affine-transformation object calculation unit 120 and a motion compensation unit 130.
The motion estimation unit 110 may calculate a motion vector of a current block based on a reference block. The affine-transformation object calculation unit 120 may determine whether the current block includes an affine-transformation object. The motion compensation unit 130 may generate a prediction block by compensating for the current block based on an affine-transformation object detection signal provided by the affine-transformation object calculation unit 120 or the motion vector provided by the motion estimation unit 110. The video encoding apparatus may also include an encoding unit (not shown) generating a bitstream by encoding a differential signal generated based on the difference(s) between the current block and the prediction block and a signal including side information such as the motion vector of the current block.
FIG. 4 illustrates a diagram for explaining a video encoding method according to an exemplary embodiment of the present invention. Referring to FIG. 4, the video encoding method may be largely divided into two phases: phases 1 and 2 (200 and 220). In phase 1 (200), the affine-transformation object calculation unit 120 may determine whether a current block includes an affine-transformation object. Phase 2 (220) may involve compensating for the current block through affine transformation-based motion compensation using information such as the motion vectors of blocks adjacent to the current block (221) if it is determined in phase 1 that the current block includes an affine-transformation object; and performing typical motion compensation on the current block (223) if it is determined in phase 1 that the current block does not include any affine-transformation object.
More specifically, phase 1 (200) may involve determining whether the current block includes an affine-transformation object having an affine transformation, based on the motion vector of the current block, the motion vectors of blocks adjacent to the current block, a reference frame, and macroblock mode information used to encode the current block.
There are two conditions for determining whether the current block includes an affine-transformation object: first, the maximum of the angles between the motion vector of the current block and the motion vectors of blocks adjacent to the current block must be within a predefined range; and second, the maximum variation obtained by applying affine transformation-based motion compensation must be less than a reference value. If the current block fails either the first or the second condition, the current block may not be subjected to affine transformation-based motion compensation.
Even if the current block includes an affine-transformation object, the current block may not be subjected to affine transformation-based motion compensation if at least one of the blocks adjacent to the current block is intra-encoded, if the current block is located on the upper left corner of a corresponding frame or if the current block references a different reference frame from the blocks adjacent to the current block.
In short, it may be determined whether the current block includes an affine-transformation object based on the first and second conditions. Thereafter, it may be determined whether to apply typical motion compensation or affine transformation-based motion compensation to the current block based on whether the current block includes an affine-transformation object.
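The two detection conditions can be sketched as follows; the angle computation via dot product, the function names, and the threshold values (`angle_range`, `var_thresh`) are illustrative assumptions, not values specified by the patent:

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two motion vectors (mvx, mvy)."""
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cos_a = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
    return math.degrees(math.acos(cos_a))

def includes_affine_object(mv_cur, mv_neighbors, max_variation,
                           angle_range=(5.0, 90.0), var_thresh=64.0):
    """First condition: the maximum angle between the current block's motion
    vector and its neighbors' motion vectors lies within a predefined range
    (identical vectors suggest pure translation, so no affine object).
    Second condition: the maximum variation under affine transformation-based
    compensation stays below a reference value. Thresholds are illustrative."""
    max_angle = max(angle_between(mv_cur, mv) for mv in mv_neighbors)
    cond1 = angle_range[0] <= max_angle <= angle_range[1]
    cond2 = max_variation < var_thresh
    return cond1 and cond2
```

A block failing either check would fall through to the typical motion vector-based compensation path.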
In the exemplary embodiment of FIG. 4, video encoding or decoding may be performed in units of 8×8 blocks. If it is determined in phase 1 that the current block has an affine-transformation object, affine transformation-based motion compensation may be performed on the current block by using only the motion vectors within a range that establishes causality. Therefore, it is possible to address the problems associated with two-pass coding such as high coding complexity.
Referring to FIG. 5, an 8×8 block may be divided into eight triangular blocks 300 through 307. The triangular blocks 300 through 307 may be motion-compensated using different affine models.
FIG. 6 illustrates motion vectors used to deduce an affine model for each of a plurality of triangular blocks in an 8×8 block. Referring to FIG. 6, the affine model for each of a plurality of triangular blocks (i.e., blocks 0 through 7) in a current (8×8) block may vary according to the macroblock mode of the current block and the macroblock modes of a number of blocks adjacent to the current block. If the current block is located at the lower right corner of a macroblock and the macroblock mode of the current block is a 16×16 mode, the motion vectors of blocks 0 through 7 may all be the same. In that case, the affine models for blocks 0 through 7 all reduce to pure translations and thus share the same model formula.
An affine transformation formula between (x,y) and (x', y') may be represented by Equation (2):
x' = a·x + b·y + c
y' = d·x + e·y + f ......... (2)
A total of six equations may be required to determine the values of the parameters a, b, c, d, e and f in Equation (2). For this, at least three displacement values for (x, y) may be required. If there are more than three displacement values, a least-squares solution may be used to determine the values of the parameters a, b, c, d, e and f in Equation (2). If (x0, y0), (x1, y1), (x2, y2), (x'0, y'0), (x'1, y'1), and (x'2, y'2) are provided as displacement values for (x, y), the values of the parameters a, b, c, d, e and f may be determined using Equations (3):
(a, b, c, d, e, f)ᵀ = A⁻¹ · (x'0, y'0, x'1, y'1, x'2, y'2)ᵀ, where

A = | x0  y0  1   0   0   0 |
    | 0   0   0   x0  y0  1 |
    | x1  y1  1   0   0   0 |
    | 0   0   0   x1  y1  1 |
    | x2  y2  1   0   0   0 |
    | 0   0   0   x2  y2  1 | ......... (3)
In the video encoding and decoding methods according to the present invention, an affine model for each of a plurality of triangular blocks in an 8×8 block may be deduced using variations at the apexes of each of the triangular blocks. Referring to Equations (3), there is no need to calculate the inverse matrix of matrix A, i.e., A⁻¹, because the inverse 6×6 matrix A⁻¹ can easily be obtained from eight inverse matrices respectively corresponding to blocks 0 through 7, all of which are calculated in advance. Thus, it is possible to reduce coding complexity.
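For illustration only, the six affine parameters of Equation (2) can be recovered from three displaced points with a generic linear solver, as sketched below. Note that this plain Gaussian elimination is exactly the per-block computation the patent avoids by precomputing the eight inverse matrices; the function name and solver choice are assumptions:

```python
def solve_affine(src, dst):
    """Solve x' = a*x + b*y + c, y' = d*x + e*y + f for (a, b, c, d, e, f)
    from three non-collinear point correspondences src[i] -> dst[i]."""
    # Build the 6x6 linear system A p = b of Equations (3).
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0]); b.append(xp)
        A.append([0, 0, 0, x, y, 1]); b.append(yp)
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = 6
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution.
    p = [0.0] * n
    for r in range(n - 1, -1, -1):
        p[r] = (M[r][n] - sum(M[r][c] * p[c] for c in range(r + 1, n))) / M[r][r]
    return p  # [a, b, c, d, e, f]
```

For example, three points displaced by a pure translation yield a = e = 1, b = d = 0, with c and f equal to the shift, matching the translation-only case discussed for FIG. 6.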
FIG. 7 illustrates the case in which a current block to be encoded includes an object which is reduced by 1/2 with respect to the vertical axis of a previous frame and is inclined to the right at an angle of 45 degrees.
Three points of displacement for obtaining an affine model for block 0 of a current block are (x0, y0)→(x0+mvx0, y0+mvy0), (x1, y1)→(x1+mvx1, y1+mvy1), and (x2, y2)→(x2+mvx2, y2+mvy2). According to existing video encoding standards such as H.264, the minimum size of blocks that can have a motion vector is 4×4. Therefore, the motion vectors mv0 through mv2 may be different from one another. Here, however, it is assumed that all 4×4 blocks in the current block and in the blocks adjacent to the current block have the same motion vector.
An affine model for block 0 may be obtained using the three points of displacement, as indicated by Equations (4):
Figure PCTKR2009000441-appb-I000004
Referring to Equations (4), matrix A includes the coordinates of the current block and the coordinates of each of the blocks adjacent to the current block. If the point (x0, y0) is mapped to the origin (0, 0), matrix A can be commonly applied to blocks 0 through 7 regardless of the position of the current block in a corresponding macroblock.
Equations (4) may be transformed into Equations (5), and Equations (6) may be obtained by applying (x1,y1) to Equations (5). Equations (5) and Equations (6) are as follows:
Figure PCTKR2009000441-appb-I000005
Figure PCTKR2009000441-appb-I000006
According to the H.264 standard, motion estimation may be performed in units of 1/4 pixels, and thus the distance between a pair of adjacent pixels may be 4. Therefore, if a pixel at a point (4, −12) is determined to have been moved to (4+mvx2+Δx, −12+mvy2+Δy) based on an affine model, the pixel may be determined to have the same displacement (Δx, Δy) at any arbitrary block location. This is very important for the reduction of computation because, according to the present invention, it is possible to easily obtain an affine model simply using eight inverse matrices respectively corresponding to blocks 0 through 7, without the need to calculate the inverse matrix of matrix A.
An affine model for each of blocks 1 through 7 may be obtained using the same method used to obtain the affine model for block 0.
Once the affine models for blocks 0 through 7 are all obtained, motion compensation may be performed on the current block, as indicated by Equation (7):
Figure PCTKR2009000441-appb-I000007
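As an illustrative sketch (not the patent's actual Equation (7), whose exact form appears only in the original figures), per-pixel affine compensation can proceed by mapping each pixel through the affine model and sampling the reference frame; the bilinear interpolation, border clamping, and function name are assumptions:

```python
import math

def affine_predict(ref, params, block_xy, size=8):
    """Sketch of affine motion compensation for one block: map each pixel
    (x, y) of the current block to (a*x + b*y + c, d*x + e*y + f) in the
    reference frame and sample with bilinear interpolation.
    `ref` is a 2D list of luma samples; coordinates are clamped at borders."""
    a, b, c, d, e, f = params
    x0, y0 = block_xy
    h, w = len(ref), len(ref[0])

    def at(yy, xx):
        # Clamp coordinates to the frame borders.
        return ref[min(max(yy, 0), h - 1)][min(max(xx, 0), w - 1)]

    pred = [[0.0] * size for _ in range(size)]
    for j in range(size):
        for i in range(size):
            xs = a * (x0 + i) + b * (y0 + j) + c
            ys = d * (x0 + i) + e * (y0 + j) + f
            xi, yi = math.floor(xs), math.floor(ys)
            fx, fy = xs - xi, ys - yi
            pred[j][i] = ((1 - fx) * (1 - fy) * at(yi, xi)
                          + fx * (1 - fy) * at(yi, xi + 1)
                          + (1 - fx) * fy * at(yi + 1, xi)
                          + fx * fy * at(yi + 1, xi + 1))
    return pred
```

In the triangular-partition scheme of FIG. 5, each of the eight triangles would use its own parameter set; the sketch above applies one model to a whole 8×8 block for simplicity.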
In short, if a current block includes an affine-transformation object, affine transformation-based motion compensation may be performed on the current block, thereby maintaining high encoding efficiency.
A video decoding method according to an exemplary embodiment of the present invention may be performed essentially by reversing the above-mentioned video encoding method. That is, it may be determined whether an affine-transformation object exists in a reference block. Thereafter, if an affine-transformation object exists in the reference block, a predicted block may be generated by performing affine transformation-based motion compensation on the reference block. On the other hand, if no affine-transformation object exists in the reference block, a predicted block may be generated by performing motion vector-based motion compensation on the reference block. Thereafter, a current block may be generated using the predicted block and a residual signal included in a video signal to be decoded. Therefore, a video decoding apparatus according to an exemplary embodiment of the present invention, unlike a typical video decoding apparatus, may include an affine-transformation object detection unit determining whether an affine-transformation object exists in the reference block.
The video encoding and decoding methods according to the present invention are not restricted to the exemplary embodiments set forth herein. Therefore, variations and combinations of the exemplary embodiments set forth herein may fall within the scope of the present invention.
The present invention can be realized as computer-readable code written on a computer-readable recording medium. The computer-readable recording medium may be any type of recording device in which data is stored in a computer-readable manner. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disc, an optical data storage, and a carrier wave (e.g., data transmission through the Internet). The computer-readable recording medium can be distributed over a plurality of computer systems connected to a network so that computer-readable code is written thereto and executed therefrom in a decentralized manner. Functional programs, code, and code segments needed for realizing the present invention can be easily construed by one of ordinary skill in the art.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
The present invention can be effectively applied to the encoding or decoding of a video signal and can thus achieve high efficiency especially when a block to be encoded includes an affine-transformation object having an affine transformation such as rotation, enlargement or reduction.

Claims (20)

  1. A video encoding method comprising:
    determining whether a current block includes an affine-transformation object having an affine transformation;
    if the current block includes an affine-transformation object, generating a prediction block by performing affine transformation-based motion compensation on the current block in consideration of an affine transformation of the affine-transformation object; and
    if the current block does not include any affine-transformation object, generating a prediction block by performing motion vector-based motion compensation on the current block using a motion vector of the current block.
  2. The video encoding method of claim 1, further comprising generating a differential signal based on a difference between the current block and the prediction block.
  3. The video encoding method of claim 2, further comprising generating a bitstream including data obtained by encoding the differential signal.
  4. The video encoding method of claim 1, further comprising calculating the motion vector of the current block with reference to a reference block.
  5. The video encoding method of claim 1, further comprising, if the current block includes an affine-transformation object and is located on the upper left corner of a frame, performing motion vector-based motion compensation on the current block.
  6. The video encoding method of claim 1, further comprising, if the current block includes an affine-transformation object, one of a number of blocks adjacent to the current block is intra-encoded, and the current block references a different reference block from the adjacent blocks, performing motion vector-based motion compensation on the current block.
  7. The video encoding method of claim 1, further comprising, if the current block includes an affine-transformation object and a maximum of the angles between the motion vector of the current block and the motion vectors of a number of blocks adjacent to the current block is greater than a reference value, performing motion vector-based motion compensation on the current block.
  8. The video encoding method of claim 1, further comprising, if the current block includes an affine-transformation object and a maximum variation of the affine-transformation object is greater than a reference value, performing motion vector-based motion compensation on the current block.
  9. The video encoding method of claim 1, wherein the performing of affine transformation-based motion compensation comprises dividing the current block into a number of triangular blocks and applying different affine models to the triangular blocks.
  10. A video encoding apparatus comprising:
    a motion estimation unit calculating a motion vector of a current block with reference to a reference block;
    an affine-transformation object calculation unit determining whether a current block to be subjected to motion compensation includes an affine-transformation object having an affine transformation and outputting an affine-transformation object detection signal corresponding to the results of the determination; and
    a motion compensation unit generating a prediction block by performing either affine transformation-based motion compensation or motion vector-based motion compensation on the current block in response to the affine-transformation object detection signal.
  11. The video encoding apparatus of claim 10, further comprising a differential unit generating a differential signal based on a difference between the current block and the prediction block.
  12. The video encoding apparatus of claim 11, further comprising an encoding unit generating a bitstream including data obtained by encoding the differential signal.
  13. A video decoding method comprising:
    determining whether an affine-transformation object exists in a reference block;
    if an affine-transformation object exists in the reference block, generating a predicted block by performing affine transformation-based motion compensation on the reference block; and
    if no affine-transformation object exists in the reference block, generating the predicted block by performing motion vector-based motion compensation on the reference block.
  14. The video decoding method of claim 13, further comprising generating a residual signal and a motion vector for performing motion compensation on the reference block by reconfiguring the bitstream of an input video signal.
  15. The video decoding method of claim 14, further comprising generating a current block based on the predicted block and the residual signal.
  16. The video decoding method of claim 13, wherein the performing of affine transformation-based motion compensation comprises dividing the current block into a number of triangular blocks and applying different affine models to the triangular blocks.
  17. A video decoding apparatus comprising:
    an affine-transformation object detection unit determining whether an affine-transformation object exists in a reference block and outputting a signal indicating the results of the determination;
    a motion compensation unit generating a predicted block by performing one of affine transformation-based motion compensation and motion vector-based motion compensation on the reference block in response to the signal output by the affine-transformation object detection unit; and
    an adding unit which generates a current block by adding the predicted block and a residual signal.
  18. The video decoding apparatus of claim 17, further comprising a decoding unit generating a residual signal and a motion vector for performing motion compensation on the reference block by reconfiguring the bitstream of an input video signal.
  19. A computer-readable recording medium having recorded thereon a program for executing a video encoding method comprising:
    determining whether a current block includes an affine-transformation object having an affine transformation;
    if the current block includes an affine-transformation object, generating a prediction block by performing affine transformation-based motion compensation on the current block in consideration of an affine transformation of the affine-transformation object; and
    if the current block does not include any affine-transformation object, generating a prediction block by performing motion vector-based motion compensation on the current block using a motion vector of the current block.
  20. A computer-readable recording medium having recorded thereon a program for executing a video decoding method comprising:
    determining whether an affine-transformation object exists in a reference block;
    if an affine-transformation object exists in the reference block, generating a predicted block by performing affine transformation-based motion compensation on the reference block; and
    if no affine-transformation object exists in the reference block, generating the predicted block by performing motion vector-based motion compensation on the reference block.
PCT/KR2009/000441 2008-01-29 2009-01-29 Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation WO2009096721A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/865,069 US8665958B2 (en) 2008-01-29 2009-01-29 Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2008-0009120 2008-01-29
KR20080009120 2008-01-29
KR1020090007038A KR101003105B1 (en) 2008-01-29 2009-01-29 Method for encoding and decoding video signal using motion compensation based on affine transform and apparatus thereof
KR10-2009-0007038 2009-01-29

Publications (2)

Publication Number Publication Date
WO2009096721A2 true WO2009096721A2 (en) 2009-08-06
WO2009096721A3 WO2009096721A3 (en) 2009-11-05

Family

ID=40913419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2009/000441 WO2009096721A2 (en) 2008-01-29 2009-01-29 Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation

Country Status (1)

Country Link
WO (1) WO2009096721A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108965869A (en) * 2015-08-29 2018-12-07 华为技术有限公司 The method and apparatus of image prediction
CN109792533A (en) * 2016-10-05 2019-05-21 高通股份有限公司 The motion vector prediction of affine motion model is used in video coding
CN110024403A (en) * 2016-12-29 2019-07-16 高通股份有限公司 The motion vector of affine motion model for video coding generates

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
US8665958B2 (en) 2008-01-29 2014-03-04 Electronics And Telecommunications Research Institute Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP3679426B2 (en) * 1993-03-15 2005-08-03 マサチューセッツ・インスティチュート・オブ・テクノロジー A system that encodes image data into multiple layers, each representing a coherent region of motion, and motion parameters associated with the layers.
US6026182A (en) * 1995-10-05 2000-02-15 Microsoft Corporation Feature segmentation
WO2007138526A2 (en) * 2006-06-01 2007-12-06 Philips Intellectual Property & Standards Gmbh Hierarchical motion estimation

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN108965869A (en) * 2015-08-29 2018-12-07 华为技术有限公司 The method and apparatus of image prediction
CN108965869B (en) * 2015-08-29 2023-09-12 华为技术有限公司 Image prediction method and device
US11979559B2 (en) 2015-08-29 2024-05-07 Huawei Technologies Co., Ltd. Image prediction method and device
CN109792533A (en) * 2016-10-05 2019-05-21 高通股份有限公司 The motion vector prediction of affine motion model is used in video coding
CN109792533B (en) * 2016-10-05 2023-08-15 高通股份有限公司 Method and device for decoding and encoding video data
CN110024403A (en) * 2016-12-29 2019-07-16 高通股份有限公司 The motion vector of affine motion model for video coding generates

Also Published As

Publication number Publication date
WO2009096721A3 (en) 2009-11-05

Similar Documents

Publication Publication Date Title
WO2010068020A9 (en) Multi- view video coding/decoding method and apparatus
WO2009157669A2 (en) Intra prediction method and apparatus, and image encoding/decoding method and apparatus using same
WO2013005941A2 (en) Apparatus and method for coding and decoding an image
WO2013062191A1 (en) Method and apparatus for image encoding with intra prediction mode
WO2011115356A1 (en) Surveillance system
US20100329347A1 (en) Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation
WO2010058895A2 (en) Apparatus and method for encoding/decoding a video signal
WO2012008790A2 (en) Method and apparatus for encoding and decoding image through intra prediction
WO2011145819A2 (en) Image encoding/decoding device and method
WO2013070006A1 (en) Method and apparatus for encoding and decoding video using skip mode
WO2013002549A2 (en) Method and apparatus for coding/decoding image
WO2013183918A1 (en) Image processing apparatus and method for three-dimensional (3d) image
WO2011010858A2 (en) Motion vector prediction method, and apparatus and method for encoding and decoding image using the same
JP2008178149A (en) Apparatus and method for compressing a motion vector field
WO2011102597A1 (en) Coding structure
WO2012081877A2 (en) Multi-view video encoding/decoding apparatus and method
WO2013133627A1 (en) Method of processing video signals
WO2018070556A1 (en) Method and apparatus for extracting intra prediction mode data of square or rectangular block
WO2009096721A2 (en) Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation
KR102492073B1 (en) Method and apparatus for video coding/decoding using intra prediction
JP3633204B2 (en) Signal encoding apparatus, signal encoding method, signal recording medium, and signal transmission method
WO2012026734A2 (en) Encoding/decoding apparatus and method using motion vector sharing of a color image and a depth image
WO2014058207A1 (en) Multiview video signal encoding method and decoding method, and device therefor
WO2014171709A1 (en) Object-based adaptive brightness compensation method and apparatus
CN102113325A (en) Intensity compensation techniques in video processing

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 09705890; Country of ref document: EP; Kind code of ref document: A2)
WWE WIPO information: entry into national phase (Ref document number: 12865069; Country of ref document: US)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 09705890; Country of ref document: EP; Kind code of ref document: A2)