WO2014058207A1 - Multiview video signal encoding method and decoding method, and device therefor - Google Patents
Multiview video signal encoding method and decoding method, and device therefor
- Publication number
- WO2014058207A1 WO2014058207A1 PCT/KR2013/008982 KR2013008982W WO2014058207A1 WO 2014058207 A1 WO2014058207 A1 WO 2014058207A1 KR 2013008982 W KR2013008982 W KR 2013008982W WO 2014058207 A1 WO2014058207 A1 WO 2014058207A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- prediction information
- texture block
- weight prediction
- view texture
- weight
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/189—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
- H04N19/196—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/463—Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/521—Processing of motion vectors for estimating the reliability of the determined motion vectors or motion vector field, e.g. for smoothing the motion vector field or for correcting motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
Definitions
- the present invention relates to a method and apparatus for coding a multiview video signal.
- Compression coding refers to a series of signal-processing techniques for transmitting digitized information over a communication line or storing it in a form suitable for a storage medium.
- the targets of compression encoding include audio, video, text, and the like.
- a technique of performing compression encoding on an image is called video image compression.
- a general feature of a multiview video image is that it has spatial redundancy, temporal redundancy and inter-view redundancy.
- An object of the present invention is to improve the coding efficiency of a video signal.
- the present invention is characterized by deriving the weight prediction information of the current view texture block based on the weight prediction information of the neighbor view texture block.
- the weight prediction information of the neighboring view texture block is modified based on the pixel average and pixel-difference average of the picture containing the current view texture block and the pixel average and pixel-difference average of the picture containing the neighboring view texture block, and the weight prediction information of the current view texture block is derived from the result.
- the present invention can improve coding efficiency by deriving the weight prediction information used for weight compensation of the current view texture block from the weight prediction information of the neighboring view texture block, thereby reducing the amount of data to be transmitted.
- the accuracy of video data prediction may be improved by modifying the weight prediction information of the neighboring view in consideration of the characteristics of the picture containing the current view texture block.
- the processing steps may be simplified, reducing the signal-processing complexity of the encoder/decoder.
- FIG. 1 is a schematic block diagram of a video decoder according to an embodiment to which the present invention is applied.
- FIG. 2 is a schematic block diagram of a video encoder according to an embodiment to which the present invention is applied.
- FIG. 5 is a diagram for describing a weight prediction method in a video image having one viewpoint.
- FIGS. 6 and 7 are diagrams for describing a weight prediction method in a multiview video image.
- FIG. 8 is a flowchart illustrating a method for deriving weight prediction information in an embodiment to which the present invention is applied.
- FIG. 9 is a flowchart illustrating a method for generating weight prediction information in an embodiment to which the present invention is applied.
- the decoding method of a video signal may include obtaining weight prediction information of a neighboring view texture block corresponding to a current view texture block, deriving weight prediction information of the current view texture block using the weight prediction information of the neighboring view texture block, and performing weight compensation on the current view texture block using the derived weight prediction information.
- the apparatus for decoding a video signal may include a weight prediction information derivation unit that obtains weight prediction information of a neighboring view texture block corresponding to a current view texture block and derives weight prediction information of the current view texture block using the weight prediction information of the neighboring view texture block, and a weight compensation unit that performs weight compensation on the current view texture block using the derived weight prediction information.
- the encoding method of a video signal may include generating weight prediction information of a current view texture block, generating weight prediction information of a neighboring view texture block, and, if the reference pictures of the neighboring view texture block and the current view texture block have the same POC, activating a weight prediction information flag of the current view texture block.
- Techniques for compression encoding or decoding multi-view video signal data take into account spatial redundancy, temporal redundancy, and redundancy existing between viewpoints.
- a multiview texture image photographed from two or more viewpoints may be coded to implement a 3D image.
- the term coding in this specification may include both the concepts of encoding and decoding, and may be flexibly interpreted according to the technical spirit and technical scope of the present invention.
- the current block and the current picture mean a block and a picture to be processed (or coded), and the current view means a view to be processed.
- the neighbor view may be a view other than the current view, which is a reference view used for inter-view prediction of a multiview video image, and may mean a base view or an independent view.
- the texture block of the neighboring viewpoint may be specified using the inter-view displacement vector.
- a video decoder may include a NAL parser 110, an entropy decoding unit 120, an inverse quantization/inverse transform unit 130, an intra prediction unit 140, an in-loop filter unit 150, a decoded picture buffer unit 160, and an inter prediction unit 170.
- the NAL parser 110 may receive a bitstream including multi-view texture data.
- the entropy decoding unit 120 may extract quantized transform coefficients, coding information for prediction of a texture picture, and the like through entropy decoding.
- the intra predictor 140 may perform intra prediction using the reconstructed texture data in the current view texture picture.
- the coding information used for intra prediction may include intra prediction mode and partition information of intra prediction.
- the in-loop filter unit 150 may apply an in-loop filter to each coded block to reduce block distortion.
- the filter can smooth the edges of the block to improve the quality of the decoded picture.
- Filtered texture pictures may be output or stored in the decoded picture buffer unit 160 for use as a reference picture.
- the decoded picture buffer unit 160 stores or releases previously coded texture pictures in order to perform inter prediction.
- frame_num and POC (Picture Order Count, a value indicating the output order of pictures) may be used to identify each picture in the buffer.
- the inter prediction unit 170 may perform motion compensation on the current block by using the reference picture and the motion information stored in the decoded picture buffer unit 160.
- the motion information may be understood as a broad concept including a motion vector and reference index information.
- the inter prediction unit 170 may perform temporal inter prediction to perform motion compensation.
- Temporal inter prediction may refer to inter prediction using a reference picture located at the same view as, but in a different time period from, the current view texture block, together with motion information of the current view texture block.
- inter-view inter prediction may be further performed as well as temporal inter prediction.
- Inter-view inter prediction may refer to inter prediction using a reference picture located at a different view from the current view texture block, together with motion information of the current view texture block.
- Weight prediction is a coding technique for predicting, during inter prediction, how much darker or brighter a reference picture located at the same view and in a different time period is relative to the current view texture block.
- the reference picture may be adaptively weighted to predict the signal.
- the inter prediction unit 170 may further perform weighted prediction when performing temporal inter prediction/motion compensation, thereby performing weight compensation that compensates for the luminance change between the reference picture and the current texture block. A detailed description is given with reference to FIG. 3.
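Weighted prediction of this kind can be sketched as a per-sample scale and offset applied to the reference block. The fixed-point form below (an integer weight with a 2^log2_denom denominator, in the style of explicit weighted prediction in block-based codecs) is a minimal illustration; the parameter names and the 8-bit clipping are assumptions, not the patent's own syntax elements.

```python
def weighted_prediction(ref_block, weight, offset, log2_denom=5):
    """Explicit weighted prediction sketch:
    pred = ((ref * weight) >> log2_denom) + offset, clipped to 8-bit range.
    Parameter names are illustrative, not the patent's syntax."""
    return [[min(255, max(0, ((s * weight) >> log2_denom) + offset))
             for s in row] for row in ref_block]

# A current picture ~1.25x brighter than the reference (weight 40/32, offset +3).
ref = [[100, 120], [140, 160]]
pred = weighted_prediction(ref, weight=40, offset=3)
```

With these values each reference sample is scaled up and shifted, compensating a global brightening between the reference picture and the current picture.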
- FIG. 2 is a schematic block diagram of a video encoder according to an embodiment to which the present invention is applied.
- the video encoder includes a transformer / quantizer 210, an inverse quantizer / inverse transformer 220, a filter 230, an intra predictor 240, an inter predictor 250, and a decoded picture.
- the buffer unit 260 and the entropy coding unit 270 may be included.
- the transform unit transforms texture data of an input video signal to obtain transform coefficients.
- a discrete cosine transform (DCT) or a wavelet transform may be used as the transform method, and the quantization unit quantizes the transform coefficient values output from the transform unit.
- the inverse quantization / inverse transform unit 220 may apply a quantization parameter to the quantized transform coefficients to obtain transform coefficients, and inversely transform the transform coefficients to restore texture data.
- the decoded texture data may include residual data according to the prediction process.
- the intra predictor 240 may perform intra prediction using the reconstructed texture data in the current view texture picture.
- the coding information used for intra prediction may include intra prediction mode and partition information of intra prediction.
- the inter prediction unit 250 may perform motion compensation on the current block using the reference picture and motion information stored in the decoded picture buffer unit 260.
- the inter prediction unit 250 may include a motion compensation unit 251 and a motion estimation unit 252.
- the motion estimation unit 252 may perform temporal inter prediction or inter-view inter prediction using the reference picture and motion information.
- the motion compensator 251 may perform inter-screen motion compensation using the motion vector value predicted by the motion estimator 252.
- the weight predicting unit 300 may include a derivation determining unit 310, a weight prediction information deriving unit 320, and a weight compensating unit 330.
- the derivation determining unit 310 may determine, based on the weight prediction information flag of the current texture block, whether to derive the weight prediction information of the current texture block from the weight prediction information of the neighboring view texture block.
- the weight prediction information refers to weight information for predicting the luminance change between a reference picture and the current texture block and compensating the current texture block accordingly.
- the weight prediction information may include a weighting factor and an additive offset.
- the weight prediction information flag indicates whether the weight prediction information of the current view texture block is derived from the weight prediction information of the neighboring view texture block or obtained directly from the bitstream. For example, when the flag is activated (e.g., when its value is '1'), the derivation determining unit 310 may determine that the weight prediction information of the current view texture block is to be derived from the weight prediction information of the neighboring view texture block. Conversely, when the flag is deactivated (e.g., when its value is '0'), the derivation determining unit 310 may determine that the weight prediction information of the current view texture block is obtained directly from the bitstream.
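The flag-driven choice described above can be sketched as a simple branch. The function and container names below are hypothetical, and the derivation step is reduced to a placeholder linear map standing in for the derivation function discussed later.

```python
def derive_from_neighbor(neighbor_info, a=1.0, b=0.0):
    """Placeholder for the derivation step: apply a linear map to the
    neighboring view's (weight, offset) pair. a and b are assumed names."""
    weight, offset = neighbor_info
    return (a * weight + b, a * offset + b)

def get_weight_prediction_info(flag, neighbor_info, bitstream):
    """Decoder-side decision sketch: when the flag is set, derive the current
    block's weight info from the neighboring view; otherwise parse it
    directly from the (hypothetical) bitstream queue."""
    if flag == 1:
        return derive_from_neighbor(neighbor_info)
    return bitstream.pop(0)  # stand-in for parsing from the bitstream
```

A flag value of 1 thus avoids transmitting the current view's weight info at all, which is the data-saving effect the description claims.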
- the weight prediction information derivation unit 320 may derive the weight prediction information of the current view texture block by using the weight prediction information of the neighboring view texture block.
- the weight prediction information derivation unit 320 may obtain weight prediction information of the neighboring viewpoint texture block and derive the weight prediction information of the current viewpoint texture block by using the obtained weight prediction information.
- methods of deriving the weight prediction information include a) a method of taking over the weight prediction information of the neighboring view texture block as it is as the weight prediction information of the current view texture block, and b) a method of modifying the weight prediction information of the neighboring view texture block to derive the weight prediction information of the current view texture block.
- method a) inherits the weight prediction information of the neighboring view texture block unchanged and uses it as the weight prediction information of the current view texture block.
- this is possible because the views capture the same scene, so the temporal luminance changes between the views are identical or similar.
- in method b), a derivation function such as the linear function y = a·x + b may be used, where x denotes the weight prediction information of the neighboring view texture block
- and y denotes the derived weight prediction information of the current view texture block.
- the coefficients of the derivation function, such as a and b, may be extracted from the bitstream through a sequence parameter set, a slice header, and the like, or may be calculated in the decoder. The derivation of a and b is shown in Equation 2 below.
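Assuming the modification is driven by the pixel averages and pixel-difference averages mentioned earlier, one plausible decoder-side realization computes a as a contrast ratio and b as a brightness offset. The patent's actual Equation 2 is not reproduced in this text, so the formulas below are illustrative assumptions only.

```python
def derive_coefficients(cur_mean, cur_diff_mean, nbr_mean, nbr_diff_mean):
    """Hypothetical a/b derivation: scale by the ratio of pixel-difference
    averages (contrast) and offset by the gap between pixel averages
    (brightness). Not the patent's Equation 2, which is not reproduced."""
    a = cur_diff_mean / nbr_diff_mean
    b = cur_mean - a * nbr_mean
    return a, b

def derive_weight_info(x, a, b):
    # y = a*x + b, the linear derivation function from the description
    return a * x + b
```

Under this assumption a picture with higher contrast and lower brightness than its neighbor view yields a > 1 and b < 0, stretching the inherited weight accordingly.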
- the weight compensation unit 330 performs weight compensation using the weight prediction information of the current view texture block derived by the weight prediction information derivation unit 320, or using the weight prediction information obtained from the bitstream. As described above, the brightness of the current view texture block may be compensated by adaptively weighting the reference picture based on the weight prediction information.
- the weight predictor 400 may include a weight prediction information generator 410 and a weight prediction information flag generator 420.
- the weight prediction information generator 410 may generate weight prediction information of the texture block at each view.
- the weight prediction information flag generator 420 may activate or deactivate the weight prediction information flag of the current view texture block to be coded.
- the weight prediction information flag generator 420 may set the weight prediction information flag to an activated value (for example, '1') when the reference pictures of the current view texture block and the neighboring view texture block have the same POC (Picture Order Count).
- the weight prediction information flag generator 420 may code the activated or deactivated weight prediction information flag in the slice header.
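The POC check above can be sketched as follows; the slice-header dictionary and the flag name in it are hypothetical, chosen only to show where the coded flag would live.

```python
def weight_flag_from_poc(cur_ref_poc, nbr_ref_poc):
    """Sketch of the flag generator's POC check: activate ('1') only when the
    reference pictures of the current and neighboring view texture blocks
    share the same POC, i.e. the same output-order position."""
    return 1 if cur_ref_poc == nbr_ref_poc else 0

# The flag would then be coded in the slice header (hypothetical field name).
slice_header = {"wp_info_derive_flag": weight_flag_from_poc(7, 7)}
```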
- a weighted prediction table (WP table) may be applied to a reference picture at time T0, which has already been decoded and reconstructed, to compensate for the brightness of the current picture being processed at time T1.
- FIGS. 6 and 7 are diagrams for describing a weight prediction method in a multiview video image.
- FIG. 8 is a flowchart illustrating a method for deriving weight prediction information in an embodiment to which the present invention is applied.
- when the weight prediction information flag is set to 1 (S810, YES), the decoder according to an embodiment of the present invention may obtain the weight prediction information of the neighboring view texture block (S820) and derive the weight prediction information of the current view texture block using it (S830). In contrast, when the weight prediction information flag is set to 0 (S810, NO), the weight prediction information of the current view texture block may be obtained directly (S850).
- the decoder may perform weight compensation on the current texture block using the weight prediction information of the current block (S860).
- FIG. 9 is a flowchart illustrating a method for generating weight prediction information in an embodiment to which the present invention is applied.
- the encoder may generate weight prediction information for each view of the multiview video signal (S910). The difference between the weight prediction information of the neighboring view texture block and that of the current view texture block is then calculated (S920); if the difference is less than or equal to a threshold (S920, YES), the weight prediction information flag of the current view texture block may be set to 1 (S930). Conversely, if it exceeds the threshold (S920, NO), the weight prediction information flag of the current view texture block may be set to 0 (S940).
- alternatively, the encoder may set the weight prediction information flag to 1 only when the weight prediction information of the neighboring view texture block and that of the current view texture block are identical.
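The threshold test of steps S920 through S940, and its "identical only" special case, can be sketched as a single comparison; the function name is illustrative.

```python
def encoder_set_flag(cur_info, nbr_info, threshold=0):
    """Sketch of S920-S940: compare the two blocks' weight prediction
    information and activate the flag only when the difference is within
    the threshold. threshold=0 reduces to the 'identical only' variant."""
    return 1 if abs(cur_info - nbr_info) <= threshold else 0
```

A larger threshold trades a small loss in compensation accuracy for more blocks whose weight info need not be transmitted.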
- the decoding / encoding device to which the present invention is applied may be provided in a multimedia broadcasting transmission / reception device such as DMB (Digital Multimedia Broadcasting), and may be used to decode video signals and data signals.
- the multimedia broadcasting transmission / reception apparatus may include a mobile communication terminal.
- the decoding/encoding method to which the present invention is applied may be embodied as a program to be executed on a computer and stored in a computer-readable recording medium, and multimedia data having a data structure according to the present invention may also be stored in a computer-readable recording medium.
- the computer-readable recording medium includes all kinds of storage devices in which data readable by a computer system is stored. Examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and the medium may also be implemented in the form of a carrier wave (e.g., transmission over the Internet).
- the bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted using a wired / wireless communication network.
- the present invention can be used to code a video signal.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Claims (14)
- A method of decoding a multiview video signal, comprising: obtaining weight prediction information of a neighboring view texture block corresponding to a current view texture block; deriving weight prediction information of the current view texture block using the weight prediction information of the neighboring view texture block; and performing weight compensation on the current view texture block using the derived weight prediction information.
- The method of claim 1, further comprising obtaining a weight prediction information flag indicating whether the weight prediction information of the neighboring view texture block is used for weight compensation of the current view texture block, wherein the deriving derives the weight prediction information of the current view texture block only when the weight prediction information flag indicates that the weight prediction information of the neighboring view texture block is used.
- The method of claim 1, wherein the deriving derives the weight prediction information of the current view texture block only when the reference picture of the neighboring view texture block and the reference picture of the current view texture block have the same POC (Picture Order Count).
- The method of claim 1, wherein the deriving modifies the weight prediction information of the neighboring view texture block based on the pixel average and pixel-difference average of the picture containing the current view texture block and the pixel average and pixel-difference average of the picture containing the neighboring view texture block, to derive the weight prediction information of the current view texture block.
- The method of claim 1, wherein the weight prediction information is weight information for compensating for the luminance difference between a reference picture for inter prediction and the current view texture block.
- An apparatus for decoding a multiview video signal, comprising: a weight prediction information derivation unit that obtains weight prediction information of a neighboring view texture block corresponding to a current view texture block and derives weight prediction information of the current view texture block using the weight prediction information of the neighboring view texture block; and a weight compensation unit that performs weight compensation on the current view texture block using the derived weight prediction information.
- The apparatus of claim 6, further comprising a derivation determining unit that obtains a weight prediction information flag indicating whether the weight prediction information of the neighboring view texture block is used for weight compensation of the current view texture block and determines whether the weight prediction information of the neighboring view texture block is used, wherein the weight prediction information derivation unit derives the weight prediction information of the current view texture block only when the weight prediction information flag indicates that the weight prediction information of the neighboring view texture block is used.
- The apparatus of claim 6, wherein the weight prediction information derivation unit derives the weight prediction information of the current view texture block only when the reference picture of the neighboring view texture block and the reference picture of the current view texture block have the same POC (Picture Order Count).
- The apparatus of claim 6, wherein the weight prediction information derivation unit modifies the weight prediction information of the neighboring view texture block based on the pixel average and pixel-difference average of the picture containing the current view texture block and the pixel average and pixel-difference average of the picture containing the neighboring view texture block, to derive the weight prediction information of the current view texture block.
- The apparatus of claim 6, wherein the weight prediction information is weight information for compensating for the luminance difference between a reference block for inter prediction and the current view texture block.
- A method of encoding a multiview video signal, comprising: generating weight prediction information of a current view texture block; generating weight prediction information of a neighboring view texture block; and activating a weight prediction information flag of the current view texture block when the reference pictures of the neighboring view texture block and the current view texture block have the same POC.
- The method of claim 11, wherein the activating activates the weight prediction information flag of the current view texture block only when the difference between the weight prediction information of the neighboring view texture block and that of the current view texture block is within a threshold.
- An apparatus for encoding a multiview video signal, comprising: a weight prediction information generator that generates weight prediction information of a current view texture block and weight prediction information of a neighboring view texture block; and a weight prediction information flag generator that activates a weight prediction information flag of the current view texture block when the reference pictures of the neighboring view texture block and the current view texture block have the same POC.
- The apparatus of claim 13, wherein the weight prediction information flag generator activates the weight prediction information flag only when the difference between the weight prediction information of the neighboring view texture block and that of the current view texture block is within a threshold.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/434,282 US9894384B2 (en) | 2012-10-08 | 2013-10-08 | Multiview video signal encoding method and decoding method, and device therefor |
KR1020157012093A KR20150090057A (ko) | 2012-10-08 | 2013-10-08 | 다시점 비디오 신호의 인코딩 방법, 디코딩 방법 및 이에 대한 장치 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261710783P | 2012-10-08 | 2012-10-08 | |
US61/710,783 | 2012-10-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014058207A1 true WO2014058207A1 (ko) | 2014-04-17 |
Family
ID=50477616
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2013/008982 WO2014058207A1 (ko) | 2012-10-08 | 2013-10-08 | 다시점 비디오 신호의 인코딩 방법, 디코딩 방법 및 이에 대한 장치 |
Country Status (3)
Country | Link |
---|---|
US (1) | US9894384B2 (ko) |
KR (1) | KR20150090057A (ko) |
WO (1) | WO2014058207A1 (ko) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017188782A3 (ko) * | 2016-04-29 | 2018-08-02 | 세종대학교 산학협력단 | 영상 신호 부호화/복호화 방법 및 장치 |
CN110490235A (zh) * | 2019-07-23 | 2019-11-22 | 武汉大学 | 一种面向2d图像的车辆对象视点预测与三维模型恢复方法及装置 |
US10939125B2 (en) | 2016-04-29 | 2021-03-02 | Industry Academy Cooperation Foundation Of Sejong University | Method and apparatus for encoding/decoding image signal |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3804331A4 (en) | 2018-06-15 | 2021-08-11 | Huawei Technologies Co., Ltd. | INTRA PREDICTION PROCESS AND APPARATUS |
CN115988202B (zh) * | 2018-06-29 | 2023-11-03 | 华为技术有限公司 | 一种用于帧内预测的设备和方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070000022A (ko) * | 2005-06-24 | 2007-01-02 | 삼성전자주식회사 | 다계층 기반의 가중 예측을 이용한 비디오 코딩 방법 및장치 |
KR20070116527A (ko) * | 2006-06-05 | 2007-12-10 | 엘지전자 주식회사 | 비디오 신호의 디코딩/인코딩 방법 및 장치 |
KR20100000011A (ko) * | 2008-06-24 | 2010-01-06 | 에스케이 텔레콤주식회사 | 인트라 예측 방법 및 장치와 그를 이용한 영상부호화/복호화 방법 및 장치 |
KR20110082428A (ko) * | 2010-01-11 | 2011-07-19 | 삼성테크윈 주식회사 | 불량화소 보정 장치 및 방법 |
WO2011149291A2 (ko) * | 2010-05-26 | 2011-12-01 | 엘지전자 주식회사 | 비디오 신호의 처리 방법 및 장치 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100968204B1 (ko) * | 2007-01-11 | 2010-07-06 | 전자부품연구원 | 다시점 비디오 코덱에서의 영상 예측 방법 및 이를 위한프로그램을 기록한 컴퓨터로 판독 가능한 기록매체 |
JP4266233B2 (ja) * | 2007-03-28 | 2009-05-20 | 株式会社東芝 | テクスチャ処理装置 |
EP2752001A4 (en) * | 2011-08-30 | 2015-04-15 | Nokia Corp | APPARATUS, METHOD AND COMPUTER PROGRAM FOR VIDEO ENCODING AND DECODING |
EP2898689B1 (en) * | 2012-09-21 | 2020-05-06 | Nokia Technologies Oy | Method and apparatus for video coding |
US20140098883A1 (en) * | 2012-10-09 | 2014-04-10 | Nokia Corporation | Method and apparatus for video coding |
-
2013
- 2013-10-08 KR KR1020157012093A patent/KR20150090057A/ko not_active Application Discontinuation
- 2013-10-08 US US14/434,282 patent/US9894384B2/en active Active
- 2013-10-08 WO PCT/KR2013/008982 patent/WO2014058207A1/ko active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070000022A (ko) * | 2005-06-24 | 2007-01-02 | 삼성전자주식회사 | 다계층 기반의 가중 예측을 이용한 비디오 코딩 방법 및장치 |
KR20070116527A (ko) * | 2006-06-05 | 2007-12-10 | 엘지전자 주식회사 | 비디오 신호의 디코딩/인코딩 방법 및 장치 |
KR20100000011A (ko) * | 2008-06-24 | 2010-01-06 | 에스케이 텔레콤주식회사 | 인트라 예측 방법 및 장치와 그를 이용한 영상부호화/복호화 방법 및 장치 |
KR20110082428A (ko) * | 2010-01-11 | 2011-07-19 | 삼성테크윈 주식회사 | 불량화소 보정 장치 및 방법 |
WO2011149291A2 (ko) * | 2010-05-26 | 2011-12-01 | 엘지전자 주식회사 | 비디오 신호의 처리 방법 및 장치 |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017188782A3 (ko) * | 2016-04-29 | 2018-08-02 | 세종대학교 산학협력단 | 영상 신호 부호화/복호화 방법 및 장치 |
US10939125B2 (en) | 2016-04-29 | 2021-03-02 | Industry Academy Cooperation Foundation Of Sejong University | Method and apparatus for encoding/decoding image signal |
US11363280B2 (en) | 2016-04-29 | 2022-06-14 | Industry Academy Cooperation Foundation Of Sejong University | Method and apparatus for encoding/decoding image signal |
US11750823B2 (en) | 2016-04-29 | 2023-09-05 | Industry Academy Cooperation Foundation Of Sejong University | Method and apparatus for encoding/decoding image signal |
US11856208B1 (en) | 2016-04-29 | 2023-12-26 | Industry Academy Cooperation Foundation Of Sejong University | Method and apparatus for encoding/decoding image signal |
US11876983B2 (en) | 2016-04-29 | 2024-01-16 | Industry Academy Cooperation Foundation Of Sejong University | Method and apparatus for encoding/decoding image signals using weight prediction parameter sets based on size of current block |
US11909990B2 (en) | 2016-04-29 | 2024-02-20 | Industry Academy Cooperation Foundation Of Sejong University | Method and apparatus for encoding/decoding image signals using weight prediction parameter sets based on neighboring regions |
US12028532B2 (en) | 2016-04-29 | 2024-07-02 | Industry Academy Cooperation Foundation Of Sejong University | Method and apparatus for encoding/decoding image signals using weight prediction parameters |
US12126818B2 (en) | 2016-04-29 | 2024-10-22 | Industry Academy Cooperation Foundation Of Sejong University | Method and apparatus for encoding/decoding image signal |
US12137232B2 (en) | 2016-04-29 | 2024-11-05 | Industry Academy Cooperation Foundation Of Sejong University | Method and apparatus for encoding/decoding image signal |
CN110490235A (zh) * | 2019-07-23 | 2019-11-22 | 武汉大学 | 一种面向2d图像的车辆对象视点预测与三维模型恢复方法及装置 |
CN110490235B (zh) * | 2019-07-23 | 2021-10-22 | 武汉大学 | 一种面向2d图像的车辆对象视点预测与三维模型恢复方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
US9894384B2 (en) | 2018-02-13 |
KR20150090057A (ko) | 2015-08-05 |
US20150271523A1 (en) | 2015-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014010935A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
JP6178017B2 (ja) | ステレオビデオのための深度認識向上 | |
WO2010068020A2 (ko) | 다시점 영상 부호화, 복호화 방법 및 그 장치 | |
WO2015142054A1 (ko) | 다시점 비디오 신호 처리 방법 및 장치 | |
WO2011139121A2 (ko) | 생략 부호화를 이용한 영상 부호화 및 복호화 장치 및 그 방법 | |
WO2010087589A2 (ko) | 경계 인트라 코딩을 이용한 비디오 신호 처리 방법 및 장치 | |
WO2014107083A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2013133648A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014058207A1 (ko) | 다시점 비디오 신호의 인코딩 방법, 디코딩 방법 및 이에 대한 장치 | |
WO2013176485A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2013133627A1 (ko) | 비디오 신호 처리 방법 | |
WO2013191436A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
US11647181B2 (en) | Prediction weighted table-based image/video coding method and apparatus | |
WO2012081877A2 (ko) | 다시점 비디오 부호화/복호화 장치 및 방법 | |
WO2014010918A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2010090462A2 (en) | Apparatus and method for encoding and decoding multi-view image | |
WO2014107029A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2015009098A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014073877A1 (ko) | 다시점 비디오 신호의 처리 방법 및 이에 대한 장치 | |
WO2016003209A1 (ko) | 다시점 비디오 신호 처리 방법 및 장치 | |
WO2014054896A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014109547A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2015009091A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014054897A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2013157839A1 (ko) | 인간의 시각 특성을 이용한 오프셋 값 결정 방법 및 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13846159 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14434282 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 20157012093 Country of ref document: KR Kind code of ref document: A |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13846159 Country of ref document: EP Kind code of ref document: A1 |