WO2016003209A1 - Method and apparatus for processing a multi-view video signal - Google Patents
Method and apparatus for processing a multi-view video signal
- Publication number: WO2016003209A1 (application PCT/KR2015/006797)
- Authority: WIPO (PCT)
- Prior art keywords: block, depth, current texture, prediction, partition
Classifications
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/136—Adaptive coding characterised by incoming video signal characteristics or properties
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, e.g. a block or macroblock
- H04N19/513—Processing of motion vectors
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Definitions
- the present invention relates to a method and apparatus for coding a video signal.
- High-efficiency image compression techniques can be used to address the problems caused by high-resolution, high-quality image data.
- Image compression techniques include an inter-picture prediction technique, which predicts pixel values included in the current picture from pictures before or after the current picture, and an intra-picture prediction technique, which predicts pixel values included in the current picture using pixel information within the current picture.
- An object of the present invention is to provide a method and apparatus for performing inter-view prediction using a disparity vector in encoding / decoding a multiview video signal.
- An object of the present invention is to provide a method and apparatus for deriving a disparity vector of a texture block using depth data of a depth block in encoding / decoding a multiview video signal.
- An object of the present invention is to provide a method and apparatus for deriving a disparity vector from a neighboring block of a current texture block in encoding / decoding a multiview video signal.
- An object of the present invention is to provide an inter prediction method and apparatus through depth-based block partitioning in encoding / decoding a multiview video signal.
- the method and apparatus for decoding a multi-layer video signal according to the present invention determine a prediction mode of a current depth block; when the current texture block is a block encoded in an inter mode, determine a partition mode of the current texture block; obtain a motion vector for each partition according to the partition mode; and perform inter prediction on the current texture block using the motion vectors through depth-based block partitioning.
- in the multi-layer video signal decoding method and apparatus according to the present invention, performing the inter prediction may include generating a first prediction block of the current texture block using the motion vector of a first partition of the current texture block, generating a second prediction block of the current texture block using the motion vector of a second partition of the current texture block, and generating a final prediction block by combining the first and second prediction blocks according to the partition pattern of the depth block corresponding to the current texture block.
- in the multi-layer video signal decoding method and apparatus according to the present invention, the partition pattern of the depth block divides the block into a first region and a second region based on a comparison between the reconstructed depth values of the depth block and a predetermined threshold value.
- in the multi-layer video signal decoding method and apparatus according to the present invention, the predetermined threshold is an average value of the samples located at the corners of the depth block, the first region is a region composed of samples having depth values larger than the predetermined threshold, and the second region is a region composed of samples having depth values smaller than the predetermined threshold.
- the step of performing inter prediction on the current texture block may be selectively performed based on a depth block partitioning flag.
- the depth block partitioning flag is signaled when the partition mode of the current texture block is not 2Nx2N or NxN.
- the multi-layer video signal encoding method and apparatus according to the present invention encode a prediction mode of a current depth block; when the current texture block is a block encoded in an inter mode, encode a partition mode of the current texture block; obtain a motion vector for each partition according to the partition mode; and perform inter prediction on the current texture block using the motion vectors through depth-based block partitioning.
- in the multi-layer video signal encoding method and apparatus according to the present invention, performing the inter prediction may include generating a first prediction block of the current texture block using the motion vector of a first partition of the current texture block, generating a second prediction block of the current texture block using the motion vector of a second partition of the current texture block, and generating a final prediction block by combining the first and second prediction blocks according to the partition pattern of the depth block corresponding to the current texture block.
- in the multi-layer video signal encoding method and apparatus according to the present invention, the partition pattern of the depth block divides the block into a first region and a second region based on a comparison between the reconstructed depth values of the depth block and a predetermined threshold value.
- in the multi-layer video signal encoding method and apparatus according to the present invention, the predetermined threshold is an average value of the samples located at the corners of the depth block, the first region is a region composed of samples having depth values larger than the predetermined threshold, and the second region is a region composed of samples having depth values smaller than the predetermined threshold.
- the step of performing inter prediction on the current texture block is selectively performed based on a depth block partitioning flag.
- the depth block partitioning flag is encoded when the partition mode of the current texture block is not 2Nx2N or NxN.
- According to the present invention, inter-view prediction can be efficiently performed using a disparity vector.
- According to the present invention, the disparity vector of the current texture block can be effectively derived from the depth data of the current depth block or from the disparity vector of a neighboring texture block.
- FIG. 1 is a schematic block diagram of a video decoder according to an embodiment to which the present invention is applied.
- FIG. 2 illustrates a method of performing inter-view prediction based on a disparity vector according to an embodiment to which the present invention is applied.
- FIG. 3 illustrates a method of deriving a disparity vector of a current texture block using depth data of a depth image, as an embodiment to which the present invention is applied.
- FIG. 4 illustrates a candidate of a spatial / temporal neighboring block of a current texture block as an embodiment to which the present invention is applied.
- FIG. 5 illustrates a method of performing inter prediction of a current texture block according to an embodiment to which the present invention is applied.
- FIG. 6 illustrates a method of performing inter prediction using depth based block partitioning according to an embodiment to which the present invention is applied.
- FIGS. 7 to 9 illustrate methods of signaling a depth block partitioning flag according to embodiments to which the present invention is applied.
- Techniques for compression encoding or decoding multi-view video signal data take into account spatial redundancy, temporal redundancy, and redundancy existing between views.
- a multiview texture image photographed from two or more viewpoints may be coded to implement a 3D image.
- depth data corresponding to a multiview texture image may be further coded as necessary.
- compression coding may be performed in consideration of spatial redundancy, temporal redundancy, or inter-view redundancy.
- Depth data represents distance information between a camera and a corresponding pixel
- in this specification, depth data may be flexibly interpreted as information related to depth, such as a depth value, depth information, a depth image, a depth picture, a depth sequence, or a depth bitstream.
- coding in this specification may include both the concepts of encoding and decoding, and may be flexibly interpreted according to the technical spirit and technical scope of the present invention.
- FIG. 1 is a schematic block diagram of a video decoder according to an embodiment to which the present invention is applied.
- Referring to FIG. 1, the video decoder may include a NAL parser 100, an entropy decoder 200, an inverse quantization / inverse transform unit 300, an intra predictor 400, an in-loop filter unit 500, a decoded picture buffer unit 600, and an inter prediction unit 700.
- the NAL parser 100 may receive a bitstream including multi-view texture data.
- the bitstream including the encoded depth data may be further received.
- the input texture data and the depth data may be transmitted in one bitstream or may be transmitted in separate bitstreams.
- the NAL parser 100 may parse the NAL unit to decode the input bitstream.
- when the input bitstream is multi-view related data (e.g., 3-dimensional video), the input bitstream may further include camera parameters.
- Camera parameters may include intrinsic camera parameters and extrinsic camera parameters; the intrinsic camera parameters may include focal length, aspect ratio, principal point, and the like, and the extrinsic camera parameters may include position information of the camera in the world coordinate system.
- the entropy decoding unit 200 may extract quantized transform coefficients, coding information for prediction of a texture picture, and the like through entropy decoding.
- the inverse quantization / inverse transform unit 300 may apply a quantization parameter to the quantized transform coefficients to obtain transform coefficients, and inversely transform the transform coefficients to decode texture data or depth data.
- the decoded texture data or depth data may mean residual data according to a prediction process.
- the quantization parameter for the depth block may be set in consideration of the complexity of the texture data. For example, when the texture block corresponding to the depth block is a region of high complexity, a low quantization parameter may be set, and in the case of a region of low complexity, a high quantization parameter may be set.
- the complexity of the texture block may be determined based on a difference value between pixels adjacent to each other in the reconstructed texture picture as shown in Equation 1 below.
- In Equation 1, E denotes the complexity of the texture data, C denotes the reconstructed texture data, and N denotes the number of pixels in the texture data area for which the complexity is to be calculated.
- Specifically, the complexity of the texture data may be calculated using the difference between the texture data corresponding to the (x, y) position and the texture data corresponding to the (x-1, y) position, and the difference between the texture data corresponding to the (x, y) position and the texture data corresponding to the (x+1, y) position.
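The drawing containing Equation 1 is not reproduced in this text. Based on the variable definitions above, a plausible reconstruction (the exact normalization and exponent are assumptions, not taken from the patent drawing) is:

$$E = \frac{1}{N} \sum_{(x,y)} \left( \left| C_{x,y} - C_{x-1,y} \right| + \left| C_{x,y} - C_{x+1,y} \right| \right)$$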
- the complexity may be calculated for the texture picture and the texture block, respectively, and the quantization parameter may be derived using Equation 2 below.
- the quantization parameter for the depth block may be determined based on the ratio of the complexity of the texture picture to the complexity of the texture block.
- ⁇ and ⁇ may be variable integers derived at the decoder, or may be predetermined integers in the decoder.
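The drawing containing Equation 2 is likewise missing. One plausible form consistent with the description (a ratio of the complexity of the texture picture, here written $E_{\text{picture}}$, to the complexity of the texture block, $E_{\text{block}}$, scaled and offset by the integers $\alpha$ and $\beta$) is:

$$QP_{\text{depth}} = \alpha \cdot \frac{E_{\text{picture}}}{E_{\text{block}}} + \beta$$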
- the intra predictor 400 may perform intra prediction using the reconstructed texture data in the current texture picture. Intra-prediction may be performed on the depth picture in the same manner as the texture picture.
- coding information used for intra prediction of a texture picture may be similarly used for the depth picture.
- the coding information used for intra prediction may include intra prediction mode and partition information of intra prediction.
- the in-loop filter unit 500 may apply an in-loop filter to each coded block to reduce block distortion.
- the filter can smooth the edges of the block to improve the quality of the decoded picture.
- Filtered texture pictures or depth pictures may be output or stored in the decoded picture buffer unit 600 for use as a reference picture.
- if an in-loop filter designed for texture data is applied to depth data as it is, the coding efficiency may be reduced. Accordingly, a separate in-loop filter for depth data may be defined.
- as in-loop filtering methods for efficiently coding depth data, a region-based adaptive loop filter and a trilateral loop filter will be described below.
- in the case of the region-based adaptive loop filter, whether to apply the region-based adaptive loop filter may be determined based on the variation of the depth block.
- the variation amount of the depth block may be defined as the difference between the maximum pixel value and the minimum pixel value in the depth block.
- Whether to apply the filter may be determined by comparing the variation of the depth block with a predetermined threshold. For example, when the variation of the depth block is greater than or equal to the predetermined threshold, the difference between the maximum pixel value and the minimum pixel value in the depth block is large, so it may be determined to apply the region-based adaptive loop filter. In contrast, when the depth variation is smaller than the predetermined threshold, it may be determined not to apply the region-based adaptive loop filter.
- when it is determined to apply the filter, the filtered pixel value of the depth block may be derived by applying predetermined weights to neighboring pixel values.
- the predetermined weight may be determined based on a position difference between the pixel currently being filtered and the neighboring pixel and / or a difference value between the pixel value currently being filtered and the neighboring pixel value.
- the neighbor pixel value may mean any one of the pixel values included in the depth block except for the pixel value currently being filtered.
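The patent does not reproduce the weight formula here. A hypothetical bilateral-style weight consistent with the description (decaying with both the position difference and the pixel-value difference; $\sigma_s$ and $\sigma_r$ are assumed spread parameters, and $D(\cdot)$ denotes a depth value) would be:

$$w(p,q) = \exp\!\left(-\frac{\lVert p-q \rVert^2}{2\sigma_s^2}\right) \cdot \exp\!\left(-\frac{\left| D(p)-D(q) \right|^2}{2\sigma_r^2}\right)$$

with the filtered pixel obtained as the weighted sum of neighboring pixels normalized by the sum of the weights.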
- the trilateral loop filter according to the present invention is similar to the region-based adaptive loop filter except that it additionally considers texture data.
- the trilateral loop filter extracts the depth data of neighboring pixels satisfying the following three conditions.
- Condition 1 compares the position difference between the current pixel p and a neighboring pixel q in the depth block with a predetermined parameter σ1; Condition 2 compares the difference between the depth data of the current pixel p and the depth data of the neighboring pixel q with a predetermined parameter σ2; Condition 3 compares the difference between the texture data of the current pixel p and the texture data of the neighboring pixel q with a predetermined parameter σ3.
- the neighboring pixels satisfying all three conditions are extracted, and the current pixel p may be filtered with the median or average value of their depth data.
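As an illustration only, the following sketch shows how the three conditions could be evaluated for one pixel. The parameters sigma1 to sigma3, the square search window, and the median/average choice are assumptions, not values fixed by the patent.

```python
import numpy as np

def trilateral_filter_pixel(p, depth, texture, sigma1, sigma2, sigma3,
                            use_median=True):
    """Filter the depth value at pixel p = (y, x) as described above.

    A minimal sketch under assumed parameters: sigma1 bounds the position
    difference (Condition 1), sigma2 the depth difference (Condition 2),
    and sigma3 the texture difference (Condition 3).
    """
    y0, x0 = p
    h, w = depth.shape
    r = int(sigma1)  # search window radius implied by Condition 1
    candidates = []
    for y in range(max(0, y0 - r), min(h, y0 + r + 1)):
        for x in range(max(0, x0 - r), min(w, x0 + r + 1)):
            if (y, x) == (y0, x0):
                continue
            if max(abs(y - y0), abs(x - x0)) > sigma1:                    # Condition 1
                continue
            if abs(int(depth[y, x]) - int(depth[y0, x0])) > sigma2:      # Condition 2
                continue
            if abs(int(texture[y, x]) - int(texture[y0, x0])) > sigma3:  # Condition 3
                continue
            candidates.append(depth[y, x])
    if not candidates:
        return depth[y0, x0]
    return np.median(candidates) if use_median else np.mean(candidates)
```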
- the decoded picture buffer unit 600 stores or releases previously coded texture pictures or depth pictures in order to perform inter prediction.
- to this end, the frame_num and POC (Picture Order Count) of each picture may be used.
- furthermore, since some of the previously coded pictures may be depth pictures located at a view different from that of the current depth picture, view identification information identifying the view of a depth picture may be used in order to use such pictures as reference pictures.
- the decoded picture buffer unit 600 may manage the reference picture using an adaptive memory management control method and a sliding window method in order to more flexibly implement inter prediction.
- the depth pictures may be marked with a separate mark to distinguish them from texture pictures in the decoded picture buffer unit, and information for identifying each depth picture may be used in the marking process.
- the inter prediction unit 700 may perform motion compensation of the current block by using the reference picture and the motion information stored in the decoded picture buffer unit 600.
- the motion information may be understood as a broad concept including a motion vector and reference index information.
- the inter prediction unit 700 may perform temporal inter prediction to perform motion compensation.
- Temporal inter prediction may refer to inter prediction using a reference picture located at the same view as, and in a different time period from, the current texture block, together with the motion information of the current texture block.
- in the case of a multi-view image, inter-view prediction, which uses a reference picture located at a different view from the current texture block together with the motion information of the current texture block, may be performed in addition to temporal inter prediction.
- the motion information used for the inter-view prediction may include a disparity vector or an inter-view motion vector. A method of performing inter-view prediction using the disparity vector will be described below with reference to FIG. 2.
- FIG. 2 illustrates a method of performing inter-view prediction based on a disparity vector according to an embodiment to which the present invention is applied.
- a disparity vector of a current texture block may be derived (S200).
- a disparity vector may be derived from a depth image corresponding to a current texture block, which will be described in detail with reference to FIG. 3.
- It may also be derived from a neighboring block spatially adjacent to the current texture block, or may be derived from a temporal neighboring block located at a different time zone than the current texture block.
- a method of deriving a disparity vector from a spatial / temporal neighboring block of the current texture block will be described with reference to FIG. 4.
- inter-view prediction of the current texture block may be performed using the disparity vector derived in step S200 (S210).
- texture data of the current texture block may be predicted or reconstructed using the texture data of the reference block specified by the disparity vector.
- the reference block may belong to a view used for inter-view prediction of the current texture block, that is, a reference view.
- the reference block may belong to a reference picture located at the same time zone as the current texture block.
- a reference block belonging to a reference view may be specified using the disparity vector
- a temporal motion vector of a current texture block may be derived using the temporal motion vector of the specified reference block.
- the temporal motion vector refers to a motion vector used for temporal inter prediction, and may be distinguished from a disparity vector used for inter-view prediction.
- FIG. 3 illustrates a method of deriving a disparity vector of a current texture block using depth data of a depth image, as an embodiment to which the present invention is applied.
- location information of a depth block (hereinafter, referred to as a current depth block) in a depth picture corresponding to the current texture block may be obtained based on the location information of the current texture block (S300).
- the position of the current depth block may be determined in consideration of the spatial resolution between the depth picture and the current picture.
- the position of the current depth block may be determined as a block having the same position as the current texture block of the current picture.
- the current picture and the depth picture may be coded at different spatial resolutions. This is because the coding efficiency may not be significantly reduced even if the spatial resolution is coded at a lower level due to the characteristics of the depth information representing the distance information between the camera and the object. Therefore, when the spatial resolution of the depth picture is coded lower than the current picture, the decoder may involve an upsampling process for the depth picture before acquiring position information of the current depth block.
- offset information may be additionally considered when acquiring position information of the current depth block in the upsampled depth picture.
- the offset information may include at least one of top offset information, left offset information, right offset information, and bottom offset information.
- the top offset information may indicate a position difference between at least one pixel located at the top of the upsampled depth picture and at least one pixel located at the top of the current picture.
- Left, right, and bottom offset information may also be defined in the same manner.
- depth data corresponding to position information of a current depth block may be obtained (S310).
- depth data corresponding to corner pixels of the current depth block may be used.
- depth data corresponding to the center pixel of the current depth block may be used.
- any one of a maximum value, a minimum value, and a mode value may be selectively used among the plurality of depth data corresponding to the plurality of pixels, or an average value of the plurality of depth data may be used.
- the disparity vector of the current texture block may be derived using the depth data obtained in operation S310 (S320).
- the disparity vector of the current texture block may be derived as in Equation 3 below.
- In Equation 3, v denotes depth data, a denotes a scaling factor, and f denotes an offset used to derive the disparity vector.
- the scaling factor a and offset f may be signaled in a video parameter set or slice header, or may be a value pre-set in the decoder.
- n is a variable representing the value of the bit shift, which may be variably determined according to the accuracy of the disparity vector.
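The drawing containing Equation 3 is not reproduced in this text. Given the variables above, and matching the depth-to-disparity conversion commonly used in 3D video coding (stated here as a reconstruction, not as the verbatim equation of the patent):

$$DV = \left( a \cdot v + f \right) \gg n$$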
- FIG. 4 illustrates a candidate of a spatial / temporal neighboring block of a current texture block as an embodiment to which the present invention is applied.
- Referring to FIG. 4, the spatial neighboring blocks may include at least one of a left neighboring block (A1), an upper neighboring block (B1), a lower-left neighboring block (A0), an upper-right neighboring block (B0), and an upper-left neighboring block (B2) of the current texture block.
- the temporal neighboring block may mean a block at the same position as the current texture block within a picture located in a different time period from the current texture block.
- specifically, the temporal neighboring block may include at least one of a block (BR) corresponding to the lower-right pixel of the current texture block, a block (CT) corresponding to the center pixel of the current texture block, and a block (TL) corresponding to the upper-left pixel of the current texture block.
- the disparity vector of the current texture block may be derived from a disparity-compensated prediction block (hereinafter, referred to as a DCP block) among the spatial / temporal neighboring blocks.
- the DCP block may mean a block encoded through inter-view texture prediction using a disparity vector.
- the DCP block may perform inter-view prediction using texture data of the reference block specified by the disparity vector.
- the disparity vector of the current texture block may be predicted or reconstructed using the disparity vector used by the DCP block for inter-view texture prediction.
- the disparity vector of the current texture block may be derived from a disparity vector based-motion compensation prediction block (hereinafter referred to as DV-MCP block) among the spatial neighboring blocks.
- the DV-MCP block may mean a block encoded through inter-view motion prediction using a disparity vector.
- the DV-MCP block may perform temporal inter prediction using the temporal motion vector of the reference block specified by the disparity vector.
- the disparity vector of the current texture block may be predicted or reconstructed using the disparity vector used by the DV-MCP block to obtain the temporal motion vector of the reference block.
- for the current texture block, it may be searched whether each spatial / temporal neighboring block corresponds to a DCP block according to a pre-defined priority, and the disparity vector may be derived from the first DCP block found.
- for example, the search may be performed with the priority of spatial neighboring blocks -> temporal neighboring blocks, and among the spatial neighboring blocks it may be checked whether each corresponds to a DCP block in the priority order A1 -> B1 -> B0 -> A0 -> B2.
- this is only an embodiment of the priority, and may be determined differently within the scope apparent to those skilled in the art.
- when none of the spatial / temporal neighboring blocks corresponds to a DCP block, it may additionally be searched whether each spatial neighboring block corresponds to a DV-MCP block, and the disparity vector may likewise be derived from the first DV-MCP block found.
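A decoder-side sketch of this search order follows. The NeighborBlock fields are hypothetical stand-ins for decoder state; only the priorities (spatial before temporal, A1 -> B1 -> B0 -> A0 -> B2) come from the embodiment above.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

@dataclass
class NeighborBlock:
    """Hypothetical view of a neighboring block's coding state."""
    is_dcp: bool                   # coded with disparity-compensated prediction
    is_dv_mcp: bool                # coded with inter-view motion prediction
    dv: Optional[Tuple[int, int]]  # disparity vector used by that block

def derive_disparity_vector(spatial: Sequence[NeighborBlock],
                            temporal: Sequence[NeighborBlock]
                            ) -> Optional[Tuple[int, int]]:
    # Pass 1: first DCP block, spatial (ordered A1, B1, B0, A0, B2) then temporal.
    for block in list(spatial) + list(temporal):
        if block.is_dcp:
            return block.dv
    # Pass 2: otherwise, first DV-MCP block among the spatial neighbors.
    for block in spatial:
        if block.is_dv_mcp:
            return block.dv
    return None  # fall back to another derivation, e.g. the depth-based one of FIG. 3
```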
- FIG. 5 illustrates a method of performing inter prediction of a current texture block according to an embodiment to which the present invention is applied.
- the prediction mode of the current texture block may be determined (S500).
- the prediction mode of the current texture block may be determined using a prediction mode flag encoded and transmitted by the encoding apparatus. For example, a prediction mode flag value of 0 may indicate that the current texture block is encoded in the inter mode, and a value of 1 may indicate that it is encoded in the intra mode.
- a partition mode of the current texture block may be determined (S510).
- the partition mode in the present invention may specify whether the current texture block is encoded with a square partition or a rectangular partition, and whether it is encoded with a symmetric partition or an asymmetric partition.
- partition modes include 2Nx2N, Nx2N, 2NxN, NxN, 2NxnU, 2NxnD, nLx2N, nRx2N, and the like.
- the current texture block may be divided into at least one partition.
- a basic unit (ie, prediction unit) for performing inter prediction on the current texture block may be determined according to the partition mode.
- the partition mode of the current texture block may be determined using partition mode information (part_mode) encoded and transmitted by the encoding apparatus.
- a motion vector may be obtained for each partition according to the partition mode determined in operation S510 (S520).
- specifically, for each partition of the current texture block, a motion vector prediction value may be obtained using the motion vector of either a spatial neighboring block or a temporal neighboring block, and the motion vector may be reconstructed using the motion vector prediction value and the encoded motion vector difference value.
- the spatial neighboring block may mean a neighboring block adjacent to the top or left side of the partition
- the temporal neighboring block may mean a block having the same position as the current texture block in a picture located at a different time from the current texture block.
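In other words, for each partition the decoder reconstructs the motion vector from the signaled difference, which can be stated compactly (a standard formulation, added here for clarity):

$$\mathbf{mv} = \mathbf{mvp} + \mathbf{mvd}$$

where $\mathbf{mvp}$ is the motion vector prediction value taken from a spatial or temporal neighboring block and $\mathbf{mvd}$ is the encoded motion vector difference value.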
- inter prediction may be performed on a current texture block using the motion vector through depth-based block partitioning (S530).
- Depth-based block partitioning generates a plurality of prediction blocks using the different motion vectors of the plurality of partitions according to the partition mode of the current texture block, and generates a final prediction block by combining the plurality of prediction blocks according to the partition pattern of the depth block; this will be described in detail with reference to FIG. 6.
- inter prediction using depth-based block partitioning may be selectively performed based on the depth block partitioning flag. For example, a depth block partitioning flag value of 1 may indicate that the current texture block performs inter prediction using depth-based block partitioning, and a value of 0 may indicate that it does not. A method of signaling the depth block partitioning flag will be described with reference to FIGS. 7 to 9.
- depending on the prediction mode or partition mode of the current texture block, inter prediction using depth-based block partitioning may be restricted from being performed.
- FIG. 6 illustrates a method of performing inter prediction using depth based block partitioning according to an embodiment to which the present invention is applied.
- Referring to FIG. 6, a first prediction block P_T0(x, y) of the current texture block may be generated using the motion vector of the first partition.
- here, when the current texture block is encoded in the Nx2N partition mode, the first partition may mean either the left partition (e.g., the partition whose partition index is 0) or the right partition (e.g., the partition whose partition index is 1) of the current texture block.
- similarly, when the current texture block is encoded in the 2NxN partition mode, the first partition may mean either the upper partition or the lower partition.
- Inter prediction may be performed by applying a motion vector relating to the first partition to the current texture block.
- in other words, the first prediction block P_T0(x, y) may have the same size as the current texture block, that is, 2Nx2N.
- a second prediction block P_T1(x, y) of the current texture block may be generated using the motion vector of the second partition.
- here, the second partition is a partition belonging to the current texture block and may mean a partition other than the first partition. Inter prediction is performed by applying the motion vector of the second partition to the entire current texture block, so the second prediction block P_T1(x, y) may likewise have the same size as the current texture block, that is, 2Nx2N.
- the first prediction block P_T0(x, y) and the second prediction block P_T1(x, y) may then be combined according to the partition pattern of the depth block corresponding to the current texture block to generate the final prediction block P_T(x, y).
- the partition pattern of the depth block may be determined based on a comparison between the restored depth value of the depth block and a predetermined threshold value.
- the predetermined threshold may be determined by the value of any one of the samples located at the corners of the depth block, or may be determined by the average, mode, minimum, or maximum values of the samples located at the corners.
- Samples located at the corners of the depth block may include at least two of a left-top corner sample, a right-top corner sample, a left-bottom corner sample, and a right-bottom corner sample.
- the depth block may be divided into a first region and a second region by comparing the restored depth value of the depth block with a predetermined threshold.
- the first region may mean a set of samples having a depth value larger than a predetermined threshold
- the second region may mean a set of samples having a depth value smaller than a predetermined threshold.
- for example, a value of 1 may be assigned to the samples of the first region and a value of 0 to the samples of the second region, or conversely, 0 may be assigned to the samples of the first region and 1 to the samples of the second region; in this way, the partition pattern may be determined.
- a prediction signal corresponding to the first region according to the partition pattern (hereinafter, referred to as a first prediction signal m_D0(x, y)) may be extracted from the first prediction block P_T0(x, y), and a prediction signal corresponding to the second region according to the partition pattern (hereinafter, referred to as a second prediction signal m_D1(x, y)) may be extracted from the second prediction block P_T1(x, y).
- the final prediction block of the current texture block may be generated by combining the extracted first prediction signal m_D0(x, y) and second prediction signal m_D1(x, y), as illustrated in the sketch below.
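The combination step can be summarized with the following sketch. It assumes the corner-average threshold embodiment described above and NumPy arrays of equal 2Nx2N size, and is an illustration rather than the normative process.

```python
import numpy as np

def depth_based_block_partitioning(pred0: np.ndarray,
                                   pred1: np.ndarray,
                                   depth: np.ndarray) -> np.ndarray:
    """Combine two full-size prediction blocks by a depth-derived pattern.

    pred0/pred1 correspond to P_T0(x, y) and P_T1(x, y); depth is the
    reconstructed depth block corresponding to the current texture block.
    """
    h, w = depth.shape
    # Threshold: average of the four corner samples (one embodiment above).
    threshold = (int(depth[0, 0]) + int(depth[0, w - 1]) +
                 int(depth[h - 1, 0]) + int(depth[h - 1, w - 1])) / 4.0
    # Partition pattern: first region where depth > threshold, second elsewhere.
    pattern = depth > threshold
    # m_D0 is taken from pred0 on the first region, m_D1 from pred1 on the second.
    return np.where(pattern, pred0, pred1)
```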
- FIGS. 7 to 9 illustrate methods of signaling a depth block partitioning flag according to embodiments to which the present invention is applied.
- Referring to FIG. 7, a depth block partitioning flag may be obtained based on a depth block partitioning enable flag (depth_based_blk_part_flag[layerId]) and the prediction mode of the current texture block (S700).
- the depth block partitioning enable flag may indicate whether depth-based block partitioning is used in at least one slice or coding block belonging to the layer identified by layerId. For example, an enable flag value of 1 may indicate that depth-based block partitioning is used in at least one slice or coding block belonging to the layer, and a value of 0 may indicate that depth-based block partitioning is not used in any slice or coding block belonging to the layer. Therefore, the depth block partitioning flag may be obtained only when the value of the depth block partitioning enable flag is 1.
- since depth-based block partitioning is a method of generating separate prediction signals from one coding block using different motion vectors and synthesizing the generated prediction signals according to the partition pattern of the depth block, it cannot be used when the prediction mode of the coding block is the intra mode or the skip mode, both of which are performed in units of coding blocks. Therefore, the encoding efficiency can be improved by obtaining the depth block partitioning flag only when the prediction mode of the current texture block is the inter mode.
- Referring to FIG. 8, a depth block partitioning flag may be obtained based on a depth block partitioning enable flag (depth_based_blk_part_flag[layerId]), the prediction mode of the current texture block, and the partition mode of the current texture block (S800).
- the depth block partitioning enable flag has the same meaning as described with reference to FIG. 7, and the depth block partitioning flag may be obtained only when the value of the depth block partitioning enable flag is 1.
- the encoding efficiency may be increased by acquiring the depth block partitioning flag only when the prediction mode of the current texture block is the inter mode.
- in addition, since depth-based block partitioning requires generating prediction signals using the different motion vectors of the plurality of partitions constituting one coding block, acquisition of the depth block partitioning flag may be restricted to the case where the partition mode of the current texture block is not 2Nx2N.
- Referring to FIG. 9, a depth block partitioning flag may be obtained based on a depth block partitioning enable flag (depth_based_blk_part_flag[layerId]), the prediction mode of the current texture block, and the partition mode of the current texture block (S900).
- as described with reference to FIG. 7, the depth block partitioning flag may be obtained only when the value of the depth block partitioning enable flag is 1.
- the encoding efficiency may be increased by acquiring the depth block partitioning flag only when the prediction mode of the current texture block is the inter mode.
- in this embodiment, acquisition of the depth block partitioning flag may be restricted to the case where the current texture block is not encoded with a square partition (that is, when the partition mode of the current texture block is neither 2Nx2N nor NxN), as shown in the sketch below.
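Pulling the gating conditions of this embodiment together, a hypothetical parsing routine might look as follows; the function and helper names are illustrative, not drawn from any specification text.

```python
def parse_depth_block_partitioning_flag(read_flag,
                                        enable_flag: bool,
                                        pred_mode: str,
                                        part_mode: str) -> bool:
    """Sketch of the FIG. 9 condition for reading the flag from the bitstream.

    read_flag is a hypothetical callable that reads one flag bit; only the
    three gating conditions come from the embodiments above.
    """
    if (enable_flag                       # depth_based_blk_part_flag[layerId] == 1
            and pred_mode == "INTER"      # not intra, not skip
            and part_mode not in ("PART_2Nx2N", "PART_NxN")):  # non-square partition
        return read_flag()
    return False  # inferred: depth-based block partitioning not used
```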
- the present invention can be used to code a video signal.
Claims (15)
- A method of decoding a multi-view video signal, comprising: determining a prediction mode of a current depth block; when the current texture block is a block encoded in an inter mode, determining a partition mode of the current texture block; obtaining a motion vector for each partition according to the partition mode; and performing inter prediction on the current texture block using the motion vector through depth-based block partitioning.
- The method of claim 1, wherein performing the inter prediction comprises: generating a first prediction block of the current texture block using a motion vector of a first partition of the current texture block, and generating a second prediction block of the current texture block using a motion vector of a second partition of the current texture block; and generating a final prediction block by combining the first prediction block and the second prediction block according to a partition pattern of a depth block corresponding to the current texture block.
- The method of claim 2, wherein the partition pattern of the depth block is divided into a first region and a second region based on a comparison between a reconstructed depth value of the depth block and a predetermined threshold value, wherein the predetermined threshold value is an average value of samples located at corners of the depth block, the first region is a region composed of samples having depth values greater than the predetermined threshold value, and the second region is a region composed of samples having depth values smaller than the predetermined threshold value.
- The method of claim 1, wherein performing inter prediction on the current texture block is selectively performed based on a depth block partitioning flag, and the depth block partitioning flag is signaled when the partition mode of the current texture block is not 2Nx2N or NxN.
- An apparatus for decoding a multi-view video signal, comprising: an entropy decoding unit configured to determine a prediction mode of a current depth block and, when the current texture block is a block encoded in an inter mode, to determine a partition mode of the current texture block; and an inter prediction unit configured to obtain a motion vector for each partition according to the partition mode and to perform inter prediction on the current texture block using the motion vector through depth-based block partitioning.
- The apparatus of claim 5, wherein the inter prediction unit generates a first prediction block of the current texture block using a motion vector of a first partition of the current texture block, generates a second prediction block of the current texture block using a motion vector of a second partition of the current texture block, and generates a final prediction block by combining the first prediction block and the second prediction block according to a partition pattern of a depth block corresponding to the current texture block.
- The apparatus of claim 6, wherein the partition pattern of the depth block is divided into a first region and a second region based on a comparison between a reconstructed depth value of the depth block and a predetermined threshold value, wherein the predetermined threshold value is an average value of samples located at corners of the depth block, the first region is a region composed of samples having depth values greater than the predetermined threshold value, and the second region is a region composed of samples having depth values smaller than the predetermined threshold value.
- The apparatus of claim 5, wherein the inter prediction unit selectively performs inter prediction through the depth-based block partitioning based on a depth block partitioning flag, and the depth block partitioning flag is signaled when the partition mode of the current texture block is not 2Nx2N or NxN.
- A method of encoding a multi-view video signal, comprising: encoding a prediction mode of a current depth block; when the current texture block is a block encoded in an inter mode, encoding a partition mode of the current texture block; obtaining a motion vector for each partition according to the partition mode; and performing inter prediction on the current texture block using the motion vector through depth-based block partitioning.
- The method of claim 9, wherein performing the inter prediction comprises: generating a first prediction block of the current texture block using a motion vector of a first partition of the current texture block, and generating a second prediction block of the current texture block using a motion vector of a second partition of the current texture block; and generating a final prediction block by combining the first prediction block and the second prediction block according to a partition pattern of a depth block corresponding to the current texture block.
- The method of claim 10, wherein the partition pattern of the depth block is divided into a first region and a second region based on a comparison between a reconstructed depth value of the depth block and a predetermined threshold value, wherein the predetermined threshold value is an average value of samples located at corners of the depth block, the first region is a region composed of samples having depth values greater than the predetermined threshold value, and the second region is a region composed of samples having depth values smaller than the predetermined threshold value.
- The method of claim 9, wherein performing inter prediction on the current texture block is selectively performed based on a depth block partitioning flag, and the depth block partitioning flag is encoded when the partition mode of the current texture block is not 2Nx2N or NxN.
- An apparatus for encoding a multi-view video signal, comprising: an entropy encoding unit configured to encode a prediction mode of a current depth block and, when the current texture block is a block encoded in an inter mode, to encode a partition mode of the current texture block; and an inter prediction unit configured to obtain a motion vector for each partition according to the partition mode and to perform inter prediction on the current texture block using the motion vector through depth-based block partitioning.
- The apparatus of claim 13, wherein the inter prediction unit generates a first prediction block of the current texture block using a motion vector of a first partition of the current texture block, generates a second prediction block of the current texture block using a motion vector of a second partition of the current texture block, and generates a final prediction block by combining the first prediction block and the second prediction block according to a partition pattern of a depth block corresponding to the current texture block.
- The apparatus of claim 13, wherein the partition pattern of the depth block is divided into a first region and a second region based on a comparison between a reconstructed depth value of the depth block and a predetermined threshold value, wherein the predetermined threshold value is an average value of samples located at corners of the depth block, the first region is a region composed of samples having depth values greater than the predetermined threshold value, and the second region is a region composed of samples having depth values smaller than the predetermined threshold value.
Priority Applications (1)
- US 15/322,200 (granted as US10187658B2): "Method and device for processing multi-view video signal", priority date 2014-07-03, filed 2015-07-02
Applications Claiming Priority (2)
- KR20140083343, priority date 2014-07-03
- KR10-2014-0083343, priority date 2014-07-03
Publications (1)
- WO2016003209A1 (ko), published 2016-01-07
Family
ID=55019653
Family Applications (1)
- PCT/KR2015/006797 (WO2016003209A1): "Method and apparatus for processing a multi-view video signal", priority date 2014-07-03, filed 2015-07-02
Country Status (3)
- US: US10187658B2
- KR: KR20160004946A
- WO: WO2016003209A1
Families Citing this family (2)
- US10881956B2 (priority 2018-12-28, published 2021-01-05, Intel Corporation): "3D renderer to video encoder pipeline for improved visual quality and low latency"
- KR102612539B1 (priority 2019-12-17, published 2023-12-11, Electronics and Telecommunications Research Institute): "Multi-view video encoding and decoding method"
2015
- 2015-07-02: PCT application PCT/KR2015/006797 filed (WO2016003209A1), active (application filing)
- 2015-07-02: KR application KR1020150094440 filed (KR20160004946A), not active (application discontinuation)
- 2015-07-02: US application US15/322,200 filed (US10187658B2), active
Patent Citations (4)
- KR20130038360A (priority 2010-11-25, published 2013-04-17, LG Electronics Inc.): "Method for signaling image information, and method for decoding image information using same"
- KR20120068743A (priority 2010-12-17, published 2012-06-27, Electronics and Telecommunications Research Institute): "Inter prediction method and apparatus therefor"
- WO2013030456A1 (priority 2011-08-30, published 2013-03-07, Nokia Corporation): "An apparatus, a method and a computer program for video coding and decoding"
- KR20130079261A (priority 2011-12-30, published 2013-07-10, Humax Co., Ltd.): "3D video encoding method and apparatus, and decoding method and apparatus"
Non-Patent Citations (1)
- Fabian Jäger, "Depth-based block partitioning for 3D video coding," Picture Coding Symposium (PCS), 11 December 2013, pages 410-413, XP032566956.
Cited By (7)
- WO2017131475A1 (priority 2016-01-27, published 2017-08-03): "Method and apparatus for encoding and decoding video using prediction"
- CN108605139A (priority 2016-01-27, published 2018-09-28): "Method and apparatus for encoding and decoding video by using prediction"
- CN115442614A (priority 2016-01-27, published 2022-12-06): "Method and apparatus for encoding and decoding video by using prediction"
- CN115460410A (priority 2016-01-27, published 2022-12-09): "Method and apparatus for encoding and decoding video by using prediction"
- CN115460409A (priority 2016-01-27, published 2022-12-09): "Method and apparatus for encoding and decoding video by using prediction"
- CN115460408A (priority 2016-01-27, published 2022-12-09): "Method and apparatus for encoding and decoding video by using prediction"
- CN115460407A (priority 2016-01-27, published 2022-12-09): "Method and apparatus for encoding and decoding video by using prediction"
Also Published As
- US20170142443A1, published 2017-05-18
- US10187658B2, published 2019-01-22
- KR20160004946A, published 2016-01-13
Similar Documents
- WO2015142054A1: Method and apparatus for processing a multi-view video signal
- WO2013169031A1: Video signal processing method and apparatus
- WO2015142057A1: Method and apparatus for processing a multi-view video signal
- WO2010068020A2: Multi-view video encoding and decoding method and apparatus
- WO2013162273A1: Video signal processing method and apparatus
- WO2010087589A2: Method and apparatus for processing a video signal using boundary intra coding
- WO2014107083A1: Video signal processing method and apparatus
- WO2014010935A1: Video signal processing method and apparatus
- WO2016056822A1: 3D video coding method and apparatus
- WO2013165143A1: Method and apparatus for encoding and decoding a multi-view image
- WO2019198997A1: Intra-prediction-based image coding method and apparatus
- WO2018062699A1: Image decoding method and apparatus in an image coding system
- WO2016056782A1: Depth picture coding method and apparatus in video coding
- WO2014168443A1: Video signal processing method and apparatus
- WO2013133627A1: Video signal processing method
- WO2016056821A1: Motion information compression method and apparatus for 3D video coding
- WO2013176485A1: Video signal processing method and apparatus
- WO2013191436A1: Video signal processing method and apparatus
- WO2019194500A1: Intra-prediction-based image coding method and apparatus
- WO2014010918A1: Video signal processing method and apparatus
- WO2016003209A1: Method and apparatus for processing a multi-view video signal
- WO2015199376A1: Method and apparatus for processing a multi-view video signal
- WO2013133587A1: Video signal processing method and apparatus
- WO2016003210A1: Method and apparatus for processing a multi-view video signal
- WO2015182927A1: Method and apparatus for processing a multi-view video signal
Legal Events
- 121: The EPO has been informed by WIPO that EP was designated in this application (Ref document number: 15815060; Country: EP; Kind code: A1)
- WWE: WIPO information: entry into national phase (Ref document number: 15322200; Country: US)
- NENP: Non-entry into the national phase (Ref country code: DE)
- 32PN: Public notification in the EP bulletin, as the address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26/04/2017)
- 122: PCT application non-entry in European phase (Ref document number: 15815060; Country: EP; Kind code: A1)