WO2013141665A1 - Video encoding method, video decoding method, and apparatus using the same - Google Patents
- Publication number
- WO2013141665A1 (PCT/KR2013/002424)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- substream
- substreams
- lcu
- decoding
- bitstream
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/436—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/174—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/423—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present invention relates to video encoding and decoding techniques, and more particularly, to parallel decoding of video.
- High-efficiency image compression technology can be used to effectively transmit, store, and reproduce high-resolution, high-quality video information.
- Inter prediction and intra prediction may be used.
- In inter prediction, the pixel value of the current picture is predicted by referring to information of another picture.
- In intra prediction, the pixel value is predicted by using the correlation between pixels in the same picture.
- Various methods for making the predicted image closer to the original may be applied per processing unit, e.g., a block. This allows the decoding apparatus to reconstruct the image more accurately (more consistently with the original), and the encoding apparatus can encode the image so that it can be reconstructed more accurately.
- An object of the present invention is to provide a method and apparatus for organizing video information to effectively perform parallel decoding.
- An object of the present invention is to provide a substream structure capable of effectively performing parallel decoding.
- An embodiment of the present invention is a video decoding method, comprising: receiving a bitstream including substreams that are rows of large coding units (LCUs) and decoding the substreams in parallel, wherein the number of substreams may be equal to the number of entry points.
- Another embodiment of the present invention provides a video encoding method, comprising: encoding in parallel substreams that are rows of large coding units (LCUs) and transmitting a bitstream including the encoded substreams, wherein the number of substreams may be equal to the number of entry points.
- Another embodiment of the present invention is a video decoding apparatus that decodes in parallel a bitstream including substreams that are rows of large coding units (LCUs), wherein the number of substreams may be equal to the number of entry points.
- Another embodiment of the present invention is a video encoding apparatus that encodes in parallel substreams that are rows of large coding units (LCUs) and transmits a bitstream including the encoded substreams, wherein the number of substreams may be equal to the number of entry points.
- video information can be configured to effectively perform parallel decoding.
- a substream may be configured to effectively perform parallel decoding.
- According to the present invention, it is possible to effectively perform parallel decoding corresponding to the configuration of various processing cores. For example, even when the number of processing cores varies, parallel decoding can be performed effectively.
- FIG. 1 is a block diagram schematically illustrating a video encoding apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram schematically illustrating a video decoding apparatus according to an embodiment of the present invention.
- FIG. 3 is a diagram schematically illustrating an example of a slice.
- FIG. 4 is a diagram schematically illustrating an example of a tile and a slice.
- FIG. 5 is a diagram schematically illustrating another example of a tile and a slice.
- FIG. 6 is a diagram schematically illustrating a WPP substream.
- FIG. 7 schematically illustrates an example in which a predetermined region in a picture is divided into substreams.
- FIG. 8 is a diagram schematically illustrating an LCU order in a bitstream according to the example of FIG. 7.
- FIG. 9 is a diagram schematically illustrating an example of decoding a bitstream according to the example of FIG. 7 using one processing core.
- FIG. 10 is a diagram schematically illustrating an example in which each LCU row is made into a substream according to the present invention.
- FIG. 11 is a diagram schematically illustrating LCUs in an aligned bitstream, in accordance with the present invention.
- FIG. 12 is a flowchart schematically illustrating a video encoding method according to the present invention.
- FIG. 13 is a flowchart schematically illustrating a video decoding method according to the present invention.
- The components in the drawings described herein are shown independently for convenience of describing their distinct functions in the video encoding/decoding apparatus; this does not mean that each component must be implemented as separate hardware or separate software.
- Two or more components may be combined into one, or one component may be divided into a plurality of components.
- Embodiments in which components are integrated and/or separated are also included in the scope of the present invention without departing from its spirit.
- The encoding apparatus 100 may include a picture divider 105, a predictor 110, a transformer 115, a quantizer 120, a reordering unit 125, an entropy encoding unit 130, an inverse quantization unit 135, an inverse transform unit 140, a filter unit 145, and a memory 150.
- the picture dividing unit 105 may divide the input picture into at least one processing unit block.
- The block as the processing unit may be a prediction unit (hereinafter 'PU'), a transform unit (hereinafter 'TU'), or a coding unit (hereinafter 'CU').
- the processing unit blocks divided by the picture divider 105 may have a quad-tree structure.
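The quad-tree division mentioned above can be sketched as follows. This is a minimal illustration in Python; the `should_split` decision callback is hypothetical — in a real encoder the split decision would come from rate-distortion analysis, not a fixed rule.

```python
def split_cu(x, y, size, min_size, should_split):
    """Recursively split a square block in quad-tree fashion.

    should_split(x, y, size) is a hypothetical decision callback.
    Returns a list of (x, y, size) leaf CUs in z-scan order
    (top-left, top-right, bottom-left, bottom-right).
    """
    if size > min_size and should_split(x, y, size):
        half = size // 2
        leaves = []
        for dy in (0, half):      # top row of quadrants first
            for dx in (0, half):  # left quadrant before right
                leaves += split_cu(x + dx, y + dy, half, min_size, should_split)
        return leaves
    return [(x, y, size)]

# Example: split a 64x64 LCU once, keeping 32x32 CUs.
cus = split_cu(0, 0, 64, 32, lambda x, y, s: s == 64)
# → [(0, 0, 32), (32, 0, 32), (0, 32, 32), (32, 32, 32)]
```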
- the predictor 110 includes an inter predictor for performing inter prediction and an intra predictor for performing intra prediction, as described below.
- the prediction unit 110 generates a prediction block by performing prediction on the processing unit of the picture in the picture division unit 105.
- the processing unit of the picture in the prediction unit 110 may be a CU, a TU, or a PU.
- the prediction unit 110 may determine whether the prediction performed on the processing unit is inter prediction or intra prediction, and determine specific contents (eg, prediction mode, etc.) of each prediction method.
- the processing unit in which the prediction is performed and the processing unit in which the details of the prediction method and the prediction method are determined may be different.
- the prediction method and the prediction mode may be determined in units of PUs, and the prediction may be performed in units of TUs.
- a prediction block may be generated by performing prediction based on information of at least one picture of a previous picture and / or a subsequent picture of the current picture.
- a prediction block may be generated by performing prediction based on pixel information in a current picture.
- a skip mode, a merge mode, a motion vector prediction (MVP), and the like can be used.
- a reference picture may be selected for a PU and a reference block corresponding to the PU may be selected.
- the reference block may be selected in integer pixel units.
- a prediction block is generated in which a residual signal with the current PU is minimized and the size of the motion vector is also minimized.
- the prediction block may be generated in integer sample units, or may be generated in sub-pixel units such as 1/2 pixel unit or 1/4 pixel unit.
- the motion vector may also be expressed in units of integer pixels or less.
- When the skip mode is applied, the prediction block may be used as the reconstructed block, and thus the residual may not be generated, transformed, quantized, or transmitted.
- When performing intra prediction, the prediction mode may be determined in units of PUs and prediction may be performed in units of PUs. Alternatively, the prediction mode may be determined in units of PUs and intra prediction may be performed in units of TUs.
- the prediction mode may have 33 directional prediction modes and at least two non-directional modes.
- The non-directional modes may include a DC prediction mode and a planar mode.
- a prediction block may be generated after applying a filter to a reference sample.
- whether to apply the filter to the reference sample may be determined according to the intra prediction mode and / or the size of the current block.
- The PU may be a block of various sizes/shapes; for example, in the case of inter prediction, the PU may be a 2N×2N block, a 2N×N block, an N×2N block, an N×N block (N is an integer), or the like.
- In the case of intra prediction, the PU may be a 2N×2N block or an N×N block (where N is an integer).
- The PU of the N×N block size may be set to apply only in a specific case.
- For example, the N×N block size PU may be used only for the minimum size CU or only for intra prediction.
- PUs such as N×mN blocks, mN×N blocks, 2N×mN blocks, or mN×2N blocks (m < 1) may be further defined and used.
- the residual value (the residual block or the residual signal) between the generated prediction block and the original block is input to the converter 115.
- the prediction mode information, the motion vector information, etc. used for the prediction are encoded by the entropy encoding unit 130 together with the residual value and transmitted to the decoding apparatus.
- the transform unit 115 performs transform on the residual block in units of transform blocks and generates transform coefficients.
- the transform block is a rectangular block of samples to which the same transform is applied.
- the transform block can be a transform unit (TU) and can have a quad tree structure.
- the transformer 115 may perform the transformation according to the prediction mode applied to the residual block and the size of the block.
- Depending on the prediction mode and the block size, the residual block may be transformed using a discrete sine transform (DST); otherwise, the residual block may be transformed using a discrete cosine transform (DCT).
- the transform unit 115 may generate a transform block of transform coefficients by the transform.
- the quantization unit 120 may generate quantized transform coefficients by quantizing the residual values transformed by the transform unit 115, that is, the transform coefficients.
- the value calculated by the quantization unit 120 is provided to the inverse quantization unit 135 and the reordering unit 125.
- the reordering unit 125 rearranges the quantized transform coefficients provided from the quantization unit 120. By rearranging the quantized transform coefficients, the encoding efficiency of the entropy encoding unit 130 may be increased.
- the reordering unit 125 may rearrange the quantized transform coefficients in the form of a 2D block into a 1D vector form through a coefficient scanning method.
- the entropy encoding unit 130 may perform entropy encoding on the quantized transform coefficients rearranged by the reordering unit 125.
- Entropy encoding may include, for example, encoding methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC).
- the entropy encoding unit 130 may include quantized transform coefficient information, block type information, prediction mode information, partition unit information, PU information, transmission unit information, and motion vector of the CUs received from the reordering unit 125 and the prediction unit 110.
- Various information such as information, reference picture information, interpolation information of a block, and filtering information may be encoded.
- The entropy encoding unit 130 may make certain modifications to a parameter set or syntax to be transmitted.
- the inverse quantizer 135 inversely quantizes the quantized values (quantized transform coefficients) in the quantizer 120, and the inverse transformer 140 inversely transforms the inverse quantized values in the inverse quantizer 135.
- the reconstructed block may be generated by combining the residual values generated by the inverse quantizer 135 and the inverse transform unit 140 and the prediction blocks predicted by the prediction unit 110.
- A reconstructed block is generated by adding the residual block and the prediction block through an adder.
- The adder may be viewed as a separate unit (reconstructed block generation unit) for generating a reconstructed block.
- the filter unit 145 may apply a deblocking filter, an adaptive loop filter (ALF), and a sample adaptive offset (SAO) to the reconstructed picture.
- ALF adaptive loop filter
- SAO sample adaptive offset
- the deblocking filter may remove distortion generated at the boundary between blocks in the reconstructed picture.
- the adaptive loop filter may perform filtering based on a value obtained by comparing the reconstructed image with the original image after the block is filtered through the deblocking filter. ALF may be performed only when high efficiency is applied.
- The SAO restores, on a pixel-by-pixel basis, the offset difference from the original image for the block to which the deblocking filter has been applied, and is applied in the form of a band offset or an edge offset.
- the filter unit 145 may not apply filtering to the reconstructed block used for inter prediction.
- The video decoding apparatus 200 may include an entropy decoding unit 210, a reordering unit 215, an inverse quantization unit 220, an inverse transform unit 225, a prediction unit 230, a filter unit 235, and a memory 240.
- the input bitstream may be decoded according to a procedure in which image information is processed in the video encoding apparatus.
- Among the information decoded by the entropy decoding unit 210, information for generating the prediction block is provided to the predictor 230, and the residual values on which entropy decoding has been performed by the entropy decoding unit 210, that is, the quantized transform coefficients, may be input to the reordering unit 215.
- the reordering unit 215 may reorder the information of the bitstream entropy decoded by the entropy decoding unit 210, that is, the quantized transform coefficients, based on the reordering method in the encoding apparatus.
- The reordering unit 215 may reorder the coefficients expressed in the form of a one-dimensional vector by restoring them to the form of a two-dimensional block.
- the reordering unit 215 may generate an array of coefficients (quantized transform coefficients) in the form of a 2D block by scanning coefficients based on the prediction mode applied to the current block (transform block) and the size of the transform block.
- the inverse quantization unit 220 may perform inverse quantization based on the quantization parameter provided by the encoding apparatus and the coefficient values of the rearranged block.
- the inverse transform unit 225 may perform inverse DCT and / or inverse DST on the DCT and the DST performed by the transform unit of the encoding apparatus with respect to the quantization result performed by the video encoding apparatus.
- the inverse transformation may be performed based on a transmission unit determined by the encoding apparatus or a division unit of an image.
- The DCT and/or DST in the transform unit of the encoding apparatus may be selectively performed according to a plurality of pieces of information, such as the prediction method, the size of the current block, and the prediction direction, and the inverse transform unit 225 of the decoding apparatus may perform inverse transformation based on the transform information used in the transform unit of the encoding apparatus.
- the prediction unit 230 may generate the prediction block based on the prediction block generation related information provided by the entropy decoding unit 210 and the previously decoded block and / or picture information provided by the memory 240.
- intra prediction for generating a prediction block based on pixel information in the current picture may be performed.
- the residual is not transmitted and the prediction block may be a reconstruction block.
- the reconstructed block and / or picture may be provided to the filter unit 235.
- the filter unit 235 may apply deblocking filtering, sample adaptive offset (SAO), and / or ALF to the reconstructed block and / or picture.
- the memory 240 may store the reconstructed picture or block to use as a reference picture or reference block and provide the reconstructed picture to the output unit.
- the encoding apparatus and the decoding apparatus may divide the picture into predetermined units and process (encode / decode) the picture.
- a picture may be divided into slices and tiles.
- a slice is a sequence of one or more slice segments.
- the slice sequence starts with an independent slice segment and includes dependent slice segments that exist until the next independent slice segment.
- the current picture 300 is divided into two slices by the slice boundary 350.
- The first slice may consist of an independent slice segment 310 comprising four coding tree units (CTUs), a first dependent slice segment 320 comprising 32 CTUs, and a second dependent slice segment 340 comprising 24 CTUs, separated by slice segment boundaries 330.
- Another independent slice segment 360 is composed of 28 CTUs.
- a tile may also be a sequence of CTUs or LCUs.
- the CTU is a coding unit of a quad-tree structure and may be an LCU.
- The terms CTU and LCU may be used interchangeably where necessary to help the understanding of the present invention.
- FIG. 4 is a diagram schematically illustrating an example of a tile and a slice.
- FIG. 5 is a diagram schematically illustrating another example of a tile and a slice.
- the current picture 500 is divided into two tiles to the left and right of the tile boundary 510.
- the tile to the left of the tile boundary 510 includes two slices based on the slice boundary 550.
- The slice above slice boundary 550 includes an independent slice segment 520 and a dependent slice segment 540, and the slice below slice boundary 550 includes an independent slice segment 530 and a dependent slice segment 560.
- The next slice, based on slice boundary 590, that is, the slice in the second tile, includes an independent slice segment 570 and a dependent slice segment 580.
- Encoding and decoding may be performed in units of tiles or in units of rows of CTUs (hereinafter, for convenience of description, a row of CTUs or a row (stream) of LCUs is referred to as a 'substream'). Each sample in a tile or substream may be processed in units of CTUs or LCUs.
- the decoding process may be processed in parallel.
- the decoding process may be performed in parallel for each tile or in parallel for each substream.
- each tile may be decoded simultaneously.
- the maximum number of tiles that can be processed in parallel may be predetermined. For example, a maximum of four tiles may be set to be processed in parallel.
- the decoding apparatus may process 1 to 4 tiles at once.
- The substream may be a row of LCUs or CTUs, as the part of the bitstream to be decoded in each decoding process when a plurality of decoding processes are performed in parallel.
- After the second CTU (LCU) of a substream is entropy decoded, the relevant context information is stored.
- The first CTU (LCU) of the (n+1)-th substream may be entropy decoded based on the context information for the second CTU (LCU) of the n-th substream.
- This row-by-row parallel processing is called wavefront parallel processing (WPP).
- the tile structure and the WPP allow the encoding device to divide the picture into several parts, which can then be decoded in a parallel manner in the decoding device.
- An access point on the bitstream for starting decoding in parallel using the tile structure (tile substream) or the WPP substream is called an entry point.
- the entry point may be the start point of each WPP substream or the start point of each tile on the bitstream.
- FIG. 6 is a diagram schematically illustrating a WPP substream.
- the predetermined area 600 in the picture includes a plurality of substreams such as substream A 610, substream B 620, substream C 630, and the like.
- Each substream is decoded sequentially from the first LCU.
- Second and subsequent LCUs of each substream may be entropy decoded based on the entropy decoding result of previous LCUs, that is, the context.
- Each substream can be decoded in parallel, and in the substreams after the first substream, the first LCU can be entropy decoded based on the values of the context variables for the second LCU of the previous substream.
- the decoding process proceeds from the first LCU A1 of the first row 610 in the decoding object region 600.
- Once the entropy decoding of the second LCU A2 of the first row 610 is complete, the decoding device stores the values of the context variables for A2.
- the first LCU B1 of the second row 620 is entropy decoded based on the value of the context variables for the second LCU A2 of the first row 610. Once the entropy decoding for the second LCU B2 of the second row 620 is complete, the decoding device stores the values of the context variables for B2.
- The first LCU C1 of the third row 630 is entropy decoded based on the values of the context variables for the second LCU B2 of the second row 620.
- Once the entropy decoding of the second LCU C2 of the third row 630 is complete, the decoding device stores the values of the context variables for C2.
- the fourth and subsequent rows can also be entropy decoded using the context variable values for the second LCU of the previous row.
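The wavefront dependency described above — each row trailing the previous one by two LCUs, because the first LCU of a row inherits context stored after the second LCU above — can be illustrated with a small scheduling sketch. This assumes one time unit per LCU and one processing core per row; the function name is illustrative.

```python
def wpp_schedule(rows, cols):
    """Earliest finish time of each LCU under WPP, assuming one time
    unit per LCU and enough processing cores (one per LCU row).
    LCU (r, c) may start only after (r, c-1) and, for r > 0,
    (r-1, c+1) are finished -- the two-LCU lag that lets the first
    LCU of a row use the context stored after the second LCU above."""
    finish = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            deps = []
            if c > 0:
                deps.append(finish[r][c - 1])          # left neighbor
            if r > 0:
                deps.append(finish[r - 1][min(c + 1, cols - 1)])  # row above, one to the right
            finish[r][c] = max(deps, default=0) + 1
    return finish

t = wpp_schedule(3, 6)
# Each row starts two time units after the row above:
# t[0] = [1, 2, 3, 4, 5, 6]
# t[1] = [3, 4, 5, 6, 7, 8]
# t[2] = [5, 6, 7, 8, 9, 10]
```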
- the entry point may indicate the decoding start point (access point) for each substream.
- a predetermined area 600 of a picture may be a partial area of the current picture, a slice of the current picture, or an entire area of the current picture.
- For a given area (for example, an entire picture or a slice), entry points, as positions for accessing the first bit of each substream, may be signaled in the slice header.
- FIG. 7 schematically illustrates an example in which a predetermined region in a picture is divided into substreams.
- In FIG. 7, an example is described in which a predetermined region 700 in a picture is divided into three substreams: substream 0 (710 and 740), substream 1 (720 and 750), and substream 2 (730).
- The first row 710 of substream 0 consists of the 0th LCU to the 5th LCU,
- and the second row 740 of substream 0 consists of the 18th LCU to the 23rd LCU.
- the first row 720 of substream 1 consists of the 6th LCU to the 11th LCU
- the second row 750 of substream 1 consists of the 24th LCU to 29th LCU.
- Substream 2 (730) is composed of the 12th LCU to the 17th LCU.
- The substreams are transmitted in sequence in the bitstream, and the access point of each substream can be signaled as an entry point.
- The first processing core may decode the first row 710 of substream 0 and then sequentially decode the second row 740.
- The second processing core may likewise decode the first row 720 of substream 1 and then sequentially decode the second row 750.
- the predetermined area 700 in the picture may be an entire picture or may be a slice, a slice segment, or a tile in the picture.
- FIG. 8 is a diagram schematically illustrating an LCU order in a bitstream according to the example of FIG. 7.
- LCUs may be rearranged for each substream in the bitstream. Reordering of the LCUs may be performed in the reordering unit 125 of FIG. 1, for example.
- substream 1 and substream 2 are transmitted following substream 0.
- The access point of substream 0 may be indicated by entry point 810, the access point of substream 1 by entry point 820, and the access point of substream 2 by entry point 830.
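As a rough illustration, the entry points can be thought of as cumulative byte offsets of the concatenated substreams in the bitstream. This is a Python sketch; the byte sizes are made up for the example.

```python
def entry_points(substream_sizes):
    """Byte offsets (entry points) of each substream in the bitstream,
    given the encoded size of each substream in bytes (illustrative)."""
    points, offset = [], 0
    for size in substream_sizes:
        points.append(offset)   # substream starts where the previous one ended
        offset += size
    return points

# Three substreams of 120, 95 and 88 bytes:
entry_points([120, 95, 88])
# → [0, 120, 215]
```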
- the first processing core decodes substream 0
- the second processing core decodes substream 1
- the third processing core decodes substream 2.
- the first processing core decodes from the 0th LCU to the 5th LCU and then from the 18th LCU to the 23rd LCU.
- the second processing core decodes from the 6th LCU to the 11th LCU and then from the 24th LCU to the 29th LCU.
- When the LCUs are rearranged by substream in the bitstream as shown in FIG. 8, and a single processing core decodes the bitstream, the complexity of the decoding process may increase or decoding may become difficult.
- FIG. 9 is a diagram schematically illustrating an example of decoding a bitstream according to the example of FIG. 7 using one processing core.
- the reordering of LCUs may be performed, for example, in the reordering unit 125 of FIG. 1.
- The processing core first decodes the 0th LCU to the 5th LCU (1).
- The processing core then decodes the 6th LCU to the 11th LCU (2), and decodes the 12th LCU to the 17th LCU (3). It then moves back toward the front of the bitstream to decode the 18th LCU to the 23rd LCU (4), and decodes the 24th LCU to the 29th LCU (5).
- In other words, decoding is performed while moving back and forth within the bitstream.
- The entry points indicating the access points indicate only the positions 910, 920, 930 of the first LCU of each substream, as shown.
- The processing core needs to access five points (the 0th LCU, the 6th LCU, the 12th LCU, the 18th LCU, and the 24th LCU) while moving back and forth, but only three entry points indicating access points are transmitted, so a problem occurs.
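This mismatch can be illustrated with a small sketch that replays raster-order decoding over the bitstream order of FIG. 8 and counts where the core must jump to a non-consecutive bitstream position. The code is illustrative only; the LCU numbering follows FIG. 7.

```python
# Bitstream order of LCUs per FIG. 8: substream 0's rows, then
# substream 1's rows, then substream 2 (numbers from FIG. 7).
bitstream_order = (
    list(range(0, 6)) + list(range(18, 24))      # substream 0
    + list(range(6, 12)) + list(range(24, 30))   # substream 1
    + list(range(12, 18))                        # substream 2
)

def access_points(bitstream_order):
    """LCU indices where raster-order decoding must (re)enter the
    bitstream at a non-consecutive position."""
    pos = {lcu: i for i, lcu in enumerate(bitstream_order)}
    jumps = [0]  # decoding always starts at LCU 0
    for lcu in range(1, len(bitstream_order)):
        if pos[lcu] != pos[lcu - 1] + 1:  # next raster LCU is not adjacent in the bitstream
            jumps.append(lcu)
    return jumps

access_points(bitstream_order)
# → [0, 6, 12, 18, 24]: five access points are needed, but only
#   three entry points (one per substream) are signalled.
```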
- In a situation in which the types and demands of video services are diversifying, various types of devices may be used as a decoding device (decoder) that performs video decoding. In other words, the same video stream may in some cases be decoded using a plurality of processing cores and in other cases using a single processing core.
- maximizing the number of substreams that are units of decoding processing means that one LCU row becomes one substream in a picture.
- Accordingly, in the present invention, the substreams are configured such that the number of substreams is equal to the number of LCU rows. If the number of substreams is equal to the number of LCU rows, the order of the LCUs in the bitstream may be the raster-scan order.
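With one substream per LCU row, concatenating the substreams reproduces plain raster-scan order, which a single core can read without jumping. A minimal Python sketch (function name illustrative):

```python
def bitstream_order_one_row_per_substream(num_rows, lcus_per_row):
    """When each LCU row is its own substream, concatenating the
    substreams yields the LCUs in plain raster-scan order."""
    order = []
    for r in range(num_rows):
        order += range(r * lcus_per_row, (r + 1) * lcus_per_row)
    return order

order = bitstream_order_one_row_per_substream(5, 6)
assert order == list(range(30))  # raster scan: LCUs 0..29 in sequence
```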
- FIG. 10 is a diagram schematically illustrating an example in which each LCU row is made into a substream according to the present invention.
- In FIG. 10, a case where a predetermined region 1000 in a picture is composed of five LCU rows is described as an example.
- Substream 0 is the first LCU row, consisting of the 0th LCU to the 5th LCU.
- Substream 1 1020 is the second LCU row, consisting of the 6th LCU to the 11th LCU.
- Substream 2 1030 is the third LCU row, consisting of the 12th LCU to the 17th LCU.
- Substream 3 1040 is the fourth LCU row, consisting of the 18th LCU to the 23rd LCU.
- Substream 4 1050 is the fifth LCU row, consisting of the 24th LCU to the 29th LCU.
- That is, each LCU row is one substream.
- the predetermined region 1000 in the picture may be an entire picture, or may be a slice, a slice segment, or a tile in the picture.
- FIG. 11 is a diagram schematically illustrating LCUs in an aligned bitstream, in accordance with the present invention.
- In FIG. 11, a case in which the substreams according to FIG. 10 are rearranged in the bitstream is described as an example.
- The bitstream is arranged in the order of the first substream, the second substream, the third substream, the fourth substream, and the fifth substream.
- the processing core decodes the first substream (1), decodes the second substream (2), then decodes the third substream (3), decodes the fourth substream (4), and finally Decode the fifth substream (5).
- the decoding apparatus may sequentially decode the bitstream.
- Since each LCU row is one substream, a padding bit for byte alignment may be added for each LCU row, that is, for each substream.
- a predetermined change may be applied to the picture parameter set (PPS) and the slice header.
- a predetermined syntax element may be signaled to specify the number of substreams.
- a syntax element such as num_substream_minus1 may be transmitted to specify the number of substreams in a picture.
- the number of substreams may be specified by the number of LCU rows or may be specified by the number of entry points.
- the number of substreams in a picture may be specified by the number of LCU rows in a picture or by the number of entry points in a picture.
- the number of entry points may be the number of entry point offsets + 1, and the number of entry point offsets may be signaled in the slice header.
- the number of entry points may be signaled by a predetermined syntax element.
- the number of substreams in the slice may be equal to the number of entry points in the slice.
- the number of substreams in the slice segment may be equal to the number of entry points in the slice segment.
- the number of entry points may be specified by signaling the number of entry point offsets instead of directly signaling the number of entry points.
- the first entry point may be signaled and the second entry point may be specified by sending an offset between the first entry point and the second entry point.
- the third entry point may then be specified by the second entry point and the offset between the second entry point and the third entry point.
- the n th entry point may be specified by the previous entry point and the offset, and the number of the entry points may be specified by the number of offsets.
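The chained derivation above, in which each entry point is recovered from the previous one plus an offset, can be sketched as follows (a hypothetical helper; the byte values in the usage line are made up for illustration).

```python
# Sketch (illustrative): recovering entry points when only the first entry
# point and the offsets between consecutive entry points are signaled.
def entry_points(first_entry_point, offsets):
    """The n-th entry point is the previous entry point plus the n-th offset,
    so the number of entry points is the number of offsets plus one."""
    points = [first_entry_point]
    for off in offsets:
        points.append(points[-1] + off)
    return points

# With 3 signaled offsets there are 4 entry points (hence 4 substreams here).
print(entry_points(0, [100, 80, 120]))   # [0, 100, 180, 300]
```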
- In this case, the number of entry points is equal to the number of substreams, that is, the number of LCU rows, and thus a syntax element for specifying the number of entry point offsets (that is, a syntax element for indicating the number of entry points) need not be transmitted.
- the number of substreams may be specified through information that may specify the number of entry points, for example, a syntax element that specifies the number of entry point offsets.
- For example, the number of entry point offsets may be specified. If the syntax element referred to is num_entry_point_offset, and the number of entry point offsets specified by num_entry_point_offset is n, then the number of entry points, that is, the number of substreams and the number of LCU rows, is n + 1.
- Table 1 shows an example of a slice header signaling the number of entry point offsets.
- the number of entry point offsets in the slice header may be transmitted.
- num_entry_point_offsets specifies the number of entry point offsets in the slice header, and the number of entry points in the slice header is one greater than the number of entry point offsets in the slice header.
- num_entry_point_offsets specifies the number of entry_point_offset_minus1[i] syntax elements in the slice header. If num_entry_point_offsets is not present in the slice header, its value may be inferred to be 0.
- num_entry_point_offsets may have a value from 0 to the picture height in CTB units (that is, the number of CTBs in the height direction of the picture, PicHeightInCtbs) - 1, inclusive.
- num_entry_point_offsets may have a value from 0 to (the number of tile columns (num_tile_columns_minus1 + 1) * the number of tile rows (num_tile_rows_minus1 + 1)) - 1, inclusive.
- num_entry_point_offsets may have a value from 0 to (the number of tile columns (num_tile_columns_minus1 + 1) * the picture height in CTB units (that is, the number of CTBs in the height direction of the picture, PicHeightInCtbs)) - 1, inclusive.
- entry_point_offset_minus1[i] is a syntax element whose number is specified by num_entry_point_offsets; entry_point_offset_minus1[i] + 1 means the offset with respect to the i-th entry point.
- When num_entry_point_offsets specifies the number of entry point offsets in a slice segment, the slice segment data following the slice segment header may be composed of num_entry_point_offsets + 1 substreams.
- The substream index specifying a substream in the slice segment may have a value from 0 to num_entry_point_offsets, inclusive.
- Substream 0, the 0th substream, may be composed of bytes 0 to entry_point_offset_minus1[0] of the slice segment data.
- For substream k, iniByte[k] and finByte[k] may be defined as in Equation 1.
- <Equation 1> finByte[k] = iniByte[k] + entry_point_offset_minus1[k]
- the number of substreams (ie, num_entry_point_offsets + 1) becomes equal to the number of LCU rows in the slice segment.
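Equation 1 and the substream byte ranges above can be sketched as follows. Only finByte[k] = iniByte[k] + entry_point_offset_minus1[k] is given; the rule that substream k + 1 begins at the byte after finByte[k] is an assumption consistent with the surrounding text.

```python
# Sketch of the byte ranges implied by Equation 1. The accumulation rule for
# iniByte[k] is an assumption; only finByte[k] = iniByte[k] +
# entry_point_offset_minus1[k] is stated in the text.
def substream_byte_ranges(entry_point_offset_minus1):
    ranges = []
    ini = 0
    for off_minus1 in entry_point_offset_minus1:
        fin = ini + off_minus1          # Equation 1: finByte[k]
        ranges.append((ini, fin))       # substream k spans bytes ini..fin inclusive
        ini = fin + 1                   # assumed: next substream starts right after
    return ranges

# Two offsets -> byte ranges of substreams 0 and 1; the last substream
# (index num_entry_point_offsets) runs to the end of the slice segment data.
print(substream_byte_ranges([99, 79]))   # [(0, 99), (100, 179)]
```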
- the number of entry point offsets is simply transmitted in the slice header.
- the slice header in which the number of entry point offsets is transmitted may be a slice segment header.
- In contrast to indicating with a single indicator whether tiles or WPP are applied, whether tiles are applied for parallel decoding or WPP is applied may be transmitted in separate syntax elements. For example, whether WPP is applied may be transmitted through a flag.
- Table 2 briefly illustrates a case in which information about the number of entry point offsets described in Table 1 is transmitted in a slice segment header.
- num_entry_point_offsets specifies the number of entry point offsets in the slice segment header, and the number of entry points in the slice segment header is one greater than the number of entry point offsets in the slice segment header.
- the number of substreams or the number of LCU rows may be specified through the number of entry points (number of entry point offsets).
- Unlike the above, information indicating whether the LCUs are rearranged may be signaled from the encoding device to the decoding device. For example, whether the LCUs (CTUs) are rearranged may be indicated through ctb_reordering_flag.
- When the value of ctb_reordering_flag is false, information indicating the number of substreams (for example, num_substream_minus1) is not present in the PPS.
- In this case, the number of substreams in a slice is equal to the number of entry points. That is, the value of num_substream_minus1 is assumed to be equal to the value indicated by num_entry_point_offset, which specifies the number of entry point offsets.
- When the value of ctb_reordering_flag is true, information indicating the number of substreams (e.g., num_substream_minus1) is present in the PPS.
- both pictures coded with a single WPP substream and pictures coded with multiple WPP substreams can be decoded using a single processing core or multiple processing cores. However, there may be a difference in ease of decoding.
- If the value of ctb_reordering_flag is false, it may be determined that it is better to use a single processing core for decoding the bitstream. If the value of ctb_reordering_flag is true, it may be determined that it is preferable to use multiple processing cores for decoding the bitstream.
- Table 3 shows an example of the modified PPS according to the present invention.
- When the value of ctb_reordering_flag is 1, a syntax element indicating the number of substreams (e.g., num_substreams_minus1) is present, and the flag indicates that the coded treeblocks within the bitstream may not be arranged in raster scan order.
- When the value of ctb_reordering_flag is 0, the syntax element indicating the number of substreams (e.g., num_substreams_minus1) is not present, and the flag indicates that the coded treeblocks within the bitstream are arranged in raster scan order.
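The flag semantics above suggest a simple configuration hint for a decoder. The following sketch is a hypothetical helper, not part of the patent or of any codec API.

```python
# Sketch (hypothetical helper): using ctb_reordering_flag as a hint for
# choosing a decoding configuration, per the preference described above.
def preferred_core_count(ctb_reordering_flag, available_cores):
    if ctb_reordering_flag:
        # CTBs may be rearranged per substream: parallel decoding is favoured.
        return available_cores
    # CTBs are in raster-scan order: a single core can decode sequentially.
    return 1

print(preferred_core_count(True, 4))    # 4
print(preferred_core_count(False, 4))   # 1
```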
- the number of substreams in a slice is equal to the number of entry points, and the number of entry points may be specified by the number of entry point offsets. For example, the number of entry points may be one greater than the number of entry point offsets.
- In this case, the semantics of the syntax element num_entry_point_offset, which indicates the number of entry point offsets in the slice header, may be changed.
- num_entry_point_offset transmitted in the slice header specifies the number of entry point offsets in the slice header.
- The number of entry point offsets may have a value from 0 to (the number of tile columns (num_tile_columns_minus1 + 1) * the number of tile rows (num_tile_rows_minus1 + 1)) - 1, inclusive.
- Alternatively, the number of entry point offsets may have a value from 0 to the number of substreams - 1 (num_substreams_minus1), inclusive.
- Alternatively, the number of entry point offsets may have a value from 0 to the picture height in LCU units - 1 (PicHeightInCtbs - 1), inclusive.
- If num_entry_point_offset is not present, its value may be inferred to be 0.
- FIG. 12 is a flowchart schematically illustrating a video encoding method according to the present invention.
- the encoding apparatus encodes the input video (S1210).
- a detailed method of video encoding performed by the encoding apparatus is as described with reference to FIG. 1.
- the encoding apparatus may encode substreams that are rows of LCUs (Largest Coding Units) in parallel.
- Encoding is performed on the nth (n is an integer) substream, and after encoding of the second CTU or LCU of the nth substream is completed, encoding of the (n+1)th substream may proceed.
- For example, the first LCU of the (n+1)th substream may be entropy-encoded based on the context information for the second LCU of the nth substream.
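The wavefront dependency described above can be sketched as a readiness check: an LCU may be coded once its left neighbour is done and the row above is two LCUs ahead. The function and the progress-list layout are illustrative assumptions, not the patent's interface.

```python
# Sketch (illustrative): WPP readiness check. progress[r] holds the number of
# LCUs already finished in row r (one row per substream).
def can_process(row, col, progress):
    """An LCU can be coded when its left neighbour in the same row is done
    and the row above is at least two LCUs ahead (col + 2 finished)."""
    left_ready = progress[row] == col
    above_ready = row == 0 or progress[row - 1] >= col + 2
    return left_ready and above_ready

progress = [2, 0]                    # row 0 has finished two LCUs, row 1 none
print(can_process(1, 0, progress))   # True: row 1 may start now
print(can_process(1, 1, progress))   # False: row 0 must reach three first
```

In particular, the first LCU of row n + 1 becomes ready exactly when the second LCU of row n has been completed, matching the rule stated above.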
- the number of substreams in a picture, a slice, or a slice segment is equal to the number of LCU rows.
- the number of substreams may be equal to the number of entry points.
- the number of entry points may be specified by the number of entry point offsets.
- the number of entry points may have a value one greater than the number of entry point offsets.
- the encoding device may signal the encoded video information in the bitstream (S1220).
- the bitstream may include information specifying the number of entry point offsets.
- Information specifying the number of entry point offsets may be transmitted in the PPS, or may be transmitted in a slice header or slice segment header.
- a bit for byte alignment may be added at the end of each substream, so that byte alignment may be performed in units of substreams.
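The per-substream byte alignment can be sketched as follows. The exact padding pattern (a stop bit followed by zero bits) is an assumption borrowed from common bitstream practice; the text above only states that bits are added per substream until byte alignment.

```python
# Sketch: padding one substream's bit string to a byte boundary. The padding
# pattern (1 then 0s) is an assumption, not taken from the patent text.
def byte_align(bits):
    """bits: list of 0/1 values for one substream; pad to a multiple of 8."""
    aligned = list(bits)
    if len(aligned) % 8:
        aligned.append(1)                 # alignment stop bit (assumed)
        while len(aligned) % 8:
            aligned.append(0)             # zero padding bits
    return aligned

out = byte_align([1, 0, 1])
print(len(out) % 8 == 0, out)   # True [1, 0, 1, 1, 0, 0, 0, 0]
```

Because every substream ends on a byte boundary, entry point offsets can be expressed in whole bytes.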
- the access point of each substream may be specified by an entry point.
- The access point of the second substream may be the point obtained by adding the entry point offset to the first entry point.
- FIG. 13 is a flowchart schematically illustrating a video decoding method according to the present invention.
- the decoding apparatus receives video information through a bitstream (S1310).
- the bitstream includes substreams that are rows of Largest Coding Units (LCUs).
- the bitstream may include information specifying the number of entry point offsets. Information specifying the number of entry point offsets may be transmitted in the PPS, or may be transmitted in a slice header or slice segment header.
- a bit for byte alignment may be added at the end of each substream, so that byte alignment may be performed in units of substreams.
- the decoding apparatus may decode the received video information in operation S1320.
- the decoding apparatus may perform parallel decoding for each tile or for each substream.
- Decoding is performed on the nth (n is an integer) substream, and after decoding of the second CTU or LCU of the nth substream is completed, decoding of the (n+1)th substream may proceed.
- When entropy decoding of the second LCU of the nth substream is completed, the relevant context information is stored, and the first LCU of the (n+1)th substream may be entropy-decoded based on the context information for the second LCU of the nth substream.
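The context hand-off in the decoding step can be emulated sequentially as below; decode_lcu and init_ctx are placeholder interfaces standing in for the real entropy-decoding engine, not the patent's API.

```python
# Sketch (placeholder interfaces): sequential emulation of the context
# hand-off described above. decode_lcu(ctx, lcu) -> (new_ctx, decoded) and
# init_ctx() are assumed stand-ins for the real CABAC machinery.
def decode_rows(rows, decode_lcu, init_ctx):
    """rows: substreams as lists of coded LCUs; returns the decoded LCU rows."""
    saved_ctx = None
    decoded = []
    for n, row in enumerate(rows):
        # Row 0 starts from a fresh context; every later row starts from the
        # context saved after the second LCU of the row above it.
        ctx = init_ctx() if n == 0 else saved_ctx
        decoded_row = []
        for i, lcu in enumerate(row):
            ctx, dec = decode_lcu(ctx, lcu)
            decoded_row.append(dec)
            if i == 1:            # second LCU finished: store context for
                saved_ctx = ctx   # the first LCU of the next substream
        decoded.append(decoded_row)
    return decoded
```

A parallel decoder would run one row per core under the same hand-off rule; the sequential version above only makes the dependency explicit.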
- the access point of each substream may be specified by an entry point.
- The access point of the second substream may be the point obtained by adding the entry point offset to the first entry point.
Claims (18)
- 1. A video decoding method comprising: receiving a bitstream including substreams that are rows of LCUs (Largest Coding Units); and decoding the substreams in parallel, wherein the number of the substreams is equal to the number of the rows of the LCUs.
- 2. The video decoding method of claim 1, wherein in the decoding step, after decoding of the second LCU of a previous substream is completed, decoding of the first LCU of a current substream is initiated based on context information for the second LCU of the previous substream.
- 3. The video decoding method of claim 1, wherein the number of the substreams is not indicated by a syntax element but is specified by the number of entry point offsets.
- 4. The video decoding method of claim 3, wherein the number of the substreams is one greater than the number of the entry point offsets.
- 5. The video decoding method of claim 1, wherein one substream corresponds to one LCU row.
- 6. The video decoding method of claim 1, wherein the bitstream includes bits for byte alignment, and the bits for byte alignment are added at the end of each of the substreams so that each of the substreams is byte-aligned.
- 7. The video decoding method of claim 1, wherein the bitstream includes information indicating whether the LCUs of the substreams are rearranged.
- 8. A video encoding method comprising: encoding, in parallel, substreams that are rows of LCUs (Largest Coding Units); and transmitting a bitstream including the encoded substreams, wherein the number of the substreams is equal to the number of the rows of the LCUs.
- 9. The video encoding method of claim 8, wherein in the encoding step, after encoding of the second LCU of a previous substream is completed, encoding of the first LCU of a current substream is initiated based on context information for the second LCU of the previous substream.
- 10. The video encoding method of claim 8, wherein the number of the substreams is not specified by a syntax element but is specified by the number of entry point offsets.
- 11. The video encoding method of claim 10, wherein the number of the substreams is one greater than the number of the entry point offsets.
- 12. The video encoding method of claim 8, wherein one substream corresponds to one LCU row.
- 13. The video encoding method of claim 8, wherein the bitstream includes bits for byte alignment, and the bits for byte alignment are added at the end of each of the substreams so that each of the substreams is byte-aligned.
- 14. The video encoding method of claim 8, wherein the bitstream includes information indicating whether the LCUs of the substreams are rearranged.
- 15. A video decoding apparatus that decodes, in parallel, a bitstream including substreams that are rows of LCUs (Largest Coding Units), wherein the number of the substreams is equal to the number of entry points.
- 16. The video decoding apparatus of claim 15, wherein, for the substreams in the bitstream, after decoding of the second LCU of a previous substream is completed, decoding of the first LCU of a current substream is initiated based on context information for the second LCU of the previous substream.
- 17. A video encoding apparatus that encodes, in parallel, substreams that are rows of LCUs (Largest Coding Units) and transmits a bitstream including the encoded substreams, wherein the number of the substreams is equal to the number of entry points.
- 18. The video encoding apparatus of claim 17, wherein, for encoding of a substream included in the bitstream, after encoding of the second LCU of a previous substream is completed, encoding of the first LCU of a current substream is initiated based on context information for the second LCU of the previous substream.
Priority Applications (12)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020147021858A KR102132784B1 (ko) | 2012-03-22 | 2013-03-22 | Video encoding method, video decoding method and apparatus using the same |
KR1020217004595A KR102261939B1 (ko) | 2012-03-22 | 2013-03-22 | Video encoding method, video decoding method and apparatus using the same |
KR1020227032496A KR102489001B1 (ko) | 2012-03-22 | 2013-03-22 | Video encoding method, video decoding method and apparatus using the same |
KR1020217032395A KR102361012B1 (ko) | 2012-03-22 | 2013-03-22 | Video encoding method, video decoding method and apparatus using the same |
KR1020217016821A KR102312989B1 (ko) | 2012-03-22 | 2013-03-22 | Video encoding method, video decoding method and apparatus using the same |
KR1020207019399A KR102219089B1 (ko) | 2012-03-22 | 2013-03-22 | Video encoding method, video decoding method and apparatus using the same |
KR1020227003996A KR102447003B1 (ko) | 2012-03-22 | 2013-03-22 | Video encoding method, video decoding method and apparatus using the same |
US14/387,002 US9955178B2 (en) | 2012-03-22 | 2013-03-22 | Method for encoding and decoding tiles and wavefront parallel processing and apparatus using same |
US15/925,169 US10218993B2 (en) | 2012-03-22 | 2018-03-19 | Video encoding method, video decoding method and apparatus using same |
US16/245,778 US10708610B2 (en) | 2012-03-22 | 2019-01-11 | Method for encoding and decoding in parallel processing and apparatus using same |
US16/886,400 US11202090B2 (en) | 2012-03-22 | 2020-05-28 | Method for encoding and decoding tiles and wavefront parallel processing and apparatus using same |
US17/518,984 US11838526B2 (en) | 2012-03-22 | 2021-11-04 | Method for encoding and decoding substreams and wavefront parallel processing, and apparatus using same |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261614504P | 2012-03-22 | 2012-03-22 | |
US61/614,504 | 2012-03-22 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/387,002 A-371-Of-International US9955178B2 (en) | 2012-03-22 | 2013-03-22 | Method for encoding and decoding tiles and wavefront parallel processing and apparatus using same |
US15/925,169 Continuation US10218993B2 (en) | 2012-03-22 | 2018-03-19 | Video encoding method, video decoding method and apparatus using same |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013141665A1 true WO2013141665A1 (ko) | 2013-09-26 |
Family
ID=49223024
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2013/002424 WO2013141665A1 (ko) | 2012-03-22 | 2013-03-22 | Video encoding method, video decoding method and apparatus using the same |
Country Status (3)
Country | Link |
---|---|
US (5) | US9955178B2 (ko) |
KR (7) | KR102312989B1 (ko) |
WO (1) | WO2013141665A1 (ko) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015122550A1 (ko) * | 2014-02-12 | 2015-08-20 | Chips & Media, Inc. | Method and apparatus for processing video |
KR20180004186A (ko) * | 2014-02-12 | 2018-01-10 | Chips & Media, Inc. | Method and apparatus for processing video |
WO2020071829A1 (ko) * | 2018-10-04 | 2020-04-09 | LG Electronics Inc. | History-based image coding method and apparatus |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9955178B2 (en) | 2012-03-22 | 2018-04-24 | Lg Electronics Inc. | Method for encoding and decoding tiles and wavefront parallel processing and apparatus using same |
BR122015024098B1 (pt) * | 2013-01-04 | 2020-12-29 | Samsung Electronics Co Ltd | método de decodificação de vídeo |
US11438609B2 (en) * | 2013-04-08 | 2022-09-06 | Qualcomm Incorporated | Inter-layer picture signaling and related processes |
US9736488B2 (en) * | 2013-11-27 | 2017-08-15 | Nxp Usa, Inc. | Decoding for high efficiency video transcoding |
EP4422175A3 (en) * | 2018-09-14 | 2024-11-06 | Huawei Technologies Co., Ltd. | Slicing and tiling in video coding |
US11019359B2 (en) * | 2019-01-15 | 2021-05-25 | Tencent America LLC | Chroma deblock filters for intra picture block compensation |
JP2022526023A (ja) * | 2019-04-10 | 2022-05-20 | ホアウェイ・テクノロジーズ・カンパニー・リミテッド | エンコーダ、デコーダ、および対応する方法 |
CA3136342A1 (en) * | 2019-05-03 | 2020-11-12 | Fnu HENDRY | An encoder, a decoder and corresponding methods |
KR20210055278A (ko) | 2019-11-07 | 2021-05-17 | LINE Plus Corporation | Hybrid video coding method and system |
US12015796B2 (en) | 2019-11-14 | 2024-06-18 | Lg Electronics Inc. | Image coding method on basis of entry point-related information in video or image coding system |
WO2021118076A1 (ko) * | 2019-12-12 | 2021-06-17 | LG Electronics Inc. | Image coding method based on partial entry point-related information in a video or image coding system |
US20230328261A1 (en) * | 2020-09-24 | 2023-10-12 | Lg Electronics Inc. | Media file processing method and device therefor |
CN116406505A (zh) * | 2020-09-24 | 2023-07-07 | Lg电子株式会社 | 媒体文件处理方法和装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080002649A1 (en) * | 2006-06-28 | 2008-01-03 | Pengfei Xia | System and method for digital communications using multiple parallel encoders |
EP2381685A1 (en) * | 2010-04-13 | 2011-10-26 | Research In Motion Limited | Methods and devices for load balancing in parallel entropy coding and decoding |
US20110274162A1 (en) * | 2010-05-04 | 2011-11-10 | Minhua Zhou | Coding Unit Quantization Parameters in Video Coding |
US20120014429A1 (en) * | 2010-07-15 | 2012-01-19 | Jie Zhao | Methods and Systems for Parallel Video Encoding and Parallel Video Decoding |
WO2012008608A1 (en) * | 2010-07-15 | 2012-01-19 | Sharp Kabushiki Kaisha | Parallel video coding based on prediction type |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8615039B2 (en) * | 2009-05-21 | 2013-12-24 | Microsoft Corporation | Optimized allocation of multi-core computation for video encoding |
US9215473B2 (en) * | 2011-01-26 | 2015-12-15 | Qualcomm Incorporated | Sub-slices in video coding |
FR2972588A1 (fr) * | 2011-03-07 | 2012-09-14 | France Telecom | Procede de codage et decodage d'images, dispositif de codage et decodage et programmes d'ordinateur correspondants |
US9338465B2 (en) * | 2011-06-30 | 2016-05-10 | Sharp Kabushiki Kaisha | Context initialization based on decoder picture buffer |
CN108600753B (zh) * | 2011-12-29 | 2020-10-27 | Lg 电子株式会社 | 视频编码和解码方法和使用该方法的装置 |
US9749661B2 (en) * | 2012-01-18 | 2017-08-29 | Qualcomm Incorporated | Sub-streams for wavefront parallel processing in video coding |
KR102210228B1 (ko) * | 2012-01-20 | 2021-02-01 | 지이 비디오 컴프레션, 엘엘씨 | 병렬 처리, 전송 디멀티플렉서 및 비디오 비트스트림을 허용하는 코딩 개념 |
US9955178B2 (en) * | 2012-03-22 | 2018-04-24 | Lg Electronics Inc. | Method for encoding and decoding tiles and wavefront parallel processing and apparatus using same |
-
2013
- 2013-03-22 US US14/387,002 patent/US9955178B2/en active Active
- 2013-03-22 KR KR1020217016821A patent/KR102312989B1/ko active IP Right Grant
- 2013-03-22 WO PCT/KR2013/002424 patent/WO2013141665A1/ko active Application Filing
- 2013-03-22 KR KR1020207019399A patent/KR102219089B1/ko active IP Right Grant
- 2013-03-22 KR KR1020227003996A patent/KR102447003B1/ko active IP Right Grant
- 2013-03-22 KR KR1020227032496A patent/KR102489001B1/ko active IP Right Grant
- 2013-03-22 KR KR1020217032395A patent/KR102361012B1/ko active IP Right Grant
- 2013-03-22 KR KR1020217004595A patent/KR102261939B1/ko active IP Right Grant
- 2013-03-22 KR KR1020147021858A patent/KR102132784B1/ko active IP Right Grant
-
2018
- 2018-03-19 US US15/925,169 patent/US10218993B2/en active Active
-
2019
- 2019-01-11 US US16/245,778 patent/US10708610B2/en active Active
-
2020
- 2020-05-28 US US16/886,400 patent/US11202090B2/en active Active
-
2021
- 2021-11-04 US US17/518,984 patent/US11838526B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080002649A1 (en) * | 2006-06-28 | 2008-01-03 | Pengfei Xia | System and method for digital communications using multiple parallel encoders |
EP2381685A1 (en) * | 2010-04-13 | 2011-10-26 | Research In Motion Limited | Methods and devices for load balancing in parallel entropy coding and decoding |
US20110274162A1 (en) * | 2010-05-04 | 2011-11-10 | Minhua Zhou | Coding Unit Quantization Parameters in Video Coding |
US20120014429A1 (en) * | 2010-07-15 | 2012-01-19 | Jie Zhao | Methods and Systems for Parallel Video Encoding and Parallel Video Decoding |
WO2012008608A1 (en) * | 2010-07-15 | 2012-01-19 | Sharp Kabushiki Kaisha | Parallel video coding based on prediction type |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101895295B1 (ko) * | 2014-02-12 | 2018-09-05 | Chips & Media, Inc. | Method and apparatus for processing video |
US20170013268A1 (en) * | 2014-02-12 | 2017-01-12 | Chips & Media,Inc | Method and apparatus for processing video |
KR20180004186A (ko) * | 2014-02-12 | 2018-01-10 | Chips & Media, Inc. | Method and apparatus for processing video |
KR20180004187A (ko) * | 2014-02-12 | 2018-01-10 | Chips & Media, Inc. | Method and apparatus for processing video |
KR101834237B1 (ko) * | 2014-02-12 | 2018-03-06 | Chips & Media, Inc. | Method and apparatus for processing video |
KR101847899B1 (ko) * | 2014-02-12 | 2018-04-12 | Chips & Media, Inc. | Method and apparatus for processing video |
WO2015122550A1 (ko) * | 2014-02-12 | 2015-08-20 | Chips & Media, Inc. | Method and apparatus for processing video |
KR101895296B1 (ko) * | 2014-02-12 | 2018-09-05 | Chips & Media, Inc. | Method and apparatus for processing video |
US10757431B2 (en) | 2014-02-12 | 2020-08-25 | Chips & Media, Inc | Method and apparatus for processing video |
WO2020071829A1 (ko) * | 2018-10-04 | 2020-04-09 | 엘지전자 주식회사 | 히스토리 기반 영상 코딩 방법 및 그 장치 |
US11025945B2 (en) | 2018-10-04 | 2021-06-01 | Lg Electronics Inc. | History-based image coding method and apparatus |
US11445209B2 (en) | 2018-10-04 | 2022-09-13 | Lg Electronics Inc. | History-based image coding method and apparatus |
US11729414B2 (en) | 2018-10-04 | 2023-08-15 | Lg Electronics Inc. | History-based image coding method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
US20220060730A1 (en) | 2022-02-24 |
KR20220130274A (ko) | 2022-09-26 |
KR20210021122A (ko) | 2021-02-24 |
US20180213245A1 (en) | 2018-07-26 |
US10708610B2 (en) | 2020-07-07 |
KR102132784B1 (ko) | 2020-07-13 |
US11202090B2 (en) | 2021-12-14 |
KR102361012B1 (ko) | 2022-02-09 |
US11838526B2 (en) | 2023-12-05 |
US20190149832A1 (en) | 2019-05-16 |
US20200296400A1 (en) | 2020-09-17 |
KR20210127772A (ko) | 2021-10-22 |
KR20210068620A (ko) | 2021-06-09 |
US9955178B2 (en) | 2018-04-24 |
KR102261939B1 (ko) | 2021-06-07 |
KR102312989B1 (ko) | 2021-10-14 |
US10218993B2 (en) | 2019-02-26 |
US20150055715A1 (en) | 2015-02-26 |
KR20200085921A (ko) | 2020-07-15 |
KR102447003B1 (ko) | 2022-09-22 |
KR102489001B1 (ko) | 2023-01-18 |
KR20140145114A (ko) | 2014-12-22 |
KR20220020439A (ko) | 2022-02-18 |
KR102219089B1 (ko) | 2021-02-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013141665A1 (ko) | Video encoding method, video decoding method and apparatus using the same | |
KR102412470B1 (ko) | Video encoding method, video decoding method and apparatus using the same | |
KR102238127B1 (ko) | Video encoding and decoding method and apparatus using the same | |
CN107734346B (zh) | 视频编码方法、视频解码方法和使用其的设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13764991 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20147021858 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14387002 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13764991 Country of ref document: EP Kind code of ref document: A1 |