US20120294353A1 - Apparatus and Method of Sample Adaptive Offset for Luma and Chroma Components - Google Patents
- Publication number
- US20120294353A1 (U.S. application Ser. No. 13/311,953)
- Authority
- US
- United States
- Prior art keywords
- adaptive offset
- sample adaptive
- chroma
- block
- blocks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/186—Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
- H04N19/196—Adaptive coding specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
- H04N19/463—Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
- H04N19/70—Coding characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N19/96—Tree coding, e.g. quad-tree coding
Definitions
- the present invention claims priority to U.S. Provisional Patent Application No. 61/486,504, filed May 16, 2011, entitled “Sample Adaptive Offset for Luma and Chroma Components”, U.S. Provisional Patent Application No. 61/498,949, filed Jun. 20, 2011, entitled “LCU-based Syntax for Sample Adaptive Offset”, and U.S. Provisional Patent Application No. 61/503,870, filed Jul. 1, 2011, entitled “LCU-based Syntax for Sample Adaptive Offset”.
- the present invention is also related to U.S. Non-Provisional patent application Ser. No. 13/158,427, entitled “Apparatus and Method of Sample Adaptive Offset for Video Coding”, filed on Jun. 12, 2011.
- the U.S. Provisional Patent Applications and U.S. Non-Provisional patent application are hereby incorporated by reference in their entireties.
- the present invention relates to video processing.
- the present invention relates to apparatus and method for adaptive in-loop filtering including sample adaptive offset compensation and adaptive loop filter.
- the video data are subject to various processing such as prediction, transform, quantization, deblocking, and adaptive loop filtering.
- certain characteristics of the processed video data may be altered from the original video data due to the operations applied to video data.
- the mean value of the processed video may be shifted. Intensity shift may cause visual impairment or artifacts, which is especially noticeable when the intensity shift varies from frame to frame. Therefore, the pixel intensity shift has to be carefully compensated or restored to reduce the artifacts.
- Some intensity offset schemes have been used in the field. For example, an intensity offset scheme, termed as sample adaptive offset (SAO), classifies each pixel in the processed video data into one of multiple categories according to a context selected.
- the conventional SAO scheme is only applied to the luma component. It is desirable to extend SAO processing to the chroma components as well.
- the SAO scheme usually requires incorporating SAO information in the video bitstream, such as partition information to divide a picture or slice into blocks and the SAO offset values for each block so that a decoder can operate properly.
- the SAO information may take up a noticeable portion of the bitrate of compressed video and it is desirable to develop efficient coding to incorporate the SAO information.
- Adaptive loop filter (ALF) information such as partition information and filter parameters has to be incorporated in the video bitstream so that a decoder can operate properly. Therefore, it is also desirable to develop efficient coding to incorporate the ALF information in the video bitstream.
- a method and apparatus for processing reconstructed video using in-loop filter in a video decoder comprises deriving reconstructed video data from a video bitstream, wherein the reconstructed video data comprises luma component and chroma components; receiving chroma in-loop filter indication from the video bitstream if luma in-loop filter indication in the video bitstream indicates that in-loop filter processing is applied to the luma component; determining chroma in-loop filter information if the chroma in-loop filter indication indicates that the in-loop filter processing is applied to the chroma components; and applying the in-loop filter processing to the chroma components according to the chroma in-loop filter information if the chroma in-loop filter indication indicates that the in-loop filter processing is applied to the chroma components.
- the chroma components may use a single chroma in-loop filter flag or each of the chroma components may use its own chroma in-loop filter flag to control whether the in-loop filter processing is applied.
- An entire picture may share the in-loop filter information. Alternatively, the picture may be divided into blocks and each block uses its own in-loop filter information.
- the in-loop filter information for a current block may be derived from neighboring blocks in order to increase coding efficiency.
- Various aspects of in-loop filter information are taken into consideration for efficient coding, such as the property of quadtree-based partition, boundary conditions of a block, in-loop filter information sharing between luma and chroma components, indexing to a set of in-loop filter information, and prediction of in-loop filter information.
- a method and apparatus for processing reconstructed video using in-loop filter in a video decoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filter is applied to the blocks, are disclosed.
- the method and apparatus comprise deriving reconstructed block from a video bitstream; receiving in-loop filter information from the video bitstream if a current reconstructed block is a new partition; deriving the in-loop filter information from a target block if the current reconstructed block is not said new partition, wherein the current reconstructed block is merged with the target block selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and applying in-loop filter processing to the current reconstructed block using the in-loop filter information.
- a merge flag in the video bitstream may be used for the current block to indicate the in-loop filter information sharing with one of neighboring blocks if more than one neighboring block exists. If only one neighboring block exists, the in-loop filter information sharing is inferred without the need for the merge flag.
- a candidate block may be eliminated from merging with the current reconstructed block so as to increase coding efficiency.
- a method and apparatus for processing reconstructed video using in-loop filter in a corresponding video encoder are disclosed. Furthermore, a method and apparatus for processing reconstructed video using in-loop filter in a corresponding video encoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filter is applied to the blocks, are also disclosed.
- FIG. 1 illustrates a system block diagram of an exemplary video encoder incorporating a reconstruction loop, where the in-loop filter processing includes deblocking filter (DF), sample adaptive offset (SAO) and adaptive loop filter (ALF).
- FIG. 2 illustrates a system block diagram of an exemplary video decoder incorporating a reconstruction loop, where the in-loop filter processing includes deblocking filter (DF), sample adaptive offset (SAO) and adaptive loop filter (ALF).
- FIG. 3 illustrates an example of sample adaptive offset (SAO) coding for current block C using information from neighboring blocks A, D, B and E.
- FIG. 4A illustrates an example of quadtree-based picture partition for sample adaptive offset (SAO) processing.
- FIG. 4B illustrates an example of LCU-based picture partition for sample adaptive offset (SAO) processing.
- FIG. 5A illustrates an example of allowable quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition.
- FIG. 5B illustrates another example of allowable quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition.
- FIG. 5C illustrates an example of unallowable quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition.
- FIG. 6A illustrates an example of allowable quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition.
- FIG. 6B illustrates another example of allowable quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition.
- FIG. 6C illustrates an example of unallowable quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition.
- FIG. 7 illustrates an exemplary syntax design to incorporate a flag in SPS to indicate whether SAO is enabled or disabled for the sequence.
- FIG. 8 illustrates an exemplary syntax design for sao_param( ), where separate SAO information is allowed for the chroma components.
- FIG. 9 illustrates an exemplary syntax design for sao_split_param( ), where syntax sao_split_param( ) includes “component” parameter and “component” indicates either the luma component or one of the chroma components.
- FIG. 10 illustrates an exemplary syntax design for sao_offset_param( ), where syntax sao_offset_param( ) includes “component” as a parameter and “component” indicates either the luma component or one of the chroma components.
- FIG. 11 illustrates an example of quadtree-based picture partition for sample adaptive offset (SAO) type determination.
- FIG. 12A illustrates an example of picture-based sample adaptive offset (SAO), where the entire picture uses same SAO parameters.
- FIG. 12B illustrates an example of LCU-based sample adaptive offset (SAO), where each LCU uses its own SAO parameters.
- FIG. 13 illustrates an example of using a run equal to two for SAO information sharing of the first three LCUs.
- FIG. 14 illustrates an example of using run signals and merge-above flags to encode SAO information sharing.
- FIG. 15 illustrates an example of using run signals, run prediction and merge-above flags to encode SAO information sharing.
- Adaptive Offset: In High Efficiency Video Coding (HEVC), a technique named Adaptive Offset (AO) is introduced to compensate for the offset of reconstructed video, and AO is applied inside the reconstruction loop.
- a method and system for offset compensation is disclosed in U.S. Non-Provisional patent application Ser. No. 13/158,427, entitled “Apparatus and Method of Sample Adaptive Offset for Video Coding”. The method and system classify each pixel into a category and apply intensity shift compensation or restoration to processed video data based on the category of each pixel.
- Adaptive loop filter (ALF) has also been introduced in HEVC to improve video quality.
- ALF applies a spatial filter to reconstructed video inside the reconstruction loop. Both AO and ALF are considered a type of in-loop filter in this disclosure.
- Intra-prediction 110 is responsible for providing prediction data based on video data in the same picture.
- Motion estimation (ME) and motion compensation (MC) 112 are used to provide prediction data based on video data from another picture or pictures.
- Switch 114 selects intra-prediction or inter-prediction data and the selected prediction data are supplied to adder 116 to form prediction errors, also called residues.
- the prediction error is then processed by transformation (T) 118 followed by quantization (Q) 120 .
- the transformed and quantized residues are then coded by entropy coding 122 to form a bitstream corresponding to the compressed video data.
- the bitstream associated with the transform coefficients is then packed with side information such as motion, mode, and other information associated with the image area.
- the side information may also be subject to entropy coding to reduce required bandwidth. Accordingly the data associated with the side information are provided to entropy coding 122 as shown in FIG. 1 .
- When an inter-prediction mode is used, a reference picture or reference pictures have to be reconstructed at the encoder end. Consequently, the transformed and quantized residues are processed by inverse quantization (IQ) 124 and inverse transformation (IT) 126 to recover the residues. The residues are then added back to prediction data 136 at reconstruction (REC) 128 to reconstruct the video data.
- the reconstructed video data may be stored in reference picture buffer 134 and used for prediction of other frames.
- incoming video data undergo a series of processing in the encoding system.
- the reconstructed video data from REC 128 may be subject to intensity shift and other noises due to the series of processing.
- deblocking filter 130, sample adaptive offset (SAO) 131, and adaptive loop filter (ALF) 132 are applied to the reconstructed video data before the reconstructed video data are stored in the reference picture buffer 134 in order to improve video quality.
- the adaptive offset information and adaptive loop filter information may have to be transmitted in the bitstream so that a decoder can properly recover the required information in order to apply the adaptive offset and adaptive loop filter.
- adaptive offset information from AO 131 and adaptive loop filter information from ALF 132 are provided to entropy coding 122 for incorporation into the bitstream.
- the encoder may need to access the original video data in order to derive AO information and ALF information.
- the paths from the input to AO 131 and ALF 132 are not explicitly shown in FIG. 1 .
- FIG. 2 illustrates a system block diagram of an exemplary video decoder including deblocking filter and adaptive loop filter. Since the encoder also contains a local decoder for reconstructing the video data, most decoder components are already used in the encoder, except for the entropy decoder 222 . Furthermore, only motion compensation 212 is required on the decoder side.
- the switch 214 selects intra-prediction or inter-prediction and the selected prediction data are supplied to reconstruction (REC) 128 to be combined with recovered residues.
- entropy decoding 222 is also responsible for entropy decoding of side information and provides the side information to respective blocks.
- intra mode information is provided to intra-prediction 110
- inter mode information is provided to motion compensation 212
- adaptive offset information is provided to SAO 131
- adaptive loop filter information is provided to ALF 132
- residues are provided to inverse quantization 124 .
- the residues are processed by IQ 124 , IT 126 and subsequent reconstruction process to reconstruct the video data.
- reconstructed video data from REC 128 undergo a series of processing including IQ 124 and IT 126 as shown in FIG. 2 and are subject to intensity shift.
- the reconstructed video data are further processed by deblocking filter 130 , sample adaptive offset 131 and adaptive loop filter 132 .
- the in-loop filtering is only applied to the luma component of reconstructed video according to the current HEVC standard. It is beneficial to apply in-loop filtering to chroma components of reconstructed video as well.
- the information associated with in-loop filtering for the chroma components may be sizeable.
- a chroma component typically results in much smaller compressed data than the luma component. Therefore, it is desirable to develop a method and apparatus for applying in-loop filtering to the chroma components efficiently. Accordingly, an efficient method and apparatus of SAO for the chroma components are disclosed.
- an indication is provided for signaling whether in-loop filtering is turned ON or not for chroma components when SAO for the luma component is turned ON. If SAO for the luma component is not turned ON, the SAO for the chroma components is also not turned ON. Therefore, there is no need to provide the indication for signaling whether in-loop filtering is turned ON or not for the chroma components in this case.
- An example of pseudo code for the embodiment mentioned above is shown below:
- If SAO for the luma component is turned ON, a flag is signaled to indicate whether SAO for chroma is turned ON or not. Otherwise, the flag is not signaled.
- The flag may be termed a chroma in-loop filter indication since it can be used for SAO as well as ALF.
- SAO is one example of in-loop filter processing, where the in-loop filter processing may be ALF.
- individual indications are provided for signaling whether in-loop filtering is turned ON or not for chroma components Cb and Cr when SAO for the luma component is turned ON. If SAO for the luma component is not turned ON, the SAO for the two chroma components is also not turned ON. Therefore, there is no need to provide the individual indications for signaling whether in-loop filtering is turned ON or not for the two chroma components in this case.
- An example of pseudo code for the embodiment mentioned above is shown below:
- If SAO for the luma component is turned ON,
    a first flag is signaled to indicate whether SAO for Cb is turned ON or not;
    a second flag is signaled to indicate whether SAO for Cr is turned ON or not.
  Else,
    neither the first flag nor the second flag is signaled.
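The cascaded signaling above can be sketched in a few lines of Python. This is an illustrative model only: `signal_chroma_sao_flags` and its bit-list output are hypothetical stand-ins for the actual entropy-coded bitstream syntax.

```python
def signal_chroma_sao_flags(luma_sao_on, cb_sao_on, cr_sao_on):
    """Model of the cascaded SAO flag signaling: the two chroma flags
    are written only when SAO for the luma component is ON."""
    bits = [int(luma_sao_on)]
    if luma_sao_on:
        bits.append(int(cb_sao_on))  # first flag: SAO for Cb
        bits.append(int(cr_sao_on))  # second flag: SAO for Cr
    return bits

# When luma SAO is off, neither chroma flag is sent, saving two bits.
```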
- FIG. 3 illustrates an example of utilizing neighboring blocks to reduce SAO information.
- Block C is the current block being processed by SAO.
- Blocks B, D, E and A are previously processed neighboring blocks around C, as shown in FIG. 3 .
- the block-based syntax represents the parameters of current processing block.
- a block can be a coding unit (CU), a largest coding unit (LCU), or multiple LCUs.
- a flag can be used to indicate that the current block shares the SAO parameters with neighboring blocks to reduce the rate. If the processing order of blocks is raster scan, the parameters of blocks D, B, E, and A are available when the parameters of block C are encoded. When the block parameters are available from neighboring blocks, these block parameters can be used to encode the current block. The amount of data required to send the flag to indicate SAO parameter sharing is usually much less than that for SAO parameters. Therefore, efficient SAO is achieved. While SAO is used as an example of in-loop filter to illustrate parameter sharing based on neighboring blocks, the technique can also be applied to other in-loop filter such as ALF.
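A minimal sketch of the sharing decision, assuming the neighbor parameters are already decoded and comparable. The dictionary-based encoding below is illustrative and is not the patent's actual entropy coding:

```python
def encode_block_sao(current_params, neighbor_params):
    """If the current block's SAO parameters match a previously coded
    neighbor (e.g. D, B, E or A in FIG. 3), send only a 1-bit share
    flag plus a neighbor index instead of the full parameter set."""
    for idx, params in enumerate(neighbor_params):
        if params == current_params:
            return {"share": 1, "neighbor": idx}
    # no matching neighbor: send the parameters explicitly
    return {"share": 0, "params": current_params}
```

The share flag typically costs far less than a full SAO type index plus offset values, which is where the rate saving comes from.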
- the quadtree-based algorithm can be used to adaptively divide a picture region into four sub-regions to achieve better performance.
- the encoding algorithm for the quadtree-based SAO partition has to be efficiently designed.
- the SAO parameters (SAOP) include SAO type index and offset values of the selected type.
- An exemplary quadtree-based SAO partition is shown in FIGS. 4A and 4B .
- FIG. 4A represents a picture being partitioned using quadtree partition, where each small square corresponds to an LCU.
- the first partition (depth-0 partition) is indicated by split_0( ).
- a value 0 implies no split and a value 1 indicates a split applied.
- the picture consists of twelve LCUs as labeled by P1, P2, . . . , P12 in FIG. 4B.
- In the depth-0 quadtree partition, split_0(1) splits the picture into four regions: upper-left, upper-right, lower-left and lower-right. Since the lower-left and lower-right regions have only one row of blocks, no further quadtree partition is applied. Therefore, depth-1 quadtree partition is only considered for the upper-left and upper-right regions.
- the example in FIG. 4A shows that the upper-left region is not split, as indicated by split_1(0), and the upper-right region is further split into four regions, as indicated by split_1(1). Accordingly, the quadtree partition results in seven partitions labeled as P′0, . . . , P′6 in FIG. 4A.
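The partition count implied by the split flags can be checked with a small sketch. The region ordering and the helper name are assumptions made for illustration; only two-level quadtrees as in FIG. 4A are modeled:

```python
def count_partitions(split0, split1_flags):
    """Count leaf partitions of a two-level quadtree.
    split0: the depth-0 split flag.
    split1_flags: one depth-1 flag per depth-0 region that is still
    large enough to split (here the upper-left and upper-right)."""
    if not split0:
        return 1  # whole picture is one partition
    total = 0
    for flag in split1_flags:
        total += 4 if flag else 1      # split region -> 4 leaves
    total += 4 - len(split1_flags)     # regions too small to split
    return total

# FIG. 4A: split_0(1), split_1(0) for upper-left, split_1(1) for
# upper-right, lower regions unsplittable -> 1 + 4 + 1 + 1 partitions.
```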
- each LCU can be a new partition or merged with other LCUs. If the current LCU is merged, several merge candidates can be selected.
- the syntax design is illustrated as follows:
- If block C is not the first block of the picture,
    Use one flag to indicate whether block C is a new partition.
  Else,
    Block C is inferred as a new partition.
  If block C is a new partition,
    Encode SAO parameters.
  Otherwise,
    If a left neighbor and a top neighbor exist,
      Send a mergeLeftFlag.
      If mergeLeftFlag is true, block C is merged with block A;
      otherwise, block C is merged with block B.
    Else,
      If a left neighbor exists, block C is merged with block A;
      otherwise, block C is merged with block B.
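The decoder-side counterpart of this syntax can be sketched as follows. `read_flag` and `read_params` are hypothetical bitstream-reader callbacks, not real decoder APIs:

```python
def parse_lcu_merge(is_first_block, read_flag, read_params,
                    has_left, has_top):
    """Parse the LCU-based merge syntax for the current block C.
    Returns ("new", params) or ("merge", neighbor), where neighbor
    "A" is the left block and "B" is the top block."""
    if not is_first_block:
        new_partition = bool(read_flag())  # explicit new-partition flag
    else:
        new_partition = True               # first block: inferred
    if new_partition:
        return ("new", read_params())
    if has_left and has_top:
        # mergeLeftFlag selects between left (A) and top (B)
        return ("merge", "A" if read_flag() else "B")
    # only one neighbor exists: merge direction is inferred
    return ("merge", "A" if has_left else "B")
```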
- the relation with neighboring blocks (LCUs) and the properties of quadtree partition are used to reduce the amount of data required to transmit SAO related information.
- the boundary condition of a picture region such as a slice may introduce some redundancy in dependency among neighboring blocks and the boundary condition can be used to reduce the amount of data required to transmit SAO related information.
- the relation among neighboring blocks may also introduce redundancy in dependency among neighboring blocks and the relation among neighboring blocks may be used to reduce the amount of data required to transmit SAO related information.
- An example of redundancy in dependency among neighboring blocks is illustrated in FIGS. 5A-C.
- When blocks D and A are in the same partition and block B is in another partition, blocks A and C will be in different partitions, as shown in FIG. 5A and FIG. 5B.
- the case shown in FIG. 5C is not allowed in quadtree partition. Therefore, the merge-candidate in FIG. 5C is redundant and there is no need to assign a code to represent the merge flag corresponding to FIG. 5C .
- Exemplary pseudo codes to implement the merge algorithm are shown as follows:
- If newPartitionFlag is true,
    Block C is a new partition, as shown in FIG. 5A.
  Otherwise,
    Block C is merged with block B without signaling, as shown in FIG. 5B.
- Either block C is a new partition or block C is merged with block B. Therefore, a single bit for newPartitionFlag is adequate to identify the two cases.
- When blocks D and B are in the same partition and block A is in another partition, blocks B and C will be in different partitions, as shown in FIG. 6A and FIG. 6B.
- the case shown in FIG. 6C is not allowed according to quadtree partition. Therefore, the merge-candidate associated with the case in FIG. 6C is redundant and there is no need to assign a code to represent the merge flag corresponding to FIG. 6C .
- Exemplary pseudo codes to implement the merge algorithm are shown as follows:
- If newPartitionFlag is true,
    Block C is a new partition, as shown in FIG. 6A.
  Otherwise,
    Block C is merged with block A without signaling, as shown in FIG. 6B.
- FIGS. 5A-C and FIGS. 6A-C illustrate two examples of utilizing redundancy in dependency among neighboring blocks to further reduce transmitted data associated with SAO information for the current block.
- the system can take advantage of the redundancy in dependency among neighboring blocks. For example, if blocks A, B and D are in the same partition, then block C cannot be in another partition. Therefore, block C must be in the same partition as A, B, and D and there is no need to transmit an indication of SAO information sharing.
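A sketch of this inference check, under the assumption that each block carries a comparable partition identifier (the identifiers themselves are illustrative):

```python
def needs_share_indication(part_a, part_b, part_d):
    """Quadtree property described above: if blocks A, B and D all
    belong to one partition, block C is forced into the same
    partition, so no SAO-sharing indication needs to be coded."""
    return not (part_a == part_b == part_d)
```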
- An LCU block at the slice boundary can be taken into consideration to reduce the transmitted data associated with SAO information for the current block. For example, if block A does not exist, only one direction can be merged. If block B does not exist, only one direction can be merged as well.
- If both blocks A and B do not exist, there is no need to transmit a flag to indicate block C as a new partition.
- a flag can be used to indicate that current slice uses only one SAO type without any LCU-based signaling.
- If the slice is a single partition, the number of transmitted syntax elements can also be reduced.
- While the LCU is used as a unit of block in the above examples, other block configurations (such as block size and shape) may also be used.
- While the slice is mentioned here as an example of a picture area in which the blocks are grouped to share common information, other picture areas such as a group of slices and a picture may also be used.
- chroma and luma components may share the same SAO information for color video data.
- the SAO information may also be shared between chroma components.
- Cb and Cr may use the partition information of luma so that there is no need to signal the partition information for the chroma components.
- Cb and Cr may share the same SAO parameters (SAOP) and therefore only one set of SAOP needs to be transmitted for Cb and Cr to share.
- SAO syntax for luma can be used for chroma components where the SAO syntax may include quadtree syntax and LCU-based syntax.
- the examples of utilizing redundancy in dependency among neighboring blocks as shown in FIGS. 5A-C and FIG. 6A-C to reduce transmitted data associated with SAO information can also be applied to the chroma components.
- the SAOP including SAO type and SAO offset values of the selected type can be coded before partitioning information, and therefore an SAO parameter set (SAOPS) can be formed. Accordingly, indexing can be used to identify SAO parameters from the SAOPS for the current block where the data transmitted for the index is typically less than the data transmitted for the SAO parameters.
- when the partition information is encoded, the selection among the SAOPS can be encoded at the same time. The number of SAOPS can be increased dynamically.
- when a block uses a new SAOP that is not yet in the SAOPS, the number of SAOP in the SAOPS will be increased by one.
- the number of bits can be dynamically adjusted to match the data range. For example, three bits are required to represent SAOPS having five to eight members.
- the number of SAOPS will grow to nine and four bits will be needed to represent the SAOPS having nine members.
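- The dynamically adjusted index width described above follows a ceiling-of-log2 rule. A minimal sketch (the function name is an assumption, not patent terminology):

```python
def saops_index_bits(num_members):
    """Bits needed to index a SAOPS with num_members entries (illustrative).

    Matches the examples in the text: five to eight members need three
    bits, and a ninth member pushes the width to four bits.
    """
    bits = 1
    while (1 << bits) < num_members:
        bits += 1
    return bits
```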
- SAO parameters can be transmitted in a predicted form, such as the difference between SAO parameters for a current block and the SAO parameters for a neighboring block or neighboring blocks.
- Another embodiment according to the present invention is to reduce SAO parameters for chroma.
- Edge-based Offset (EO) classification classifies each pixel into four categories for the luma component.
- the number of EO categories for the chroma components can be reduced to two to reduce the transmitted data associated with SAO information for the current block.
- the number of bands for band offset (BO) classification is usually sixteen for the luma component. In yet another example, the number of bands for band offset (BO) classification may be reduced to eight for the chroma components.
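- Assuming uniform bands over an 8-bit intensity range (a simplification; the actual band grouping in the reference software may differ), halving the band count for chroma simply drops one bit of pixel intensity from the band index:

```python
def bo_band(pixel, num_bands, bit_depth=8):
    """Band index of a pixel for band offset (BO) classification when the
    intensity range is split into num_bands uniform bands.

    num_bands must be a power of two (e.g., sixteen for luma, eight for
    chroma as suggested above). Illustrative sketch only.
    """
    log2_bands = num_bands.bit_length() - 1  # exact for powers of two
    return pixel >> (bit_depth - log2_bands)
```

With eight bands each band covers twice the intensity range of the sixteen-band case, so fewer offsets need to be transmitted for the chroma components.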
- FIG. 3 illustrates a case that current block C has four merge candidates, i.e., blocks A, B, D and E.
- the number of merge candidates can be reduced if the merge candidates are in the same partition. Accordingly, the number of bits to indicate which merge candidate is selected can be reduced or saved.
- SAO will avoid fetching data from any other slice by skipping the processing of pixels that would require data from other slices.
- a flag may be used to control whether the SAO processing avoids fetching data from any other slice.
- the control flag regarding whether the SAO processing avoids fetching data from any other slice can be incorporated in a sequence level or a picture level.
- the control flag regarding whether the SAO processing avoids fetching data from any other slice can also be shared with the non-crossing slice boundary flag of adaptive loop filter (ALF) or deblocking filter (DF).
- the ON/OFF control of chroma SAO can depend on the luma SAO ON/OFF information.
- the category of chroma SAO can be a subset of luma SAO for a specific SAO type.
- FIG. 7 illustrates an example of incorporating sao_used_flag in the sequence level data, such as Sequence Parameter Set (SPS).
- when sao_used_flag has a value 0, SAO is disabled for the sequence; when sao_used_flag has a value 1, SAO is enabled for the sequence.
- An exemplary syntax for SAO parameters is shown in FIG. 8, where the sao_param( ) syntax can be incorporated in the Adaptation Parameter Set (APS), Picture Parameter Set (PPS) or slice header.
- APS is another picture-level header in addition to the PPS to accommodate parameters that are likely to change from picture to picture.
- when SAO is enabled, the syntax will include split parameter sao_split_param(0, 0, 0, 0) and offset parameter sao_offset_param(0, 0, 0, 0) for the luma component. Furthermore, the syntax also includes SAO flag sao_flag_cb for the Cb component and SAO flag sao_flag_cr for the Cr component. If sao_flag_cb indicates that the SAO for the Cb component is enabled, the syntax will include split parameter sao_split_param(0, 0, 0, 1) and offset parameter sao_offset_param(0, 0, 0, 1) for chroma component Cb.
- FIG. 9 illustrates an exemplary syntax for sao_split_param(rx, ry, Depth, component), where the syntax is similar to a conventional sao_split_param ( ) except that an additional parameter “component” is added, where “component” is used to indicate the luma or one of the chroma components.
- FIG. 10 illustrates an exemplary syntax for sao_offset_param(rx, ry, Depth, component), where the syntax is similar to a conventional sao_offset_param( ) except that an additional parameter “component” is added.
- the syntax includes sao_type_idx [component] [Depth][ry][rx] if the split flag sao_split_flag [component] [Depth][ry][rx] indicates the region is not further split.
- Syntax sao_type_idx [component] [Depth][ry][rx] specification is shown in Table 1.
- the sample adaptive offset (SAO) adopted in HM-3.0 uses a quadtree-based syntax, which divides a picture region into four sub-regions using a split flag recursively, as shown in FIG. 11 .
- Each leaf region has its own SAO parameters (SAOP), where the SAOP includes the information of SAO type and the offset values to be applied for the region.
- FIG. 11 illustrates an example where the picture is divided into seven leaf regions, 1110 through 1170 , where band offset (BO) type SAO is applied to leaf regions 1110 and 1150 , edge offset (EO) type SAO is applied to leaf regions 1130 , 1140 and 1160 , and SAO is turned off for leaf regions 1120 and 1170 .
- FIG. 12A illustrates an example of picture-based SAO.
- FIG. 12B illustrates a block-based SAO, where each region is one LCU and there are fifteen LCUs in the picture.
- for picture-based SAO, the entire picture shares one SAOP.
- it is also possible to use slice-based SAO so that the entire slice or multiple slices share one SAOP.
- for LCU-based SAO, each LCU has its own SAOP, and SAOP 1 through SAOP 15 are used by the fifteen LCUs (LCU 1 through LCU 15) respectively.
- the SAOP of each LCU may be shared by the following LCUs.
- the number of consecutive subsequent LCUs sharing the same SAOP may be indicated by a run signal.
- FIG. 13 illustrates an example where SAOP 1 , SAOP 2 and SAOP 3 are the same.
- the SAOP of the first LCU is SAOP 1, and SAOP 1 is used for the subsequent two LCUs, as indicated by a run equal to two.
- the LCU in a following row according to the raster scan order may share the SAOP of a current LCU.
- a merge-above flag may be used to indicate the case that the current LCU shares the SAOP of the LCU above if the above LCU is available. If the merge-above flag is set to “1”, the current LCU will use the SAOP of the LCU above.
- both SAOP 1 and SAOP 3 are shared by two subsequent LCUs and SAOP 4 is shared by four subsequent LCUs. Accordingly, the run signals for SAOP 1, SAOP 3 and SAOP 4 are 2, 2 and 4 respectively. Since none of them shares its SAOP with the LCUs above, the merge-above syntax has a value 0 for the blocks associated with SAOP 1, SAOP 3 and SAOP 4.
- the run signal of the above LCU can be used as a predictor for the run signal of the current LCU.
- the difference of the two run signals is encoded, where the difference is denoted as d_run as shown in FIG. 15 .
- the run prediction value can be the run of the above LCU group minus the number of LCUs that are prior to the above LCU in the same LCU group.
- the first LCU sharing SAOP 3 has a run value of 2 and the first LCU above also has a run value of 2 (sharing SAOP 1 ).
- d_run for the LCU sharing SAOP 3 has a value of 0.
- the first LCU sharing SAOP 4 has a run value of 4 and the first LCU above also has a run value of 2 (sharing SAOP 3 ). Accordingly, d_run for the LCU sharing SAOP 4 has a value of 2.
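- The run prediction just described can be sketched as follows. The helper names are illustrative (not patent syntax), and the comments mirror the SAOP 3 and SAOP 4 numbers from the example above.

```python
def run_predictor(run_of_above_group, lcus_before_above_in_group):
    """Run of the above LCU group minus the number of LCUs prior to the
    above LCU within that group (illustrative)."""
    return run_of_above_group - lcus_before_above_in_group

def d_run(current_run, predictor):
    """Difference between the current run and its prediction, coded
    instead of the run itself."""
    return current_run - predictor

# SAOP 3: current run 2, the LCU above starts a group with run 2 -> d_run 0.
# SAOP 4: current run 4, the LCU above starts a group with run 2 -> d_run 2.
```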
- the run may be encoded by using an unsigned variable length code (U_VLC), and the difference d_run may be encoded by using a signed variable length code (S_VLC).
- the U_VLC and S_VLC can be k-th order exp-Golomb coding, Golomb-Rice coding, or a binarization process of CABAC coding.
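- As one concrete possibility among those listed (the patent does not mandate this choice), a 0-th order exp-Golomb code can serve as the U_VLC, with the usual signed-to-unsigned mapping for the S_VLC:

```python
def ue_code(value):
    """0-th order exp-Golomb codeword for an unsigned value, as a bit string."""
    body = bin(value + 1)[2:]            # binary representation of value + 1
    return "0" * (len(body) - 1) + body  # prefix of (len(body) - 1) zeros

def se_code(value):
    """Signed value mapped to unsigned (1 -> 1, -1 -> 2, 2 -> 3, ...) and
    then exp-Golomb coded; one possible S_VLC for d_run."""
    mapped = 2 * value - 1 if value > 0 else -2 * value
    return ue_code(mapped)
```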
- a flag may be used to indicate that all SAOPs in the current LCU row are the same as those in the above LCU row.
- a flag, RepeatedRow, for each LCU row can be used to indicate that all SAOPs in this LCU row are the same as those in the above LCU row. If the RepeatedRow flag is equal to 1, no more information needs to be coded; for each LCU in the current LCU row, the related SAOP is copied from the corresponding LCU in the above LCU row. If the RepeatedRow flag is equal to 0, the SAOPs of this LCU row are coded.
- a flag may be used to signal whether RepeatedRow flag is used or not.
- the EnableRepeatedRow flag can be used to indicate whether RepeatedRow flag is used or not.
- the EnableRepeatedRow flag can be signaled at a slice or picture level. If EnableRepeatedRow is equal to 0, the RepeatedRow flag is not coded for each LCU row. If EnableRepeatedRow is equal to 1, the RepeatedRow flag is coded for each LCU row.
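- The RepeatedRow mechanics can be sketched as a small resolver. Function and argument names are assumptions, and actual bitstream parsing is omitted:

```python
def resolve_row_saops(repeated_row_flag, above_row_saops, coded_row_saops):
    """Return the SAOPs of the current LCU row.

    When repeated_row_flag is 1, every LCU copies the SAOP of the LCU in
    the above row; otherwise, the explicitly coded SAOPs are used.
    Illustrative sketch only.
    """
    if repeated_row_flag:
        return list(above_row_saops)
    return coded_row_saops
```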
- since the first LCU row of a picture or a slice has no above row to copy from, the RepeatedRow flag at the first LCU row of a picture or a slice can be saved (i.e., not signaled).
- the RepeatedRow flag of the first LCU row can be saved.
- the RepeatedRow flag of the first LCU row in a slice can be saved; otherwise, the RepeatedRow flag will be signaled.
- the method of saving RepeatedRow flag at the first LCU row of one picture or one slice can also be applied to the case where the EnableRepeatedRow flag is used.
- an embodiment according to the present invention uses a run signal to indicate that all of the SAOPs in the following LCU rows are the same as those in the above LCU row. For example, for N consecutive LCU rows containing the same SAOP, the SAOP and a run signal equal to N−1 are signaled at the first LCU row of the N consecutive repeated LCU rows.
- the maximum and minimum runs of the repeated LCU rows in one picture or slice can be derived and signaled at slice or picture level. Based on the maximum and minimum values, the run number can be coded using a fixed-length code word. The word length of the fixed-length code can be determined according to the maximum and minimum run values and thus can be adaptively changed at slice or picture level.
- the run number in the first LCU row of a picture or a slice is coded.
- a run is coded to indicate the number of LCUs sharing the SAOP.
- the word length can be coded adaptively based on the image width, the coded runs, or the remaining LCUs, or the word length can be fixed based on the image width or be signaled to the decoder.
- if there are N LCUs in an LCU row and the current LCU is the k-th LCU of the row, the maximum number of run is N−1−k.
- accordingly, the word length of the to-be-coded run is floor(log2(N−1−k))+1.
- the maximum and minimum number of run in a slice or picture can be calculated first. Based on the maximum and minimum value, the word length of the fixed-length code can be derived and coded.
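- The fixed-length word length follows from the representable span of run values. A sketch consistent with the floor(log2(...))+1 rule above (the function name is an assumption):

```python
def run_word_length(max_run, min_run=0):
    """Word length of a fixed-length code covering runs in [min_run, max_run].

    Runs are coded as offsets from min_run, so the span max_run - min_run
    determines the bit count: floor(log2(span)) + 1 for span >= 1.
    Illustrative sketch only.
    """
    span = max_run - min_run
    bits = 1
    while (1 << bits) <= span:
        bits += 1
    return bits
```

Deriving the maximum and minimum at the slice or picture level thus lets the word length adapt per slice or picture, as described above.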
- the information for the number of runs and delta-runs can be incorporated at slice level.
- the number of runs, delta-runs or the number of LCUs, NumSaoRun is signaled at slice level.
- the number of LCUs for the current coding SAOP can be specified using the NumSaoRun flag.
- the number of runs and delta-runs or the number of LCUs can be predicted using the number of LCUs in one coding picture.
- the prediction equation is given by: NumSaoRun = sao_num_run_info + NumTBsInPicture.
- NumTBsInPicture is the number of LCUs in one picture and sao_num_run_info is the predicted residual value.
- sao_num_run_info can be coded using a signed or unsigned variable-length code.
- sao_num_run_info may also be coded using a signed or unsigned fixed-length code word.
- Embodiments of the in-loop filter according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both.
- an embodiment of the present invention can be a circuit integrated into a video compression chip or program codes integrated into video compression software to perform the processing described herein.
- An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
- DSP Digital Signal Processor
- the invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
- the software code or firmware code may be developed in different programming languages and different formats or styles.
- the software code may also be compiled for different target platforms.
- different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
Abstract
A method and apparatus for processing reconstructed video using in-loop filter in a video coding system are disclosed. The method uses chroma in-loop filter indication to indicate whether chroma components are processed by in-loop filter when the luma in-loop filter indication indicates that in-loop filter processing is applied to the luma component. An additional flag may be used to indicate whether the in-loop filter processing is applied to an entire picture using same in-loop filter information or each block of the picture using individual in-loop filter information. Various embodiments according to the present invention to increase efficiency are disclosed, wherein various aspects of in-loop filter information are taken into consideration for efficient coding such as the property of quadtree-based partition, boundary conditions of a block, in-loop filter information sharing between luma and chroma components, indexing to a set of in-loop filter information, and prediction of in-loop filter information.
Description
- The present invention claims priority to U.S. Provisional Patent Application No. 61/486,504, filed May 16, 2011, entitled “Sample Adaptive Offset for Luma and Chroma Components”, U.S. Provisional Patent Application No. 61/498,949, filed Jun. 20, 2011, entitled “LCU-based Syntax for Sample Adaptive Offset”, and U.S. Provisional Patent Application No. 61/503,870, filed Jul. 1, 2011, entitled “LCU-based Syntax for Sample Adaptive Offset”. The present invention is also related to U.S. Non-Provisional patent application Ser. No. 13/158,427, entitled “Apparatus and Method of Sample Adaptive Offset for Video Coding”, filed on Jun. 12, 2011. The U.S. Provisional Patent Applications and U.S. Non-Provisional patent application are hereby incorporated by reference in their entireties.
- The present invention relates to video processing. In particular, the present invention relates to apparatus and method for adaptive in-loop filtering including sample adaptive offset compensation and adaptive loop filter.
- In a video coding system, the video data are subject to various processing such as prediction, transform, quantization, deblocking, and adaptive loop filtering. Along the processing path in the video coding system, certain characteristics of the processed video data may be altered from the original video data due to the operations applied to video data. For example, the mean value of the processed video may be shifted. Intensity shift may cause visual impairment or artifacts, which is especially more noticeable when the intensity shift varies from frame to frame. Therefore, the pixel intensity shift has to be carefully compensated or restored to reduce the artifacts. Some intensity offset schemes have been used in the field. For example, an intensity offset scheme, termed as sample adaptive offset (SAO), classifies each pixel in the processed video data into one of multiple categories according to a context selected. The conventional SAO scheme is only applied to the luma component. It is desirable to extend SAO processing to the chroma components as well. The SAO scheme usually requires incorporating SAO information in the video bitstream, such as partition information to divide a picture or slice into blocks and the SAO offset values for each block so that a decoder can operate properly. The SAO information may take up a noticeable portion of the bitrate of compressed video and it is desirable to develop efficient coding to incorporate the SAO information. Besides SAO, adaptive loop filter (ALF) is another type of in-loop filter often applied to the reconstructed video to improve video quality. Similarly, it is desirable to apply ALF to the chroma component as well to improve video quality. Again, ALF information such as partition information and filter parameters has to be incorporated in the video bitstream so that a decoder can operate properly. 
Therefore, it is also desirable to develop efficient coding to incorporate the ALF information in the video bitstream.
- A method and apparatus for processing reconstructed video using in-loop filter in a video decoder are disclosed. The method and apparatus incorporating an embodiment according to the present invention comprises deriving reconstructed video data from a video bitstream, wherein the reconstructed video data comprises luma component and chroma components; receiving chroma in-loop filter indication from the video bitstream if luma in-loop filter indication in the video bitstream indicates that in-loop filter processing is applied to the luma component; determining chroma in-loop filter information if the chroma in-loop filter indication indicates that the in-loop filter processing is applied to the chroma components; and applying the in-loop filter processing to the chroma components according to the chroma in-loop filter information if the chroma in-loop filter indication indicates that the in-loop filter processing is applied to the chroma components. The chroma components may use a single chroma in-loop filter flag or each of the chroma components may use its own chroma in-loop filter flag to control whether the in-loop filter processing is applied. An entire picture may share the in-loop filter information. Alternatively, the picture may be divided into blocks and each block uses its own in-loop filter information. When in-loop filter processing is applied to blocks, the in-loop filter information for a current block may be derived from neighboring blocks in order to increase coding efficiency. Various embodiments according to the present invention to increase coding efficiency are disclosed, wherein various aspects of in-loop filter information are taken into consideration for efficient coding such as the property of quadtree-based partition, boundary conditions of a block, in-loop filter information sharing between luma and chroma components, indexing to a set of in-loop filter information, and prediction of in-loop filter information.
- A method and apparatus for processing reconstructed video using in-loop filter in a video decoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filter is applied to the blocks, are disclosed. The method and apparatus comprise deriving reconstructed block from a video bitstream; receiving in-loop filter information from the video bitstream if a current reconstructed block is a new partition; deriving the in-loop filter information from a target block if the current reconstructed block is not said new partition, wherein the current reconstructed block is merged with the target block selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and applying in-loop filter processing to the current reconstructed block using the in-loop filter information. In order to increase coding efficiency, a merge flag in the video bitstream may be used for the current block to indicate the in-loop filter information sharing with one of neighboring blocks if more than one neighboring block exists. If only one neighboring block exists, the in-loop filter information sharing is inferred without the need for the merge flag. According to the quadtree-partition property and merge information of said one or more candidate blocks, a candidate block may be eliminated from merging with the current reconstructed block so as to increase coding efficiency.
- A method and apparatus for processing reconstructed video using in-loop filter in a corresponding video encoder are disclosed. Furthermore, a method and apparatus for processing reconstructed video using in-loop filter in a corresponding video encoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filter is applied to the blocks, are also disclosed.
-
FIG. 1 illustrates a system block diagram of an exemplary video encoder incorporating a reconstruction loop, where the in-loop filter processing includes deblocking filter (DF), sample adaptive offset (SAO) and adaptive loop filter (ALF). -
FIG. 2 illustrates a system block diagram of an exemplary video decoder incorporating a reconstruction loop, where the in-loop filter processing includes deblocking filter (DF), sample adaptive offset (SAO) and adaptive loop filter (ALF). -
FIG. 3 illustrates an example of sample adaptive offset (SAO) coding for current block C using information from neighboring blocks A, D, B and E. -
FIG. 4A illustrates an example of quadtree-based picture partition for sample adaptive offset (SAO) processing. -
FIG. 4B illustrates an example of LCU-based picture partition for sample adaptive offset (SAO) processing. -
FIG. 5A illustrates an example of allowable quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition. -
FIG. 5B illustrates another example of allowable quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition. -
FIG. 5C illustrates an example of unallowable quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition. -
FIG. 6A illustrates an example of allowable quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition. -
FIG. 6B illustrates another example of allowable quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition. -
FIG. 6C illustrates an example of unallowable quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition. -
FIG. 7 illustrates an exemplary syntax design to incorporate a flag in SPS to indicate whether SAO is enable or disabled for the sequence. -
FIG. 8 illustrates an exemplary syntax design for sao_param( ), where separate SAO information is allowed for the chroma components. -
FIG. 9 illustrates an exemplary syntax design for sao_split_param( ), where syntax sao_split_param( ) includes “component” parameter and “component” indicates either the luma component or one of the chroma components. -
FIG. 10 illustrates an exemplary syntax design for sao_offset_param( ), where syntax sao_offset_param( ) includes “component” as a parameter and “component” indicates either the luma component or one of the chroma components. -
FIG. 11 illustrates an example of quadtree-based picture partition for sample adaptive offset (SAO) type determination. -
FIG. 12A illustrates an example of picture-based sample adaptive offset (SAO), where the entire picture uses same SAO parameters. -
FIG. 12B illustrates an example of LCU-based sample adaptive offset (SAO), where each LCU uses its own SAO parameters. -
FIG. 13 illustrates an example of using a run equal to two for SAO information sharing of the first three LCUs. -
FIG. 14 illustrates an example of using run signals and merge-above flags to encode SAO information sharing. -
FIG. 15 illustrates an example of using run signals, run prediction and merge-above flags to encode SAO information sharing. - In High Efficiency Video Coding (HEVC), a technique named Adaptive Offset (AO) is introduced to compensate the offset of reconstructed video and AO is applied inside the reconstruction loop. A method and system for offset compensation is disclosed in U.S. Non-Provisional patent application Ser. No. 13/158,427, entitled “Apparatus and Method of Sample Adaptive Offset for Video Coding”. The method and system classify each pixel into a category and apply intensity shift compensation or restoration to processed video data based on the category of each pixel. Besides adaptive offset, Adaptive Loop Filter (ALF) has also been introduced in HEVC to improve video quality. ALF applies spatial filter to reconstructed video inside the reconstruction loop. Both AO and ALF are considered as a type of in-loop filter in this disclosure.
- The exemplary encoder shown in
FIG. 1 represents a system using intra/inter-prediction. Intra-prediction 110 is responsible for providing prediction data based on video data in the same picture. For inter-prediction, motion estimation (ME) and motion compensation (MC) 112 are used to provide prediction data based on video data from other picture or pictures. Switch 114 selects intra-prediction or inter-prediction data and the selected prediction data are supplied to adder 116 to form prediction errors, also called residues. The prediction error is then processed by transformation (T) 118 followed by quantization (Q) 120. The transformed and quantized residues are then coded by entropy coding 122 to form a bitstream corresponding to the compressed video data. The bitstream associated with the transform coefficients is then packed with side information such as motion, mode, and other information associated with the image area. The side information may also be subject to entropy coding to reduce required bandwidth. Accordingly, the data associated with the side information are provided to entropy coding 122 as shown in FIG. 1. When an inter-prediction mode is used, a reference picture or reference pictures have to be reconstructed at the encoder end. Consequently, the transformed and quantized residues are processed by inverse quantization (IQ) 124 and inverse transformation (IT) 126 to recover the residues. The residues are then added back to prediction data 136 at reconstruction (REC) 128 to reconstruct video data. The reconstructed video data may be stored in reference picture buffer 134 and used for prediction of other frames. As shown in FIG. 1, incoming video data undergo a series of processing steps in the encoding system. The reconstructed video data from REC 128 may be subject to intensity shift and other noises due to the series of processing.
Accordingly, deblocking filter 130, sample adaptive offset (SAO) 131 and adaptive loop filter (ALF) 132 are applied to the reconstructed video data before the reconstructed video data are stored in the reference picture buffer 134 in order to improve video quality. The adaptive offset information and adaptive loop filter information may have to be transmitted in the bitstream so that a decoder can properly recover the required information in order to apply the adaptive offset and adaptive loop filter. Therefore, adaptive offset information from SAO 131 and adaptive loop filter information from ALF 132 are provided to entropy coding 122 for incorporation into the bitstream. The encoder may need to access the original video data in order to derive AO information and ALF information. The paths from the input to SAO 131 and ALF 132 are not explicitly shown in FIG. 1. -
FIG. 2 illustrates a system block diagram of an exemplary video decoder including deblocking filter and adaptive loop filter. Since the encoder also contains a local decoder for reconstructing the video data, some decoder components are already used in the encoder, except for the entropy decoder 222. Furthermore, only motion compensation 212 is required for the decoder side. The switch 214 selects intra-prediction or inter-prediction and the selected prediction data are supplied to reconstruction (REC) 128 to be combined with recovered residues. Besides performing entropy decoding on compressed video data, entropy decoding 222 is also responsible for entropy decoding of side information and provides the side information to respective blocks. For example, intra mode information is provided to intra-prediction 110, inter mode information is provided to motion compensation 212, adaptive offset information is provided to SAO 131, adaptive loop filter information is provided to ALF 132 and residues are provided to inverse quantization 124. The residues are processed by IQ 124, IT 126 and a subsequent reconstruction process to reconstruct the video data. Again, reconstructed video data from REC 128 undergo a series of processing including IQ 124 and IT 126 as shown in FIG. 2 and are subject to intensity shift. The reconstructed video data are further processed by deblocking filter 130, sample adaptive offset 131 and adaptive loop filter 132. - The in-loop filtering is only applied to the luma component of reconstructed video according to the current HEVC standard. It is beneficial to apply in-loop filtering to chroma components of reconstructed video as well. The information associated with in-loop filtering for the chroma components may be sizeable. However, a chroma component typically results in much smaller compressed data than the luma component. Therefore, it is desirable to develop a method and apparatus for applying in-loop filtering to the chroma components efficiently.
Accordingly, an efficient method and apparatus of SAO for chroma component are disclosed.
- In one example incorporating an embodiment of the present invention, an indication is provided for signaling whether in-loop filtering is turned ON or not for chroma components when SAO for the luma component is turned ON. If SAO for the luma component is not turned ON, the SAO for the chroma components is also not turned ON. Therefore, there is no need to provide the indication for signaling whether in-loop filtering is turned ON or not for the chroma components in this case. An example of pseudo code for the embodiment mentioned above is shown below:
-
If SAO for luma is turned ON; A flag is signaled to indicate whether SAO for chroma is turned ON or not. Else; The flag is not signaled. - The flag to indicate if SAO for chroma is turned ON is called the chroma in-loop filter indication since it can be used for SAO as well as ALF. SAO is one example of in-loop filter processing, where the in-loop filter processing may also be ALF. In another example incorporating an embodiment of the present invention, individual indications are provided for signaling whether in-loop filtering is turned ON or not for chroma components Cb and Cr when SAO for the luma component is turned ON. If SAO for the luma component is not turned ON, the SAO for the two chroma components is also not turned ON. Therefore, there is no need to provide the individual indications for signaling whether in-loop filtering is turned ON or not for the two chroma components in this case. An example of pseudo code for the embodiment mentioned above is shown below:
-
If SAO for luma is turned ON; A first flag is signaled to indicate whether SAO for Cb is turned ON or not; A second flag is signaled to indicate whether SAO for Cr is turned ON or not. Else; Neither the first flag nor the second flag is signaled. - As mentioned before, it is desirable to develop an efficient in-loop filtering method. For example, it is desired to reduce the information required to indicate whether SAO is turned ON and the SAO parameters if SAO is turned ON. Since neighboring blocks often have similar characteristics, neighboring blocks may be useful in reducing the required SAO information.
FIG. 3 illustrates an example of utilizing neighboring blocks to reduce SAO information. Block C is the current block being processed by SAO. Blocks B, D, E and A are previously processed neighboring blocks around C, as shown in FIG. 3. The block-based syntax represents the parameters of the current processing block. A block can be a coding unit (CU), a largest coding unit (LCU), or multiple LCUs. A flag can be used to indicate that the current block shares the SAO parameters with neighboring blocks in order to reduce the rate. If the processing order of blocks is raster scan, the parameters of blocks D, B, E, and A are available when the parameters of block C are encoded. When the block parameters are available from neighboring blocks, these block parameters can be used to encode the current block. The amount of data required to send the flag indicating SAO parameter sharing is usually much less than that for the SAO parameters themselves. Therefore, efficient SAO is achieved. While SAO is used as an example of an in-loop filter to illustrate parameter sharing based on neighboring blocks, the technique can also be applied to other in-loop filters such as ALF. - In the current HEVC standard, a quadtree-based algorithm can be used to adaptively divide a picture region into four sub-regions to achieve better performance. In order to maintain the coding gain of SAO, the encoding algorithm for the quadtree-based SAO partition has to be efficiently designed. The SAO parameters (SAOP) include the SAO type index and the offset values of the selected type. An exemplary quadtree-based SAO partition is shown in
FIGS. 4A and 4B. FIG. 4A represents a picture being partitioned using quadtree partition, where each small square corresponds to an LCU. The first partition (depth-0 partition) is indicated by split_0( ). A value 0 implies no split and a value 1 indicates that a split is applied. The picture consists of twelve LCUs as labeled by P1, P2, . . . , P12 in FIG. 4B. The depth-0 quadtree partition, split_0(1), splits the picture into four regions: upper left, upper right, lower left and lower right. Since the lower left and lower right regions have only one row of blocks, no further quadtree partition is applied. Therefore, depth-1 quadtree partition is only considered for the upper left and upper right regions. The example in FIG. 4A shows that the upper left region is not split, as indicated by split_1(0), and the upper right region is further split into four regions, as indicated by split_1(1). Accordingly, the quadtree partition results in seven partitions labeled as P′0, . . . , P′6 in FIG. 4A, where:
- SAOP of P1 is the same as SAOP for P2, P5, and P6;
- SAOP of P9 is the same as SAOP for P10; and
- SAOP of P11 is the same as SAOP for P12.
- According to the partition information of SAO, each LCU can be a new partition or merged with other LCUs. If the current LCU is merged, several merge candidates can be selected. To illustrate an exemplary syntax design to allow information sharing, only two merge candidates are allowed for the quadtree partitioning of
FIG. 3. While two candidates are illustrated in the example, more candidates from the neighboring blocks may be used to practice the present invention. The syntax design is illustrated as follows:
If block C is not the first block of the picture,
    Use one flag to indicate block C is a new partition.
Else,
    Block C is inferred as a new partition.
If block C is a new partition,
    Encode SAO parameters.
Otherwise,
    If a left neighbor and a top neighbor exist,
        Send a mergeLeftFlag.
        If mergeLeftFlag is true, then block C is merged with block A.
        Otherwise, block C is merged with block B.
    Else,
        If a left neighbor exists, then block C is merged with block A.
        Otherwise, block C is merged with block B.
- In another embodiment according to the present invention, the relation with neighboring blocks (LCUs) and the properties of quadtree partition are used to reduce the amount of data required to transmit SAO-related information. Furthermore, the boundary condition of a picture region such as a slice may introduce some redundancy in the dependency among neighboring blocks, and this boundary condition can be used to reduce the amount of data required to transmit SAO-related information. The relation among neighboring blocks may also introduce redundancy in the dependency among them, and this relation may likewise be used to reduce the amount of data required to transmit SAO-related information.
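A decoder-side reading of the merge syntax above can be sketched as follows (Python sketch; `read_flag` is a hypothetical helper that pulls one flag bit from the bitstream, and the string return values are illustrative):

```python
def decode_sao_partition(read_flag, is_first_block, has_left, has_top):
    # Returns 'new' (block C carries its own SAO parameters), or the
    # neighbor block C merges with: 'A' (left) or 'B' (top).
    if is_first_block:
        return 'new'                 # inferred as a new partition
    if read_flag():                  # new-partition flag
        return 'new'
    if has_left and has_top:
        return 'A' if read_flag() else 'B'   # mergeLeftFlag
    return 'A' if has_left else 'B'  # only one merge direction exists
```

Note that when only one neighbor exists, no mergeLeftFlag is read: the merge direction is inferred, exactly as in the "Else" branch of the syntax.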
- An example of redundancy in dependency among neighboring blocks is illustrated in
FIGS. 5A-C. According to the property of quadtree partition, if blocks D and A are in the same partition and block B is in another partition, blocks A and C will be in different partitions, as shown in FIG. 5A and FIG. 5B. On the other hand, the case shown in FIG. 5C is not allowed in quadtree partition. Therefore, the merge candidate in FIG. 5C is redundant and there is no need to assign a code to represent the merge flag corresponding to FIG. 5C. Exemplary pseudo codes to implement the merge algorithm are shown as follows:
If blocks A and D are in the same partition and blocks B and D are in different partitions,
    Send newPartitionFlag to indicate that block C is a new partition.
    If newPartitionFlag is true,
        Block C is a new partition as shown in FIG. 5A.
    Otherwise,
        Block C is merged with block B without signaling as shown in FIG. 5B.
- As shown in the above example, there are only two allowed cases, i.e., block C is a new partition or block C is merged with block B. Therefore, a single bit for newPartitionFlag is adequate to identify the two cases. In another example, if blocks D and B are in the same partition and block A is in another partition, blocks B and C will be in different partitions, as shown in
FIG. 6A and FIG. 6B. On the other hand, the case shown in FIG. 6C is not allowed according to quadtree partition. Therefore, the merge candidate associated with the case in FIG. 6C is redundant and there is no need to assign a code to represent the merge flag corresponding to FIG. 6C. Exemplary pseudo codes to implement the merge algorithm are shown as follows:
If blocks B and D are in the same partition and blocks A and D are in different partitions,
    Send newPartitionFlag to indicate that block C is a new partition.
    If newPartitionFlag is true,
        Block C is a new partition as shown in FIG. 6A.
    Otherwise,
        Block C is merged with block A without signaling as shown in FIG. 6B.
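Both redundancy rules can be combined in one helper (a Python sketch; the partition identifiers and return values are illustrative, not a normative syntax):

```python
def allowed_merge_candidates(part_a, part_b, part_d):
    # Allowed outcomes for block C given the partition ids of neighbors
    # A (left), B (top) and D (top-left) under quadtree partitioning.
    if part_d == part_a and part_d != part_b:
        # FIGS. 5A-B: C is either a new partition or merges with B;
        # merging with A (FIG. 5C) is impossible, so only the single-bit
        # newPartitionFlag needs to be signaled.
        return ['new', 'B']
    if part_d == part_b and part_d != part_a:
        # FIGS. 6A-B: the symmetric case, C cannot merge with B.
        return ['new', 'A']
    # No redundancy detected: all outcomes remain possible.
    return ['new', 'A', 'B']
```

In the two restricted cases, the candidate list shrinks to two entries, which is why a single flag suffices.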
FIGS. 5A-C and FIGS. 6A-C illustrate two examples of utilizing redundancy in the dependency among neighboring blocks to further reduce the transmitted data associated with SAO information for the current block. There are many other conditions in which the system can take advantage of this redundancy. For example, if blocks A, B and D are in the same partition, then block C cannot be in another partition. Therefore, block C must be in the same partition as A, B, and D, and there is no need to transmit an indication of SAO information sharing. An LCU block on the slice boundary can also be taken into consideration to reduce the transmitted data associated with SAO information for the current block. For example, if block A does not exist, only one direction can be merged. If block B does not exist, only one direction can be merged as well. If neither block A nor block B exists, there is no need to transmit a flag to indicate block C as a new partition. To further reduce the number of transmitted syntax elements, a flag can be used to indicate that the current slice uses only one SAO type without any LCU-based signaling. When the slice is a single partition, the number of transmitted syntax elements can also be reduced. While the LCU is used as a unit of block in the above examples, other block configurations (such as block size and shape) may also be used. While the slice is mentioned here as an example of a picture area in which blocks are grouped to share common information, other picture areas such as a group of slices or a picture may also be used. - In addition, the chroma and luma components may share the same SAO information for color video data. The SAO information may also be shared between the chroma components. For example, the chroma components (Cb and Cr) may use the partition information of luma so that there is no need to signal the partition information for the chroma components.
In another example, Cb and Cr may share the same SAO parameters (SAOP), and therefore only one set of SAOP needs to be transmitted for Cb and Cr to share. The SAO syntax for luma can be reused for the chroma components, where the SAO syntax may include quadtree syntax and LCU-based syntax.
- The examples of utilizing redundancy in dependency among neighboring blocks as shown in
FIGS. 5A-C and FIGS. 6A-C to reduce the transmitted data associated with SAO information can also be applied to the chroma components. The SAOP, including the SAO type and the SAO offset values of the selected type, can be coded before the partitioning information, and therefore an SAO parameter set (SAOPS) can be formed. Accordingly, indexing can be used to identify SAO parameters from the SAOPS for the current block, where the data transmitted for the index is typically less than the data transmitted for the SAO parameters. When partition information is encoded, the selection among the SAOPS can be encoded at the same time. The number of SAOPS can be increased dynamically. For example, after a new SAOP is signaled, the number of SAOP in the SAOPS will be increased by one. To represent the number of SAOPS, the number of bits can be dynamically adjusted to match the data range. For example, three bits are required to represent an SAOPS having five to eight members. When a new SAOP is signaled, the SAOPS will grow to nine members and four bits will be needed to represent it. - If the processing of SAO refers to data located in another slice, SAO will avoid fetching data from any other slice by using a padding technique or by changing the processing pattern to replace the data from other slices. To reduce the data required for SAO information, SAO parameters can be transmitted in a predicted form, such as the difference between the SAO parameters for a current block and the SAO parameters for a neighboring block or neighboring blocks. Another embodiment according to the present invention reduces the SAO parameters for chroma. For example, Edge-based Offset (EO) classification classifies each pixel into four categories for the luma component. The number of EO categories for the chroma components can be reduced to two to reduce the transmitted data associated with SAO information for the current block.
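The dynamic adjustment of the SAOPS index width described above can be sketched as follows (a minimal Python sketch; the function name is illustrative and this is not a normative bit-allocation rule):

```python
import math

def saops_index_bits(set_size):
    # Minimum fixed-length index width for an SAOPS with set_size entries:
    # five to eight members need three bits; a ninth member pushes the
    # width to four bits, matching the example in the text.
    return max(1, math.ceil(math.log2(set_size)))
```

The width grows only at powers of two, so signaling a new SAOP changes the index width only when the set size crosses such a boundary.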
The number of bands for band offset (BO) classification is usually sixteen for the luma component. In yet another example, the number of bands for band offset (BO) classification may be reduced to eight for the chroma components.
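The band classification with a configurable band count can be sketched as follows (Python; the uniform division of the sample range into equal bands is an assumption for illustration):

```python
def band_index(pixel, bit_depth=8, num_bands=16):
    # Band offset classification: split the sample range into num_bands
    # equal bands; sixteen bands for luma, and e.g. eight for chroma to
    # reduce the side information.
    band_size = (1 << bit_depth) // num_bands
    return pixel // band_size
```

Halving `num_bands` from 16 to 8 halves the number of band offsets that must be transmitted for the chroma components.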
- The example in
FIG. 3 illustrates a case in which the current block C has four merge candidates, i.e., blocks A, B, D and E. The number of merge candidates can be reduced if the merge candidates are in the same partition. Accordingly, the number of bits to indicate which merge candidate is selected can be reduced or saved. If the processing of SAO refers to data located in another slice, SAO will avoid fetching data from any other slice by skipping the current processing pixel. In addition, a flag may be used to control whether the SAO processing avoids fetching data from any other slice. The control flag regarding whether the SAO processing avoids fetching data from any other slice can be incorporated at the sequence level or the picture level. The control flag can also be shared with the non-crossing slice boundary flag of the adaptive loop filter (ALF) or the deblocking filter (DF). In order to further reduce the transmitted data associated with SAO information, the ON/OFF control of chroma SAO can depend on the luma SAO ON/OFF information. The category of chroma SAO can be a subset of luma SAO for a specific SAO type. - Exemplary syntax design incorporating various embodiments according to the present invention is illustrated below.
FIG. 7 illustrates an example of incorporating sao_used_flag in the sequence level data, such as the Sequence Parameter Set (SPS). When sao_used_flag has a value 0, SAO is disabled for the sequence. When sao_used_flag has a value 1, SAO is enabled for the sequence. An exemplary syntax for SAO parameters is shown in FIG. 8, where the sao_param( ) syntax can be incorporated in the Adaptation Parameter Set (APS), Picture Parameter Set (PPS) or slice header. The APS is another picture-level header in addition to the PPS to accommodate parameters that are likely to change from picture to picture. If sao_flag indicates that SAO is enabled, the syntax will include the split parameter sao_split_param(0, 0, 0, 0) and the offset parameter sao_offset_param(0, 0, 0, 0) for the luma component. Furthermore, the syntax also includes the SAO flag sao_flag_cb for the Cb component and the SAO flag sao_flag_cr for the Cr component. If sao_flag_cb indicates that SAO for the Cb component is enabled, the syntax will include the split parameter sao_split_param(0, 0, 0, 1) and the offset parameter sao_offset_param(0, 0, 0, 1) for chroma component Cb. If sao_flag_cr indicates that SAO for the Cr component is enabled, the syntax will include the split parameter sao_split_param(0, 0, 0, 2) and the offset parameter sao_offset_param(0, 0, 0, 2) for chroma component Cr. FIG. 9 illustrates an exemplary syntax for sao_split_param(rx, ry, Depth, component), where the syntax is similar to a conventional sao_split_param( ) except that an additional parameter “component” is added, where “component” is used to indicate the luma or one of the chroma components. FIG. 10 illustrates an exemplary syntax for sao_offset_param(rx, ry, Depth, component), where the syntax is similar to a conventional sao_offset_param( ) except that an additional parameter “component” is added.
In sao_offset_param(rx, ry, Depth, component), the syntax includes sao_type_idx[component][Depth][ry][rx] if the split flag sao_split_flag[component][Depth][ry][rx] indicates that the region is not further split. The sao_type_idx[component][Depth][ry][rx] specification is shown in Table 1.
TABLE 1

    sao_type_idx   sample adaptive offset type to be used   Number of categories, nSaoLength[sao_type_idx]
    0              None                                     0
    1              1-D 0-degree pattern edge offset         4
    2              1-D 90-degree pattern edge offset        4
    3              1-D 135-degree pattern edge offset       4
    4              1-D 45-degree pattern edge offset        4
    5              central bands band offset                16
    6              side bands band offset                   16

- The sample adaptive offset (SAO) adopted in HM-3.0 uses a quadtree-based syntax, which divides a picture region into four sub-regions using a split flag recursively, as shown in
FIG. 11. Each leaf region has its own SAO parameters (SAOP), where the SAOP includes the information of SAO type and the offset values to be applied for the region. FIG. 11 illustrates an example where the picture is divided into seven leaf regions, 1110 through 1170, with an SAO type such as band offset (BO) assigned to each leaf region individually. FIG. 12A illustrates an example of picture-based SAO and FIG. 12B illustrates block-based SAO, where each region is one LCU and there are fifteen LCUs in the picture. In picture-based SAO, the entire picture shares one SAOP. It is also possible to use slice-based SAO so that the entire slice or multiple slices share one SAOP. In LCU-based SAO, each LCU has its own SAOP, and SAOP1 through SAOP15 are used by the fifteen LCUs (LCU1 through LCU15) respectively. - In another embodiment according to the present invention, the SAOP for each LCU may be shared by following LCUs. The number of consecutive subsequent LCUs sharing the same SAOP may be indicated by a run signal.
FIG. 13 illustrates an example where SAOP1, SAOP2 and SAOP3 are the same. In other words, the SAOP of the first LCU is SAOP1, and SAOP1 is used for the subsequent two LCUs. In this case, a syntax element “run=2” will be encoded to signal the number of consecutive subsequent LCUs sharing the same SAOP. Since the SAOP for the next two LCUs is not transmitted, the rate of encoding their SAOPs can be saved. In yet another embodiment according to the present invention, in addition to using a run signal, an LCU in a following row according to the raster scan order may share the SAOP of a current LCU. A merge-above flag may be used to indicate the case where the current LCU shares the SAOP of the LCU above, if the above LCU is available. If the merge-above flag is set to “1”, the current LCU will use the SAOP of the LCU above. As shown in FIG. 14, SAOP2 is shared by four LCUs, 1410 through 1440, where “run=1” and merge-above flags are used to indicate the sharing among these LCUs, and the merge-above flag has a value 0 for the blocks associated with SAOP1, SAOP3 and SAOP4. - In order to reduce the bitrate for the run signal, the run signal of the above LCU can be used as a predictor for the run signal of the current LCU. Instead of encoding the run signal directly, the difference between the two run signals is encoded, where the difference is denoted as d_run, as shown in
FIG. 15. When the above LCU is not the first LCU of an LCU group with a run value, the run prediction value can be the run of the above LCU group subtracted by the number of LCUs that are prior to the above LCU in the same LCU group. The first LCU sharing SAOP3 has a run value of 2 and the first LCU above also has a run value of 2 (sharing SAOP1). Accordingly, d_run for the LCU sharing SAOP3 has a value of 0. The first LCU sharing SAOP4 has a run value of 4 and the first LCU above has a run value of 2 (sharing SAOP3). Accordingly, d_run for the LCU sharing SAOP4 has a value of 2. If the predictor of a run is not available, the run may be encoded by using an unsigned variable length code (U_VLC). If the predictor exists, the delta run, d_run, may be encoded by using a signed variable length code (S_VLC). The U_VLC and S_VLC can be k-th order exp-Golomb coding, Golomb-Rice coding, or a binarization process of CABAC coding. - In one embodiment according to the present invention, a flag may be used to indicate that all SAOPs in the current LCU row are the same as those in the above LCU row. For example, a flag, RepeatedRow, for each LCU row can be used to indicate that all SAOPs in this LCU row are the same as those in the above LCU row. If the RepeatedRow flag is equal to 1, no more information needs to be coded, and for each LCU in the current LCU row the related SAOP is copied from the LCU in the above LCU row. If the RepeatedRow flag is equal to 0, the SAOPs of this LCU row are coded.
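The run-based sharing of FIG. 13 and the d_run prediction of FIG. 15 can be sketched together as follows (Python; the list-based representation and function names are illustrative assumptions):

```python
def expand_runs(encoded):
    # encoded: list of (saop, run) pairs; each SAOP applies to the LCU it
    # is signaled for plus `run` consecutive subsequent LCUs.
    saops = []
    for saop, run in encoded:
        saops.extend([saop] * (run + 1))
    return saops

def d_run(current_run, above_group_run, lcus_before_above=0):
    # The predictor is the above LCU group's run minus the number of LCUs
    # preceding the above LCU within that group; only the difference
    # between the current run and this predictor is coded.
    return current_run - (above_group_run - lcus_before_above)
```

For the FIG. 13 example, a single (SAOP1, run=2) entry covers the first three LCUs; for the FIG. 15 example, the first LCU sharing SAOP4 codes d_run = 4 − 2 = 2.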
- In another embodiment according to the present invention, a flag may be used to signal whether the RepeatedRow flag is used or not. For example, an EnableRepeatedRow flag can be used to indicate whether the RepeatedRow flag is used, and it can be signaled at the slice or picture level. If EnableRepeatedRow is equal to 0, the RepeatedRow flag is not coded for each LCU row. If EnableRepeatedRow is equal to 1, the RepeatedRow flag is coded for each LCU row.
- In yet another embodiment according to the present invention, the RepeatedRow flag at the first LCU row of a picture or a slice can be saved (i.e., not signaled). For the case of a picture having only one slice, the RepeatedRow flag of the first LCU row can be saved. For the case of one picture with multiple slices, if the SAO process is a slice-independent operation, the RepeatedRow flag of the first LCU row in a slice can be saved; otherwise, the RepeatedRow flag will be signaled. The method of saving the RepeatedRow flag at the first LCU row of one picture or one slice can also be applied to the case where the EnableRepeatedRow flag is used.
- To reduce the transmitted data associated with SAOP, an embodiment according to the present invention uses a run signal to indicate that all of the SAOPs in the following LCU rows are the same as those in the above LCU row. For example, for N consecutive LCU rows containing the same SAOP, the SAOP and a run signal equal to N−1 are signaled at the first LCU row of the N consecutive repeated LCU rows. The maximum and minimum runs of the repeated LCU rows in one picture or slice can be derived and signaled at the slice or picture level. Based on the maximum and minimum values, the run number can be coded using a fixed-length code word. The word length of the fixed-length code can be determined according to the maximum and minimum run values and thus can be adaptively changed at the slice or picture level.
- In another embodiment according to the present invention, the run number in the first LCU row of a picture or a slice is coded. In the method of entropy coding of runs and delta-runs mentioned earlier, for the first LCU row of one picture or one slice, if the SAOP is repeated for consecutive LCUs, a run is coded to indicate the number of LCUs sharing the SAOP. If the predictor of a run is not available, the run can be encoded by using an unsigned variable length code (U_VLC) or a fixed-length code word. If the fixed-length code is used, the word length can be coded adaptively based on the image width, the coded runs, or the remaining LCUs, or the word length can be fixed based on the image width or be signaled to the decoder. For example, an LCU row in a picture has N LCUs and the LCU being SAO processed is the k-th LCU in the LCU row, where k=0 . . . N−1. If a run needs to be coded, the maximum run is N−1−k. The word length of the to-be-coded run is floor(log2(N−1−k)+1). In another example, the maximum and minimum numbers of runs in a slice or picture can be calculated first. Based on the maximum and minimum values, the word length of the fixed-length code can be derived and coded.
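The adaptive word length can be sketched as follows (Python; note that floor(log2(x) + 1) equals floor(log2(x)) + 1, the usual bit width of the maximum run; the zero-run handling is an assumption for illustration):

```python
import math

def run_word_length(n, k):
    # Word length for coding the run of the k-th LCU (k = 0..n-1) in a
    # row of n LCUs; the maximum possible run is n-1-k.
    max_run = n - 1 - k
    if max_run <= 0:
        return 0          # no run needs to be coded for the last LCU
    return math.floor(math.log2(max_run) + 1)
```

As the row is consumed, the maximum possible run shrinks, so later LCUs spend fewer bits on their run fields.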
- In yet another embodiment according to the present invention, the information for the number of runs and delta-runs can be incorporated at slice level. The number of runs, delta-runs or the number of LCUs, NumSaoRun, is signaled at slice level. The number of LCUs for the current coding SAOP can be specified using the NumSaoRun flag. Furthermore, the number of runs and delta-runs or the number of LCUs can be predicted using the number of LCUs in one coding picture. The prediction equation is given by:
-
NumSaoRun = sao_num_run_info + NumTBsInPicture, - where NumTBsInPicture is the number of LCUs in one picture and sao_num_run_info is the predicted residual value. Syntax sao_num_run_info can be coded using a signed or unsigned variable-length code. Syntax sao_num_run_info may also be coded using a signed or unsigned fixed-length code word.
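Decoding this prediction is a one-line helper (Python sketch; the function name follows the syntax elements above and is otherwise illustrative):

```python
def decode_num_sao_run(sao_num_run_info, num_tbs_in_picture):
    # NumSaoRun is predicted by the number of LCUs in the picture; only
    # the residual sao_num_run_info is transmitted in the bitstream.
    return sao_num_run_info + num_tbs_in_picture
```

Because NumSaoRun is typically close to the picture's LCU count, the residual is small and cheap to code.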
- Embodiments of in-loop filtering according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program codes integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware codes may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
- The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims (59)
1. A method for processing reconstructed video using Sample Adaptive Offset in a video decoder, the method comprising:
deriving reconstructed video data from a video bitstream, wherein the reconstructed video data comprises a luma component and chroma components;
receiving chroma Sample Adaptive Offset indication from the video bitstream if luma Sample Adaptive Offset indication in the video bitstream indicates that Sample Adaptive Offset processing is applied to the luma component;
determining chroma Sample Adaptive Offset information if the chroma Sample Adaptive Offset indication indicates that the Sample Adaptive Offset processing is applied to the chroma components; and
applying the Sample Adaptive Offset processing to the chroma components according to the chroma Sample Adaptive Offset information if the chroma Sample Adaptive Offset indication indicates that the Sample Adaptive Offset processing is applied to the chroma components.
2. The method of claim 1 , wherein the chroma Sample Adaptive Offset indication uses a single chroma Sample Adaptive Offset flag for the chroma components to share.
3. The method of claim 1 , wherein the chroma Sample Adaptive Offset indication uses individual chroma Sample Adaptive Offset flags for the chroma components respectively.
4. The method of claim 1 , wherein a chroma picture area of the reconstructed video is partitioned into chroma blocks and the chroma Sample Adaptive Offset is applied to the chroma blocks; wherein the chroma Sample Adaptive Offset information is received from the video bitstream if a current reconstructed chroma block corresponding to one of the chroma components is a new partition; the chroma Sample Adaptive Offset information is derived from a target chroma block if the current reconstructed chroma block is not said new partition; and wherein the current reconstructed chroma block is merged with the target chroma block selected from one or more candidate chroma blocks corresponding to one or more neighboring chroma blocks of the current reconstructed chroma block.
5. The method of claim 4 , wherein the chroma Sample Adaptive Offset information is determined based on a merge flag in the video bitstream if said one or more neighboring chroma blocks contain more than one neighboring chroma block; and wherein the chroma Sample Adaptive Offset information is inferred if said one or more neighboring chroma blocks contain one neighboring chroma block.
6. The method of claim 5 , wherein at least one of said one or more candidate chroma blocks is eliminated from merging with the current reconstructed chroma block according to quadtree-partition property and merge information of said one or more candidate chroma blocks.
7. The method of claim 1 , wherein a picture area of the reconstructed video is partitioned into blocks; wherein luma Sample Adaptive Offset and the chroma Sample Adaptive Offset are applied to luma blocks and chroma blocks respectively; and wherein partition information for the chroma components is derived from the partition information for the luma component.
8. The method of claim 7 , wherein the chroma components share the chroma Sample Adaptive Offset information.
9. The method of claim 7 , wherein the picture area of the reconstructed video is partitioned into the blocks using quadtree partition; and wherein quadtree-based syntax for the chroma components is derived from the quadtree-based syntax for the luma component.
10. The method of claim 1 , wherein a picture area of the reconstructed video is partitioned into blocks; wherein luma Sample Adaptive Offset and the chroma Sample Adaptive Offset are applied to luma blocks and chroma blocks using luma Sample Adaptive Offset information and the chroma Sample Adaptive Offset information respectively; and wherein the luma Sample Adaptive Offset information associated with each luma block or the chroma Sample Adaptive Offset information associated with each chroma block is encoded using an index pointing to a first set of luma Sample Adaptive Offset information or a second set of chroma Sample Adaptive Offset information.
11. The method of claim 10 , wherein a first set size corresponding to a number of luma Sample Adaptive Offset information in the first set is updated when new luma Sample Adaptive Offset information is signaled or a second set size corresponding to the number of the chroma Sample Adaptive Offset information in the second set is updated when new chroma Sample Adaptive Offset information is signaled.
12. The method of claim 11 , wherein a first bit length to represent the first set size or a second bit length to represent the second set size is dynamically adjusted to accommodate the first set size or the second set size.
13. The method of claim 1 , wherein the Sample Adaptive Offset processing applied to the chroma components replaces exterior data from one or more other chroma picture areas with known data or current chroma picture area data or the Sample Adaptive Offset processing is skipped if the Sample Adaptive Offset processing for the current chroma picture area refers to the exterior data.
14. The method of claim 13 , wherein a control flag is used to indicate whether the Sample Adaptive Offset processing replaces exterior data or whether to skip the Sample Adaptive Offset processing if the Sample Adaptive Offset processing for the current chroma picture area refers to the exterior data.
15. The method of claim 14 , wherein the control flag is a sequence level flag or a picture level flag.
16. The method of claim 14 , wherein the control flag is shared by multiple Sample Adaptive Offsets.
17. The method of claim 1 , wherein a picture area of the reconstructed video is partitioned into blocks; wherein luma Sample Adaptive Offset and the chroma Sample Adaptive Offset are applied to luma blocks and chroma blocks using luma Sample Adaptive Offset information and the chroma Sample Adaptive Offset information respectively; and wherein the luma Sample Adaptive Offset information or the chroma Sample Adaptive Offset information for a current block is respectively predicted by the luma Sample Adaptive Offset information or the chroma Sample Adaptive Offset information for one or more other blocks.
18. The method of claim 17 , wherein the luma Sample Adaptive Offset information or the chroma Sample Adaptive Offset information for the current block is respectively predicted by the luma Sample Adaptive Offset information or the chroma Sample Adaptive Offset information corresponding to one or more neighboring blocks of the current block.
19. (canceled)
20. A method for processing reconstructed video using in-loop filter in a video decoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filter is applied to the blocks, the method comprising:
deriving reconstructed video data comprising reconstructed block from a video bitstream;
receiving in-loop filter information from the video bitstream if a current reconstructed block is a new partition;
deriving the in-loop filter information from a target block if the current reconstructed block is not said new partition, wherein the current reconstructed block is merged with the target block selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and
applying in-loop filter processing to the current reconstructed block using the in-loop filter information.
21. The method of claim 20 , wherein said deriving the in-loop filter information is based on a merge flag in the video bitstream if said one or more neighboring blocks contain more than one neighboring block; and wherein said deriving the in-loop filter information is inferred if said one or more neighboring blocks contain one neighboring block.
22. The method of claim 20 , wherein at least one of said one or more candidate blocks is eliminated from merging with the current reconstructed block according to quadtree-partition property and merge information of said one or more candidate blocks.
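Claims 20 and 21 together describe a simple decoder-side rule: parse explicit in-loop filter parameters for a new partition, infer the merge target when only one neighboring candidate exists, and read a merge flag when several exist. A minimal Python sketch of that rule follows; all names (`derive_filter_info`, `read_params`, `read_flag`) are illustrative assumptions, not identifiers from the patent.

```python
def derive_filter_info(is_new_partition, candidates, read_params, read_flag):
    """Return in-loop filter info for the current block.

    candidates  -- filter info of neighboring blocks (e.g. left, above)
    read_params -- callable that parses explicit filter parameters
    read_flag   -- callable that parses a merge-selection flag (index)
    """
    if is_new_partition:
        return read_params()      # explicit parameters in the bitstream
    if len(candidates) == 1:
        return candidates[0]      # single neighbor: merge target is inferred
    # more than one neighbor: a merge flag selects the target block
    return candidates[read_flag()]
```

The same skeleton covers both the luma and chroma cases, since the claims apply the rule per component.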
23. A method for processing reconstructed video using Sample Adaptive Offset in a video encoder, the method comprising:
deriving reconstructed video data comprising a luma component and chroma components;
incorporating chroma Sample Adaptive Offset indication in a video bitstream if luma Sample Adaptive Offset indication indicates that Sample Adaptive Offset processing is applied to the luma component;
incorporating chroma Sample Adaptive Offset information in the video bitstream if the chroma Sample Adaptive Offset indication indicates that the Sample Adaptive Offset processing is applied to the chroma components; and
applying the Sample Adaptive Offset processing to the chroma components according to the chroma Sample Adaptive Offset information if the chroma Sample Adaptive Offset indication indicates that the Sample Adaptive Offset processing is applied to the chroma components.
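The signaling order in claim 23 is strictly conditional: the chroma Sample Adaptive Offset indication is written only when luma SAO is enabled, and chroma SAO parameters only when that indication is set. A hedged sketch of an encoder emitting syntax elements in that order; the element names (`sao_luma_flag`, etc.) are hypothetical placeholders, not the patent's own syntax.

```python
def write_sao_header(bs, luma_enabled, chroma_enabled, chroma_params):
    """Append SAO syntax elements to bs (any list-like bitstream model)
    following the conditional order of claim 23."""
    bs.append(("sao_luma_flag", int(luma_enabled)))
    if luma_enabled:
        # chroma indication is present only when luma SAO is applied
        bs.append(("sao_chroma_flag", int(chroma_enabled)))
        if chroma_enabled:
            # chroma parameters are present only when chroma SAO is applied
            bs.append(("sao_chroma_params", chroma_params))
    return bs
```

A decoder mirrors the same conditions when parsing, which is what the corresponding decoder claims (1 and 30) rely on.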
24. The method of claim 23 , wherein a chroma picture area of the reconstructed video is partitioned into chroma blocks and the chroma Sample Adaptive Offset is applied to the chroma blocks; wherein the chroma Sample Adaptive Offset information is incorporated in the video bitstream if a current reconstructed chroma block corresponding to one of the chroma components is a new partition; the chroma Sample Adaptive Offset information is derived from a target chroma block if the current reconstructed chroma block is not said new partition; and wherein the current reconstructed chroma block is merged with the target chroma block selected from one or more candidate chroma blocks corresponding to one or more neighboring chroma blocks of the current reconstructed chroma block.
25. The method of claim 23 , wherein a picture area of the reconstructed video is partitioned into blocks; wherein luma Sample Adaptive Offset and the chroma Sample Adaptive Offset are applied to luma blocks and chroma blocks respectively; and wherein partition information for the chroma components is derived from the partition information for the luma component.
26. The method of claim 23 , wherein a picture area of the reconstructed video is partitioned into blocks; wherein luma Sample Adaptive Offset and the chroma Sample Adaptive Offset are applied to luma blocks and chroma blocks using luma Sample Adaptive Offset information and the chroma Sample Adaptive Offset information respectively; and wherein the luma Sample Adaptive Offset information associated with each luma block or the chroma Sample Adaptive Offset information associated with each chroma block is encoded using an index pointing to a first set of luma Sample Adaptive Offset information or a second set of chroma Sample Adaptive Offset information.
27. The method of claim 23 , wherein a picture area of the reconstructed video is partitioned into blocks; wherein luma Sample Adaptive Offset and the chroma Sample Adaptive Offset are applied to luma blocks and chroma blocks using luma Sample Adaptive Offset information and the chroma Sample Adaptive Offset information respectively; and wherein the luma Sample Adaptive Offset information or the chroma Sample Adaptive Offset information for a current block is provided using prediction based on the luma Sample Adaptive Offset information or the chroma Sample Adaptive Offset information for one or more other blocks.
28. (canceled)
29. A method for processing reconstructed video using an in-loop filter in a video encoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filter is applied to the blocks, the method comprising:
deriving reconstructed video data;
incorporating in-loop filter information in a video bitstream if a current reconstructed block is a new partition;
incorporating the in-loop filter information in the video bitstream based on a target block if the current reconstructed block is not said new partition, wherein the current reconstructed block is merged with the target block selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and
applying in-loop filter processing to the current reconstructed block using the in-loop filter information.
30. An apparatus for processing reconstructed video using Sample Adaptive Offset in a video decoder, the apparatus comprising:
means for deriving reconstructed video data from a video bitstream, wherein the reconstructed video data comprises a luma component and chroma components;
means for receiving chroma Sample Adaptive Offset indication from the video bitstream if luma Sample Adaptive Offset indication in the video bitstream indicates that Sample Adaptive Offset processing is applied to the luma component;
means for determining chroma Sample Adaptive Offset information if the chroma Sample Adaptive Offset indication indicates that the Sample Adaptive Offset processing is applied to the chroma components; and
means for applying the Sample Adaptive Offset processing to the chroma components according to the chroma Sample Adaptive Offset information if the chroma Sample Adaptive Offset indication indicates that the Sample Adaptive Offset processing is applied to the chroma components.
31. The apparatus of claim 30 , wherein a chroma picture area of the reconstructed video is partitioned into chroma blocks and the chroma Sample Adaptive Offset is applied to the chroma blocks; wherein the chroma Sample Adaptive Offset information is received from the video bitstream if a current reconstructed chroma block corresponding to one of the chroma components is a new partition; the chroma Sample Adaptive Offset information is derived from a target chroma block if the current reconstructed chroma block is not said new partition; and wherein the current reconstructed chroma block is merged with the target chroma block selected from one or more candidate chroma blocks corresponding to one or more neighboring chroma blocks of the current reconstructed chroma block.
32. (canceled)
33. An apparatus for processing reconstructed video using an in-loop filter in a video decoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filter is applied to the blocks, the apparatus comprising:
means for deriving reconstructed video data comprising reconstructed blocks from a video bitstream;
means for receiving in-loop filter information from the video bitstream if a current reconstructed block is a new partition;
means for deriving the in-loop filter information from a target block if the current reconstructed block is not said new partition, wherein the current reconstructed block is merged with the target block selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and
means for applying in-loop filter processing to the current reconstructed block using the in-loop filter information.
34. The apparatus of claim 33 , wherein said deriving the in-loop filter information is based on a merge flag in the video bitstream if said one or more neighboring blocks contain more than one neighboring block; and wherein said deriving the in-loop filter information is inferred if said one or more neighboring blocks contain one neighboring block.
35. The apparatus of claim 33 , wherein at least one of said one or more candidate blocks is eliminated from merging with the current reconstructed block according to quadtree-partition property and merge information of said one or more candidate blocks.
36. An apparatus for processing reconstructed video using Sample Adaptive Offset in a video encoder, the apparatus comprising:
means for deriving reconstructed video data comprising a luma component and chroma components;
means for incorporating chroma Sample Adaptive Offset indication in a video bitstream if luma Sample Adaptive Offset indication indicates that Sample Adaptive Offset processing is applied to the luma component;
means for incorporating chroma Sample Adaptive Offset information in the video bitstream if the chroma Sample Adaptive Offset indication indicates that the Sample Adaptive Offset processing is applied to the chroma components; and
means for applying the Sample Adaptive Offset processing to the chroma components according to the chroma Sample Adaptive Offset information if the chroma Sample Adaptive Offset indication indicates that the Sample Adaptive Offset processing is applied to the chroma components.
37. (canceled)
38. An apparatus for processing reconstructed video using an in-loop filter in a video encoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filter is applied to the blocks, the apparatus comprising:
means for deriving reconstructed video data;
means for incorporating in-loop filter information in a video bitstream if a current reconstructed block is a new partition;
means for incorporating the in-loop filter information in the video bitstream based on a target block if the current reconstructed block is not said new partition, wherein the current reconstructed block is merged with the target block selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and
means for applying in-loop filter processing to the current reconstructed block using the in-loop filter information.
39. A method for processing reconstructed video using Sample Adaptive Offset in a video decoder, the method comprising:
deriving reconstructed video data from a video bitstream; and
applying block-based Sample Adaptive Offset processing to a block of the reconstructed video data, wherein a picture is divided into a plurality of blocks, and wherein each block may use Sample Adaptive Offset information associated with each block;
wherein the Sample Adaptive Offset information associated with a current block is shared by another block according to information incorporated in the video bitstream.
40. The method of claim 39 , wherein the Sample Adaptive Offset information associated with the current block is shared by one or more following blocks as indicated by a run signal.
41. The method of claim 40 , wherein the run signal is incorporated in a slice level.
42. The method of claim 39 , wherein the Sample Adaptive Offset information associated with the current block is shared by an above-block as indicated by a merge-above flag.
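Claims 40 and 42 describe two ways a block can reuse SAO information: a run value repeats the current block's parameters for a number of following blocks, and a merge-above flag copies parameters from the block directly above. A minimal decoder-side sketch under assumed, simplified syntax (the tuple-based `syntax` stream and function name are illustrative only):

```python
def assign_sao_params(num_blocks, blocks_per_row, syntax):
    """Assign SAO parameters to blocks in raster order.

    syntax yields either ('params', p, run) -- explicit parameters shared
    with `run` following blocks -- or ('merge_above',) -- copy from the
    above-block (claims 40 and 42, simplified).
    """
    out = [None] * num_blocks
    it = iter(syntax)
    i = 0
    while i < num_blocks:
        elem = next(it)
        if elem[0] == "merge_above":
            out[i] = out[i - blocks_per_row]   # copy from the above-block
            i += 1
        else:
            _, params, run = elem
            for _ in range(run + 1):           # current block plus `run` more
                out[i] = params
                i += 1
    return out
```

In the claims each block is typically a largest coding unit (claim 47), so `blocks_per_row` would be the picture width in LCUs.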
43. The method of claim 39 , wherein the Sample Adaptive Offset information associated with the current block is shared by one or more following blocks as indicated by a run-difference signal; wherein the run-difference signal is determined based on a difference between a current run and a prediction run; wherein the current run is associated with a first number of blocks following the current block to share the Sample Adaptive Offset information with the current block; and wherein the prediction run is associated with a second number of blocks following an above-block to share the Sample Adaptive Offset information with the above block.
44. The method of claim 43 , wherein the run-difference signal is set to the current run if the prediction run is not available.
45. The method of claim 44 , wherein the current run is encoded using an unsigned variable length code if the prediction run is not available; and wherein the run-difference signal is encoded using a signed variable length code if the prediction run is available.
46. The method of claim 45 , wherein the unsigned variable length code or the signed variable length code is selected from a group consisting of k-th order exp-Golomb code, Golomb-Rice code, and CABAC code.
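Claims 43 through 46 signal the run predictively: when a prediction run from the above-block is available, only the signed difference is coded; otherwise the raw run is coded with an unsigned code. A sketch using 0th-order exp-Golomb codes, one of the code families named in claim 46 (the ue/se mapping shown is the conventional one from H.264/H.265, assumed here rather than specified by the patent):

```python
def exp_golomb_unsigned(v):
    """0th-order exp-Golomb codeword for an unsigned value, as a bit string:
    a prefix of zeros followed by the binary representation of v + 1."""
    bits = bin(v + 1)[2:]
    return "0" * (len(bits) - 1) + bits

def exp_golomb_signed(v):
    """Signed exp-Golomb: map v to an unsigned code number, then encode.
    Positive v maps to 2v - 1, non-positive v maps to -2v."""
    code_num = 2 * v - 1 if v > 0 else -2 * v
    return exp_golomb_unsigned(code_num)

def encode_run(current_run, prediction_run):
    """Per claims 44-45: send the raw run with an unsigned code when no
    prediction run is available, otherwise the signed run difference."""
    if prediction_run is None:
        return exp_golomb_unsigned(current_run)
    return exp_golomb_signed(current_run - prediction_run)
```

For example, a run difference of -1 codes as the three bits `011`, whereas sending the run 4 unpredicted would cost five bits (`00101`), which is the saving the prediction is meant to capture.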
47. The method of claim 39 , wherein each block is a largest coding unit (LCU).
48. The method of claim 39 , wherein all the blocks in a current row share the Sample Adaptive Offset information with all the blocks in an above-row if a repeated-row flag indicates a repeated row; and wherein the Sample Adaptive Offset information is incorporated in the video bitstream if the repeated-row flag indicates a non-repeated row.
49. The method of claim 48 , wherein an enable-repeated-row flag is incorporated in the video bitstream to indicate whether the repeated-row flag is incorporated in the video bitstream for each row of blocks.
50. The method of claim 48 , wherein the repeated-row flag is omitted in the video bitstream for the picture consisting of one slice or for a first slice of a multi-slice picture using slice-independent Sample Adaptive Offset processing.
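Claim 48 extends sharing to whole rows of blocks: a repeated-row flag makes a row inherit all SAO information from the row above, and explicit information is sent otherwise. A minimal decoder-side sketch, with `read_row` standing in for whatever per-row parsing the bitstream actually uses (an assumption, not the patent's syntax):

```python
def decode_rows(num_rows, repeated_flags, read_row):
    """Claim 48, simplified: each row either repeats the above-row's SAO
    parameters (flag set) or carries its own parameters in the bitstream.
    The first row can never repeat, since it has no above-row."""
    rows = []
    for r in range(num_rows):
        if r > 0 and repeated_flags[r]:
            rows.append(rows[r - 1])      # share with the above-row
        else:
            rows.append(read_row(r))      # explicit parameters
    return rows
```

Claim 49's enable-repeated-row flag would simply gate whether `repeated_flags` is parsed at all.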
51. The method of claim 39 , wherein the Sample Adaptive Offset information associated with a current row of blocks is shared by one or more following rows of blocks as indicated by a row-run signal associated with a number of said one or more following rows.
52. The method of claim 51 , wherein the row-run signal is represented by a fixed length code; and wherein bit length of the fixed length code is derived based on minimum row run and maximum row run.
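Claim 52 derives the bit length of the fixed-length row-run code from the minimum and maximum possible row runs. One natural reading, sketched below as an assumption rather than the claim's mandated formula, is to use just enough bits to cover the range and code the run as an offset from the minimum:

```python
import math

def row_run_fixed_length(min_run, max_run):
    """Bit length of a fixed-length code covering [min_run, max_run]:
    ceil(log2(range size)), with at least one bit."""
    return max(1, math.ceil(math.log2(max_run - min_run + 1)))

def encode_row_run(run, min_run, max_run):
    """Fixed-length binary codeword for a row run, offset by min_run."""
    n = row_run_fixed_length(min_run, max_run)
    return format(run - min_run, "0{}b".format(n))
```

For instance, with row runs known to lie in [1, 8] the code needs 3 bits, and a run of 5 would be sent as the offset 4, i.e. `100`.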
53. A method for processing reconstructed video using Sample Adaptive Offset in a video encoder, the method comprising:
deriving reconstructed video data; and
applying block-based Sample Adaptive Offset processing to a block of the reconstructed video data, wherein a picture is divided into a plurality of blocks, and wherein each block may use Sample Adaptive Offset information associated with each block;
wherein the Sample Adaptive Offset information associated with a current block is shared by another block according to information incorporated in a video bitstream.
54. The method of claim 53 , further comprising incorporating a run signal in the video bitstream to indicate a number of one or more following blocks to share the Sample Adaptive Offset information associated with the current block.
55. The method of claim 54 , wherein the run signal is incorporated in a slice level.
56. The method of claim 54 , further comprising incorporating a merge-above flag in the video bitstream to indicate that the Sample Adaptive Offset information associated with the current block is shared by an above-block.
57. The method of claim 53 , further comprising incorporating a run-difference signal to indicate that the Sample Adaptive Offset information associated with the current block is shared by one or more following blocks as indicated by the run-difference signal; wherein the run-difference signal is determined based on a difference between a current run and a prediction run; wherein the current run is associated with a first number of blocks following the current block to share the Sample Adaptive Offset information with the current block; and wherein the prediction run is associated with a second number of blocks following an above-block to share the Sample Adaptive Offset information with the above block.
58. An apparatus for processing reconstructed video using Sample Adaptive Offset in a video decoder, the apparatus comprising:
means for deriving reconstructed video data from a video bitstream; and
means for applying block-based Sample Adaptive Offset processing to a block of the reconstructed video data, wherein a picture is divided into a plurality of blocks, and wherein each block may use Sample Adaptive Offset information associated with each block; wherein the Sample Adaptive Offset information associated with a current block is shared by another block according to information incorporated in the video bitstream.
59. An apparatus for processing reconstructed video using Sample Adaptive Offset in a video encoder, the apparatus comprising:
means for deriving reconstructed video data; and
means for applying block-based Sample Adaptive Offset processing to a block of the reconstructed video data, wherein a picture is divided into a plurality of blocks, and wherein each block may use Sample Adaptive Offset information associated with each block;
wherein the Sample Adaptive Offset information associated with a current block is shared by another block according to information incorporated in a video bitstream.
Priority Applications (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/311,953 US20120294353A1 (en) | 2011-05-16 | 2011-12-06 | Apparatus and Method of Sample Adaptive Offset for Luma and Chroma Components |
DE112012002125.8T DE112012002125T5 (en) | 2011-05-16 | 2012-02-15 | Apparatus and method for a scan adaptive offset for luminance and chrominance components |
CN201610409900.0A CN106028050B (en) | 2011-05-16 | 2012-02-15 | Method and apparatus of sample adaptive offset for luma and chroma components |
CN201510473630.5A CN105120270B (en) | 2011-05-16 | 2012-02-15 | Method and device for processing reconstructed video using sample adaptive offset |
CN201280022870.8A CN103535035B (en) | 2011-05-16 | 2012-02-15 | Method and apparatus of sample adaptive offset for luma and chroma components |
PCT/CN2012/071147 WO2012155553A1 (en) | 2011-05-16 | 2012-02-15 | Apparatus and method of sample adaptive offset for luma and chroma components |
GB1311592.8A GB2500347B (en) | 2011-05-16 | 2012-02-15 | Apparatus and method of sample adaptive offset for luma and chroma components |
ZA2013/05528A ZA201305528B (en) | 2011-05-16 | 2013-07-22 | Apparatus and method of sample adaptive offset for luma and chroma components |
US15/015,537 US10405004B2 (en) | 2011-05-16 | 2016-02-04 | Apparatus and method of sample adaptive offset for luma and chroma components |
US16/249,063 US20190149846A1 (en) | 2011-05-16 | 2019-01-16 | Apparatus and method of sample adaptive offset for luma and chroma components |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161486504P | 2011-05-16 | 2011-05-16 | |
US201161498949P | 2011-06-20 | 2011-06-20 | |
US201161503870P | 2011-07-01 | 2011-07-01 | |
US13/311,953 US20120294353A1 (en) | 2011-05-16 | 2011-12-06 | Apparatus and Method of Sample Adaptive Offset for Luma and Chroma Components |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/015,537 Division US10405004B2 (en) | 2011-05-16 | 2016-02-04 | Apparatus and method of sample adaptive offset for luma and chroma components |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120294353A1 true US20120294353A1 (en) | 2012-11-22 |
Family
ID=47174900
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/311,953 Abandoned US20120294353A1 (en) | 2011-05-16 | 2011-12-06 | Apparatus and Method of Sample Adaptive Offset for Luma and Chroma Components |
US15/015,537 Active 2033-11-04 US10405004B2 (en) | 2011-05-16 | 2016-02-04 | Apparatus and method of sample adaptive offset for luma and chroma components |
US16/249,063 Abandoned US20190149846A1 (en) | 2011-05-16 | 2019-01-16 | Apparatus and method of sample adaptive offset for luma and chroma components |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/015,537 Active 2033-11-04 US10405004B2 (en) | 2011-05-16 | 2016-02-04 | Apparatus and method of sample adaptive offset for luma and chroma components |
US16/249,063 Abandoned US20190149846A1 (en) | 2011-05-16 | 2019-01-16 | Apparatus and method of sample adaptive offset for luma and chroma components |
Country Status (1)
Country | Link |
---|---|
US (3) | US20120294353A1 (en) |
Cited By (77)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130051454A1 (en) * | 2011-08-24 | 2013-02-28 | Vivienne Sze | Sample Adaptive Offset (SAO) Parameter Signaling |
US20130051455A1 (en) * | 2011-08-24 | 2013-02-28 | Vivienne Sze | Flexible Region Based Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) |
US20130094568A1 (en) * | 2011-10-14 | 2013-04-18 | Mediatek Inc. | Method and Apparatus for In-Loop Filtering |
US20130182759A1 (en) * | 2011-06-22 | 2013-07-18 | Texas Instruments Incorporated | Method and Apparatus for Sample Adaptive Offset Parameter Estimation in Video Coding |
US20130188687A1 (en) * | 2008-11-11 | 2013-07-25 | Cisco Technology, Inc. | Digital video compression system, method and computer readable medium |
US20130223542A1 (en) * | 2012-02-27 | 2013-08-29 | Texas Instruments Incorporated | Sample Adaptive Offset (SAO) Parameter Signaling |
US20130266058A1 (en) * | 2012-04-06 | 2013-10-10 | General Instrument Corporation | Devices and methods for signaling sample adaptive offset (sao) parameters |
US20130315297A1 (en) * | 2012-05-25 | 2013-11-28 | Panasonic Corporation | Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus |
US20130336382A1 (en) * | 2012-06-14 | 2013-12-19 | Qualcomm Incorporated | Grouping of bypass-coded bins for sao syntax elements |
US20140092958A1 (en) * | 2011-06-28 | 2014-04-03 | Sony Corporation | Image processing device and method |
US20140119433A1 (en) * | 2011-06-14 | 2014-05-01 | Lg Electronics Inc. | Method for encoding and decoding image information |
US20140126630A1 (en) * | 2011-06-24 | 2014-05-08 | Lg Electronics Inc. | Image information encoding and decoding method |
US20140153636A1 (en) * | 2012-07-02 | 2014-06-05 | Panasonic Corporation | Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus |
US20140192891A1 (en) * | 2011-06-28 | 2014-07-10 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US20140219337A1 (en) * | 2011-09-28 | 2014-08-07 | Electronics And Telecommunications Research Institute | Method for encoding and decoding images based on constrained offset compensation and loop filter, and apparatus therefor |
US20140286396A1 (en) * | 2011-09-28 | 2014-09-25 | Electronics And Telecommunications Research Instit | Method for encoding and decoding images based on constrained offset compensation and loop filter, and apparatus therefor |
US20140286391A1 (en) * | 2013-03-25 | 2014-09-25 | Kwangwoon University Industry-Academic Collaboration Foundation | Sample adaptive offset (sao) processing apparatus reusing input buffer and operation method of the sao processing apparatus |
US20140314159A1 (en) * | 2012-01-06 | 2014-10-23 | Sony Corporation | Image processing device and method |
US20140341271A1 (en) * | 2013-05-20 | 2014-11-20 | Texas Instruments Incorporated | Method and apparatus of hevc de-blocking filter |
CN104205829A (en) * | 2012-03-28 | 2014-12-10 | 高通股份有限公司 | Merge signaling and loop filter on/off signaling |
US20140369420A1 (en) * | 2011-12-22 | 2014-12-18 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
US20150016501A1 (en) * | 2013-07-12 | 2015-01-15 | Qualcomm Incorporated | Palette prediction in palette-based video coding |
US20150016551A1 (en) * | 2012-03-30 | 2015-01-15 | Panasonic Intellectual Property Corporation Of America | Syntax and semantics for adaptive loop filter and sample adaptive offset |
US20150023420A1 (en) * | 2012-01-19 | 2015-01-22 | Mitsubishi Electric Corporation | Image decoding device, image encoding device, image decoding method, and image encoding method |
US20150172678A1 (en) * | 2012-06-11 | 2015-06-18 | Samsung Electronics Co., Ltd. | Sample adaptive offset (sao) adjustment method and apparatus and sao adjustment determination method and apparatus |
US20150172657A1 (en) * | 2011-05-10 | 2015-06-18 | Qualcomm Incorporated | Offset type and coefficients signaling method for sample adaptive offset |
US9204148B1 (en) * | 2011-09-28 | 2015-12-01 | Electronics And Telecommunications Research Institute | Method for encoding and decoding images based on constrained offset compensation and loop filter, and apparatus therefor |
US9204171B1 (en) * | 2011-09-28 | 2015-12-01 | Electronics And Telecommunications Research Institute | Method for encoding and decoding images based on constrained offset compensation and loop filter, and apparatus therefor |
US20150358633A1 (en) * | 2013-01-17 | 2015-12-10 | Samsung Electronics Co., Ltd. | Method for encoding video for decoder setting and device therefor, and method for decoding video on basis of decoder setting and device therefor |
KR20150140729A (en) * | 2013-04-08 | 2015-12-16 | 퀄컴 인코포레이티드 | Sample adaptive offset scaling based on bit-depth |
US9414057B2 (en) | 2012-06-04 | 2016-08-09 | Sun Patent Trust | Moving picture coding method, moving picture coding apparatus, moving picture decoding method, and moving picture decoding apparatus |
WO2016144519A1 (en) * | 2015-03-06 | 2016-09-15 | Qualcomm Incorporated | Low complexity sample adaptive offset (sao) coding |
US9591325B2 (en) | 2015-01-27 | 2017-03-07 | Microsoft Technology Licensing, Llc | Special case handling for merged chroma blocks in intra block copy prediction mode |
US9654777B2 (en) | 2013-04-05 | 2017-05-16 | Qualcomm Incorporated | Determining palette indices in palette-based video coding |
US9686561B2 (en) | 2013-06-17 | 2017-06-20 | Qualcomm Incorporated | Inter-component filtering |
US20170195676A1 (en) * | 2014-06-20 | 2017-07-06 | Hfi Innovation Inc. | Method of Palette Predictor Signaling for Video Coding |
US20170230656A1 (en) * | 2016-02-05 | 2017-08-10 | Apple Inc. | Sample adaptive offset systems and methods |
EP3220644A1 (en) * | 2016-03-14 | 2017-09-20 | Thomson Licensing | Method and device for encoding at least one image unit, and method and device for decoding a stream representative of at least one image unit |
US9774853B2 (en) | 2011-11-08 | 2017-09-26 | Google Technology Holdings LLC | Devices and methods for sample adaptive offset coding and/or signaling |
US9781437B2 (en) | 2012-09-10 | 2017-10-03 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus |
WO2017177957A1 (en) * | 2016-04-14 | 2017-10-19 | Mediatek Inc. | Non-local adaptive loop filter |
US9894352B2 (en) | 2012-05-25 | 2018-02-13 | Sun Patent Trust | Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus |
US9955153B2 (en) | 2012-01-05 | 2018-04-24 | Google Technology Holdings LLC | Devices and methods for sample adaptive offset coding |
US10063862B2 (en) | 2012-06-27 | 2018-08-28 | Sun Patent Trust | Image decoding method and image decoding apparatus for sample adaptive offset information |
TWI638551B (en) * | 2016-07-19 | 2018-10-11 | 瑞昱半導體股份有限公司 | Wireless communication system and associated wireless communication method and wireless device |
US20180332283A1 (en) * | 2017-05-09 | 2018-11-15 | Futurewei Technologies, Inc. | Coding Chroma Samples In Video Compression |
US20180338142A1 (en) * | 2011-06-22 | 2018-11-22 | Texas Instruments Incorporated | Method and apparatus for sample adaptive offset parameter estimation for video coding |
US10142624B2 (en) | 2012-05-25 | 2018-11-27 | Velos Media, Llc | Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus |
US10212425B2 (en) | 2012-06-08 | 2019-02-19 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
US10368091B2 (en) | 2014-03-04 | 2019-07-30 | Microsoft Technology Licensing, Llc | Block flipping and skip mode in intra block copy prediction |
US10390034B2 (en) | 2014-01-03 | 2019-08-20 | Microsoft Technology Licensing, Llc | Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area |
US10469863B2 (en) | 2014-01-03 | 2019-11-05 | Microsoft Technology Licensing, Llc | Block vector prediction in video and image coding/decoding |
US10506254B2 (en) | 2013-10-14 | 2019-12-10 | Microsoft Technology Licensing, Llc | Features of base color index map mode for video and image coding and decoding |
US10542274B2 (en) | 2014-02-21 | 2020-01-21 | Microsoft Technology Licensing, Llc | Dictionary encoding and decoding of screen content |
US10554969B2 (en) | 2015-09-11 | 2020-02-04 | Kt Corporation | Method and device for processing video signal |
US10582213B2 (en) | 2013-10-14 | 2020-03-03 | Microsoft Technology Licensing, Llc | Features of intra block copy prediction mode for video and image coding and decoding |
US10659783B2 (en) | 2015-06-09 | 2020-05-19 | Microsoft Technology Licensing, Llc | Robust encoding/decoding of escape-coded pixels in palette mode |
US10785486B2 (en) | 2014-06-19 | 2020-09-22 | Microsoft Technology Licensing, Llc | Unified intra block copy and inter prediction modes |
US10812817B2 (en) | 2014-09-30 | 2020-10-20 | Microsoft Technology Licensing, Llc | Rules for intra-picture prediction modes when wavefront parallel processing is enabled |
US10986349B2 (en) | 2017-12-29 | 2021-04-20 | Microsoft Technology Licensing, Llc | Constraints on locations of reference blocks for intra block copy prediction |
US11006110B2 (en) * | 2018-05-23 | 2021-05-11 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US11025931B2 (en) * | 2016-04-06 | 2021-06-01 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods, encoder, and transcoder for transcoding |
US11051017B2 (en) | 2018-12-20 | 2021-06-29 | Qualcomm Incorporated | Adaptive loop filter (ALF) index signaling |
US11109036B2 (en) | 2013-10-14 | 2021-08-31 | Microsoft Technology Licensing, Llc | Encoder-side options for intra block copy prediction mode for video and image coding |
US11146788B2 (en) | 2015-06-12 | 2021-10-12 | Qualcomm Incorporated | Grouping palette bypass bins for video coding |
US11184623B2 (en) * | 2011-09-26 | 2021-11-23 | Texas Instruments Incorporated | Method and system for lossless coding mode in video coding |
US20210368211A1 (en) * | 2019-03-07 | 2021-11-25 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Loop filtering implementation method and apparatus, and computer storage medium |
CN113728627A (en) * | 2019-04-26 | 2021-11-30 | 北京字节跳动网络技术有限公司 | Prediction of parameters for in-loop reconstruction |
US11284103B2 (en) | 2014-01-17 | 2022-03-22 | Microsoft Technology Licensing, Llc | Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning |
CN114391255A (en) * | 2019-09-11 | 2022-04-22 | 夏普株式会社 | System and method for reducing reconstruction errors in video coding based on cross-component correlation |
CN114586351A (en) * | 2019-08-29 | 2022-06-03 | Lg 电子株式会社 | Image compiling apparatus and method based on adaptive loop filtering |
CN114631321A (en) * | 2019-10-18 | 2022-06-14 | 北京字节跳动网络技术有限公司 | Inter-influence between sub-picture and loop filtering |
US11451773B2 (en) | 2018-06-01 | 2022-09-20 | Qualcomm Incorporated | Block-based adaptive loop filter (ALF) design and signaling |
US20230007248A1 (en) * | 2016-07-14 | 2023-01-05 | Arris Enterprises Llc | Region specific encoding and sao-sensitive-slice-width-adaptation for improved-quality hevc encoding |
EP4336840A3 (en) * | 2017-05-31 | 2024-05-29 | InterDigital Madison Patent Holdings, SAS | A method and a device for picture encoding and decoding |
US12047558B2 (en) | 2019-08-10 | 2024-07-23 | Beijing Bytedance Network Technology Co., Ltd. | Subpicture dependent signaling in video bitstreams |
US12143645B2 (en) * | 2023-03-13 | 2024-11-12 | Texas Instruments Incorporated | Sample adaptive offset (SAO) parameter signaling |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI618404B (en) * | 2012-06-27 | 2018-03-11 | Sony Corp | Image processing device and method |
WO2018054286A1 (en) * | 2016-09-20 | 2018-03-29 | Mediatek Inc. | Methods and apparatuses of sample adaptive offset processing for video coding |
WO2019147403A1 (en) | 2018-01-29 | 2019-08-01 | Interdigital Vc Holdings, Inc. | Encoding and decoding with refinement of the reconstructed picture |
WO2019194647A1 (en) * | 2018-04-06 | Kaonmedia Co., Ltd. | Filter information-based adaptive loop filtering method and image coding and decoding method using same |
MX2022001989A (en) * | 2019-08-16 | 2022-05-11 | Huawei Tech Co Ltd | ALF APS constraints in video coding |
JP7368602B2 (en) | 2019-08-29 | 2023-10-24 | エルジー エレクトロニクス インコーポレイティド | Video coding device and method |
US12101476B2 (en) * | 2019-11-22 | 2024-09-24 | Electronics And Telecommunications Research Institute | Adaptive in-loop filtering method and device |
WO2021202393A1 (en) | 2020-03-30 | 2021-10-07 | Bytedance Inc. | Conformance window parameters in video coding |
WO2022035687A1 (en) * | 2020-08-13 | 2022-02-17 | Beijing Dajia Internet Information Technology Co., Ltd. | Chroma coding enhancement in cross-component sample adaptive offset |
US11743507B2 (en) * | 2020-12-16 | 2023-08-29 | Tencent America LLC | Method and apparatus for video filtering |
MX2023014066A (en) * | 2021-05-26 | 2024-01-10 | Beijing Dajia Internet Information Technology Co., Ltd. | Coding enhancement in cross-component sample adaptive offset |
US20230079960A1 (en) * | 2021-09-15 | 2023-03-16 | Tencent America LLC | On propagating intra prediction mode information of ibc block by using block vector |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040008899A1 (en) * | 2002-07-05 | 2004-01-15 | Alexandros Tourapis | Optimization techniques for data compression |
US20100135387A1 (en) * | 2007-04-12 | 2010-06-03 | Thomson Licensing | Method and apparatus for context dependent merging for skip-direct modes for video encoding and decoding |
US20110026600A1 (en) * | 2009-07-31 | 2011-02-03 | Sony Corporation | Image processing apparatus and method |
US20110243249A1 (en) * | 2010-04-05 | 2011-10-06 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video by performing in-loop filtering based on tree-structured data unit, and method and apparatus for decoding video by performing the same |
US20130022117A1 (en) * | 2011-01-14 | 2013-01-24 | General Instrument Corporation | Temporal block merge mode |
US20130034159A1 (en) * | 2010-04-13 | 2013-02-07 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V | Decoder, encoder, method for decoding and encoding, data stream |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7982796B2 (en) * | 2001-03-21 | 2011-07-19 | Apple Inc. | Track for improved video compression |
CN100420308C (en) * | 2002-04-26 | 2008-09-17 | NTT DoCoMo, Inc. | Image encoding device and image decoding device |
US8149926B2 (en) | 2005-04-11 | 2012-04-03 | Intel Corporation | Generating edge masks for a deblocking filter |
AU2007205227B2 (en) * | 2006-01-09 | 2012-02-16 | Dolby International Ab | Method and apparatus for providing reduced resolution update mode for multi-view video coding |
US9001899B2 (en) | 2006-09-15 | 2015-04-07 | Freescale Semiconductor, Inc. | Video information processing system with selective chroma deblock filtering |
TWI372565B (en) * | 2006-10-10 | 2012-09-11 | Nippon Telegraph & Telephone | Video encoding method and decoding method, apparatuses therefor, programs therefor, and storage media for storing the programs |
EP1944974A1 (en) | 2007-01-09 | 2008-07-16 | Matsushita Electric Industrial Co., Ltd. | Position dependent post-filter hints |
SI3220641T1 (en) | 2011-04-21 | 2019-05-31 | Hfi Innovation Inc. | Method and apparatus for improved in-loop filtering |
US9008170B2 (en) * | 2011-05-10 | 2015-04-14 | Qualcomm Incorporated | Offset type and coefficients signaling method for sample adaptive offset |
RS58082B1 (en) | 2011-06-23 | 2019-02-28 | Huawei Tech Co Ltd | Offset decoding device, offset encoding device, image filter device, and data structure |
- 2011-12-06 US US13/311,953 patent/US20120294353A1/en not_active Abandoned
- 2016-02-04 US US15/015,537 patent/US10405004B2/en active Active
- 2019-01-16 US US16/249,063 patent/US20190149846A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
McCann, K., W.-J. Han and I.-K. Kim, "Samsung's Response to the Call for Proposals on Video Compression Technology," document JCTVC-A124, Joint Collaborative Team on Video Coding (JCT-VC), April 2010, p. 24. *
Cited By (215)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130188687A1 (en) * | 2008-11-11 | 2013-07-25 | Cisco Technology, Inc. | Digital video compression system, method and computer readable medium |
US9386304B2 (en) * | 2008-11-11 | 2016-07-05 | Cisco Technology, Inc. | Digital video compression system, method and computer readable medium |
US9510000B2 (en) * | 2011-05-10 | 2016-11-29 | Qualcomm Incorporated | Offset type and coefficients signaling method for sample adaptive offset |
US20150172657A1 (en) * | 2011-05-10 | 2015-06-18 | Qualcomm Incorporated | Offset type and coefficients signaling method for sample adaptive offset |
US10924767B2 (en) * | 2011-06-14 | 2021-02-16 | Lg Electronics Inc. | Method for encoding and decoding image information |
US10798421B2 (en) * | 2011-06-14 | 2020-10-06 | Lg Electronics Inc. | Method for encoding and decoding image information |
US11671630B2 (en) * | 2011-06-14 | 2023-06-06 | Lg Electronics Inc. | Method for encoding and decoding image information |
US20220353541A1 (en) * | 2011-06-14 | 2022-11-03 | Lg Electronics Inc. | Method for encoding and decoding image information |
US11418815B2 (en) | 2011-06-14 | 2022-08-16 | Lg Electronics Inc. | Method for encoding and decoding image information |
US9300982B2 (en) | 2011-06-14 | 2016-03-29 | Lg Electronics Inc. | Method for encoding and decoding image information |
US20140119433A1 (en) * | 2011-06-14 | 2014-05-01 | Lg Electronics Inc. | Method for encoding and decoding image information |
US20160337642A1 (en) * | 2011-06-14 | 2016-11-17 | Lg Electronics Inc. | Method for encoding and decoding image information |
US9565453B2 (en) * | 2011-06-14 | 2017-02-07 | Lg Electronics Inc. | Method for encoding and decoding image information |
US9992515B2 (en) * | 2011-06-14 | 2018-06-05 | Lg Electronics Inc. | Method for encoding and decoding image information |
US10531126B2 (en) * | 2011-06-14 | 2020-01-07 | Lg Electronics Inc. | Method for encoding and decoding image information |
US11812034B2 (en) * | 2011-06-22 | 2023-11-07 | Texas Instruments Incorporated | Method and apparatus for sample adaptive offset parameter estimation for image and video coding |
US11818365B2 (en) | 2011-06-22 | 2023-11-14 | Texas Instruments Incorporated | Method and apparatus for sample adaptive offset parameter estimation for video coding |
US20220132142A1 (en) * | 2011-06-22 | 2022-04-28 | Texas Instruments Incorporated | Method and apparatus for sample adaptive offset parameter estimation for image and video coding |
US20130182759A1 (en) * | 2011-06-22 | 2013-07-18 | Texas Instruments Incorporated | Method and Apparatus for Sample Adaptive Offset Parameter Estimation in Video Coding |
US11197002B2 (en) * | 2011-06-22 | 2021-12-07 | Texas Instruments Incorporated | Method and apparatus for sample adaptive offset parameter estimation for image and video coding |
US20180338142A1 (en) * | 2011-06-22 | 2018-11-22 | Texas Instruments Incorporated | Method and apparatus for sample adaptive offset parameter estimation for video coding |
US11212557B2 (en) * | 2011-06-22 | 2021-12-28 | Texas Instruments Incorporated | Method and apparatus for sample adaptive offset parameter estimation for video coding |
US10038903B2 (en) * | 2011-06-22 | 2018-07-31 | Texas Instruments Incorporated | Method and apparatus for sample adaptive offset parameter estimation in video coding |
US10091505B2 (en) | 2011-06-24 | 2018-10-02 | Lg Electronics Inc. | Image information encoding and decoding method |
US9743083B2 (en) | 2011-06-24 | 2017-08-22 | Lg Electronics Inc. | Image information encoding and decoding method |
US11303893B2 (en) | 2011-06-24 | 2022-04-12 | Lg Electronics Inc. | Image information encoding and decoding method |
US11700369B2 (en) | 2011-06-24 | 2023-07-11 | Lg Electronics Inc. | Image information encoding and decoding method |
US20140126630A1 (en) * | 2011-06-24 | 2014-05-08 | Lg Electronics Inc. | Image information encoding and decoding method |
US10547837B2 (en) | 2011-06-24 | 2020-01-28 | Lg Electronics Inc. | Image information encoding and decoding method |
US10944968B2 (en) | 2011-06-24 | 2021-03-09 | Lg Electronics Inc. | Image information encoding and decoding method |
US9294770B2 (en) * | 2011-06-24 | 2016-03-22 | Lg Electronics Inc. | Image information encoding and decoding method |
US9253489B2 (en) * | 2011-06-24 | 2016-02-02 | Lg Electronics Inc. | Image information encoding and decoding method |
US20150195534A1 (en) * | 2011-06-24 | 2015-07-09 | Lg Electronics Inc. | Image information encoding and decoding method |
US10038911B2 (en) | 2011-06-28 | 2018-07-31 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US20150163502A1 (en) * | 2011-06-28 | 2015-06-11 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US10542273B2 (en) * | 2011-06-28 | 2020-01-21 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US20140192891A1 (en) * | 2011-06-28 | 2014-07-10 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US10187664B2 (en) * | 2011-06-28 | 2019-01-22 | Sony Corporation | Image processing device and method |
US20140092958A1 (en) * | 2011-06-28 | 2014-04-03 | Sony Corporation | Image processing device and method |
US20180324450A1 (en) * | 2011-06-28 | 2018-11-08 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US9462288B2 (en) * | 2011-06-28 | 2016-10-04 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US9438922B2 (en) * | 2011-06-28 | 2016-09-06 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US9438921B2 (en) * | 2011-06-28 | 2016-09-06 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US20150163516A1 (en) * | 2011-06-28 | 2015-06-11 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US9426483B2 (en) * | 2011-06-28 | 2016-08-23 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US9426482B2 (en) * | 2011-06-28 | 2016-08-23 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US20150131742A1 (en) * | 2011-06-28 | 2015-05-14 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US20150139335A1 (en) * | 2011-06-28 | 2015-05-21 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor |
US20180376168A1 (en) * | 2011-08-24 | 2018-12-27 | Texas Instruments Incorporated | Sample adaptive offset (sao) parameter signaling |
US10536722B2 (en) * | 2011-08-24 | 2020-01-14 | Texas Instruments Incorporated | Sample adaptive offset (SAO) parameter signaling |
US10070152B2 (en) * | 2011-08-24 | 2018-09-04 | Texas Instruments Incorporated | Sample adaptive offset (SAO) parameter signaling |
US11025960B2 (en) * | 2011-08-24 | 2021-06-01 | Texas Instruments Incorporated | Sample adaptive offset (SAO) parameter signaling |
US20230217045A1 (en) * | 2011-08-24 | 2023-07-06 | Texas Instruments Incorporated | Sample adaptive offset (sao) parameter signaling |
US20140334558A1 (en) * | 2011-08-24 | 2014-11-13 | Texas Instruments Incorporated | Sample adaptive offset (sao) parameter signaling |
US9344743B2 (en) * | 2011-08-24 | 2016-05-17 | Texas Instruments Incorporated | Flexible region based sample adaptive offset (SAO) and adaptive loop filter (ALF) |
US8923407B2 (en) * | 2011-08-24 | 2014-12-30 | Texas Instruments Incorporated | Sample adaptive offset (SAO) parameter signaling |
US20130051454A1 (en) * | 2011-08-24 | 2013-02-28 | Vivienne Sze | Sample Adaptive Offset (SAO) Parameter Signaling |
US11606580B2 (en) * | 2011-08-24 | 2023-03-14 | Texas Instruments Incorporated | Sample adaptive offset (SAO) parameter signaling |
US10778973B2 (en) | 2011-08-24 | 2020-09-15 | Texas Instruments Incorporated | Flexible region based sample adaptive offset (SAO) and adaptive loop filter (ALF) |
US20130051455A1 (en) * | 2011-08-24 | 2013-02-28 | Vivienne Sze | Flexible Region Based Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) |
US11184623B2 (en) * | 2011-09-26 | 2021-11-23 | Texas Instruments Incorporated | Method and system for lossless coding mode in video coding |
US9148663B2 (en) * | 2011-09-28 | 2015-09-29 | Electronics And Telecommunications Research Institute | Method for encoding and decoding images based on constrained offset compensation and loop filter, and apparatus therefor |
US9204148B1 (en) * | 2011-09-28 | 2015-12-01 | Electronics And Telecommunications Research Institute | Method for encoding and decoding images based on constrained offset compensation and loop filter, and apparatus therefor |
US20140219337A1 (en) * | 2011-09-28 | 2014-08-07 | Electronics And Telecommunications Research Institute | Method for encoding and decoding images based on constrained offset compensation and loop filter, and apparatus therefor |
US9204171B1 (en) * | 2011-09-28 | 2015-12-01 | Electronics And Telecommunications Research Institute | Method for encoding and decoding images based on constrained offset compensation and loop filter, and apparatus therefor |
US9270990B2 (en) * | 2011-09-28 | 2016-02-23 | Electronics And Telecommunications Research Institute | Method for encoding and decoding images based on constrained offset compensation and loop filter, and apparatus therefor |
US20140286396A1 (en) * | 2011-09-28 | 2014-09-25 | Electronics And Telecommunications Research Institute | Method for encoding and decoding images based on constrained offset compensation and loop filter, and apparatus therefor |
US20130094568A1 (en) * | 2011-10-14 | 2013-04-18 | Mediatek Inc. | Method and Apparatus for In-Loop Filtering |
US8913656B2 (en) * | 2011-10-14 | 2014-12-16 | Mediatek Inc. | Method and apparatus for in-loop filtering |
US9774853B2 (en) | 2011-11-08 | 2017-09-26 | Google Technology Holdings LLC | Devices and methods for sample adaptive offset coding and/or signaling |
US20150189287A1 (en) * | 2011-12-22 | 2015-07-02 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
US20150189295A1 (en) * | 2011-12-22 | 2015-07-02 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
US9538194B2 (en) * | 2011-12-22 | 2017-01-03 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
US9538195B2 (en) * | 2011-12-22 | 2017-01-03 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
US9525882B2 (en) * | 2011-12-22 | 2016-12-20 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
US9571843B2 (en) * | 2011-12-22 | 2017-02-14 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
US20150189290A1 (en) * | 2011-12-22 | 2015-07-02 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
US20140369420A1 (en) * | 2011-12-22 | 2014-12-18 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
US9544606B2 (en) * | 2011-12-22 | 2017-01-10 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
US20150189296A1 (en) * | 2011-12-22 | 2015-07-02 | Samsung Electronics Co., Ltd. | Video encoding method using offset adjustment according to classification of pixels by maximum encoding units and apparatus thereof, and video decoding method and apparatus thereof |
US9955153B2 (en) | 2012-01-05 | 2018-04-24 | Google Technology Holdings LLC | Devices and methods for sample adaptive offset coding |
US10567805B2 (en) * | 2012-01-06 | 2020-02-18 | Sony Corporation | Image processing device and method using adaptive offset filter in units of largest coding unit |
US11601685B2 (en) | 2012-01-06 | 2023-03-07 | Sony Corporation | Image processing device and method using adaptive offset filter in units of largest coding unit |
US20140314159A1 (en) * | 2012-01-06 | 2014-10-23 | Sony Corporation | Image processing device and method |
US20150023420A1 (en) * | 2012-01-19 | 2015-01-22 | Mitsubishi Electric Corporation | Image decoding device, image encoding device, image decoding method, and image encoding method |
US20240114175A1 (en) * | 2012-02-27 | 2024-04-04 | Texas Instruments Incorporated | Sample Adaptive Offset (SAO) Parameter Signaling |
US11985359B2 (en) * | 2012-02-27 | 2024-05-14 | Texas Instruments Incorporated | Sample adaptive offset (SAO) parameter signaling |
US20130223542A1 (en) * | 2012-02-27 | 2013-08-29 | Texas Instruments Incorporated | Sample Adaptive Offset (SAO) Parameter Signaling |
US9380302B2 (en) * | 2012-02-27 | 2016-06-28 | Texas Instruments Incorporated | Sample adaptive offset (SAO) parameter signaling |
US11849154B2 (en) * | 2012-02-27 | 2023-12-19 | Texas Instruments Incorporated | Sample adaptive offset (SAO) parameter signaling |
US11076174B2 (en) * | 2012-02-27 | 2021-07-27 | Texas Instruments Incorporated | Sample adaptive offset (SAO) parameter signaling |
US20140334559A1 (en) * | 2012-02-27 | 2014-11-13 | Texas Instruments Incorporated | Sample Adaptive Offset (SAO) Parameter Signaling |
US20160309195A1 (en) * | 2012-02-27 | 2016-10-20 | Texas Instruments Incorporated | Sample Adaptive Offset (SAO) Parameter Signaling |
US9591331B2 (en) * | 2012-03-28 | 2017-03-07 | Qualcomm Incorporated | Merge signaling and loop filter on/off signaling |
CN104205829A (en) * | 2012-03-28 | 2014-12-10 | 高通股份有限公司 | Merge signaling and loop filter on/off signaling |
US20150016551A1 (en) * | 2012-03-30 | 2015-01-15 | Panasonic Intellectual Property Corporation Of America | Syntax and semantics for adaptive loop filter and sample adaptive offset |
US11089336B2 (en) * | 2012-03-30 | 2021-08-10 | Sun Patent Trust | Syntax and semantics for adaptive loop filter and sample adaptive offset |
US10595049B2 (en) * | 2012-03-30 | 2020-03-17 | Sun Patent Trust | Syntax and semantics for adaptive loop filter and sample adaptive offset |
US9872034B2 (en) * | 2012-04-06 | 2018-01-16 | Google Technology Holdings LLC | Devices and methods for signaling sample adaptive offset (SAO) parameters |
US9549176B2 (en) * | 2012-04-06 | 2017-01-17 | Google Technology Holdings LLC | Devices and methods for signaling sample adaptive offset (SAO) parameters |
US20170118478A1 (en) * | 2012-04-06 | 2017-04-27 | Google Technology Holdings LLC | Devices and methods for signaling sample adaptive offset (sao) parameters |
US20130266058A1 (en) * | 2012-04-06 | 2013-10-10 | General Instrument Corporation | Devices and methods for signaling sample adaptive offset (sao) parameters |
US9749623B2 (en) * | 2012-05-25 | 2017-08-29 | Sun Patent Trust | Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus |
US9894352B2 (en) | 2012-05-25 | 2018-02-13 | Sun Patent Trust | Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus |
US10298924B2 (en) | 2012-05-25 | 2019-05-21 | Sun Patent Trust | Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus |
US9967560B2 (en) | 2012-05-25 | 2018-05-08 | Sun Patent Trust | Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus |
US10893282B2 (en) | 2012-05-25 | 2021-01-12 | Velos Media, Llc | Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus |
US10567758B2 (en) | 2012-05-25 | 2020-02-18 | Sun Patent Trust | Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus |
US20130315297A1 (en) * | 2012-05-25 | 2013-11-28 | Panasonic Corporation | Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus |
US10142624B2 (en) | 2012-05-25 | 2018-11-27 | Velos Media, Llc | Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus |
US10356429B2 (en) | 2012-06-04 | 2019-07-16 | Sun Patent Trust | Moving picture coding method, moving picture coding apparatus, moving picture decoding method, and moving picture decoding apparatus |
US9860541B2 (en) | 2012-06-04 | 2018-01-02 | Sun Patent Trust | Moving picture coding method, moving picture coding apparatus, moving picture decoding method, and moving picture decoding apparatus |
US10652557B2 (en) | 2012-06-04 | 2020-05-12 | Sun Patent Trust | Moving picture coding method, moving picture coding apparatus, moving picture decoding method, and moving picture decoding apparatus |
US9414057B2 (en) | 2012-06-04 | 2016-08-09 | Sun Patent Trust | Moving picture coding method, moving picture coding apparatus, moving picture decoding method, and moving picture decoding apparatus |
US11375195B2 (en) * | 2012-06-08 | 2022-06-28 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
US10812800B2 (en) | 2012-06-08 | 2020-10-20 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
US20240073422A1 (en) * | 2012-06-08 | 2024-02-29 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
US11849116B2 (en) * | 2012-06-08 | 2023-12-19 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
US20220295067A1 (en) * | 2012-06-08 | 2022-09-15 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
US10212425B2 (en) | 2012-06-08 | 2019-02-19 | Sun Patent Trust | Arithmetic coding for information related to sample adaptive offset processing |
US9807393B2 (en) * | 2012-06-11 | 2017-10-31 | Samsung Electronics Co., Ltd. | Sample adaptive offset (SAO) adjustment method and apparatus and SAO adjustment determination method and apparatus |
US20150181251A1 (en) * | 2012-06-11 | 2015-06-25 | Samsung Electronics Co., Ltd. | Sample adaptive offset (sao) adjustment method and apparatus and sao adjustment determination method and apparatus |
US9219918B2 (en) * | 2012-06-11 | 2015-12-22 | Samsung Electronics Co., Ltd. | Sample adaptive offset (SAO) adjustment method and apparatus and SAO adjustment determination method and apparatus |
US10609375B2 (en) | 2012-06-11 | 2020-03-31 | Samsung Electronics Co., Ltd. | Sample adaptive offset (SAO) adjustment method and apparatus and SAO adjustment determination method and apparatus |
US20150181252A1 (en) * | 2012-06-11 | 2015-06-25 | Samsung Electronics Co., Ltd. | Sample adaptive offset (sao) adjustment method and apparatus and sao adjustment determination method and apparatus |
US20150189330A1 (en) * | 2012-06-11 | 2015-07-02 | Samsung Electronics Co., Ltd. | Sample adaptive offset (sao) adjustment method and apparatus and sao adjustment determination method and apparatus |
US9826235B2 (en) * | 2012-06-11 | 2017-11-21 | Samsung Electronics Co., Ltd. | Sample adaptive offset (SAO) adjustment method and apparatus and SAO adjustment determination method and apparatus |
US20150172678A1 (en) * | 2012-06-11 | 2015-06-18 | Samsung Electronics Co., Ltd. | Sample adaptive offset (sao) adjustment method and apparatus and sao adjustment determination method and apparatus |
US9807392B2 (en) * | 2012-06-11 | 2017-10-31 | Samsung Electronics Co., Ltd. | Sample adaptive offset (SAO) adjustment method and apparatus and SAO adjustment determination method and apparatus |
US10075718B2 (en) * | 2012-06-11 | 2018-09-11 | Samsung Electronics Co., Ltd. | Sample adaptive offset (SAO) adjustment method and apparatus and SAO adjustment determination method and apparatus |
US20150189284A1 (en) * | 2012-06-11 | 2015-07-02 | Samsung Electronics Co., Ltd. | Sample adaptive offset (sao) adjustment method and apparatus and sao adjustment determination method and apparatus |
AU2016204699B2 (en) * | 2012-06-11 | 2017-08-31 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding videos sharing sao parameter according to color component |
US20130336382A1 (en) * | 2012-06-14 | 2013-12-19 | Qualcomm Incorporated | Grouping of bypass-coded bins for sao syntax elements |
US9386307B2 (en) * | 2012-06-14 | 2016-07-05 | Qualcomm Incorporated | Grouping of bypass-coded bins for SAO syntax elements |
US10542290B2 (en) | 2012-06-27 | 2020-01-21 | Sun Patent Trust | Image decoding method and image decoding apparatus for sample adaptive offset information |
US10063862B2 (en) | 2012-06-27 | 2018-08-28 | Sun Patent Trust | Image decoding method and image decoding apparatus for sample adaptive offset information |
US20140153636A1 (en) * | 2012-07-02 | 2014-06-05 | Panasonic Corporation | Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus |
KR20150034683A (en) * | 2012-07-02 | 2015-04-03 | Panasonic Intellectual Property Corporation of America | Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding and decoding device |
US9781437B2 (en) | 2012-09-10 | 2017-10-03 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus |
US9955175B2 (en) * | 2012-09-10 | 2018-04-24 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus |
US10313688B2 (en) | 2012-09-10 | 2019-06-04 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus |
US10616589B2 (en) | 2012-09-10 | 2020-04-07 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus |
US10063865B2 (en) | 2012-09-10 | 2018-08-28 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus |
US20150358633A1 (en) * | 2013-01-17 | 2015-12-10 | Samsung Electronics Co., Ltd. | Method for encoding video for decoder setting and device therefor, and method for decoding video on basis of decoder setting and device therefor |
US20140286391A1 (en) * | 2013-03-25 | 2014-09-25 | Kwangwoon University Industry-Academic Collaboration Foundation | Sample adaptive offset (sao) processing apparatus reusing input buffer and operation method of the sao processing apparatus |
US9654777B2 (en) | 2013-04-05 | 2017-05-16 | Qualcomm Incorporated | Determining palette indices in palette-based video coding |
US11259020B2 (en) | 2013-04-05 | 2022-02-22 | Qualcomm Incorporated | Determining palettes in palette-based video coding |
KR102318175B1 (en) | 2013-04-08 | 2021-10-26 | Qualcomm Incorporated | Sample adaptive offset scaling based on bit-depth |
KR20150140729A (en) * | 2013-04-08 | 2015-12-16 | Qualcomm Incorporated | Sample adaptive offset scaling based on bit-depth |
US11611764B2 (en) | 2013-05-20 | 2023-03-21 | Texas Instruments Incorporated | Method and apparatus of HEVC de-blocking filter |
US11070819B2 (en) | 2013-05-20 | 2021-07-20 | Texas Instruments Incorporated | Method and apparatus of HEVC de-blocking filter |
US12010330B2 (en) | 2013-05-20 | 2024-06-11 | Texas Instruments Incorporated | Method and apparatus of HEVC de-blocking filter |
US10455238B2 (en) | 2013-05-20 | 2019-10-22 | Texas Instruments Incorporated | Method and apparatus of HEVC de-blocking filter |
US20140341271A1 (en) * | 2013-05-20 | 2014-11-20 | Texas Instruments Incorporated | Method and apparatus of HEVC de-blocking filter |
US9854252B2 (en) * | 2013-05-20 | 2017-12-26 | Texas Instruments Incorporated | Method and apparatus of HEVC de-blocking filter |
US9686561B2 (en) | 2013-06-17 | 2017-06-20 | Qualcomm Incorporated | Inter-component filtering |
US9558567B2 (en) * | 2013-07-12 | 2017-01-31 | Qualcomm Incorporated | Palette prediction in palette-based video coding |
US20150016501A1 (en) * | 2013-07-12 | 2015-01-15 | Qualcomm Incorporated | Palette prediction in palette-based video coding |
US11109036B2 (en) | 2013-10-14 | 2021-08-31 | Microsoft Technology Licensing, Llc | Encoder-side options for intra block copy prediction mode for video and image coding |
US10582213B2 (en) | 2013-10-14 | 2020-03-03 | Microsoft Technology Licensing, Llc | Features of intra block copy prediction mode for video and image coding and decoding |
US10506254B2 (en) | 2013-10-14 | 2019-12-10 | Microsoft Technology Licensing, Llc | Features of base color index map mode for video and image coding and decoding |
US10469863B2 (en) | 2014-01-03 | 2019-11-05 | Microsoft Technology Licensing, Llc | Block vector prediction in video and image coding/decoding |
US10390034B2 (en) | 2014-01-03 | 2019-08-20 | Microsoft Technology Licensing, Llc | Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area |
US11284103B2 (en) | 2014-01-17 | 2022-03-22 | Microsoft Technology Licensing, Llc | Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning |
US10542274B2 (en) | 2014-02-21 | 2020-01-21 | Microsoft Technology Licensing, Llc | Dictionary encoding and decoding of screen content |
US10368091B2 (en) | 2014-03-04 | 2019-07-30 | Microsoft Technology Licensing, Llc | Block flipping and skip mode in intra block copy prediction |
US10785486B2 (en) | 2014-06-19 | 2020-09-22 | Microsoft Technology Licensing, Llc | Unified intra block copy and inter prediction modes |
US20170195676A1 (en) * | 2014-06-20 | 2017-07-06 | Hfi Innovation Inc. | Method of Palette Predictor Signaling for Video Coding |
US10623747B2 (en) * | 2014-06-20 | 2020-04-14 | Hfi Innovation Inc. | Method of palette predictor signaling for video coding |
US11044479B2 (en) * | 2014-06-20 | 2021-06-22 | Hfi Innovation Inc. | Method of palette predictor signaling for video coding |
US10812817B2 (en) | 2014-09-30 | 2020-10-20 | Microsoft Technology Licensing, Llc | Rules for intra-picture prediction modes when wavefront parallel processing is enabled |
US9591325B2 (en) | 2015-01-27 | 2017-03-07 | Microsoft Technology Licensing, Llc | Special case handling for merged chroma blocks in intra block copy prediction mode |
CN107431816A (en) * | 2015-03-06 | 2017-12-01 | Qualcomm Incorporated | Low complexity sample adaptive offset (SAO) coding
WO2016144519A1 (en) * | 2015-03-06 | 2016-09-15 | Qualcomm Incorporated | Low complexity sample adaptive offset (sao) coding |
US9877024B2 (en) | 2015-03-06 | 2018-01-23 | Qualcomm Incorporated | Low complexity sample adaptive offset (SAO) coding |
US10382755B2 (en) | 2015-03-06 | 2019-08-13 | Qualcomm Incorporated | Low complexity sample adaptive offset (SAO) coding |
US10659783B2 (en) | 2015-06-09 | 2020-05-19 | Microsoft Technology Licensing, Llc | Robust encoding/decoding of escape-coded pixels in palette mode |
US11146788B2 (en) | 2015-06-12 | 2021-10-12 | Qualcomm Incorporated | Grouping palette bypass bins for video coding |
US10554969B2 (en) | 2015-09-11 | 2020-02-04 | Kt Corporation | Method and device for processing video signal |
US12143566B2 (en) | 2015-09-11 | 2024-11-12 | Kt Corporation | Method and device for processing video signal |
US11297311B2 (en) | 2015-09-11 | 2022-04-05 | Kt Corporation | Method and device for processing video signal |
US10728546B2 (en) * | 2016-02-05 | 2020-07-28 | Apple Inc. | Sample adaptive offset systems and methods |
US20170230656A1 (en) * | 2016-02-05 | 2017-08-10 | Apple Inc. | Sample adaptive offset systems and methods |
EP3220643A1 (en) * | 2016-03-14 | 2017-09-20 | Thomson Licensing | Method and device for encoding at least one image unit, and method and device for decoding a stream representative of at least one image unit |
US10567761B2 (en) * | 2016-03-14 | 2020-02-18 | Interdigital Vc Holdings, Inc. | Method and device for encoding at least one image unit, and method and device for decoding a stream representative of at least one image unit |
CN107197314A (en) * | 2016-03-14 | 2017-09-22 | Thomson Licensing | Method and device for encoding at least one image unit, and method and device for decoding a stream representative of at least one image unit
EP3220644A1 (en) * | 2016-03-14 | 2017-09-20 | Thomson Licensing | Method and device for encoding at least one image unit, and method and device for decoding a stream representative of at least one image unit |
US11025931B2 (en) * | 2016-04-06 | 2021-06-01 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods, encoder, and transcoder for transcoding |
US10462459B2 (en) | 2016-04-14 | 2019-10-29 | Mediatek Inc. | Non-local adaptive loop filter |
WO2017177957A1 (en) * | 2016-04-14 | 2017-10-19 | Mediatek Inc. | Non-local adaptive loop filter |
US20230007248A1 (en) * | 2016-07-14 | 2023-01-05 | Arris Enterprises Llc | Region specific encoding and sao-sensitive-slice-width-adaptation for improved-quality hevc encoding |
TWI638551B (en) * | 2016-07-19 | 2018-10-11 | Realtek Semiconductor Corp. | Wireless communication system and associated wireless communication method and wireless device |
US20180332283A1 (en) * | 2017-05-09 | 2018-11-15 | Futurewei Technologies, Inc. | Coding Chroma Samples In Video Compression |
US10531085B2 (en) * | 2017-05-09 | 2020-01-07 | Futurewei Technologies, Inc. | Coding chroma samples in video compression |
EP4336840A3 (en) * | 2017-05-31 | 2024-05-29 | InterDigital Madison Patent Holdings, SAS | A method and a device for picture encoding and decoding |
US10986349B2 (en) | 2017-12-29 | 2021-04-20 | Microsoft Technology Licensing, Llc | Constraints on locations of reference blocks for intra block copy prediction |
US11856192B2 (en) | 2018-05-23 | 2023-12-26 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US11006110B2 (en) * | 2018-05-23 | 2021-05-11 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US11582450B2 (en) | 2018-05-23 | 2023-02-14 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US11856193B2 (en) | 2018-05-23 | 2023-12-26 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US11863743B2 (en) | 2018-05-23 | 2024-01-02 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US11451773B2 (en) | 2018-06-01 | 2022-09-20 | Qualcomm Incorporated | Block-based adaptive loop filter (ALF) design and signaling |
US11051017B2 (en) | 2018-12-20 | 2021-06-29 | Qualcomm Incorporated | Adaptive loop filter (ALF) index signaling |
US11627342B2 (en) * | 2019-03-07 | 2023-04-11 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Loop filtering implementation method and apparatus, and computer storage medium |
US20210368211A1 (en) * | 2019-03-07 | 2021-11-25 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Loop filtering implementation method and apparatus, and computer storage medium |
CN113728627A (en) * | 2019-04-26 | 2021-11-30 | Beijing Bytedance Network Technology Co., Ltd. | Prediction of parameters for in-loop reconstruction
US12075030B2 (en) | 2019-08-10 | 2024-08-27 | Beijing Bytedance Network Technology Co., Ltd. | Subpicture dependent signaling in video bitstreams |
US12047558B2 (en) | 2019-08-10 | 2024-07-23 | Beijing Bytedance Network Technology Co., Ltd. | Subpicture dependent signaling in video bitstreams |
US12010349B2 (en) | 2019-08-29 | 2024-06-11 | Lg Electronics Inc. | Adaptive loop filtering-based image coding apparatus and method |
CN114586351A (en) * | 2019-08-29 | 2022-06-03 | LG Electronics Inc. | Adaptive loop filtering-based image coding apparatus and method
CN114391255A (en) * | 2019-09-11 | 2022-04-22 | Sharp Corporation | System and method for reducing reconstruction errors in video coding based on cross-component correlation
US11962771B2 (en) | 2019-10-18 | 2024-04-16 | Beijing Bytedance Network Technology Co., Ltd | Syntax constraints in parameter set signaling of subpictures |
US11956432B2 (en) | 2019-10-18 | 2024-04-09 | Beijing Bytedance Network Technology Co., Ltd | Interplay between subpictures and in-loop filtering |
CN114631321A (en) * | 2019-10-18 | 2022-06-14 | Beijing Bytedance Network Technology Co., Ltd. | Interplay between subpictures and in-loop filtering
US12143645B2 (en) * | 2023-03-13 | 2024-11-12 | Texas Instruments Incorporated | Sample adaptive offset (SAO) parameter signaling |
Also Published As
Publication number | Publication date |
---|---|
US20190149846A1 (en) | 2019-05-16 |
US10405004B2 (en) | 2019-09-03 |
US20160156938A1 (en) | 2016-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10405004B2 (en) | Apparatus and method of sample adaptive offset for luma and chroma components | |
US10116967B2 (en) | Method and apparatus for coding of sample adaptive offset information | |
WO2012155553A1 (en) | Apparatus and method of sample adaptive offset for luma and chroma components | |
US9872015B2 (en) | Method and apparatus for improved in-loop filtering | |
AU2013248857B2 (en) | Method and apparatus for loop filtering across slice or tile boundaries | |
US9641863B2 (en) | Apparatus and method of sample adaptive offset for video coding | |
US10659817B2 (en) | Method of sample adaptive offset processing for video coding | |
AU2012327672B2 (en) | Method and apparatus for non-cross-tile loop filtering | |
CN110063057B (en) | Method and apparatus for sample adaptive offset processing for video coding and decoding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MEDIATEK INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FU, CHIH-MING;CHEN, CHING-YEH;TSAI, CHIA-YANG;AND OTHERS;REEL/FRAME:027342/0732 Effective date: 20111206 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: HFI INNOVATION INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MEDIATEK INC.;REEL/FRAME:039609/0864 Effective date: 20160628 |