
WO2016115733A1 - Improvements for inter-component residual prediction - Google Patents

Improvements for inter-component residual prediction Download PDF

Info

Publication number
WO2016115733A1
WO2016115733A1 (PCT/CN2015/071440)
Authority
WO
WIPO (PCT)
Prior art keywords
samples
chroma
component
luma
reconstructed
Prior art date
Application number
PCT/CN2015/071440
Other languages
French (fr)
Inventor
Xianguo Zhang
Han HUANG
Original Assignee
Mediatek Singapore Pte. Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mediatek Singapore Pte. Ltd. filed Critical Mediatek Singapore Pte. Ltd.
Priority to PCT/CN2015/071440 priority Critical patent/WO2016115733A1/en
Priority to PCT/CN2015/092168 priority patent/WO2016066028A1/en
Priority to SG11201703014RA priority patent/SG11201703014RA/en
Priority to US15/519,181 priority patent/US20170244975A1/en
Priority to EP15855903.9A priority patent/EP3198874A4/en
Priority to KR1020177013692A priority patent/KR20170071594A/en
Priority to CA2964324A priority patent/CA2964324C/en
Priority to CN201580058756.4A priority patent/CN107079166A/en
Priority to KR1020207012648A priority patent/KR20200051831A/en
Publication of WO2016115733A1 publication Critical patent/WO2016115733A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Methods of chroma intra prediction for general videos are disclosed. Several methods are proposed for coding the chroma components with their own prediction modes, and even their own contexts.

Description

IMPROVEMENTS FOR INTER-COMPONENT RESIDUAL PREDICTION FIELD OF THE INVENTION
 The invention relates generally to video coding process, including general video, Screen Content (SC) video, multi-view video and Three-Dimensional (3D) video processing. In particular, the present invention relates to methods for the improvements of inter-component residual prediction, such as parameter derivation and predictor calculation.
BACKGROUND AND RELATED ART
 Along with the High Efficiency Video Coding (HEVC) standard development, the development of extensions of HEVC has started. The HEVC extensions include range extensions (RExt), which target non-4:2:0 color formats, such as 4:2:2 and 4:4:4, and higher bit-depth video, such as 12, 14 and 16 bits per sample. A coding tool developed for RExt is inter-component prediction, which improves coding efficiency particularly for multiple color components with high bit-depths. Inter-component prediction can exploit the redundancy among multiple color components and improve coding efficiency accordingly. A form of inter-component prediction being developed for RExt is Inter-component Residual Prediction (IRP), as disclosed by Pu et al. in JCTVC-N0266 ("Non-RCE1: Inter Color Component Residual Prediction", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Vienna, AT, 25 July to 2 Aug. 2013, Document: JCTVC-N0266).
 In Inter-component Residual Prediction, the chroma residual is predicted at the encoder side as:
rC′(x, y) = rC(x, y) - α × rL(x, y)                              (1)
 In equation (1), rC(x, y) denotes the final chroma reconstructed residual sample at position (x, y), rC′(x, y) denotes the reconstructed chroma residual sample from the bit-stream at position (x, y), rL(x, y) denotes the reconstructed residual sample in the luma component at position (x, y), and α is a scaling parameter (also called the alpha parameter, or scaling factor). The scaling parameter α is calculated at the encoder side and signaled. At the decoder side, the final chroma reconstructed residual sample is derived according to:
rC(x, y) = rC′(x, y) + α × rL(x, y)                       (2)
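Equations (1) and (2) can be sketched numerically as below. This is a minimal illustration only: it ignores the transform, quantization, and the fixed-point form in which α is actually signaled in HEVC, and the array values are arbitrary.

```python
import numpy as np

def encode_chroma_residual(r_c, r_l, alpha):
    """Equation (1): the chroma residual actually coded in the bit-stream."""
    return r_c - alpha * r_l

def decode_chroma_residual(r_c_coded, r_l, alpha):
    """Equation (2): the final reconstructed chroma residual at the decoder."""
    return r_c_coded + alpha * r_l

# Round trip on toy residual blocks (lossless here; in practice the coded
# residual is quantized between equations (1) and (2)).
r_l = np.array([[4.0, -2.0], [0.0, 6.0]])   # reconstructed luma residual
r_c = np.array([[2.0, -1.0], [1.0, 3.0]])   # original chroma residual
alpha = 0.5
coded = encode_chroma_residual(r_c, r_l, alpha)
rec = decode_chroma_residual(coded, r_l, alpha)
assert np.allclose(rec, r_c)
```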
 While the YUV format is used as an example to illustrate inter-component residual prediction derivation, any other color format may be used. For example, the RGB format may be used. If the R component is encoded first, the R component is treated the same way as the luma component in the above example. Similarly, if the G component is encoded first, the G component is treated the same way as the luma component.
 An exemplary decoding process of IRP in the current HEVC-RExt is illustrated in Fig. 1 for transform units (TUs) of the current coding unit (CU). The decoded coefficients of all TUs of a current CU are provided to processors for the respective components. For the first component (e.g., the Y component), the decoded transform coefficients are inverse transformed (block 110) to recover the Intra/Inter coded residual of the first color component. The Inter/Intra coded first color component is then processed by First Component Inter/Intra Compensation 120 to produce the final reconstructed first component. The needed Inter/Intra reference samples for First Component Inter/Intra Compensation 120 are provided from buffers or memories. Fig. 1 implies that the first color component is Inter/Intra coded, so that Inter/Intra compensation is used to reconstruct the first component from the reconstructed residual. However, other coding processes (e.g., inter-view prediction) may also be included to generate the first component residual. For the second color component, the decoded transform coefficients are decoded using the second component decoding process (block 112) to recover the inter-component coded second component. Since the second component is inter-component residual predicted based on the first component residual, Inter-component Prediction for Second Component (block 122) is used to reconstruct the second component residual based on the outputs from block 110 and block 112. As mentioned before, inter-component residual prediction requires the coded scaling parameter. Therefore, the decoded alpha parameter between the first color component and the second color component is provided to block 122. The output from block 122 corresponds to the Inter/Intra prediction residual of the second component. Therefore, Second Component Inter/Intra Compensation (block 132) is used to reconstruct the final second component. Similar to the first color component, other coding processes (e.g., inter-view prediction) may also be included in the coding/prediction process to generate the second color residual. For the third component, similar processing can be used (i.e., blocks 114, 124 and 134) to reconstruct the final third component. From the decoding process, the encoding process can be easily derived.
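The three-component decoding order of Fig. 1 can be summarized with a short structural sketch. The function and parameter names here are hypothetical stand-ins for blocks 110-134, and real decoding also involves inverse transform, clipping, and per-TU processing that are omitted.

```python
def reconstruct_cu(residuals, alphas, compensate):
    """Sketch of the Fig. 1 flow: the first component's residual is used
    directly (blocks 110/120); each remaining component adds
    alpha * first-component residual before its own Inter/Intra
    compensation (blocks 112/122/132 and 114/124/134)."""
    r_first = residuals[0]
    out = [compensate(0, r_first)]
    for k in (1, 2):
        # inter-component residual prediction, equation (2)
        r_k = residuals[k] + alphas[k] * r_first
        out.append(compensate(k, r_k))
    return out

# Toy usage: compensation reduced to "prediction sample + residual".
preds = [10.0, 20.0, 30.0]
out = reconstruct_cu([4.0, 1.0, -1.0], [0.0, 0.5, 0.25],
                     lambda k, r: preds[k] + r)
assert out == [14.0, 23.0, 30.0]
```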
 There are several points to be noted in the above method. (1) The alpha parameters can be either transmitted in the video stream or derived from samples, including the reconstructed neighboring samples and predicted samples; the HEVC design selects the former, without parameter derivation. (2) HEVC adopts this method only for 4:4:4 format videos. However, the method can also achieve bit-savings on 4:2:0 videos; when it is extended to 4:2:0 videos, how to determine the correspondence between the luma and the current chroma samples, derive parameters, and obtain predictors must be designed. (3) Although this method conducts residual prediction, including parameter derivation and predictor generation, at the TU level, a CU- or PU-based design is also effective due to the smaller overhead produced by signaling of this mode and parameter transmission. (4) This method only utilizes reconstructed luma residuals to predict the current chroma residuals, but it is also feasible to utilize reconstructed non-first chroma residuals to predict the current chroma residuals.
BRIEF SUMMARY OF THE INVENTION
 It is proposed to enhance inter-component residual prediction by several feasible improvements, including:
 (1) Not only an alpha parameter, but also a beta (also called offset, or β) parameter can be utilized to derive the predictor by rC(x, y) = rC′(x, y) + α×rL(x, y) + β.
 (2) Derive the required parameters from the reconstructed neighboring samples, the reconstructed residuals of the neighboring samples, or the luma-and-chroma predicted pixels.
 (3) Apply the method to non-4:4:4 videos; for parameter derivation and predictor generation, the luma component is down-sampled to have the same resolution as the chroma components.
 (4) Transmit/derive parameters and generate the predictor for the current chroma block at the PU or CU level.
 Other aspects and features of the invention will become apparent to those with ordinary skill in the art upon review of the following descriptions of specific embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
 The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
 Fig. 1 is a diagram illustrating generalized inter-component residual prediction procedures for HEVC.
 Fig. 2(a)-Fig. 2(c) are diagrams illustrating down-sampling methods used for non-4:4:4 format inter-component residual prediction.
DETAILED DESCRIPTION OF THE INVENTION
 The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
 Improvements are proposed for the inter-component residual prediction, including at least one of the following improvements.
 In the first improvement, not only an alpha parameter is required, but also a beta (also called offset, or β) parameter can be utilized to derive the predictor by rC(x, y) = rC′(x, y) + α×r(x, y) + β, where r(x, y) includes, but is not limited to, rL(x, y); r(x, y) can also be the reconstructed residual block of a chroma component.
 In a first embodiment of this improvement, the parameter beta can be transmitted in the bit stream when inter-component residual prediction is utilized.
 In a second embodiment of this improvement, the parameter beta can be transmitted in the video stream by some additional flags, on the condition that inter-component residual prediction is utilized.
 In the second improvement, the required parameters are derived from the reconstructed neighboring samples, the reconstructed residuals of the neighboring samples, or the predicted samples of the current block.
 In a first embodiment of this improvement, the parameters can be derived at the decoder from the reconstructed neighboring samples or the reconstructed residuals of the neighboring samples of the current block by (alpha, beta) = f(RNL, RNCb), (alpha, beta) = f(RNL, RNCr), or (alpha, beta) = f(RNCb, RNCr), where RNL can be the reconstructed luma neighboring samples or the reconstructed residuals of the luma neighboring samples, RNCb denotes the reconstructed first-chroma-component neighboring samples or the reconstructed residuals of the first-chroma-component neighboring samples, and RNCr is the reconstructed second-chroma-component neighboring samples or the reconstructed residuals of the second-chroma-component neighboring samples. Fig. 2(a) presents this parameter derivation example.
 In a second embodiment of this improvement, the parameters can be derived at the decoder from the predicted pixels of the current block by (alpha, beta) = f(RPL, RPCb), (alpha, beta) = f(RPL, RPCr), or (alpha, beta) = f(RPCb, RPCr), where RPL denotes the luma predicted samples, RPCb the first-chroma-component predicted samples, and RPCr the second-chroma-component predicted samples of the current block. Fig. 2(b) presents this parameter derivation example.
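As one concrete, hypothetical instance of the derivation function f, the pair (alpha, beta) can be obtained by a linear least-squares fit of the current-component neighboring samples against the reference-component neighboring samples; least squares is among the techniques named in claim 13. The function and variable names below are illustrative, not taken from the specification.

```python
import numpy as np

def derive_alpha_beta(neigh_ref, neigh_cur):
    """Fit neigh_cur ≈ alpha * neigh_ref + beta over the neighboring
    samples by linear least squares, one possible form of f(·, ·)."""
    a_mat = np.stack([neigh_ref, np.ones_like(neigh_ref)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(a_mat, neigh_cur, rcond=None)
    return alpha, beta

# Neighbors that exactly follow cur = 0.5 * ref + 2, so the fit recovers
# alpha = 0.5 and beta = 2.
ref = np.array([0.0, 2.0, 4.0, 6.0])
cur = 0.5 * ref + 2.0
a, b = derive_alpha_beta(ref, cur)
assert abs(a - 0.5) < 1e-9 and abs(b - 2.0) < 1e-9
```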
 In the third improvement, the method is applied to non-4:4:4 videos. The luma component is down-sampled to have the same resolution as the chroma components, for parameter derivation and predictor generation.
 In a first embodiment of this improvement, in the parameter derivation process, for cases when there are N luma samples but only M (M < N) corresponding chroma samples, one down-sampling operation is conducted to select or generate M luma samples for the parameter derivation.
 In a second embodiment of this improvement, in the parameter derivation process, for cases when there are in total N luma neighboring samples representing the reference block but only M (M < N) corresponding chroma neighboring samples representing the current chroma block, one down-sampling operation should be conducted to select or generate M luma pixels for the parameter derivation.
 In a third embodiment of this improvement, in the parameter derivation process, for cases when down-sampling N luma neighboring samples to generate M samples, typically with M equal to N/2, the average values of every two luma neighboring samples are selected. The example is shown in Fig. 2(a).
 In another embodiment of this improvement, in the parameter derivation process, for cases when down-sampling N predicted luma samples to generate M samples, typically when M is equal to N/2, the average values of every two vertically neighboring luma samples are selected.
 In another embodiment of this improvement, in the parameter derivation process, for cases when down-sampling the N predicted luma samples to generate M samples, typically when M is equal to N/4, the average values of the left-up and left-down samples of every four clustered samples are selected. The example is shown in Fig. 2(b).
 In another embodiment of this improvement, in the predictor generation process, for cases when there are in total N luma samples for the reference block but only M (M < N) to-be-generated predicted samples for the current chroma block, one down-sampling operation should be conducted to select or generate M luma samples for the predictor generation.
 In another embodiment of this improvement, in the predictor generation process, for cases when down-sampling N predicted luma samples to generate M samples, typically when M is equal to N/4, the vertical average values of the left-up and left-down samples of every four clustered samples are selected. The example is shown in Fig. 2(b).
 In another embodiment of this improvement, for either the parameter derivation or the predictor generation process, while down-sampling N luma samples to generate M samples, methods including but not limited to 4-point average, corner-point selection and horizontal average, as shown in Fig. 2(c), can be utilized.
 In another embodiment of this improvement, for the parameter derivation and predictor generation processes, the utilized down-sampling algorithms are the same, i.e., the same averaging or point-selection method.
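The down-sampling variants mentioned above (two-sample vertical average, 4-point average, corner-point selection) can be sketched as below for a factor-of-two reduction in each direction, as in 4:2:0. These are illustrative implementations, not the normative filters.

```python
import numpy as np

def downsample_vertical_pair_average(luma):
    """Average every two vertically neighboring luma rows (N -> N/2)."""
    return 0.5 * (luma[0::2, :] + luma[1::2, :])

def downsample_four_point_average(luma):
    """Average each 2x2 cluster of luma samples (N -> N/4)."""
    return 0.25 * (luma[0::2, 0::2] + luma[1::2, 0::2] +
                   luma[0::2, 1::2] + luma[1::2, 1::2])

def downsample_corner_point(luma):
    """Corner-point selection: keep the top-left sample of each 2x2 cluster."""
    return luma[0::2, 0::2]

# Toy 4x4 luma block: rows [0..3], [4..7], [8..11], [12..15].
y = np.arange(16, dtype=float).reshape(4, 4)
assert downsample_vertical_pair_average(y).shape == (2, 4)
assert downsample_four_point_average(y)[0, 0] == 2.5  # (0+1+4+5)/4
assert downsample_corner_point(y)[0, 0] == 0.0
```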
 In the fourth improvement, parameters and predictors are calculated for the current chroma block at the PU or CU level.
 In a first embodiment of this improvement, the inter-component prediction mode signaling flag is transmitted at the PU or CU level.
 In a second embodiment of this improvement, the utilized parameters are transmitted at the PU or CU level.
 In a third embodiment of this improvement, the residual compensation process for the equation rC(x, y) = rC′(x, y) + α×r(x, y) + β is conducted for all (x, y) positions once per PU or CU.
 In another embodiment of this improvement, the residual prediction for an intra CU is still conducted at the TU level, but is conducted at the CU or PU level for an inter CU.
 In another embodiment of this improvement, the residual prediction for an intra CU is still conducted at the TU level, but is conducted at the CU or PU level for an Intra Block Copy CU.
 In another embodiment of this improvement, the parameter transmission for an intra CU is still conducted at the TU level, but is conducted at the CU or PU level for an inter or Intra Block Copy CU.
 In another embodiment of this improvement, the mode flag signaling for an intra CU is still conducted at the TU level, but is conducted at the CU or PU level for an inter or Intra Block Copy CU.
 The proposed method described above can be used in a video encoder as well as in a video decoder. Embodiments of the method according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program codes integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software code, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.
 The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art) . Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims (29)

  1. Improvements are proposed for inter-component residual prediction, including adding parameters utilized for prediction, parameters being derived from reconstructed samples, a down-sampling design when applied to non-4:4:4 videos, and CU- or PU-level inter-component residual prediction.
  2. The method as claimed in claim 1, wherein for adding parameters utilized for prediction, not only the alpha parameter but also one beta (also called offset, or β) parameter can be utilized to derive the predictor by rC(x, y) = rC′(x, y) + α×r(x, y) + β.
  3. The method as claimed in claim 1, wherein for derived-parameter-based prediction, the required parameters are derived from the reconstructed neighboring samples or the predicted samples of the current block.
  4. The method as claimed in claim 1, wherein when applying inter-component residual prediction to non-4:4:4 videos, the luma component is down-sampled to have the same resolution as the chroma components for parameter derivation and predictor generation.
  5. The method as claimed in claim 1, wherein for CU- or PU-level prediction, parameters and predictors can be calculated for the current chroma block at the PU or CU level.
  6. The method as claimed in claim 2, wherein the parameter beta can be transmitted in the video stream by some additional flags, on the condition that inter-component residual prediction is utilized.
  7. The method as claimed in claim 3, wherein the required parameters are derived from the reconstructed neighboring samples of the current block by a function with reconstructed luma neighboring samples and reconstructed chroma neighboring samples as input.
  8. The method as claimed in claim 3, wherein the parameters can be derived from the reconstructed residuals of the neighboring samples of the current block by a function with the reconstructed luma neighboring residuals and reconstructed chroma neighboring residuals as input.
  9. The method as claimed in claim 3, wherein the parameters can be derived from the predicted samples of the current block by a function with the luma predicted samples and the chroma predicted samples as input.
  10. The method as claimed in claim 3, wherein the required parameters are  derived from the reconstructed neighboring samples of the current second-chroma-component block by a function with the reconstructed first-chroma-component neighboring samples and reconstructed second-chroma-component neighboring samples as input.
  11. The method as claimed in claim 3, wherein the parameters can be derived from the reconstructed residuals of the neighboring samples of the current second-chroma-component block by a function with the reconstructed first-chroma-component neighboring residuals and reconstructed second-chroma-component neighboring residuals as input.
  12. The method as claimed in claim 3, wherein the parameters can be derived from the predicted samples of the current second-chroma-component block by a function with the first-chroma-component predicted samples and the second-chroma-component predicted samples as input.
  13. The method as claimed in claim 3, wherein the parameters alpha and beta can be derived by well-known functions or techniques including but not limited to linear least squares, non-linear least squares, and weighted least squares.
  14. The method as claimed in claim 4, wherein, in the parameter derivation process, when there are a total of N luma neighboring samples representing the reference block but only M (M<N) corresponding chroma neighboring samples representing the current chroma block, a down-sampling operation is conducted to select or generate M luma samples for the parameter derivation.
  15. The method as claimed in claim 4, wherein, in the parameter derivation process, when there are a total of N luma neighboring samples representing the reference block but only M (M<N) corresponding chroma neighboring samples representing the current chroma block, a down-sampling operation is conducted to select or generate M luma samples for the parameter derivation.
  16. The method as claimed in claim 4, wherein, in the parameter derivation process, when down-sampling the N reconstructed luma neighboring samples to generate M samples, with M typically equal to N/2, the average values of every two luma neighboring samples are selected.
  17. The method as claimed in claim 4, wherein, in the parameter derivation process, when down-sampling the N predicted luma samples to generate M samples, typically with M equal to N/2, the average values of every two vertically neighboring luma samples are selected.
  18. The method as claimed in claim 4, wherein, in the parameter derivation process, when down-sampling the N predicted luma samples to generate M samples, typically with M equal to N/4, the average values of the left-up and left-down samples of every four clustered samples are selected.
  19. The method as claimed in claim 4, wherein, in the predictor generation process, when there are a total of N luma samples for the reference block but only M (M<N) predicted samples to be generated for the current chroma block, a down-sampling operation is conducted to select or generate M luma samples for the predictor generation.
  20. The method as claimed in claim 4, wherein, in the predictor generation process, when down-sampling N predicted luma samples to generate M samples, typically with M equal to N/4, the vertical average values of the left-up and left-down samples of every four clustered samples are selected.
  21. The method as claimed in claim 4, wherein, for either the parameter derivation or the predictor generation process, when down-sampling N luma samples to generate M samples, methods including but not limited to 4-point averaging, corner-point selection, and horizontal averaging can be utilized.
  22. The method as claimed in claim 4, wherein, for the parameter derivation and predictor generation processes, the down-sampling algorithms are the same, i.e., the same averaging or point-selection method is used.
  23. The method as claimed in claim 5, wherein the inter-component prediction mode signaling flag is transmitted in PU or CU level.
  24. The method as claimed in claim 5, wherein the alpha or beta parameters are transmitted in PU or CU level.
  25. The method as claimed in claim 5, wherein the residual compensation process according to the equation rC (x, y) = rC' (x, y) + (α×rL (x, y) + β) is conducted for all (x, y) positions once per PU or CU.
  26. The method as claimed in claim 5, wherein the residual prediction for an intra CU is still conducted at the TU level, but is conducted at the CU or PU level for an Inter CU.
  27. The method as claimed in claim 5, wherein the residual prediction for an intra CU is still conducted at the TU level, but is conducted at the CU or PU level for an Intra Block Copy CU.
  28. The method as claimed in claim 5, wherein the parameter transmission for an intra CU is still conducted at the TU level, but is conducted at the CU or PU level for an Inter or Intra Block Copy CU.
  29. The method as claimed in claim 5, wherein the inter-component residual prediction mode flag signaling for an intra CU is still conducted at the TU level, and is conducted at the CU or PU level for an Inter or Intra Block Copy CU.
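As an illustrative aid (not part of the claims and not the patented implementation), the linear least-squares parameter derivation of claim 13, the pairwise and 4-point down-sampling of claims 16 and 21, and the residual compensation equation of claim 25 can be sketched in Python; all function names are hypothetical:

```python
import numpy as np

def derive_alpha_beta(luma_neighbors, chroma_neighbors):
    # Fit chroma ≈ alpha * luma + beta by ordinary linear least squares
    # over the reconstructed neighboring samples (cf. claim 13).
    A = np.vstack([luma_neighbors, np.ones_like(luma_neighbors)]).T
    alpha, beta = np.linalg.lstsq(A, chroma_neighbors, rcond=None)[0]
    return alpha, beta

def downsample_pairwise(samples):
    # Average every two neighboring luma samples: N -> N/2 (cf. claim 16).
    s = np.asarray(samples, dtype=float)
    return (s[0::2] + s[1::2]) / 2.0

def downsample_4point(block):
    # Average each 2x2 cluster of luma samples ("4-point" averaging,
    # cf. claim 21): (N x N) -> (N/2 x N/2).
    b = np.asarray(block, dtype=float)
    return (b[0::2, 0::2] + b[0::2, 1::2] + b[1::2, 0::2] + b[1::2, 1::2]) / 4.0

def compensate_residual(rc_coded, rl, alpha, beta):
    # rC(x, y) = rC'(x, y) + (alpha * rL(x, y) + beta) for all positions
    # of a PU or CU (cf. claim 25); rc_coded is the coded chroma residual
    # and rl the (down-sampled) reconstructed luma residual.
    return rc_coded + alpha * rl + beta
```

In a 4:2:0 decoder the luma residual block would first be down-sampled to the chroma resolution, α and β would be derived from already-reconstructed neighbors (so no extra signaling is needed), and the compensation would then be applied once per PU or CU.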
PCT/CN2015/071440 2014-10-28 2015-01-23 Improvements for inter-component residual prediction WO2016115733A1 (en)

Priority Applications (9)

Application Number Priority Date Filing Date Title
PCT/CN2015/071440 WO2016115733A1 (en) 2015-01-23 2015-01-23 Improvements for inter-component residual prediction
PCT/CN2015/092168 WO2016066028A1 (en) 2014-10-28 2015-10-19 Method of guided cross-component prediction for video coding
SG11201703014RA SG11201703014RA (en) 2014-10-28 2015-10-19 Method of guided cross-component prediction for video coding
US15/519,181 US20170244975A1 (en) 2014-10-28 2015-10-19 Method of Guided Cross-Component Prediction for Video Coding
EP15855903.9A EP3198874A4 (en) 2014-10-28 2015-10-19 Method of guided cross-component prediction for video coding
KR1020177013692A KR20170071594A (en) 2014-10-28 2015-10-19 Method of guided cross-component prediction for video coding
CA2964324A CA2964324C (en) 2014-10-28 2015-10-19 Method of guided cross-component prediction for video coding
CN201580058756.4A CN107079166A (en) 2014-10-28 2015-10-19 The method that guided crossover component for Video coding is predicted
KR1020207012648A KR20200051831A (en) 2014-10-28 2015-10-19 Method of guided cross-component prediction for video coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/071440 WO2016115733A1 (en) 2015-01-23 2015-01-23 Improvements for inter-component residual prediction

Publications (1)

Publication Number Publication Date
WO2016115733A1 true WO2016115733A1 (en) 2016-07-28

Family

ID=56416313

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/071440 WO2016115733A1 (en) 2014-10-28 2015-01-23 Improvements for inter-component residual prediction

Country Status (1)

Country Link
WO (1) WO2016115733A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2567249A (en) * 2017-10-09 2019-04-10 Canon Kk New sample sets and new down-sampling schemes for linear component sample prediction

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050013370A1 (en) * 2003-07-16 2005-01-20 Samsung Electronics Co., Ltd. Lossless image encoding/decoding method and apparatus using inter-color plane prediction
US20080304759A1 (en) * 2007-06-11 2008-12-11 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding image by using inter color compensation
WO2012160797A1 (en) * 2011-05-20 2012-11-29 Panasonic Corporation Methods and apparatuses for encoding and decoding video using inter-color-plane prediction
WO2014190171A1 (en) * 2013-05-22 2014-11-27 Qualcomm Incorporated Video coding using sample prediction among color components
WO2015009732A1 (en) * 2013-07-15 2015-01-22 Qualcomm Incorporated Inter-color component residual prediction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MINEZAWA, AKIRA ET AL.: "SCCE5 3.1.1: Extended inter-component prediction (JCTVC-Q0036)", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 9 July 2014 (2014-07-09) *

Similar Documents

Publication Publication Date Title
CA2964324C (en) Method of guided cross-component prediction for video coding
US10812806B2 (en) Method and apparatus of localized luma prediction mode inheritance for chroma prediction in video coding
US10750169B2 (en) Method and apparatus for intra chroma coding in image and video coding
US10554979B2 (en) Methods of handling escape pixel as a predictor in index map coding
AU2019202043B2 (en) Method and apparatus for palette coding of monochrome contents in video and image compression
US10477214B2 (en) Method and apparatus for scaling parameter coding for inter-component residual prediction
EP3058740B1 (en) Features of base color index map mode for video and image coding and decoding
US10057580B2 (en) Method and apparatus for entropy coding of source samples with large alphabet
US20190045184A1 (en) Method and apparatus of advanced intra prediction for chroma components in video coding
WO2016115981A1 (en) Method of video coding for chroma components
GB2567249A (en) New sample sets and new down-sampling schemes for linear component sample prediction
US20160286217A1 (en) Method of Run-Length Coding for Palette Predictor
WO2015176685A1 (en) Methods for palette size signaling and conditional palette escape flag signaling
US10652555B2 (en) Method and apparatus of palette index map coding for screen content coding
JP2017512026A (en) Block inversion and skip mode in intra block copy prediction
WO2017008679A1 (en) Method and apparatus of advanced intra prediction for chroma components in video and image coding
CA2950818A1 (en) Method and apparatus of binarization and context-adaptive coding for syntax in video coding
CN114786019B (en) Image prediction method, encoder, decoder, and storage medium
US20240107044A1 (en) Moving picture decoding device, moving picture decoding method, and program obtaining chrominance values from corresponding luminance values
WO2016115728A1 (en) Improved escape value coding methods
US10244258B2 (en) Method of segmental prediction for depth and texture data in 3D and multi-view coding systems
WO2016146076A1 (en) Method and apparatus for index map coding in video and image compression
WO2016115733A1 (en) Improvements for inter-component residual prediction
US10110892B2 (en) Video encoding device, video decoding device, video encoding method, video decoding method, and program
WO2016044974A1 (en) Palette table signalling

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15878410

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15878410

Country of ref document: EP

Kind code of ref document: A1