CN117979022A - Contour-based inter-frame prediction method, system, device and storage medium - Google Patents
- Publication number
- CN117979022A (application CN202410259982.XA)
- Authority
- CN
- China
- Prior art keywords
- contour
- block
- decoding
- reference block
- current coding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention discloses a contour-based inter-frame prediction method, system, device and storage medium, which form a matched encoder/decoder scheme. The scheme mainly comprises: selecting a reference block with a contour and encoding the corresponding motion vector; when the contour of the current coding block coincides with the contour of the reference block, encoding a flag bit indicating whether the next contour direction of the current coding block is the same as that of the reference block; at the decoder, decoding the motion vector to obtain the corresponding reference block; and, when the contour of the current coding block coincides with the contour of the reference block, decoding a flag bit indicating whether the next contour direction is the same as that of the reference block. The scheme can effectively reduce the temporal redundancy of video coding; experiments show that, compared with the existing scheme, the proposed scheme reduces encoding and decoding time and saves video bit rate.
Description
Technical Field
The present invention relates to the field of video coding technologies, and in particular, to a contour-based inter-frame prediction method, system, device, and storage medium.
Background
Inter-frame prediction is one of the most important techniques in video coding; it removes temporal redundancy between adjacent video frames. In almost all contemporary video coding standards, such as H.265 (High Efficiency Video Coding) and H.266 (Versatile Video Coding), block-based motion estimation and motion compensation are widely used for inter-frame prediction. However, for scenes in which object contours change, a motion model is difficult to fit, and such content can currently be handled only by residual coding. Contours have strong temporal correlation that current residual coding cannot fully exploit, so coding performance is poor when the target contour changes.
In view of this, the present invention has been made.
Disclosure of Invention
The invention aims to provide a contour-based inter-frame prediction method, system, device and storage medium, which can reduce the temporal redundancy of video coding and improve coding performance.
The aim of the invention is achieved through the following technical scheme:
An inter prediction method based on a contour, comprising:
contour-based coding section: selecting a reference block with a contour, and encoding a corresponding motion vector; coding the contour initial point position of the current coding block; coding the contour information of the current coding block by combining the position of the initial point of the coded contour and the contour direction of the reference block; coding the color value of the current coding block; combining the coded motion vector, the contour initial point position, the contour information of the current coding block and the color value to form a code stream;
Contour-based decoding section: decoding a motion vector in the code stream to obtain the corresponding reference block; decoding the contour initial point position in the code stream to locate the contour; decoding contour information in the code stream according to the decoded reference block and the located contour; decoding color values in the code stream; and combining the decoded contour information with the color values to reconstruct the image block.
An inter-frame prediction system based on contours, comprising:
The contour-based coding module is used for selecting a reference block with a contour and coding a corresponding motion vector; coding the contour initial point position of the current coding block; coding the contour information of the current coding block by combining the position of the initial point of the coded contour and the contour direction of the reference block; coding the color value of the current coding block; combining the coded motion vector, the contour initial point position, the contour information of the current coding block and the color value to form a code stream;
The contour-based decoding module is used for decoding the motion vector in the code stream to obtain the corresponding reference block; decoding the contour initial point position in the code stream to locate the contour; decoding contour information in the code stream according to the decoded reference block and the located contour; decoding color values in the code stream; and combining the decoded contour information with the color values to reconstruct the image block.
A processing apparatus, comprising: one or more processors; a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the aforementioned methods.
A readable storage medium storing a computer program which, when executed by a processor, implements the method described above.
According to the technical scheme provided by the invention, the inter-frame prediction method based on the contour can effectively reduce the video coding time domain redundancy; experiments show that compared with the existing scheme, the scheme provided by the invention can reduce the encoding and decoding time and save the video code rate.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an inter prediction method based on a contour according to an embodiment of the present invention;
FIG. 2 is a flow chart of an encoding portion provided in an embodiment of the present invention;
fig. 3 is a schematic diagram of a contour-based inter prediction method according to an embodiment of the present invention;
FIG. 4 is a flow chart of a decoding portion provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of an inter prediction system based on a contour according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a processing apparatus according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
The terms that may be used herein will first be described as follows:
The term "and/or" is intended to mean that either or both may be implemented, e.g., X and/or Y are intended to include both the cases of "X" or "Y" and the cases of "X and Y".
The terms "comprises," "comprising," "includes," "including," "has," "having" or other similar referents are to be construed to cover a non-exclusive inclusion. For example: including a particular feature (e.g., a starting material, component, ingredient, carrier, formulation, material, dimension, part, means, mechanism, apparatus, step, procedure, method, reaction condition, processing condition, parameter, algorithm, signal, data, product or article of manufacture, etc.), should be construed as including not only a particular feature but also other features known in the art that are not explicitly recited.
The following describes the contour-based inter-frame prediction method, system, device and storage medium in detail. What is not described in detail in the embodiments of the present invention belongs to the prior art known to those skilled in the art. Where specific conditions are not noted in the examples, they follow conditions conventional in the art or suggested by the manufacturer; apparatus used without a stated manufacturer are conventional commercially available products.
Example 1
An embodiment of the present invention provides an inter prediction method based on a contour, as shown in fig. 1, mainly including a coding portion based on a contour and a decoding portion based on a contour.
As shown in fig. 2, the contour-based coding section mainly includes:
Step 21, selecting a reference block with a contour, and encoding the corresponding motion vector.
In the embodiment of the invention, a coding block eligible for this inter-frame prediction (referred to as the current coding block) must contain exactly one contour, with different color values (pixel values) on the two sides of the contour. A reference block that contains exactly one contour must likewise be selected; when multiple blocks satisfy this condition, the block that is optimal for encoding can be selected as the reference block by RDO (Rate-Distortion Optimization). The code rate R needs to be calculated when RDO is used to select the reference block.
As will be understood by those skilled in the art, RDO selects the reference block with the lowest cost by comparing the value of D + λR across candidate blocks, where D is the distortion and λ is a hyperparameter; when used for lossless coding, D = 0. R is obtained by pre-coding, i.e. performing the encoding without writing a bitstream; the specific calculation can be implemented with reference to conventional techniques and is not detailed in the present invention.
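The RDO selection rule above can be sketched as follows; the candidate representation, the `rate_of` pre-coding rate estimate and the default λ are illustrative assumptions, not specified by the patent:

```python
def select_reference_block(candidates, rate_of, distortion_of=None, lam=1.0):
    """Pick the candidate minimizing the RDO cost D + lambda * R.

    For lossless coding D = 0, so the cost reduces to lambda * R, where R
    comes from a caller-supplied pre-coding function `rate_of` that counts
    bits without writing a bitstream.
    """
    best, best_cost = None, float("inf")
    for block in candidates:
        d = distortion_of(block) if distortion_of else 0.0  # lossless: D = 0
        cost = d + lam * rate_of(block)
        if cost < best_cost:
            best, best_cost = block, cost
    return best

# toy usage: rate = number of chain-code symbols needed for the block's contour
blocks = [{"id": "A", "chain": [0, 0, 1]}, {"id": "B", "chain": [0, 2]}]
best = select_reference_block(blocks, rate_of=lambda b: len(b["chain"]))
```

With D fixed at 0, the block whose contour pre-codes to the fewest symbols wins.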
It is also necessary to encode the motion vector after the reference block is determined. For example, prediction may be performed using only the previous frame, in which case the horizontal and vertical components of the motion vector relative to the previous frame are encoded.
Furthermore, a contour coding method needs to be selected in advance so that the contour information of the current coding block can be coded in the subsequent flow; since the contour information is used to reconstruct the video, various coding modes for it exist. By way of example, a chain coding method (3OT) may be used, with a Markov model as the context model to efficiently code the 3OT chain code. A chain code expresses the contour information by representing the sequence of contour directions.
Step 22, encoding the contour initial point position of the current coding block.
Before the contour information can be represented by contour directions, the contour initial point is needed to locate the contour. The contour initial point is the first point at which the color value changes when traversing the block boundary counterclockwise from the top-left corner vertex.
For example, the contour initial point position may be encoded directly as the number of cells traversed counterclockwise along the coding-block boundary from the top-left corner vertex.
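The cell-counting rule above can be sketched as follows; the coordinate convention (x right, y down, top-left at (0, 0)) and the choice to detect the change relative to the previous cell on the walk are assumptions of this sketch:

```python
def boundary_positions_ccw(w, h):
    """Boundary cells of a w x h block in counterclockwise order, starting
    at the top-left corner (x=0, y=0): down the left edge, right along the
    bottom, up the right edge, then left along the top back toward the start."""
    left = [(0, y) for y in range(h)]
    bottom = [(x, h - 1) for x in range(1, w)]
    right = [(w - 1, y) for y in range(h - 2, -1, -1)]
    top = [(x, 0) for x in range(w - 2, 0, -1)]
    return left + bottom + right + top

def initial_point_index(block):
    """Counterclockwise cell count at which the boundary color first changes;
    this count is the value that would be coded for the contour initial point."""
    h, w = len(block), len(block[0])
    walk = boundary_positions_ccw(w, h)
    for i in range(1, len(walk)):
        (px, py), (x, y) = walk[i - 1], walk[i]
        if block[y][x] != block[py][px]:
            return i
    return None  # no contour touches the boundary

# 4x4 block with one contour: the left edge changes color at the third cell
demo_block = [[0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1]]
idx = initial_point_index(demo_block)
```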
Step 23, coding the contour information of the current coding block by combining the encoded contour initial point position and the contour direction of the reference block.
Fig. 3 is a schematic diagram of the principle of contour-based inter prediction: the arrowed lines in the reference block and the current coding block are contours, and the two different color values in each block are separated by one contour. The contour direction is represented by a 3OT chain code: 0 means the same as the previous direction; 1 means different from the previous direction and different from the previous turning direction (a left or right turn); 2 means different from the previous direction and the same as the previous turning direction.
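The 3OT rule quoted above can be sketched as follows; the direction numbering (0=right, 1=up, 2=left, 3=down) and the `first_turn` convention used before any turn has occurred are assumptions of this sketch, not specified in the text:

```python
def to_3ot(dirs, first_turn=1):
    """Encode 4-connected absolute directions (0=right, 1=up, 2=left, 3=down)
    as 3OT symbols: 0 = same as the previous direction; otherwise 1 if the
    turn differs from the previous turn, 2 if it matches.  `first_turn` is
    the assumed turn reference before the first actual turn."""
    symbols = []
    prev, last_turn = dirs[0], first_turn
    for d in dirs[1:]:
        if d == prev:
            symbols.append(0)  # direction unchanged
        else:
            turn = (d - prev) % 4  # 1 = left turn, 3 = right turn
            symbols.append(2 if turn == last_turn else 1)
            last_turn = turn
        prev = d
    return symbols
```

For example, the direction sequence right, right, up, up, right yields one straight step, a left turn matching the assumed reference, another straight step, and then a right turn that differs from the previous (left) turn.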
In fig. 3, the contour of the reference block is correlated with that of the current coding block. The main process of encoding the contour information of the current coding block is as follows: (1) locate the contour of the current coding block using the encoded contour initial point position; (2) if the contour of the current coding block coincides with the contour of the reference block, encode a flag bit indicating whether the next contour direction of the current coding block is the same as that of the reference block; specifically, when the flag bit indicates the directions are the same, the next contour direction need not be coded, and when they differ, the contour direction is coded directly; (3) if the contour of the current coding block does not coincide with the contour of the reference block, the contour direction is coded directly; (4) when coding the contour information of the current coding block, the contour directions are coded in order starting from the contour initial point.
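The flag-bit scheme above can be sketched as follows; treating the first mismatch as permanent divergence is a simplification of the geometric coincidence test in the text, and the symbol alphabet is whatever chain code is in use:

```python
def encode_contour(cur_dirs, ref_dirs):
    """While the contours still coincide, emit one flag bit per step:
    1 = next direction matches the reference's next direction (no symbol
    coded); 0 = it differs (the symbol itself follows and the contours are
    treated as diverged).  After divergence, directions are coded directly."""
    flags, symbols = [], []
    coincides = True
    for i, d in enumerate(cur_dirs):
        if coincides and i < len(ref_dirs):
            if d == ref_dirs[i]:
                flags.append(1)           # copy from reference, nothing else coded
            else:
                flags.append(0)
                symbols.append(d)         # direction coded directly
                coincides = False         # contours no longer coincide
        else:
            symbols.append(d)             # post-divergence: direct coding
    return flags, symbols
```

In the matching prefix only flag bits are spent, which is where the bit-rate saving over direct chain coding comes from.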
Step 24, encoding the color values of the current coding block.
To reconstruct the current coding block, the two color values on either side of the coded contour are also required. For example, the two color values of the current coding block may be predicted from the color values on the two sides of the reference block, and the residual between each color value of the current coding block and the same-side predicted value is encoded to complete the color-value coding.
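The same-side residual prediction can be sketched as follows; the sample values are illustrative, and a real codec would entropy-code the residuals rather than store them raw:

```python
def encode_color_residuals(cur_colors, ref_colors):
    """Predict the current block's two side colors from the same-side colors
    of the reference block and return only the residuals to be coded."""
    return [c - r for c, r in zip(cur_colors, ref_colors)]

# two sides of the contour: current block (120, 64), reference block (118, 60)
residuals = encode_color_residuals([120, 64], [118, 60])
```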
The encoded motion vector, contour initial point position, contour information of the current coding block, and color values are combined into a code stream and transmitted to the decoding end for subsequent decoding.
It should be noted that the numbering of steps 21 to 24 serves mainly to distinguish the steps and does not prescribe an execution order; those skilled in the art can determine the logical dependencies from the content of the steps. For example, steps 21 and 24 may be performed in either order or concurrently, whereas step 22 logically precedes step 23, i.e. step 22 is executed first and step 23 afterwards. The execution order of the remaining steps is not detailed here and can likewise be determined by those skilled in the art from the content of the steps.
As shown in fig. 4, the contour-based decoding section mainly includes:
Step 41, decoding the motion vector in the code stream to obtain the corresponding reference block.
In the embodiment of the invention, the reference block is obtained by decoding the motion vector in the code stream. Illustratively, the horizontal and vertical components of the motion vector are decoded to obtain a reference block in the previous frame.
Step 42, decoding the contour initial point position in the code stream to locate the contour.
The contour initial point is obtained by decoding the position information in the code stream, and the contour is thereby located. Illustratively, the cell count is decoded, and the contour initial point is found by counting counterclockwise along the coding-block boundary from the top-left corner vertex.
Further, a contour coding method corresponding to the coding portion needs to be selected so as to facilitate decoding of contour information in the code stream in a subsequent process.
Step 43, decoding the contour information in the code stream according to the decoded reference block and the located contour.
The main process is as follows: (1) judge whether the contour of the current coding block coincides with the contour of the reference block, using the decoded reference block and the located contour; (2) if the contours coincide, decode a flag bit from the contour information indicating whether the next contour direction is the same as that of the reference block; specifically, the position of the flag bit to be decoded is located according to the reference block, and if the next contour direction of the current coding block is the same as that of the reference block, the next contour direction need not be decoded, whereas if they differ, the contour direction is decoded directly; (3) if the contours do not coincide, the contour direction is decoded directly; (4) during decoding, the contour directions are decoded in order from the contour initial point, yielding the contour information.
Step 44, decoding the color values in the code stream.
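The flag-bit reading of step 43 can be sketched as the mirror of the encoding rule; knowing the direction count `n` in advance (e.g. from contour closure) and treating the first mismatch as permanent divergence are assumptions of this sketch:

```python
def decode_contour(n, flags, symbols, ref_dirs):
    """While the contours coincide, read one flag bit per step: flag 1
    copies the reference's next direction, flag 0 reads a coded symbol and
    marks divergence.  After divergence, every remaining direction is read
    directly from the symbol stream."""
    dirs, fi, si, coincides = [], 0, 0, True
    for i in range(n):
        if coincides and i < len(ref_dirs):
            flag = flags[fi]
            fi += 1
            if flag == 1:
                dirs.append(ref_dirs[i])   # copied from the reference contour
            else:
                dirs.append(symbols[si])   # coded symbol overrides the reference
                si += 1
                coincides = False
        else:
            dirs.append(symbols[si])       # post-divergence: direct decoding
            si += 1
    return dirs
```

Fed the output of the matching encoder sketch, this reproduces the current block's direction sequence exactly.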
Step 45, combining the decoded contour information with the color values to reconstruct the current coding block.
Illustratively, the two color values of the current coding block are predicted from the color values on the two sides of the reference block, and each decoded residual is added to its predicted value to obtain the color value.
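The color reconstruction is the inverse of the residual coding; the sample values below are illustrative:

```python
def decode_colors(residuals, ref_colors):
    """Add each decoded residual to the same-side prediction taken from the
    reference block to recover the current block's two side colors."""
    return [r + p for r, p in zip(residuals, ref_colors)]

# residuals (2, 4) against reference side colors (118, 60)
colors = decode_colors([2, 4], [118, 60])
```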
Similarly, the serial numbers of the steps 41 to 45 are also used for distinguishing different steps, and do not represent the execution sequence of the steps, which are not described herein in detail, and can be determined by a person skilled in the art according to the specific content of the steps.
The above describes the coding and decoding processes of a single coding block, and the reconstruction work of the whole image and the related video can be completed based on the same processes.
It should be noted that, the specific encoding and decoding modes related to the above schemes may be implemented by referring to conventional technologies, which are not described in detail herein.
The main advantage of the scheme provided by the embodiment of the invention is as follows: because contours in adjacent frames are similar to a certain degree and contour information must be coded in any case, the contour of the current frame can be predicted from the previous frame to reduce the bit rate. The scheme can therefore effectively reduce the temporal redundancy of video coding, reduce encoding and decoding time, and save video bit rate.
Related test experiments were also performed to demonstrate the performance of the present invention.
Test conditions: 1) Test sequences: the first 6 sequences of the validation set of the VSPW semantic segmentation video dataset, at a frame rate of 15 fps. Table 1 gives the details of the test sequences. 2) Evaluation indices: byte count and codec time.
Table 1: detailed information of test sequence
Test sequence | Resolution | Frames | Camera motion
112 | 1280x720 | 136 | Static
127 | 1280x720 | 131 | Slow zoom
231 | 1280x720 | 136 | Slow movement
1296 | 1920x1080 | 45 | Static
1643 | 1920x1080 | 45 | Intense motion
2097 | 1920x1080 | 45 | Intense motion
Table 2 shows the results of the present invention and the SCM-7.0 method on the test sequences; SCM-7.0 is the reference software for the screen content coding extension of the HEVC video coding standard. "Intra" denotes all-intra mode, in which every frame is intra-coded, i.e. only the current frame may be used as reference. LDP is a low-delay mode in which each frame may reference only itself or the previous frame.
Table 2: test results
The test results in Table 2 show that the codec time of the present invention is significantly less than that of SCM-7.0 and that its coding performance is significantly superior; specifically, the invention saves 39.38% bit rate compared with SCM-7.0 in LDP mode.
From the description of the above embodiments, it will be apparent to those skilled in the art that the above embodiments may be implemented in software, or may be implemented by means of software plus a necessary general hardware platform. With such understanding, the technical solutions of the foregoing embodiments may be embodied in a software product, where the software product may be stored in a nonvolatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.), and include several instructions for causing a computer device (may be a personal computer, a server, or a network device, etc.) to perform the methods of the embodiments of the present invention.
Example two
The present invention also provides a contour-based inter prediction system, which is mainly used for implementing the method provided in the foregoing embodiment, as shown in fig. 5, and the system mainly includes:
The contour-based coding module is used for selecting a reference block with a contour and coding a corresponding motion vector; coding the contour initial point position of the current coding block; coding the contour information of the current coding block by combining the position of the initial point of the coded contour and the contour direction of the reference block; coding the color value of the current coding block; combining the coded motion vector, the contour initial point position, the contour information of the current coding block and the color value to form a code stream;
The contour-based decoding module is used for decoding the motion vector in the code stream to obtain the corresponding reference block; decoding the contour initial point position in the code stream to locate the contour; decoding contour information in the code stream according to the decoded reference block and the located contour; decoding color values in the code stream; and combining the decoded contour information with the color values to reconstruct the image block.
Details of the processing involved in the above two modules have been described in the first embodiment, and will not be described again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the system is divided into different functional modules to perform all or part of the functions described above.
Example III
The present invention also provides a processing apparatus, as shown in fig. 6, which mainly includes: one or more processors; a memory for storing one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the methods provided by the foregoing embodiments.
Further, the processing device further comprises at least one input device and at least one output device; in the processing device, the processor, the memory, the input device and the output device are connected through buses.
In the embodiment of the invention, the specific types of the memory, the input device and the output device are not limited; for example:
The input device can be a touch screen, an image acquisition device, a physical key or a mouse and the like;
The output device may be a display terminal;
The memory may be random access memory (Random Access Memory, RAM) or non-volatile memory (non-volatile memory), such as disk memory.
Example IV
The invention also provides a readable storage medium storing a computer program which, when executed by a processor, implements the method provided by the foregoing embodiments.
The readable storage medium according to the embodiment of the present invention may be provided as a computer readable storage medium in the aforementioned processing apparatus, for example, as a memory in the processing apparatus. The readable storage medium may be any of various media capable of storing a program code, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, and an optical disk.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.
Claims (10)
1. A contour-based inter prediction method, comprising:
Contour-based coding section: selecting a reference block having a contour and coding the corresponding motion vector; coding the contour initial point position of the current coding block; coding the contour information of the current coding block by combining the coded contour initial point position with the contour direction of the reference block; coding the color values of the current coding block; and combining the coded motion vector, the contour initial point position, the contour information of the current coding block, and the color values into a code stream;
Contour-based decoding section: decoding the motion vector in the code stream to obtain the corresponding reference block; decoding the contour initial point position in the code stream and locating the contour; decoding the contour information in the code stream according to the decoded reference block and the located contour; decoding the color values in the code stream; and combining the decoded contour information with the color values to reconstruct the image block.
2. The contour-based inter prediction method as defined in claim 1, further comprising:
in the encoding section, a contour encoding method is selected in advance for encoding contour information of a current encoding block;
In the decoding section, a contour encoding method corresponding to the encoding section is used for decoding contour information in the code stream.
3. The contour-based inter prediction method as defined in claim 1, wherein said selecting a reference block having a contour comprises:
Selecting a reference block that has one and only one contour; when a plurality of blocks meet this condition, selecting the block optimal for coding as the reference block through rate-distortion optimization.
4. The contour-based inter prediction method as defined in claim 1, wherein said encoding a contour initial point position of a current encoded block comprises:
the contour initial point is the first point at which the color value changes when traversing counterclockwise from the top-left corner vertex, and the position of the contour initial point is coded in order to locate the contour.
5. The contour-based inter prediction method as defined in claim 1, wherein said encoding contour information of a current encoded block by combining a position of an encoded contour initial point and a contour direction of a reference block comprises:
Locating the contour of the current coding block based on the position of the initial point of the coded contour;
If the contour of the current coding block coincides with the contour of the reference block, a flag bit is coded to indicate whether the next contour direction of the current coding block is the same as the next contour direction of the reference block; if the coded flag bit indicates that the next contour direction of the current coding block is the same as the next contour direction of the reference block, the next contour direction does not need to be coded; if the next contour direction of the current coding block is different from the next contour direction of the reference block, directly coding the contour direction;
if the contour of the current coding block is not coincident with the contour of the reference block, directly coding the contour direction;
when the contour information of the current coding block is coded, the contour directions are coded in order starting from the contour initial point.
6. The contour-based inter prediction method as defined in claim 1, wherein said encoding color values of a current encoded block comprises:
The current coding block has only one contour, and the color values on the two sides of the contour are different; the two color values of the current coding block are predicted using the color values on the two sides of the reference block, and the residual between each color value of the current coding block and the same-side color value of the reference block is coded, completing the coding of the color values of the current coding block.
7. The contour-based inter prediction method as defined in claim 1, wherein said decoding contour information in the bitstream based on the decoded reference block and the located contour comprises:
judging, according to the decoded reference block and the located contour, whether the contour of the current coding block coincides with the contour of the reference block;
if the contour of the current coding block coincides with the contour of the reference block, locating the flag bit to be decoded according to the reference block; if the decoded flag bit indicates that the next contour direction of the current coding block is identical to the next contour direction of the reference block, the next contour direction does not need to be decoded; if the next contour direction of the current coding block is different from the next contour direction of the reference block, directly decoding the contour direction;
If the contour of the current coding block is not coincident with the contour of the reference block, directly decoding the contour direction;
and during decoding, the contour directions are decoded in order starting from the contour initial point, whereby the contour information is obtained by decoding.
8. A contour-based inter-frame prediction system, comprising:
The contour-based coding module is used for selecting a reference block with a contour and coding a corresponding motion vector; coding the contour initial point position of the current coding block; coding the contour information of the current coding block by combining the position of the initial point of the coded contour and the contour direction of the reference block; coding the color value of the current coding block; combining the coded motion vector, the contour initial point position, the contour information of the current coding block and the color value to form a code stream;
The contour-based decoding module is used for decoding the motion vector in the code stream to obtain the corresponding reference block; decoding the contour initial point position in the code stream and locating the contour; decoding the contour information in the code stream according to the decoded reference block and the located contour; decoding the color values in the code stream; and combining the decoded contour information with the color values to reconstruct the image block.
9. A processing apparatus, comprising: one or more processors; a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A readable storage medium storing a computer program, characterized in that the method according to any one of claims 1-7 is implemented when the computer program is executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410259982.XA CN117979022A (en) | 2024-03-07 | 2024-03-07 | Contour-based inter-frame prediction method, system, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117979022A true CN117979022A (en) | 2024-05-03 |
Family
ID=90865850
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||