CN108769695B - Frame type conversion method, system and terminal - Google Patents
Info
- Publication number
- CN108769695B CN108769695B CN201810487647.XA CN201810487647A CN108769695B CN 108769695 B CN108769695 B CN 108769695B CN 201810487647 A CN201810487647 A CN 201810487647A CN 108769695 B CN108769695 B CN 108769695B
- Authority
- CN
- China
- Prior art keywords
- frame
- transcoding
- same scene
- input end
- intra
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/40—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234309—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440218—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention is applicable to the technical field of transcoding, and provides a method, a device and a terminal for converting frame types. When the first input end decoded frame corresponding to the current transcoding frame is an intra-frame prediction frame, whether the current transcoding frame and the pre-transcoding frame belong to the same scene is judged in order to decide whether to convert the frame type of the current transcoding frame. This remedies the unreasonableness of the existing blind setting strategy, reduces the calculation amount of prediction-mode traversal optimization, and thereby improves transcoding accuracy, efficiency and flexibility as well as the user experience.
Description
Technical Field
The invention belongs to the technical field of transcoding, and particularly relates to a frame type conversion method, a frame type conversion system and a frame type conversion terminal.
Background
With the rapid development of the mobile internet, smart phone users urgently want to watch network video anytime and anywhere. However, a smart phone is limited by conditions such as resolution, decoding capability, network bandwidth and battery endurance, which makes it difficult to watch network video directly as on a computer, so video transcoding is required.
When transcoding a video, the method commonly used at present is for the transcoder to set frame types blindly, that is, to set the frame types of the frames to be transcoded uniformly through transcoding parameters in the hope of obtaining a better transcoding effect. However, owing to the diversity of videos and the great differences in inter-frame information redundancy among videos, the blind setting method of the transcoder yields very different compression effects for different videos, and the user experience is poor. Meanwhile, because the blind setting strategy is unreasonable, video transcoding is inflexible and the calculation amount of prediction-mode traversal optimization is increased.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, and a terminal for frame type conversion, so as to solve the problems that the existing blind setting policy is unreasonable, the calculation amount of prediction-mode traversal optimization is large, transcoding efficiency and flexibility are low, and the user experience is poor.
A first aspect of an embodiment of the present invention provides a method for converting a frame type, including:
when a first input end decoding frame corresponding to the current transcoding frame is not an intra-frame prediction frame, judging whether the difference value of the first input end decoding frame and the input end intra-frame prediction decoding frame is greater than a first threshold value; the input end intra-frame prediction decoding frame is the input end intra-frame prediction decoding frame closest to the first input end decoding frame;
when the difference value between the first input end decoded frame and the input end intra-frame prediction decoded frame is larger than a first threshold value, judging whether the ratio of the bit of a second input end decoded frame to the bit of the first input end decoded frame is smaller than a second threshold value;
and when the ratio of the bits of the second input decoded frame to the bits of the first input decoded frame is smaller than a second threshold value, converting the frame type of the current transcoding frame into an intra-frame prediction frame.
Further, when a first input end decoding frame corresponding to the current transcoding frame is an intra-frame prediction frame, judging whether the current transcoding frame and a pre-transcoding frame belong to the same scene; the pre-transcoding frame is a previous transcoding frame corresponding to the playing sequence of the current transcoding frame;
and when the current transcoding frame and the pre-transcoding frame belong to the same scene, converting the frame type of the current transcoding frame into an inter-frame prediction frame.
A second aspect of an embodiment of the present invention provides a device for converting a frame type, including:
the first judgment unit is used for judging whether the difference value between the first input end decoded frame and the input end intra-frame prediction decoded frame is larger than a first threshold value or not when the first input end decoded frame corresponding to the current transcoding frame is not an intra-frame prediction frame; the input end intra-frame prediction decoding frame is the input end intra-frame prediction decoding frame closest to the first input end decoding frame;
a second determination unit, configured to determine whether a ratio of bits of a second input decoded frame to bits of the first input decoded frame is smaller than a second threshold when a difference between the first input decoded frame and the input intra-prediction decoded frame is larger than a first threshold;
and the first conversion unit is used for converting the frame type of the current transcoding frame into an intra-frame prediction frame when the ratio of the bits of the second input end decoding frame to the bits of the first input end decoding frame is smaller than a second threshold value.
Further, the apparatus further comprises:
the third judgment unit is used for judging whether the current transcoding frame and the pre-transcoding frame belong to the same scene or not when the first input end decoding frame corresponding to the current transcoding frame is an intra-frame prediction frame; the pre-transcoding frame is a previous transcoding frame corresponding to the playing sequence of the current transcoding frame;
and the second conversion unit is used for converting the frame type of the current transcoding frame into an inter-frame prediction frame when the current transcoding frame and the pre-transcoding frame belong to the same scene.
A third aspect of an embodiment of the present invention provides a terminal, including:
the present invention also relates to a computer program stored in a memory and executable on a processor, wherein the processor implements the steps of the method for converting frame classes provided in the first aspect of the embodiments of the present invention when executing the computer program.
Wherein the computer program comprises:
the first judgment unit is used for judging whether the difference value between the first input end decoded frame and the input end intra-frame prediction decoded frame is larger than a first threshold value or not when the first input end decoded frame corresponding to the current transcoding frame is not an intra-frame prediction frame; the input end intra-frame prediction decoding frame is the input end intra-frame prediction decoding frame closest to the first input end decoding frame;
a second determination unit, configured to determine whether a ratio of bits of a second input decoded frame to bits of the first input decoded frame is smaller than a second threshold when a difference between the first input decoded frame and the input intra-prediction decoded frame is larger than a first threshold;
and the first conversion unit is used for converting the frame type of the current transcoding frame into an intra-frame prediction frame when the ratio of the bits of the second input end decoding frame to the bits of the first input end decoding frame is smaller than a second threshold value.
Further, the computer program further comprises:
the third judgment unit is used for judging whether the current transcoding frame and the pre-transcoding frame belong to the same scene or not when the first input end decoding frame corresponding to the current transcoding frame is an intra-frame prediction frame; the pre-transcoding frame is a previous transcoding frame corresponding to the playing sequence of the current transcoding frame;
and the second conversion unit is used for converting the frame type of the current transcoding frame into an inter-frame prediction frame when the current transcoding frame and the pre-transcoding frame belong to the same scene.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, where the computer program, when executed by a processor, implements the steps of the method for converting frame categories provided by the first aspect of the embodiments of the present invention.
Wherein the computer program comprises:
the first judgment unit is used for judging whether the difference value between the first input end decoded frame and the input end intra-frame prediction decoded frame is larger than a first threshold value or not when the first input end decoded frame corresponding to the current transcoding frame is not an intra-frame prediction frame; the input end intra-frame prediction decoding frame is the input end intra-frame prediction decoding frame closest to the first input end decoding frame;
a second determination unit, configured to determine whether a ratio of bits of a second input decoded frame to bits of the first input decoded frame is smaller than a second threshold when a difference between the first input decoded frame and the input intra-prediction decoded frame is larger than a first threshold;
and the first conversion unit is used for converting the frame type of the current transcoding frame into an intra-frame prediction frame when the ratio of the bits of the second input end decoding frame to the bits of the first input end decoding frame is smaller than a second threshold value.
Further, the computer program further comprises:
the third judgment unit is used for judging whether the current transcoding frame and the pre-transcoding frame belong to the same scene or not when the first input end decoding frame corresponding to the current transcoding frame is an intra-frame prediction frame; the pre-transcoding frame is a previous transcoding frame corresponding to the playing sequence of the current transcoding frame;
and the second conversion unit is used for converting the frame type of the current transcoding frame into an inter-frame prediction frame when the current transcoding frame and the pre-transcoding frame belong to the same scene.
Compared with the prior art, the embodiment of the invention has the following beneficial effects: in the embodiment of the invention, when the first input end decoded frame corresponding to the current transcoding frame is not an intra-frame prediction frame, the frame type of the current transcoding frame is converted into an intra-frame prediction frame after it is judged that the difference between the first input end decoded frame and the input end intra-frame prediction decoded frame is greater than the first threshold and that the ratio of the bits of the second input end decoded frame to the bits of the first input end decoded frame is smaller than the second threshold. When the first input end decoded frame corresponding to the current transcoding frame is an intra-frame prediction frame, whether the current transcoding frame and the pre-transcoding frame belong to the same scene is judged to determine whether to convert the frame type of the current transcoding frame. This remedies the unreasonableness of the existing blind setting strategy, reduces the calculation amount of prediction-mode traversal optimization, and further improves transcoding accuracy, efficiency and flexibility as well as the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a flowchart of an implementation of a frame class conversion method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating an implementation of a method for determining whether a current transcoded frame and a pre-transcoded frame belong to the same scene according to an embodiment of the present invention;
fig. 3 is a flowchart of a specific implementation of a method for determining whether a current transcoded frame and a pre-transcoded frame belong to the same scene according to an embodiment of the present invention;
fig. 4 is a flowchart of a specific implementation of another method for determining whether a current transcoded frame and a pre-transcoded frame belong to the same scene according to an embodiment of the present invention;
fig. 5 is a flowchart of a specific implementation of a method for determining a same scene determination area according to the same scene initial determination area, provided by the embodiment of the present invention;
fig. 6 is a schematic diagram of a frame class converting apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Referring to fig. 1, fig. 1 shows an implementation flow of a frame class conversion method according to an embodiment of the present invention, which is detailed as follows:
in step S101, when a first input decoded frame corresponding to a currently transcoded frame is not an intra-frame predicted frame, it is determined whether a difference between the first input decoded frame and the input intra-frame predicted decoded frame is greater than a first threshold.
In embodiments of the present invention, frame types include, but are not limited to, intra-frame prediction frames and inter-frame prediction frames; the current transcoding frame is the frame currently to be transcoded; the first input end decoded frame is the input end decoded frame corresponding to the current transcoding frame; the input end intra-frame prediction decoded frame is the input end intra-frame prediction decoded frame closest to the first input end decoded frame, i.e., the closest input end intra-frame prediction decoded frame among the N frames before the current transcoding frame or among the N frames after the current transcoding frame.
The formula for determining whether the difference between the first input end decoded frame and the input end intra-frame prediction decoded frame is greater than the first threshold is specifically:
|poc(frame_dec) - poc(frame_dec_intra)| > Thres_1;
wherein poc() represents the play sequence number of a variable; frame represents the current transcoding frame; frame_dec represents the first input end decoded frame; frame_dec_intra represents the input end intra-frame prediction decoded frame; Thres_1 represents the first threshold. The preferred value of Thres_1 here is 0 < Thres_1 < fps/2, where fps denotes the frame rate.
In step S102, when the difference between the first input decoded frame and the input intra-frame predicted decoded frame is greater than a first threshold, it is determined whether the ratio of the bits of the second input decoded frame to the bits of the first input decoded frame is less than a second threshold.
In the embodiment of the present invention, the second input decoded frame is specifically a previous input decoded frame corresponding to the playing order of the current transcoded frame.
frame_prev_dec represents the second input end decoded frame and Thres_2 denotes the second threshold, the preferred value of Thres_2 being Thres_2 ≤ 1/2. When |poc(frame_dec) - poc(frame_dec_intra)| > Thres_1, the bits corresponding to frame_dec and frame_prev_dec are acquired respectively, denoted bit_dec and bit_prev_dec, and it is then judged whether the ratio of bit_prev_dec to bit_dec is smaller than Thres_2.
Further, when |poc(frame_dec) - poc(frame_dec_intra)| is not greater than Thres_1, the frame type of the current transcoding frame is kept and the current transcoding frame is directly transcoded.
In step S103, when the ratio of the bits of the second input decoded frame to the bits of the first input decoded frame is smaller than a second threshold, the frame type of the currently transcoded frame is converted into an intra-frame predicted frame.
In the embodiment of the invention, when bit_prev_dec/bit_dec < Thres_2, the frame type of the current transcoding frame is converted into an intra-frame prediction frame, and the converted current transcoding frame is transcoded.
Further, when bit_prev_dec/bit_dec is not smaller than Thres_2, the frame type of the current transcoding frame is kept and the current transcoding frame is directly transcoded.
In the embodiment of the invention, when the first input end decoded frame corresponding to the current transcoding frame is not an intra-frame prediction frame, the frame type of the current transcoding frame is converted into an intra-frame prediction frame after it is judged that the difference between the first input end decoded frame and the input end intra-frame prediction decoded frame is greater than the first threshold and that the ratio of the bits of the second input end decoded frame to the bits of the first input end decoded frame is smaller than the second threshold, and the converted current transcoding frame is then transcoded. This remedies the unreasonableness of the existing blind setting strategy, reduces the calculation amount of prediction-mode traversal optimization, improves transcoding efficiency and flexibility, and improves the user experience.
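Purely as an illustration of how steps S101 to S103 fit together, the following Python sketch encodes the two checks with plain numeric inputs; all function and parameter names are assumptions of this sketch, not identifiers from the disclosure.

```python
def decide_frame_type_non_intra(cur_type, poc_dec, poc_intra_dec,
                                bit_dec, bit_prev_dec, fps, thres2=0.5):
    """Sketch of steps S101-S103 for a current transcoding frame whose first
    input-end decoded frame is NOT an intra-frame prediction frame.

    poc_dec / poc_intra_dec: play sequence numbers of the first input-end
    decoded frame and of the nearest input-end intra-prediction decoded frame.
    bit_dec / bit_prev_dec: bits of the first and of the second (previous)
    input-end decoded frames. Names and defaults are illustrative assumptions."""
    thres1 = fps / 4                       # any value with 0 < Thres_1 < fps/2

    # Step S101: distance in playing order to the nearest input-end intra frame
    if abs(poc_dec - poc_intra_dec) <= thres1:
        return cur_type                    # keep the frame type, transcode directly

    # Step S102: bit ratio of the previous input decoded frame to the first one
    if bit_prev_dec / bit_dec < thres2:    # Thres_2 <= 1/2 is preferred
        return "I"                         # Step S103: convert to an intra-frame prediction frame

    return cur_type                        # otherwise keep the original frame type
```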
Furthermore, when the first input end decoded frame corresponding to the current transcoding frame is an intra-frame prediction frame, an acceleration variable is set to decide whether to judge whether the current transcoding frame and the pre-transcoding frame belong to the same scene, and thus whether the frame type of the current transcoding frame needs to be converted, which improves the efficiency and accuracy of transcoding. The pre-transcoding frame is the previous transcoding frame corresponding to the playing sequence of the current transcoding frame.
Here, the acceleration variable is generally set to values of 0 and 1 by the user. The acceleration variable is set to 1 when the user desires a bias towards speed and 0 when the user desires a bias towards accuracy.
Here, it is preferable that when the acceleration variable is 1, the frame type of the currently transcoded frame is maintained and the currently transcoded frame is directly transcoded to improve the efficiency of transcoding.
Further, when the acceleration variable is 0, whether the current transcoding frame and the pre-transcoding frame belong to the same scene is judged to determine whether the frame type of the current transcoding frame needs to be converted, so that the accuracy of transcoding is improved.
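As a minimal sketch of this dispatch, assuming the acceleration variable and a caller-supplied same-scene check (the names and the callable interface are illustrative assumptions):

```python
def decide_frame_type_intra(cur_type, accel, same_scene_check):
    """Sketch: when the first input-end decoded frame is an intra-frame
    prediction frame, accel == 1 favours speed (keep the frame type) and
    accel == 0 favours accuracy (run the same-scene judgment of fig. 2).
    same_scene_check is a zero-argument callable supplied by the caller."""
    if accel == 1:
        return cur_type        # keep the frame type, transcode directly
    if same_scene_check():     # current and pre-transcoding frame share a scene
        return "P"             # convert to an inter-frame prediction frame
    return cur_type            # different scene: keep the intra type
```

For instance, calling this sketch with accel = 0 and a check that returns True would yield an inter-frame prediction frame, mirroring step S202 below.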
As shown in fig. 2, fig. 2 provides a method for determining whether a current transcoded frame and a pre-transcoded frame belong to the same scene to determine whether a frame type of the current transcoded frame needs to be converted, which includes the following specific steps:
in step S201, when the first input decoded frame corresponding to the current transcoded frame is an intra-frame predicted frame, it is determined whether the current transcoded frame and the pre-transcoded frame belong to the same scene.
In the embodiment of the present invention, before step S201, the previous input end decoded frame and the next input end decoded frame corresponding to the playing sequence of the current transcoding frame are found, and the two frames are decoded to obtain the actual images.
In the decoded images of the previous input end decoded frame and the next input end decoded frame corresponding to the playing sequence of the current transcoding frame, basic blocks whose co-located basic blocks in the two images are both predicted in SKIP mode are grouped into the same scene initial judgment area, and the number of basic blocks contained in the same scene initial judgment area is counted.
As shown in fig. 3, fig. 3 provides a method for determining whether a current transcoded frame and a pre-transcoded frame belong to the same scene, which includes the following specific steps:
in step S301, the number of basic blocks included in the initial determination region of the same scene is acquired.
In an embodiment of the invention, the basic block is the largest block that the coding standard allows to partition. The number of basic blocks contained in the initial judgment area of the same scene is 1.
In step S302, it is determined whether a ratio of the number of basic blocks included in the initial determination region of the same scene to the number of basic blocks included in one frame of image is smaller than an initial determination threshold.
In the embodiment of the present invention, the initial judgment threshold is represented by Thres_N, and preferably Thres_N < 0.2.
In step S303, when a ratio of the number of basic blocks included in the initial determination region of the same scene to the number of basic blocks included in one frame of image is smaller than an initial determination threshold, it is determined that the current transcoded frame and the pre-transcoded frame belong to the same scene.
In the embodiment of the invention, whether the current transcoding frame and the pre-transcoding frame belong to the same scene is judged by judging whether the ratio of the number of basic blocks contained in the initial judgment region of the same scene to the number of basic blocks contained in a frame image is smaller than an initial judgment threshold value, and whether the frame type of the current transcoding frame is converted is determined, so that the calculation amount of traversal optimization of a prediction mode is reduced, and the transcoding efficiency is improved.
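For illustration, a sketch of the fig. 3 judgment, assuming the co-located basic-block prediction modes of the two decoded input-end frames are available as simple lists (the representation and names are assumptions of this sketch):

```python
def same_scene_by_skip_ratio(prev_modes, next_modes, thres_n=0.2):
    """Sketch of steps S301-S303. prev_modes / next_modes: prediction modes
    of the co-located basic blocks in the decoded previous and next input-end
    frames (e.g. "SKIP", "INTER", ...). Returns (same_scene, region_indices)."""
    initial_region = [
        n for n, (mp, mn) in enumerate(zip(prev_modes, next_modes))
        if mp == "SKIP" and mn == "SKIP"        # both co-located blocks in SKIP mode
    ]
    blocks_per_frame = len(prev_modes)          # basic blocks in one frame image
    ratio = len(initial_region) / blocks_per_frame
    # Same scene when the ratio is below the initial judgment threshold Thres_N
    return ratio < thres_n, initial_region
```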
Further, another method for determining whether the current transcoded frame and the pre-transcoded frame belong to the same scene as shown in fig. 4 is further provided in the embodiment of the present invention, which includes the following specific steps:
in step S401, when a ratio of the number of basic blocks included in the same scene initial judgment region to the number of basic blocks included in one frame of image is not less than an initial judgment threshold, determining the same scene judgment region according to the same scene initial judgment region;
in step S402, scene statistical variables of the same scene determination region are acquired.
In the embodiment of the present invention, the scene statistical variable of the same scene determination area is specifically calculated by the following formula:
TI_frame = sum(sign(ti_block_n, Thres_3));
ti_block_n = std(Y_prev,n(i, j) - Y_next,n(i, j)), computed over the pixels of the co-located basic blocks block_prev,n_dec and block_next,n_dec;
wherein TI_frame represents the scene statistical variable; sum() represents summing the variables that satisfy the condition; sign() represents a sign function; ti_block_n represents an intermediate variable; Thres_3 represents a third threshold; std() represents averaging the variables that satisfy the condition; frame_prev_dec and frame_next_dec respectively represent the previous input end decoded frame and the next input end decoded frame corresponding to the playing sequence of the current transcoding frame; Y_prev,n(i, j) represents the luminance value in row i and column j of the nth basic block of frame_prev_dec; Y_next,n(i, j) represents the luminance value in row i and column j of the nth basic block of frame_next_dec; block_prev,n_dec indicates the nth basic block in the same scene judgment region of frame_prev_dec; block_next,n_dec indicates the nth basic block in the same scene judgment region of frame_next_dec.
In step S403, it is determined whether a ratio of the scene statistical variable to the number of basic blocks included in the same scene initial determination area is greater than a scene threshold.
In the embodiment of the present invention, the scene threshold is represented by Thres_C, and preferably Thres_C > 0.8.
In step S404, when a ratio of the scene statistical variable to the number of basic blocks included in the same scene initial determination region is greater than a scene threshold, it is determined that the current transcoded frame and a pre-transcoded frame belong to the same scene.
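The fig. 4 computation could be sketched as follows, assuming the formula reconstruction above and per-block luminance arrays as inputs; the counting direction of sign() and the use of a mean absolute difference for std() are assumptions of this sketch:

```python
import numpy as np

def same_scene_by_ti(region_prev_luma, region_next_luma, n_initial_blocks,
                     thres3, thres_c=0.8):
    """Sketch of steps S401-S404. region_prev_luma / region_next_luma:
    per-block luminance arrays of the same-scene judgment region taken from
    the previous / next input-end decoded frames; n_initial_blocks: block
    count of the same-scene initial judgment region."""
    ti_frame = 0
    for y_prev, y_next in zip(region_prev_luma, region_next_luma):
        # std() is described as averaging; interpreted here as the mean
        # absolute luminance difference of the co-located block (assumption)
        ti_block = np.mean(np.abs(y_prev.astype(np.float64) -
                                  y_next.astype(np.float64)))
        if ti_block < thres3:   # sign(): count blocks meeting the condition (direction assumed)
            ti_frame += 1
    # Same scene when TI_frame / n_initial_blocks exceeds the scene threshold Thres_C
    return ti_frame / n_initial_blocks > thres_c
```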
Specifically, an embodiment of the present invention provides a method for determining a same scene determination area according to the same scene initial determination area, as shown in fig. 5, and the specific steps are as follows:
in step S501, it is determined whether the same scene initial determination area is located around the image.
In step S502, when the same scene initial judgment region is located around the image, the inner ring block group located in the middle of the same scene initial judgment region is deleted to form the same scene judgment region.
In step S503, when the same scene initial judgment region is not located around the image, the outer ring block group located outside the same scene initial judgment region is deleted to form the same scene judgment region.
In the embodiment of the invention, whether the same scene initial judgment region is positioned around the image or not is judged to determine to delete the block group in the same scene initial judgment region so as to form the same scene judgment region, so that whether the current transcoding frame and the pre-transcoding frame belong to the same scene or not can be judged more accurately, and the accuracy and the efficiency of transcoding are improved.
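Since the exact shape of the inner and outer block groups is not detailed in this excerpt, the following sketch adopts one plausible interpretation (boundary blocks of the region as the outer ring, the remaining blocks as its middle); it is an assumption, not the disclosed definition:

```python
def refine_same_scene_region(region, width_blocks, height_blocks):
    """Illustrative sketch of steps S501-S503: derive the same-scene judgment
    region from the initial judgment region, given as a set of (row, col)
    basic-block coordinates. Geometry interpretation is an assumption."""
    def neighbors(rc):
        r, c = rc
        return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

    touches_border = any(
        r in (0, height_blocks - 1) or c in (0, width_blocks - 1)
        for r, c in region
    )
    outer_ring = {rc for rc in region
                  if any(nb not in region for nb in neighbors(rc))}
    inner_blocks = region - outer_ring          # blocks in the middle of the region

    if touches_border:
        return region - inner_blocks            # S502: drop the inner block group
    return region - outer_ring                  # S503: drop the outer ring block group
```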
In step S202, when the current transcoded frame and a pre-transcoded frame belong to the same scene, the frame type of the current transcoded frame is converted into an inter-frame prediction frame.
In the embodiment of the invention, after the frame type of the current transcoding frame is converted into an inter-frame prediction frame, the converted current transcoding frame is transcoded.
When the current transcoding frame and the pre-transcoding frame do not belong to the same scene, the frame type of the current transcoding frame is kept, and the current transcoding frame is transcoded.
In the embodiment of the invention, the transcoding accuracy and the transcoding efficiency are further improved by judging whether the current transcoding frame and the pre-transcoding frame belong to the same scene.
In the embodiment of the invention, when the first input end decoded frame corresponding to the current transcoding frame is not an intra-frame prediction frame, the frame type of the current transcoding frame is converted into an intra-frame prediction frame after it is judged that the difference between the first input end decoded frame and the input end intra-frame prediction decoded frame is greater than the first threshold and that the ratio of the bits of the second input end decoded frame to the bits of the first input end decoded frame is smaller than the second threshold. When the first input end decoded frame corresponding to the current transcoding frame is an intra-frame prediction frame, whether the current transcoding frame and the pre-transcoding frame belong to the same scene is judged to determine whether to convert the frame type of the current transcoding frame. This remedies the unreasonableness of the existing blind setting strategy, reduces the calculation amount of prediction-mode traversal optimization, and further improves transcoding accuracy, efficiency and flexibility as well as the user experience.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 6 is a schematic diagram of a frame class conversion apparatus according to an embodiment of the present invention, which corresponds to the frame class conversion method described in the foregoing embodiment, and for convenience of description, only the relevant parts of the frame class conversion apparatus according to the embodiment of the present invention are shown.
Referring to fig. 6, the apparatus includes:
the first judging unit 61 is configured to, when a first input end decoded frame corresponding to a currently transcoded frame is not an intra-frame prediction frame, judge whether a difference between the first input end decoded frame and an input end intra-frame prediction decoded frame is greater than a first threshold; the input end intra-frame prediction decoding frame is the input end intra-frame prediction decoding frame closest to the first input end decoding frame;
a second judging unit 62, configured to judge whether a ratio of bits of a second input decoded frame to bits of the first input decoded frame is smaller than a second threshold when a difference between the first input decoded frame and the input intra-prediction decoded frame is larger than a first threshold;
a first conversion unit 63, configured to convert the frame type of the currently transcoded frame into an intra-frame prediction frame when a ratio of bits of the second input decoded frame to bits of the first input decoded frame is smaller than a second threshold.
Further, the apparatus further comprises:
the third judgment unit is used for judging whether the current transcoding frame and the pre-transcoding frame belong to the same scene or not when the first input end decoding frame corresponding to the current transcoding frame is an intra-frame prediction frame; the pre-transcoding frame is a previous transcoding frame corresponding to the playing sequence of the current transcoding frame;
and the second conversion unit is used for converting the frame type of the current transcoding frame into an inter-frame prediction frame when the current transcoding frame and a pre-transcoding frame belong to the same scene.
Specifically, the third determining unit includes:
a basic block number obtaining subunit, configured to obtain the number of basic blocks included in the same scene initial judgment area;
a third judging subunit, configured to judge whether a ratio of the number of basic blocks included in the initial judging region of the same scene to the number of basic blocks included in one frame of image is smaller than an initial judging threshold;
and the first scene judgment subunit is configured to judge that the current transcoded frame and the pre-transcoded frame belong to the same scene when a ratio of the number of basic blocks included in the same scene initial judgment region to the number of basic blocks included in one frame of image is smaller than an initial judgment threshold.
Further, the third determining unit further includes:
a scene determining subunit, configured to determine, when a ratio of the number of basic blocks included in the same scene initial judgment region to the number of basic blocks included in one frame of image is not less than an initial judgment threshold, the same scene judgment region according to the same scene initial judgment region;
a scene statistic variable acquiring subunit, configured to acquire a scene statistic variable of the same scene determination region;
a fourth judging unit, configured to judge whether a ratio of the scene statistical variable to the number of basic blocks included in the same scene initial judgment region is greater than a scene threshold;
and the second scene judgment subunit is used for judging that the current transcoding frame and the pre-transcoding frame belong to the same scene when the ratio of the scene statistical variable to the number of the basic blocks contained in the same scene initial judgment region is greater than a scene threshold value.
Specifically, the scene determination subunit specifically includes:
the area judgment subunit is used for judging whether the initial judgment area of the same scene is positioned around the image or not;
the first scene determining subunit is configured to delete the inner ring block group located in the middle of the same scene initial judgment region to form the same scene judgment region when the same scene initial judgment region is located around the image;
and the second scene determining subunit is configured to delete the outer ring block group located outside the same scene initial determination region to form the same scene determination region when the same scene initial determination region is not located around the image.
Specifically, the scene statistical variable is a scene statistical variable calculated by a calculation formula, and the calculation formula specifically includes:
TI_frame = sum(sign(ti_block_n, Thres_3));
ti_block_n = std(Y_prev,n(i, j) - Y_next,n(i, j)), computed over the pixels of the co-located basic blocks block_prev,n_dec and block_next,n_dec;
wherein TI_frame represents the scene statistical variable; sum() represents summing the variables that satisfy the condition; sign() represents a sign function; ti_block_n represents an intermediate variable; Thres_3 represents a third threshold; std() represents averaging the variables that satisfy the condition; frame_prev_dec and frame_next_dec respectively represent the previous input end decoded frame and the next input end decoded frame corresponding to the playing sequence of the current transcoding frame; Y_prev,n(i, j) represents the luminance value in row i and column j of the nth basic block of frame_prev_dec; Y_next,n(i, j) represents the luminance value in row i and column j of the nth basic block of frame_next_dec; block_prev,n_dec indicates the nth basic block in the same scene judgment region of frame_prev_dec; block_next,n_dec indicates the nth basic block in the same scene judgment region of frame_next_dec.
In the embodiment of the present invention, when the first input end decoded frame corresponding to the current transcoding frame is not an intra-frame prediction frame, the frame type conversion device converts the frame type of the current transcoding frame into an intra-frame prediction frame after judging that the difference between the first input end decoded frame and the input end intra-frame prediction decoded frame is greater than the first threshold and that the ratio of the bits of the second input end decoded frame to the bits of the first input end decoded frame is smaller than the second threshold. When the first input end decoded frame corresponding to the current transcoding frame is an intra-frame prediction frame, whether the current transcoding frame and the pre-transcoding frame belong to the same scene is judged to determine whether to convert the frame type of the current transcoding frame. This remedies the unreasonableness of the existing blind setting strategy, reduces the calculation amount of prediction-mode traversal optimization, and further improves transcoding accuracy, efficiency and flexibility as well as the user experience.
Fig. 7 is a schematic diagram of a terminal according to an embodiment of the present invention. As shown in fig. 7, the terminal 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72 stored in said memory 71 and executable on said processor 70. The processor 70, when executing the computer program 72, implements the steps in the above-described embodiments of the method for converting frame categories, such as the steps 101 to 103 shown in fig. 1. Alternatively, the processor 70, when executing the computer program 72, implements the functions of the units in the system embodiments, such as the functions of the modules 61 to 63 shown in fig. 6.
Illustratively, the computer program 72 may be divided into one or more units, which are stored in the memory 71 and executed by the processor 70 to accomplish the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 72 in the terminal 7. For example, the computer program 72 may be divided into the first determining unit 61, the second determining unit 62, and the first converting unit 63, and the specific functions of each unit are as follows:
the first judging unit 61 is configured to, when a first input end decoded frame corresponding to a currently transcoded frame is not an intra-frame prediction frame, judge whether a difference between the first input end decoded frame and an input end intra-frame prediction decoded frame is greater than a first threshold; the input end intra-frame prediction decoding frame is the input end intra-frame prediction decoding frame closest to the first input end decoding frame;
a second judging unit 62, configured to judge whether a ratio of bits of a second input decoded frame to bits of the first input decoded frame is smaller than a second threshold when a difference between the first input decoded frame and the input intra-prediction decoded frame is larger than a first threshold;
a first conversion unit 63, configured to convert the frame type of the currently transcoded frame into an intra-frame prediction frame when a ratio of bits of the second input decoded frame to bits of the first input decoded frame is smaller than a second threshold.
Further, the computer program 72 may be further divided into a third determining unit and a second converting unit, and the specific functions of each unit are as follows:
the third judgment unit is used for judging whether the current transcoding frame and the pre-transcoding frame belong to the same scene or not when the first input end decoding frame corresponding to the current transcoding frame is an intra-frame prediction frame; the pre-transcoding frame is a previous transcoding frame corresponding to the playing sequence of the current transcoding frame;
and the second conversion unit is used for converting the frame type of the current transcoding frame into an inter-frame prediction frame when the current transcoding frame and a pre-transcoding frame belong to the same scene.
Specifically, the third determining unit in the computer program 72 may be divided into a basic block number obtaining subunit, a third determining subunit, and a first scenario determining subunit, where the functions of the units are as follows:
a basic block number obtaining subunit, configured to obtain the number of basic blocks included in the same scene initial judgment area;
a third judging subunit, configured to judge whether a ratio of the number of basic blocks included in the initial judging region of the same scene to the number of basic blocks included in one frame of image is smaller than an initial judging threshold;
and the first scene judgment subunit is configured to judge that the current transcoded frame and the pre-transcoded frame belong to the same scene when a ratio of the number of basic blocks included in the same scene initial judgment region to the number of basic blocks included in one frame of image is smaller than an initial judgment threshold.
Further, the third determining unit in the computer program 72 may be further divided into a scene determining subunit, a scene statistical variable obtaining subunit, a fourth determining unit, and a second scene determining subunit, where the specific functions of each unit are as follows:
a scene determining subunit, configured to determine, when a ratio of the number of basic blocks included in the same scene initial judgment region to the number of basic blocks included in one frame of image is not less than an initial judgment threshold, the same scene judgment region according to the same scene initial judgment region;
a scene statistic variable acquiring subunit, configured to acquire a scene statistic variable of the same scene determination region;
a fourth judging unit, configured to judge whether a ratio of the scene statistical variable to the number of basic blocks included in the same scene initial judgment region is greater than a scene threshold;
and the second scene judgment subunit is used for judging that the current transcoding frame and the pre-transcoding frame belong to the same scene when the ratio of the scene statistical variable to the number of the basic blocks contained in the same scene initial judgment region is greater than a scene threshold value.
Specifically, the scene determining subunit in the computer program 72 may be further divided into an area determining subunit, a first scene determining subunit, and a second scene determining subunit, where the functions of the units are as follows:
the area judgment subunit is used for judging whether the initial judgment area of the same scene is positioned around the image or not;
the first scene determining subunit is configured to delete the inner ring block group located in the middle of the same scene initial judgment region to form the same scene judgment region when the same scene initial judgment region is located around the image;
and the second scene determining subunit is configured to delete the outer ring block group located outside the same scene initial determination region to form the same scene determination region when the same scene initial determination region is not located around the image.
Specifically, the scene statistical variable is a scene statistical variable calculated by a calculation formula, and the calculation formula specifically includes:
TI_frame = sum(sign(ti_block_n, Thres_3));
ti_block_n = std(Y_prev,n(i, j) - Y_next,n(i, j)), computed over the pixels of the co-located basic blocks block_prev,n_dec and block_next,n_dec;
wherein TI_frame represents the scene statistical variable; sum() represents summing the variables that satisfy the condition; sign() represents a sign function; ti_block_n represents an intermediate variable; Thres_3 represents a third threshold; std() represents averaging the variables that satisfy the condition; frame_prev_dec and frame_next_dec respectively represent the previous input end decoded frame and the next input end decoded frame corresponding to the playing sequence of the current transcoding frame; Y_prev,n(i, j) represents the luminance value in row i and column j of the nth basic block of frame_prev_dec; Y_next,n(i, j) represents the luminance value in row i and column j of the nth basic block of frame_next_dec; block_prev,n_dec indicates the nth basic block in the same scene judgment region of frame_prev_dec; block_next,n_dec indicates the nth basic block in the same scene judgment region of frame_next_dec.
The terminal 7 may be a desktop computer, a notebook, a palm computer, a smart phone, or other terminal equipment. The terminal 7 may include, but is not limited to, a processor 70, a memory 71. It will be appreciated by those skilled in the art that fig. 7 is only an example of a terminal 7 and does not constitute a limitation of the terminal 7, and that it may comprise more or less components than those shown, or some components may be combined, or different components, for example the terminal may further comprise input output devices, network access devices, buses, etc.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the terminal 7, such as a hard disk or a memory of the terminal 7. The memory 71 may also be an external storage device of the terminal 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) and the like provided on the terminal 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the terminal 7. The memory 71 is used for storing the computer program and other programs and data required by the terminal. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the system is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed system/terminal device and method can be implemented in other ways. For example, the above-described system/terminal device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, systems or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or system capable of carrying said computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, etc. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (10)
1. A frame type conversion method, the method comprising:
when a first input-end decoded frame corresponding to a current transcoding frame is not an intra-frame prediction frame, determining whether the difference between the first input-end decoded frame and an input-end intra-frame prediction decoded frame is greater than a first threshold, wherein the input-end intra-frame prediction decoded frame is the input-end intra-frame prediction decoded frame closest to the first input-end decoded frame;
when the difference between the first input-end decoded frame and the input-end intra-frame prediction decoded frame is greater than the first threshold, determining whether the ratio of the number of bits of a second input-end decoded frame to the number of bits of the first input-end decoded frame is smaller than a second threshold, wherein the second input-end decoded frame is the input-end decoded frame preceding the current transcoding frame in playing order; and
when the ratio of the number of bits of the second input-end decoded frame to the number of bits of the first input-end decoded frame is smaller than the second threshold, converting the frame type of the current transcoding frame into an intra-frame prediction frame.
2. The method of claim 1, wherein the method further comprises:
when the first input-end decoded frame corresponding to the current transcoding frame is an intra-frame prediction frame, determining whether the current transcoding frame and a pre-transcoding frame belong to the same scene, wherein the pre-transcoding frame is the transcoding frame preceding the current transcoding frame in playing order; and
when the current transcoding frame and the pre-transcoding frame belong to the same scene, converting the frame type of the current transcoding frame into an inter-frame prediction frame.
3. The method of claim 2, wherein the step of determining whether the current transcoding frame and the pre-transcoding frame belong to the same scene comprises:
acquiring the number of basic blocks contained in a same-scene initial judgment area;
determining whether the ratio of the number of basic blocks contained in the same-scene initial judgment area to the number of basic blocks contained in one frame of image is smaller than an initial judgment threshold; and
when the ratio of the number of basic blocks contained in the same-scene initial judgment area to the number of basic blocks contained in one frame of image is smaller than the initial judgment threshold, determining that the current transcoding frame and the pre-transcoding frame belong to the same scene.
4. The method of claim 3, wherein the step of determining whether the current transcoding frame and the pre-transcoding frame belong to the same scene further comprises:
when the ratio of the number of basic blocks contained in the same-scene initial judgment area to the number of basic blocks contained in one frame of image is not smaller than the initial judgment threshold, determining a same-scene judgment area according to the same-scene initial judgment area;
acquiring a scene statistical variable of the same-scene judgment area;
determining whether the ratio of the scene statistical variable to the number of basic blocks contained in the same-scene initial judgment area is greater than a scene threshold; and
when the ratio of the scene statistical variable to the number of basic blocks contained in the same-scene initial judgment area is greater than the scene threshold, determining that the current transcoding frame and the pre-transcoding frame belong to the same scene.
5. The method according to claim 4, wherein the step of determining the same-scene judgment area according to the same-scene initial judgment area specifically comprises:
determining whether the same-scene initial judgment area is located at the periphery of the image;
when the same-scene initial judgment area is located at the periphery of the image, deleting the inner-ring block group located in the middle of the same-scene initial judgment area to form the same-scene judgment area; and
when the same-scene initial judgment area is not located at the periphery of the image, deleting the outer-ring block group located at the outer side of the same-scene initial judgment area to form the same-scene judgment area.
6. The method according to claim 4, wherein the scene statistical variable is calculated by the following calculation formula:
TI_frame = sum(sign(ti_block_n, Thres_3));
wherein TI_frame denotes the scene statistical variable; sum() denotes summing over the variables that satisfy the condition; sign() denotes a sign function; ti_block_n denotes an intermediate variable; Thres_3 denotes a third threshold; std() denotes averaging over the variables that satisfy the condition; Y_prev,n(i, j) denotes the luminance value at the i-th row and j-th column of the n-th basic block of the previous input-end decoded frame corresponding to the playing order of the current transcoding frame, and Y_next,n(i, j) denotes the luminance value at the i-th row and j-th column of the n-th basic block of the next input-end decoded frame; block_prev,n^dec and block_next,n^dec denote the n-th basic block of the same-scene judgment area in the previous input-end decoded frame and in the next input-end decoded frame, respectively.
7. A frame type conversion apparatus, the apparatus comprising:
a first judgment unit, configured to determine, when a first input-end decoded frame corresponding to a current transcoding frame is not an intra-frame prediction frame, whether the difference between the first input-end decoded frame and an input-end intra-frame prediction decoded frame is greater than a first threshold, wherein the input-end intra-frame prediction decoded frame is the input-end intra-frame prediction decoded frame closest to the first input-end decoded frame;
a second judgment unit, configured to determine, when the difference between the first input-end decoded frame and the input-end intra-frame prediction decoded frame is greater than the first threshold, whether the ratio of the number of bits of a second input-end decoded frame to the number of bits of the first input-end decoded frame is smaller than a second threshold, wherein the second input-end decoded frame is the input-end decoded frame preceding the current transcoding frame in playing order; and
a first conversion unit, configured to convert the frame type of the current transcoding frame into an intra-frame prediction frame when the ratio of the number of bits of the second input-end decoded frame to the number of bits of the first input-end decoded frame is smaller than the second threshold.
8. The apparatus of claim 7, wherein the apparatus further comprises:
a third judgment unit, configured to determine, when the first input-end decoded frame corresponding to the current transcoding frame is an intra-frame prediction frame, whether the current transcoding frame and a pre-transcoding frame belong to the same scene, wherein the pre-transcoding frame is the transcoding frame preceding the current transcoding frame in playing order; and
a second conversion unit, configured to convert the frame type of the current transcoding frame into an inter-frame prediction frame when the current transcoding frame and the pre-transcoding frame belong to the same scene.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the frame type conversion method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, carries out the steps of the frame type conversion method according to any one of claims 1 to 6.
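For readability, the following is a minimal Python sketch of the decision flow recited in claims 1 and 2. It is an illustrative interpretation only, not the claimed implementation: the per-frame data structure, the field names (is_intra, feature, bits), the frame-level "feature" used to compute the difference, and the threshold values are all hypothetical.

```python
# Illustrative sketch of the frame type decision in claims 1-2.
# All names and thresholds below are hypothetical; they are not part of the claims.

INTRA, INTER = "intra", "inter"

def convert_frame_type(cur_type, first_dec, second_dec, nearest_intra_dec,
                       same_scene, thres1, thres2):
    """Return the (possibly converted) frame type of the current transcoding frame.

    first_dec         -- input-end decoded frame matching the current transcoding frame
    second_dec        -- input-end decoded frame preceding it in playing order
    nearest_intra_dec -- closest input-end intra-frame prediction decoded frame
    same_scene        -- True if the current and previous transcoding frames share a scene
    thres1, thres2    -- the first and second thresholds of claim 1
    """
    if not first_dec["is_intra"]:
        # Claim 1: a large difference to the nearest intra frame plus a sharp drop in
        # coded bits suggests a scene change, so force an intra-frame prediction frame.
        diff = abs(first_dec["feature"] - nearest_intra_dec["feature"])
        if diff > thres1 and second_dec["bits"] / first_dec["bits"] < thres2:
            return INTRA
    else:
        # Claim 2: the input frame is intra, but the scene has not changed, so the
        # transcoder may use an inter-frame prediction frame instead.
        if same_scene:
            return INTER
    return cur_type
```

A caller would supply, for each frame, whatever frame-level difference measure and bit counts the transcoder already tracks; the claims do not fix a particular measure.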
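Likewise, a hedged sketch of the same-scene test in claims 3, 4 and 6. The exact expression for the intermediate variable ti_block_n is not reproduced in the claim text above (the original formula images are unavailable), so the per-block luminance-difference statistic used here is an assumption; the sign() convention (count 1 when the statistic exceeds Thres_3, else 0), the block layout and all threshold values are likewise hypothetical. The construction of the judgment area in claim 5 is omitted.

```python
import numpy as np

def scene_statistic(prev_blocks, next_blocks, thres3):
    """TI_frame = sum(sign(ti_block_n, Thres_3)) over the same-scene judgment area.

    prev_blocks / next_blocks -- lists of co-located luminance blocks (2-D arrays)
    from the previous and next input-end decoded frames.
    ASSUMPTION: ti_block_n is taken as the std() of the per-pixel luminance
    difference of the n-th block pair; claim 6 names std() without showing the formula.
    """
    ti_frame = 0
    for y_prev, y_next in zip(prev_blocks, next_blocks):
        ti_block = np.std(y_next.astype(np.float64) - y_prev.astype(np.float64))
        ti_frame += 1 if ti_block > thres3 else 0   # sign(ti_block_n, Thres_3)
    return ti_frame

def same_scene(num_blocks_init_area, num_blocks_frame, init_thres,
               ti_frame, scene_thres):
    """Two-stage same-scene test of claims 3 and 4."""
    # Claim 3: few blocks in the same-scene initial judgment area -> same scene.
    if num_blocks_init_area / num_blocks_frame < init_thres:
        return True
    # Claim 4: otherwise compare the scene statistic, normalised by the block count
    # of the initial judgment area, against the scene threshold.
    return ti_frame / num_blocks_init_area > scene_thres
```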
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810487647.XA CN108769695B (en) | 2018-05-21 | 2018-05-21 | Frame type conversion method, system and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810487647.XA CN108769695B (en) | 2018-05-21 | 2018-05-21 | Frame type conversion method, system and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108769695A CN108769695A (en) | 2018-11-06 |
CN108769695B true CN108769695B (en) | 2020-08-07 |
Family
ID=64008598
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810487647.XA Active CN108769695B (en) | 2018-05-21 | 2018-05-21 | Frame type conversion method, system and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108769695B (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008146892A1 (en) * | 2007-05-29 | 2008-12-04 | Nec Corporation | Moving image converting apparatus, moving image converting method, and moving image converting program |
CN103686184B (en) * | 2013-11-18 | 2017-05-17 | 深圳市云宙多媒体技术有限公司 | Adjusting method and system for frame type in trans-coding |
CN105898316A (en) * | 2015-12-14 | 2016-08-24 | 乐视云计算有限公司 | Coding information inherent real-time trancoding method and device |
- 2018-05-21 CN CN201810487647.XA patent/CN108769695B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN108769695A (en) | 2018-11-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |