
CN112584232A - Video frame insertion method and device and server - Google Patents

Video frame insertion method and device and server

Info

Publication number
CN112584232A
Authority
CN
China
Prior art keywords
frame
reference frame
video
similarity
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910953084.3A
Other languages
Chinese (zh)
Inventor
鲁方波
樊鸿飞
汪贤
蔡媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Cloud Network Technology Co Ltd
Beijing Kingsoft Cloud Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Beijing Kingsoft Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd and Beijing Kingsoft Cloud Technology Co Ltd
Priority to CN201910953084.3A
Publication of CN112584232A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)

Abstract

The present invention provides a video frame interpolation method, device, and server. The method includes: determining a current forward reference frame and a current backward reference frame based on a video frame sequence to be processed; determining the frame similarity between the forward reference frame and the backward reference frame; determining the number of frames to insert based on that frame similarity; and inserting the corresponding number of video frames between the forward reference frame and the backward reference frame. Because the number of inserted frames is determined reasonably from the frame similarity of the two reference frames, the invention improves the flexibility of video frame interpolation and the quality of the interpolation result, and thereby improves the user's video viewing experience.

Description

Video frame insertion method and device and server
Technical Field
The invention relates to the technical field of image processing, in particular to a video frame interpolation method, a video frame interpolation device and a server.
Background
In the related art, video frame interpolation is usually performed at a fixed multiple of the original frame rate. However, within a single video, different segments have different complexity. For segments with fast inter-frame change (which may also be called high video complexity), interpolating at a fixed multiple may not be enough to improve the fluency and clarity of the video; for segments with slow inter-frame change (low video complexity), fixed-multiple interpolation may be redundant, since the human eye cannot perceive the visual improvement while the video size still grows. In short, fixed-multiple frame interpolation lacks rationality and flexibility, and its results are poor.
Disclosure of Invention
In view of this, an object of the present invention is to provide a method, an apparatus and a server for video frame interpolation, so as to improve the flexibility of frame interpolation processing for a video, improve the frame interpolation effect, and thereby improve the video viewing experience of a user.
In a first aspect, an embodiment of the present invention provides a video frame interpolation method, including: determining a current forward reference frame and a current backward reference frame based on a video frame sequence to be processed; determining frame similarity between a forward reference frame and a backward reference frame; determining the number of interpolation frames based on the frame similarity; and inserting video frames corresponding to the number of the inserted frames between the forward reference frame and the backward reference frame.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the step of determining frame similarities of a forward reference frame and a backward reference frame includes: extracting a first feature vector of a forward reference frame and a second feature vector of a backward reference frame through a pre-trained feature extraction network; calculating the feature similarity of the first feature vector and the second feature vector; and determining the feature similarity as the frame similarity of the forward reference frame and the backward reference frame.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the step of extracting a first feature vector of a forward reference frame and a second feature vector of a backward reference frame includes: if the forward reference frame is the first video frame in the video frame sequence, respectively extracting a first feature vector of the forward reference frame and a second feature vector of the backward reference frame by adopting a feature extraction network; if the forward reference frame is a video frame except the first video frame in the video frame sequence, taking the second feature vector of the last backward reference frame as the first feature vector of the current forward reference frame; and extracting a second feature vector of the current backward reference frame by adopting a feature extraction network.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the feature extraction network includes a convolution feature extraction module, a multi-scale feature extraction module, and a full connection layer, which are connected in sequence; the convolution characteristic extraction module is used for performing convolution calculation and average pooling calculation on the input video frame and outputting an initial characteristic matrix; the multi-scale feature extraction module is used for extracting multi-scale features of the initial feature matrix through various preset convolution kernels to obtain a multi-scale feature matrix; and the full connection layer is used for carrying out comprehensive characteristic processing on the multi-scale characteristic matrix to obtain the characteristic vector of the input video frame.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the step of determining the number of interpolated frames based on the frame similarity includes: determining the frame interpolation times according to the frame similarity and a preset frame interpolation time range; and determining the number of the inserted frames according to the frame inserting times and the preset single frame inserting amount.
With reference to the fourth possible implementation manner of the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the step of determining the frame interpolation times according to the frame similarity and a preset frame interpolation time range includes: the number of frame insertions is calculated by the following formula:
n = Round((B1 - B2) · Similarity + B2)
where n is the number of frame interpolation passes, Round() denotes rounding to the nearest integer, B1 is the preset minimum number of interpolation passes, B2 is the preset maximum number of interpolation passes, and Similarity is the frame similarity.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the step of inserting video frames corresponding to the number of inserted frames between a forward reference frame and a backward reference frame includes: if the number of the inserted frames is one, inputting the forward reference frame and the backward reference frame into a preset prediction model, and outputting a first prediction frame; and inserting the first prediction frame between the forward reference frame and the backward reference frame to obtain the video frame sequence after frame insertion.
With reference to the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where the step of inserting video frames corresponding to the number of the above-mentioned inserted frames between a forward reference frame and the backward reference frame includes: if the number of the inserted frames is more than one, inputting the forward reference frame and the backward reference frame into a preset prediction model, and outputting a first prediction frame; inserting a first prediction frame between a forward reference frame and a backward reference frame to obtain a video frame sequence after frame insertion; the following steps are executed in a circulating mode until the number of video frames inserted between the forward reference frame and the backward reference frame reaches the number of inserted frames: determining the frame interpolation position of the next predicted frame according to the frame similarity between every two adjacent video frames between a forward reference frame and a backward reference frame in the video frame sequence after frame interpolation; inputting a previous video frame and a next video frame corresponding to the frame insertion position into a prediction model, and outputting a second prediction frame; the second predicted frame is inserted at the interpolated frame position.
In a second aspect, an embodiment of the present invention further provides a video frame interpolation apparatus, including: a reference frame determining module, configured to determine a current forward reference frame and a current backward reference frame based on a video frame sequence to be processed; the frame similarity determining module is used for determining the frame similarity between the forward reference frame and the backward reference frame; the frame interpolation quantity determining module is used for determining the number of the frame interpolation based on the frame similarity; and the frame inserting module is used for inserting the video frames corresponding to the frame inserting quantity between the forward reference frame and the backward reference frame.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation manner of the second aspect, where the frame similarity determining module is further configured to: extracting a first feature vector of a forward reference frame and a second feature vector of a backward reference frame through a pre-trained feature extraction network; calculating the feature similarity of the first feature vector and the second feature vector; and determining the feature similarity as the frame similarity of the forward reference frame and the backward reference frame.
With reference to the second aspect, an embodiment of the present invention provides a second possible implementation manner of the second aspect, where the frame insertion number determining module is further configured to: determining the frame interpolation times according to the frame similarity and a preset frame interpolation time range; and determining the number of the inserted frames according to the frame inserting times and the preset single frame inserting amount.
With reference to the second aspect, an embodiment of the present invention provides a third possible implementation manner of the second aspect, where the frame insertion module is further configured to: if the number of the inserted frames is one, inputting the forward reference frame and the backward reference frame into a preset prediction model, and outputting a first prediction frame; and inserting the first prediction frame between the forward reference frame and the backward reference frame to obtain the video frame sequence after frame insertion.
In a third aspect, an embodiment of the present invention further provides a server, including a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the video frame interpolation method.
In a fourth aspect, embodiments of the present invention also provide a machine-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the video frame interpolation method described above.
The embodiment of the invention has the following beneficial effects:
according to the video frame interpolation method, the device and the server, firstly, a current forward reference frame and a current backward reference frame are determined based on a video frame sequence to be processed, after the frame similarity between the forward reference frame and the backward reference frame is determined, the frame interpolation quantity is determined based on the frame similarity, and then video frames corresponding to the frame interpolation quantity are inserted between the forward reference frame and the backward reference frame. The method can reasonably determine the number of the inserted frames based on the frame similarity of the forward reference frame and the backward reference frame, improves the flexibility of frame insertion processing of the video, and improves the frame insertion effect, thereby improving the video watching experience of users.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention as set forth above.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a video frame interpolation method according to an embodiment of the present invention;
Fig. 2 is a flowchart of another video frame interpolation method according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a feature extraction network according to an embodiment of the present invention;
Fig. 4 is a diagram illustrating the workflow of a feature extraction network according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an Inception module layer in a feature extraction network according to an embodiment of the present invention;
Fig. 6 is a flowchart of another video frame interpolation method according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a video frame interpolation apparatus according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For ease of understanding, the frame rate standards of video sources in the related art are described first. At present, most video material, whether 2D or 3D, is still shot at a relatively low frame rate: television media widely use 25 or 30 frames per second, and some live-streaming platforms shoot and transmit video at a rate as low as 15 frames per second. In addition, when video is shot outdoors or with a mobile terminal, a lower frame rate is generally used for transmission because of network bandwidth limits, which reduces the video bitrate and saves bandwidth.
However, as computer vision and multimedia technology advance, digital media has gradually been upgraded from low definition to high definition and even ultra-high-definition 4K (4K video resolution can reach 4096 × 2160). Existing video sources can no longer meet users' growing expectations, so a better viewing experience must be provided by raising the real-time frame rate of the video. Interpolating frames in real time at the video receiving end avoids problems such as choppy playback caused by an original frame rate that is too low.
Current video frame interpolation methods fall mainly into three categories:
The first category is interpolation based on image interpolation: an intermediate frame is obtained using block search and motion compensation between the previous and subsequent video frames.
The second category is interpolation based on optical flow estimation: an optical flow vector between two adjacent video frames is computed first, and motion compensation based on that vector generates a new intermediate video frame. Depending on how the optical flow vector is computed, such methods divide into interpolation based on unidirectional motion estimation and interpolation based on bidirectional motion estimation.
The third category is interpolation based on deep learning: the convolutional structure of a convolutional neural network (CNN) extracts features from two adjacent frames, and a fusion strategy then combines the information of the two frames to obtain an intermediate frame.
These methods generally interpolate statically: videos of any scene and any complexity are interpolated at a preset fixed multiple, inserting the same number of frames between every two adjacent frames. However, within one video the complexity of different segments differs. For segments with slow inter-frame change (low video complexity), inserting too many frames brings no improvement the human eye can perceive, i.e., no improvement in subjective impression, while increasing the video size and the transmission bandwidth; for segments with fast inter-frame change (high video complexity), interpolating at the same multiple as the slow segments cannot maximize the fluency of the video.
In summary, the video frame interpolation methods in the related art lack rationality and flexibility, produce poor interpolation results, and degrade the user's viewing experience. Accordingly, embodiments of the present invention provide a video frame interpolation method, apparatus, and server; the technique can be applied to the frame interpolation of various video sources, such as 2D video and 3D video.
First, referring to a flow chart of a video frame interpolation method shown in fig. 1, the method includes the following steps:
step S100, based on the video frame sequence to be processed, determining the current forward reference frame and backward reference frame.
A video frame sequence is a set of images arranged in a certain order. When the images change at more than 24 frames per second, the human eye, by the persistence-of-vision principle, can no longer distinguish individual still pictures and instead perceives a smooth, continuous visual effect: a video. In practice, frame interpolation is used to raise the real-time frame rate, that is, to increase the number of video frames played per unit time. To this end, every two adjacent images of the video frame sequence can be taken as a forward reference frame and a backward reference frame respectively, or two images can be extracted at a preset interval to serve as the forward and backward reference frames; video frames are then interpolated between them to raise the frame rate. Typically, the forward reference frame is played earlier than the backward reference frame.
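For illustration only (this is not part of the patent text), selecting reference-frame pairs from a sequence might look like the following Python sketch; the function name and the interval parameter are assumptions:

    from typing import Iterator, Tuple

    def reference_pairs(num_frames: int, interval: int = 1) -> Iterator[Tuple[int, int]]:
        """Yield (forward, backward) reference-frame index pairs.

        interval=1 pairs every two adjacent frames; a larger interval
        extracts two frames at a preset spacing, as the text allows.
        """
        for i in range(0, num_frames - interval, interval):
            yield (i, i + interval)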
Step S102, determining the frame similarity between the forward reference frame and the backward reference frame.
Frame similarity measures how similar the forward and backward reference frames are. Since both reference frames are images, determining their frame similarity is, to a large extent, the same problem as determining the similarity of two images; any image-similarity measure from the related art can therefore be used.
Specifically, the frame similarity may be obtained by computing the structural similarity (SSIM), cosine similarity, or histogram similarity of the forward and backward reference frames. Structural similarity measures image similarity in terms of luminance, contrast, and structure; cosine similarity represents each image as a vector and measures the cosine distance between the two vectors; histogram similarity determines the similarity of two images from the global color distributions described by their histograms.
In practice, one of these measures may be chosen according to the actual requirements on the interpolation result, or several of them may be computed and combined with weights to obtain the final frame similarity.
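As a rough illustration (not taken from the patent), such a weighted combination could look like the sketch below; it assumes OpenCV and scikit-image are available, color (H, W, 3) uint8 frames of equal size, and arbitrary placeholder weights:

    import cv2
    import numpy as np
    from skimage.metrics import structural_similarity

    def frame_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Weighted blend of SSIM, cosine, and histogram similarity of two frames."""
        ssim = structural_similarity(a, b, channel_axis=2)   # structure/luminance/contrast
        va = a.ravel().astype(np.float64)
        vb = b.ravel().astype(np.float64)
        cos = float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))
        ha = cv2.calcHist([a], [0], None, [64], [0, 256])    # first channel only, for brevity
        hb = cv2.calcHist([b], [0], None, [64], [0, 256])
        hist = float(cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL))
        return 0.5 * ssim + 0.3 * cos + 0.2 * hist           # placeholder weights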
And step S104, determining the number of the inserted frames based on the frame similarity.
The number of inserted frames is the number of frames to insert, an integer greater than or equal to zero. When the frame similarity between the forward and backward reference frames is high, the human eye cannot distinguish the transition between the two images during continuous playback; the video looks smooth and the viewing effect is good, so few or no frames need to be inserted between the two reference frames. When the frame similarity is low, the transition between the two images may be visible, the video appears to stutter, and the viewing effect is poor, so more frames must be inserted between the two reference frames.
In a specific implementation, the range of the number of inserted frames for a video frame sequence can be determined from a preset video fluency target; this range can be established from historical data or by experiment. The range typically has an upper and a lower limit. The upper limit is the maximum number of inserted frames: inserting that many frames between the two least similar video frames in the video must at least reach the preset fluency when that segment plays. The lower limit is the minimum number of inserted frames: inserting that many frames between the two most similar video frames must likewise at least reach the preset fluency.
For example, suppose frame similarity takes values in [0, 1], where a larger value means the two frames are more similar and a value of 1 means they are identical. When two adjacent frames in the sequence to be interpolated have similarity 1, the lower limit can be set to 0, i.e., no frames need to be inserted between them. When the minimum similarity between adjacent frames in the sequence is 0.1, the upper limit can be set to 12, based on historical experience or experiment, to reach the preset fluency. For adjacent frame pairs whose similarity falls in (0.1, 1), the number of inserted frames can be derived from the similarity via a preset proportional or logarithmic relation, itself determined from historical data or experiments.
And step S106, inserting video frames corresponding to the number of the inserted frames between the forward reference frame and the backward reference frame.
After the number of inserted frames is determined: if it is zero, no interpolation is needed between the forward and backward reference frames. If it is nonzero, a preset video interpolation method (based on image interpolation, optical flow estimation, or deep learning) is first applied to the forward and backward reference frames to obtain a first video frame, which is inserted between them; if the number of frames to insert is one, interpolation between the current forward and backward reference frames is now complete.
If the number of inserted frames is greater than 1, the insertion position of the next video frame must be determined, along with the forward and backward reference frames for that frame; the position may lie between the forward reference frame and the first video frame, or between the first video frame and the backward reference frame. In a specific implementation, the frame similarity between the forward reference frame and the first video frame, and between the first video frame and the backward reference frame, can each be computed; the insertion position is placed between the pair with the smaller similarity. For instance, if the similarity between the forward reference frame and the first video frame is 0.3 and that between the first video frame and the backward reference frame is 0.2, the insertion position is between the first video frame and the backward reference frame; the first video frame then serves as the forward reference frame of the next predicted frame, and the original backward reference frame remains its backward reference frame. The preset interpolation method is applied to this pair to obtain the next video frame, which is inserted between them.
While the number of video frames inserted between the initial forward and backward reference frames has not reached the target number, the steps of determining the next insertion position and interpolating are repeated until it has.
Further, the steps of determining the current forward reference frame and the backward reference frame may be continued until the determined backward reference frame is the last video frame in the sequence of video frames. After the frame interpolation process is performed between the current forward reference frame and the backward reference frame, the forward reference frame and the backward reference frame can be re-determined based on the above-mentioned video frame sequence to be processed. As an example, the current backward reference frame may be used as a forward reference frame, and a certain video frame in the video frame sequence after the current backward reference frame may be used as a backward reference frame, and the steps of determining the similarity between the two frames, determining the number of interpolated frames, and performing the interpolation processing may be performed again. And when the video frame behind the current backward reference frame does not exist in the video frame sequence after the frame interpolation processing, namely the current backward reference frame is the last video frame in the video frame sequence, ending the video frame interpolation process.
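Putting steps S100 to S106 together, the overall loop might be sketched as follows (illustrative only; frame_similarity, frames_to_insert, and interpolate_between are assumed helpers, not functions named in the patent):

    def interpolate_video(frames):
        """Walk the sequence pairwise, inserting a similarity-dependent number of frames."""
        out = [frames[0]]
        for fwd, bwd in zip(frames, frames[1:]):
            sim = frame_similarity(fwd, bwd)                # step S102
            k = frames_to_insert(sim)                       # step S104, k >= 0
            out.extend(interpolate_between(fwd, bwd, k))    # step S106 (k new frames)
            out.append(bwd)
        return out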
The video frame interpolation method comprises the steps of firstly determining a current forward reference frame and a current backward reference frame based on a video frame sequence to be processed, determining the frame similarity between the forward reference frame and the backward reference frame, then determining the number of interpolated frames based on the frame similarity, and further inserting video frames corresponding to the number of interpolated frames between the forward reference frame and the backward reference frame. The method can reasonably determine the number of the inserted frames based on the frame similarity of the forward reference frame and the backward reference frame, improves the flexibility of frame insertion processing of the video, and improves the frame insertion effect, thereby improving the video watching experience of users.
The embodiment of the invention also provides another video frame insertion method, which is realized on the basis of the method in the embodiment; the method mainly describes a specific process for determining the frame similarity of a forward reference frame and a backward reference frame, and the process is specifically realized by the following steps S202-S206; the method also describes a specific implementation process for determining the number of the inserted frames based on the frame similarity, which is specifically implemented by the following steps S208 and S210; as shown in fig. 2, the method comprises the steps of:
step S200, based on the video frame sequence to be processed, determining the current forward reference frame and backward reference frame.
Step S202, extracting a first feature vector of the forward reference frame and a second feature vector of the backward reference frame through the pre-trained feature extraction network.
The feature extraction network may be built by a machine learning method or with a convolutional neural network structure. In a specific implementation, the network may consist of a convolutional feature extraction module, a multi-scale feature extraction module, and a fully connected layer connected in sequence; its structure is shown schematically in fig. 3. The convolutional feature extraction module can be realized with one or more convolutional layers and one or more pooling layers; it performs convolution and average pooling on an input video frame and outputs an initial feature matrix. The multi-scale feature extraction module can be realized with an Inception module layer or with several convolutional layers having different kernels; it extracts multi-scale features from the initial feature matrix through several preset convolution kernels to obtain a multi-scale feature matrix. The fully connected layer performs comprehensive feature processing on the multi-scale feature matrix to obtain the feature vector of the input video frame; specifically, it converts the multi-scale feature matrix into a one-dimensional feature, i.e., the feature vector of the forward or backward reference frame described above.
In another implementation, the feature extraction network has 7 layers in total: 2 convolutional layers (conv), 2 average pooling layers (AvgPool), 2 Inception module layers, and 1 fully connected layer (FC); its workflow is shown in fig. 4. The first layer is a convolutional layer with kernel size 7 × 7, 64 feature maps, and stride 1; the second is an average pooling layer with stride 2; the third is a convolutional layer with kernel size 3 × 3, 128 feature maps, and stride 1; the fourth is an average pooling layer with stride 2; the fifth and sixth layers are Inception module layers with 128 feature maps each; the seventh is a fully connected layer whose output feature dimension is 500 × 1.
The Inception module layers of the fifth and sixth layers are shown in fig. 5. "Previous layer" denotes the layer above: for the fifth layer it is the fourth (pooling) layer, and for the sixth layer it is the fifth (Inception) layer. Each Inception module layer comprises 4 convolutional layers with kernel size 1 × 1, 1 convolutional layer with kernel size 3 × 3, 1 convolutional layer with kernel size 5 × 5, 1 max pooling layer (MaxPool) with kernel size 3 × 3, and a filter concatenation.
Before feature vectors are extracted from an input image with this network, the image generally needs to be preprocessed. The network expects a three-channel image of size 224 × 224 × 3, so an image of any size is first resized to this fixed size before being fed in; all inputs then yield feature vectors of the same length, which simplifies subsequent processing.
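Read literally, the layer specification above corresponds to something like the following PyTorch sketch. This is an interpretation for illustration, not the patent's code: the per-branch channel split inside each Inception block is not given in the text, so the values below are assumptions that merely sum to 128, and the 2 × 2 pooling kernel is likewise assumed.

    import torch
    import torch.nn as nn

    class Inception(nn.Module):
        """GoogLeNet-style block: 1x1, 1x1->3x3, 1x1->5x5 and 3x3-maxpool->1x1 branches, concatenated."""
        def __init__(self, in_ch, b1, b3, b5, bp):
            super().__init__()
            self.branch1 = nn.Conv2d(in_ch, b1, 1)
            self.branch3 = nn.Sequential(nn.Conv2d(in_ch, b3, 1), nn.Conv2d(b3, b3, 3, padding=1))
            self.branch5 = nn.Sequential(nn.Conv2d(in_ch, b5, 1), nn.Conv2d(b5, b5, 5, padding=2))
            self.branchp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1), nn.Conv2d(in_ch, bp, 1))

        def forward(self, x):
            return torch.cat([self.branch1(x), self.branch3(x), self.branch5(x), self.branchp(x)], dim=1)

    class FeatureNet(nn.Module):
        """7-layer extractor: conv7x7/64 -> avgpool -> conv3x3/128 -> avgpool -> 2x Inception(128) -> FC(500)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 64, 7, stride=1, padding=3),    # layer 1
                nn.AvgPool2d(2, stride=2),                   # layer 2
                nn.Conv2d(64, 128, 3, stride=1, padding=1),  # layer 3
                nn.AvgPool2d(2, stride=2),                   # layer 4
                Inception(128, 32, 48, 24, 24),              # layer 5: 32+48+24+24 = 128 maps (assumed split)
                Inception(128, 32, 48, 24, 24),              # layer 6
            )
            self.fc = nn.Linear(128 * 56 * 56, 500)          # layer 7: 224x224 input, pooled twice -> 56x56

        def forward(self, x):                                # x: (N, 3, 224, 224)
            f = self.features(x)
            return self.fc(f.flatten(1))                     # (N, 500) feature vector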
During feature extraction, if the forward reference frame is not the first video frame in the sequence, the second feature vector of the previous backward reference frame can be reused as the first feature vector of the current forward reference frame, and the feature extraction network only needs to extract the second feature vector of the current backward reference frame. With this scheme, feature vectors are extracted twice only when interpolating between the first and second video frames; every subsequent interpolation computes the feature vector of just one video frame, which reduces computation and speeds up processing.
Step S204, calculating the feature similarity of the first feature vector and the second feature vector.
In a specific implementation, the feature similarity can be expressed with the cosine similarity, the Pearson correlation coefficient, or similar measures. Using cosine similarity: let the first feature vector of the forward reference frame extracted by the network above be F1, and the second feature vector of the backward reference frame be F2; the feature similarity of F1 and F2 is then computed by the following formula:
Similarity = Σ(i=1..N) F1(i) × F2(i) / ( √(Σ(i=1..N) F1(i)²) × √(Σ(i=1..N) F2(i)²) )    (1)
where × denotes multiplication and N denotes the vector dimension. Similarity is the cosine similarity; its computed value usually lies in the range [0, 1].
Step S206, determining the feature similarity as the frame similarity of the forward reference frame and the backward reference frame; specifically, the similarity value may be used directly as the frame similarity, whose value range is likewise [0, 1].
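Formula (1) is ordinary cosine similarity; a direct rendering (illustrative only) is:

    import numpy as np

    def cosine_similarity(f1: np.ndarray, f2: np.ndarray) -> float:
        """Cosine similarity of two feature vectors, per formula (1)."""
        return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))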
And step S208, determining the frame interpolation times according to the preset frame interpolation number range and the frame similarity.
Specifically, the number of frame insertions can be calculated by the following formula:
n = Round((B1 - B2) · Similarity + B2)    (2)
where n is the number of frame interpolation passes, Round() denotes rounding to the nearest integer, B1 is the preset minimum number of interpolation passes, B2 is the preset maximum number of interpolation passes, and Similarity is the frame similarity.
[B1, B2] is the frame interpolation range mentioned above, with B2 greater than B1. This range can be set by the user according to the video scene and the complexity of the frames to be interpolated; its specific values can be determined from historical experience or experiments.
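As a quick check of formula (2) (an illustration, not patent text; note that Python's round uses banker's rounding, which is a negligible difference here):

    def interpolation_passes(similarity: float, b_min: int, b_max: int) -> int:
        """Formula (2): similar frames (similarity near 1) get few passes, dissimilar ones many."""
        return round((b_min - b_max) * similarity + b_max)

    assert interpolation_passes(1.0, 0, 6) == 0   # identical frames: no interpolation
    assert interpolation_passes(0.0, 0, 6) == 6   # most dissimilar frames: maximum passes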
Step S210, determining the frame interpolation quantity according to the frame interpolation times and the preset single frame interpolation quantity.
Specifically, the single-pass insertion amount may be a fixed value: if it is set to 1, then when the interpolation count is 2, interpolation is performed twice between the current forward and backward reference frames, inserting 1 video frame each time, so 2 frames are inserted in total. The single-pass amount may also depend on the pass number: for example, while one pair of forward and backward reference frames in the sequence is being processed, the amount can be made equal to the pass number, i.e., 1 video frame is inserted in the first pass, 2 in the second, and so on. As an example, when the interpolation count is determined to be 2, two passes are performed between the current forward and backward reference frames, inserting 1 video frame the first time and 2 the second, for a total of 3 inserted frames.
For example, suppose historical experience shows that a ball-game video displays all motion smoothly at a frame rate of 90 frames. If the currently obtained ball-game video has a frame rate of 15 frames, the maximum number of inserted frames can be set to 6; with the single-pass amount set to 1, the corresponding maximum number of passes is 6. The video also contains segments of intermission during which the picture hardly changes; for these, the minimum number of inserted frames can be set to 0, i.e., no frames are inserted. The resulting range of the number of inserted frames is [0, 6].
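The two single-pass schemes described above differ only in how per-pass amounts accumulate; a minimal sketch (hypothetical helper, not patent text):

    def total_inserted(passes: int, escalating: bool = False) -> int:
        """Fixed scheme: 1 frame per pass. Escalating scheme: pass k inserts k frames."""
        return passes * (passes + 1) // 2 if escalating else passes

    assert total_inserted(2) == 2                    # fixed: 1 + 1
    assert total_inserted(2, escalating=True) == 3   # escalating: 1 + 2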
In step S212, video frames corresponding to the number of the interpolated frames are inserted between the forward reference frame and the backward reference frame.
Step S214, judging whether the backward reference frame is the last video frame in the video frame sequence; if not, executing step S200; if so, ending.
If frames are interpolated sequentially between every two video frames in the sequence and the current backward reference frame is the last video frame, interpolation of the whole sequence is complete and the process ends. If the current backward reference frame is not the last video frame, interpolation of the whole sequence is not yet finished, and steps S200 to S212 are repeated until it is.
According to the video frame interpolation method, through a pre-trained feature extraction network, a first feature vector of a forward reference frame and a second feature vector of a backward reference frame are extracted, feature similarity of the first feature vector and the second feature vector is calculated, the number of obtained interpolation frames is calculated, frame interpolation processing is carried out on the forward reference frame and the backward reference frame, and video frames corresponding to the number of the interpolation frames are interpolated. According to the method, reasonable frame interpolation processing is carried out on the reference frames with different frame change speeds by calculating the feature similarity of the feature vectors and the frame interpolation number of the current reference frames, so that the video frame interpolation effect is ensured, and meanwhile, the flexibility of frame interpolation processing is improved.
The embodiment of the invention also provides another video frame insertion method, which is realized on the basis of the method in the embodiment; the method mainly describes a specific implementation process of inserting video frames corresponding to the number of the inserted frames between a forward reference frame and a backward reference frame, and when the number of the inserted frames is one, the process is specifically implemented by the following steps S610 and S612; when the number of the inserted frames is greater than one, the process is specifically realized by the following steps S614 to S622; as shown in fig. 6, the method includes the steps of:
step S600, determining a current forward reference frame and a current backward reference frame based on the video frame sequence to be processed.
Step S602, determining a frame similarity between the forward reference frame and the backward reference frame.
In step S604, the number of interpolated frames is determined based on the frame similarity.
Step S606, judging whether the number of inserted frames is greater than zero; if yes, executing step S608; if not, executing step S626. Specifically, the number of inserted frames is an integer greater than or equal to zero. If it is zero, no interpolation is needed between the current forward and backward reference frames, and interpolation can continue with the reference frames that follow in the sequence; when the current backward reference frame is the last video frame of the sequence to be processed, interpolation of the whole sequence is complete.
Step S608, determining whether the number of the inserted frames is greater than one; if not, executing step S610; if so, go to step S614.
Specifically, if the number of the interpolated frames is one, the processing of the interpolated frames between the current forward reference frame and the backward reference frame is completed; frame interpolation processing can be continuously carried out on reference frames behind the current reference frame in the video frame sequence to be processed; when the current backward reference frame is the last video frame of the video frame sequence to be processed, the frame insertion processing of the whole video frame sequence is completed.
Step S610, inputting the forward reference frame and the backward reference frame into a preset prediction model, and outputting a first prediction frame.
The prediction model can be a video frame interpolation model obtained through training; firstly, an initial model can be established according to a deep learning principle or a neural network, and then a plurality of groups of training data are input into the initial model; a set of training data comprising a forward reference frame and a backward reference frame; and finally obtaining the video frame interpolation model by continuously iteratively training the parameters of the initial model. In the above steps, the forward reference frame and the backward reference frame are input into a preset prediction model, and a first prediction frame serving as an insertion frame between the two reference frames can be obtained.
Step S612, inserting the first prediction frame between the forward reference frame and the backward reference frame to obtain a video frame sequence after frame insertion, and executing step S626; specifically, a first predicted frame is inserted as an insertion frame between a forward reference frame and a backward reference frame; at this time, if the video is played, firstly, the forward reference frame is played, then, the first prediction frame is played, and finally, the backward reference frame is played; since the similarity between the forward reference frame and the first predicted frame, and the similarity between the first predicted frame and the backward reference frame are higher than the similarity between the forward reference frame and the backward reference frame, the user feels the fluency of the segment is higher when viewing the segment than before the frame insertion. When the number of the interpolation frames is one, the interpolation frame processing between the current forward reference frame and the backward reference frame is already completed.
Step S614, inputting the forward reference frame and the backward reference frame into a preset prediction model, and outputting a first prediction frame.
Step S616, the first predicted frame is inserted between the forward reference frame and the backward reference frame to obtain the video frame sequence after frame insertion.
Step S618, determining the frame interpolation position of the next predicted frame according to the frame similarity between every two adjacent video frames between the forward reference frame and the backward reference frame in the video frame sequence after the frame interpolation.
Specifically, after the first interpolation, the video contains the forward reference frame, the first predicted frame, and the backward reference frame in order. For the second interpolation, the frame similarity between the forward reference frame and the first predicted frame, and between the first predicted frame and the backward reference frame, can be computed; the lower the similarity between two images, the more visible the stutter when they switch, so the position between the two images with the lower frame similarity is chosen as the insertion position of the second predicted frame.
Likewise, when determining the insertion position of the n-th predicted frame (n an integer greater than 1), first determine the frame similarity between every two adjacent video frames between the forward and backward reference frames, then choose the two adjacent frames with the lowest similarity as the insertion position of the n-th predicted frame. In general, only the similarities between the (n-1)-th predicted frame and its previous and next video frames need to be newly computed; the similarities of the other adjacent pairs were already obtained during earlier interpolation passes.
Step S620, inputting a previous video frame and a next video frame corresponding to the position of the inserted frame into a prediction model, and outputting a current prediction frame; specifically, the previous video frame may be used as a forward reference frame of the current prediction frame, and the next video frame may be used as a backward reference frame of the current prediction frame; the implementation process of this step is similar to step S610 described above.
Step S622, inserting the current predicted frame into the above-mentioned frame insertion position; specifically, the frame interpolation position is the frame interpolation position determined in step S618.
Step S624, judging whether the number of video frames inserted between the forward reference frame and the backward reference frame equals the number of inserted frames; if so, executing step S626; if not, executing step S618. When the number of inserted video frames equals the target, interpolation for these two reference frames is complete; otherwise steps S618 to S622 are repeated until the interpolation of the current two reference frames is complete.
Step S626, judging whether the backward reference frame is the last video frame in the video frame sequence; if not, executing step S600; if so, ending.
According to the video frame interpolation method, after the number of frames to be interpolated is determined according to the feature similarity of a first feature vector and a second feature vector, frame interpolation processing is carried out on a forward reference frame and a backward reference frame, when the number of frames to be interpolated is one, the forward reference frame and the backward reference frame are input into a preset prediction model, a first prediction frame is output, and the prediction frame is inserted between the forward reference frame and the backward reference frame; when the number of the inserted frames is more than one, after a first predicted frame is obtained through frame insertion processing, according to the frame similarity between two adjacent video frames between a forward reference frame and a backward reference frame in a video frame sequence after the frame insertion, the position of the inserted frame of the next predicted frame is determined, and frame insertion processing is continued until the video frame corresponding to the number of the inserted frames is inserted between the forward reference frame and the backward reference frame. According to the method, the frame insertion position of the predicted frame is determined through the frame similarity, frame insertion processing is carried out between two video frames with low frame similarity, and the fluency of the video is efficiently improved.
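Steps S614 to S624 amount to repeatedly splitting the least-similar adjacent gap; the following sketch illustrates that loop (frame_similarity and the predict model are stand-ins for the patent's similarity measure and prediction network):

    def insert_frames(fwd, bwd, count, predict, frame_similarity):
        """Insert `count` predicted frames between fwd and bwd, always splitting the least-similar gap."""
        seq = [fwd, bwd]
        sims = [frame_similarity(fwd, bwd)]        # similarity of each adjacent gap
        for _ in range(count):
            g = sims.index(min(sims))              # gap with the lowest similarity (step S618)
            mid = predict(seq[g], seq[g + 1])      # predicted frame (steps S614/S620)
            seq.insert(g + 1, mid)                 # steps S616/S622
            sims[g:g + 1] = [frame_similarity(seq[g], mid),
                             frame_similarity(mid, seq[g + 2])]
        return seq[1:-1]                           # the inserted frames, in playback order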
The embodiment of the invention further provides another video frame interpolation method, implemented on the basis of the method in the foregoing embodiment. In the related art, the whole video is interpolated statically, that is, a preset fixed multiple is used for the entire video, so the frame-rate increase is uniform; this approach is inflexible and its interpolation quality is unstable. The method provided by this embodiment instead interpolates dynamically: frames are inserted adaptively according to the complexity of the video, so the video frame rate changes dynamically. The method comprises the following steps:
(1) Selecting two adjacent frame images S1 and S2 as reference frames, with S1 and S2 serving as the forward reference frame and the backward reference frame, respectively; a pre-trained feature extraction backbone network Net1 performs feature extraction on the forward and backward reference frames, yielding feature vectors F1 and F2; the structure of the feature extraction backbone network is shown in fig. 4.
In a specific implementation, to avoid repeated computation, the feature vector F2 of one adjacent pair can be reused as the feature vector F1 of the next pair; except for the first pair, the similarity computation for each interpolation step then requires extracting the feature vector of only one image.
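One possible sketch of this reuse, assuming Net1 is a PyTorch module mapping a CHW image tensor to a feature vector and `feature_similarity` compares two such vectors:

```python
import torch

@torch.no_grad()
def pair_similarities(frames, net1, feature_similarity):
    """Similarity of every adjacent pair with one Net1 pass per frame:
    F2 of each pair is cached and reused as F1 of the next pair."""
    sims = []
    f_prev = net1(frames[0].unsqueeze(0)).squeeze(0)   # F1 of first pair
    for frame in frames[1:]:
        f_cur = net1(frame.unsqueeze(0)).squeeze(0)    # F2 of this pair
        sims.append(feature_similarity(f_prev, f_cur))
        f_prev = f_cur                                 # reuse as next F1
    return sims
```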
(2) Calculating the similarity of the feature vectors F1 and F2, thereby determining the similarity of the two adjacent frames; specifically, the similarity may be measured by the cosine similarity (see formula (1) for the calculation) or by the Pearson correlation coefficient, among other measures.
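A cosine-similarity sketch in PyTorch (a Pearson variant would subtract each vector's mean before the same computation):

```python
import torch
import torch.nn.functional as F

def feature_similarity(f1, f2):
    """Cosine similarity of two feature vectors, as in formula (1)."""
    return F.cosine_similarity(f1.flatten(), f2.flatten(), dim=0).item()
```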
(3) Calculating the interpolation multiple according to the similarity of the video frames; the calculation is given in formula (2), where B1 is a preset minimum number of interpolation passes and B2 is a preset maximum number of interpolation passes; B1 and B2 may be chosen per video scene; for example, in some scenarios B1 = 1 and B2 = 12.
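A direct transcription of formula (2), rounding half up, with the example presets above as illustrative defaults:

```python
def interpolation_passes(similarity, b1=1, b2=12):
    """Formula (2): n = Round((B1 - B2) * Similarity + B2).
    int(x + 0.5) rounds half up for the non-negative values produced
    here; b1/b2 are the scene-dependent presets from the text."""
    return int((b1 - b2) * similarity + b2 + 0.5)
```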
(4) Performing n passes of interpolation prediction on the adjacent video frames with a trained deep-learning network NET2, thereby obtaining the final interpolated video sequence; NET2 may be built on a SepConv network, or on other deep learning or neural network models.
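A thin wrapper for one prediction pass; the two-input call signature is an assumption, since the embodiment does not specify NET2's interface:

```python
import torch

@torch.no_grad()
def predict_middle(net2, s1, s2):
    """One interpolation pass: NET2 (e.g. a SepConv-style model)
    returns the frame predicted between neighbours s1 and s2."""
    return net2(s1.unsqueeze(0), s2.unsqueeze(0)).squeeze(0)
```

A call such as `functools.partial(predict_middle, net2)` can then serve as the `predict` argument of the insertion loop sketched earlier.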
Specifically, if the calculated number of interpolation passes n is 0, no interpolation is performed. If n = 1, M1 is interpolated from S1 and S2. If n = 2, M1 is first interpolated from S1 and S2; the similarity between S1 and M1 and the similarity between M1 and S2 are then calculated as Similarity1 and Similarity2; if Similarity1 is greater than Similarity2, that is, S1 and M1 are more similar than M1 and S2, M2 is interpolated from M1 and S2. The remaining passes proceed in the same way, yielding the result of n interpolation passes.
In a specific implementation, NET2 usually needs to be trained before it is used for interpolation prediction. Specifically, a current frame is taken as the target frame and its two neighbouring frames as the input data; the input is fed into the initial NET2 model, and the model parameters are learned through iterative training, yielding the final network NET2, which can then predict the frame to be inserted between any two adjacent frames of an input video.
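A training-loop sketch under stated assumptions: each dataset sample is a (prev, target, nxt) tensor triple built from three consecutive frames, and an L1 loss is used (the text specifies neither the dataset format nor the loss):

```python
import torch
from torch.utils.data import DataLoader

def train_net2(net2, triplet_dataset, epochs=10, lr=1e-4):
    """Learn to predict the middle frame of each consecutive triplet
    from its two neighbours."""
    loader = DataLoader(triplet_dataset, batch_size=8, shuffle=True)
    opt = torch.optim.Adam(net2.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()
    for _ in range(epochs):
        for prev, target, nxt in loader:
            pred = net2(prev, nxt)        # predict the middle frame
            loss = loss_fn(pred, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return net2
```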
In summary, this video frame interpolation method first selects two adjacent frame images as reference frames and extracts their feature vectors with the feature backbone network; it then calculates the similarity of the two feature vectors and derives the interpolation multiple from that similarity; finally, it performs interpolation with the pre-trained deep learning network according to that multiple, applying a low multiple (or none) to low-complexity video clips and a high multiple to high-complexity clips, thereby obtaining the final interpolated video sequence. The method overcomes the inability of static interpolation to maximize video smoothness, while also saving video transmission bandwidth.
Corresponding to the above video frame interpolation method embodiment, an embodiment of the present invention further provides a video frame interpolation apparatus, as shown in fig. 7, the apparatus includes:
A reference frame determination module 700, configured to determine a current forward reference frame and a current backward reference frame based on a video frame sequence to be processed.
A frame similarity determining module 702, configured to determine the frame similarity between the forward reference frame and the backward reference frame.
An interpolated frame number determining module 704, configured to determine the number of inserted frames based on the frame similarity.
A frame interpolation module 706, configured to insert video frames corresponding to the number of inserted frames between the forward reference frame and the backward reference frame.
The video frame interpolation apparatus first determines the current forward and backward reference frames based on a video frame sequence to be processed, determines the frame similarity between them, then determines the number of inserted frames based on that similarity, and finally inserts the corresponding number of video frames between the two reference frames. Because the number of inserted frames is derived from the frame similarity of the reference frames, the apparatus interpolates flexibly and with a better interpolation effect, thereby improving the user's viewing experience.
Further, the frame similarity determining module is further configured to: extracting a first feature vector of a forward reference frame and a second feature vector of a backward reference frame through a pre-trained feature extraction network; calculating the feature similarity of the first feature vector and the second feature vector; and determining the feature similarity as the frame similarity of the forward reference frame and the backward reference frame.
Further, the frame similarity determining module is further configured to: if the forward reference frame is the first video frame in the video frame sequence, extract a first feature vector of the forward reference frame and a second feature vector of the backward reference frame respectively by using the feature extraction network; if the forward reference frame is a video frame other than the first video frame in the video frame sequence, take the second feature vector of the previous backward reference frame as the first feature vector of the current forward reference frame, and extract the second feature vector of the current backward reference frame by using the feature extraction network.
The feature extraction network comprises a convolutional feature extraction module, a multi-scale feature extraction module and a fully connected layer connected in sequence. The convolutional feature extraction module performs convolution and average pooling on the input video frame and outputs an initial feature matrix; the multi-scale feature extraction module extracts multi-scale features from the initial feature matrix through several preset convolution kernels to obtain a multi-scale feature matrix; and the fully connected layer performs comprehensive feature processing on the multi-scale feature matrix to obtain the feature vector of the input video frame.
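An illustrative PyTorch sketch of such a three-stage extractor; the channel widths, kernel sizes and output dimension are assumptions, since the text does not fix them:

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Convolutional stem + parallel multi-scale branches + FC fusion."""
    def __init__(self, dim=128):
        super().__init__()
        # convolutional feature extraction: convolution + average pooling
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AvgPool2d(2),
        )
        # multi-scale feature extraction: several kernel sizes in parallel
        self.branches = nn.ModuleList(
            nn.Conv2d(32, 32, k, padding=k // 2) for k in (1, 3, 5)
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        # fully connected layer fusing the concatenated multi-scale features
        self.fc = nn.Linear(32 * 3, dim)

    def forward(self, x):
        x = self.stem(x)
        x = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fc(self.pool(x).flatten(1))
```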
Further, the interpolated frame number determining module is further configured to: determine the number of interpolation passes according to the frame similarity and a preset range of interpolation passes; and determine the number of inserted frames according to the number of passes and a preset amount per pass.
Specifically, the interpolated frame number determining module is further configured to calculate the number of interpolation passes according to the following formula:
n = Round((B1 - B2) · Similarity + B2)
where n is the number of interpolation passes, Round() denotes rounding to the nearest integer (round half up), B1 is the preset minimum number of interpolation passes, B2 is the preset maximum number of interpolation passes, and Similarity is the frame similarity.
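For example, with the earlier presets B1 = 1 and B2 = 12, a frame similarity of 0.8 gives n = Round((1 - 12) · 0.8 + 12) = Round(3.2) = 3 passes, while completely dissimilar frames (Similarity = 0) receive the maximum of 12.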
Further, the frame interpolation module is further configured to: if the number of the inserted frames is one, inputting the forward reference frame and the backward reference frame into a preset prediction model, and outputting a first prediction frame; and inserting the first prediction frame between the forward reference frame and the backward reference frame to obtain the video frame sequence after frame insertion.
Further, the frame interpolation module is further configured to: if the number of inserted frames is greater than one, input the forward reference frame and the backward reference frame into a preset prediction model and output a first predicted frame; insert the first predicted frame between the forward reference frame and the backward reference frame to obtain the interpolated video frame sequence; and execute the following steps in a loop until the number of video frames inserted between the forward reference frame and the backward reference frame reaches the number of inserted frames: determine the insertion position of the next predicted frame according to the frame similarity between every two adjacent video frames between the forward reference frame and the backward reference frame in the interpolated video frame sequence; input the previous video frame and the next video frame corresponding to the insertion position into the prediction model and output a second predicted frame; and insert the second predicted frame at that position.
The video frame interpolation apparatus provided in this embodiment has the same implementation principle and technical effect as the foregoing method embodiments; for brevity, reference may be made to the corresponding content of the method embodiments.
The embodiment of the present invention further provides a server, referring to fig. 8, the server includes a processor 130 and a memory 131, the memory 131 stores machine executable instructions capable of being executed by the processor 130, and the processor 130 executes the machine executable instructions to implement the video frame insertion method.
Further, the server shown in fig. 8 further includes a bus 132 and a communication interface 133, and the processor 130, the communication interface 133 and the memory 131 are connected through the bus 132.
The memory 131 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 133 (which may be wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, or the like. The bus 132 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in fig. 8, but this does not indicate only one bus or one type of bus.
The processor 130 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 130. The Processor 130 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 131, and the processor 130 reads the information in the memory 131 and completes the steps of the method of the foregoing embodiment in combination with the hardware thereof.
The embodiment of the present invention further provides a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions, and when the machine-executable instructions are called and executed by a processor, the machine-executable instructions cause the processor to implement the video frame interpolation method.
The computer program product of the video frame interpolation method, apparatus, and server provided in the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments, and specific implementations may refer to those method embodiments, which are not repeated here.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes, or make equivalent substitutions of some of their technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be covered by it. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A video frame interpolation method, characterized by comprising: determining a current forward reference frame and a current backward reference frame based on a video frame sequence to be processed; determining a frame similarity between the forward reference frame and the backward reference frame; determining a number of frames to be interpolated based on the frame similarity; and inserting video frames corresponding to the number of frames to be interpolated between the forward reference frame and the backward reference frame.
2. The method according to claim 1, wherein the step of determining the frame similarity of the forward reference frame and the backward reference frame comprises: extracting a first feature vector of the forward reference frame and a second feature vector of the backward reference frame through a pre-trained feature extraction network; calculating a feature similarity of the first feature vector and the second feature vector; and determining the feature similarity as the frame similarity of the forward reference frame and the backward reference frame.
3. The method according to claim 2, wherein the step of extracting the first feature vector of the forward reference frame and the second feature vector of the backward reference frame comprises: if the forward reference frame is the first video frame in the video frame sequence, extracting the first feature vector of the forward reference frame and the second feature vector of the backward reference frame respectively by using the feature extraction network; if the forward reference frame is a video frame other than the first video frame in the video frame sequence, taking the second feature vector of the previous backward reference frame as the first feature vector of the current forward reference frame, and extracting the second feature vector of the current backward reference frame by using the feature extraction network.
4. The method according to claim 2, wherein the feature extraction network comprises a convolutional feature extraction module, a multi-scale feature extraction module and a fully connected layer connected in sequence; the convolutional feature extraction module is configured to perform convolution and average pooling on an input video frame and output an initial feature matrix; the multi-scale feature extraction module is configured to extract multi-scale features of the initial feature matrix through a plurality of preset convolution kernels to obtain a multi-scale feature matrix; and the fully connected layer is configured to perform comprehensive feature processing on the multi-scale feature matrix to obtain a feature vector of the input video frame.
5. The method according to claim 1, wherein the step of determining the number of frames to be interpolated based on the frame similarity comprises: determining a number of interpolation passes according to the frame similarity and a preset range of interpolation passes; and determining the number of frames to be interpolated according to the number of interpolation passes and a preset amount per pass.
6. The method according to claim 5, wherein the step of determining the number of interpolation passes according to the frame similarity and the preset range of interpolation passes comprises calculating the number of interpolation passes by the following formula: n = Round((B1 - B2) · Similarity + B2), where n is the number of interpolation passes, Round() denotes rounding to the nearest integer (round half up), B1 is the preset minimum number of interpolation passes, B2 is the preset maximum number of interpolation passes, and Similarity is the frame similarity.
7. The method according to claim 1, wherein the step of inserting video frames corresponding to the number of frames to be interpolated between the forward reference frame and the backward reference frame comprises: if the number of frames to be interpolated is one, inputting the forward reference frame and the backward reference frame into a preset prediction model and outputting a first predicted frame; and inserting the first predicted frame between the forward reference frame and the backward reference frame to obtain an interpolated video frame sequence.
8. The method according to claim 1, wherein the step of inserting video frames corresponding to the number of frames to be interpolated between the forward reference frame and the backward reference frame comprises: if the number of frames to be interpolated is greater than one, inputting the forward reference frame and the backward reference frame into a preset prediction model and outputting a first predicted frame; inserting the first predicted frame between the forward reference frame and the backward reference frame to obtain an interpolated video frame sequence; and performing the following steps in a loop until the number of video frames inserted between the forward reference frame and the backward reference frame reaches the number of frames to be interpolated: determining an insertion position of a next predicted frame according to the frame similarity between every two adjacent video frames between the forward reference frame and the backward reference frame in the interpolated video frame sequence; inputting the previous video frame and the next video frame corresponding to the insertion position into the prediction model and outputting a second predicted frame; and inserting the second predicted frame at the insertion position.
9. A video frame interpolation apparatus, characterized by comprising: a reference frame determination module, configured to determine a current forward reference frame and a current backward reference frame based on a video frame sequence to be processed; a frame similarity determining module, configured to determine a frame similarity between the forward reference frame and the backward reference frame; an interpolated frame number determining module, configured to determine a number of frames to be interpolated based on the frame similarity; and a frame interpolation module, configured to insert video frames corresponding to the number of frames to be interpolated between the forward reference frame and the backward reference frame.
10. The apparatus according to claim 9, wherein the frame similarity determining module is further configured to: extract a first feature vector of the forward reference frame and a second feature vector of the backward reference frame through a pre-trained feature extraction network; calculate a feature similarity of the first feature vector and the second feature vector; and determine the feature similarity as the frame similarity of the forward reference frame and the backward reference frame.
11. The apparatus according to claim 9, wherein the interpolated frame number determining module is further configured to: determine a number of interpolation passes according to the frame similarity and a preset range of interpolation passes; and determine the number of frames to be interpolated according to the number of interpolation passes and a preset amount per pass.
12. The apparatus according to claim 9, wherein the frame interpolation module is further configured to: if the number of frames to be interpolated is one, input the forward reference frame and the backward reference frame into a preset prediction model and output a first predicted frame; and insert the first predicted frame between the forward reference frame and the backward reference frame to obtain an interpolated video frame sequence.
13. A server, characterized by comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the video frame interpolation method according to any one of claims 1 to 8.
14. A machine-readable storage medium, characterized in that the machine-readable storage medium stores machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the video frame interpolation method according to any one of claims 1 to 8.