CN116156250B - Video processing method and device - Google Patents
- Publication number: CN116156250B (application number CN202310149440.2A)
- Authority: CN (China)
- Legal status: Active
Classifications
- H04N21/44008—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/4402—Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
Abstract
The application discloses a video processing method and apparatus, belonging to the technical field of video. The embodiment of the application provides a video processing method comprising the following steps: determining the motion speed of each object in an original video based on a plurality of image frames included in the original video; determining an object whose motion speed meets a target condition as a reference object; adjusting the number of image frames contained in the original video according to the motion speed of the reference object and the motion speed of a target object to obtain image frames corresponding to the target object, where the target object is any object other than the reference object, and the number of image frames corresponding to the target object matches the motion speed of the reference object and the motion speed of the target object; and generating a target video according to the plurality of image frames and the image frames corresponding to each target object.
Description
Technical Field
The application belongs to the technical field of video, and in particular relates to a video processing method and apparatus.
Background
As mobile phone cameras become more and more capable, users record video with their phones in an ever wider range of scenes. The phone video recording function includes fast-motion recording and slow-motion recording, through which the recorded subject can be accelerated or decelerated.

In the prior art, the fast-motion or slow-motion recording function can only be applied uniformly to the whole picture, so that every object moves faster in a fast-motion clip and every object moves slower in a slow-motion clip. In a video recorded with a single fast-motion or slow-motion function, however, some objects may end up moving too fast or too slow, and this uncoordinated motion degrades the video playback effect.
Disclosure of Invention
The embodiment of the application aims to provide a video processing method and apparatus, which can solve the problem in the prior art of poor video playback caused by uncoordinated object motion.
In a first aspect, an embodiment of the present application provides a video processing method, where the method includes:
determining the motion speed of each object in an original video based on a plurality of image frames included in the original video;
determining an object with the motion speed meeting the target condition in the objects as a reference object;
According to the motion speed of the reference object and the motion speed of the target object, the number of image frames contained in the original video is adjusted to obtain image frames corresponding to the target object; the target object is any object except the reference object in the objects, and the number of image frames corresponding to the target object is matched with the moving speed of the reference object and the moving speed of the target object;
and generating a target video according to the plurality of image frames and the image frames corresponding to the target objects.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the first determining module is used for determining the motion speed of each object in the original video based on a plurality of image frames included in the original video;
The second determining module is used for determining an object with the motion speed meeting the target condition in the objects as a reference object;
the adjusting module is used for adjusting the number of the image frames contained in the original video according to the movement speed of the reference object and the movement speed of the target object to obtain the image frames corresponding to the target object; the number of the image frames corresponding to the target object is matched with the movement speed of the reference object and the movement speed of the target object;
and the generating module is used for generating a target video according to the plurality of image frames and the image frames corresponding to the target objects.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory storing a program or instructions executable on the processor, the program or instructions implementing the steps of the video processing method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the video processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the video processing method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executed by at least one processor to implement the steps of the video processing method as described in the first aspect.
In the embodiment of the application, the motion speed of each object in the original video is determined based on a plurality of image frames included in the original video; an object whose motion speed meets the target condition is determined as a reference object; the number of image frames contained in the original video is adjusted according to the motion speed of the reference object and the motion speed of the target object to obtain the image frames corresponding to the target object, where the target object is any object other than the reference object, and the number of its corresponding image frames matches the motion speed of the reference object and the motion speed of the target object; and the target video is generated according to the plurality of image frames and the image frames corresponding to each target object. In this way, the object whose motion speed meets the target condition serves as a reference, and the image frames corresponding to each target object are obtained based on the motion speed of the reference object and that of the target object, so that the number of image frames corresponding to each target object matches the two speeds. When the target video is generated from the plurality of image frames and the image frames corresponding to each target object, the motion speeds of the objects in the target video are therefore better coordinated, which avoids to a certain extent the poor video effect caused by uncoordinated motion and improves the video playing effect.
Drawings
FIG. 1 is a schematic diagram showing steps of a video processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating steps of another video processing method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and in the claims are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of such objects is not limited; for example, there may be one or more first objects. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The video processing method provided by the embodiment of the application is described in detail below by means of specific embodiments with reference to the accompanying drawings.
An embodiment of the present application provides a video processing method, as shown in fig. 1, where the video processing method includes:
Step S1, determining the motion speed of each object in the original video based on a plurality of image frames included in the original video.
It should be noted that video recording captures one image frame at a fixed time interval, for example one frame every 1/30 second, that is, 30 frames per second, so as to obtain a plurality of image frames. The captured image frames are then packed and compressed to generate a video file. By decompressing the video file, the plurality of image frames used to generate it can be recovered.
In the embodiment of the application, the original video can be a video file shot by mobile terminals such as mobile phones and tablets, or shooting equipment such as cameras. The plurality of image frames included in the original video may be obtained by decompressing the original video into an image of one frame by one frame. The plurality of image frames may be arranged in a certain order, and the arrangement order of the plurality of image frames may be a photographing order of the plurality of image frames in the original video. The object in the original video may be an object whose picture content is displayed in a plurality of image frames, for example, sun, tree, pedestrian, or the like.
According to the embodiment of the application, the motion trail of each object across the plurality of image frames can be obtained from the position change of that object in the plurality of image frames, and the motion time of each object can be determined from the shooting interval of the plurality of image frames; the motion speed of each object is then calculated from its motion trail and motion time, thereby determining the motion speed of each object in the original video. Specifically, the motion trail of any object may be calculated from the change of the object's coordinate positions across the plurality of image frames, and the motion time of the object may be calculated from the frame rate of the original video or the shooting interval of the plurality of image frames; other manners of obtaining the motion trail and the motion time may also be adopted, which is not limited in the embodiment of the present application.
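The trail-and-time scheme described above can be sketched in Python (the function name and the centroid-based object representation are illustrative assumptions, not part of the patent):

```python
import math

def motion_speed(positions, frame_interval):
    """Estimate an object's motion speed as total path length divided
    by elapsed time, per the motion-trail / motion-time scheme above.

    positions: per-frame (x, y) centroid coordinates of one object.
    frame_interval: seconds between consecutive frames, e.g. 1 / 30.
    """
    # Total path length: sum of distances between consecutive centroids.
    path = sum(math.dist(positions[i], positions[i + 1])
               for i in range(len(positions) - 1))
    # Elapsed time across the frames in which the object appears.
    elapsed = (len(positions) - 1) * frame_interval
    return path / elapsed if elapsed > 0 else 0.0
```

An object moving 5 pixels per frame at 30 frames per second, for instance, yields a speed of 150 pixels per second under this estimate.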
And S2, determining the object with the motion speed meeting the target condition in the objects as a reference object.
In the embodiment of the present application, the target condition may be that a difference between the movement speed of the object and the reference speed is a preset value. The preset value may be zero or a fixed value, which is not limited in the embodiment of the present application. The reference speed may be an arithmetic average corresponding to the movement speed of each object. Or the reference speed may be a weighted average corresponding to the motion speed of each object, specifically, the weight corresponding to any object may be set according to the actual requirement, the weighted average may be calculated according to the motion speed of each object and the weight corresponding to each object, and the weighted average is used as the reference speed.
For example, if the sun moves very slowly and a pedestrian moves fast, the weight corresponding to the sun may be set to 10 and the weight corresponding to the pedestrian to 1, thereby reducing the influence of an extremely fast or extremely slow object on the reference speed.
If the number of objects that meet the target condition is equal to or greater than 2, the movement speed of the reference object may be determined according to the movement speeds of all the objects that meet the target condition, specifically, an average speed corresponding to the movement speeds of all the objects that meet the target condition may be calculated, and the average speed may be used as the movement speed of the reference object.
Alternatively, the target condition may be that the absolute value of the difference between the movement speed of the object and the reference speed is the smallest. In the embodiment of the application, according to the difference value between the motion speed and the reference speed of each object, the object with the smallest absolute value of the difference value can be used as the reference object. The movement speed of the reference object refers to the movement speed of the object whose absolute value of the difference is smallest.
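The reference-object selection just described, under the weighted-average variant, can be sketched as follows (the function name and list-based interface are assumptions for illustration):

```python
def pick_reference(speeds, weights=None):
    """Return (index, speed) of the reference object: the object whose
    motion speed is closest to the (weighted) average of all speeds."""
    if weights is None:
        weights = [1.0] * len(speeds)  # equal weights -> arithmetic mean
    ref_speed = sum(v * w for v, w in zip(speeds, weights)) / sum(weights)
    # The smallest absolute difference from the reference speed wins.
    index = min(range(len(speeds)), key=lambda i: abs(speeds[i] - ref_speed))
    return index, speeds[index]
```

With speeds [1, 5, 9] and equal weights, the reference speed is 5 and the second object is chosen as the reference object.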
Step S3, according to the motion speed of the reference object and the motion speed of the target object, the number of image frames contained in the original video is adjusted, and the image frames corresponding to the target object are obtained; the target object is any object except the reference object in the objects, and the number of image frames corresponding to the target object is matched with the moving speed of the reference object and the moving speed of the target object.
In the embodiment of the application, for any target object other than the reference object, the difference between the movement speed of the reference object and the movement speed of the target object can be calculated. If the difference is positive, indicating that the target object moves slower than the reference object, the number of image frames included in the plurality of image frames may be proportionally reduced according to the proportional relationship between the movement speed of the reference object and that of the target object, and the reduced plurality of image frames may be regarded as the image frames corresponding to the target object. If the difference is negative, indicating that the target object moves faster than the reference object, the number of image frames included in the plurality of image frames may be proportionally increased according to the same proportional relationship, and the increased plurality of image frames may be regarded as the image frames corresponding to the target object.
In the embodiment of the present application, the number of image frames corresponding to the target object may be the number of image frames remaining after the number of image frames contained in the plurality of image frames is proportionally reduced, or the number of all image frames after the number of image frames contained in the plurality of image frames is proportionally increased. Therefore, the number of image frames corresponding to the target object matches the proportional relationship of the movement speed of the reference object and the movement speed of the target object, so that the number of image frames corresponding to the target object matches the movement speed of the reference object and the movement speed of the target object.
The image frames corresponding to the target object may be Z image frames arranged in a certain order, where Z is a positive integer, and the arrangement order of the Z image frames may be determined according to the shooting order of the plurality of image frames. Specifically, denote the number of the plurality of image frames as M. If M > Z, the arrangement order of the Z image frames may be consistent with the order of the corresponding frames among the plurality of image frames; if M < Z, the arrangement order of the Z image frames may be determined according to the arrangement order of the plurality of image frames and the number of frames added in proportion.
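The matching between the frame count Z and the speed ratio can be expressed as a short sketch (the exact rounding rule is an assumption; the text only specifies proportionality):

```python
def target_frame_count(m, v_ref, v_obj):
    """Frame count Z for a target object, proportional to v_obj / v_ref:
    Z < m when the target is slower than the reference (frames dropped),
    Z > m when it is faster (frames inserted).

    m: number of frames M in the original video; v_ref, v_obj: motion
    speeds of the reference object and the target object."""
    return max(1, round(m * v_obj / v_ref))
```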
And S4, generating a target video according to the plurality of image frames and the image frames corresponding to the target objects.
In the embodiment of the application, for the image frames corresponding to any target object, the image frames corresponding to the target object and the plurality of image frames are paired one-to-one according to their respective arrangement orders; the paired image frames are then fused to obtain new image frames, and the new image frames are taken as the latest image frames corresponding to the target object. Specifically, the sub-image of the target object in an image frame corresponding to the target object can be obtained by matting, and the image portion at the corresponding position in the paired image frame is then replaced according to the sub-image of the target object, so that the paired image frames are fused into a new image.
In the embodiment of the application, the image frames corresponding to the target objects can be sequentially fused into a plurality of image frames, and specifically, the sub-images contained in the image frame corresponding to the first target object can be fused into a plurality of image frames in a matting and replacing mode, so as to obtain a plurality of image frames after primary fusion. And then merging the sub-images contained in the image frames corresponding to the second target object into a plurality of image frames after primary merging in a matting and replacing mode, so that the image frames corresponding to the target objects are merged into the plurality of image frames in sequence, and a plurality of image frames which are all merged into the image frames corresponding to the target objects are obtained. And finally, compressing and generating a video file by utilizing video compression software according to all the fused multiple image frames to serve as a target video.
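The matting-and-replacement fusion described above amounts to masked pixel replacement; a rough NumPy sketch follows (array shapes and names are assumptions, not from the patent):

```python
import numpy as np

def composite(base, overlay, mask):
    """Fuse two paired frames: wherever `mask` is True, replace the
    pixels of `base` with the target object's pixels from `overlay`.

    base, overlay: H x W x C image arrays; mask: H x W boolean array
    marking the target object's sub-image obtained by matting.
    """
    out = base.copy()
    out[mask] = overlay[mask]  # paste the matted sub-image into place
    return out
```

Repeating this per target object, in sequence, yields the fully fused frames from which the target video is compressed.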
In the embodiment of the application, the motion speed of each object in the original video is determined based on a plurality of image frames included in the original video; an object whose motion speed meets the target condition is determined as a reference object; the number of image frames contained in the original video is adjusted according to the motion speed of the reference object and the motion speed of the target object to obtain the image frames corresponding to the target object, where the target object is any object other than the reference object, and the number of its corresponding image frames matches the motion speed of the reference object and the motion speed of the target object; and the target video is generated according to the plurality of image frames and the image frames corresponding to each target object. In this way, the object whose motion speed meets the target condition serves as a reference, and the image frames corresponding to each target object are obtained based on the motion speed of the reference object and that of the target object, so that the number of image frames corresponding to each target object matches the two speeds. When the target video is generated from the plurality of image frames and the image frames corresponding to each target object, the motion speeds of the objects in the target video are therefore better coordinated, which avoids to a certain extent the poor video effect caused by uncoordinated motion and improves the video playing effect.
Optionally, step S3 may include the steps of:
And S31, performing frame extraction processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object under the condition that the motion speed of the target object is smaller than the motion speed of the reference object, so as to obtain the image frames corresponding to the target object.
In the embodiment of the application, the difference between the motion speed of the reference object and the motion speed of the target object can be calculated according to the motion speed of the reference object and the motion speed of the target object, and if the difference is positive, the motion speed of the target object is smaller than the motion speed of the reference object, which indicates that the target object moves slower than the reference object.
In the embodiment of the application, when the movement speed of the target object is smaller than that of the reference object, the motion speed ratio of the reference object to the target object is calculated according to the two speeds; this ratio is a value greater than 1. Frames are extracted from the plurality of image frames according to the value of the motion speed ratio, and the extracted images are arranged in their corresponding order among the plurality of image frames as the image frames corresponding to the target object. The integer part of the motion speed ratio may be used as the frame extraction interval.
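As a sketch, frame extraction with the integer part of the speed ratio as the sampling step (the function name and list interface are illustrative assumptions):

```python
def extract_frames(frames, v_ref, v_obj):
    """Frame extraction for a target object slower than the reference:
    keep one frame out of every r, where r is the integer part of the
    motion speed ratio v_ref / v_obj (greater than 1 in this branch)."""
    r = int(v_ref / v_obj)  # integer part of the speed ratio
    # Sample every r-th frame, preserving the original order.
    return list(frames[::r]) if r > 1 else list(frames)
```

With the reduced sequence, the slow object is displayed over fewer frames and therefore appears to move faster, toward the reference pace.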
And step S32, under the condition that the movement speed of the target object is not less than the movement speed of the reference object, performing frame interpolation processing on the plurality of image frames according to the movement speed of the reference object and the movement speed of the target object to obtain the image frames corresponding to the target object.
In the embodiment of the application, the difference between the motion speed of the reference object and the motion speed of the target object can be calculated according to the motion speed of the reference object and the motion speed of the target object, and if the difference is zero or negative, the motion speed of the target object is not less than the motion speed of the reference object, which indicates that the motion speed of the target object is consistent with or faster than the motion speed of the reference object.
In the embodiment of the application, when the movement speed of the target object is not less than that of the reference object, the motion speed ratio of the target object to the reference object is calculated according to the two speeds; this ratio is a value greater than or equal to 1. Frames can be interpolated into the plurality of image frames according to the value of the motion speed ratio; the image frames used for interpolation can be obtained by copying the image frame before or after the interpolation position, and the interpolated sequence is taken as the image frames corresponding to the target object. The integer part of the motion speed ratio may be used as the number of inserted frames.
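A corresponding sketch of frame insertion by copying (copying the preceding frame is one of the two options the text allows; names are assumptions):

```python
def insert_frames(frames, v_ref, v_obj):
    """Frame insertion for a target object no slower than the reference:
    repeat each frame n times, where n is the integer part of the
    motion speed ratio v_obj / v_ref, so the fast object plays back
    at the reference pace."""
    n = max(1, int(v_obj / v_ref))  # integer part of the speed ratio
    out = []
    for f in frames:
        out.extend([f] * n)  # copies serve as the inserted frames
    return out
```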
In the embodiment of the application, under the condition that the movement speed of the target object is smaller than that of the reference object, frame extraction processing is carried out on the plurality of image frames according to the movement speed of the reference object and the movement speed of the target object, so as to obtain the image frames corresponding to the target object;
And under the condition that the motion speed of the target object is not less than that of the reference object, performing frame interpolation processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain the image frames corresponding to the target object. In this way, objects moving slower than the reference object can be conveniently distinguished from objects moving no slower than it. By performing frame extraction processing on the plurality of image frames, the image frames corresponding to an object slower than the reference object are obtained, so that the motion speed of that object as displayed in its corresponding image frames is increased. By performing frame interpolation processing on the plurality of image frames, the image frames corresponding to a target object whose motion speed is not less than that of the reference object are obtained, so that the motion speed of that target object as displayed in its corresponding image frames is reduced. A target object that moves fast relative to the reference object is thus slowed down to a certain extent, and a target object that moves slowly relative to the reference object is sped up, so that no single object moves too fast or too slow and the motion speeds of the objects are better coordinated.
Optionally, step S31 may include the steps of:
step S311, determining a frame extraction ratio according to the motion speed of the reference object and the motion speed of the target object.
In the embodiment of the application, the motion speed ratio of the reference object to the target object can be determined according to the motion speed of the reference object and the motion speed of the target object, and this ratio is taken as the frame extraction ratio. Specifically, refer to the following formula (1):

r_i = v_0 / v_i (1)

where r_i represents the frame extraction ratio corresponding to the i-th object, v_0 represents the motion speed of the reference object, and v_i represents the motion speed of the i-th object. The calculation result of formula (1) retains only the integer part.
Step S312, extracting an image frame from the plurality of image frames according to the frame extraction ratio as a first target image frame.
In the embodiment of the application, the plurality of image frames can be divided into groups according to the frame extraction ratio, and the first image frame of each group is extracted as a first target image frame; that is, according to the frame extraction ratio r_i, one image frame is extracted from every r_i image frames among the plurality of image frames, and the extracted image frames are taken as the first target image frames.
And step S313, forming an image frame corresponding to the target object according to the first target image frame.
In the embodiment of the application, the first target image frames can be arranged according to the extraction sequence, and the arranged first target image frames are used as the image frames corresponding to the target object.
In the embodiment of the application, the frame extraction proportion is determined according to the motion speed of the reference object and the motion speed of the target object; extracting an image frame from the plurality of image frames according to the frame extraction proportion as a first target image frame; and forming an image frame corresponding to the target object according to the first target image frame. Therefore, the frame extraction processing can be conveniently carried out on the plurality of image frames according to the frame extraction proportion, the frame extraction processing efficiency is improved, and the image frames corresponding to the target object are conveniently obtained according to the extracted first target image frames, so that the acquisition efficiency of the image frames corresponding to the target object is improved.
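As a minimal sketch of steps S311 to S313 (the helper name and list representation are illustrative, not from the patent text), the integer ratio r is computed from the two motion speeds, and the first frame of every group of r frames is kept:

```python
def extract_frames(frames, v_ref, v_obj):
    """Frame-extraction sketch for steps S311-S313 (hypothetical helper).

    r is the integer part of v_ref / v_obj, per formula (1); keeping the
    first frame of every group of r frames speeds up the displayed motion
    of a target object that moves slower than the reference object.
    """
    r = int(v_ref / v_obj)  # integer part of the speed ratio
    return frames[::r]      # first frame of each group of r frames
```

For example, a target object moving at half the reference speed gives r = 2, so every second frame is kept and its displayed motion speed doubles.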
Optionally, step S32 may include the steps of:
step S321, determining the frame inserting proportion according to the motion speed of the reference object and the motion speed of the target object.
In the embodiment of the application, the ratio of the motion speed of the target object to the motion speed of the reference object can be determined according to the motion speed of the reference object and the motion speed of the target object, and this ratio is taken as the frame insertion ratio x_i of the i-th object.
And step S322, acquiring an image frame from the plurality of image frames according to the frame inserting proportion as a second target image frame.
In the embodiment of the present application, the plurality of image frames may be divided into groups according to the frame insertion ratio, and the first image frame of each group is copied as a second target image frame; that is, according to the frame insertion ratio x_i, one image frame is copied from every x_i image frames among the plurality of image frames, and the copied image frames are taken as the second target image frames.
Step S323, inserting the second target image frame into the plurality of image frames, and forming an image frame corresponding to the target object according to the plurality of image frames after the second target image frame is inserted.
In the embodiment of the application, the second target image frames can be sequentially inserted into the plurality of image frames according to the corresponding copying positions in the plurality of image frames, and the plurality of image frames after the frame insertion are used as the image frames corresponding to the target object. Specifically, for any second target image frame, the second target image frame may be inserted in front of or behind the image frame corresponding to the copy position according to the copy position of the second target image frame in the plurality of image frames, so that the second target image frame is inserted in the plurality of image frames. The present embodiments are to be considered in all respects as illustrative and not restrictive.
In the embodiment of the application, the frame inserting proportion is determined according to the motion speed of the reference object and the motion speed of the target object; acquiring an image frame from the plurality of image frames according to the frame insertion proportion as a second target image frame; and inserting the second target image frame into the plurality of image frames, and forming an image frame corresponding to the target object according to the plurality of image frames after the second target image frame is inserted. Therefore, the frame inserting process can be conveniently carried out on the plurality of image frames according to the frame inserting proportion, the frame inserting process efficiency is improved, the image frames corresponding to the target object are conveniently obtained according to the obtained second target image frames, and therefore the obtaining efficiency of the image frames corresponding to the target object is improved.
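Steps S321 to S323 can be sketched as follows (hypothetical helper name; the patent allows inserting the copy in front of or behind its source frame, and this sketch inserts it behind):

```python
def insert_frames(frames, v_ref, v_obj):
    """Frame-insertion sketch for steps S321-S323 (hypothetical helper).

    x is the integer part of v_obj / v_ref; the first frame of every
    group of x frames is copied and the copy is inserted right after
    its source, slowing the displayed motion of a fast-moving object.
    """
    x = int(v_obj / v_ref)
    out = []
    for i, frame in enumerate(frames):
        out.append(frame)
        if i % x == 0:          # first frame of each group of x frames
            out.append(frame)   # duplicate inserted behind the original
    return out
```

A target object moving at twice the reference speed gives x = 2, so every second frame is duplicated and its displayed motion slows accordingly.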
Optionally, step S1 may include the steps of:
step S11, obtaining image coordinate information of each object in a plurality of image frames included in the original video.
In the embodiment of the application, each image frame included in the original video can be divided into areas according to the positions of the objects, each area containing only one object, and the object in each area is labeled with an identifier. For example, if the image frames contain the sun, a tree, and a pedestrian, the identifiers object A, object B, and object C can be used to label the sun, the tree, and the pedestrian respectively.
In the embodiment of the application, a coordinate system can be constructed for each of the plurality of image frames; specifically, the lower left corner of the image frame can be taken as the origin of coordinates, and the two sides adjacent to the origin can be taken as the coordinate axes. For each object in any image frame, the coordinates of the center position of the object in the coordinate system corresponding to that image frame can be obtained according to the position of the object in the image frame, and used as the image coordinates of the object in that image frame. Correspondingly, for any object, the image coordinates of the object in the coordinate system of each of the plurality of image frames are obtained respectively and assembled into the image coordinate information of the object, thereby obtaining the image coordinate information of each object in the plurality of image frames included in the original video. For example, for object A, the coordinates (Ax_i, Ay_i) of object A in the i-th image frame are acquired, and an image coordinate information matrix corresponding to object A is generated from these image coordinates, as shown in the following matrix (2):

A = [[Ax_1, Ay_1], [Ax_2, Ay_2], ..., [Ax_n, Ay_n]] (2)

where matrix A represents the image coordinate information matrix corresponding to object A, Ax_1 represents the x coordinate of object A in the 1st image frame, Ay_1 represents the y coordinate of object A in the 1st image frame, and so on; Ax_n represents the x coordinate of object A in the n-th image frame, and Ay_n represents the y coordinate of object A in the n-th image frame. The examples are presented herein by way of illustration only and are not limiting in any way.
And step S12, calculating the speed of each object corresponding to each image frame in the plurality of image frames according to the image coordinate information of each object, the frame rate of the original video and the plurality of image frames.
In the embodiment of the application, the frame numbers of the plurality of image frames, such as f_1, f_2, f_3, ..., can be obtained according to the shooting sequence of the plurality of image frames. The speed of each object corresponding to each image frame may be calculated from the frame numbers of the plurality of image frames, the image coordinate information of each object, and the frame rate of the original video. Specifically, for any one of the objects, refer to the following formula (3):

v = fps x sqrt((x_2 - x_1)^2 + (y_2 - y_1)^2) / (f_2 - f_1) (3)

where fps (frames per second) represents the frame rate of the original video, x_1 and y_1 represent the x and y coordinates of the object's center point in the previous image frame, x_2 and y_2 represent the x and y coordinates of the object's center point in the next image frame, f_1 represents the frame number of the previous image frame, and f_2 represents the frame number of the next image frame. For example, for object A, the speed v_1 of object A in the 1st frame and the speed v_2 of object A in the 2nd frame can be obtained by formula (3), and in turn v_3, v_4, ..., v_n.
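A minimal Python sketch of formula (3), with illustrative parameter names (the patent does not name this helper): the per-frame speed is the Euclidean displacement of the object's center point divided by the elapsed time between the two frames.

```python
import math

def frame_speed(p_prev, p_next, f_prev, f_next, fps):
    """Per-frame speed of one object, following formula (3).

    p_prev and p_next are (x, y) center coordinates in two frames;
    the elapsed time between them is (f_next - f_prev) / fps.
    """
    (x1, y1), (x2, y2) = p_prev, p_next
    return math.hypot(x2 - x1, y2 - y1) * fps / (f_next - f_prev)
```

For instance, a center point moving from (0, 0) to (3, 4) between two consecutive frames of a 30 fps video covers 5 coordinate units in 1/30 s, giving a speed of 150 units per second.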
And step S13, determining the movement speed of each object according to the speed of each object corresponding to each image frame.
In the embodiment of the application, the mean of the speeds of each object can be calculated according to the speed of the object corresponding to each image frame, and this mean is taken as the motion speed of that object. Specifically, for any one of the objects, refer to the following formula (4):

V = (v_1 + v_2 + ... + v_N) / N (4)

where v_i denotes the speed of the object corresponding to the i-th image frame, and N denotes the number of image frames in the plurality of image frames. For example, the average speed V_A corresponding to object A can be calculated with reference to formula (4), and V_A is taken as the motion speed of object A.
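Formula (4) reduces to a plain arithmetic mean; a one-line sketch (hypothetical helper name):

```python
def motion_speed(per_frame_speeds):
    """Motion speed of one object per formula (4): the arithmetic mean
    of its per-frame speeds over the N image frames."""
    return sum(per_frame_speeds) / len(per_frame_speeds)
```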
In the embodiment of the application, the image coordinate information of each object in a plurality of image frames included in the original video is obtained; calculating the speed of each object corresponding to each image frame in the plurality of image frames according to the image coordinate information of each object, the frame rate of the original video and the plurality of image frames; and determining the movement speed of each object according to the speed of each object corresponding to each image frame. In this way, the speed of each object corresponding to each image frame can be conveniently calculated according to the image coordinate information, the frame rate of the original video and the plurality of image frames by acquiring the image coordinate information of each object, so that the motion speed of each object in the plurality of image frames can be conveniently determined according to the speed of each object corresponding to each image frame.
Optionally, step S2 may include the steps of:
And S21, determining a reference speed according to the movement speed of each object and the preset weight.
In the embodiment of the application, the preset weight corresponding to any object can be determined according to the motion speed of the object relative to other objects, for example, the sun moves very slowly relative to the pedestrian, if the preset weight corresponding to the pedestrian is 1, the preset weight corresponding to the sun can be set to 10, so that the influence of the motion speed of the object moving very fast or very slow on the reference speed value can be reduced.
In the embodiment of the application, the motion speed corresponding to each object can be multiplied by its preset weight, and the products summed and averaged to obtain a weighted average of the motion speeds of the objects, which is taken as the reference speed. Specifically, refer to the following formulas (5) and (6):

S = a_1 x V_1 + a_2 x V_2 + ... + a_N x V_N (5)

V_t = S / N (6)

where V_t represents the reference speed, V_i represents the motion speed corresponding to the i-th object, a_i represents the preset weight corresponding to the i-th object, and N represents the total number of objects.
And step S22, determining an object with the smallest absolute value of the difference value between the motion speed and the reference speed in the objects as a reference object.
In the embodiment of the application, the difference between the motion speed of each object and the reference speed is calculated, and the object whose difference has the smallest absolute value is selected as the reference object. For example, if |V_A - V_t| is the smallest among the objects, object A is selected as the reference object.
In the embodiment of the application, the reference speed is determined according to the movement speed of each object and the preset weight; and determining the object with the smallest absolute value of the difference value between the motion speed and the reference speed in the objects as a reference object. Therefore, the influence of the movement speed of each object on the reference speed can be adjusted through the preset weight, so that the reference speed is more coordinated with the movement speed of each object. Further, according to the absolute value of the difference between the reference speed and the motion speed of each object, the reference object is selected from the objects, and the reference speed is coordinated with the motion speed of each object after being adjusted by the preset weight, so that the object with the smallest absolute value of the difference between the motion speed and the reference speed in each object is determined as the reference object, and the reference object has more reference value.
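Steps S21 and S22 together can be sketched as below (the dictionaries and helper name are illustrative assumptions, not from the patent): the reference speed V_t is the weighted mean of formulas (5) and (6), and the reference object is the one whose motion speed is closest to V_t.

```python
def pick_reference(speeds, weights):
    """Steps S21-S22 sketch (hypothetical helper).

    speeds maps each object's label to its motion speed, weights to its
    preset weight; returns the reference speed V_t and the label of the
    object whose speed has the smallest |V - V_t|.
    """
    n = len(speeds)
    v_t = sum(weights[k] * speeds[k] for k in speeds) / n  # formulas (5)/(6)
    ref = min(speeds, key=lambda k: abs(speeds[k] - v_t))  # step S22
    return v_t, ref
```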
Optionally, step S4 may include the steps of:
step S41, for any one of the target objects, acquiring a sub-image and an image coordinate of the target object according to an image frame corresponding to the target object.
In the embodiment of the application, for any target object, according to the image frames corresponding to the target object, the image region containing the target object is cropped out of each image frame. The cropped image region is taken as the sub-image of the target object, and the coordinates of the center position of the cropped region are taken as the image coordinates of the target object, thereby obtaining the sub-image and image coordinates of the target object for each of the image frames corresponding to the target object.
Step S42, determining target coordinates matched with the image coordinates in the plurality of image frames according to the image coordinates of the target object, and updating sub-images corresponding to the target coordinates in the plurality of image frames into sub-images of the target object to obtain the latest image frame corresponding to the target object; wherein the latest image frame includes a plurality of updated image frames.
In the embodiment of the application, for the image frame corresponding to any target object, the image frame corresponding to the target object and the plurality of image frames can be in one-to-one correspondence according to the respective arrangement sequence. For any one of the image frames corresponding to the target object, the coordinates, which are consistent with the image coordinates of the target object, in the image frames consistent with the image frames in sequence in the plurality of image frames can be determined according to the image coordinates of the target object in the image frames, and the coordinates consistent with the image coordinates are taken as the target coordinates corresponding to the image frames. And determining the target coordinates matched with the image coordinates in the plurality of image frames according to the image coordinates of the target object corresponding to each image frame in the same way.
In the embodiment of the application, for the image frame corresponding to any target object, the image frame corresponding to the target object and the plurality of image frames can be in one-to-one correspondence according to the respective arrangement sequence. For any one of the plurality of image frames, positioning can be performed according to the target coordinates corresponding to the image frame, and the sub-image corresponding to the target coordinates is determined as the sub-image to be updated. Then, determining the sub-image corresponding to the target coordinate at the corresponding image coordinate of the target object as the sub-image for updating, and replacing the sub-image to be updated with the sub-image for updating, so that the sub-image corresponding to the target coordinate in the image frame is updated as the sub-image corresponding to the target object. And finally, respectively updating the sub-images corresponding to the target coordinates in the plurality of image frames into the sub-images corresponding to the target object in the same way, and taking the updated plurality of image frames as the latest image frames corresponding to the target object.
And step S43, generating the target video according to the latest image frames corresponding to the target objects.
In the embodiment of the application, after the latest image frames corresponding to the first target object are obtained, they can be used as the reference image frames of the second target object: the target coordinates matching the image coordinates of the second target object are determined in the reference image frames, and the sub-images at those target coordinates are updated to the sub-images of the second target object, yielding the latest image frames corresponding to the second target object. Proceeding in this way, the latest image frames of the last target object are finally obtained, and the images in these latest image frames are compressed by video compression software to generate a video file as the target video. For example, the latest image frames of the pedestrian are taken as reference image frames, the target coordinates matching the image coordinates of the sun are determined in the reference image frames, and the sub-images at those target coordinates are updated to the sub-images of the sun, yielding the latest image frames corresponding to the sun.
In the embodiment of the application, for any target object, a sub-image and image coordinates of the target object are acquired according to the image frame corresponding to the target object; determining target coordinates matched with the image coordinates in the plurality of image frames according to the image coordinates of the target object, and updating sub-images corresponding to the target coordinates in the plurality of image frames into the sub-images of the target object to obtain the latest image frame corresponding to the target object; wherein the latest image frame comprises a plurality of updated image frames; and generating the target video according to the latest image frames corresponding to the target objects. In this way, the sub-images corresponding to the target coordinates in the plurality of image frames can be conveniently updated through the sub-images and the image coordinates corresponding to the target objects, so that the sub-images of the target objects in the plurality of image frames are matched with the image frames corresponding to the target objects. In addition, as the latest image frames comprise a plurality of updated image frames, the target video is generated according to the latest image frames corresponding to all target objects, so that the sub-images of all target objects in the target video are matched with the image frames corresponding to all target objects, the movement speeds of all objects in the target video are more coordinated, the problem of poor video effect caused by movement disorder can be avoided to a certain extent, and the playing effect of the video is improved.
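A minimal pure-Python sketch of the sub-image update in step S42, representing a frame as a 2-D list of pixel values (the helper name and representation are illustrative assumptions):

```python
def paste_subimage(frame, sub, center):
    """Step S42 sketch: replace the region of `frame` centred at the
    target coordinate `center` (row, col) with the target object's
    sub-image `sub`. Both are 2-D lists of pixel values."""
    h, w = len(sub), len(sub[0])
    row0 = center[0] - h // 2
    col0 = center[1] - w // 2
    out = [row[:] for row in frame]      # copy; leave the input intact
    for dr in range(h):
        for dc in range(w):
            out[row0 + dr][col0 + dc] = sub[dr][dc]
    return out
```

Applying this once per frame pair, with the sub-image taken from the target object's (frame-extracted or frame-inserted) sequence, produces the latest image frames described above.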
Optionally, before step S43, the method further includes:
Step S5, clipping the latest image frames corresponding to the target objects according to the target quantity to obtain clipped latest image frames corresponding to the target objects; the number of the image frames contained in the latest image frames after clipping is matched with the target number, the target number is the number of the image frames contained in the image frames corresponding to the reference object, and the reference object is the target object with the minimum number of the image frames contained in the image frames corresponding to the target objects.
In the embodiment of the application, because the latest image frames corresponding to each target object have undergone frame extraction or frame insertion processing, the numbers of image frames they contain may differ: frame extraction processing reduces the number of image frames in the corresponding latest image frames relative to the plurality of image frames, while frame insertion processing increases it.
In the embodiment of the application, the target object with the least number of image frames contained in the latest image frames corresponding to each target object can be used as the reference object, and the number of image frames contained in the latest image frames corresponding to the reference object can be used as the target number correspondingly. And then adjusting the number of the image frames contained in the latest image frames corresponding to each target object according to the target number, specifically, reserving the number of the image frames which are consistent with the target number from the first image frame in each latest image frame, and deleting other image frames, namely, cutting the latest image frames corresponding to each target object, so that the number of the image frames contained in the cut latest image frames is consistent with the target number, and the number of the image frames contained in the cut latest image frames is matched with the target number. The method comprises the steps of cutting each latest image frame according to the number of targets, reserving a part which is common to the latest image frames corresponding to each target object, and taking the reserved common part of each latest image frame as the cut latest image frame corresponding to each target object.
In the embodiment of the application, the latest image frames corresponding to the target objects are clipped according to the target number to obtain the clipped latest image frames corresponding to the target objects; the number of image frames contained in the clipped latest image frames matches the target number, the target number is the number of image frames contained in the image frames corresponding to the reference object, and the reference object is the target object whose corresponding image frames contain the fewest image frames. Since the target number is defined by this reference object, the image frames of the common part corresponding to the target number can be preserved in the latest image frames of each target object, and the image frames exceeding the target number can be clipped away, so that the number of image frames contained in the clipped latest image frames of each target object matches the target number.
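Step S5 amounts to truncating every sequence to the length of the shortest one; a short sketch (hypothetical helper name):

```python
def clip_to_target_number(latest_frame_sequences):
    """Step S5 sketch: keep, for every target object's latest image
    frames, only the leading portion whose length equals the shortest
    sequence (the target number), so all sequences line up."""
    target = min(len(seq) for seq in latest_frame_sequences)
    return [seq[:target] for seq in latest_frame_sequences]
```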
Fig. 2 is a flowchart illustrating another video processing method according to an embodiment of the present application. As shown in Fig. 2, the video processing method includes steps 601 to 604: data marking, per-frame motion speed calculation, motion speed tuning, and content fusion. Data marking divides each image frame included in the original video into areas according to the positions of the objects, each area containing only one object, and labels the object in each area with an identifier; for example, if the image frames contain the sun, a tree, and a pedestrian, object A, object B, and object C may represent the sun, the tree, and the pedestrian respectively. Per-frame motion speed calculation uses formula (3) to calculate the speed of the target object in each image frame from the frame numbers of the plurality of image frames, the image coordinate information of the target object, and the frame rate of the original video; then uses formula (4) to calculate the average speed of the target object from its per-frame speeds, taking that average as the motion speed of the target object; and finally determines the reference speed from the motion speed of each object and the preset weights using formulas (5) and (6).
Motion speed tuning determines a frame extraction ratio or a frame insertion ratio according to the motion speed of the reference object and the motion speed of the target object, and adjusts the number of image frames contained in the plurality of image frames according to that ratio to obtain the image frames corresponding to the target object. Content fusion puts the image frames corresponding to each target object into one-to-one correspondence with the plurality of image frames in sequence, then fuses them respectively to obtain new images, namely the latest image frames corresponding to each target object. The image frames corresponding to the target objects are then clipped according to the target number, the common part of the image frames corresponding to the target objects is retained, and video compression software compresses this common part to generate a video file as the target video. When the target video includes the sun and a pedestrian, playback shows a person running on the ground while the sun and moon alternate in the sky; fast-moving and slow-moving objects are organically fused, improving the playing effect of the video.
An embodiment of the present application provides a video processing apparatus, as shown in fig. 3, the apparatus 70 includes:
a first determining module 701, configured to determine a motion speed of each object in an original video based on a plurality of image frames included in the original video;
A second determining module 702, configured to determine an object whose movement speed meets a target condition among the objects as a reference object;
an adjusting module 703, configured to adjust the number of image frames included in the original video according to the motion speed of the reference object and the motion speed of the target object, so as to obtain an image frame corresponding to the target object; the number of the image frames corresponding to the target object is matched with the movement speed of the reference object and the movement speed of the target object;
And the generating module 704 is configured to generate a target video according to the plurality of image frames and the image frames corresponding to the target objects.
Optionally, the adjusting module 703 is specifically configured to:
Under the condition that the motion speed of the target object is smaller than that of the reference object, performing frame extraction processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain image frames corresponding to the target object;
And under the condition that the motion speed of the target object is not less than that of the reference object, performing frame interpolation processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain the image frames corresponding to the target object.
Optionally, the adjusting module 703 is specifically further configured to:
Determining a frame extraction proportion according to the motion speed of the reference object and the motion speed of the target object;
Extracting an image frame from the plurality of image frames according to the frame extraction proportion as a first target image frame;
And forming an image frame corresponding to the target object according to the first target image frame.
Optionally, the adjusting module 703 is specifically further configured to:
determining a frame inserting proportion according to the motion speed of the reference object and the motion speed of the target object;
acquiring an image frame from the plurality of image frames according to the frame insertion proportion as a second target image frame;
and inserting the second target image frame into the plurality of image frames, and forming an image frame corresponding to the target object according to the plurality of image frames after the second target image frame is inserted.
Optionally, the first determining module 701 is specifically configured to:
acquiring image coordinate information of each object in a plurality of image frames included in the original video;
Calculating the speed of each object corresponding to each image frame in the plurality of image frames according to the image coordinate information of each object, the frame rate of the original video and the plurality of image frames;
and determining the movement speed of each object according to the speed of each object corresponding to each image frame.

Optionally, the second determining module 702 is specifically configured to:
determining a reference speed according to the motion speed of each object and preset weights;
and determining the object with the smallest absolute value of the difference between the motion speed and the reference speed among the objects as the reference object.
Optionally, the generating module 704 is specifically configured to:
for any target object, acquiring a sub-image and image coordinates of the target object according to an image frame corresponding to the target object;
Determining target coordinates matched with the image coordinates in the plurality of image frames according to the image coordinates of the target object, and updating sub-images corresponding to the target coordinates in the plurality of image frames into the sub-images of the target object to obtain the latest image frame corresponding to the target object; wherein the latest image frame comprises a plurality of updated image frames;
and generating the target video according to the latest image frames corresponding to the target objects.
Optionally, the apparatus 70 further includes:
The clipping module is used for clipping, before the generating module generates the target video according to the latest image frames corresponding to the target objects, the latest image frames corresponding to each target object according to a target number, to obtain clipped latest image frames corresponding to the target objects. The number of image frames contained in the clipped latest image frames matches the target number; the target number is the number of image frames contained in the image frames corresponding to the reference object; and the reference object here is the target object with the smallest number of image frames among the image frames corresponding to the target objects.
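The clipping step truncates every target object's sequence to the target number, i.e. the smallest sequence length; a minimal sketch (the dict-of-lists representation is an assumption):

```python
def clip_to_target_number(frame_sequences):
    """Clip each object's latest image frames to the target number: the
    frame count of the object whose sequence contains the fewest frames,
    so that all sequences line up for video generation."""
    target_number = min(len(seq) for seq in frame_sequences.values())
    return {obj: seq[:target_number] for obj, seq in frame_sequences.items()}

sequences = {"person": [1, 2, 3, 4], "car": [5, 6, 7]}
clipped = clip_to_target_number(sequences)   # both trimmed to 3 frames
```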
The video processing device has the same advantages over the prior art as the video processing method described above, which are not repeated here.
The video processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
The video processing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video processing device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, and in order to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 4, an embodiment of the present application further provides an electronic device 80, including a processor 801 and a memory 802, where the memory 802 stores a program or instructions executable on the processor 801. When executed by the processor 801, the program or instructions implement each step of the video processing method embodiment and achieve the same technical effects, which are not repeated here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 5 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 90 includes, but is not limited to: radio frequency unit 901, network module 902, audio output unit 903, input unit 904, sensor 905, display unit 906, user input unit 907, interface unit 908, memory 909, and processor 910.
Those skilled in the art will appreciate that the electronic device 90 may also include a power source (e.g., a battery) for powering the various components; the power source may be logically connected to the processor 910 through a power management system, so that functions such as charge management, discharge management, and power consumption management are performed through the power management system. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components, which are not described in detail herein.
It should be appreciated that, in embodiments of the present application, the input unit 904 may include a graphics processor (Graphics Processing Unit, GPU) 9041 and a microphone 9042; the graphics processor 9041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes at least one of a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 909 may include a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically Erasable PROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (Static RAM, SRAM), a dynamic RAM (Dynamic RAM, DRAM), a synchronous DRAM (Synchronous DRAM, SDRAM), a double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), an enhanced SDRAM (Enhanced SDRAM, ESDRAM), a synchlink DRAM (Synchlink DRAM, SLDRAM), or a direct Rambus RAM (Direct Rambus RAM, DRRAM). The memory 909 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 910 may include one or more processing units. Optionally, the processor 910 integrates an application processor, which mainly handles operations involving the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor may alternatively not be integrated into the processor 910.
An embodiment of the present application also provides a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the video processing method embodiment described above and achieve the same technical effects, which are not repeated here.
The processor here is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip, which includes a processor and a communication interface coupled to the processor, the processor being configured to run programs or instructions to implement each process of the video processing method embodiment described above and achieve the same technical effects, which are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-on-chip, a chip system, a system-on-a-chip, or the like.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the video processing method embodiment described above, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware alone, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of protection of the claims, all of which fall within the protection of the present application.
Claims (8)
1. A method of video processing, the method comprising:
determining the motion speed of each object in an original video based on a plurality of image frames included in the original video;
determining an object with the motion speed meeting the target condition in the objects as a reference object;
adjusting the number of the image frames contained in the original video according to the motion speed of the reference object and the motion speed of a target object to obtain image frames corresponding to the target object; the target object is any object except the reference object among the objects, and the number of the image frames corresponding to the target object matches the motion speed of the reference object and the motion speed of the target object;
generating a target video according to the plurality of image frames and the image frames corresponding to the target objects;
the adjusting the number of the image frames contained in the original video according to the motion speed of the reference object and the motion speed of the target object to obtain the image frames corresponding to the target object includes:
Under the condition that the motion speed of the target object is smaller than that of the reference object, performing frame extraction processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain image frames corresponding to the target object;
and under the condition that the motion speed of the target object is not less than that of the reference object, performing frame interpolation processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain the image frames corresponding to the target object.
2. The method according to claim 1, wherein the performing frame extraction processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain an image frame corresponding to the target object includes:
Determining a frame extraction proportion according to the motion speed of the reference object and the motion speed of the target object;
Extracting an image frame from the plurality of image frames according to the frame extraction proportion as a first target image frame;
And forming an image frame corresponding to the target object according to the first target image frame.
3. The method according to claim 1, wherein the performing the frame interpolation process on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain the image frame corresponding to the target object includes:
determining a frame insertion proportion according to the motion speed of the reference object and the motion speed of the target object;
acquiring an image frame from the plurality of image frames according to the frame insertion proportion as a second target image frame;
and inserting the second target image frame into the plurality of image frames, and forming the image frames corresponding to the target object according to the plurality of image frames after the second target image frame is inserted.
4. The method of claim 1, wherein determining the motion speed of each object in the original video based on a plurality of image frames included in the original video comprises:
acquiring image coordinate information of each object in a plurality of image frames included in the original video;
calculating the speed of each object corresponding to each of the plurality of image frames according to the image coordinate information of the object, the frame rate of the original video, and the plurality of image frames;
and determining the motion speed of each object according to the speed of the object corresponding to each image frame.
5. A video processing apparatus, the apparatus comprising:
the first determining module is used for determining the motion speed of each object in the original video based on a plurality of image frames included in the original video;
The second determining module is used for determining an object with the motion speed meeting the target condition in the objects as a reference object;
The adjusting module is used for adjusting the number of the image frames contained in the original video according to the motion speed of the reference object and the motion speed of the target object to obtain the image frames corresponding to the target object; the target object is any object except the reference object among the objects, and the number of the image frames corresponding to the target object matches the motion speed of the reference object and the motion speed of the target object;
The generation module is used for generating a target video according to the plurality of image frames and the image frames corresponding to the target objects;
the adjusting module is specifically configured to:
Under the condition that the motion speed of the target object is smaller than that of the reference object, performing frame extraction processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain image frames corresponding to the target object;
and under the condition that the motion speed of the target object is not less than that of the reference object, performing frame interpolation processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain the image frames corresponding to the target object.
6. The apparatus of claim 5, wherein the adjustment module is further specifically configured to:
Determining a frame extraction proportion according to the motion speed of the reference object and the motion speed of the target object;
Extracting an image frame from the plurality of image frames according to the frame extraction proportion as a first target image frame;
And forming an image frame corresponding to the target object according to the first target image frame.
7. The apparatus of claim 5, wherein the adjustment module is further specifically configured to:
determining a frame insertion proportion according to the motion speed of the reference object and the motion speed of the target object;
acquiring an image frame from the plurality of image frames according to the frame insertion proportion as a second target image frame;
and inserting the second target image frame into the plurality of image frames, and forming the image frames corresponding to the target object according to the plurality of image frames after the second target image frame is inserted.
8. The apparatus of claim 5, wherein the first determining module is specifically configured to:
acquiring image coordinate information of each object in a plurality of image frames included in the original video;
calculating the speed of each object corresponding to each of the plurality of image frames according to the image coordinate information of the object, the frame rate of the original video, and the plurality of image frames;
and determining the motion speed of each object according to the speed of the object corresponding to each image frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310149440.2A CN116156250B (en) | 2023-02-21 | 2023-02-21 | Video processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116156250A CN116156250A (en) | 2023-05-23 |
CN116156250B true CN116156250B (en) | 2024-09-13 |
Family
ID=86355987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310149440.2A Active CN116156250B (en) | 2023-02-21 | 2023-02-21 | Video processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116156250B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108900771A (en) * | 2018-07-19 | 2018-11-27 | 北京微播视界科技有限公司 | A kind of method for processing video frequency, device, terminal device and storage medium |
CN110198412A (en) * | 2019-05-31 | 2019-09-03 | 维沃移动通信有限公司 | A kind of video recording method and electronic equipment |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2907761B2 (en) * | 1995-09-06 | 1999-06-21 | 日本電信電話株式会社 | Video encoding information creation device for real-time fast-forward playback |
JP4196451B2 (en) * | 1998-10-20 | 2008-12-17 | カシオ計算機株式会社 | Imaging apparatus and continuous image imaging method |
CN109819161A (en) * | 2019-01-21 | 2019-05-28 | 北京中竞鸽体育文化发展有限公司 | A kind of method of adjustment of frame per second, device, terminal and readable storage medium storing program for executing |
CN111327908B (en) * | 2020-03-05 | 2022-11-11 | Oppo广东移动通信有限公司 | Video processing method and related device |
CN111405199B (en) * | 2020-03-27 | 2022-11-01 | 维沃移动通信(杭州)有限公司 | Image shooting method and electronic equipment |
CN113067994B (en) * | 2021-03-31 | 2022-08-19 | 联想(北京)有限公司 | Video recording method and electronic equipment |
CN113837136B (en) * | 2021-09-29 | 2022-12-23 | 深圳市慧鲤科技有限公司 | Video frame insertion method and device, electronic equipment and storage medium |
CN114913471B (en) * | 2022-07-18 | 2023-09-12 | 深圳比特微电子科技有限公司 | Image processing method, device and readable storage medium |
2023-02-21: CN202310149440.2A (patent CN116156250B, status Active)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |