
CN116156079A - Video processing method, device, equipment and storage medium - Google Patents

Video processing method, device, equipment and storage medium

Info

Publication number
CN116156079A
CN116156079A
Authority
CN
China
Prior art keywords
images
display object
video
motion
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310157128.8A
Other languages
Chinese (zh)
Inventor
阎伟豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310157128.8A priority Critical patent/CN116156079A/en
Publication of CN116156079A publication Critical patent/CN116156079A/en
Priority to PCT/CN2024/077675 priority patent/WO2024174971A1/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 7/00: Television systems
    • H04N 7/01: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N 7/0127: Conversion of standards by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a video processing method, a device, equipment and a storage medium, and belongs to the technical field of communication. The method comprises the following steps: when M1 frames of images of a first video are acquired, determining the positions of a display object in the first video between every two adjacent frames of the M1 frames of images, to obtain N pieces of motion information, where N is smaller than M1; acquiring N motion blur parameters associated with the N pieces of motion information; performing motion blur processing on N composite images respectively based on the N motion blur parameters, to obtain N frames of second images, where each composite image is synthesized from two adjacent frames of images; and generating a second video based on the N frames of second images.

Description

Video processing method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of video processing, and in particular relates to a video processing method, a device, equipment and a storage medium.
Background
Video compression is a common video processing technique that typically reduces the frame rate of a video to compress it, thereby alleviating the storage occupation and traffic consumption caused by large video files.
In the related art, after a video is compressed with such a technique, the compressed video stutters because of the reduced frame rate; its fluency is therefore low, which degrades the user's video watching experience.
Disclosure of Invention
The embodiment of the application aims to provide a video processing method, a device, equipment and a storage medium, which can solve the problem of low fluency after video compression.
In a first aspect, an embodiment of the present application provides a video processing method, including:
when M1 frames of images of a first video are acquired, determining the positions of a display object in the first video between every two adjacent frames of the M1 frames of images, to obtain N pieces of motion information, where N is smaller than M1;
acquiring N motion blur parameters associated with the N pieces of motion information;
performing motion blur processing on N composite images respectively based on the N motion blur parameters, to obtain N frames of second images, where each composite image is synthesized from two adjacent frames of images;
and generating a second video based on the N frames of second images.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
a determining module, configured to determine, when M1 frames of images of a first video are acquired, the positions of a display object in the first video between every two adjacent frames of the M1 frames of images, to obtain N pieces of motion information, where N is smaller than M1;
an acquiring module, configured to acquire N motion blur parameters associated with the N pieces of motion information;
a blurring processing module, configured to perform motion blur processing on N composite images respectively based on the N motion blur parameters, to obtain N frames of second images, where each composite image is synthesized from two adjacent frames of images;
and a generating module, configured to generate a second video based on the N frames of second images.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the video processing method of the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the video processing method of the first aspect.
In a fifth aspect, embodiments of the present application provide a chip comprising a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the steps of the video processing method of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, where the computer program product is executed by at least one processor to implement the steps of the video processing method of the first aspect.
In the embodiments of the application, in a scene in which a first video is compressed, after M1 frames of images of the first video are acquired, a display object in the first video is identified, and the positions of the display object between every two adjacent frames of the M1 frames of images are determined, obtaining N pieces of motion information that each reflect how the display object changes between a pair of adjacent frames. On this basis, N motion blur parameters associated with the N pieces of motion information are acquired; after N composite images are obtained by synthesizing pairs of adjacent frames, motion blur processing is performed on the N composite images based on the N motion blur parameters respectively, yielding N frames of second images. A second video is then generated based on the N frames of second images. Merging two frames into one reduces the frame rate of the first video and thereby compresses it, while the motion blur processing superimposes a motion blur effect on each composite image, simulating the inter-frame motion of the display object and supplementing the inter-frame motion information that is otherwise lost, thereby improving the fluency of the second video.
Drawings
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a video processing method according to another embodiment of the present application;
Fig. 3 is a flowchart of a video processing method according to still another embodiment of the present application;
Fig. 4 is a flowchart of a video processing method according to still another embodiment of the present application;
Fig. 5 is a schematic diagram of an example of a second image provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 8 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular order or sequence. It should be understood that terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
Video compression is a common video processing technique that typically reduces the frame rate of a video to compress it, thereby alleviating the storage occupation and traffic consumption caused by large video files. In the related art, after a video is compressed with such a technique, the compressed video stutters because of the reduced frame rate; its fluency is therefore low, which degrades the user's video watching experience.
Aiming at the above problems in the related art, the embodiments of the application provide a video processing method. In a scene in which a first video is compressed, after M1 frames of images of the first video are acquired, a display object in the first video is identified, and the positions of the display object between every two adjacent frames of the M1 frames of images are determined, obtaining N pieces of motion information that each reflect the inter-frame change of the display object between a pair of adjacent frames. On this basis, N motion blur parameters associated with the N pieces of motion information are acquired; after N composite images are obtained by synthesizing pairs of adjacent frames, motion blur processing is performed on the N composite images based on the N motion blur parameters respectively, yielding N frames of second images. A second video is then generated based on the N frames of second images. Merging two frames into one reduces the frame rate of the first video and thereby compresses it, while the motion blur processing superimposes a motion blur effect on each composite image and simulates the inter-frame motion of the display object, ensuring the fluency of the second video and solving the problem in the related art of low fluency after video compression.
The video processing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the present application. The execution subject of the video processing method may be an electronic device; the execution subject is not specifically limited in the present application.
As shown in fig. 1, the video processing method provided in the embodiment of the present application may include steps 110 to 140.
Step 110, when M1 frames of images of a first video are acquired, determining the positions of a display object in the first video between every two adjacent frames of the M1 frames of images, to obtain N pieces of motion information.
The first video is a video to be compressed. At least one display object may exist in the first video, and the display objects in different image frames may be the same or different, which is not specifically limited in this application. The number of image frames in the first video may be greater than or equal to M1, and N < M1.
Each of the two adjacent frames of images includes at least one display object, and the generated motion information may include motion change information of the at least one display object; that is, each piece of motion information can reflect the inter-frame motion change of at least one display object between two adjacent frames.
In some embodiments, when acquiring the M1 frames of images of the first video, the electronic device may determine the positions of the display object across multiple adjacent frames to obtain the N pieces of motion information.
Specifically, in adjacent multi-frame images the position of the display object differs from frame to frame, i.e., a position change occurs between every two frames. Therefore, once the positions of the display object across the adjacent frames are determined, the position change of the display object over those frames can be obtained from the position changes between consecutive frames.
For example, suppose M1 is 9 and the electronic device needs to determine the position change of the display object across every 3 adjacent frames. After acquiring the 9 frames of images, the electronic device may first determine the position change of the display object between every two consecutive frames, for example between the 1st and 2nd frames and between the 2nd and 3rd frames, and combine them to obtain the position change between the 1st and 3rd frames. In this way it finally obtains the position changes between the 1st and 3rd frames, between the 4th and 6th frames, and between the 7th and 9th frames.
In some embodiments, every two adjacent frames of images may form a group, so that the M1 frames of images correspond to M1/2 groups, each group corresponding to one piece of motion information; N = M1/2 pieces of motion information can thus be obtained.
For example, if M1 is 60, the position change of the display object may be determined within each group of two adjacent frames, for example between the 1st and 2nd frames, the 3rd and 4th frames, the 5th and 6th frames, and so on. One piece of motion information is obtained from the position change of the display object within each group; 60 frames of images correspond to 30 groups, so N = 30 pieces of motion information can be obtained.
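The grouping can be sketched in Python; the helper below is illustrative only (it is not part of the original disclosure) and assumes M1 is even and the frames are held in a list:

```python
def group_adjacent_frames(frames):
    """Split [f1, f2, f3, f4, ...] into N = M1/2 groups [(f1, f2), (f3, f4), ...]."""
    assert len(frames) % 2 == 0, "M1 is assumed to be even"
    return [(frames[i], frames[i + 1]) for i in range(0, len(frames), 2)]

# With M1 = 60 frames, len(group_adjacent_frames(frames)) == 30,
# i.e. one group (and later one piece of motion information) per pair.
```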
Step 120, acquiring N motion blur parameters associated with the N pieces of motion information.
Step 130, performing motion blur processing on N composite images respectively based on the N motion blur parameters, to obtain N frames of second images, where each composite image is synthesized from two adjacent frames of images.
Specifically, the electronic device may synthesize each group of two adjacent frames among the M1 frames of images, each group producing one composite frame, thereby obtaining N = M1/2 composite images. Since each group of two adjacent frames corresponds to one piece of motion information and one motion blur parameter, and each group also corresponds to one composite image, the motion blur parameters and the composite images are in one-to-one correspondence; the electronic device can therefore perform motion blur processing on each composite image based on its corresponding motion blur parameter. Through the motion blur processing, the inter-frame motion of the display object can be simulated in the composite image by superimposing a motion blur effect on it.
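A minimal sketch of this one-to-one pipeline follows. The patent does not specify how two frames are synthesized; a 50/50 blend is used here purely as a stand-in, and `blur_fn` abstracts whichever blur effect of steps 120 to 130 applies:

```python
import cv2

def composite_and_blur(pairs, blur_params, blur_fn):
    """pairs[i] is one group of two adjacent frames; blur_params[i] its parameter."""
    second_images = []
    for (a, b), p in zip(pairs, blur_params):
        composite = cv2.addWeighted(a, 0.5, b, 0.5, 0)  # assumed synthesis method
        second_images.append(blur_fn(composite, p))     # per-pair motion blur
    return second_images
```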
And step 140, generating a second video based on the N frames of the second image.
The second video is the compressed version of the first video; compared with the first video, the number of frames of the second video is significantly reduced.
In the video processing method provided by the embodiments of the application, in a scene in which a first video is compressed, after M1 frames of images of the first video are acquired, a display object in the first video is identified, and the positions of the display object between every two adjacent frames of the M1 frames of images are determined, obtaining N pieces of motion information that each reflect the inter-frame change of the display object between a pair of adjacent frames. On this basis, N motion blur parameters associated with the N pieces of motion information are acquired; after N composite images are obtained by synthesizing pairs of adjacent frames, motion blur processing is performed on the N composite images based on the N motion blur parameters respectively, yielding N frames of second images. A second video is then generated based on the N frames of second images. Merging two frames into one reduces the frame rate of the first video and thereby compresses it, while the motion blur processing superimposes a motion blur effect on each composite image, simulating the inter-frame motion of the display object and supplementing the inter-frame motion information that is otherwise lost, thereby improving the fluency of the second video.
The steps 110 to 140 are described in detail below in connection with specific embodiments.
Step 110 relates to: when the M1 frames of images of the first video are acquired, determining the positions of the display object in the first video between every two adjacent frames of the M1 frames of images, to obtain the N pieces of motion information.
In some embodiments of the present application, before step 110, the method may further include: sampling the original image frames of the first video at intervals according to a preset sampling interval, to obtain the M1 frames of images.
The original frame number of the first video is M2, and the preset sampling interval may be set according to specific requirements, for example to 1 frame, 2 frames, or another value, which is not specifically limited in this application.
For example, if the original number of image frames M2 of the first video is 120 and the preset sampling interval is 1, the electronic device may sample the 1st, 3rd, 5th frames and so on, finally obtaining M1 = 60 frames of images; if the preset sampling interval is 2, the electronic device may sample the 1st, 4th, 7th frames and so on, finally obtaining M1 = 40 frames of images.
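Interval sampling amounts to a strided slice; the sketch below is illustrative and indexes frames from 0, whereas the example above counts from 1:

```python
def sample_frames(frames, interval):
    """Keep one frame, then skip `interval` frames, and repeat."""
    return frames[::interval + 1]

# len(sample_frames(list(range(120)), 1)) == 60   # M2 = 120, interval = 1
# len(sample_frames(list(range(120)), 2)) == 40   # M2 = 120, interval = 2
```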
In the embodiments of the application, sampling the first video at the preset sampling interval reduces the number of image frames. Then, by synthesizing adjacent pairs of the sampled frames and generating the second video from the resulting composite frames, the number of image frames in the second video can be effectively reduced and the frame rate of the first video lowered, achieving effective compression of the first video and reducing the memory space it occupies and the traffic it consumes.
In some embodiments of the present application, before step 110, the method may further include: inserting transition frames into the M2 frames of images of the first video, to obtain the M1 frames of images.
Specifically, the electronic device may insert transition frames into the M2 frames of images of the first video based on a preset frame insertion interval, where the preset frame insertion interval may be set according to specific requirements, for example to one transition frame every 1 frame, every 2 frames, or every other number of frames, which is not specifically limited in this application.
For example, if the original frame number of the first video is 120, the electronic device may insert one transition frame between every two adjacent frames based on a frame interpolation technique, finally inserting 119 transition frames into the 120 frames of images to obtain M1 = 239 frames of images.
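Transition frames can be generated by any frame-interpolation technique; the patent does not name one, so the illustrative helper below blends the two neighbouring frames 50/50 as a placeholder:

```python
import cv2

def insert_transition_frames(frames):
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(cv2.addWeighted(a, 0.5, b, 0.5, 0))  # hypothetical transition frame
    out.append(frames[-1])
    return out  # 120 frames in -> 239 frames out
```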
In the embodiments of the application, inserting transition frames into the M2 frames of images reduces the difference between adjacent frames and the displacement of the display object between them, supplementing the motion information of the display object between adjacent frames of the M2 frames of images. Performing the synthesis and motion blur processing on the M1 frames of images obtained after inserting the transition frames can therefore further improve the fluency of the second video.
In some embodiments of the present application, the two adjacent frames of images may include a first frame image and a second frame image. Fig. 2 is a schematic flowchart of a video processing method according to another embodiment of the present application; step 110 may include steps 210 and 220 shown in Fig. 2.
Step 210, acquiring, based on an image recognition algorithm, first pixel point coordinates of the display object in the first frame image and second pixel point coordinates of the display object in the second frame image.
The first pixel point coordinates are the coordinate values of the pixel points of the display object in the first frame image, and the second pixel point coordinates are the coordinate values of the pixel points of the display object in the second frame image.
It should be noted that the type of the image recognition algorithm is not specifically limited in the present application; for example, the image recognition algorithms in the present application may include, but are not limited to: the depth-first search algorithm, the breadth-first search algorithm, the Dijkstra algorithm, the Bellman-Ford algorithm, the Floyd-Warshall algorithm, the Prim algorithm, and the Kruskal algorithm.
Step 220, determining, based on the first pixel point coordinates and the second pixel point coordinates, the displacement distance and the displacement angle of the display object generated between the two adjacent frames of images, to obtain the motion information.
The first pixel point coordinates correspond to the start position of the display object in the inter-frame change, and the second pixel point coordinates correspond to the end position of the display object in the inter-frame change; the displacement distance of the display object can be obtained from the start position and the end position. The displacement angle may be the rotation angle of the display object during the inter-frame motion.
In the embodiments of the application, by acquiring the pixel point coordinates of the display object in the two adjacent frames of images, the displacement distance and displacement angle generated by the display object between the two frames can be determined from the change of the pixel point coordinates, so that the motion information of the display object between the two adjacent frames of images can be obtained quickly and accurately.
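A minimal sketch of deriving motion information from the two coordinates follows. The displacement distance is the Euclidean distance between the points; for the displacement angle, the direction of the displacement vector is used here as a simple stand-in, while the patent's rotation angle of the object would come from the recognition step itself:

```python
import math

def motion_info(p1, p2):
    """p1, p2: (x, y) pixel coordinates in the first and second frame images."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    distance = math.hypot(dx, dy)             # displacement distance in pixels
    angle = math.degrees(math.atan2(dy, dx))  # assumed angle convention
    return distance, angle
```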
Step 120 relates to acquiring the N motion blur parameters associated with the N pieces of motion information.
The motion blur parameters are used to represent the degree of blur, and the degree of blur is positively correlated with the displacement distance and the displacement angle; that is, the greater the displacement distance of the display object between two adjacent frames of images, the higher the degree of blur represented by the associated motion blur parameter, and likewise, the greater the displacement angle of the display object between two adjacent frames of images, the higher the degree of blur represented by the associated motion blur parameter.
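One possible positively correlated mapping is sketched below; the scale factors and the cap are illustrative assumptions rather than values from the patent:

```python
def blur_parameter(distance, angle, k_dist=1.0, k_angle=0.2, max_len=31):
    """Map motion information to a blur kernel length (in pixels)."""
    length = int(k_dist * distance + k_angle * abs(angle))
    return max(3, min(length, max_len))  # clamp to a usable kernel size
```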
Step 130 relates to performing motion blur processing on the N composite images respectively based on the N motion blur parameters, to obtain the N frames of second images, where each composite image is synthesized from two adjacent frames of images.
In some embodiments of the present application, step 130 may specifically include: performing motion blur processing on the display object in the composite image based on the motion blur parameter associated with the displacement distance and the displacement angle.
The motion blur parameter is used to represent the degree of blur, and the degree of blur is positively correlated with the displacement distance and the displacement angle.
In the embodiments of the application, the blur parameter used during the motion blur processing is associated with the displacement distance and displacement angle of the display object's inter-frame change, so a corresponding motion blur effect can be superimposed on the composite image based on that parameter, and the motion blur effect matches the motion of the display object more closely. For example, if the moving speed of the display object is high and its displacement distance between two adjacent frames of images is therefore large, a stronger blur effect can be superimposed on that fast-moving display object in the composite image, so that the blur effect better fits the object's motion.
Optionally, different display objects in the same frame may receive different motion blur processing; the motion blur effect applied to each display object may be determined by that object's motion parameters.
In some embodiments of the present application, step 130 may specifically include at least one of the following:
when the generated displacement angle is smaller than a preset angle threshold, superimposing, based on the motion blur parameter associated with the displacement distance, a linear blur effect on the display object and its motion track along the motion direction of the display object;
when the displacement angle is greater than the preset angle threshold, superimposing, based on the motion blur parameters associated with the displacement distance and the displacement angle, an annular motion blur effect on the display object and its motion track, centered on a focus in the composite image;
when the size of the display object changes between the two adjacent frames of images, superimposing a radial blur effect on the display object, the degree of blur being higher the farther the display object is from a radiation center point, where the radiation center point is the center position of the composite image.
The preset angle threshold may be set according to specific requirements, for example to any value between 0 and 10 degrees. If the displacement angle is smaller than the preset angle threshold, the motion track of the display object between the two adjacent frames of images can be regarded as linear, with no significant displacement angle generated, so a linear blur effect may be superimposed on the display object and its motion track.
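A sketch of the linear case follows: a one-pixel line kernel is rotated to the motion direction, normalized, and convolved over the image. Applying it to the whole frame rather than only the display object's region is a simplification:

```python
import numpy as np
import cv2

def linear_motion_blur(image, length, angle_deg):
    """Blur along the motion direction with a rotated line kernel."""
    kernel = np.zeros((length, length), dtype=np.float32)
    kernel[length // 2, :] = 1.0  # horizontal line through the kernel center
    center = ((length - 1) / 2, (length - 1) / 2)
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (length, length))
    kernel /= kernel.sum()
    return cv2.filter2D(image, -1, kernel)
```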
In one example, the preset angle threshold is 5 degrees. As shown in fig. 3, a composite image 303 is obtained after the first frame image 301 and the second frame image 302 are synthesized. Since the displacement angle of the display object "ball" between 301 and 302 is 0, i.e. its motion track is linear, a linear blur effect can be superimposed on the "ball" and its motion track along its motion direction (direction 1), obtaining the second image 304.
In another example, the preset angle threshold is 5 degrees. As shown in fig. 4, a composite image 403 is obtained after the first frame image 401 and the second frame image 402 are synthesized. Since the displacement angle of the display object "ball" between 401 and 402 is greater than 5 degrees, i.e. its motion track is nonlinear, an annular motion blur effect can be superimposed on the "ball" and its motion track as shown in fig. 5, obtaining the second image 501.
In another example, if the size of the display object changes between the two adjacent frames of images, i.e. the display object is scaled, the electronic device may superimpose a radial blur effect on the display object whose degree of blur increases with distance from the radiation center point, taking the image center position of the composite image as that center point.
In the embodiments of the application, the electronic device can superimpose different motion blur effects on the display object according to its motion mode between the two adjacent frames of images. Specifically, if the motion track of the display object between the two adjacent frames is linear, a linear blur effect can be superimposed on the display object and its motion track to simulate its linear motion in the composite image; if the angle of the display object changes during the inter-frame motion, an annular motion blur effect can be superimposed on the display object and its motion track to simulate its nonlinear motion in the composite image; and if the display object is scaled between the two adjacent frames, the electronic device may superimpose a radial blur effect whose degree increases with distance from the radiation center point, simulating the scaling of the display object in the composite image. In this way, the electronic device superimposes on the display object in the composite image a motion blur effect that closely matches the object's motion, restoring the motion of the display object in the second image and making the motion of the display object in the second video smoother.
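The two non-linear cases can be sketched by frame averaging: small rotations about the center approximate the annular (spin) blur, and small scalings approximate the radial (zoom) blur, whose strength naturally grows with distance from the center. The step counts and ranges below are illustrative assumptions:

```python
import numpy as np
import cv2

def spin_blur(image, max_angle_deg=10.0, steps=8):
    """Approximate an annular motion blur about the image center."""
    h, w = image.shape[:2]
    acc = np.zeros_like(image, dtype=np.float32)
    for a in np.linspace(-max_angle_deg / 2, max_angle_deg / 2, steps):
        m = cv2.getRotationMatrix2D((w / 2, h / 2), a, 1.0)
        acc += cv2.warpAffine(image, m, (w, h)).astype(np.float32)
    return (acc / steps).astype(image.dtype)

def zoom_blur(image, max_scale=1.05, steps=8):
    """Approximate a radial blur that grows away from the image center."""
    h, w = image.shape[:2]
    acc = np.zeros_like(image, dtype=np.float32)
    for s in np.linspace(1.0, max_scale, steps):
        m = cv2.getRotationMatrix2D((w / 2, h / 2), 0.0, s)
        acc += cv2.warpAffine(image, m, (w, h)).astype(np.float32)
    return (acc / steps).astype(image.dtype)
```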
Step 140 relates to generating the second video based on the N frames of second images.
In some embodiments, the electronic device may assemble the N frames of second images in sequence to generate the second video.
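A sketch of the assembly step is shown below; halving the source frame rate preserves the playback duration after two frames have been merged into one. The codec choice is illustrative:

```python
import cv2

def write_second_video(second_images, path, src_fps):
    h, w = second_images[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")  # assumed container/codec
    writer = cv2.VideoWriter(path, fourcc, src_fps / 2, (w, h))
    for frame in second_images:
        writer.write(frame)
    writer.release()
```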
It should be noted that the execution subject of the video processing method provided in the embodiments of the present application may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiments of the present application, the video processing apparatus provided in the embodiments of the present application is described by taking, as an example, the video processing apparatus executing the video processing method. The video processing apparatus is described in detail below.
Fig. 6 is a schematic structural diagram of a video processing apparatus provided in the present application.
As shown in fig. 6, an embodiment of the present application provides a video processing apparatus 600, where the apparatus 600 includes a determining module 610, an acquiring module 620, a blurring processing module 630 and a generating module 640.
The determining module 610 is configured to determine, when M1 frames of images of the first video are acquired, the positions of a display object in the first video between every two adjacent frames of the M1 frames of images, to obtain N pieces of motion information, where N is smaller than M1;
the acquiring module 620 is configured to acquire N motion blur parameters associated with the N pieces of motion information;
the blurring processing module 630 is configured to perform motion blur processing on N composite images respectively based on the N motion blur parameters, to obtain N frames of second images, where each composite image is synthesized from two adjacent frames of images;
the generating module 640 is configured to generate a second video based on the N frames of second images.
In the video processing apparatus provided by the embodiments of the application, in a scene in which a first video is compressed, after M1 frames of images of the first video are acquired, a display object in the first video is identified, and the positions of the display object between every two adjacent frames of the M1 frames of images are determined, obtaining N pieces of motion information that each reflect the inter-frame change of the display object between a pair of adjacent frames. On this basis, N motion blur parameters associated with the N pieces of motion information are acquired; after N composite images are obtained by synthesizing pairs of adjacent frames, motion blur processing is performed on the N composite images based on the N motion blur parameters respectively, yielding N frames of second images. A second video is then generated based on the N frames of second images. Merging two frames into one reduces the frame rate of the first video and thereby compresses it, while the motion blur processing superimposes a motion blur effect on each composite image, simulating the inter-frame motion of the display object and supplementing the inter-frame motion information that is otherwise lost, thereby improving the fluency of the second video.
In some embodiments of the present application, the two adjacent frames of images include a first frame image and a second frame image, and the determining module 610 includes: an acquiring unit, configured to acquire, based on an image recognition algorithm, first pixel point coordinates of the display object in the first frame image and second pixel point coordinates of the display object in the second frame image; and a determining unit, configured to determine, based on the first pixel point coordinates and the second pixel point coordinates, the displacement distance and the displacement angle of the display object generated between the two adjacent frames of images, to obtain the motion information.
In some embodiments of the present application, the blurring processing module 630 is specifically configured to: perform motion blur processing on the display object in the composite image based on the motion blur parameter associated with the displacement distance and the displacement angle, where the motion blur parameter is used to represent the degree of blur, and the degree of blur is positively correlated with the displacement distance and the displacement angle.
In some embodiments of the present application, the blurring processing module 630 is specifically configured to perform at least one of the following: when the generated displacement angle is smaller than a preset angle threshold, superimposing, based on the motion blur parameter associated with the displacement distance, a linear blur effect on the display object and its motion track along the motion direction of the display object; when the displacement angle is greater than the preset angle threshold, superimposing, based on the motion blur parameters associated with the displacement distance and the displacement angle, an annular motion blur effect on the display object and its motion track, centered on a focus in the composite image; and when the size of the display object changes between the two adjacent frames of images, superimposing a radial blur effect on the display object whose degree of blur increases with distance from a radiation center point, where the radiation center point is the center position of the composite image.
In some embodiments of the present application, the apparatus further includes: a sampling module, configured to sample the M2 frames of images of the first video at intervals according to a preset sampling interval, to obtain the M1 frames of images.
In some embodiments of the present application, the apparatus further includes: a frame insertion module, configured to insert transition frames into the M2 frames of images of the first video, to obtain the M1 frames of images.
The video processing apparatus provided in the embodiments of the present application can implement each process implemented by the electronic device in the method embodiments of figs. 1 to 5; to avoid repetition, details are not repeated here.
The video processing apparatus in the embodiments of the present application may be an electronic device, or may be a component, an integrated circuit, or a chip in an electronic device. The electronic device may be a terminal or a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; this is not specifically limited in the embodiments of the present application.
The video processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
Optionally, as shown in fig. 7, an embodiment of the present application further provides an electronic device 700, including a processor 701 and a memory 702, where the memory 702 stores a program or instructions executable on the processor 701. The program or instructions, when executed by the processor 701, implement each process of the above video processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
The electronic device 800 includes, but is not limited to: radio frequency unit 801, network module 802, audio output unit 803, input unit 804, sensor 805, display unit 806, user input unit 807, interface unit 808, memory 809, and processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 810 through a power management system, so that charging, discharging, and power consumption are managed through the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail here.
The processor 810 is configured to determine, when M1 frames of images of the first video are acquired, the positions of a display object in the first video between every two adjacent frames of the M1 frames of images, to obtain N pieces of motion information, where N is smaller than M1; the processor 810 is further configured to acquire N motion blur parameters associated with the N pieces of motion information; the processor 810 is further configured to perform motion blur processing on N composite images respectively based on the N motion blur parameters, to obtain N frames of second images, where each composite image is synthesized from two adjacent frames of images; and the processor 810 is further configured to generate a second video based on the N frames of second images.
In some embodiments of the present application, the two adjacent frames of images include a first frame image and a second frame image, and the processor 810 is further configured to: acquire, based on an image recognition algorithm, first pixel point coordinates of the display object in the first frame image and second pixel point coordinates of the display object in the second frame image; and determine, based on the first pixel point coordinates and the second pixel point coordinates, the displacement distance and the displacement angle of the display object generated between the two adjacent frames of images, to obtain the motion information.
In some embodiments of the present application, when performing motion blur processing on the N composite images respectively based on the N motion blur parameters, the processor 810 is specifically configured to: perform motion blur processing on the display object in the composite image based on the motion blur parameter associated with the displacement distance and the displacement angle, where the motion blur parameter is used to represent the degree of blur, and the degree of blur is positively correlated with the displacement distance and the displacement angle.
In some embodiments of the present application, the processor 810 is specifically configured to perform at least one of the following: when the generated displacement angle is smaller than a preset angle threshold, superimposing, based on the motion blur parameter associated with the displacement distance, a linear blur effect on the display object and its motion track along the motion direction of the display object; when the displacement angle is greater than the preset angle threshold, superimposing, based on the motion blur parameters associated with the displacement distance and the displacement angle, an annular motion blur effect on the display object and its motion track, centered on a focus in the composite image; and when the size of the display object changes between the two adjacent frames of images, superimposing a radial blur effect on the display object whose degree of blur increases with distance from a radiation center point, where the radiation center point is the center position of the composite image.
In some embodiments of the present application, the processor 810 is further configured to: sample the M2 frames of images of the first video at intervals according to a preset sampling interval, to obtain the M1 frames of images.
In some embodiments of the present application, the processor 810 is further configured to: insert transition frames into the M2 frames of images of the first video, to obtain the M1 frames of images.
In the embodiments of the application, in a scene in which a first video is compressed, after M1 frames of images of the first video are acquired, a display object in the first video is identified, and the positions of the display object between every two adjacent frames of the M1 frames of images are determined, obtaining N pieces of motion information that each reflect how the display object changes between a pair of adjacent frames. On this basis, N motion blur parameters associated with the N pieces of motion information are acquired; after N composite images are obtained by synthesizing pairs of adjacent frames, motion blur processing is performed on the N composite images based on the N motion blur parameters respectively, yielding N frames of second images. A second video is then generated based on the N frames of second images. Merging two frames into one reduces the frame rate of the first video and thereby compresses it, while the motion blur processing superimposes a motion blur effect on each composite image, simulating the inter-frame motion of the display object and supplementing the inter-frame motion information that is otherwise lost, thereby improving the fluency of the second video.
It should be appreciated that, in the embodiments of the present application, the input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042; the graphics processor 8041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes at least one of a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 809 can be used to store software programs as well as various data. The memory 809 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and application programs or instructions required by at least one function (such as a sound playing function and an image playing function). Further, the memory 809 may include volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 809 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 810 may include one or more processing units. Optionally, the processor 810 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor may alternatively not be integrated into the processor 810.
The embodiments of the present application further provide a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the above video processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device of the above embodiments. The readable storage medium includes a computer readable storage medium, examples of which include non-transitory computer readable storage media such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiments of the present application further provide a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the above video processing method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-on-chip, a system chip, a chip system, or a system-on-a-chip.
The embodiments of the present application provide a computer program product stored in a storage medium, where the computer program product is executed by at least one processor to implement each process of the above video processing method embodiments and achieve the same technical effects; details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing functions in the order shown or discussed; functions may also be performed in a substantially simultaneous manner or in a reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by hardware, though in many cases the former is preferable. Based on such an understanding, the technical solutions of the present application, or the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative and not restrictive. Enlightened by the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (10)

1. A method of video processing, the method comprising:
when M1 frames of images of a first video are acquired, determining the positions of a display object in the first video between every two adjacent frames of the M1 frames of images, to obtain N pieces of motion information, where N is smaller than M1;
acquiring N motion blur parameters associated with the N pieces of motion information;
performing motion blur processing on N composite images respectively based on the N motion blur parameters, to obtain N frames of second images, where each composite image is synthesized from the two adjacent frames of images;
and generating a second video based on the N frames of second images.
2. The method of claim 1, wherein the two adjacent frames of images comprise a first frame image and a second frame image, and the determining the positions of the display object in the first video between the two adjacent frames of the M1 frames of images comprises:
acquiring, based on an image recognition algorithm, first pixel point coordinates of the display object in the first frame image and second pixel point coordinates of the display object in the second frame image;
and determining, based on the first pixel point coordinates and the second pixel point coordinates, the displacement distance and the displacement angle of the display object generated between the two adjacent frames of images, to obtain the motion information.
3. The method of claim 2, wherein the performing motion blur processing on the N composite images respectively based on the N motion blur parameters comprises:
performing motion blur processing on the display object in the composite image based on the motion blur parameter associated with the displacement distance and the displacement angle;
wherein the motion blur parameter is used to represent a degree of blur, and the degree of blur is positively correlated with the displacement distance and the displacement angle.
4. The method of claim 3, wherein the performing motion blur processing on the display object in the composite image comprises at least one of:
when the generated displacement angle is smaller than a preset angle threshold, superimposing, based on the motion blur parameter associated with the displacement distance, a linear blur effect on the display object and its motion track along the motion direction of the display object;
when the displacement angle is greater than the preset angle threshold, superimposing, based on the motion blur parameters associated with the displacement distance and the displacement angle, an annular motion blur effect on the display object and its motion track, centered on a focus in the composite image;
and when the size of the display object changes between the two adjacent frames of images, superimposing a radial blur effect on the display object, the degree of blur being higher the farther the display object is from a radiation center point, wherein the radiation center point is the center position of the composite image.
5. The method according to claim 1, further comprising:
performing interval sampling on M2 frames of images of the first video at a preset sampling interval to obtain the M1 frames of images.
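
(Illustrative note, not part of the claims.) Interval sampling per claim 5 reduces to strided indexing over the decoded frames; the interval of 2 is an assumed value.

```python
def sample_frames(frames, interval=2):
    """Keep every `interval`-th frame of the M2 frames to obtain M1 frames."""
    return frames[::interval]

print(sample_frames(list(range(8))))  # M2 = 8, interval 2 -> [0, 2, 4, 6]
```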
6. The method according to claim 1, further comprising:
inserting transition frames into M2 frames of images of the first video to obtain the M1 frames of images.
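
(Illustrative note, not part of the claims.) Claim 6 does not prescribe how the transition frames are generated; linear cross-fading of adjacent frames is one simple assumption (optical-flow interpolation would be another).

```python
import cv2

def insert_transition_frames(frames):
    """Insert one blended transition frame between every pair of adjacent
    frames, turning M2 frames into M1 = 2 * M2 - 1 frames."""
    out = []
    for prev, curr in zip(frames, frames[1:]):
        out.append(prev)
        out.append(cv2.addWeighted(prev, 0.5, curr, 0.5, 0))  # transition frame
    out.append(frames[-1])
    return out
```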
7. A video processing apparatus, the apparatus comprising:
a determining module, configured to, in a case that M1 frames of images of a first video are acquired, determine a change in position of a display object in the first video between every two adjacent frames of images of the M1 frames of images, to obtain N pieces of motion information, wherein N is smaller than M1;
an acquiring module, configured to acquire N motion blur parameters associated with the N pieces of motion information;
a blur processing module, configured to respectively perform motion blur processing on N composite images based on the N motion blur parameters to obtain N frames of second images, wherein each composite image is synthesized from the corresponding two adjacent frames of images; and
a generating module, configured to generate a second video based on the N frames of second images.
8. The apparatus according to claim 7, wherein the two adjacent frames of images comprise a first frame image and a second frame image, and the determining module comprises:
an acquiring unit, configured to acquire, based on an image recognition algorithm, first pixel coordinates of the display object in the first frame image and second pixel coordinates of the display object in the second frame image; and
a determining unit, configured to determine, based on the first pixel coordinates and the second pixel coordinates, a displacement distance and a displacement angle of the display object between the two adjacent frames of images, to obtain the motion information.
9. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video processing method according to any one of claims 1-6.
10. A readable storage medium, wherein the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the video processing method according to any one of claims 1-6.
CN202310157128.8A 2023-02-22 2023-02-22 Video processing method, device, equipment and storage medium Pending CN116156079A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310157128.8A CN116156079A (en) 2023-02-22 2023-02-22 Video processing method, device, equipment and storage medium
PCT/CN2024/077675 WO2024174971A1 (en) 2023-02-22 2024-02-20 Video processing method and apparatus, and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310157128.8A CN116156079A (en) 2023-02-22 2023-02-22 Video processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116156079A true CN116156079A (en) 2023-05-23

Family

ID=86357975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310157128.8A Pending CN116156079A (en) 2023-02-22 2023-02-22 Video processing method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN116156079A (en)
WO (1) WO2024174971A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024174971A1 (en) * 2023-02-22 2024-08-29 维沃移动通信有限公司 Video processing method and apparatus, and device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4666012B2 (en) * 2008-06-20 2011-04-06 ソニー株式会社 Image processing apparatus, image processing method, and program
JP2010015483A (en) * 2008-07-07 2010-01-21 Sony Corp Image processing device, image processing method and program
JP2012169701A (en) * 2011-02-09 2012-09-06 Canon Inc Image processing device, image processing method, and program
US10600157B2 (en) * 2018-01-05 2020-03-24 Qualcomm Incorporated Motion blur simulation
CN114419073B (en) * 2022-03-09 2022-08-12 荣耀终端有限公司 Motion blur generation method and device and terminal equipment
CN114862725B (en) * 2022-07-07 2022-09-27 广州光锥元信息科技有限公司 Method and device for realizing motion perception fuzzy special effect based on optical flow method
CN116156079A (en) * 2023-02-22 2023-05-23 维沃移动通信有限公司 Video processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
WO2024174971A1 (en) 2024-08-29

Similar Documents

Publication Publication Date Title
CN112637517B (en) Video processing method and device, electronic equipment and storage medium
CN113015007B (en) Video frame inserting method and device and electronic equipment
US12172053B2 (en) Electronic apparatus and control method therefor
CN114143568A (en) Method and equipment for determining augmented reality live image
CN112954212B (en) Video generation method, device and equipment
CN116156079A (en) Video processing method, device, equipment and storage medium
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN114253449B (en) Screen capturing method, device, equipment and medium
CN114390205B (en) Shooting method and device and electronic equipment
CN115861579A (en) Display method and device thereof
CN115514859A (en) Image processing circuit, image processing method and electronic device
CN113793410B (en) Video processing method, device, electronic device and storage medium
CN112738398B (en) Image anti-shake method and device and electronic equipment
CN115834889A (en) Video encoding and decoding method and device, electronic equipment and medium
CN113362224B (en) Image processing method, device, electronic equipment and readable storage medium
CN115695946B (en) Wallpaper video playback method, device, electronic device and storage medium
CN114286002B (en) Image processing circuit, method, device, electronic equipment and chip
CN113709372B (en) Image generation method and electronic device
CN116744016A (en) Image processing method, device, electronic equipment and storage medium
CN116051695A (en) Image processing method and device
CN117271090A (en) Image processing method, device, electronic equipment and medium
CN117631932A (en) Screenshot method and device, electronic equipment and computer readable storage medium
CN117201941A (en) Image processing method, device, electronic equipment and readable storage medium
CN117714844A (en) Video processing method and device
CN117176935A (en) Image processing method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination