CN110868631B - Video editing method, device, terminal and storage medium - Google Patents
- Publication number
- CN110868631B (application CN201810989238.XA)
- Authority
- CN
- China
- Prior art keywords
- segment
- video
- frame
- sequence
- residual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments, by decomposing the content in the time domain
Abstract
An embodiment of the present application discloses a video clipping method, apparatus, terminal, and storage medium. The method includes: displaying a first clipping interface that contains a frame sequence of a first video; acquiring a selection instruction for a first segment in the frame sequence corresponding to the first video; displaying, in the first clipping interface, the frame sequences of the n remaining segments of the first video after the first segment is removed, where n is a positive integer; and, when a first confirmation instruction is acquired, generating a first target video containing the n remaining segments. The application thus provides a video clipping function (which may be called the "skip-clipping" function) that reversely selects the segments to be retained by removal: the user selects one or more segments of the original video to be culled, and the one or more retained remaining segments are combined into a new video. Flexible acceptance and rejection of segments and combination of multiple non-contiguous segments are supported, making video clipping more flexible and satisfying more clipping requirements.
Description
Technical Field
Embodiments of the present application relate to the technical field of video processing, and in particular to a video clipping method, apparatus, terminal, and storage medium.
Background
Short video applications provide not only a basic video shooting function but also extended functions such as special effects, filters, and props to enrich the product.
In the related art, a short video application is provided with a video cropping function. After a user shoots a video with the application's shooting function, the application displays an editing interface for that video offering options such as special effects, filters, props, and cropping. When the user taps the cropping option, the application jumps to the cropping interface for the video, where the user can select one segment of the video and tap a confirm button; the application then stores that segment as a standalone video.
This video cropping function simply saves a single user-selected segment of the original video as a new video. It is therefore limited and cannot satisfy broader video editing requirements.
Disclosure of Invention
An embodiment of the present application provides a video clipping method, apparatus, terminal, and storage medium, which can solve the problem that the video cropping function provided in the related art is limited and cannot satisfy broader video editing requirements. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a video clipping method, where the method includes:
displaying a first clipping interface, wherein the first clipping interface comprises a sequence of frames of a first video;
acquiring a selection instruction of a first segment in a frame sequence corresponding to the first video;
displaying a sequence of frames of n remaining segments of the first video after the first segment is removed in the first clipping interface, wherein n is a positive integer;
and when the first confirmation instruction is acquired, generating a first target video containing the n remaining segments.
In another aspect, an embodiment of the present application provides a video clip apparatus, including:
the interface display module is used for displaying a first clipping interface, and the first clipping interface comprises a frame sequence of a first video;
the segment selection module is used for acquiring a selection instruction of a first segment in a frame sequence corresponding to the first video;
a segment display module, configured to display, in the first clipping interface, a sequence of frames of n remaining segments of the first video after the first segment is removed, where n is a positive integer;
and the video generation module is used for generating a first target video containing the n remaining segments when the first confirmation instruction is obtained.
In yet another aspect, embodiments of the present application provide a terminal, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the video clipping method according to the above aspect.
In yet another aspect, embodiments of the present application provide a computer-readable storage medium having at least one instruction, at least one program, code set, or set of instructions stored therein, which is loaded and executed by a processor to implement a video clipping method as described in the above aspect.
In yet another aspect, an embodiment of the present application provides a computer program product which, when executed, performs the video clipping method of the above aspect.
The technical scheme provided by the embodiment of the application at least comprises the following beneficial effects:
By removing user-selected segments from a video and generating a target video containing the remaining segments, a novel video clipping function (which may be called the "skip-clipping" function) is provided that reversely selects the segments to be retained by removal. Through the skip-clipping function, a user can select one or more segments of an original video to be culled; the one or more retained remaining segments are then combined into a new video. Flexible acceptance and rejection of segments and combination of multiple non-contiguous segments are supported, making video clipping more flexible and satisfying more video clipping requirements.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The following drawings show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow diagram of a method of video clipping provided by one embodiment of the present application;
FIGS. 2 to 4 are schematic diagrams illustrating interfaces involved in a video skip function;
FIG. 5 is a diagram illustrating an interface involved in the segment ordering function;
FIG. 6 is a diagram illustrating an interface involved in an animation insertion function;
FIG. 7 is a diagram illustrating an interface involved in a video cropping function;
FIG. 8 is a diagram illustrating selection of a segment using a clip function;
FIG. 9 is a diagram illustrating selection of a segment using a skip function;
FIG. 10 is a diagram illustrating an interface involved in a frame cropping function;
FIG. 11 illustrates an interface diagram for a short video application;
FIG. 12 illustrates a functional block diagram of an application;
FIG. 13 is a diagram illustrating a video track corresponding to a skip function;
FIG. 14 is a block diagram of a video clipping device provided by one embodiment of the present application;
FIG. 15 is a block diagram of a video clipping device according to another embodiment of the present application;
FIG. 16 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Before describing and explaining embodiments of the present application, some terms referred to in the embodiments of the present application will be explained.
1. Video cropping
Video cropping refers to the operation of selecting a segment from a video and saving the segment. In the embodiment of the present application, video cropping is simply referred to as "cropping".
2. Video skip-clipping
Video skip-clipping refers to the operation of selecting a segment from a video and saving the remaining segments left after that segment is removed. In the embodiments of the present application, video skip-clipping is simply referred to as "skip".
3. Frame cropping
Frame cropping refers to the operation of adjusting the picture area of the image frames contained in a video. In the embodiments of the present application, frame cropping may be applied to some of the image frames in a video or to all of them.
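As an illustrative sketch (not taken from the patent), frame cropping can be modeled as applying one crop rectangle to each selected image frame. The function below is a hypothetical pure-Python stand-in for that per-frame operation, with a frame represented as a 2-D list of pixel values.

```python
def crop_frame(frame, left, top, width, height):
    """Apply a crop rectangle to one image frame.

    `frame` is a 2-D list of pixel values (rows of columns); the same
    rectangle can be applied to some or to all frames of a video.
    """
    return [row[left:left + width] for row in frame[top:top + height]]
```

For example, cropping a 3x4 frame with the rectangle (left=1, top=1, width=2, height=2) keeps the central 2x2 block of pixels.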
In the method provided by the embodiments of the present application, each step may be performed by a terminal, for example an electronic device such as a mobile phone, tablet computer, e-book reader, multimedia playback device, wearable device, or PC (Personal Computer). Optionally, each step is performed by a target application installed and running on the terminal, for example a short video application or another application with video capturing or processing functions. For convenience, the following method embodiments describe the terminal as the performer of each step, but this is not limiting.
Referring to FIG. 1, a flowchart of a video clipping method according to an embodiment of the present application is shown. The method may include the following steps:
Step 101: display a first clipping interface. In the embodiment of the present application, the first clipping interface is a user interface for implementing the video skip function. Referring to FIG. 2, a schematic diagram of a first clipping interface 20 is shown. The first clipping interface 20 includes a frame sequence 21 of a first video. Optionally, as shown in FIG. 2, the first clipping interface 20 further includes a preview frame 22 in which the user can preview the image frames contained in the first video. The image frame displayed in the preview frame 22 may be the image frame at the position of the positioning cursor in the frame sequence 21. For example, if the positioning cursor is located at the 5-second timestamp, the image frame of the first video at 5 seconds is displayed in the preview frame 22.
In addition, the embodiments of the present application do not limit how display of the first clipping interface is triggered. For example, the terminal may display the first clipping interface, including the frame sequence of the first video, directly after the first video is captured. Alternatively, the terminal may display an editing interface for the first video that includes a target operation control (e.g., a button) for triggering the first clipping interface; after receiving the trigger signal corresponding to that control, the terminal displays the first clipping interface with the frame sequence of the first video.
The frame sequence of the first video contains thumbnails of some or all of the first video's image frames. The thumbnails may be arranged in chronological order, for example from left to right by increasing timestamp of the corresponding image frame. In addition, because the display area of the screen is limited, the thumbnails may not all be fully visible; for example, of two adjacent thumbnails, one may be displayed over the other and partially occlude it.
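A minimal sketch (an assumption, not the patent's method) of how thumbnail timestamps for the frame sequence might be chosen, spacing `count` thumbnails evenly over the video's duration:

```python
def thumbnail_timestamps(duration, count):
    """Return `count` evenly spaced timestamps (in seconds) whose image
    frames could serve as the thumbnails of the frame sequence."""
    if count <= 1:
        return [0.0]
    step = duration / (count - 1)
    return [round(i * step, 3) for i in range(count)]
```

For a 20-second video and 5 thumbnails, this samples frames at 0, 5, 10, 15, and 20 seconds.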
Step 102: acquire a selection instruction for a first segment in the frame sequence corresponding to the first video. The first segment is a segment of the first video containing several consecutive image frames; it contains only some of the first video's image frames, not all of them.
The selection instruction corresponding to the first segment is an instruction triggered by the user to indicate selection of the first segment. It may be triggered by a touch operation, mouse, gesture, voice, and the like, which is not limited in the embodiments of the present application.
In one example, the step 102 includes the following sub-steps:
1. Displaying a positioning control corresponding to the frame sequence of the first video;
the positioning control is an operation control for selecting a segment from a sequence of frames. Optionally, the positioning control comprises: a start timestamp positioning control and an end timestamp positioning control. The starting timestamp positioning control is an operation control for positioning a starting timestamp of the segment to be selected, and the ending timestamp positioning control is an operation control for positioning an ending timestamp of the segment to be selected.
Referring to FIG. 2, the first clipping interface 20 also includes positioning controls: a start timestamp positioning control 23 on the left and an end timestamp positioning control 24 on the right. In FIG. 2 the positioning controls are shown overlaid on the frame sequence; in other examples they may be shown above or below the frame sequence, which is not limited by this embodiment.
2. Acquiring a first dragging operation signal corresponding to the start timestamp positioning control, and adjusting the start timestamp according to the first dragging operation signal; and/or acquiring a second dragging operation signal corresponding to the end timestamp positioning control, and adjusting the end timestamp according to the second dragging operation signal;
When the user needs to adjust the start timestamp of the segment to be selected, the user can press and drag the start timestamp positioning control; during the drag, the control's position changes accordingly, for example following the user's finger. Optionally, the current start timestamp is displayed next to the control so the user knows the currently located start timestamp. Similarly, the end timestamp can be adjusted by pressing and dragging the end timestamp positioning control, with the current end timestamp optionally displayed next to it.
3. When a trigger signal corresponding to a selection control in the first clipping interface is acquired, acquiring a selection instruction for the first segment between the start timestamp and the end timestamp.
Referring to FIG. 2, the first clipping interface 20 also includes a selection control 25. After the user has finished selecting the start and end timestamps, the user may tap the selection control 25; when the terminal acquires the corresponding trigger signal, it acquires the selection instruction for the first segment between the start timestamp and the end timestamp.
In addition, while the start timestamp is being adjusted, the terminal may display the image frame corresponding to the currently located start timestamp in the preview frame 22 of the first clipping interface 20; and/or likewise for the end timestamp. In this way the user can see the image frame at the currently located timestamp and judge whether it meets the selection requirement.
Optionally, the terminal selects qualifying image frames from the first video, for example using an image recognition algorithm. Qualifying image frames may be called important image frames, such as frames containing a specific person or object. While the start timestamp positioning control is being adjusted, the terminal constrains it to move only between qualifying image frames and displays the image frame corresponding to the start timestamp in the first clipping interface; and/or likewise for the end timestamp positioning control. Allowing the positioning controls to move only between important image frames, rather than between arbitrary image frames, helps the user choose their positions more efficiently and conveniently.
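Constraining the positioning control to qualifying image frames amounts to snapping the dragged timestamp to the nearest qualifying frame. The following is a hypothetical sketch of that snapping rule (the patent does not specify one):

```python
def snap_to_qualifying_frame(timestamp, qualifying_timestamps):
    """Snap a dragged positioning-control timestamp to the nearest
    timestamp of a qualifying (e.g. important) image frame."""
    return min(qualifying_timestamps, key=lambda t: abs(t - timestamp))
```

For example, with qualifying frames at 0, 2.5, 6, and 9 seconds, a drag to 5.1 seconds snaps the control to 6 seconds.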
Step 103: display, in the first clipping interface, the frame sequences of the n remaining segments of the first video after the first segment is removed, where n is a positive integer.
For example, if the duration of the first video is 20 seconds and the user-selected first segment starts at 5 seconds and ends at 10 seconds, then after the first segment is removed the first video has 2 remaining segments: a first remaining segment from 0 to 5 seconds and a second remaining segment from 10 to 20 seconds.
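The arithmetic in this example can be sketched as follows (an illustrative model, not the patent's implementation): removing a `[start, end)` interval from a list of `(start, end)` segments yields the remaining segments, and the same function also covers the later case of culling a further segment from one of the remaining segments.

```python
def remove_segment(segments, start, end):
    """Remove the interval [start, end) from a list of (start, end)
    segments, returning the remaining segments in order."""
    remaining = []
    for s, e in segments:
        if end <= s or start >= e:      # no overlap: keep the segment whole
            remaining.append((s, e))
        else:
            if s < start:               # keep the part before the cut
                remaining.append((s, start))
            if end < e:                 # keep the part after the cut
                remaining.append((end, e))
    return remaining
```

With the 20-second video above, `remove_segment([(0, 20)], 5, 10)` yields `[(0, 5), (10, 20)]`, matching the two remaining segments.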
Referring to FIG. 2 and FIG. 3 together, after acquiring the selection instruction for the first segment, the terminal replaces the frame sequence 21 of the first video displayed in the first clipping interface 20 with the frame sequence 26 of the first remaining segment and the frame sequence 27 of the second remaining segment.
In addition, when displaying the frame sequences of the remaining segments, the terminal may leave a gap between the frame sequences of two adjacent remaining segments to distinguish them. Optionally, the terminal displays a gap mark between them, which may be a predetermined pattern or other identifier, such as the small dot shown in FIG. 3.
Further, for the frame sequence of any remaining segment displayed in the first clipping interface, the flow of steps 102 and 103 above may be applied again to remove a further segment from that remaining segment. For example, as shown in FIG. 4, the user may select and remove a further segment from the frame sequence 27 of the second remaining segment, obtaining the frame sequence 26 of the first remaining segment, the frame sequence 28 of a third remaining segment, and the frame sequence 29 of a fourth remaining segment.
Step 104: when the first confirmation instruction is acquired, generate a first target video containing the n remaining segments.
After the user finishes culling one or more segments (the first segment and, optionally, others) from the first video, the user may trigger the first confirmation instruction, for example by a touch operation, mouse, gesture, or voice, which is not limited in the embodiments of the present application. Referring to FIG. 4, when the user taps the check-mark button at the lower right corner of the first clipping interface 20 to trigger the first confirmation instruction, the terminal generates the first target video containing the 3 remaining segments shown.
After acquiring the first confirmation instruction, the terminal generates the first target video from the remaining segments of the first video according to their arrangement order. Optionally, assuming the first video has n remaining segments, the terminal splices them end to end in their arrangement order in the first clipping interface to generate the first target video, which it may then store.
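End-to-end splicing can be sketched as mapping each remaining segment onto a contiguous output timeline (again a hypothetical model; an actual implementation would re-encode or remux the video streams):

```python
def splice_segments(segments):
    """Splice (start, end) source segments end to end, returning for each
    one its position on the output timeline of the target video."""
    timeline, cursor = [], 0.0
    for s, e in segments:
        d = e - s
        timeline.append({"src": (s, e), "out": (cursor, cursor + d)})
        cursor += d
    return timeline
```

Splicing the remaining segments (0, 5) and (10, 20) of the earlier example produces a 15-second target video, with the second segment occupying output seconds 5 to 15.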
In summary, in the technical solution provided by the embodiments of the present application, a target video containing the remaining segments is generated by removing the user-selected segments from a video, providing a brand-new video clipping function (which may be called the "skip-clipping" function) that reversely selects the segments to be retained by removal. Through the skip-clipping function, a user can select one or more segments of an original video to be culled; the one or more retained remaining segments are then combined into a new video. Flexible acceptance and rejection of segments and combination of multiple non-contiguous segments are supported, making video clipping more flexible and satisfying more video clipping requirements.
In addition, the video cropping function provided by the related art only allows a user to select a single segment of an original video and save it as a new video; it does not support selecting multiple segments and generating a new video containing them. If a user needs a new video containing multiple non-contiguous segments of an original video, the user must splice the selected segments one by one with third-party video processing software, which is cumbersome and inefficient. In the embodiments of the present application, the user instead removes the unwanted segments from the original video, retains the needed remaining segments, and triggers the first confirmation instruction to obtain a new video containing those remaining segments. No third-party splicing software is needed, so the operation is simple and efficient.
In an alternative embodiment based on the embodiment of FIG. 1, a function for reordering the remaining segments is also supported. The following steps may be performed after step 103:
1. Acquiring a sliding operation signal whose start position is located on the frame sequence of the i-th of the n remaining segments and whose end position is located between the frame sequence of the j-th remaining segment and the frame sequence of the (j+1)-th remaining segment, where i is a positive integer less than or equal to n and j is a positive integer less than n;
2. Moving the frame sequence of the i-th remaining segment so that it is displayed between the frame sequence of the j-th remaining segment and the frame sequence of the (j+1)-th remaining segment, according to the sliding operation signal.
As shown in FIG. 5, suppose there are 4 remaining segments, shown as remaining segments 1 to 4 and arranged in order from left to right. The user performs a sliding operation whose start position is on the frame sequence of remaining segment 1 and whose end position is between the frame sequences of remaining segments 3 and 4. After receiving the sliding operation signal, the terminal moves the frame sequence of remaining segment 1 between the frame sequences of remaining segments 3 and 4. After the move, the left-to-right order of the remaining segments is: remaining segment 2, remaining segment 3, remaining segment 1, remaining segment 4. If the user then triggers the first confirmation instruction, the terminal splices the segments end to end in this new order to generate the first target video.
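The drag-to-reorder example can be sketched with list indices (0-based here, whereas the description counts segments from 1). `move_after` moves the dragged segment so it lands immediately after a target segment; swapping two segments, as in the alternative embodiment below, is a one-line variation.

```python
def move_after(segments, i, j):
    """Move the segment at index i so it lands immediately after the
    segment currently at index j (both indices into the original list)."""
    out = list(segments)
    seg = out.pop(i)
    # after popping index i, elements past i shift left by one
    out.insert(j if i < j else j + 1, seg)
    return out

def swap_segments(segments, i, j):
    """Exchange the positions of the segments at indices i and j."""
    out = list(segments)
    out[i], out[j] = out[j], out[i]
    return out
```

Moving remaining segment 1 to sit after remaining segment 3 reproduces the order in the figure: segment 2, segment 3, segment 1, segment 4.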
Optionally, while receiving the sliding operation signal, the terminal may move the frame sequence of the i-th remaining segment along the sliding track of the signal, achieving a dragged-display effect.
In other possible embodiments, a function for swapping the positions of any two remaining segments may also be supported. Optionally, after acquiring a sliding operation signal whose start position is located on the frame sequence of the i-th of the n remaining segments and whose end position is located on the frame sequence of the j-th remaining segment, the terminal swaps the positions of the two frame sequences, where i and j are unequal positive integers less than or equal to n.
In summary, the technical solution provided by the embodiments of the present application further offers a function for adjusting the arrangement order of the remaining segments: the user can directly drag the frame sequences of the remaining segments in the first clipping interface to reorder them according to actual needs, thereby enriching the product's functions.
In another optional embodiment provided based on the embodiment of fig. 1 or any one of the optional embodiments described above, a function of inserting a transition animation between two adjacent remaining segments is also supported. When the number n of remaining segments is an integer greater than 1, an interval mark is displayed between the frame sequence of the kth remaining segment and the frame sequence of the (k + 1)th remaining segment of the n remaining segments, where k is a positive integer less than n. Accordingly, the following method flow may be adopted to insert a transition animation between the kth remaining segment and the (k + 1)th remaining segment:
1. acquiring a first operation signal corresponding to an interval mark between a frame sequence of a kth remaining segment and a frame sequence of a (k + 1) th remaining segment;
in the embodiment of the present application, the form of the first operation signal is not limited. For example, the first operation signal may be a single-click operation signal, a double-click operation signal, or a press operation signal corresponding to the interval mark.
2. Inserting the target transition animation between the kth remaining segment and the (k + 1)th remaining segment according to the first operation signal.
In the embodiment of the application, a transition animation refers to an animation effect inserted between two segments to link them. The target transition animation refers to the transition animation inserted between the kth remaining segment and the (k + 1)th remaining segment.
In a possible implementation manner, after receiving the first operation signal, the terminal displays at least one candidate transition animation; acquires a selection instruction corresponding to a target transition animation among the at least one candidate transition animation; and inserts the target transition animation between the kth remaining segment and the (k + 1)th remaining segment.
Referring to fig. 6 in combination, when the user clicks the interval mark (i.e. small black dot) between two remaining segments, the terminal displays a popup window 61, and the popup window 61 contains several candidate transition animations 62, from which the user can select a suitable transition animation according to the requirement. For example, the user may click on a target transition animation (e.g., "animation 3"), triggering a selection instruction for the target transition animation. Accordingly, the terminal subsequently inserts "animation 3" between the two remaining segments.
In another possible implementation manner, after receiving the first operation signal, the terminal generates a target transition animation according to the kth remaining segment and/or the (k + 1)th remaining segment, and inserts the target transition animation between the kth remaining segment and the (k + 1)th remaining segment.
When a transition animation needs to be inserted between two adjacent remaining segments, the terminal may generate a transition animation related to the contents of the two remaining segments. For example, the terminal extracts a number of image frames from the kth remaining segment and/or the (k + 1)th remaining segment and adds animation effects to the extracted image frames to generate the target transition animation. The extracted image frames may be important image frames in the kth remaining segment and/or the (k + 1)th remaining segment, such as image frames containing a specific person or a specific object. In this way, the target transition animation inserted between the two remaining segments is more strongly related to them.
The terminal records that the target transition animation is to be inserted between the kth remaining segment and the (k + 1)th remaining segment, and subsequently, when splicing the remaining segments to generate the first target video, splices the target transition animation together with the two remaining segments.
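The record-then-splice behaviour described above can be sketched as follows, assuming segments and transitions are represented as opaque clips and the recorded transitions are keyed by the 1-based gap index k (the function name and data layout are hypothetical):

```python
def splice_with_transitions(segments, transitions):
    """Splice the remaining segments end to end; `transitions` maps the
    1-based gap index k to the transition animation recorded between
    the k-th and (k+1)-th remaining segments."""
    timeline = []
    for k, seg in enumerate(segments, start=1):
        timeline.append(seg)
        if k < len(segments) and k in transitions:
            timeline.append(transitions[k])
    return timeline
```

Gaps with no recorded transition are simply spliced directly, matching the default behaviour before any transition is inserted.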
Optionally, after the target transition animation is inserted between the kth remaining segment and the (k + 1)th remaining segment, the terminal adjusts the interval mark between the two remaining segments from a first display state to a second display state. An interval mark displayed in the first display state indicates that no transition animation is inserted between the remaining segments on its two sides; an interval mark displayed in the second display state indicates that a transition animation is inserted between the remaining segments on its two sides. The first display state and the second display state are two different display states differing in at least one of color, style, and shape. In this way, the user can intuitively see between which remaining segments transition animations have been inserted.
In addition, this embodiment only takes inserting the target transition animation between the kth remaining segment and the (k + 1)th remaining segment as an example; the user may insert a transition animation between any two adjacent remaining segments as required.
To sum up, the technical solution provided in the embodiment of the present application further provides a function of inserting transition animations between two adjacent remaining segments, so that a user can design and generate richer and more diverse target videos.
In another optional embodiment provided based on the embodiment of fig. 1 or any one of the optional embodiments described above, a function of restoring the culled segment between two adjacent remaining segments is also supported. When the number n of remaining segments is an integer greater than 1, an interval mark is displayed between the frame sequence of the kth remaining segment and the frame sequence of the (k + 1)th remaining segment of the n remaining segments, where k is a positive integer less than n. Accordingly, the following method flow may be adopted to restore the culled segment between the kth remaining segment and the (k + 1)th remaining segment:
1. acquiring a second operation signal corresponding to an interval mark between the frame sequence of the kth remaining segment and the frame sequence of the (k + 1) th remaining segment;
in the embodiment of the present application, the form of the second operation signal is not limited. For example, the second operation signal may be a single-click operation signal, a double-click operation signal, or a press operation signal corresponding to the interval mark.
It should be noted that, when the function of inserting a transition animation between two adjacent remaining segments and the function of restoring a culled segment are supported simultaneously, the function to be executed may be distinguished by different operation signals corresponding to the interval mark; that is, the first operation signal and the second operation signal are different. For example, the first operation signal is a single-click operation signal, and the second operation signal is a long-press operation signal.
2. Replacing the frame sequence of the kth remaining segment and the frame sequence of the (k + 1)th remaining segment with the frame sequence of the restored segment for display, according to the second operation signal.
The restored segment is a segment obtained by splicing, in sequence, the kth remaining segment, the culled segment between the kth remaining segment and the (k + 1)th remaining segment, and the (k + 1)th remaining segment.
With reference to fig. 4, the user presses the interval mark (i.e. the small black dot) between the frame sequence 28 of the third remaining segment and the frame sequence 29 of the fourth remaining segment shown in the right interface of fig. 4. The terminal splices, in sequence, the third remaining segment, the culled segment between the third and fourth remaining segments, and the fourth remaining segment to obtain a restored segment, then cancels the display of the frame sequence 28 of the third remaining segment and the frame sequence 29 of the fourth remaining segment and displays the frame sequence of the restored segment at the above position; that is, the right interface of fig. 4 changes back to the left interface (the positioning control may not be displayed on the upper layer of the frame sequence of the restored segment).
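The restore operation itself is a splice over segment lists. A sketch, modelling segments as lists of frames and assuming the culled segments are kept in a map keyed by the gap index (an illustrative representation, not the embodiment's actual data structure):

```python
def restore_segment(remaining, culled, k):
    """Replace the k-th and (k+1)-th remaining segments with a single
    restored segment: k-th segment + culled segment between them +
    (k+1)-th segment.  `remaining` is a list of segments (each a list
    of frames); `culled` maps the 1-based gap index k to the segment
    that was removed at that gap."""
    restored = remaining[k - 1] + culled[k] + remaining[k]
    return remaining[:k - 1] + [restored] + remaining[k + 1:]
```

The two adjacent frame sequences collapse into one, mirroring the right-to-left interface change described for fig. 4.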
In summary, the technical solution provided in the embodiment of the present application also provides a function of quickly restoring culled segments. When a user has performed many editing operations on an original video, restoring a culled segment by undoing operations step by step is inefficient and may also undo useful editing operations that have already been performed; the restore function described above avoids both problems.
In another optional embodiment provided based on the embodiment of fig. 1 or any one of the optional embodiments described above, the video clipping method provided in the embodiment of the present application further supports a video cutting function, and the corresponding method flow may be as follows:
1. displaying a second clipping interface, wherein the second clipping interface comprises a frame sequence of a second video;
in an embodiment of the present application, the second clipping interface is a user interface for implementing a video clipping function. Referring collectively to FIG. 7, a schematic diagram of a second clipping interface 70 is illustrated. The second clipping interface 70 comprises a sequence of frames 71 of a second video. Optionally, as shown in fig. 7, the second clipping interface 70 further includes a preview frame 72 for the user to preview the image frames contained in the second video in the preview frame 72.
In addition, in the embodiment of the present application, the manner of triggering display of the second clipping interface is not limited. For example, the terminal may display the second clipping interface after the second video is captured, and display the frame sequence of the second video in it; for another example, the terminal may display an editing interface related to the second video that includes a target operation control (e.g., a button) for triggering display of the second clipping interface, and after receiving a trigger signal corresponding to that control, display the second clipping interface with the frame sequence of the second video in it.
In the embodiment of the present application, the first video and the second video may each be any video. With combined reference to fig. 2 and fig. 7, a cut button and a skip-cut button are displayed at the bottom of the user interface; the user can trigger display of the second clipping interface through the cut button and display of the first clipping interface through the skip-cut button.
The frame sequence of the second video includes thumbnails of some or all of the image frames of the second video. The thumbnails may be arranged in time order, for example from left to right in ascending order of the timestamps, in the second video, of the image frames they correspond to.
2. Acquiring a selection instruction of a second segment in a frame sequence corresponding to a second video;
the second segment is a segment of the second video, which contains several consecutive image frames of the second video. Also, the image frames included in the second segment are part of the image frames in the second video, not all of the image frames.
The selection instruction corresponding to the second segment refers to an instruction triggered by a user for instructing selection of the second segment. For example, the selection instruction may be triggered by a touch operation, a mouse, a gesture, a voice, and the like, which is not limited in the embodiment of the present application. Optionally, the positioning control introduced in the embodiment of fig. 1 is used to select the second segment from the second video. For description of the positioning control, reference may be made to the embodiment in fig. 1, and details of this embodiment are not repeated.
Referring collectively to FIG. 7, a selection control 75 is also included in the second clipping interface 70. After the user has completed selecting the start timestamp and the end timestamp through the positioning control, the user may click on the selection control 75. Accordingly, when the terminal acquires the trigger signal corresponding to the selection control 75, the terminal acquires the selection instruction of the second segment between the start timestamp and the end timestamp.
After acquiring the selection instruction of the second segment in the frame sequence of the second video, the terminal could replace the frame sequence of the second video with the frame sequence of the second segment for display, but this would prevent the user from selecting other segments from the second video. In the embodiment of the application, therefore, after acquiring the selection instruction corresponding to the second segment, the terminal displays both the frame sequence of the second video and the frame sequence of the second segment in the second clipping interface. That is, the frame sequence of the second video and the frame sequences of the user-selected segments are displayed simultaneously. In this way, when the user needs to select other segments from the second video in addition to the second segment, the other segments can be selected directly in the frame sequence of the second video in the manner described above, and the frame sequences of all the selected segments can be seen directly in the second clipping interface.
In the embodiment of the present application, the manner of displaying the frame sequence of the second video and the frame sequence of the second segment is not limited. In one possible embodiment, in the frame sequence of the second video, the frame sequence of the segment (such as the second segment or other segments) selected by the user is displayed in the form of a mark; in another possible embodiment, the sequence of frames of the segment (e.g., the second segment or other segments) that the user has selected is displayed below or above the sequence of frames of the second video. Similarly, the segments may be separated by the interval marks described above, and a user may adjust the arrangement order of the selected segments, and may also insert transition animations between any two adjacent segments, where the specific process may refer to the description above and is not described here again.
3. When the second confirmation instruction is acquired, generating a second target video containing the second segment.
After finishing selecting segments in the second video, the user may trigger a second confirmation instruction, for example by a touch operation, a mouse, a gesture, or a voice, which is not limited in this embodiment of the application. Referring to fig. 7, when the user clicks the check-mark button at the lower right corner of the second clipping interface 70 to trigger the second confirmation instruction, the terminal generates the second target video containing all the segments the user selected from the second video.
Optionally, if the user selects only one segment from the second video, the terminal generates a second target video containing the segment; and if the user selects a plurality of segments from the second video, the terminal splices the segments end to end according to the arrangement sequence in the second clipping interface to generate a second target video. The terminal may store the generated second target video.
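The splicing of the selected segments into the second target video can be sketched as follows, modelling the second video as timestamped frames and each selection as a (start, end) timestamp range in arrangement order (a simplified, hypothetical representation):

```python
def cut_video(frames, selections):
    """`frames` is a list of (timestamp, frame) pairs for the second
    video; `selections` lists (start_ts, end_ts) ranges in the
    arrangement order shown in the second clipping interface.  The
    selected ranges are spliced end to end into the target video."""
    target = []
    for start, end in selections:
        target.extend(frame for ts, frame in frames if start <= ts <= end)
    return target
```

A single selection degenerates to a plain sub-clip; several selections are concatenated head to tail in interface order, as the embodiment describes.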
In summary, on the basis of the "skip-cut" function described above, the embodiment of the present application further provides a "cut" function, so that a user can choose either function as required to clip a video.
With combined reference to FIGS. 8 and 9, FIG. 8 shows a schematic diagram of selecting segments using the "cut" function, and fig. 9 shows a schematic diagram of selecting a segment using the "skip-cut" function. Assume that, from a video with a total duration of T3, a user needs to select one segment with timestamps between 0 and T1 and another segment with timestamps between T2 and T3, and generate a target video containing the two segments. With the "cut" function, the user first selects segment A in fig. 8, then selects segment B in fig. 8, and finally triggers a confirmation instruction to generate the target video containing the two segments. With the "skip-cut" function, the user selects segment C in fig. 9 and then triggers the confirmation instruction to generate the target video containing the two segments remaining after segment C is removed. It can be seen that, when multiple segments need to be selected and spliced, the "skip-cut" function requires fewer operation steps.
In another optional embodiment provided based on the embodiment of fig. 1 or any one of the optional embodiments described above, the video clipping method provided in the embodiment of the present application further supports a frame cropping function, and the corresponding method flow may be as follows:
1. Displaying a frame adjustment interface;
in the embodiment of the application, the frame adjustment interface is a user interface for implementing the frame cropping function. Referring collectively to FIG. 10, a schematic diagram of a frame adjustment interface 10 is illustrated. The frame adjustment interface 10 includes a preview display area 11, which displays a preview image frame of the third video. In addition, a frame adjustment box 12 superimposed on the upper layer of the preview image frame is displayed in the frame adjustment interface 10. The position and size of the frame adjustment box 12 are adjustable, and the user can select a desired frame by adjusting its position and/or size.
The frame scale of the preview image frame displayed in the preview display area 11 is the frame scale selected when the third video was captured. For example, suppose the user selected 9:16 and took a self-portrait to obtain the third video, with the user's head at the top of the captured picture. The user can adopt the method flow provided by this embodiment to select a suitable frame scale, crop the image frames in the third video, and keep the image area that needs to be retained.
Optionally, as shown in fig. 10, the frame adjustment interface 10 further includes a plurality of scale options 13, each corresponding to a frame scale, for example 9:16, 3:4, 1:1, 4:3, and 16:9. The aspect ratio of the frame adjustment box 12 displayed by the terminal is consistent with the frame scale corresponding to the scale option 13 selected by the user. For example, if the frame scale corresponding to the selected scale option 13 is 1:1, the terminal displays a frame adjustment box 12 with an aspect ratio of 1:1 on the upper layer of the preview image frame. The user may drag the corners of the frame adjustment box 12 to scale it proportionally.
In addition, the frame adjustment interface may further include a free-ratio scale option. If the user selects the free-ratio option, the aspect ratio of the frame adjustment box can be adjusted to any ratio.
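The relationship between a selected scale option and the displayed frame adjustment box can be sketched as a fit-and-center computation (a hypothetical helper; the embodiment does not specify how the box is initially placed):

```python
def fit_adjustment_box(preview_w, preview_h, ratio_w, ratio_h):
    """Return (x, y, w, h) of the largest frame adjustment box with
    aspect ratio ratio_w:ratio_h that fits inside a preview image frame
    of preview_w x preview_h, centered on the preview."""
    scale = min(preview_w / ratio_w, preview_h / ratio_h)
    w, h = ratio_w * scale, ratio_h * scale
    return ((preview_w - w) / 2, (preview_h - h) / 2, w, h)
```

For instance, a 1:1 option on a 1080x1920 (9:16) preview yields a 1080x1080 box centered vertically; the user can then drag or proportionally scale it from there.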
2. Adjusting the position and/or size of the frame adjustment box according to an operation instruction;
the user can drag the frame adjustment box as a whole to adjust its position, or drag its corners to adjust its size.
3. When a third confirmation instruction is acquired, intercepting the image area in the adjusted frame adjustment box from at least one image frame of the third video to generate a third target video.
After finishing adjusting the frame, the user may trigger a third confirmation instruction, for example by a touch operation, a mouse, a gesture, or a voice, which is not limited in this embodiment of the application. As shown in fig. 10, when the user clicks the check-mark button at the lower right corner of the frame adjustment interface 10 to trigger the third confirmation instruction, the terminal displays a cropped preview image frame in the preview display area 11; the cropped preview image frame is the image area in the frame adjustment box captured from the original preview image frame.
In addition, the terminal may intercept the image area in the adjusted frame adjustment box from every image frame of the third video to generate the third target video. Alternatively, the terminal may intercept the image area only from some image frames (such as one or several image frames) of the third video while the other image frames keep their original frames unchanged, then integrate the cropped and uncropped image frames, still in the arrangement order of the image frames in the third video, to generate the third target video. The manner of generating the third target video may be selected by the user.
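The interception of the image area from all or only some image frames can be sketched as follows, modelling an image frame as a 2-D list of pixels (an illustrative simplification of the actual rendering; the names are hypothetical):

```python
def crop_frame(frame, x, y, w, h):
    """Intercept the image area whose top-left corner is (x, y) and
    whose width and height are (w, h); a frame is a 2-D list of rows."""
    return [row[x:x + w] for row in frame[y:y + h]]


def crop_video(frames, box, indices=None):
    """Crop every frame, or only the frames at `indices`, keeping the
    original arrangement order of the image frames in the video."""
    x, y, w, h = box
    pick = None if indices is None else set(indices)
    return [crop_frame(f, x, y, w, h) if pick is None or i in pick else f
            for i, f in enumerate(frames)]
```

Passing `indices=None` corresponds to cropping every image frame; passing a subset corresponds to the mixed mode in which uncropped frames keep their original frame.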
To sum up, the technical scheme provided by the embodiment of the application further provides a function of frame-cropping a captured video, which further enriches the product functions: a user can choose among various modes, such as skip-cut, cut, and frame cropping, to clip a video.
The technical scheme provided by the embodiment of the application can be applied to short video application. The short video application is an application program integrating video shooting, editing, sharing and social functions.
Referring collectively to FIG. 11, a diagram of a relevant interface for a short video application is illustrated.
After the short video application is opened and runs in the foreground, a main interface 110 is displayed, in which recommended short videos can be played and some operation controls are displayed, such as a "home" button, a "find" button, a "message" button, a "my" button, and a shooting button 110a.
The user clicks the shooting button 110a on the main interface 110, and the short video application jumps to a shooting interface 111. A viewfinder frame and some operation controls may be displayed in the shooting interface 111, such as a start/stop button 111a, a button 111b for adding music, a button 111c for selecting a video from local storage, a button 111d for closing the shooting interface, and a button 111e for switching the lens, as shown in the drawing. The user can click the start/stop button 111a to shoot a short video.
After the shooting is completed, the short video application jumps to display the editing interface 112. The shot short video may be played and some operation controls, such as a button 112a for adding a special effect, a button 112b for adding music, a button 112c for clipping the short video, a button 112d for adding a filter, and the like, may be displayed in the editing interface 112.
If the user needs to perform the cut or skip-cut operation described above on the short video, the user may click the button 112c for clipping the short video, and the short video application jumps to a clipping interface 113. At the bottom of the clipping interface 113, "cut" and "skip-cut" tabs are displayed, between which the user can switch. If the user selects the "cut" tab, the short video application displays the second clipping interface described in the above embodiment, in which the user can cut the short video according to the step flow described above. If the user selects the "skip-cut" tab, the short video application displays the first clipping interface described in the above embodiment, in which the user can skip-cut the short video according to the step flow described above.
After the clipping process is completed, the user may save the resulting new video, choosing either to save it locally or to publish it.
Referring to fig. 12, a functional block diagram of a target application having video clip functionality provided by an embodiment of the present application is illustrated. The target application 120 may include: a video shooting module 121, a frame cropping module 122, a video clip module 123, and a video playing module 124.
The video shooting module 121 is used to shoot a video.
The frame cropping module 122 is used to crop the frames of a video. Specifically, after the user determines the position and size of the frame adjustment box, the frame cropping module 122 obtains the corresponding cropping region of the frame adjustment box in the original preview image frame. The cropping region may be represented by an origin coordinate, which may be the coordinate of a vertex (e.g., the top-left vertex) of the frame adjustment box in the original preview image frame, together with the length and width of the frame adjustment box. The frame cropping module 122 creates a canvas according to the size of the cropping region and renders the image content of the cropping region in the preview image frame onto the canvas, obtaining the cropped image frame. When multiple image frames in the video need to be cropped, the frame cropping module 122 repeats the above process. The frame cropping module 122 may implement the frame cropping function in an OpenGL environment, and may send the cropped video to the video playing module 124 for playing.
The video clipping module 123 is used to skip-cut and cut a video.
For skip-cutting, after the user selects the segment to be removed, the video clipping module 123 obtains the position information (such as the start timestamp and the length) of the segment, creates a new video track, and inserts the remaining segments of the video into the new video track according to the position information of the segment to be removed. For example, as shown in fig. 13, in the original video track, the total duration of the video is S, the start timestamp of the segment the user selected to remove is T, and the length of the segment is D; the new video track created by the video clipping module 123 then contains one segment with a start timestamp of 0 and an end timestamp of T, and another segment with a start timestamp of T + D and an end timestamp of S. When the user confirms generation of the skip-cut video, the video clipping module 123 sequentially splices the segments in the new video track to generate the skip-cut video.
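The new-track computation generalizes naturally to several removed segments. A sketch, with each culled segment given as a (start timestamp, length) pair (the helper name is hypothetical):

```python
def remaining_segments(total, removed):
    """Given the total duration of the video and a list of
    (start_timestamp, length) pairs for the culled segments, return the
    (start, end) timestamp pairs of the segments that are inserted into
    the new video track."""
    cursor, track = 0, []
    for start, length in sorted(removed):
        if start > cursor:
            track.append((cursor, start))
        cursor = start + length
    if cursor < total:
        track.append((cursor, total))
    return track
```

For the fig. 13 example (total duration S, one culled segment starting at T with length D), this yields exactly the two segments (0, T) and (T + D, S).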
For cutting, after the user selects a segment to be kept, the video clipping module 123 obtains the position information (such as the start timestamp and the length) of the segment, creates a new video track, and inserts the segment into the new video track. If the user selects a plurality of segments in sequence, the video clipping module 123 may insert them into the new video track in ascending order of their timestamps. When the user confirms generation of the cut video, the video clipping module 123 sequentially splices the segments in the new video track to generate the cut video.
The video clipping module 123 may send the skip-cut or cut video to the video playing module 124 for playing.
The technical scheme provided by the embodiment of the application supports functions such as skip-cutting, cutting, and frame-cropping an existing video, further expanding the product functions of the target application (such as a short video application) beyond conventional functions such as special effects, filters, props, and music. Users therefore have more choices when publishing videos, user needs are better met, and user experience is improved.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to FIG. 14, a block diagram of a video clipping apparatus according to an embodiment of the present application is shown. The apparatus has the function of implementing the above method examples, and the function may be implemented by hardware or by hardware executing corresponding software. The apparatus may be a terminal or may be arranged on a terminal. The apparatus 1400 may include: an interface display module 1410, a segment selection module 1420, a segment display module 1430, and a video generation module 1440.
An interface display module 1410 configured to display a first clipping interface, where the first clipping interface includes a sequence of frames of a first video.
A segment selecting module 1420, configured to obtain a selecting instruction corresponding to a first segment in the frame sequence of the first video.
A segment display module 1430 for displaying, in the first clipping interface, a sequence of frames of n remaining segments of the first video after the first segment is removed, where n is a positive integer.
The video generating module 1440 is configured to, when the first confirmation instruction is obtained, generate a first target video that includes the n remaining segments.
In summary, in the technical solution provided in the embodiment of the present application, a target video containing the remaining segments is generated by removing the segments the user selects from a video, providing a brand-new video clipping function (which may be referred to as the "skip-cut" function) that reversely selects the segments to be retained by selecting the segments to be removed. Through the "skip-cut" function, a user can select one or more segments of an original video to remove, and the one or more remaining segments are then combined into a new video. Flexible selection of segments and combination of multiple non-contiguous segments are supported, making video clipping more flexible and satisfying more video clipping requirements.
In an optional embodiment provided based on the embodiment of fig. 14, the segment selecting module 1420 is configured to:
displaying a positioning control corresponding to a sequence of frames of the first video, the positioning control comprising: a start timestamp positioning control and an end timestamp positioning control;
acquiring a first dragging operation signal corresponding to the starting timestamp positioning control, and adjusting the starting timestamp corresponding to the starting timestamp positioning control according to the first dragging operation signal; and/or acquiring a second dragging operation signal corresponding to the ending timestamp positioning control, and adjusting the ending timestamp corresponding to the ending timestamp positioning control according to the second dragging operation signal;
when a trigger signal corresponding to a selection control in the first clipping interface is acquired, acquiring a selection instruction of the first segment between the starting timestamp and the ending timestamp.
Optionally, the segment selecting module 1420 is further configured to:
selecting image frames meeting conditions from the first video;
in the process that the starting timestamp positioning control is adjusted, controlling the starting timestamp positioning control to move between the eligible image frames, and displaying the image frame corresponding to the starting timestamp in the first clipping interface; and/or in the process that the ending timestamp positioning control is adjusted, controlling the ending timestamp positioning control to move between the eligible image frames, and displaying the image frame corresponding to the ending timestamp in the first clipping interface.
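As a concrete illustration of this snapping behavior — offered only as an assumed sketch, since the embodiment does not fix a data model — a dragged timestamp can be clamped to the nearest eligible image frame (for example, a key frame):

```python
import bisect

def snap_to_eligible(t, eligible_times):
    """Snap a dragged timestamp t to the nearest eligible frame time.

    `eligible_times` must be a sorted, non-empty list of timestamps
    (in seconds) of the image frames that meet the condition.
    """
    i = bisect.bisect_left(eligible_times, t)
    # The nearest eligible time is either the one just before or just
    # after the insertion point; pick whichever is closer to t.
    candidates = eligible_times[max(0, i - 1):i + 1]
    return min(candidates, key=lambda e: abs(e - t))

keyframes = [0.0, 1.5, 3.0, 4.5, 6.0]
print(snap_to_eligible(2.1, keyframes))  # 1.5
```

Constraining the control to these snap points means the preview shown in the first clipping interface always corresponds to an eligible frame.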
In another optional embodiment provided based on the embodiment of fig. 14 or any one of the optional embodiments above, as shown in fig. 15, the apparatus 1400 further includes a sequence adjusting module 1450 for:
acquiring a sliding operation signal whose starting position is located in the frame sequence of the i-th remaining segment of the n remaining segments and whose ending position is located between the frame sequence of the j-th remaining segment and the frame sequence of the (j+1)-th remaining segment of the n remaining segments; wherein i is a positive integer less than or equal to n, j is a positive integer less than n, and n is an integer greater than 1;
and moving the frame sequence of the i-th remaining segment so that it is displayed between the frame sequence of the j-th remaining segment and the frame sequence of the (j+1)-th remaining segment according to the sliding operation signal.
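The reorder step above reduces to a list move. The sketch below is a minimal illustration with 0-based indices (the embodiment's i and j are 1-based), not the patent's code:

```python
def move_segment(segments, i, j):
    """Move segments[i] so it ends up between segments[j] and segments[j+1].

    Indices are 0-based positions in the original list; the drag described
    above supplies i (where the slide starts) and j (the drop slot).
    """
    seg = segments.pop(i)
    # After the pop, every position to the right of i has shifted left by one.
    insert_at = j + 1 if j < i else j
    segments.insert(insert_at, seg)
    return segments

print(move_segment(["A", "B", "C", "D"], 0, 2))  # ['B', 'C', 'A', 'D']
```

The off-by-one adjustment after `pop` is the only subtlety: dropping a segment to the left of its original position must not shift the target slot.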
In another optional embodiment provided based on the embodiment of fig. 14 or any one of the optional embodiments above, as shown in fig. 15, the apparatus 1400 further includes an animation insertion module 1460 for:
acquiring a first operation signal corresponding to an interval marker between a frame sequence of a kth remaining segment and a frame sequence of a (k + 1) th remaining segment in the n remaining segments, wherein k is a positive integer smaller than n, and n is an integer larger than 1;
displaying at least one candidate transition animation according to the first operation signal;
acquiring a selection instruction corresponding to a target transition animation in the at least one candidate transition animation;
inserting the target transition animation between the kth remaining segment and the (k + 1) th remaining segment.
In another optional embodiment provided based on the embodiment of fig. 14 or any one of the optional embodiments above, as shown in fig. 15, the apparatus 1400 further includes an animation insertion module 1460 for:
acquiring a first operation signal corresponding to an interval marker between a frame sequence of a kth remaining segment and a frame sequence of a (k + 1) th remaining segment in the n remaining segments, wherein k is a positive integer smaller than n, and n is an integer larger than 1;
after receiving the first operation signal, generating a target transition animation according to the k-th remaining segment and/or the (k+1)-th remaining segment;
inserting the target transition animation between the kth remaining segment and the (k + 1) th remaining segment.
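One simple way to generate such a transition from the adjacent segments — offered only as a hedged sketch, since the embodiment leaves the animation style open — is a cross-fade between the last frame of the k-th segment and the first frame of the (k+1)-th. Frames are modeled as grayscale float values for brevity:

```python
def crossfade(last_frame, first_frame, steps=5):
    """Blend linearly from last_frame to first_frame over `steps` frames."""
    return [
        last_frame * (1 - a) + first_frame * a
        for a in (s / (steps - 1) for s in range(steps))
    ]

def insert_transition(seg_a, seg_b):
    """Splice seg_a and seg_b with a generated cross-fade between them."""
    return seg_a + crossfade(seg_a[-1], seg_b[0]) + seg_b

print(insert_transition([0.0, 0.0], [1.0, 1.0]))
# [0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0]
```

A real implementation would blend per-pixel image arrays instead of scalars, but the generated-from-neighboring-segments structure is the same.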
In another optional embodiment provided based on the embodiment of fig. 14 or any one of the optional embodiments above, as shown in fig. 15, the apparatus 1400 further includes a segment restoring module 1470 configured to:
acquiring a second operation signal corresponding to the interval marker between the frame sequence of the k-th remaining segment and the frame sequence of the (k+1)-th remaining segment, wherein k is a positive integer less than n, and n is an integer greater than 1;
replacing the display of the frame sequence of the k-th remaining segment and the frame sequence of the (k+1)-th remaining segment with the frame sequence of a restored segment according to the second operation signal; wherein the restored segment is a segment obtained by splicing, in order, the k-th remaining segment, the removed segment located between the k-th remaining segment and the (k+1)-th remaining segment, and the (k+1)-th remaining segment.
In another optional embodiment provided based on the embodiment of fig. 14 or any one of the optional embodiments above, the video generating module 1440 is configured to: splice the n remaining segments end to end according to their arrangement order in the first clipping interface to generate the first target video.
In another optional embodiment provided based on the embodiment of fig. 14 or any one of the optional embodiments above,
the interface display module 1410 is further configured to display a second clipping interface, where the second clipping interface includes a frame sequence of a second video;
the segment selecting module 1420 is further configured to obtain a selecting instruction corresponding to a second segment in the frame sequence of the second video;
the video generating module 1440 is further configured to generate a second target video including the second segment when the second confirmation instruction is obtained.
Optionally, the segment display module 1430 is further configured to display the frame sequence of the second video and the frame sequence of the second segment in the second clipping interface.
In another optional embodiment provided based on the embodiment of fig. 14 or any one of the above optional embodiments, as shown in fig. 15, the apparatus 1400 further includes a frame size adjusting module 1480 configured to:
displaying a picture adjustment interface, wherein the picture adjustment interface comprises: a preview image frame of the third video and a picture adjustment box superimposed on the preview image frame;
adjusting the position and/or size of the picture adjustment box according to an operation instruction;
and when a third confirmation instruction is acquired, cropping the image area within the adjusted picture adjustment box from at least one image frame of the third video to generate a third target video.
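The crop step reduces, per frame, to slicing out the box region. The sketch below assumes a simple data model (a frame as a 2-D list of pixels, the box as x, y, width, height in pixels) and is not the patent's implementation:

```python
def crop_frame(frame, box):
    """Cut the region of `box` = (x, y, width, height) out of one frame."""
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]

def crop_video(frames, box):
    """Apply the adjusted crop box to every frame of the video."""
    return [crop_frame(frame, box) for frame in frames]

frame = [[r * 10 + c for c in range(4)] for r in range(3)]  # 4x3 test frame
print(crop_frame(frame, (1, 0, 2, 2)))  # [[1, 2], [11, 12]]
```

Applying the same box to every frame (as `crop_video` does) corresponds to cropping the whole video; applying it only to selected frames corresponds to the per-frame variant claimed later.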
It should be noted that, when the apparatus provided by the foregoing embodiments implements its functions, the division into the above functional modules is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; for the details of their specific implementation processes, reference is made to the method embodiments, which are not repeated here.
Referring to fig. 16, a block diagram of a terminal 1600 according to an embodiment of the present application is shown. The terminal 1600 may be a mobile phone, a tablet computer, a smart television, a multimedia playing device, a PC, etc.
Generally, terminal 1600 includes: a processor 1601, and a memory 1602.
In some embodiments, the terminal 1600 may also optionally include: peripheral interface 1603 and at least one peripheral. Processor 1601, memory 1602 and peripheral interface 1603 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1603 via buses, signal lines, or circuit boards. Specifically, the peripheral device may include: at least one of radio frequency circuitry 1604, a touch screen display 1605, a camera 1606, audio circuitry 1607, and a power supply 1608.
Those skilled in the art will appreciate that the configuration shown in fig. 16 is not intended to be limiting of terminal 1600, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
In an example embodiment, there is also provided a terminal comprising a processor and a memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions. The at least one instruction, at least one program, set of codes, or set of instructions is configured to be executed by one or more processors to implement the video clipping method described above.
In an exemplary embodiment, there is also provided a computer readable storage medium having stored therein at least one instruction, at least one program, code set or set of instructions which, when executed by a processor of a terminal, implements the above-described video clipping method.
Optionally, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided which, when executed, implements the above video clipping method.
It should be understood that "a plurality" herein means two or more. "And/or" describes the association relationship of associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. In addition, the step numbers described herein merely show one possible execution order of the steps by way of example; in some other embodiments, the steps may also be executed out of the numbered order, for example, two steps with different numbers may be executed simultaneously, or in an order reverse to that shown in the figures, which is not limited by the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and is not intended to limit the present application, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (10)
1. A method of video clipping, the method comprising:
displaying a first clipping interface, wherein the first clipping interface comprises a sequence of frames of a first video;
acquiring a selection instruction of a first segment in a frame sequence corresponding to the first video;
displaying a sequence of frames of n remaining segments of the first video after the first segment is removed in the first clipping interface, wherein n is a positive integer;
when a first confirmation instruction is acquired, generating a first target video containing the n remaining segments;
the method further comprises the following steps:
displaying a picture adjustment interface, wherein the picture adjustment interface comprises: a preview image frame of the first target video and a picture adjustment box superimposed on the preview image frame;
adjusting the position and/or size of the picture adjustment box according to an operation instruction;
when a third confirmation instruction is acquired, cropping the image area within the adjusted picture adjustment box from a part of the image frames of the first target video to obtain image frames with adjusted pictures, wherein the pictures of the image frames in the first target video other than the part of the image frames are not adjusted;
integrating the image frames with adjusted pictures and the image frames with unadjusted pictures, and generating a third target video according to the arrangement order of the image frames in the first target video;
wherein n is an integer greater than 1, an interval marker is displayed between the frame sequence of the k-th remaining segment and the frame sequence of the (k+1)-th remaining segment of the n remaining segments, and k is a positive integer less than n;
after displaying, in the first clipping interface, a sequence of frames of n remaining segments of the first video after the first segment is culled, the method further comprises:
when a first operation signal corresponding to the interval marker between the frame sequence of the k-th remaining segment and the frame sequence of the (k+1)-th remaining segment is received, extracting a plurality of image frames from the k-th remaining segment and/or the (k+1)-th remaining segment, and adding an animation effect to the extracted image frames to generate a target transition animation, wherein the extracted image frames are image frames containing a specific person or a specific object in the k-th remaining segment and/or the (k+1)-th remaining segment; inserting the target transition animation between the k-th remaining segment and the (k+1)-th remaining segment; and adjusting the interval marker between the k-th remaining segment and the (k+1)-th remaining segment from a first display state to a second display state, wherein the interval marker displayed in the first display state indicates that no transition animation is inserted between the k-th remaining segment and the (k+1)-th remaining segment, and the interval marker displayed in the second display state indicates that a transition animation is inserted between the k-th remaining segment and the (k+1)-th remaining segment;
when a second operation signal corresponding to the interval marker between the frame sequence of the k-th remaining segment and the frame sequence of the (k+1)-th remaining segment is acquired, replacing the display of the frame sequence of the k-th remaining segment and the frame sequence of the (k+1)-th remaining segment with the frame sequence of a restored segment according to the second operation signal; wherein the restored segment is a segment obtained by splicing, in order, the k-th remaining segment, the removed segment located between the k-th remaining segment and the (k+1)-th remaining segment, and the (k+1)-th remaining segment.
2. The method of claim 1, wherein the obtaining the selection instruction corresponding to the first segment of the sequence of frames of the first video comprises:
displaying a positioning control corresponding to a sequence of frames of the first video, the positioning control comprising: a start timestamp positioning control and an end timestamp positioning control;
acquiring a first dragging operation signal corresponding to the starting timestamp positioning control, and adjusting the starting timestamp corresponding to the starting timestamp positioning control according to the first dragging operation signal; and/or acquiring a second dragging operation signal corresponding to the ending timestamp positioning control, and adjusting the ending timestamp corresponding to the ending timestamp positioning control according to the second dragging operation signal;
when a trigger signal corresponding to a selection control in the first clipping interface is acquired, acquiring a selection instruction of the first segment between the starting timestamp and the ending timestamp.
3. The method of claim 2, further comprising:
selecting image frames meeting conditions from the first video;
in the process that the starting timestamp positioning control is adjusted, controlling the starting timestamp positioning control to move between the eligible image frames, and displaying the image frame corresponding to the starting timestamp in the first clipping interface; and/or in the process that the ending timestamp positioning control is adjusted, controlling the ending timestamp positioning control to move between the eligible image frames, and displaying the image frame corresponding to the ending timestamp in the first clipping interface.
4. The method of claim 1, wherein n is an integer greater than 1;
after displaying, in the first clipping interface, a sequence of frames of n remaining segments of the first video after the first segment is culled, the method further comprises:
acquiring a sliding operation signal whose starting position is located in the frame sequence of the i-th remaining segment of the n remaining segments and whose ending position is located between the frame sequence of the j-th remaining segment and the frame sequence of the (j+1)-th remaining segment of the n remaining segments; wherein i is a positive integer less than or equal to n, and j is a positive integer less than n;
and moving the frame sequence of the i-th remaining segment so that it is displayed between the frame sequence of the j-th remaining segment and the frame sequence of the (j+1)-th remaining segment according to the sliding operation signal.
5. The method of claim 1,
after displaying, in the first clipping interface, a sequence of frames of n remaining segments of the first video after the first segment is culled, the method further comprises:
acquiring a first operation signal corresponding to the interval marker between the sequence of frames of the kth remaining segment and the sequence of frames of the (k + 1) th remaining segment;
displaying at least one candidate transition animation according to the first operation signal;
acquiring a selection instruction corresponding to a target transition animation in the at least one candidate transition animation; inserting the target transition animation between the kth remaining segment and the (k + 1) th remaining segment.
6. The method of claim 1, wherein generating the first target video containing the n remaining segments comprises:
splicing the n remaining segments end to end according to their arrangement order in the first clipping interface to generate the first target video.
7. The method according to any one of claims 1 to 6, further comprising:
displaying a second clipping interface, wherein the second clipping interface comprises a sequence of frames of a second video;
acquiring a selection instruction of a second segment in the frame sequence corresponding to the second video;
displaying a sequence of frames of the second video and a sequence of frames of the second segment in the second clipping interface;
and when a second confirmation instruction is acquired, generating a second target video containing the second segment.
8. A video clipping apparatus, characterized in that the apparatus comprises:
the interface display module is used for displaying a first clipping interface, and the first clipping interface comprises a frame sequence of a first video;
the segment selection module is used for acquiring a selection instruction of a first segment in a frame sequence corresponding to the first video;
a segment display module, configured to display, in the first clipping interface, a sequence of frames of n remaining segments of the first video after the first segment is removed, where n is a positive integer;
the video generation module is used for generating a first target video containing the n remaining segments when a first confirmation instruction is obtained;
the apparatus is further configured to:
displaying a picture adjustment interface, wherein the picture adjustment interface comprises: a preview image frame of the first target video and a picture adjustment box superimposed on the preview image frame;
adjusting the position and/or size of the picture adjustment box according to an operation instruction;
when a third confirmation instruction is acquired, respectively cropping the image areas within the adjusted picture adjustment box from a part of the image frames of the first target video to obtain image frames with adjusted pictures, wherein the pictures of the image frames in the first target video other than the part of the image frames are not adjusted;
integrating the image frames with adjusted pictures and the image frames with unadjusted pictures, and generating a third target video according to the arrangement order of the image frames in the first target video;
wherein n is an integer greater than 1, an interval marker is displayed between the frame sequence of the k-th remaining segment and the frame sequence of the (k+1)-th remaining segment of the n remaining segments, and k is a positive integer less than n;
the device further comprises:
an animation inserting module, configured to: when a first operation signal corresponding to the interval marker between the frame sequence of the k-th remaining segment and the frame sequence of the (k+1)-th remaining segment is received, extract a plurality of image frames from the k-th remaining segment and/or the (k+1)-th remaining segment, and add an animation effect to the extracted image frames to generate a target transition animation, wherein the extracted image frames are image frames containing a specific person or a specific object in the k-th remaining segment and/or the (k+1)-th remaining segment; insert the target transition animation between the k-th remaining segment and the (k+1)-th remaining segment; and adjust the interval marker between the k-th remaining segment and the (k+1)-th remaining segment from a first display state to a second display state, wherein the interval marker displayed in the first display state indicates that no transition animation is inserted between the k-th remaining segment and the (k+1)-th remaining segment, and the interval marker displayed in the second display state indicates that a transition animation is inserted between the k-th remaining segment and the (k+1)-th remaining segment;
a segment restoring module, configured to: when a second operation signal corresponding to the interval marker between the frame sequence of the k-th remaining segment and the frame sequence of the (k+1)-th remaining segment is acquired, replace the display of the frame sequence of the k-th remaining segment and the frame sequence of the (k+1)-th remaining segment with the frame sequence of a restored segment according to the second operation signal; wherein the restored segment is a segment obtained by splicing, in order, the k-th remaining segment, the removed segment located between the k-th remaining segment and the (k+1)-th remaining segment, and the (k+1)-th remaining segment.
9. A terminal, characterized in that the terminal comprises a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method according to any one of claims 1 to 7.
10. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810989238.XA CN110868631B (en) | 2018-08-28 | 2018-08-28 | Video editing method, device, terminal and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810989238.XA CN110868631B (en) | 2018-08-28 | 2018-08-28 | Video editing method, device, terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110868631A CN110868631A (en) | 2020-03-06 |
CN110868631B true CN110868631B (en) | 2021-12-14 |
Family
ID=69651500
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810989238.XA Active CN110868631B (en) | 2018-08-28 | 2018-08-28 | Video editing method, device, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110868631B (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111356016B (en) | 2020-03-11 | 2022-04-22 | 北京小米松果电子有限公司 | Video processing method, video processing apparatus, and storage medium |
CN111629268B (en) * | 2020-05-21 | 2022-07-22 | Oppo广东移动通信有限公司 | Multimedia file splicing method and device, electronic equipment and readable storage medium |
CN113938751B (en) * | 2020-06-29 | 2023-12-22 | 抖音视界有限公司 | Video transition type determining method, device and storage medium |
CN111862936A (en) * | 2020-07-28 | 2020-10-30 | 游艺星际(北京)科技有限公司 | Method, device, electronic equipment and storage medium for generating and publishing works |
CN112004136A (en) * | 2020-08-25 | 2020-11-27 | 广州市百果园信息技术有限公司 | Method, device, equipment and storage medium for video clipping |
CN112565905B (en) * | 2020-10-24 | 2022-07-22 | 北京博睿维讯科技有限公司 | Image locking operation method, system, intelligent terminal and storage medium |
CN112702656A (en) * | 2020-12-21 | 2021-04-23 | 北京达佳互联信息技术有限公司 | Video editing method and video editing device |
CN114697749B (en) * | 2020-12-28 | 2024-09-03 | 北京小米移动软件有限公司 | Video editing method, device, storage medium and electronic equipment |
CN113038151B (en) * | 2021-02-25 | 2022-11-18 | 北京达佳互联信息技术有限公司 | Video editing method and video editing device |
CN113242466B (en) * | 2021-03-01 | 2023-09-05 | 北京达佳互联信息技术有限公司 | Video editing method, device, terminal and storage medium |
CN113038034A (en) * | 2021-03-26 | 2021-06-25 | 北京达佳互联信息技术有限公司 | Video editing method and video editing device |
CN113709560B (en) * | 2021-03-31 | 2024-01-02 | 腾讯科技(深圳)有限公司 | Video editing method, device, equipment and storage medium |
CN113099288A (en) * | 2021-03-31 | 2021-07-09 | 上海哔哩哔哩科技有限公司 | Video production method and device |
CN113099287A (en) * | 2021-03-31 | 2021-07-09 | 上海哔哩哔哩科技有限公司 | Video production method and device |
CN113473224B (en) * | 2021-06-29 | 2023-05-23 | 北京达佳互联信息技术有限公司 | Video processing method, video processing device, electronic equipment and computer readable storage medium |
CN113542890B (en) * | 2021-08-03 | 2023-06-13 | 厦门美图之家科技有限公司 | Video editing method, device, equipment and medium |
CN113852767B (en) * | 2021-09-23 | 2024-02-13 | 北京字跳网络技术有限公司 | Video editing method, device, equipment and medium |
CN113905274B (en) * | 2021-09-30 | 2024-05-17 | 安徽尚趣玩网络科技有限公司 | Video material splicing method and device based on EC (electronic control) identification |
CN116193047A (en) * | 2021-11-29 | 2023-05-30 | 北京字跳网络技术有限公司 | Video processing method, device, equipment and medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104469179A (en) * | 2014-12-22 | 2015-03-25 | 杭州短趣网络传媒技术有限公司 | Method for combining dynamic pictures into mobile phone video |
CN107943552A (en) * | 2017-11-16 | 2018-04-20 | 腾讯科技(成都)有限公司 | The page switching method and mobile terminal of a kind of mobile terminal |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7739599B2 (en) * | 2005-09-23 | 2010-06-15 | Microsoft Corporation | Automatic capturing and editing of a video |
CN101005609B (en) * | 2006-01-21 | 2010-11-03 | 腾讯科技(深圳)有限公司 | Method and system for forming interaction video frequency image |
CN101227568B (en) * | 2008-01-31 | 2011-05-04 | 成都索贝数码科技股份有限公司 | Special effect transforming method of video image |
JP2011254240A (en) * | 2010-06-01 | 2011-12-15 | Sony Corp | Image processing device, image processing method and program |
CN102290082B (en) * | 2011-07-05 | 2014-03-26 | 央视国际网络有限公司 | Method and device for processing brilliant video replay clip |
KR101328199B1 (en) * | 2012-11-05 | 2013-11-13 | 넥스트리밍(주) | Method and terminal and recording medium for editing moving images |
CN103096184B (en) * | 2013-01-18 | 2016-04-13 | 深圳市同洲电子股份有限公司 | A kind of video editing method and device |
WO2016095072A1 (en) * | 2014-12-14 | 2016-06-23 | 深圳市大疆创新科技有限公司 | Video processing method, video processing device and display device |
CN107465958A (en) * | 2017-09-07 | 2017-12-12 | 北京奇虎科技有限公司 | A kind of video sharing method, apparatus, electronic equipment and medium |
- 2018-08-28: CN CN201810989238.XA — patent CN110868631B, status: Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104469179A (en) * | 2014-12-22 | 2015-03-25 | 杭州短趣网络传媒技术有限公司 | Method for combining dynamic pictures into mobile phone video |
CN107943552A (en) * | 2017-11-16 | 2018-04-20 | 腾讯科技(成都)有限公司 | The page switching method and mobile terminal of a kind of mobile terminal |
Non-Patent Citations (2)
Title |
---|
Photo2Video—A System for Automatically Converting Photographic Series Into Video; X.-S. Hua; IEEE Transactions on Circuits and Systems for Video Technology; 2006-07-24; full text * |
From Novice to Expert—Easily Mastering Video Editing (菜鸟变大虾——轻松玩转视频编辑); Xu Jian (徐健); Computer & Network (《计算机与网络》); 2004-12-31; full text * |
Also Published As
Publication number | Publication date |
---|---|
CN110868631A (en) | 2020-03-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110868631B (en) | Video editing method, device, terminal and storage medium | |
US10656811B2 (en) | Animation of user interface elements | |
US20160323507A1 (en) | Method and apparatus for generating moving photograph | |
CN107750369B (en) | Electronic device for displaying a plurality of images and method for processing images | |
CN107562680A (en) | Data processing method, device and terminal device | |
US20130300750A1 (en) | Method, apparatus and computer program product for generating animated images | |
EP3677322A1 (en) | Virtual scene display method and device, and storage medium | |
CN114095776B (en) | Screen recording method and electronic equipment | |
CN110971953B (en) | Video playing method, device, terminal and storage medium | |
CN111638784A (en) | Facial expression interaction method, interaction device and computer storage medium | |
US20240177365A1 (en) | Previewing method and apparatus for effect application, and device, and storage medium | |
CN111770386A (en) | Video processing method, video processing device and electronic equipment | |
CN111679772B (en) | Screen recording method and system, multi-screen device and readable storage medium | |
CN114466232B (en) | Video processing method, device, electronic equipment and medium | |
KR101776674B1 (en) | Apparatus for editing video and the operation method | |
CN113068072A (en) | Video playing method, device and equipment | |
US10817167B2 (en) | Device, method and computer program product for creating viewable content on an interactive display using gesture inputs indicating desired effects | |
CN112307252A (en) | File processing method and device and electronic equipment | |
US10637905B2 (en) | Method for processing data and electronic apparatus | |
CN110703973B (en) | Image cropping method and device | |
CN115460448A (en) | Media resource editing method and device, electronic equipment and storage medium | |
JP2005354332A (en) | Image reproducer and program | |
CN115904168A (en) | Multi-device-based image material processing method and related device | |
CN112312203B (en) | Video playing method, device and storage medium | |
CN115767141A (en) | Video playing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40022652; Country of ref document: HK |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |