CN110381365A - Video frame extraction method, apparatus and electronic device - Google Patents
- Publication number
- CN110381365A (application number CN201910591313.1A)
- Authority
- CN
- China
- Prior art keywords
- video
- interval
- track
- video frames
- frames
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44004—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440281—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Abstract
Embodiments of the present disclosure provide a video frame extraction method, apparatus, and electronic device in the technical field of data processing. The method includes: acquiring a focus time period for a target video in a video track, the target video comprising T video frames; determining, based on the focus time period, the video frames on an interval [M, N] to be displayed in the video track, where M, N, and T are natural numbers and M < N < T; extracting the video frames on the interval [M-i, N+j] from the target video, where i < M and N+j < T; and, in response to an operation on the video track, selecting a plurality of video frames on the interval [M-i, N+j] for display on the video track. The disclosed scheme improves the efficiency of video frame extraction.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a video frame extraction method and apparatus, and an electronic device.
Background
With the development of internet technology, more and more applications run on smart devices. Video applications, as one category of these, are increasingly common, and they allow users to edit videos conveniently and quickly on a mobile terminal.
During video editing, the interactive interface of video editing software contains at least two interactive areas: a video display area, which shows the current frame of the video, and a video track area, which is used to adjust, along the time dimension, the video segment to be viewed or edited. The video track area usually displays thumbnails of the video being edited, which must be extracted from that video, so that the user can conveniently select a time segment. How to extract these thumbnails for the video track area quickly and effectively is therefore a problem to be solved.
Disclosure of Invention
In view of the above, embodiments of the present disclosure provide a method, an apparatus, and an electronic device for video frame extraction, which at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides a video frame extraction method, including:
acquiring a focus time period for a target video in a video track, the target video comprising T video frames;
determining, based on the focus time period, the video frames on an interval [M, N] to be displayed in the video track, where M, N, and T are natural numbers and M < N < T;
extracting the video frames on the interval [M-i, N+j] from the target video, where i < M and N+j < T;
selecting, in response to an operation on the video track, a plurality of video frames on the interval [M-i, N+j] for display on the video track.
According to a specific implementation of the embodiment of the present disclosure, after extracting the video frames on the interval [M-i, N+j] from the target video, the method further includes:
storing the extracted video frames on the interval [M-i, N+j] in a preset cache.
According to a specific implementation of the embodiment of the present disclosure, the method further includes:
calculating the storage capacity occupied by the extracted video frames on the interval [M-i, N+j];
dynamically managing the cache based on that storage capacity.
According to a specific implementation of the embodiment of the present disclosure, determining the video frames on the interval [M, N] to be displayed in the video track based on the focus time period includes:
calculating the difference distance between adjacent video frames in the target video;
judging whether the difference distance is greater than a preset threshold;
if so, extracting those adjacent video frames as video frames on the interval [M, N] to be displayed.
According to a specific implementation of the embodiment of the present disclosure, after extracting the video frames on the interval [M-i, N+j] from the target video, the method further includes:
performing optimization processing on the video frames in the interval [M-i, N+j] so that they occupy less storage space.
According to a specific implementation of the embodiment of the present disclosure, extracting the video frames on the interval [M-i, N+j] from the target video includes:
setting up an asynchronously executed sub-thread;
extracting the video frames on the interval [M-i, N+j] from the target video on that sub-thread.
According to a specific implementation of the embodiment of the present disclosure, selecting a plurality of video frames on the interval [M-i, N+j] for display on the video track in response to an operation on the video track includes:
multiplexing the frame view controls of the video display area;
displaying N-M video frames on the video track using those frame view controls.
According to a specific implementation of the embodiment of the present disclosure, acquiring the focus time period for the target video in the video track includes:
judging whether a frame view display request exists for the current view;
if so, cancelling the previous frame view display request that preceded the current view;
determining the focus time period for the target video in the video track based on the current frame view display request.
In a second aspect, an embodiment of the present disclosure provides a video frame extraction apparatus, including:
an acquisition module, configured to acquire a focus time period for a target video in a video track, where the target video includes T video frames;
a determining module, configured to determine, based on the focus time period, the video frames on an interval [M, N] to be displayed in the video track, where M, N, and T are natural numbers and M < N < T;
an extraction module, configured to extract the video frames on the interval [M-i, N+j] from the target video, where i < M and N+j < T;
a display module, configured to select, in response to an operation on the video track, a plurality of video frames on the interval [M-i, N+j] for display on the video track.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, enabling the at least one processor to perform the video frame extraction method of the first aspect or any implementation of the first aspect.
In a fourth aspect, the disclosed embodiments further provide a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the video frame extraction method of the first aspect or any implementation of the first aspect.
In a fifth aspect, the present disclosure further provides a computer program product, including a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions that, when executed by a computer, cause the computer to perform the video frame extraction method of the first aspect or any implementation of the first aspect.
The video frame extraction scheme in the embodiment of the present disclosure includes: acquiring a focus time period for a target video in a video track, where the target video includes T video frames; determining, based on the focus time period, the video frames on an interval [M, N] to be displayed in the video track, where M, N, and T are natural numbers and M < N < T; extracting the video frames on the interval [M-i, N+j] from the target video, where i < M and N+j < T; and selecting, in response to an operation on the video track, a plurality of video frames on the interval [M-i, N+j] for display on the video track. This scheme improves the efficiency of video frame extraction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of a video frame extraction process according to an embodiment of the disclosure;
fig. 2 is a schematic diagram of video editing software provided in an embodiment of the present disclosure;
fig. 3 is a schematic diagram of another video frame extraction process provided in an embodiment of the present disclosure;
fig. 4 is a schematic diagram of another video frame extraction process provided in an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a video frame extracting apparatus according to an embodiment of the disclosure;
fig. 6 is a schematic diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides a video frame extraction method. The video frame extraction method provided by the embodiment can be executed by a computing device, the computing device can be implemented as software, or implemented as a combination of software and hardware, and the computing device can be integrated in a server, a terminal device and the like.
Referring to fig. 1, a video frame extraction method provided by the embodiment of the present disclosure includes the following steps:
s101, obtaining a focus time period aiming at a target video in a video track, wherein the target video comprises T video frames.
The target video is an editing object aimed at in the process of video editing, and referring to fig. 2, at least two interactive areas exist in the interactive interface of the video editing software: the video track area is used for adjusting a video segment needing to be viewed or edited in a time dimension, and the video track area is usually used for displaying a thumbnail of a video to be edited so as to facilitate time segment selection of a user.
To facilitate editing, the video track typically displays an operation track for a predetermined length of time. For example, for a 60 second duration video, the video track may display 20 seconds duration video content. The target video may be composed of T video frames, T being a natural number. For ease of viewing, several video frames from the T video frames may be selected for placement on the video track.
The user may select a video time period of interest on the video track, for example, for a target video having a duration of 60 seconds, the user focuses on the 20 th to 40 th seconds of video content on the target track. By reading the start point time a1 and the end point time a2 of the target video on the video track, the period of attention of the target video can be acquired.
S102, determining, based on the focus time period, the video frames on an interval [M, N] to be displayed in the video track, where M, N, and T are natural numbers and M < N < T.
For a target video of duration a, with the focus time period on the video track starting at a1 and ending at a2, the video frames on the interval [M, N] to be displayed in the video track can be determined from a, a1, and a2.
The values of M and N can be determined with an integer (floor) function E(x): M = E(T × a1/a) and N = E(T × a2/a), which yields the video frames on the interval [M, N] to be displayed in the video track. For example, for a target video 60 seconds long containing 1440 video frames, if the user focuses on the 20th to 40th seconds of content, then M = E(1440 × 20/60) = 480 and N = E(1440 × 40/60) = 960, so video frames in the frame interval [480, 960] may be displayed on the video track.
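In Python, this interval computation could be sketched as follows, with `floor` standing in for the integer function E(x); the function name `display_interval` is ours for illustration:

```python
from math import floor

def display_interval(total_frames: int, duration: float,
                     a1: float, a2: float) -> tuple:
    """Map a focus time period [a1, a2] (in seconds) within a video of the
    given duration onto a frame interval [M, N], using floor() as the
    integer function E(x) described in the text."""
    m = floor(total_frames * a1 / duration)
    n = floor(total_frames * a2 / duration)
    return m, n

# Worked example from the text: 60 s video, 1440 frames, focus on 20-40 s.
print(display_interval(1440, 60, 20, 40))  # (480, 960)
```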
S103, extracting the video frames on the interval [M-i, N+j] from the target video, where i < M and N+j < T.
The video frames on the interval [M, N] correspond to the user's focus time period. When the user moves the focus time period forward or backward on the video track, the new focus period usually needs to be re-analyzed and the video frames displayed on the video track recalculated. On one hand, this repeatedly invokes the computing resources of the system and wastes them; on the other hand, the user must wait for some time before the video frames of the new focus period can be viewed, which degrades the user experience.
For this reason, once the frame interval [M, N] is determined, video frames can be selected over a wider interval, so that frames outside the user's focus time period are preloaded in advance and the loading efficiency of video frames is improved.
Specifically, the video frames on the interval [M-i, N+j] can be extracted from the target video, where i and j are natural numbers with i < M and N+j < T, and the specific values of i and j can be set according to actual needs. Thus, after the user's focus time period is acquired, in addition to loading the video frames on the interval [M, N] onto the video track, the i + j other video frames adjacent to the focus period can be stored in a preset cache; when the user moves the focus period forward or backward on the video track, the cached video frames can be loaded according to the adjusted focus period.
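A minimal Python sketch of this padded extraction, where a `decode_frame` callable stands in for the actual decoder and a plain dict serves as the preset cache; all names here are illustrative, not the patent's API:

```python
def extract_with_padding(decode_frame, total_frames, m, n, i, j):
    """Extract the frames on [m-i, n+j], clamped to the video bounds,
    and store them in a dict keyed by frame index (the preset cache)."""
    start = max(0, m - i)                # clamping enforces i < M
    end = min(total_frames - 1, n + j)   # clamping enforces N + j < T
    return {idx: decode_frame(idx) for idx in range(start, end + 1)}

# Usage with a stub decoder that returns a placeholder per index:
# frames [480, 960] plus 60 frames of padding on each side.
buf = extract_with_padding(lambda idx: f"frame-{idx}", 1440, 480, 960, 60, 60)
print(len(buf))  # 601 cached frames, indices 420..1020
```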
S104, selecting, in response to an operation on the video track, a plurality of video frames on the interval [M-i, N+j] for display on the video track.
The focus time period of the target video on the video track can be determined from the user's selection operation on the track. When the focus period after the user's operation falls within the time period corresponding to the interval [M-i, N+j], a plurality of video frames can be selected directly from the interval [M-i, N+j] and displayed as thumbnails.
When displaying video frames on the video track, either all video frames on the interval [M-i, N+j] that correspond to the user's focus period can be shown, or a subset of key frames can be selected from the interval; the number of video frames displayed on the video track can be set according to actual needs, making the displayed frames more representative.
Through the disclosed scheme, video frames can be extracted in advance according to actual needs, improving the efficiency of extracting and loading video frames.
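The key-frame selection described above could be sketched as evenly spaced sampling over the interval; the helper name and the even-spacing policy are illustrative assumptions, since the patent leaves the selection strategy configurable:

```python
def pick_display_frames(start: int, end: int, count: int) -> list:
    """Pick `count` evenly spaced frame indices from [start, end] so the
    thumbnails on the track stay representative of the whole interval.
    Assumes count >= 2 when subsampling is actually needed."""
    if count >= end - start + 1:
        return list(range(start, end + 1))  # interval already small enough
    span = end - start
    return [start + round(k * span / (count - 1)) for k in range(count)]

print(pick_display_frames(480, 960, 5))  # [480, 600, 720, 840, 960]
```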
To increase the loading speed of video frames, according to a specific implementation of the embodiment of the present disclosure, after the video frames on the interval [M-i, N+j] are extracted from the target video, they may be stored in a preset cache. The preset cache may be RAM or a storage medium such as Flash.
During storage, as one embodiment, the storage capacity occupied by the extracted video frames on the interval [M-i, N+j] may also be calculated, and the cache dynamically managed based on that capacity; for example, the storage space of the cache may be dynamically planned according to the occupied capacity.
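One way this dynamic management could look, sketched in Python as a size-bounded buffer that evicts the least-recently-used frames once a byte budget is exceeded; the class and its eviction policy are our illustrative assumptions, not the patent's prescribed mechanism:

```python
from collections import OrderedDict

class FrameCache:
    """Size-bounded frame buffer: tracks the storage capacity occupied
    by cached frames and evicts least-recently-used entries over budget."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0                      # storage currently occupied
        self._frames = OrderedDict()       # frame index -> encoded bytes

    def put(self, idx: int, data: bytes) -> None:
        if idx in self._frames:
            self.used -= len(self._frames.pop(idx))
        self._frames[idx] = data
        self.used += len(data)
        while self.used > self.capacity:   # dynamic management step:
            _, evicted = self._frames.popitem(last=False)  # drop oldest
            self.used -= len(evicted)

    def get(self, idx: int):
        data = self._frames.get(idx)
        if data is not None:
            self._frames.move_to_end(idx)  # mark as recently used
        return data

cache = FrameCache(capacity_bytes=30)
for k in range(5):
    cache.put(k, b"0123456789")            # 10 bytes per "frame"
print(cache.used)  # 30: only the 3 most recent frames remain cached
```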
Referring to fig. 3, according to a specific implementation of the embodiment of the present disclosure, determining the video frames on the interval [M, N] to be displayed in the video track based on the focus time period includes:
S301, calculating the difference distance between adjacent video frames in the target video.
For any two adjacent video frames x and y, the feature matrices Mx and My of the frame images can be obtained by down-sampling; the down-sampling frequency can be set according to actual needs, reducing computational cost.
By calculating the Euclidean distance between the feature matrices Mx and My, the difference distance Dxy between the two adjacent images is obtained, from which it can be judged whether the two images are similar.
S302, judging whether the difference distance is greater than a preset threshold.
By comparing the Euclidean distance Dxy, which represents the difference distance, with a preset threshold, it can be determined whether two adjacent images are sufficiently different.
S303, if so, extracting those adjacent video frames as video frames on the interval [M, N] to be displayed.
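The difference-distance computation can be illustrated in plain Python, assuming grayscale frames represented as nested lists; a real implementation would operate on decoded image buffers, and the downsampling step here is strided pixel skipping:

```python
from math import sqrt

def downsample(image, step):
    """Keep every step-th pixel in both dimensions to cut computation."""
    return [row[::step] for row in image[::step]]

def difference_distance(x, y, step=2):
    """Euclidean distance Dxy between the downsampled feature matrices
    Mx and My of two adjacent frames (grayscale, as nested lists)."""
    mx, my = downsample(x, step), downsample(y, step)
    return sqrt(sum((px - py) ** 2
                    for rx, ry in zip(mx, my)
                    for px, py in zip(rx, ry)))

a = [[0, 0, 0, 0]] * 4   # 4x4 all-zero frame
b = [[3, 3, 3, 3]] * 4   # 4x4 all-threes frame
print(difference_distance(a, b, step=2))  # 6.0 = sqrt(4 pixels * 3**2)
```

Frames whose distance exceeds the preset threshold would then be kept as display candidates, per S302/S303.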
According to a specific implementation of the embodiment of the present disclosure, after the video frames on the interval [M-i, N+j] are extracted from the target video, optimization processing may be performed on them so that they occupy less storage space.
Referring to fig. 4, according to a specific implementation of the embodiment of the present disclosure, extracting the video frames on the interval [M-i, N+j] from the target video includes:
S401, setting up an asynchronously executed sub-thread.
In addition to the main thread of the video editing software, a separate sub-thread is created that executes asynchronously with respect to the main thread, and the frame extraction for the target video is carried out on that sub-thread alone.
S402, extracting the video frames on the interval [M-i, N+j] from the target video on the sub-thread.
The sub-thread corresponding to the video frame extraction instruction extracts the video frames on the interval [M-i, N+j] from the target video. Video frame extraction itself uses image processing algorithms that are common in the prior art and is not described again here.
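The asynchronous sub-thread extraction could be sketched with Python's standard `threading` and `queue` modules; the decoder stub and function names are illustrative:

```python
import queue
import threading

def extract_async(decode_frame, start: int, end: int) -> queue.Queue:
    """Run frame extraction on a worker sub-thread so the main (UI)
    thread is never blocked; extracted frames arrive on a queue."""
    out = queue.Queue()

    def worker():
        for idx in range(start, end + 1):
            out.put((idx, decode_frame(idx)))
        out.put(None)  # sentinel: extraction finished

    threading.Thread(target=worker, daemon=True).start()
    return out

# Usage: the main thread stays responsive and drains the queue.
frames = extract_async(lambda i: f"frame-{i}", 420, 423)
results = []
while (item := frames.get()) is not None:
    results.append(item)
print(len(results))  # 4 frames, indices 420..423 in order
```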
To improve the display efficiency of thumbnails on the video track, according to a specific implementation of the embodiment of the present disclosure, selecting a plurality of video frames on the interval [M-i, N+j] for display on the video track in response to an operation on the video track includes: multiplexing the frame view controls of the video display area and using them to display N-M video frames on the video track.
To improve the efficiency of video frame extraction, according to a specific implementation of the embodiment of the present disclosure, while acquiring the focus time period for the target video in the video track, it may be determined whether a frame view display request exists for the current view; if so, the previous frame view display request preceding the current view is cancelled, and the focus time period for the target video in the video track is determined based on the current frame view display request. In this way, a stale frame view display request that no longer needs to run is abandoned in time, improving the display efficiency of the system.
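The request-superseding logic could be sketched with a generation counter: each new frame view display request invalidates the one before it, so workers serving a stale request can stop early. The class and method names are our illustrative assumptions:

```python
import itertools

class FrameViewRequests:
    """Keep only the newest frame view display request: submitting a
    new request invalidates any earlier one that has not completed."""

    def __init__(self):
        self._counter = itertools.count()
        self._current = -1

    def submit(self, focus_period) -> int:
        """Register a request for the given focus period; returns a
        token the worker checks before publishing its result."""
        token = next(self._counter)
        self._current = token          # all older tokens become stale
        return token

    def is_cancelled(self, token: int) -> bool:
        return token != self._current

reqs = FrameViewRequests()
first = reqs.submit((20, 40))
second = reqs.submit((25, 45))         # supersedes the first request
print(reqs.is_cancelled(first))   # True: the stale request is dropped
print(reqs.is_cancelled(second))  # False: only the newest proceeds
```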
Corresponding to the above method embodiment, and referring to fig. 5, the present disclosure also provides a video frame extraction apparatus 50, including:
an obtaining module 501, configured to acquire a focus time period for a target video in a video track, where the target video includes T video frames.
The target video is the object being edited. Referring to fig. 2, at least two interactive areas exist in the interactive interface of the video editing software: a video display area, which shows the current frame of the video, and a video track area, which is used to adjust, along the time dimension, the video segment to be viewed or edited. The video track area usually displays thumbnails of the video being edited so that the user can conveniently select a time segment.
To facilitate editing, the video track typically displays an operation track of a predetermined length of time. For example, for a video 60 seconds long, the video track may display 20 seconds of video content at a time. The target video may consist of T video frames, where T is a natural number. For ease of viewing, several of the T video frames may be selected and placed on the video track.
The user may select a video time period of interest on the video track; for example, for a target video 60 seconds long, the user may focus on the 20th to 40th seconds of video content. By reading the start time a1 and the end time a2 of the focus period on the video track, the focus time period of the target video can be acquired.
A determining module 502, configured to determine, based on the focus time period, the video frames on an interval [M, N] to be displayed in the video track, where M, N, and T are natural numbers and M < N < T.
For a target video of duration a, with the focus time period on the video track starting at a1 and ending at a2, the video frames on the interval [M, N] to be displayed in the video track can be determined from a, a1, and a2.
The values of M and N can be determined with an integer (floor) function E(x): M = E(T × a1/a) and N = E(T × a2/a), which yields the video frames on the interval [M, N] to be displayed in the video track. For example, for a target video 60 seconds long containing 1440 video frames, if the user focuses on the 20th to 40th seconds of content, then M = E(1440 × 20/60) = 480 and N = E(1440 × 40/60) = 960, so video frames in the frame interval [480, 960] may be displayed on the video track.
An extracting module 503, configured to extract video frames in the [ M-i, N + j ] interval from the target video, where i < M, N + j < T.
The video frames in the [ M, N ] interval correspond to the time period of attention of the user, and when the user adjusts the time period of attention forward or backward on the video track, the new time period of attention of the user usually needs to be reanalyzed, which causes the video frames displayed on the video track to need to be recalculated, on one hand, the computing resources of the system are frequently called, which causes the waste of the system resources, and on the other hand, the user can always wait for a period of time to view the video frames in the new time period of attention, which causes the user experience to be poor.
For this reason, after the interval of the video frame is determined to be [ M, N ], the video frame can be further selected on more video frame intervals, so that other video frames outside the time period of interest of the user are preloaded in advance, and the loading efficiency of the video frame is improved.
Specifically, the video frames in the [M-i, N+j] interval can be extracted from the target video, where i and j are natural numbers with i < M and N + j < T; the specific values of i and j can be set according to actual needs. After the user's focus time period has been acquired, in addition to loading the video frames in the [M, N] interval onto the video track, the i + j extra video frames adjacent to those of the focus time period can be stored in a preset buffer. When the user then shifts the focus time period forward or backward on the video track, the frames stored in the preset buffer can be loaded directly for the adjusted focus time period.
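A sketch of this prefetching scheme (hypothetical names throughout; the `decode_frame` callback stands in for whatever frame-extraction backend is actually used) might look like:

```python
class FramePrefetcher:
    """Extract frames in [M - i, N + j] but display only [M, N]; the i and j
    margins act as the preset buffer, so a small forward or backward shift of
    the focus period is served from the buffer without re-extraction."""

    def __init__(self, decode_frame, total_frames, margin_i, margin_j):
        self.decode_frame = decode_frame  # stand-in for the real decoder
        self.total_frames = total_frames
        self.margin_i = margin_i
        self.margin_j = margin_j
        self.buffer = {}          # frame index -> decoded frame
        self.lo = self.hi = None  # interval currently held in the buffer

    def load(self, m, n):
        # Buffer hit: the requested [M, N] lies inside the buffered interval.
        if self.lo is not None and m >= self.lo and n <= self.hi:
            return [self.buffer[k] for k in range(m, n + 1)]
        # Buffer miss: extract the wider [M - i, N + j] interval and refill.
        lo = max(0, m - self.margin_i)
        hi = min(self.total_frames - 1, n + self.margin_j)
        self.buffer = {k: self.decode_frame(k) for k in range(lo, hi + 1)}
        self.lo, self.hi = lo, hi
        return [self.buffer[k] for k in range(m, n + 1)]

# Usage: the first load extracts frames 432..1008; the shifted load is a hit.
fetcher = FramePrefetcher(decode_frame=lambda k: k, total_frames=1440,
                          margin_i=48, margin_j=48)
visible = fetcher.load(480, 960)
shifted = fetcher.load(500, 980)  # served entirely from the preset buffer
```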
A display module 504, configured to select, in response to an operation on the video track, a plurality of video frames in the [M-i, N+j] interval for display on the video track.
The user's selection operation on the video track determines the focus time period of the target video. When the focus time period after the operation falls within the time period corresponding to the [M-i, N+j] interval, a plurality of video frames can be selected directly from the [M-i, N+j] interval and displayed as thumbnails.
When displaying video frames on the video track, either all the video frames in the [M-i, N+j] interval corresponding to the user's focus time period can be shown, or a subset of key frames can be selected from that interval; the number of frames displayed on the video track can be set according to actual needs, making the displayed frames more representative.
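The disclosure leaves the selection rule open; one simple, hypothetical policy is to pick evenly spaced frames from the buffered interval for thumbnail display:

```python
def sample_frames(frames, max_display):
    """Pick at most `max_display` evenly spaced frames for the video track.
    Even spacing is only one possible policy; the text says merely that
    'a part of key frames can be selected' and the count is configurable."""
    if len(frames) <= max_display or max_display < 2:
        return list(frames[:max_display])
    step = (len(frames) - 1) / (max_display - 1)
    return [frames[round(k * step)] for k in range(max_display)]

# Five thumbnails spanning the [480, 960] interval from the earlier example.
print(sample_frames(list(range(480, 961)), 5))  # [480, 600, 720, 840, 960]
```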
Through the scheme of the present disclosure, video frames can be extracted in advance according to actual needs, which improves the efficiency of extracting and loading video frames.
The apparatus shown in fig. 5 can correspondingly execute the content of the method embodiments above; for details not elaborated in this embodiment, refer to the description of those method embodiments, which is not repeated here.
Referring to fig. 6, an embodiment of the present disclosure also provides an electronic device 60, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video framing method of the above method embodiments.
The disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the video framing method of the foregoing method embodiments.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the video framing method in the aforementioned method embodiments.
Referring now to FIG. 6, a schematic diagram of an electronic device 60 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 60 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 60 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 60 to communicate with other devices wirelessly or by wire to exchange data. While the figures illustrate an electronic device 60 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present disclosure should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (11)
1. A method for video framing, comprising:
acquiring a focus time period for a target video in a video track, wherein the target video comprises T video frames;
determining video frames in an [M, N] interval to be displayed in the video track based on the focus time period, wherein M, N, and T are natural numbers and M < N < T;
extracting video frames in an [M-i, N+j] interval from the target video, wherein i < M and N + j < T; and
selecting a plurality of video frames in the [M-i, N+j] interval for display on the video track in response to an operation on the video track.
2. The method of claim 1, wherein after the extracting the video frames in the [M-i, N+j] interval from the target video, the method further comprises:
storing the extracted video frames in the [M-i, N+j] interval in a preset buffer.
3. The method of claim 2, further comprising:
calculating the storage capacity occupied by the extracted video frames in the [M-i, N+j] interval; and
dynamically managing the buffer based on the storage capacity.
4. The method of claim 1, wherein determining video frames in an [M, N] interval to be displayed in the video track based on the focus time period comprises:
calculating a difference distance between adjacent video frames in the target video;
judging whether the difference distance is larger than a preset threshold value; and
if so, taking the adjacent video frames as the video frames in the [M, N] interval to be displayed.
5. The method of claim 1, wherein after the extracting the video frames in the [M-i, N+j] interval from the target video, the method further comprises:
performing video optimization processing on the video frames in the [M-i, N+j] interval so that the video frames in the [M-i, N+j] interval occupy less storage space.
6. The method according to claim 1, wherein said extracting video frames in the [M-i, N+j] interval from said target video comprises:
setting a sub-thread that executes asynchronously; and
extracting video frames in the [M-i, N+j] interval from the target video on the sub-thread.
7. The method of claim 1, wherein selecting a plurality of video frames in the [M-i, N+j] interval for display on the video track in response to the operation on the video track comprises:
multiplexing a frame view control of a video display region; and
displaying N - M video frames on the video track using the frame view control.
8. The method of claim 1, wherein obtaining the time period of interest for the target video in the video track comprises:
judging whether a frame view display request exists for a current view;
if so, canceling any previous frame view display request preceding the current view; and
determining the focus time period for the target video in the video track based on the current frame view display request.
9. A video framing apparatus, comprising:
an acquisition module, configured to acquire a focus time period for a target video in a video track, wherein the target video comprises T video frames;
a determining module, configured to determine, based on the focus time period, video frames in an [M, N] interval to be displayed in the video track, wherein M, N, and T are natural numbers and M < N < T;
an extraction module, configured to extract video frames in an [M-i, N+j] interval from the target video, wherein i < M and N + j < T; and
a display module, configured to select, in response to an operation on the video track, a plurality of video frames in the [M-i, N+j] interval for display on the video track.
10. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video framing method of any of the preceding claims 1-8.
11. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the video framing method of any of the preceding claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910591313.1A CN110381365A (en) | 2019-07-02 | 2019-07-02 | Video takes out frame method, device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110381365A true CN110381365A (en) | 2019-10-25 |
Family
ID=68251619
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910591313.1A Pending CN110381365A (en) | 2019-07-02 | 2019-07-02 | Video takes out frame method, device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110381365A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5974386A (en) * | 1995-09-22 | 1999-10-26 | Nikon Corporation | Timeline display of sound characteristics with thumbnail video |
CN1979493A (en) * | 2005-12-08 | 2007-06-13 | 汤姆森许可贸易公司 | Method for editing media contents in a network environment, and device for cache storage of media data |
CN102722590A (en) * | 2012-06-25 | 2012-10-10 | 宇龙计算机通信科技(深圳)有限公司 | Terminal and image acquisition method |
CN105872675A (en) * | 2015-12-22 | 2016-08-17 | 乐视网信息技术(北京)股份有限公司 | Method and device for intercepting video animation |
CN109618225A (en) * | 2018-12-25 | 2019-04-12 | 百度在线网络技术(北京)有限公司 | Video takes out frame method, device, equipment and medium |
CN109936763A (en) * | 2017-12-15 | 2019-06-25 | 腾讯科技(深圳)有限公司 | The processing of video and dissemination method |
CN109947991A (en) * | 2017-10-31 | 2019-06-28 | 腾讯科技(深圳)有限公司 | A kind of extraction method of key frame, device and storage medium |
2019-07-02: CN application CN201910591313.1A filed; published as CN110381365A, status Pending.
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112333537A (en) * | 2020-07-27 | 2021-02-05 | 深圳Tcl新技术有限公司 | Video integration method and device and computer readable storage medium |
CN112333537B (en) * | 2020-07-27 | 2023-12-05 | 深圳Tcl新技术有限公司 | Video integration method, device and computer readable storage medium |
CN112911306A (en) * | 2021-01-15 | 2021-06-04 | 北京奇艺世纪科技有限公司 | Video processing method and device, electronic equipment and storage medium |
CN112911306B (en) * | 2021-01-15 | 2023-04-07 | 北京奇艺世纪科技有限公司 | Video processing method and device, electronic equipment and storage medium |
CN113490051A (en) * | 2021-07-16 | 2021-10-08 | 北京奇艺世纪科技有限公司 | Video frame extraction method and device, electronic equipment and storage medium |
CN113490051B (en) * | 2021-07-16 | 2024-01-23 | 北京奇艺世纪科技有限公司 | Video frame extraction method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20191025 |