CN113473178A - Video processing method and device, electronic equipment and computer-readable storage medium - Google Patents
- Publication number: CN113473178A (application CN202110734130.8A)
- Authority: CN (China)
- Prior art keywords: video, target, new, target video, video material
- Legal status: Granted (the status is an assumption, not a legal conclusion)
Classifications
- H04N21/232: Content retrieval operation locally within server, e.g. reading video streams from disk arrays
- H04N21/23418: Analysing video streams in the server, e.g. detecting features or characteristics
- H04N21/23424: Splicing one content stream with another, e.g. for inserting or substituting an advertisement
- H04N21/42653: Internal components of the client for processing graphics
- H04N21/432: Content retrieval operation from a local storage medium, e.g. hard-disk
- H04N21/44008: Analysing video streams in the client, e.g. detecting features or characteristics in the video stream
- H04N21/44016: Splicing one content stream with another, e.g. for substituting a video clip
Landscapes: Engineering & Computer Science; Multimedia; Signal Processing; Databases & Information Systems; Computer Graphics; Business, Economics & Management; Marketing; Two-Way Televisions, Distribution Of Moving Picture Or The Like
Abstract
The disclosure provides a video processing method and apparatus, an electronic device, and a computer-readable storage medium, relating to the field of internet technology and, in particular, to video processing. The scheme is as follows: determining a target video; acquiring at least one video material matched with the target video; determining a target video material from the at least one video material based on a first input, and synthesizing the target video material with the target video to generate a new video; and replying to the target video with the new video as the reply content.
Description
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a video processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the continuous development of internet technology, making and browsing videos has become a popular form of entertainment. Short videos in particular are increasingly popular with users because of their short duration and simple production. At present, users typically make short videos by shooting, clipping, and similar means, a production mode that requires them to collect and organize video material themselves in order to create a video.
Disclosure of Invention
The disclosure provides a video processing method, a video processing device, an electronic device and a computer readable storage medium.
According to an aspect of the present disclosure, there is provided a video processing method including:
determining a target video;
acquiring at least one video material matched with the target video;
determining a target video material from the at least one video material based on a first input, and synthesizing the target video material and the target video to generate a new video;
and replying to the target video with the new video as the reply content.
According to another aspect of the present disclosure, there is provided a video processing apparatus including:
the determining module is used for determining a target video;
the acquisition module is used for acquiring at least one video material matched with the target video;
the synthesizing module is used for determining a target video material from the at least one video material based on a first input, and synthesizing the target video material and the target video to generate a new video;
and the reply module is used for replying to the target video with the new video as the reply content.
According to still another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of one aspect described above.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to the above-described one aspect.
According to yet another aspect of the disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method according to the above-mentioned aspect.
According to the present disclosure, a new video can be quickly synthesized from the target video and a target video material determined from recommended video materials, so a user does not need to spend a great deal of time and effort obtaining video material by shooting or searching, and the creation of a new video is more convenient. In addition, the new video can be used to reply to the target video, providing a new mode of interaction in which video publishers can communicate with each other through videos, giving video users a better experience.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. In the drawings:
fig. 1 is a flow chart of a video processing method provided according to an embodiment of the present disclosure;
fig. 2a is a first schematic diagram of an interface of an electronic device for implementing the video processing method provided by an embodiment of the present disclosure;
fig. 2b is a second schematic diagram of an interface of an electronic device for implementing the video processing method provided by an embodiment of the present disclosure;
fig. 3 is a scene schematic diagram of a video processing method according to an embodiment of the present disclosure;
fig. 4 is a flow chart of another video processing method provided in accordance with another embodiment of the present disclosure;
fig. 5 is a block diagram of a video processing apparatus provided according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device for implementing a video processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The present disclosure provides a video processing method. Referring to fig. 1, fig. 1 is a flowchart of a video processing method according to an embodiment of the present disclosure, and as shown in fig. 1, the video processing method includes the following steps:
and step S101, determining a target video.
It should be noted that the video processing method provided by the present disclosure may be applied to electronic devices such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, an intelligent wearable device, and the like.
Optionally, the target video may refer to a video currently being played by the electronic device; or the video may be determined based on user operation, for example, a video selected by a user from videos to be played may be determined as a target video; alternatively, the target video may be a video that is automatically determined by the electronic device based on a certain setting, for example, the electronic device may determine a video that includes a specified person in the video content as the target video, and the like.
Step S102: acquiring at least one video material matched with the target video.
The video material may be a video, text, a picture, audio, or the like. The electronic device may obtain at least one video material matched with the target video from a video material library; the video material library may be pre-stored in the electronic device, or may be stored on a preset server from which the electronic device obtains the video material online.
In the embodiment of the present disclosure, the obtaining of the at least one video material matched with the target video may be implemented based on the video content of the target video. For example, if the video content in the target video includes a specific building, at least one video material matching the specific building may be obtained from a video material library, for example, a video or a picture also including the specific building may be used as the matched video material. For another example, if the video content of the target video includes children, the children song, the cartoon video, the cartoon map, etc. in the video material library may be used as the video material matching the target video.
Optionally, the step S102 may include:
extracting video features of the target video based on a content understanding technology;
and acquiring at least one video material matched with the video characteristics.
For the specific implementation of the content understanding technology, reference may be made to the related art, which is not repeated here. It can be appreciated that after determining a target video, the electronic device may analyze the video content of the target video based on a content understanding technology to obtain its video features; the video features may be video tags of the target video, the video author, video key frames, video background music, key video content, and so on.
Illustratively, the video feature may be the key video content of the target video, extracted based on a content understanding technology; for example, video content occurring more than a preset number of times in the target video may be determined as the key video content. Assuming the key video content includes a sunset, video material matching the sunset, such as videos, audio, or pictures that also contain a sunset, may be obtained from the video material library. Alternatively, the video feature may be the video author, and video material may be obtained from videos published by that author, for example by taking the audio or pictures the author has used as matching video material. Of course, these video features and matching video materials are merely exemplary; other forms are possible, and this disclosure is not exhaustive.
In the embodiment of the disclosure, the video features of the target video are extracted through a content understanding technology, and at least one matched video material is then obtained based on those features. The obtained video material therefore fits the target video better and can be recommended to the user quickly, without the user having to spend time and effort shooting or searching for material, making it more convenient to create a new video.
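As a concrete illustration of this step, the sketch below treats per-frame labels as the output of the content-understanding stage, extracts labels occurring more than a preset number of times as key video content, and matches materials by tag overlap. The label representation, the `min_count` threshold, and all function names are illustrative assumptions, not part of the disclosure.

```python
from collections import Counter

def key_content(frame_labels, min_count=2):
    # Labels seen in more than `min_count` frames are taken as key video content,
    # standing in for the content-understanding extraction step.
    counts = Counter(label for frame in frame_labels for label in set(frame))
    return {label for label, n in counts.items() if n > min_count}

def match_materials(features, library):
    # A material matches the target video if it shares any key feature.
    return [m for m in library if features & set(m["tags"])]
```

For example, frames labeled `[["sunset", "sea"], ["sunset"], ["sunset", "bird"]]` yield the key content `{"sunset"}`, and only materials tagged with "sunset" are recommended.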
Step S103, determining a target video material from the at least one video material based on the first input, and synthesizing the target video material and the target video to generate a new video.
Alternatively, the first input may be a user input operation received by the electronic device. For example, while the target video is playing, the at least one video material may be displayed in a display interface of the electronic device, with the at least one video material and the target video belonging to different display layers. Alternatively, a specific icon may be displayed in the display interface, and the at least one video material is displayed when the user clicks that icon. The user may then select a target video material from the displayed materials; for example, when a click input acting on a certain video material is received, that video material is determined as the target video material. Of course, the first input may take other forms, such as a specific sliding track or a voice input.
In the embodiment of the present disclosure, after the target video material is determined, the target video material and the target video are synthesized to generate a new video. That is, the new video is generated in conjunction with the target video material on the basis of the target video.
Step S104: replying to the target video with the new video as the reply content.
In the embodiment of the disclosure, the target video may be a video with a reply function, and the reply function may be implemented in the form of leaving a message, making a comment, making a barrage, and the like for the target video.
After the target video and the target video material are synthesized into a new video, the target video is replied to with the new video as its reply content. For example, the new video may be posted as a comment so that the publisher of the target video can see it; or the new video may be sent to the publisher of the target video as a new message, so that the publisher can receive and view it.
It should be noted that when the new video is used as the reply content of the target video, the playing end of the target video can display the new video in real time, so that the publisher of the target video sees it promptly. Further, when the publisher of the target video watches the new video, the playing end of the target video may apply the scheme of the above steps in turn: treating the new video as a target video, obtaining at least one matched video material, determining a target video material from it, and synthesizing that material with the new video to generate a further new video, which can then serve as reply content to the previous one, and so on. In this way, the video publisher and viewers can interact and communicate through videos.
In the embodiment of the disclosure, after a target video is determined, at least one video material matched with the target video is obtained; a target video material is then determined from the at least one video material and synthesized with the target video to generate a new video, which is used as the reply content of the target video. Therefore, on the basis of the target video, a new video can be quickly synthesized from a target video material determined from the recommended video materials, so a user does not need to spend a great deal of time and effort obtaining video material by shooting or searching, and the creation of a new video is more convenient. In addition, the new video can reply to the target video, providing a new mode of interaction in which video publishers can communicate with each other through videos, giving video users a better experience.
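The overall flow summarized above can be sketched as a minimal pipeline. All names below (the Video class, the process function, tag-overlap matching as the material-acquisition rule, and title concatenation as a stand-in for video synthesis) are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    title: str
    tags: list = field(default_factory=list)
    replies: list = field(default_factory=list)  # reply videos attached to this video

def process(target: Video, material_library: list, first_input_index: int) -> Video:
    # Step S102: acquire materials matched with the target video (here: any tag overlap).
    matched = [m for m in material_library if set(m.tags) & set(target.tags)]
    # Step S103: the first input selects the target video material;
    # "synthesis" is modeled as combining titles and tags.
    material = matched[first_input_index]
    new_video = Video(title=f"{target.title}+{material.title}",
                      tags=sorted(set(target.tags) | set(material.tags)))
    # Step S104: reply to the target video with the new video as the reply content.
    target.replies.append(new_video)
    return new_video
```

In a real system the synthesis step would splice media streams rather than strings; the sketch only shows how the four claimed steps chain together.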
Optionally, the step S104 may include:
performing a content audit on the new video;
and, if the new video passes the audit, replying to the target video with the new video as the reply content.
In the embodiment of the present disclosure, when a new video is generated from the target video and the target video material, it is not posted as a reply directly; a content audit is performed on the new video first, to ensure its safety and compliance.
Auditing the content of the new video may mean checking whether its video content meets preset requirements, for example that it involves no politically sensitive content and no uncivil content. If the video content of the new video meets the preset requirements, the new video is determined to have passed the audit, and the electronic device then uses the new video as the reply content of the target video.
In the embodiment of the disclosure, a content audit is performed on the generated new video, so that only a new video that passes the audit can be published as a reply to the target video; this helps ensure the safety of videos and of the network environment, and the safe operation of the video network.
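A minimal sketch of this audit gate, assuming the auditor produces content labels for the new video and that a label blocklist stands in for the preset requirements (both are illustrative assumptions):

```python
BANNED_LABELS = {"politically_sensitive", "uncivil"}  # placeholder requirement labels

def passes_audit(content_labels) -> bool:
    # The new video passes when none of its labels violates the preset requirements.
    return not (set(content_labels) & BANNED_LABELS)

def reply_with_new_video(target_replies, new_video, content_labels):
    # Only an approved new video is published as the reply content.
    if passes_audit(content_labels):
        target_replies.append(new_video)
        return True
    return False
```

The key design point is that the reply step is gated behind the audit: a rejected video is never appended to the target video's replies.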
Further, after performing a content audit on the new video, the method further comprises:
screening the video content of the new video if the new video passes the audit;
and storing the video content screened from the new video in a video material library as candidate video material.
It can be understood that when the new video passes the audit, that is, when the generated new video meets the preset requirements, it may be published as the reply content of the target video, and in this case its video content may be screened. For example, video features may be extracted from the new video based on a content understanding technology and used as the screened video content; or the new video may be screened by video quality, for example keeping content whose bit rate is greater than a preset bit rate; or it may be screened for content that includes target characters, images, audio, and the like.
In the embodiment of the disclosure, after the approved new video is screened, the video content screened from it is stored in the video material library as candidate video material. These candidates can later be selected as video materials for other videos and synthesized with them to generate further new videos. In this way, the video material library can be expanded effectively, the sources of its materials become more flexible, generated new videos are put to good use, and users do not need to collect video material manually to expand the library.
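The bit-rate screening variant described above can be sketched as follows; the segment representation and the 2000 kbps threshold are illustrative assumptions, not values given in the disclosure.

```python
def screen_candidates(segments, min_bitrate_kbps=2000):
    # Keep only segments whose bit rate exceeds the preset value; these become
    # candidate video materials for the material library.
    return [s for s in segments if s["bitrate_kbps"] > min_bitrate_kbps]

material_library = []
new_video_segments = [
    {"id": "intro", "bitrate_kbps": 2500},
    {"id": "outro", "bitrate_kbps": 800},
]
material_library.extend(screen_candidates(new_video_segments))
```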
Optionally, in this embodiment of the present disclosure, the at least one video material includes any one of:
video material in the video material library whose matching degree with the target video is greater than a first preset value;
video material in the video material library that has been selected for synthesis with a video to generate a new video more than a second preset number of times;
video material in the video material library whose video bit rate is greater than a third preset value;
video material matched with the target video and acquired from the video material library based on the user portrait of the current user.
In the embodiment of the present disclosure, after a target video is determined, at least one matched video material is acquired, and this can be done in various ways. For example, video material in the video material library whose matching degree with the target video is greater than the first preset value may be used as matched material. The matching degree may refer to the similarity of video content, or to the degree of correlation with the target video; for example, if the target video is a game video, video material containing cartoon characters may be considered highly correlated with it. Of course, the matching degree between a video material and the target video may also be determined in other ways, which this disclosure does not detail.
The matching degree between a video material and the target video may be determined by a machine learning algorithm combined with a partial randomization strategy, for example using a pre-trained matching-degree model; the model may be trained with any of the training methods in the related art, which are not detailed here.
Optionally, video material in the video material library that has been selected for synthesis with a video to generate a new video more than a second preset number of times may also be determined as material matched with the target video. For example, if a certain video material has been selected more than 5 times for synthesis into new videos, it may be considered more popular or more broadly applicable; with the second preset value set to 5, that material may be determined as video material matched with the target video.
Or, video material in the video material library whose video bit rate is greater than the third preset value may be used as material matched with the target video. The bit rate reflects video quality: material whose bit rate exceeds a preset value can be considered of better quality, and determining such material as matched with the target video helps ensure that the generated new video also has good quality.
Still alternatively, video material matched with the target video may be obtained from the video material library based on the user portrait of the current user, where the current user is a viewer who is watching the target video. In the embodiment of the present disclosure, the target video may be played in a specific video application, and the current user is the user running that application; the electronic device may obtain the current user's portrait through the application, including the user's gender, age, historical browsing records, and the like. For example, if the current user is over 50 years old, audio for middle-aged and elderly people, health-preserving videos, and the like in the video material library may be taken as video material matched with the target video; or, if the current user's browsing history consists mainly of children's cartoons, children's songs, cartoon pictures, and the like may be determined as the matched material. Determining video material based on the user portrait in this way makes the determined material a better fit for the current user and gives the user better options when selecting a target video material.
In the embodiment of the disclosure, after the target video is determined, at least one matching video material can be obtained in multiple ways, making the acquisition of video materials richer and more flexible.
In the embodiment of the present disclosure, the target video may be determined in various ways; for example, a video in the playing state may be determined as the target video. Alternatively, determining the target video may include:
acquiring a playing video in a playing state;
and if a second input acting on the played video is received, determining the played video as a target video.
It can be understood that the method provided by the present disclosure is applied to an electronic device, and the electronic device may play a video through a video application. When a second input acting on the playing video is received, the playing video is determined as the target video. Optionally, the second input may be a specific input operation acting on the playing video, such as a preset sliding track, a preset voice input, or a specified operation on a preset virtual key of the playing-video interface.
As shown in fig. 2a, in one scenario, for a video in the playing state on an electronic device where the current user A has not yet replied to the playing video, if a click input by the user on a preset button above the playing-video interface is received, the playing video is determined as the target video. Or, as shown in fig. 2b, in another scenario where the current user A and the video publisher B of the playing video are already in a back-and-forth reply state (that is, the current user has replied to the playing video, or the video publisher has replied to the current user), if a click input by the current user on a "response" button in the video interface is received, the playing video is determined as the target video.
Therefore, a video in the playing state is determined as the target video only when a second input from the user acting on it is received. This prevents a target video from being determined by user misoperation, and makes the determination of the target video more accurate.
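The second-input guard can be sketched as a small check. The event names and the set of accepted gestures are assumptions for illustration:

```python
# Sketch: a playing video becomes the target only after an explicit,
# recognized user gesture, guarding against misoperation.
ACCEPTED_SECOND_INPUTS = {"response_button_click", "preset_swipe", "voice_command"}

def determine_target(playing_video, second_input=None):
    """Return the playing video as the target only for a recognized input."""
    if second_input in ACCEPTED_SECOND_INPUTS:
        return playing_video
    return None  # stray touches or no input: no target is determined

target = determine_target("video_42", "response_button_click")
```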
Referring to fig. 3, for a main video released by video author A, video author B can generate a response video 1 by combining video material with the main video, thereby responding to it. After receiving response video 1, video author A can generate a response video 2 on the basis of response video 1, again combined with video material, to respond to video author B. Video author B can then generate a response video 3 on the basis of response video 2, video author A can continue with a response video 4 on the basis of response video 3, and video author B can continue with a response video 5 on the basis of response video 4, each responding to the other in turn. Similarly, other video authors, such as video author C and video author D, may carry out video responses with video author A in the same manner, and details are not repeated here. In this way, video publishers can communicate and interact by creating new videos, which provides a more flexible video interaction mode and makes video creation simpler and more convenient.
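The alternating reply chain of fig. 3 can be modeled as each response video pointing at the video it answers, so the whole exchange forms a thread rooted at the main video. The data shape below is a hypothetical sketch, not the patent's storage format:

```python
# Each reply records (author, parent video id); walking parents
# recovers the full back-and-forth chain down to the main video.
replies = {}  # video id -> (author, parent video id)

def post_reply(video_id, author, parent_id):
    replies[video_id] = (author, parent_id)

post_reply("response1", "B", "main")      # B answers A's main video
post_reply("response2", "A", "response1") # A answers B
post_reply("response3", "B", "response2") # B answers A again

def thread(video_id):
    """Walk the chain from a reply back to the main video."""
    chain = [video_id]
    while video_id in replies:
        video_id = replies[video_id][1]
        chain.append(video_id)
    return chain
```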
Referring to fig. 4, fig. 4 is a flowchart of another video processing method according to another embodiment of the present disclosure. As shown in fig. 4, once a main video work is determined, content understanding is performed on it based on a content understanding technology to obtain its video features, and material recommendation is performed based on those features. The main video work and the recommended materials are then edited and synthesized into a new video, which is published by a User Generated Content (UGC) video distributor. The new video further undergoes a security audit and quality verification; if it meets the preset security audit and quality requirements, the video reply is issued, that is, the new video is posted as the reply content of the main video work. For the specific implementation of this embodiment, reference may be made to the description of the method embodiment in fig. 1; the same technical effects can be achieved and are not repeated here.
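The fig. 4 flow can be sketched end to end. Every step below is a stub standing in for the real content-understanding, recommendation, editing, and audit components, and all names and return values are assumptions:

```python
# End-to-end sketch of the pipeline: understand -> recommend ->
# synthesize -> audit -> publish as reply.
def understand(video):
    return {"topic": "dance"}          # stand-in for content understanding

def recommend(features):
    return ["material_" + features["topic"]]  # stand-in for recommendation

def synthesize(video, material):
    return {"src": video, "mat": material}    # stand-in for video editing

def audit(new_video):
    return True                        # assume security/quality checks pass

def process(main_video):
    feats = understand(main_video)
    material = recommend(feats)[0]
    new_video = synthesize(main_video, material)
    if audit(new_video):
        return ("replied", new_video)  # publish as reply to the main work
    return ("rejected", None)

status, published = process("main_work")
```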
The embodiment of the disclosure also provides a video processing device.
Referring to fig. 5, fig. 5 is a structural diagram of a video processing apparatus according to an embodiment of the disclosure. As shown in fig. 5, the video processing apparatus 500 includes:
a determining module 501, configured to determine a target video;
an obtaining module 502, configured to obtain at least one video material matched with the target video;
a synthesizing module 503, configured to determine a target video material from the at least one video material based on a first input, and synthesize the target video material and the target video to generate a new video;
a replying module 504, configured to reply to the target video by using the new video as the reply content of the target video.
Optionally, the determining module 501 is further configured to:
acquiring a playing video in a playing state;
and if a second input acting on the played video is received, determining the played video as a target video.
Optionally, the reply module 504 is further configured to:
performing content audit on the new video;
and under the condition that the new video passes the audit, taking the new video as the reply content of the target video, and replying the target video.
Optionally, the video processing apparatus 500 further includes a filtering module, configured to:
screening the video content of the new video under the condition that the new video passes the audit;
and taking the video content screened from the new video as an alternative video material, and storing the alternative video material into a video material library.
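The audit-then-screen behavior of the reply and screening modules can be sketched together: an approved new video is posted as a reply, and reusable clips screened from it are banked as alternative materials. The audit rule, segment naming, and screening criterion below are placeholder assumptions:

```python
# Sketch: content audit gates the reply; screened segments from an
# approved video are stored into the material library for reuse.
material_library = []

def content_audit(video):
    """Placeholder audit: reject videos containing a banned segment."""
    return "banned" not in video["segments"]

def screen_segments(video):
    """Placeholder screen: keep only segments marked as reusable clips."""
    return [s for s in video["segments"] if s.startswith("clip")]

def reply_and_bank(video):
    if not content_audit(video):
        return False                       # audit failed: no reply posted
    material_library.extend(screen_segments(video))
    return True                            # video posted as reply content

ok = reply_and_bank({"segments": ["clip_intro", "talk", "clip_outro"]})
```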
Optionally, the obtaining module 502 is further configured to:
extracting video features of the target video based on a content understanding technology;
and acquiring at least one video material matched with the video characteristics.
Optionally, the at least one video material comprises any one of:
video material in the video material library whose matching degree with the target video is greater than a first preset value;
video material in the video material library that has been synthesized with videos to generate new videos a number of times greater than a second preset value;
video material in the video material library whose video bit rate is greater than a third preset value;
and video material obtained from the video material library based on the user portrait of the current user.
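The four listed sources of candidate materials can be folded into one selection function. The scores, thresholds, and record fields below are illustrative only and do not reflect the patent's actual data model:

```python
# One dispatcher covering the four candidate-material strategies:
# match degree, reuse count, bit rate, and user portrait.
def candidates(library, strategy, match_score=None, portrait=None,
               p1=0.8, p2=100, p3=2000):
    if strategy == "match_degree":
        return [m for m in library if match_score(m) > p1]
    if strategy == "reuse_count":
        return [m for m in library if m["used_in_new_videos"] > p2]
    if strategy == "bitrate":
        return [m for m in library if m["bitrate_kbps"] > p3]
    if strategy == "portrait":
        return [m for m in library if m["tag"] in portrait["interests"]]
    raise ValueError(f"unknown strategy: {strategy}")

lib = [
    {"id": 1, "used_in_new_videos": 150, "bitrate_kbps": 900, "tag": "song"},
    {"id": 2, "used_in_new_videos": 40, "bitrate_kbps": 2500, "tag": "dance"},
]
hot = candidates(lib, "reuse_count")   # materials reused more than p2 times
```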
It should be noted that the video processing apparatus 500 provided in this embodiment can implement all technical solutions of the foregoing video processing method embodiments, so that at least all technical effects can be achieved, and details are not described here again.
In the technical scheme of the disclosure, the acquisition, storage, application and the like of the personal information of the related user all accord with the regulations of related laws and regulations, and do not violate the good customs of the public order.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. The RAM 603 can also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the respective methods and processes described above, such as a video processing method. For example, in some embodiments, the video processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the video processing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the video processing method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (15)
1. A video processing method, comprising:
determining a target video;
acquiring at least one video material matched with the target video;
determining a target video material from the at least one video material based on a first input, and synthesizing the target video material and the target video to generate a new video;
and taking the new video as the reply content of the target video, and replying the target video.
2. The method of claim 1, wherein the determining a target video comprises:
acquiring a playing video in a playing state;
and if a second input acting on the played video is received, determining the played video as a target video.
3. The method of claim 1, wherein replying to the target video with the new video as the reply content of the target video comprises:
performing content audit on the new video;
and under the condition that the new video passes the audit, taking the new video as the reply content of the target video, and replying the target video.
4. The method of claim 3, further comprising:
screening the video content of the new video under the condition that the new video passes the audit;
and taking the video content screened from the new video as an alternative video material, and storing the alternative video material into a video material library.
5. The method of claim 1, wherein said obtaining at least one video material matching the target video comprises:
extracting video features of the target video based on a content understanding technology;
and acquiring at least one video material matched with the video characteristics.
6. The method of any of claims 1-5, wherein the at least one video material comprises any of:
video material in the video material library whose matching degree with the target video is greater than a first preset value;
video material in the video material library that has been synthesized with videos to generate new videos a number of times greater than a second preset value;
video material in the video material library whose video bit rate is greater than a third preset value;
and video material obtained from the video material library based on the user portrait of the current user.
7. A video processing apparatus comprising:
the determining module is used for determining a target video;
the acquisition module is used for acquiring at least one video material matched with the target video;
the synthesizing module is used for determining a target video material from the at least one video material based on a first input, and synthesizing the target video material and the target video to generate a new video;
and the reply module is used for replying the target video by taking the new video as the reply content of the target video.
8. The apparatus of claim 7, wherein the means for determining is further configured to:
acquiring a playing video in a playing state;
and if a second input acting on the played video is received, determining the played video as a target video.
9. The apparatus of claim 7, wherein the reply module is further configured to:
performing content audit on the new video;
and under the condition that the new video passes the audit, taking the new video as the reply content of the target video, and replying the target video.
10. The apparatus of claim 9, further comprising a screening module to:
screening the video content of the new video under the condition that the new video passes the audit;
and taking the video content screened from the new video as an alternative video material, and storing the alternative video material into a video material library.
11. The apparatus of claim 7, wherein the means for obtaining is further configured to:
extracting video features of the target video based on a content understanding technology;
and acquiring at least one video material matched with the video characteristics.
12. The apparatus according to any of claims 7-11, wherein the at least one video material comprises any of:
video material in the video material library whose matching degree with the target video is greater than a first preset value;
video material in the video material library that has been synthesized with videos to generate new videos a number of times greater than a second preset value;
video material in the video material library whose video bit rate is greater than a third preset value;
and video material obtained from the video material library based on the user portrait of the current user.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110734130.8A CN113473178B (en) | 2021-06-30 | 2021-06-30 | Video processing method, video processing device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113473178A true CN113473178A (en) | 2021-10-01 |
CN113473178B CN113473178B (en) | 2023-06-16 |
Family
ID=77874353
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110734130.8A Active CN113473178B (en) | 2021-06-30 | 2021-06-30 | Video processing method, video processing device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113473178B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130216206A1 (en) * | 2010-03-08 | 2013-08-22 | Vumanity Media, Inc. | Generation of Composited Video Programming |
CN103928039A (en) * | 2014-04-15 | 2014-07-16 | 北京奇艺世纪科技有限公司 | Video compositing method and device |
CN109963166A (en) * | 2017-12-22 | 2019-07-02 | 上海全土豆文化传播有限公司 | Online Video edit methods and device |
CN112698769A (en) * | 2020-12-25 | 2021-04-23 | 北京字节跳动网络技术有限公司 | Information interaction method, device, equipment, storage medium and program product |
Also Published As
Publication number | Publication date |
---|---|
CN113473178B (en) | 2023-06-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||