WO2024222447A1 - Special effect video generation method and apparatus, electronic device, and storage medium - Google Patents
- Publication number
- WO2024222447A1 (PCT/CN2024/086757)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- special effect
- target
- identifier
- sticker
- video
- Prior art date
Definitions
- the embodiments of the present disclosure relate to the field of Internet of Things technology, and in particular to a special effect video generation method, device, electronic device and storage medium.
- processing videos to generate special effects videos with corresponding virtual special effects is a common application scenario of video editing functions.
- Various applications and platforms implement the above-mentioned function of generating special effects videos by providing users with preset special effects props.
- the embodiments of the present disclosure provide a special effect video generation method, device, electronic device and storage medium.
- an embodiment of the present disclosure provides a method for generating a special effects video, including:
- an embodiment of the present disclosure provides a special effects video generating device, including:
- a display module, configured to display the application interface;
- a processing module, configured to obtain region information in response to a first trigger operation on the special effect component, wherein the region information represents a target region feature of the target region;
- a generation module, configured to display a target special effect sticker in the application interface and generate a special effect video based on the target special effect sticker, wherein the special effect content of the target special effect sticker is determined based on the region information.
- an electronic device including:
- a processor and a memory communicatively connected to the processor
- the memory stores computer-executable instructions
- the processor executes the computer-executable instructions stored in the memory to implement the special effects video generation method described in the first aspect and various possible designs of the first aspect.
- an embodiment of the present disclosure provides a computer-readable storage medium, in which computer-executable instructions are stored.
- when a processor executes the computer-executable instructions, the special effects video generation method described in the first aspect and various possible designs of the first aspect is implemented.
- an embodiment of the present disclosure provides a computer program product, including a computer program, which, when executed by a processor, implements the special effects video generation method described in the first aspect and various possible designs of the first aspect.
- an embodiment of the present disclosure provides a computer program, which, when executed by a processor, implements the special effects video generation method as described in the first aspect and various possible designs of the first aspect.
- FIG1 is a diagram of an application scenario of a special effects video generation method provided by an embodiment of the present disclosure
- FIG2 is a flow chart of a method for generating special effects video according to an embodiment of the present disclosure
- FIG3 is a flow chart of a specific implementation method of step S102 in the embodiment shown in FIG2 ;
- FIG4 is a schematic diagram of a process for generating a special effects video provided by an embodiment of the present disclosure
- FIG5 is a schematic diagram of another process of generating a special effects video provided by an embodiment of the present disclosure.
- FIG6 is a schematic diagram of displaying a second identifier provided by an embodiment of the present disclosure.
- FIG7 is a schematic diagram of displaying a third identifier provided by an embodiment of the present disclosure.
- FIG8 is a schematic diagram of a process flow of generating a second identifier and/or a third identifier provided by an embodiment of the present disclosure
- FIG9 is a flow chart of interaction between a terminal device and a server provided in an embodiment of the present disclosure.
- FIG10 is another flow chart of interaction between a terminal device and a server provided in an embodiment of the present disclosure.
- FIG11 is a second flow chart of a method for generating special effect videos provided in an embodiment of the present disclosure.
- FIG12 is a schematic diagram of a process of responding to a first editing operation provided by an embodiment of the present disclosure
- FIG13 is a third flow chart of the special effects video generation method provided by an embodiment of the present disclosure.
- FIG14 is a schematic diagram of a process for setting a display position in a frame corresponding to each initial image provided by an embodiment of the present disclosure
- FIG15 is a schematic diagram of a process of setting special effect content corresponding to each initial image provided by an embodiment of the present disclosure
- FIG16 is a structural block diagram of a special effect video generating device provided by an embodiment of the present disclosure.
- FIG17 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present disclosure.
- FIG. 18 is a schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present disclosure.
- user information including but not limited to user device information, user personal information, etc.
- data including but not limited to data used for analysis, stored data, displayed data, etc.
- an application interface is displayed; in response to a first trigger operation on a special effect component, regional information is obtained, wherein the regional information represents the target region characteristics of the target region; a target special effect sticker is displayed in the application interface, wherein the special effect content of the target special effect sticker is determined based on the regional information; and a special effect video is generated based on the target special effect sticker.
- a target special effect sticker whose special effect content is determined by the regional information representing the target region characteristics is displayed in the application interface.
- the special effect video generation method provided by the embodiment of the present disclosure can be applied to the application scenarios of video editing and generation. Specifically, it can be applied to the video editing scenario of generating regional position-related special effects.
- Regional position-related special effects include, for example, location punch-in special effects, scenery sharing special effects, etc.
- the specific implementation method of the above special effects will be described in detail in the subsequent embodiments.
- Figure 1 is an application scenario diagram of the special effect video generation method provided by the embodiment of the present disclosure. As shown in Figure 1, the method provided by the embodiment of the present disclosure can be applied to a terminal device, such as a smart phone.
- An application (Application, APP) capable of realizing the image editing function of adding special effects to the initial image is running on the terminal device.
- a special effect component for adding virtual special effects to the initial image is provided.
- the special effect component is triggered, and then the corresponding special effect sticker is displayed in the application interface, such as the "glasses" sticker shown in the figure, to achieve the purpose of adding special effects to the initial image.
- the content of the special effect stickers displayed after the special effect component is triggered is fixed, such as the "glasses" special effect in the above example, whose special effect content cannot change.
- the video special effects with the above-mentioned fixed content cannot meet the user's personalized image editing needs, nor can they automatically generate personalized special effect content.
- the user can only manually insert text or logo information into the initial picture to meet the personalized needs of image editing, resulting in low generation efficiency of special effect videos and a limited amount of information.
- the disclosed embodiments provide a special effect video generation method to solve the above problems.
- FIG. 2 is a flow chart of a special effect video generation method provided in an embodiment of the present disclosure.
- the method of this embodiment can be applied in a terminal device, and the special effect video generation method includes:
- Step S101 Display the application interface.
- Step S102 in response to a first trigger operation on the special effect component, obtaining region information, where the region information represents target region features of the target region.
- Step S103 Displaying a target special effect sticker in the application interface, wherein the special effect content of the target special effect sticker is determined based on the area information.
- Step S104 Generate a special effects video based on the target special effects sticker.
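The four steps S101 to S104 can be sketched as a small pipeline. This is an illustrative sketch only; the data model, function names, and the sticker wording are assumptions for demonstration and are not defined by the patent.

```python
from dataclasses import dataclass

# Hypothetical data model for the region information of step S102.
@dataclass
class RegionInfo:
    area_location: str   # e.g. "area A"
    landform: str        # e.g. "grassland"

def sticker_content(region: RegionInfo) -> str:
    # Step S103: the sticker's special effect content is determined
    # by the region information (wording here is an assumption).
    return f"{region.area_location}, here I come"

def generate_effect_video(frames: list, region: RegionInfo) -> dict:
    # Step S104: the special effect video combines the image data
    # with the target special effect sticker.
    return {"frames": frames, "sticker": sticker_content(region)}

video = generate_effect_video(["frame0", "frame1"],
                              RegionInfo("area A", "grassland"))
print(video["sticker"])  # area A, here I come
```

The point of the sketch is only the data flow: trigger operation → region information → sticker content → rendered video.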
- the execution subject of the method provided in the embodiment of the present disclosure may be a terminal device, such as a smart phone, and the terminal device implements the special effect video generation method provided in this embodiment by running the target application.
- the terminal device runs the target application
- the application interface of the target application is displayed through the display screen.
- a special effect component is provided in the application interface.
- the target special effect sticker corresponding to the special effect component is displayed in the application interface.
- the special effect content of the target special effect sticker is determined based on the regional information, and the regional information characterizes the target area characteristics of the target area.
- the regional information can be determined based on image data manually loaded by the user.
- the specific implementation method of step S102 includes:
- Step S1021 In response to a first trigger operation on the special effect component, image data is acquired.
- Step S1022 extract features from the image data to obtain region information.
- the area information includes at least one of the following: the area location of the target area, the landform features of the target area, and the target object information in the target area.
- after the terminal device responds to the first trigger operation for the special effect component, it obtains image data, and the image data may include a video or a picture.
- taking the image data being a picture as an example
- after the terminal device loads the image data in response to the user instruction, it extracts the features of the target area shown in the picture, that is, the area information.
- the area information can be the area location, landform features and target object information.
- the area location refers to the location point where the target area is located, and it can also refer to the area range of the target area, or a combination of the two.
- the landform feature refers to the landform of the target area, such as grassland and glacier.
- the target object information in the target area refers to a certain specified target object located in the target area, or the characteristics of the target object, such as the type and appearance characteristics of the target object.
- the target object information is the feature information describing the "sunflower" in the target area; further, the target object information can also be the feature information describing "flower" or "red sunflower".
- the specific implementation method can be set as needed, and will not be repeated here one by one.
- the image content in the image data includes a target area. After the terminal device extracts features from the image data, it obtains at least one of the area position corresponding to the target area represented by the image data, the landform features of the target area, and the target object information of the target area.
- the area information includes descriptive information representing "grassland", and more specifically, the area information includes the character string "grassland"; for another example, the area information includes descriptive information representing "A region", and more specifically, the area information includes the character string "A region", and the area information may also directly include specific location coordinates.
- the regional information can simultaneously represent the regional location and landform features, such as "grassland in area A". The specific implementation of the regional information can be set as needed and will not be described in detail here.
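Step S1022 (feature extraction producing the region information) can be sketched as follows. In practice the extraction would be done by an image-analysis model; here a stub over pre-computed image labels stands in for it, and every name and label value is an assumption for illustration.

```python
def extract_region_info(image_labels: set) -> dict:
    """Stub for step S1022: map detected labels to region information."""
    info = {}
    # Area location: a location point, an area range, or both.
    if "area A" in image_labels:
        info["area_location"] = "area A"
    # Landform features, e.g. grassland or glacier.
    for landform in ("grassland", "glacier"):
        if landform in image_labels:
            info["landform"] = landform
    # Target object information, e.g. a "sunflower" in the target area.
    if "sunflower" in image_labels:
        info["target_object"] = "sunflower"
    return info

print(extract_region_info({"area A", "grassland", "sunflower"}))
```

Any combination of the three fields may be present, matching the "at least one of" wording above.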
- the special effect content of the target special effect sticker includes the first identifier.
- the specific implementation steps of step S103 include:
- Step S1031 In response to a first trigger operation on the special effect component, a first identifier is displayed in the application interface.
- after responding to the first trigger operation on the special effect component, the terminal device obtains the area information input by the user, or the area information obtained at the user's request, and determines the target area category of the corresponding target area according to the area information. Then, the text, icon and other information corresponding to the target area category, or the identification mark of the target area category, i.e., the first identifier, is obtained; and the first identifier representing the target area category is displayed in the application interface.
- Figure 4 is a schematic diagram of a process for generating a special effects video provided by an embodiment of the present disclosure.
- the application interface is a camera viewfinder interface
- a special effects component C1 (shown as C1 in the figure) is provided in the camera viewfinder interface.
- the terminal device obtains the area information, and based on the area information, determines the corresponding target area category (A area), and then displays the first identification "A area, here I come" generated based on the target area category in the camera viewfinder interface, that is, the special effects content of the target special effects sticker.
- the terminal device responds to the third trigger operation on the camera viewfinder interface to capture image data.
- the image data may include a single-frame picture, multiple-frame pictures or videos captured by the camera of the terminal device.
- the terminal device directly renders the image data and the target special effect stickers to generate a final special effect video; in another possible implementation, after the process of capturing the image data is completed, the terminal device returns to the image editing page to preview the superimposed display effect of the image data and the target special effect stickers.
- the user can further edit the image data or the target special effect stickers to change the content of the special effect image.
- the image data and the target special effect stickers are fused and rendered to generate the final special effect video.
- the target special effect stickers are displayed in the camera viewfinder interface at the same time, so that the user can preview the content of the final special effect video while shooting image data, thereby improving the efficiency of video editing.
- FIG5 is a schematic diagram of another process of generating a special effect video provided by an embodiment of the present disclosure.
- the application interface is an image editing interface
- a special effect component C2 (shown as C2 in the figure)
- an album component (shown as C3 in the figure)
- the pre-generated image data is loaded, and the image data includes a video or at least one frame of a picture.
- the terminal device determines it as image data and displays the image data in the image editing interface. Before or after this, the special effect component C2 is triggered, for example, by the user clicking on the special effect component C2, and the terminal device obtains the corresponding target area category (area B) based on the area information. The terminal device then displays the content of the image data in a first layer of the application interface, specifically, for example, by one of playing a video, displaying the first frame of a video, displaying a single frame of image, or displaying multiple frames of images in a carousel; and the first identifier "Area B, I'm here" generated based on the target area category is displayed in a second layer of the application interface. Optionally, the first layer is located below the second layer, that is, the target special effect sticker covers the content of the image data and is displayed over it.
- the terminal device previews the superimposed display effect of the image data and the target special effect stickers.
- the user can further edit the image data or the target special effect stickers to change the content of the special effect image.
- in response to a generation instruction entered by the user, the terminal device renders the image data and the target special effect stickers to generate the final special effect video.
- after step S1031, the method further includes:
- Step S1032 Displaying a second identifier in the application interface according to the target area category.
- the special effect content of the target special effect sticker also includes a second identifier, which represents the cumulative triggering number of target events corresponding to the target area category, and the target event includes the terminal device generating the corresponding video special effect based on the target special effect sticker.
- the target event includes the terminal device generating the corresponding video special effect based on the target special effect sticker.
- the target area category is the input parameter of the above-mentioned target event, and the target area category has a one-to-one correspondence with the target event.
- when the terminal device D1 is located in the target area, after the above-mentioned target event is triggered, the cumulative trigger number of the target event corresponding to the target area category is increased by 1.
- the terminal device when it triggers the target event, it sends a notification message to the server.
- the server aggregates the notification messages carrying the same area information to obtain the cumulative trigger quantity.
- the server responds to the data request sent by the terminal device and sends the cumulative trigger quantity to the terminal device, so that the terminal device can generate a second identifier based on the cumulative trigger quantity corresponding to the target area category.
- the target event can also be an event in which the terminal device generates a corresponding special effect video based on the target special effect sticker and publishes the special effect video. Exemplarily, publishing refers to uploading the special effect video to the server corresponding to the target application so that other users can watch the special effect video in the target application.
- FIG. 6 is a schematic diagram of displaying a second identifier provided by an embodiment of the present disclosure.
- the first identifier, that is, the character string "A area, I am here", is displayed at the first moment
- the character string "The total number of people who check in in area A: 65535" is displayed at the second moment, that is, the content of the second identifier.
- the cumulative number of triggering the target event (such as publishing a special effects video based on the target special effects sticker) is 65535 times.
- the first identifier and the second identifier can also appear at the same time, and the second identifier can always be located at the same position in the application interface, or it can be located at different positions in the application interface as time changes.
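The second identifier of FIG. 6 can be sketched as a formatting step over the cumulative trigger quantity returned by the server. The function name is an assumption; the output wording mirrors the example string in the description above.

```python
def second_identifier(area: str, cumulative_triggers: int) -> str:
    # Format the cumulative trigger quantity of the target event
    # (e.g. check-ins in the target area) as display text.
    return f"The total number of people who check in in {area}: {cumulative_triggers}"

print(second_identifier("area A", 65535))
# → The total number of people who check in in area A: 65535
```

The identifier is regenerated whenever the terminal device receives an updated cumulative trigger quantity, which is what makes the sticker content dynamic.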
- after step S1031, the method further includes:
- Step S1033 Displaying a third identifier in the application interface according to the target area category.
- the special effect content of the target special effect sticker also includes a third identifier, which represents the ranking of the target area category in all area categories based on the cumulative trigger number.
- the third identifier corresponds to the second identifier.
- the third identifier is generated based on the second identifier. Specifically, for example, among the many terminal devices running the target application, when the terminal device D1 triggers the above-mentioned target special effect sticker by calling the special effect component in the target application and generates the corresponding video special effect, it is regarded as triggering a target event.
- the cumulative trigger number of the target event corresponding to area A is increased by 1.
- the cumulative trigger number of other areas will also increase. Therefore, different areas correspond to a dynamically changing cumulative trigger number, and based on the cumulative trigger number, the target area category has a ranking sequence in all areas (including the target area category and other areas), such as a ranking sequence based on the cumulative trigger number from most to least.
- the ranking sequence corresponding to all the above-mentioned areas, and/or the ranking of the target area category in the ranking sequence, can be obtained by the server by comparing the cumulative trigger quantities of all the areas after obtaining them. Thereafter, the server responds to the data request sent by the terminal device and sends the ranking sequence corresponding to all the areas, and/or the ranking of the target area category in the ranking sequence, to the terminal device, so that the terminal device can generate a third identifier based on them.
- the terminal device generates a third identifier according to the ranking sequence of the cumulative number of triggers, and/or the ranking of the target area category in the ranking sequence, and displays the third identifier in the application interface.
- Figure 7 is a schematic diagram of displaying a third identifier provided by an embodiment of the present disclosure. As shown in Figure 7, in the application interface, based on the time sequence, the first identifier is displayed at the first moment, the second identifier is displayed at the second moment, and the third identifier is displayed at the third moment; the third identifier includes the ranking of each area (area A, area B, area C, etc.) in all areas based on the cumulative number of triggers. As shown in the figure, area B ranks first, area A ranks second, and area C ranks third.
- the second identifier can be displayed at the same time, that is, on the basis of displaying the ranking, the cumulative number of triggers corresponding to each area is displayed.
- the third identifier can also be displayed after the first identifier or the second identifier is displayed, or while the first identifier or the second identifier is displayed, which will not be repeated here.
- the third identifier can always be located at the same position in the application interface, or it can be located at different positions in the application interface as time changes.
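The ranking behind the third identifier (FIG. 7) is simply an ordering of area categories by cumulative trigger quantity from most to least. A minimal sketch, with the counts chosen only so that the resulting order matches FIG. 7's example (area B first, area A second, area C third):

```python
def rank_areas(cumulative: dict) -> list:
    # Order area categories by cumulative trigger quantity, descending.
    return sorted(cumulative, key=cumulative.get, reverse=True)

ranking = rank_areas({"area A": 65535, "area B": 70000, "area C": 120})
print(ranking)  # ['area B', 'area A', 'area C']
```

Because the cumulative trigger quantities change dynamically, the ranking sequence, and therefore the third identifier, changes over time as well.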
- steps S1032 and S1033 can be executed simultaneously with step S1031, or can be executed sequentially or simultaneously after step S1031.
- the execution order between step S1032 and step S1033 can be set based on needs, and no specific restrictions are made here.
- FIG. 8 is a schematic flow chart of a generation process of a second identifier and/or a third identifier provided in an embodiment of the present disclosure. Exemplarily, as shown in FIG. 8, before displaying the second identifier and/or the third identifier, the following steps are also included:
- Step S1020A Send a first data request to the server, where the first data request includes first information representing the target special effect sticker and second information representing the target area category.
- Step S1020B Receive event data returned by the server, the event data including at least the cumulative number of triggers of the target event corresponding to the target area, and/or the ranking of the target area category among all area categories based on the cumulative number of triggers.
- Step S1020C Generate a second identifier and/or a third identifier based on the event data.
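Steps S1020A to S1020C can be sketched from the client side as follows. The server is replaced by a stub, and the request and event-data field names (`sticker`, `area`, `count`, `rank`) and the identifier wording are assumptions for illustration, not defined by the patent.

```python
def fetch_event_data(server, sticker_id: str, area_category: str) -> dict:
    # Step S1020A: the first data request carries the first information
    # (target special effect sticker) and the second information
    # (target area category).
    request = {"sticker": sticker_id, "area": area_category}
    # Step S1020B: the returned event data holds the cumulative trigger
    # quantity and/or the ranking among all area categories.
    return server(request)

def build_identifiers(area: str, event_data: dict) -> tuple:
    # Step S1020C: generate the second and third identifiers.
    second = f"Total check-ins in {area}: {event_data['count']}"
    third = f"{area} ranks No. {event_data['rank']}"
    return second, third

stub_server = lambda request: {"count": 65535, "rank": 2}
second, third = build_identifiers(
    "area A", fetch_event_data(stub_server, "prop_1", "area A"))
print(second)  # Total check-ins in area A: 65535
print(third)   # area A ranks No. 2
```

In the real flow the stub would be an actual request to the server of the target application.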
- the method further includes: in response to a second trigger operation, displaying the second identifier and/or the third identifier in the application interface. That is, in a possible implementation, the second identifier and/or the third identifier is in an invisible state after being generated in the initial state; after the terminal device receives and responds to the second trigger operation, the second identifier and/or the third identifier is displayed in the application interface.
- FIG9 is a flowchart of an interaction between a terminal device and a server provided by an embodiment of the present disclosure.
- after a terminal device D1 located in area A triggers the above-mentioned target event, a first data request is sent to the corresponding server (i.e., the server of the target application) through the target application.
- the first data request includes first information representing the target special effect sticker prop_1 and second information representing area A.
- after receiving the first data request, the server obtains, based on the first information and the second information therein, the notification information for the target special effect sticker prop_1 sent by all terminal devices in area A (including the target terminal device and other terminal devices), obtains the cumulative trigger quantity Data_1 (shown as Data_1 in the figure) of the target event corresponding to area A (i.e., the event of generating a special effect video based on the target special effect sticker prop_1), and sends the obtained cumulative trigger quantity Data_1 to the terminal device D1.
- the terminal device D1 generates a second identifier through the cumulative trigger quantity Data_1.
- FIG10 is another interactive flow chart between a terminal device and a server provided in an embodiment of the present disclosure.
- while obtaining the cumulative trigger quantity Data_1 of the target event corresponding to area A, the server asynchronously obtains the cumulative trigger quantities of the target event corresponding to other areas, such as the cumulative trigger quantity Data_2 corresponding to area B (shown as Data_2 in the figure) and the cumulative trigger quantity Data_3 corresponding to area C (shown as Data_3 in the figure), through the notification information for the target special effect sticker prop_1 sent by the terminal devices in those areas.
- the cumulative trigger quantities of all multiple areas are ranked to obtain event data, and then the event data is sent to the terminal device D1, and the terminal device D1 generates a second identifier and/or a third identifier through the event data.
- the event data at least includes the cumulative triggering number of the target event corresponding to the target area, and/or the ranking of the target area category in all area categories based on the cumulative triggering number.
- the server sends the ranking sequence Array_data (event data) corresponding to the target event to the terminal device, and the terminal device generates a second identifier and/or a third identifier through the ranking sequence Array_data.
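The server-side aggregation of FIG. 9 and FIG. 10 can be sketched as counting per-area notifications for the target special effect sticker prop_1 and ranking the counts into the sequence Array_data. The notification record format is an assumption for illustration.

```python
from collections import Counter

def aggregate(notifications: list) -> dict:
    # Count notifications per area, considering only those that refer
    # to the target special effect sticker prop_1.
    return dict(Counter(n["area"] for n in notifications
                        if n["sticker"] == "prop_1"))

def array_data(counts: dict) -> list:
    # Ranking sequence ordered by cumulative trigger quantity, descending.
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

notes = [{"sticker": "prop_1", "area": "area A"},
         {"sticker": "prop_1", "area": "area B"},
         {"sticker": "prop_1", "area": "area B"},
         {"sticker": "other",  "area": "area C"}]
print(array_data(aggregate(notes)))  # [('area B', 2), ('area A', 1)]
```

Here `aggregate` corresponds to producing Data_1, Data_2, Data_3 and `array_data` to producing the ranking sequence Array_data sent back to the terminal device.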
- the terminal device can render the target special effect sticker into the corresponding initial picture or initial video, thereby generating a special effect video.
- the rendering process is an existing technology and will not be described here.
- the terminal device can further send a notification message to the server so that the server can update the cumulative trigger quantity and ranking sequence, thereby realizing the dynamic update of the special effect content of the target special effect sticker in the special effect video.
- the special effect content determined by the regional information when the terminal device triggers the target special effect sticker is displayed in the application interface, achieving a dynamic mapping between the regional information and the special effect content of the target special effect sticker. The special effect video generated based on the target special effect sticker can thus match the regional information without manual editing by the user, which improves the efficiency of video generation and enriches the amount of information in the special effect video.
- FIG. 11 is a flow chart of the special effect video generation method provided in the embodiment of the present disclosure.
- the special effect video generation method includes:
- Step S201 Display the application interface.
- Step S202 In response to a first trigger operation on the special effect component, a target special effect sticker is displayed in the application interface, wherein the special effect content of the target special effect sticker is determined based on the area information.
- Step S203 In response to the first editing operation on the target special effect sticker, the special effect content of the target special effect sticker is set to a fourth identifier, wherein the fourth identifier represents a custom area category.
- Step S204 Display the fourth identifier in the application interface.
- after responding to the first trigger operation on the special effect component, the terminal device first displays the corresponding target special effect sticker in the application interface according to the regional information (the specific method for obtaining the regional information can be found in the previous embodiment), such as the first identifier, the second identifier, and the third identifier in the embodiment shown in FIG. 2.
- the specific implementation method has been described in detail in the embodiment shown in Figure 2 and will not be repeated here.
- the target area category corresponding to the regional information obtained from location coordinates has a multi-level feature: the same device can be located in city A, in the A_1 district of city A, or in the A_1_1 park in the A_1 district (that is, the regional information can correspond to multiple regional levels). Therefore, the regional level corresponding to the first identifier generated by the terminal device from the device's regional information may not match the regional level of interest to the user, resulting in the problem of regional level misalignment.
- the terminal device responds to the first editing operation input by the user to modify the special effect content of the target special effect sticker, thereby realizing the modification of the regional level.
- the content of the displayed target special effect sticker includes the first identifier, and the content of the first identifier is the string "A City"; afterwards, the terminal device responds to the first editing operation and modifies the content of the first identifier to "A_1_1 Park" (the fourth identifier). Since the fourth identifier is generated based on the user's operation and corresponds to the first identifier representing the target area category of the regional information, the fourth identifier represents the custom area category.
- the specific content of the custom area category can be determined according to the specific input of the user, and will not be repeated here.
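The first editing operation described above can be sketched as a simple state change on the sticker: the first identifier (derived from the regional information) is replaced by a fourth identifier carrying the user-defined area category. The `Sticker` class and its field names are assumptions for illustration, not part of the disclosure.

```python
class Sticker:
    """Minimal model of a target special effect sticker's text content."""

    def __init__(self, first_identifier):
        self.content = first_identifier        # displayed text, e.g. "A City"
        self.area_category = first_identifier  # target area category

    def apply_first_edit(self, user_text):
        """Set the fourth identifier, which represents a custom area category."""
        self.content = user_text
        self.area_category = user_text         # now a custom area category

sticker = Sticker("A City")
sticker.apply_first_edit("A_1_1 Park")
```

After the edit, the custom area category ("A_1_1 Park") is what drives the subsequent lookup of the fifth and sixth identifiers.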
- Step S205 Based on the custom area category, a fifth identifier and/or a sixth identifier is displayed in the application interface, wherein the fifth identifier represents the cumulative number of triggers of the target event corresponding to the custom area category; and the sixth identifier represents the ranking of the custom area category in all areas based on the cumulative number of triggers.
- the terminal device can also display the fifth identifier and/or the sixth identifier corresponding to the fourth identifier in the application interface, wherein the fifth identifier represents the cumulative number of triggers of the target event corresponding to the custom area category; the sixth identifier represents the ranking of the custom area category in all areas based on the cumulative number of triggers.
- the fifth identifier is equivalent to the second identifier in the previous embodiment, and the sixth identifier is equivalent to the third identifier in the previous embodiment.
- the difference is that the second identifier and the third identifier are generated based on the target area category, while the fifth identifier and the sixth identifier are generated based on the custom area category.
- the fifth identifier and the sixth identifier are likewise generated based on the custom area category, by sending a data request to the server and receiving the event data sent by the server.
- the specific implementation method of step S205 includes:
- Step S2051 Send a second data request, where the second data request includes first information representing the target special effect sticker and second information representing the custom area category;
- Step S2052 receiving event data returned by the server, the event data including at least the cumulative number of triggers of the target event corresponding to the custom area category, and/or the ranking of the custom area category in all areas based on the cumulative number of triggers;
- Step S2053 Generate a fifth identifier and/or a sixth identifier according to the event data, and display them.
- the specific method for generating the fifth identifier and/or the sixth identifier mentioned above can refer to the specific implementation method for generating the second identifier and/or the third identifier in the embodiment shown in FIG8 , which will not be described in detail here.
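Steps S2051 through S2053 can be sketched as a request/response round trip, with the server mocked as a plain function. The payload keys, the mock data, and the identifier wording follow the example figures in this document ("305 check-ins", "Nationally ranked 1402") but are otherwise assumptions.

```python
def mock_server(request):
    """Stand-in for the server: returns event data for the requested area."""
    stats = {"A_1_1 Park": {"count": 305, "rank": 1402}}
    return stats[request["area_category"]]

def fetch_identifiers(sticker_id, custom_area):
    # Step S2051: second data request with sticker info and custom area category.
    request = {"sticker": sticker_id, "area_category": custom_area}
    # Step S2052: receive event data (cumulative trigger count and/or ranking).
    event = mock_server(request)
    # Step S2053: generate the fifth and sixth identifiers for display.
    fifth = f"{event['count']} check-ins"
    sixth = f"{custom_area}: Nationally ranked {event['rank']}"
    return fifth, sixth

fifth, sixth = fetch_identifiers("prop_1", "A_1_1 Park")
```

In a real client the mock would be an asynchronous network call, but the three-step shape is the same.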
- Figure 12 is a schematic diagram of a process of responding to a first editing operation provided by an embodiment of the present disclosure.
- the terminal device responds to the first editing operation input by the user and modifies the first identifier to the fourth identifier (the content is "I am in A_1_1 Park").
- the terminal device obtains the corresponding custom area category "A_1_1 Park" based on the content of the fourth identifier, and based on the custom area category "A_1_1 Park", the second identifier is modified to the fifth identifier (the content is "305 check-ins") and the third identifier is modified to the sixth identifier (the content is "A_1_1 Park: Nationally ranked 1402"), which are displayed in the application interface, thereby achieving the content setting and update of the target special effect sticker.
- when the first identifier is modified, the second and third identifiers are automatically matched and corrected based on the change in area (the target area category becomes a custom area category), without manual modification, thereby improving image editing efficiency.
- Step S206 Generate a special effects video based on the target special effects sticker.
- steps S201-S202 and step S206 are consistent with the specific implementation of steps S101-S103 in the embodiment shown in FIG. 2; please refer to the discussion of steps S101-S103, which will not be repeated here.
- the special effect content of the target special effect sticker is modified based on the first editing operation so that the regional level represented by the target special effect sticker can match the regional level of interest to the user, thereby avoiding the problem of regional level misalignment, improving the accuracy of the information displayed in the special effect video, and meeting the personalized needs of users.
- FIG. 13 is a flow chart of the special effect video generation method provided in the embodiment of the present disclosure.
- the special effect video generation method includes:
- Step S301 Display the application interface.
- Step S302 In response to a first trigger operation on the special effect component, a target special effect sticker is displayed in the application interface, wherein the special effect content of the target special effect sticker is determined based on the area information.
- Step S303 In response to the second editing operation on the target special effect sticker, set the display position of the target special effect sticker in the special effect video, and/or the special effect content of the target special effect sticker in the special effect video.
- the second editing operation is an operation for setting the display position and/or special effect content of the target special effect sticker in the special effect video.
- the second editing operation may include coordinate information, through which the display position of the target special effect sticker in the special effect video is set; the second editing operation may include content information, such as a string, through which the special effect content of the target special effect sticker is set, for example, the special effect content of the target special effect sticker is changed from the first identifier to the fourth identifier.
- the special effect video is generated based on at least two frames of initial pictures and target special effect stickers, and the terminal device obtains at least two pre-generated frames of initial pictures and target special effect stickers, and renders them to obtain the special effect video.
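The rendering step described above (combining at least two frames of initial pictures with the target special effect sticker to obtain the special effect video) can be sketched conceptually as follows. Frames are modeled as plain dicts; a real implementation would composite pixel data with a graphics library, which the disclosure treats as existing technology.

```python
def render_special_effect_video(initial_pictures, sticker_content):
    """Overlay the sticker content on every initial picture to build the
    frames of the special effect video (one output frame per input picture)."""
    return [{"picture": pic, "sticker": sticker_content}
            for pic in initial_pictures]

# Example: two initial pictures rendered with a sticker showing "A City".
video = render_special_effect_video(["P1", "P2"], "A City")
```

Later embodiments refine this by letting the sticker's position and content vary per frame instead of being constant.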
- the second editing operation includes at least two first sub-editing operations for the initial picture, and the first sub-editing operation is used to edit a single frame of the initial picture, so that the target special effect sticker appears in different positions in at least two frames of the initial picture.
- a possible implementation of step S303 includes:
- Step S3031 In response to the first sub-editing operation on the initial picture, set the display position in the frame corresponding to each initial picture, so that when the special effect video is played, the target special effect sticker is displayed at the display position in the frame corresponding to each initial picture.
- FIG14 is a schematic diagram of a process for setting the display position in the frame corresponding to each initial image provided by an embodiment of the present disclosure.
- the initial images used to generate the special effect video include initial image P1, initial image P2, and initial image P3.
- the second editing operation includes three first sub-editing operations, namely, operation op_01 for initial image P1 (shown as op_01 in the figure), operation op_02 for initial image P2 (shown as op_02 in the figure), and operation op_03 for initial image P3 (shown as op_03 in the figure).
- Each first sub-editing operation corresponds to a click operation.
- after the terminal device responds to each first sub-editing operation, the corresponding click coordinates are obtained; from each click coordinate, the corresponding in-frame display positions o1, o2, and o3 are obtained.
- the target special effect stickers are also simultaneously displayed in the corresponding display positions in the frames of each initial picture.
- the display content in the above video editing interface is equivalent to a preview of the special effects video, that is, when the special effects video is played, the target special effects stickers are respectively displayed at the display positions in the frames corresponding to each initial picture.
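Step S3031 can be sketched as follows, under assumed data structures: each first sub-editing operation (a click on one initial picture) yields an in-frame display position for the target special effect sticker, and playback draws the sticker at the position set for that frame.

```python
def positions_from_clicks(clicks):
    """Map each initial picture to its clicked in-frame display position.

    clicks: list of (picture_id, (x, y)) pairs, one per first sub-editing
    operation (op_01..op_03 in the figure).
    """
    return {pic: xy for pic, xy in clicks}

# Clicks on P1..P3 give the display positions o1..o3 (coordinates assumed).
clicks = [("P1", (40, 60)), ("P2", (120, 30)), ("P3", (200, 180))]
frame_positions = positions_from_clicks(clicks)

def playback_position(picture_id):
    # During playback the sticker is drawn at the position set for that frame.
    return frame_positions[picture_id]
```

Because each frame carries its own position, the sticker appears to move across frames, which is the "dynamic movement based on the image frame dimension" described in the text.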
- the terminal device responds to the second editing instruction and sets the appearance position of the target special effect sticker in each initial picture, so that in the generated special effect video, the target special effect sticker can achieve dynamic movement based on the image frame dimension. This can avoid the problem of the target special effect sticker blocking the key target in the image (such as the human body and face), and improve the visual effect of the special effect video.
- the second editing operation includes at least two second sub-editing operations for the initial picture.
- Another possible implementation of step S303 includes:
- Step S3032 In response to the second sub-editing operation for the initial picture, set the special effect content of the target special effect sticker in at least one frame of the initial picture, so that when the special effect video is played, the target special effect sticker is displayed based on the special effect content corresponding to each initial picture.
- the second sub-editing operation is for the initial picture, and is used to change the special effect content of the target special effect sticker in the initial picture.
- the second sub-editing operation is implemented in the same way as the first editing operation in the previous embodiment; for example, it is used to modify the first identifier corresponding to the target special effect sticker to the fourth identifier, so that each frame of the initial picture corresponds to different special effect content of the target special effect sticker.
- FIG15 is a schematic diagram of a process for setting special effect content corresponding to each initial image provided by an embodiment of the present disclosure.
- the initial images used to generate special effect videos include initial images P1, initial images P2, and initial images P3.
- the second editing operation includes three second sub-editing operations, namely, operation op_11 (shown as op_11 in the figure) for initial image P1, operation op_12 (shown as op_12 in the figure) for initial image P2, and operation op_13 (shown as op_13 in the figure) for initial image P3.
- Each second sub-editing operation corresponds to a string input operation.
- after the terminal device responds to each second sub-editing operation, the corresponding special effect content is set in each initial image.
- the special effect content of the target special effect sticker in the initial image P1 includes the string "A City”
- the special effect content of the target special effect sticker in the initial image P1 includes the string "A_1 District”
- the special effect content of the target special effect sticker in the initial image P3 includes the string "A_1_1 Park”.
- the generated special effects video is rendered based on the initial image P1, the initial image P2 and the initial image P3 in combination with the target special effects stickers.
- the target special effects stickers are respectively displayed based on the special effects content corresponding to each initial picture, that is, the string "A City" is displayed in the video screen corresponding to the initial image P1, the string "A_1 District" is displayed in the video screen corresponding to the initial image P2, and the string "A_1_1 Park" is displayed in the video screen corresponding to the initial image P3.
- the terminal device responds to the second editing instruction to set the special effect content of the target special effect sticker in each initial picture, so that in the generated special effect video, the target special effect sticker can realize dynamic content transformation based on the image frame dimension, thereby improving the information richness and information display efficiency in the special effect video.
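Step S3032 can be sketched analogously to the position case, again under assumed data structures: each second sub-editing operation (op_11..op_13) binds a string of special effect content to one initial picture, so the sticker's text changes per frame during playback.

```python
frame_content = {}

def apply_second_sub_edit(picture_id, text):
    """Set the sticker's special effect content for one initial picture."""
    frame_content[picture_id] = text

# Operations op_11..op_13 set the content for P1..P3, as in FIG. 15.
apply_second_sub_edit("P1", "A City")
apply_second_sub_edit("P2", "A_1 District")
apply_second_sub_edit("P3", "A_1_1 Park")

def sticker_text_at(picture_id):
    # During playback the sticker shows the content bound to that frame.
    return frame_content[picture_id]
```

Per-frame content and per-frame position (step S3031) are independent mappings, so an implementation could apply either or both to the same set of initial pictures.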
- Step S304 Generate a special effects video based on the target special effects sticker.
- steps S301-S302 and step S304 are consistent with the specific implementation of steps S101-S103 in the embodiment shown in FIG. 2; please refer to the discussion of steps S101-S103, which will not be repeated here.
- the solution provided in this embodiment can also be implemented on the basis of the embodiment shown in FIG. 11; that is, after sequentially executing steps S201-S205 in the embodiment shown in FIG. 11, steps S303-S304 in this embodiment are executed to further set the display position and/or special effect content of the target special effect sticker. This can be configured according to specific needs and will not be repeated here.
- FIG16 is a structural block diagram of a special effects video generation device provided by an embodiment of the present disclosure.
- the special effects video generation device 4 includes:
- Display module 41, used to display the application interface;
- a processing module 42 configured to obtain region information in response to a first trigger operation on the special effect component, wherein the region information represents a target region having target region characteristics;
- the generation module 43 is used to display the target special effect stickers in the application interface and generate a special effect video based on the target special effect stickers, wherein the special effect content of the target special effect stickers is determined based on the area information.
- the processing module 42 is specifically used to: acquire image data in response to a first trigger operation on the special effect component; perform feature extraction on the image data to obtain area information, wherein the area information includes the area location and/or topographic features of the target area.
- the special effect content of the target special effect sticker includes a first identifier; when the generation module 43 displays the target special effect sticker in the application interface, it is specifically used to: determine the target area category of the target area based on the area information; generate the first identifier based on the target area category; and display the first identifier in the application interface.
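The three sub-steps just listed (determine the target area category from the area information, generate the first identifier from the category, display it) can be sketched as follows. The coordinate-to-category lookup table and the identifier wording (which follows the "I am in A_1_1 Park" example elsewhere in this document) are assumptions for illustration.

```python
def area_category_from_info(area_info):
    """Resolve a target area category from the region location info.

    Assumed lookup: location coordinates -> area category name; a real
    implementation would query a geographic service.
    """
    lookup = {(31.2, 121.5): "A City"}
    return lookup[area_info["location"]]

def first_identifier(area_info):
    # Generate the first identifier from the resolved target area category.
    category = area_category_from_info(area_info)
    return f"I am in {category}"

# Example: device area information carrying hypothetical location coordinates.
ident = first_identifier({"location": (31.2, 121.5)})
```

The returned string is what the generation module would hand to the display module for rendering in the application interface.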
- the special effect content of the target special effect sticker also includes a second identifier, which represents the cumulative triggering number of the target event corresponding to the target area category, and the target event includes the terminal device generating the corresponding video special effect based on the target special effect sticker; when the generation module 43 displays the target special effect sticker in the application interface, it is also used to: display the second identifier in the application interface.
- the special effect content of the target special effect sticker also includes a third identifier, and the third identifier represents the ranking of the target area category in all area categories based on the cumulative number of triggers; when the generation module 43 displays the target special effect sticker in the application interface, it is also used to: display the third identifier in the application interface.
- the generation module 43 is also used to: send a first data request to a server, wherein the first data request includes first information representing the target special effect sticker and second information representing the target area category; receive event data returned by the server, wherein the event data includes at least the cumulative number of triggers of the target event corresponding to the target area, and/or the ranking of the target area in all areas based on the cumulative number of triggers; and generate a second identifier and/or a third identifier based on the event data.
- the generation module 43 is further used to: in response to a first editing operation on the target special effect sticker, set the special effect content of the target special effect sticker to a fourth identifier, wherein the fourth identifier represents a custom area category; and display the fourth identifier in the application interface.
- the generation module 43 is also used to: display a fifth identifier and/or a sixth identifier in the application interface based on the custom area category; wherein the fifth identifier represents the cumulative number of triggers of the target event corresponding to the custom area category; and the sixth identifier represents the ranking of the custom area category in all area categories based on the cumulative number of triggers.
- the generation module 43 is also used to: in response to a second editing operation on the target special effect sticker, set the display position of the target special effect sticker in the special effect video, and/or the special effect content of the target special effect sticker in the special effect video.
- the special effects video is generated based on at least two frames of initial pictures and the target special effects stickers
- the second editing operation includes at least two first sub-editing operations for the initial pictures
- the generation module 43, in response to the second editing operation for the target special effects stickers, sets the display position of the target special effects stickers in the special effects video, and is specifically used to: in response to the first sub-editing operation for the initial pictures, set the display position in the frame corresponding to each of the initial pictures, so that when the special effects video is played, the target special effects stickers are respectively displayed at the display position in the frame corresponding to each of the initial pictures.
- the special effects video is generated based on at least two frames of initial pictures and the target special effects stickers
- the second editing operation includes at least two second sub-editing operations for the initial pictures
- the generation module 43 is also used to: in response to the second sub-editing operation for the initial picture, set the special effects content of the target special effects stickers in at least one frame of the initial picture, so that when the special effects video is played, the target special effects stickers are displayed based on the special effects content corresponding to each of the initial pictures.
- the generation module 43 is also used to: obtain pre-generated image data, the image data including a video or at least one frame of a picture; display the content of the image data on a first layer within the application interface; when the generation module 43 displays the target special effect stickers within the application interface, it is also used to: display the target special effect stickers on a second layer of the application interface, wherein the first layer is located below the second layer.
- the application interface is a camera viewfinder interface
- the generation module 43 is also used for at least one of the following: in response to a third trigger operation on the camera viewfinder interface, capturing image data, wherein the image data is used to generate the special effects video in combination with the target special effects stickers; and in the process of capturing the image data, displaying the target special effects stickers in the viewfinder interface.
- the display module 41, the processing module 42 and the generating module 43 are connected.
- the special effect video generating device 4 provided in this embodiment can implement the technical solution of the above method embodiment, and its implementation principle and technical effect are similar, which will not be described in detail in this embodiment.
- FIG17 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present disclosure. As shown in FIG17 , the electronic device 5 includes:
- the memory 52 stores computer executable instructions
- the processor 51 executes the computer-executable instructions stored in the memory 52 to implement the special effect video generation method in the embodiments shown in Figures 2 to 15.
- processor 51 and the memory 52 are connected via a bus 53 .
- An embodiment of the present disclosure provides a computer-readable storage medium, in which computer-executable instructions are stored.
- the computer-executable instructions are executed by a processor, they are used to implement the special effects video generation method provided by any one of the embodiments corresponding to Figures 2 to 15 of the present disclosure.
- the present disclosure provides a computer program product, including a computer program.
- the computer program is executed by a processor, the special effect video generation method in the embodiments shown in FIG. 2 to FIG. 15 is implemented.
- FIG. 18 shows a schematic diagram of the structure of an electronic device 900 suitable for implementing an embodiment of the present disclosure.
- the electronic device 900 may be a terminal device or a server.
- the terminal device may include but is not limited to mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (Portable Android Devices, PADs), portable multimedia players (PMPs), vehicle terminals (such as vehicle navigation terminals), etc., and fixed terminals such as digital TVs, desktop computers, etc.
- the electronic device shown in FIG. 18 is only an example and should not impose any limitations on the functions and scope of use of the embodiment of the present disclosure.
- the electronic device 900 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 901, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 to a random access memory (RAM) 903.
- Various programs and data required for the operation of the electronic device 900 are also stored in the RAM 903.
- the processing device 901, the ROM 902, and the RAM 903 are connected to each other via a bus 904.
- an input/output (I/O) interface 905 is also connected to the bus 904.
- the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 907 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 908 including, for example, a magnetic tape, a hard disk, etc.; and communication devices 909.
- the communication device 909 may allow the electronic device 900 to communicate with other devices wirelessly or by wire to exchange data.
- although FIG. 18 shows an electronic device 900 having various devices, it should be understood that it is not required to implement or include all of the devices shown; more or fewer devices may alternatively be implemented or included.
- an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
- the computer program can be downloaded and installed from a network through a communication device 909, or installed from a storage device 908, or installed from a ROM 902.
- when the computer program is executed by the processing device 901, the above-mentioned functions defined in the method of the embodiment of the present disclosure are executed.
- the computer-readable medium disclosed above may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
- the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
- Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium containing or storing a program that may be used by or in combination with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which a computer-readable program code is carried.
- This propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
- the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which may send, propagate or transmit a program for use by or in conjunction with an instruction execution system, apparatus or device.
- the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
- the computer-readable medium may be included in the electronic device, or may exist independently without being incorporated into the electronic device.
- the computer-readable medium carries one or more programs.
- when the one or more programs are executed by the electronic device, the electronic device is caused to execute the method shown in the above embodiments.
- Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, C++, and conventional procedural programming languages such as "C" or similar programming languages.
- the program code may be executed entirely on the user's computer, partially on the user's computer, as a separate software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
- each square box in the flow chart or block diagram can represent a module, a program segment or a part of a code, and the module, the program segment or a part of the code contains one or more executable instructions for realizing the specified logical function.
- the functions marked in the square box can also occur in a sequence different from that marked in the accompanying drawings. For example, two square boxes represented in succession can actually be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved.
- each square box in the block diagram and/or flow chart, and the combination of the square boxes in the block diagram and/or flow chart can be implemented with a dedicated hardware-based system that performs a specified function or operation, or can be implemented with a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments described in the present disclosure may be implemented by software or hardware.
- the name of a unit does not, in some cases, limit the unit itself. For example, the first acquisition unit may also be described as a "unit for acquiring at least two Internet Protocol addresses".
- exemplary types of hardware logic components include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), and the like.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
- a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- a more specific example of a machine-readable storage medium may include an electrical connection based on one or more lines, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a method for generating a special effect video comprising:
- obtaining regional information in response to a first trigger operation on a special effects component includes: acquiring image data in response to a first trigger operation on the special effects component; performing feature extraction on the image data to obtain regional information, wherein the regional information includes the regional location and/or topographic features of the target area.
- the special effect content of the target special effect sticker includes a first identifier
- displaying the target special effect sticker in the application interface includes: determining the target area category of the target area according to the area information; generating the first identifier according to the target area category; and displaying the first identifier in the application interface.
- the special effect content of the target special effect sticker also includes a second identifier, which represents the cumulative triggering number of the target event corresponding to the target area category, and the target event includes a terminal device generating a corresponding video special effect based on the target special effect sticker; displaying the target special effect sticker in the application interface also includes: displaying the second identifier in the application interface.
- the special effect content of the target special effect sticker also includes a third identifier, and the third identifier represents the ranking of the target area category in all area categories based on the cumulative number of triggers; displaying the target special effect sticker in the application interface also includes: displaying the third identifier in the application interface.
- the method also includes: sending a first data request to a server, the first data request including first information characterizing the target special effect sticker and second information characterizing the target area category; receiving event data returned by the server, the event data including at least the cumulative number of triggers of the target event corresponding to the target area, and/or the ranking of the target area in all areas based on the cumulative number of triggers; generating a second identifier and/or a third identifier based on the event data.
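The request/response exchange described above can be sketched as follows. All names (`build_first_data_request`, the dictionary field names, and the identifier text) are hypothetical illustrations for this disclosure's described data flow, not part of any actual implementation:

```python
# Hedged sketch of the client-side exchange described above: the terminal
# sends first information (identifying the target special effect sticker) and
# second information (identifying the target area category), then derives the
# second and/or third identifier from the event data the server returns.
# All field and function names are illustrative assumptions.

def build_first_data_request(sticker_id: str, area_category: str) -> dict:
    """First information identifies the target special effect sticker;
    second information identifies the target area category."""
    return {"sticker": sticker_id, "area_category": area_category}

def identifiers_from_event_data(event_data: dict) -> dict:
    """Generate the second and/or third identifier from the returned
    cumulative trigger count and ranking, when present."""
    identifiers = {}
    if "cumulative_triggers" in event_data:
        identifiers["second"] = f"{event_data['cumulative_triggers']} check-ins"
    if "ranking" in event_data:
        identifiers["third"] = f"No.{event_data['ranking']} among all areas"
    return identifiers

request = build_first_data_request("sticker_42", "grassland")
event_data = {"cumulative_triggers": 1024, "ranking": 3}  # as if returned by the server
print(identifiers_from_event_data(event_data))
```

Note that the "and/or" in the claim is reflected by each identifier being generated only when the corresponding field is present in the event data.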
- the method also includes: in response to a first editing operation on the target special effect sticker, setting the special effect content of the target special effect sticker to a fourth identifier, wherein the fourth identifier represents a custom area category; and displaying the fourth identifier in the application interface.
- the method further includes: based on the custom area category, displaying a fifth identifier and/or a sixth identifier in the application interface; wherein the fifth identifier represents the cumulative number of triggers of the target event corresponding to the custom area category; and the sixth identifier represents the ranking of the custom area category in all area categories based on the cumulative number of triggers.
- the method further includes: in response to a second editing operation on the target special effect sticker, setting the display position of the target special effect sticker in the special effect video, and/or the special effect content of the target special effect sticker in the special effect video.
- the special effects video is generated based on at least two frames of initial pictures and the target special effects stickers, and the second editing operation includes at least two first sub-editing operations for the initial pictures;
- the display position of the target special effects stickers in the special effects video is set in response to the second editing operation for the target special effects stickers, including: in response to the first sub-editing operation for the initial pictures, the display position in the frame corresponding to each of the initial pictures is set, so that when the special effects video is played, the target special effects stickers are respectively displayed at the display position in the frame corresponding to each of the initial pictures.
- the special effect video is generated based on at least two frames of initial pictures and the target special effect stickers
- the second editing operation includes at least two second sub-editing operations for the initial picture
- the method further includes: in response to the second sub-editing operation for the initial picture, setting the special effect content of the target special effect sticker in at least one frame of the initial picture, so that when the special effect video is played, the target special effect sticker is displayed based on the special effect content corresponding to each of the initial pictures.
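The two kinds of sub-editing operations described above can be sketched as per-frame edits. The function names and the per-frame dictionary are illustrative assumptions, not the disclosed implementation:

```python
# Hedged sketch: a first sub-editing operation sets the sticker's display
# position for one initial picture (frame), and a second sub-editing
# operation sets the sticker's special effect content for that frame;
# playback then looks both up frame by frame. All names are assumptions.

frame_edits: dict[int, dict] = {}

def first_sub_edit(frame: int, position: tuple) -> None:
    frame_edits.setdefault(frame, {})["position"] = position

def second_sub_edit(frame: int, content: str) -> None:
    frame_edits.setdefault(frame, {})["content"] = content

first_sub_edit(0, (100, 60))
second_sub_edit(0, "Sunny grassland")
first_sub_edit(1, (220, 140))
second_sub_edit(1, "Sunset grassland")

# During playback, the sticker is drawn per frame at the stored position
# with the stored content.
for frame in sorted(frame_edits):
    edit = frame_edits[frame]
    print(frame, edit["position"], edit["content"])
```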
- the method also includes: obtaining pre-generated image data, the image data including a video or at least one frame of a picture; displaying the content of the image data on a first layer within the application interface; displaying the target special effect stickers within the application interface includes: displaying the target special effect stickers on a second layer of the application interface, wherein the first layer is located below the second layer.
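The two-layer arrangement above (image content below, sticker above) amounts to ordering layers by z-order before drawing. The `Layer` class and z values below are illustrative assumptions:

```python
# Hedged sketch of the layering described above: the image data is rendered
# on a first (lower) layer and the target special effect sticker on a
# second (upper) layer, so the sticker remains visible over the content.
# The Layer class and z-order values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    z: int  # higher z draws later, i.e. on top

def composite(layers: list) -> list:
    """Return the draw order from bottom to top."""
    return [layer.name for layer in sorted(layers, key=lambda l: l.z)]

first_layer = Layer("image_data", z=0)     # video or picture content
second_layer = Layer("target_sticker", z=1)
print(composite([second_layer, first_layer]))  # ['image_data', 'target_sticker']
```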
- the application interface is a camera viewfinder interface
- the method further includes at least one of the following: in response to a third trigger operation on the camera viewfinder interface, capturing image data, wherein the image data is used to generate the special effects video in combination with the target special effects stickers; and during the process of capturing the image data, displaying the target special effects stickers in the viewfinder interface.
- a special effect video generating device including:
- a display module is used to display an application interface; a processing module is used to obtain area information in response to a first trigger operation on a special effect component, wherein the area information represents a target area having target area characteristics; a generation module is used to display target special effect stickers in the application interface, and generate a special effect video based on the target special effect stickers, wherein the special effect content of the target special effect stickers is determined based on the area information.
- the processing module is specifically used to: acquire image data in response to a first trigger operation on a special effect component; perform feature extraction on the image data to obtain area information, wherein the area information includes the area location and/or topographic features of the target area.
- the special effect content of the target special effect sticker includes a first identifier; when the generation module displays the target special effect sticker in the application interface, it is specifically used to: determine the target area category of the target area according to the area information; generate the first identifier according to the target area category; and display the first identifier in the application interface.
- the special effect content of the target special effect sticker also includes a second identifier, which represents the cumulative triggering number of the target event corresponding to the target area category, and the target event includes the terminal device generating the corresponding video special effect based on the target special effect sticker.
- when the generation module displays the target special effect sticker in the application interface, it is also used to: display the second identifier in the application interface.
- the special effect content of the target special effect sticker also includes a third identifier, and the third identifier represents the ranking of the target area category in all area categories based on the cumulative number of triggers; when the generation module displays the target special effect sticker in the application interface, it is also used to: display the third identifier in the application interface.
- the generation module is also used to: send a first data request to a server, wherein the first data request includes first information representing the target special effect sticker and second information representing the target area category; receive event data returned by the server, wherein the event data includes at least the cumulative number of triggers of the target event corresponding to the target area, and/or the ranking of the target area in all areas based on the cumulative number of triggers; and generate a second identifier and/or a third identifier based on the event data.
- the generation module is also used to: in response to a first editing operation on the target special effect sticker, set the special effect content of the target special effect sticker to a fourth identifier, wherein the fourth identifier represents a custom area category; and display the fourth identifier in the application interface.
- the generation module is also used to: display a fifth identifier and/or a sixth identifier in the application interface based on the custom area category; wherein the fifth identifier represents the cumulative number of triggers of the target event corresponding to the custom area category; and the sixth identifier represents the ranking of the custom area category in all area categories based on the cumulative number of triggers.
- the generation module is also used to: in response to a second editing operation on the target special effect sticker, set the display position of the target special effect sticker in the special effect video, and/or the special effect content of the target special effect sticker in the special effect video.
- the special effects video is generated based on at least two frames of initial pictures and the target special effects stickers
- the second editing operation includes at least two first sub-editing operations for the initial pictures
- when the generation module sets the display position of the target special effect sticker in the special effect video in response to the second editing operation on the target special effect sticker, it is specifically used to: in response to the first sub-editing operation for the initial pictures, set the display position in the frame corresponding to each of the initial pictures, so that when the special effect video is played, the target special effect sticker is respectively displayed at the display position in the frame corresponding to each of the initial pictures.
- the special effect video is generated based on at least two frames of initial pictures and the target special effect sticker, the second editing operation includes at least two second sub-editing operations for the initial picture, and the generation module is also used to: in response to the second sub-editing operation for the initial picture, set the special effect content of the target special effect sticker in at least one frame of the initial picture, so that when the special effect video is played, the target special effect sticker is displayed based on the special effect content corresponding to each of the initial pictures.
- the generation module is also used to: obtain pre-generated image data, the image data including a video or at least one frame of a picture; and display the content of the image data on a first layer within the application interface; when the generation module displays the target special effect stickers within the application interface, it is also used to: display the target special effect stickers on a second layer of the application interface, wherein the first layer is located below the second layer.
- the application interface is a camera viewfinder interface
- the generation module is further used for at least one of the following: in response to a third trigger operation on the camera viewfinder interface, capturing image data, wherein the image data is used to generate the special effects video in combination with the target special effects stickers; and in the process of capturing the image data, displaying the target special effects stickers in the viewfinder interface.
- an electronic device comprising: a processor, and a memory communicatively connected to the processor;
- the memory stores computer-executable instructions
- the processor executes the computer-executable instructions stored in the memory to implement the special effects video generation method described in the first aspect and various possible designs of the first aspect.
- a computer-readable storage medium stores computer-executable instructions; when a processor executes the computer-executable instructions, the special effects video generation method described in the first aspect and the various possible designs of the first aspect is implemented.
- an embodiment of the present disclosure provides a computer program product, including a computer program, which, when executed by a processor, implements the special effects video generation method described in the first aspect and various possible designs of the first aspect.
Abstract
Embodiments of the present disclosure provide a special effect video generation method and apparatus, an electronic device, and a storage medium. The method comprises: displaying an application interface; in response to a first trigger operation on a special effect component, obtaining area information, the area information representing target area characteristics of a target area; displaying a target special effect sticker in the application interface, wherein the special effect content of the target special effect sticker is determined on the basis of the area information; and generating a special effect video on the basis of the target special effect sticker.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to Chinese Patent Application No. 202310485260.1, filed on April 28, 2023, entitled "Special Effects Video Generation Method, Device, Electronic Device and Storage Medium", the disclosure of which is incorporated herein by reference in its entirety.

The embodiments of the present disclosure relate to the field of Internet of Things technology, and in particular to a special effects video generation method and apparatus, an electronic device, and a storage medium.

Currently, processing a video to generate a special effects video with corresponding virtual special effects is a common application scenario of video editing functions. Various applications and platforms implement this special-effects-video generation function by providing users with preset special effects props.
SUMMARY OF THE INVENTION

The embodiments of the present disclosure provide a special effects video generation method and apparatus, an electronic device, and a storage medium.

In a first aspect, an embodiment of the present disclosure provides a special effects video generation method, including:

displaying an application interface; in response to a first trigger operation on a special effect component, obtaining area information, where the area information represents target area characteristics of a target area; displaying a target special effect sticker in the application interface, where the special effect content of the target special effect sticker is determined based on the area information; and generating a special effects video based on the target special effect sticker.
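The four operations of the first aspect can be sketched as a minimal control flow. Every function name, parameter, and the identifier text below is a hypothetical illustration of the described method, not an actual implementation:

```python
# Hedged sketch of the first-aspect method: obtain area information on the
# first trigger operation, determine the sticker's special effect content
# from that information, then generate the special effects video from the
# initial frames and the sticker. All names are illustrative assumptions.

def on_first_trigger(trigger_context: dict) -> dict:
    """Obtain area information in response to the first trigger operation."""
    return {"category": trigger_context.get("category", "unknown")}

def sticker_content_for(area_info: dict) -> str:
    """The sticker's special effect content is determined by the area info."""
    return f"Checked in at: {area_info['category']}"

def generate_special_effect_video(frames: list, sticker: str) -> dict:
    """Combine the initial frames with the target special effect sticker."""
    return {"frames": frames, "sticker": sticker}

area_info = on_first_trigger({"category": "grassland"})
sticker = sticker_content_for(area_info)
video = generate_special_effect_video(["frame0", "frame1"], sticker)
print(video["sticker"])
```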
In a second aspect, an embodiment of the present disclosure provides a special effects video generation apparatus, including:

a display module, configured to display an application interface;

a processing module, configured to obtain area information in response to a first trigger operation on a special effect component, where the area information represents target area characteristics of a target area; and

a generation module, configured to display a target special effect sticker in the application interface and generate a special effects video based on the target special effect sticker, where the special effect content of the target special effect sticker is determined based on the area information.

In a third aspect, an embodiment of the present disclosure provides an electronic device, including:

a processor, and a memory communicatively connected to the processor;

where the memory stores computer-executable instructions; and

the processor executes the computer-executable instructions stored in the memory to implement the special effects video generation method described in the first aspect and the various possible designs of the first aspect.

In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the special effects video generation method described in the first aspect and the various possible designs of the first aspect.

In a fifth aspect, an embodiment of the present disclosure provides a computer program product, including a computer program which, when executed by a processor, implements the special effects video generation method described in the first aspect and the various possible designs of the first aspect.

In a sixth aspect, an embodiment of the present disclosure provides a computer program which, when executed by a processor, implements the special effects video generation method described in the first aspect and the various possible designs of the first aspect.
To more clearly illustrate the technical solutions in the embodiments of the present disclosure or in the related art, the drawings needed for describing the embodiments or the related art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a diagram of an application scenario of the special effects video generation method provided by an embodiment of the present disclosure;

FIG. 2 is a first flow chart of the special effects video generation method provided by an embodiment of the present disclosure;

FIG. 3 is a flow chart of a specific implementation of step S102 in the embodiment shown in FIG. 2;

FIG. 4 is a schematic diagram of a process of generating a special effects video provided by an embodiment of the present disclosure;

FIG. 5 is a schematic diagram of another process of generating a special effects video provided by an embodiment of the present disclosure;

FIG. 6 is a schematic diagram of displaying a second identifier provided by an embodiment of the present disclosure;

FIG. 7 is a schematic diagram of displaying a third identifier provided by an embodiment of the present disclosure;

FIG. 8 is a schematic flow chart of a process of generating a second identifier and/or a third identifier provided by an embodiment of the present disclosure;

FIG. 9 is a flow chart of interaction between a terminal device and a server provided by an embodiment of the present disclosure;

FIG. 10 is another flow chart of interaction between a terminal device and a server provided by an embodiment of the present disclosure;

FIG. 11 is a second flow chart of the special effects video generation method provided by an embodiment of the present disclosure;

FIG. 12 is a schematic diagram of a process of responding to a first editing operation provided by an embodiment of the present disclosure;

FIG. 13 is a third flow chart of the special effects video generation method provided by an embodiment of the present disclosure;

FIG. 14 is a schematic diagram of a process of setting the display position in the frame corresponding to each initial picture provided by an embodiment of the present disclosure;

FIG. 15 is a schematic diagram of a process of setting the special effect content corresponding to each initial picture provided by an embodiment of the present disclosure;

FIG. 16 is a structural block diagram of the special effects video generation apparatus provided by an embodiment of the present disclosure;

FIG. 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure;

FIG. 18 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present disclosure.
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below in conjunction with the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present disclosure.

It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data used for analysis, stored data, displayed data, etc.) involved in the present disclosure are all information and data authorized by the user or fully authorized by all parties; the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entrances are provided for users to choose to authorize or refuse.
In the solutions of the related art, when a special effects prop is selected and triggered by a user, the content of the displayed special effect sticker is fixed, and personalized special effect content suited to the user cannot be generated, resulting in problems such as low generation efficiency of special effects videos and a small amount of information.

According to the special effects video generation method and apparatus, electronic device, and storage medium provided by the embodiments of the present disclosure, an application interface is displayed; in response to a first trigger operation on a special effect component, area information is obtained, where the area information represents target area characteristics of a target area; a target special effect sticker is displayed in the application interface, where the special effect content of the target special effect sticker is determined based on the area information; and a special effects video is generated based on the target special effect sticker. After the special effect component is triggered, special effect content determined by the area information of the target area representing the target area characteristics is displayed in the application interface, thereby realizing a dynamic mapping between the area information and the special effect content of the target special effect sticker triggered by the terminal device. The special effects video generated based on the target special effect sticker can thus match the area characteristics without manual editing or input by the user, which improves video generation efficiency and enriches the amount of information in the special effects video.
The application scenarios of the embodiments of the present disclosure are explained below:

The special effects video generation method provided by the embodiments of the present disclosure can be applied to video editing and generation scenarios, and specifically to video editing scenarios that generate area-location-related special effects, such as location check-in effects and scenery-sharing effects; the specific implementations of these effects are described in detail in the subsequent embodiments. FIG. 1 is a diagram of an application scenario of the special effects video generation method provided by an embodiment of the present disclosure. As shown in FIG. 1, the method can be applied to a terminal device, such as a smartphone, which runs an application (APP) with an image editing function capable of adding special effects to an initial image. The application interface of this application provides a special effect component for adding virtual special effects to the initial image. When the user taps the special effect component on the terminal device, the component is triggered, and the corresponding special effect sticker, such as the "glasses" sticker shown in the figure, is then displayed in the application interface, thereby adding the special effect to the initial image.

In the related art, in the typical video editing scenario described above, the content of the special effect sticker displayed after the special effect component is triggered is fixed, such as the "glasses" effect in the example above. However, in other scenarios, such as video editing themed on "location check-in" or "scenery sharing", such fixed-content video effects can neither meet users' personalized image editing needs nor automatically generate personalized special effect content. In this case, the user can only manually insert text or logos into the initial picture, resulting in low generation efficiency of special effects videos and a small amount of information. The embodiments of the present disclosure provide a special effects video generation method to solve these problems.
Referring to FIG. 2, FIG. 2 is a first flow chart of the special effects video generation method provided by an embodiment of the present disclosure. The method of this embodiment can be applied in a terminal device, and includes:

Step S101: Display an application interface.

Step S102: In response to a first trigger operation on a special effect component, obtain area information, where the area information represents target area characteristics of a target area.

Step S103: Display a target special effect sticker in the application interface, where the special effect content of the target special effect sticker is determined based on the area information.

Step S104: Generate a special effects video based on the target special effect sticker.
Exemplarily, referring to the application scenario shown in FIG. 1, the method provided by this embodiment may be executed by a terminal device, such as a smartphone, which implements the special effects video generation method by running a target application. Specifically, after running the target application, the terminal device displays the application interface of the target application on its display screen. A special effect component is provided in the application interface; after the user triggers the special effect component through a first trigger operation such as a tap or a long press, the target special effect sticker corresponding to the special effect component is displayed in the application interface. The special effect content of the target special effect sticker is determined based on area information, and the area information represents target area characteristics of a target area. In one possible implementation, the area information may be determined based on image data loaded manually by the user. Exemplarily, the specific implementation of step S102 includes:

Step S1021: In response to the first trigger operation on the special effect component, acquire image data.

Step S1022: Perform feature extraction on the image data to obtain the area information.

The area information includes at least one of the following: the area location of the target area, the landform features of the target area, and target object information within the target area.
示例性地,终端设备在响应于针对特效组件的第一触发操作后,获得图像数据,图像数据可以包括视频或图片。以图像数据为图片为例,终端设备响应于用户指令而加载该图像数据后,对图像数据进行特征提取,可以得到图片中所表现的目标区域的目标区域特征,即区域信息。其中,区域信息可以为区域位置、地貌特征以及目标对象信息。其中,区域位置是指目标区域所在的位置点,也可以指目标区域的区域范围,或者二者的合集。地貌特征是指目标区域的地貌,例如草地、冰川。目标区域内的目标对象信息,是指位于目标区域内具有某种指定的目标对象,或者目标对象所具有的特征,如目标对象的类型、外观特征等,具体地,例如,目标对象信息为描述在目标区域内具有的“向日葵”的特征信息,进一步地,目标对象信息还可以为描述“花”或者“红色向日葵”的特征信息。具体实现方式可根据需要设置,此处不再一一举例赘述。进一步地,图像数据中的图像内容包括目标区域,终端设备对图像数据进行特征提取后,得到图像数据所表现的目标区域所对应的区域位置、目标区域的地貌特征、目标区域的目标对象信息中的至少一种。例如,区域信息包括表征“草原”的描述信息,更具体地,区域信息包含字符串“草原”;再例如,区域信息包括表征“A地区”的描述信息,更具体地,区域信息包含字符串“A地”,区域信息还可以直接包含具体的位置坐标。当然,在另一种可能的
实现方式中,区域信息可以同时表征区域位置和地貌特征,例如“A地草原”。区域信息的具体实现方式可以根据需要设置,此处不再赘述。Exemplarily, after the terminal device responds to the first trigger operation for the special effect component, it obtains image data, and the image data may include a video or a picture. Taking the image data as a picture as an example, after the terminal device loads the image data in response to the user instruction, it performs feature extraction to obtain the target area features of the target area shown in the picture, that is, the area information. Among them, the area information can be the area location, landform features and target object information. Among them, the area location refers to the location point where the target area is located, and it can also refer to the area range of the target area, or a combination of the two. The landform feature refers to the landform of the target area, such as a grassland or a glacier. The target object information in the target area refers to a certain specified target object located in the target area, or the characteristics of the target object, such as the type and appearance characteristics of the target object. Specifically, for example, the target object information is the feature information describing the "sunflower" in the target area; further, the target object information can also be the feature information describing "flower" or "red sunflower". The specific implementation method can be set as needed, and will not be repeated here one by one. Furthermore, the image content in the image data includes a target area. After the terminal device extracts features from the image data, it obtains at least one of the area position corresponding to the target area represented by the image data, the topographic features of the target area, and the target object information of the target area.
For example, the area information includes descriptive information representing "grassland", and more specifically, the area information includes the character string "grassland"; for another example, the area information includes descriptive information representing "A region", and more specifically, the area information includes the character string "A region", and the area information may also directly include specific location coordinates. Of course, in another possible implementation, the regional information can simultaneously represent the regional location and landform features, such as "grassland in area A". The specific implementation of the regional information can be set as needed and will not be described in detail here.
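As a concrete illustration of the three optional components of the area information described above, a minimal sketch in Python follows; the class name, field names, and validation rule are assumptions for illustration only, not part of the disclosed implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RegionInfo:
    """Area information: at least one of the three components must be present."""
    location: Optional[str] = None         # 区域位置, e.g. "A地" or coordinates
    landform: Optional[str] = None         # 地貌特征, e.g. "草原", "冰川"
    target_objects: List[str] = field(default_factory=list)  # 目标对象信息, e.g. ["向日葵"]

    def __post_init__(self):
        # The disclosure says the area information includes "at least one"
        # of the three components, so reject a fully empty record.
        if self.location is None and self.landform is None and not self.target_objects:
            raise ValueError("area information must contain at least one component")
```

A combined record such as `RegionInfo(location="A地", landform="草原")` then corresponds to the "A地草原" example above.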
在一种可能的实现方式中,目标特效贴纸的特效内容包括第一标识。具体地,如图3所示,步骤S103的具体实现步骤包括:In a possible implementation, the special effect content of the target special effect sticker includes the first identifier. Specifically, as shown in FIG3 , the specific implementation steps of step S103 include:
步骤1031:响应于针对特效组件的第一触发操作,在应用界面内显示第一标识。Step 1031: In response to a first trigger operation on the special effect component, a first logo is displayed in the application interface.
示例性地,在响应针对特效组件的第一触发操作后,终端设备得到用户输入的区域信息,或者由用户请求而获得的区域信息,根据区域信息,确定对应的目标区域的目标区域类别。之后,获取与该目标区域类别或目标区域类别的识别标识对应的文字、图标等信息,即第一标识;并将表征该目标区域类别的第一标识,显示在应用界面内。Exemplarily, after responding to the first trigger operation on the special effect component, the terminal device obtains the area information input by the user, or the area information obtained by the user's request, and determines the target area category of the corresponding target area according to the area information. Then, the text, icon and other information corresponding to the target area category or the identification mark of the target area category, i.e., the first mark, is obtained; and the first mark representing the target area category is displayed in the application interface.
图4为本公开实施例提供的一种生成特效视频的过程示意图,如图4所示,示例性地,一种可能的实现方式中,应用界面为相机取景框界面,在相机取景框界面内,设置有特效组件C1(图中示为C1),当用户点击该特效组件C1后,终端设备获取区域信息,并基于该区域信息,确定对应的目标区域类别(A区域),之后,在相机取景框界面内显示基于目标区域类别生成的第一标识“A区域,我来了”,即目标特效贴纸的特效内容。之后,终端设备响应于针对相机取景框界面的第三触发操作,拍摄图像数据,图像数据可以包括通过终端设备的摄像头拍摄的单帧图片、多帧图片或视频,一种可能的实现方式中,在拍摄图像数据的过程结束后,终端设备直接对图像数据和目标特效贴纸进行渲染,生成最终的特效视频;而在另一种可能的实现方式中,在拍摄图像数据的过程结束后,终端设备返回图像编辑页面,对图像数据和目标特效贴纸的叠加展示效果进行预览展示,用户可以进一步对图像数据或目标特效贴纸进行编辑,从而改变特效图像的内容,在编辑结束后,对图像数据和目标特效贴纸进行融合及渲染,生成最终的特效视频。Figure 4 is a schematic diagram of a process for generating a special effects video provided by an embodiment of the present disclosure. As shown in Figure 4, exemplarily, in a possible implementation method, the application interface is a camera viewfinder interface, and a special effects component C1 (shown as C1 in the figure) is provided in the camera viewfinder interface. When the user clicks on the special effects component C1, the terminal device obtains the area information, and based on the area information, determines the corresponding target area category (A area), and then displays the first identification "A area, here I come" generated based on the target area category in the camera viewfinder interface, that is, the special effects content of the target special effects sticker. Afterwards, the terminal device responds to the third trigger operation on the camera viewfinder interface to capture image data. The image data may include a single-frame picture, multiple-frame pictures or videos captured by the camera of the terminal device. 
In one possible implementation, after the process of capturing the image data is completed, the terminal device directly renders the image data and the target special effect stickers to generate a final special effect video; in another possible implementation, after the process of capturing the image data is completed, the terminal device returns to the image editing page to preview the superimposed display effect of the image data and the target special effect stickers. The user can further edit the image data or the target special effect stickers to change the content of the special effect image. After the editing is completed, the image data and the target special effect stickers are fused and rendered to generate the final special effect video.
其中,在拍摄图像数据的过程中,在相机取景框界面内,同时显示目标特效贴纸,使用户可以在拍摄图像数据的过程中,同时预览最终生成的特效视频的内容,提高视频的编辑效率。In the process of shooting image data, the target special effect stickers are displayed in the camera viewfinder interface at the same time, so that the user can preview the content of the final special effect video while shooting image data, thereby improving the efficiency of video editing.
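The capture flow of FIG. 4 (trigger the component, obtain region info, determine the target area category, show the first identifier, shoot, then render) can be condensed into a short sketch. All function names and the category lookup are hypothetical stand-ins, since the disclosure does not define an API:

```python
def determine_category(region_info: str) -> str:
    # Map region info to a target area category; a real implementation would
    # classify location/landform features. Stubbed with a lookup table here.
    return {"A地草原": "A区域"}.get(region_info, "未知区域")

def build_first_identifier(category: str) -> str:
    # The first identifier shown in the camera viewfinder in FIG. 4.
    return f"{category},我来了"

def generate_effect_video(frames, sticker):
    # Rendering stub: pair the sticker content with every captured frame.
    return [(frame, sticker) for frame in frames]

sticker = build_first_identifier(determine_category("A地草原"))
video = generate_effect_video(["frame_0", "frame_1"], sticker)
```

The same sticker content is applied to every frame, which mirrors the behavior of previewing the target special effect sticker throughout the shooting process.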
图5为本公开实施例提供的另一种生成特效视频的过程示意图,如图5所示,示例性地,一种可能的实现方式中,应用界面为图像编辑界面,在图像编辑界面内,设置特效组件C2(图中示为C2)和相册组件(图中示为C3),
通过触发相册组件,加载预生成的图像数据,图像数据包括视频或至少一帧图片。具体地,例如,当用户点击相册组件后,从开启的相册页面内选择一帧图像、多帧图像或者一段视频,终端设备将其确定为图像数据,并将图像数据显示在图像编辑界面内;在此之前或之后,特效组件C2被触发,例如用户点击特效组件C2,之后终端设备获取基于区域信息,确定对应的目标区域类别(B区域),再之后,终端设备在应用界面内的第一图层显示图像数据的内容,具体地,例如播放视频、显示视频的首帧、显示一帧图片、轮播显示多帧图片等方式中的一种,在应用界面的第二图层显示基于目标区域类别生成的第一标识“B区域,我来了”,其中,可选地,第一图层位于第二图层之下,即目标特效贴纸覆盖在图像数据的内容之上,从而使目标特效贴纸具有更好的显示效果。此时,终端设备对图像数据和目标特效贴纸的叠加展示效果进行预览展示,用户可以进一步对图像数据或目标特效贴纸进行编辑,从而改变特效图像的内容,在编辑结束后,执行用户输入的生成指令,终端设备对图像数据和目标特效贴纸进行渲染,生成最终的特效视频。FIG5 is a schematic diagram of another process of generating a special effect video provided by an embodiment of the present disclosure. As shown in FIG5, exemplarily, in a possible implementation, the application interface is an image editing interface, and in the image editing interface, a special effect component C2 (shown as C2 in the figure) and an album component (shown as C3 in the figure) are set. By triggering the album component, the pre-generated image data is loaded, and the image data includes a video or at least one frame of a picture. 
Specifically, for example, when the user clicks on the album component and selects a frame of image, multiple frames of images or a video from the opened album page, the terminal device determines the selection as the image data and displays the image data in the image editing interface; before or after this, the special effect component C2 is triggered, for example, the user clicks on the special effect component C2, and then the terminal device obtains the area information and determines the corresponding target area category (area B) based on it. The terminal device then displays the content of the image data in the first layer in the application interface, specifically, for example, by one of the methods of playing a video, displaying the first frame of a video, displaying a frame of image, or displaying multiple frames of images in a carousel, and displays the first identifier "Area B, I'm here" generated based on the target area category in the second layer of the application interface, wherein, optionally, the first layer is located below the second layer, that is, the target special effect sticker covers the content of the image data, so that the target special effect sticker has a better display effect. At this time, the terminal device previews the superimposed display effect of the image data and the target special effect sticker. The user can further edit the image data or the target special effect sticker to change the content of the special effect image. After the editing is completed, the generation instruction entered by the user is executed, and the terminal device renders the image data and the target special effect sticker to generate the final special effect video.
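The first-layer/second-layer arrangement described above amounts to drawing layers in order of their layer index, so the sticker layer is painted over the image layer. A minimal sketch of that ordering, with illustrative data:

```python
def composite(layers):
    # Painter's algorithm: draw lower layer indices first, so the layer with
    # the highest index (the second layer, i.e. the sticker) ends up on top.
    return [content for _, content in sorted(layers, key=lambda layer: layer[0])]

# First layer: the image data; second layer: the target special effect sticker.
draw_order = composite([(2, "sticker: B区域,我来了"), (1, "image data")])
```

The returned draw order puts the image content first and the sticker last, matching the optional arrangement in which the sticker covers the image data.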
在一种可能的实现方式中,如图3所示,在步骤S1031之后,还包括:In a possible implementation, as shown in FIG3 , after step S1031, the method further includes:
步骤S1032:根据目标区域类别,在应用界面内显示第二标识。Step S1032: Displaying a second identifier in the application interface according to the target area category.
其中,具体地,目标特效贴纸的特效内容还包括第二标识,第二标识表征目标区域类别对应的目标事件的累计触发数量,目标事件包括终端设备基于目标特效贴纸生成对应的视频特效。例如,一种可能的实现方式中,在运行目标应用(APP)的众多终端设备中,当终端设备D1在目标应用内通过调用特效组件,触发上述目标特效贴纸,生成对应的视频特效后,即视为触发一次目标事件,由于上述目标事件的触发是基于目标区域类别实现的,即目标区域类别不同时,目标特效贴纸的特效内容不同,因此,目标区域类别为上述目标事件的输入参数,目标区域类别与目标事件具有一一对应的关系。当终端设备D1位于目标区域类别时,上述目标事件触发后,目标区域类别对应的目标事件的累计触发数量加1。Specifically, the special effect content of the target special effect sticker also includes a second identifier, which represents the cumulative triggering number of target events corresponding to the target area category, and the target event includes the terminal device generating the corresponding video special effect based on the target special effect sticker. For example, in one possible implementation, among the many terminal devices running the target application (APP), when the terminal device D1 triggers the above-mentioned target special effect sticker by calling the special effect component in the target application and generates the corresponding video special effect, it is regarded as triggering a target event. Since the triggering of the above-mentioned target event is based on the target area category, that is, when the target area category is different, the special effect content of the target special effect sticker is different. Therefore, the target area category is the input parameter of the above-mentioned target event, and the target area category has a one-to-one correspondence with the target event. When the terminal device D1 is located in the target area category, after the above-mentioned target event is triggered, the cumulative triggering number of the target event corresponding to the target area category is increased by 1.
在具体实现上,示例性地,当终端设备触发目标事件后,向服务器发送通知消息,服务器通过获取携带同一区域信息的通知消息,得到累计触发数据,之后服务器响应终端设备发送的数据请求,将累计触发数量发送给终端设备,以使终端设备能够基于目标区域类别对应的累计触发数量生成第二标识。在另一种可能的实现方式中,目标事件还可以为终端设备基于目标特效
贴纸生成对应的视频特效,并将视频特效发布,示例性地,发布是指将特效视频上传至目标应用对应的服务器,以使其他用户可以在目标应用内观看到该视频特效。In specific implementation, for example, when the terminal device triggers the target event, it sends a notification message to the server. The server obtains the notification messages carrying the same area information to obtain the cumulative trigger data. Then the server responds to the data request sent by the terminal device and sends the cumulative trigger quantity to the terminal device, so that the terminal device can generate a second identifier based on the cumulative trigger quantity corresponding to the target area category. In another possible implementation, the target event can also be the terminal device generating the corresponding video special effect based on the target special effect sticker and publishing the video special effect. Exemplarily, publishing refers to uploading the special effect video to the server corresponding to the target application so that other users can watch the video special effect in the target application.
进一步地,终端设备根据累计触发数量,生成对应的第二标识,并将第二标识显示在应用界面内。图6为本公开实施例提供的一种显示第二标识的示意图,如图6所示,在应用界面内,基于时间顺序,在第一时刻显示第一标识,即字符串“A区域,我来了”,之后,在第二时刻显示字符串“A区域总打卡人数:65535”,即第二标识的内容。表征在该目标区域类别(例如图4或图5所示实施例中的A区域或B区域),累计触发目标事件(例如发布基于目标特效贴纸的特效视频)的次数为65535次。其中,在其他可能的实现方式中,第一标识和第二标识还可以同时出现,第二标识可以始终位于应用界面内的同一位置,也可以随着时间变化而位于应用界面内的不同的位置。Furthermore, the terminal device generates a corresponding second identifier according to the cumulative number of triggers, and displays the second identifier in the application interface. Figure 6 is a schematic diagram of displaying a second identifier provided by an embodiment of the present disclosure. As shown in Figure 6, in the application interface, based on the time sequence, the first identifier, that is, the character string "A area, I am here", is displayed at the first moment, and then, the character string "The total number of people who check in in area A: 65535" is displayed at the second moment, that is, the content of the second identifier. It is characterized in that in the target area category (such as area A or area B in the embodiment shown in Figure 4 or Figure 5), the cumulative number of triggering the target event (such as publishing a special effects video based on the target special effects sticker) is 65535 times. Among them, in other possible implementations, the first identifier and the second identifier can also appear at the same time, and the second identifier can always be located at the same position in the application interface, or it can be located at different positions in the application interface as time changes.
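The second identifier shown in FIG. 6 is simply the cumulative trigger count rendered into a display string. A minimal sketch, assuming the format of the FIG. 6 example (the function name is hypothetical):

```python
def build_second_identifier(region: str, cumulative_count: int) -> str:
    # Mirrors the FIG. 6 example string "A区域总打卡人数:65535".
    return f"{region}总打卡人数:{cumulative_count}"
```

For example, `build_second_identifier("A区域", 65535)` reproduces the string displayed at the second moment in FIG. 6.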
在一种可能的实现方式中,如图3所示,在步骤S1031之后,还包括:In a possible implementation, as shown in FIG3 , after step S1031, the method further includes:
步骤S1033:根据目标区域类别,在应用界面内显示第三标识。Step S1033: Displaying a third identifier in the application interface according to the target area category.
其中,示例性地,目标特效贴纸的特效内容还包括第三标识,第三标识表征目标区域类别在全部区域类别中基于累计触发数量的排名。该第三标识与第二标识对应,一种可能的实现方式中,第三标识是基于第二标识生成的。具体地,例如,在运行目标应用的众多终端设备中,当终端设备D1在目标应用内通过调用特效组件,触发上述目标特效贴纸,生成对应的视频特效后,即视为触发一次目标事件,当该终端设备D1位于A区域(目标区域类别)时,上述目标事件触发后,A区域对应的目标事件的累计触发数量加1。同时,位于其他区域的终端设备在触发上述目标事件后,也会导致其他区域的累计触发数量增加,因此,不同区域对应一个动态变化的累计触发数量,而基于该累计触发数量,目标区域类别在全部区域(包括目标区域类别和其他区域)中存在排名序列,例如基于累计触发数量由多至少进行排名后的排名序列。Among them, exemplarily, the special effect content of the target special effect sticker also includes a third identifier, which represents the ranking of the target area category in all area categories based on the cumulative trigger number. The third identifier corresponds to the second identifier. In a possible implementation, the third identifier is generated based on the second identifier. Specifically, for example, among the many terminal devices running the target application, when the terminal device D1 triggers the above-mentioned target special effect sticker by calling the special effect component in the target application and generates the corresponding video special effect, it is regarded as triggering a target event. When the terminal device D1 is located in area A (target area category), after the above-mentioned target event is triggered, the cumulative trigger number of the target event corresponding to area A is increased by 1. At the same time, after the terminal devices located in other areas trigger the above-mentioned target event, the cumulative trigger number of other areas will also increase. Therefore, different areas correspond to a dynamically changing cumulative trigger number, and based on the cumulative trigger number, the target area category has a ranking sequence in all areas (including the target area category and other areas), such as a ranking sequence based on the cumulative trigger number from most to least.
在具体实现上,示例性地,上述全部区域对应的排名序列,和/或目标区域类别在排名序列中的排名,可以是服务器得到全部区域对应的累计触发数据后,通过比较全部区域的累计触发数据而得到的,之后服务器响应终端设备发送的数据请求,将全部区域对应的排名序列,和/或目标区域类别在排名序列中的排名发送给终端设备的,以使终端设备能够基于全部区域对应的排名序列,和/或目标区域类别在排名序列中的排名生成第三标识。
In a specific implementation, exemplarily, the ranking sequence corresponding to all the above-mentioned areas, and/or the ranking of the target area category in the ranking sequence, can be obtained by the server after obtaining the cumulative trigger data corresponding to all the areas by comparing the cumulative trigger data of all the areas. Thereafter, the server responds to the data request sent by the terminal device and sends the ranking sequence corresponding to all the areas, and/or the ranking of the target area category in the ranking sequence to the terminal device, so that the terminal device can generate a third identifier based on the ranking sequence corresponding to all the areas, and/or the ranking of the target area category in the ranking sequence.
进一步地,终端设备根据该累计触发数量的排名序列,和/或目标区域类别在该排名序列中的排名,生成第三标识,并将第三标识显示在应用界面内。图7为本公开实施例提供的一种显示第三标识的示意图,如图7所示,在应用界面内,基于时间顺序,在第一时刻显示第一标识,在第二时刻显示第二标识,在第三时刻显示第三标识;第三标识中包括各区域(A区域、B区域、C区域等)在全部区域中基于累计触发数量的排名。如图所示,B区域排名第一、A区域排名第二、C区域排名第三。在此基础上,可以同时显示第二标识,即在显示排名的基础上,显示各区域对应的累计触发数量。在其他可能的实现方式中,还可以在显示第一标识或第二标识之后,或者显示第一标识或第二标识的同时,显示第三标识,此处不再赘述。其中,第三标识可以始终位于应用界面内的同一位置,也可以随着时间变化而位于应用界面内的不同的位置。Further, the terminal device generates a third identifier according to the ranking sequence of the cumulative number of triggers, and/or the ranking of the target area category in the ranking sequence, and displays the third identifier in the application interface. Figure 7 is a schematic diagram of displaying a third identifier provided by an embodiment of the present disclosure. As shown in Figure 7, in the application interface, based on the time sequence, the first identifier is displayed at the first moment, the second identifier is displayed at the second moment, and the third identifier is displayed at the third moment; the third identifier includes the ranking of each area (area A, area B, area C, etc.) in all areas based on the cumulative number of triggers. As shown in the figure, area B ranks first, area A ranks second, and area C ranks third. On this basis, the second identifier can be displayed at the same time, that is, on the basis of displaying the ranking, the cumulative number of triggers corresponding to each area is displayed. In other possible implementations, the third identifier can also be displayed after the first identifier or the second identifier is displayed, or while the first identifier or the second identifier is displayed, which will not be repeated here. Among them, the third identifier can always be located at the same position in the application interface, or it can be located at different positions in the application interface as time changes.
需要说明的是,上述步骤S1032和步骤S1033,可以与步骤S1031同时执行,也可以在步骤S1031之后顺序执行或同时执行,步骤S1032和步骤S1033之间的执行顺序可以基于需要设置,此处不做具体限制。It should be noted that the above-mentioned steps S1032 and S1033 can be executed simultaneously with step S1031, or can be executed sequentially or simultaneously after step S1031. The execution order between step S1032 and step S1033 can be set based on needs, and no specific restrictions are made here.
进一步地,下面对第二标识和/或第三标识的生成过程,做进一步介绍,图8为本公开实施例提供的一种第二标识和/或第三标识的生成过程流程示意图,示例性地,如图8所示,在显示第二标识和/或第三标识之前,还包括步骤:Further, the generation process of the second identifier and/or the third identifier is further introduced below. FIG. 8 is a schematic flow chart of a generation process of a second identifier and/or a third identifier provided in an embodiment of the present disclosure. Exemplarily, as shown in FIG. 8, before displaying the second identifier and/or the third identifier, the following steps are also included:
步骤S1020A:向服务器发送第一数据请求,第一数据请求中包括表征目标特效贴纸的第一信息和表征目标区域类别的第二信息。Step S1020A: Send a first data request to the server, where the first data request includes first information representing the target special effect sticker and second information representing the target area category.
步骤S1020B:接收服务器返回的事件数据,事件数据至少包括目标区域对应的目标事件的累计触发数量,和/或目标区域类别在全部区域类别中基于累计触发数量的排名。Step S1020B: receiving event data returned by the server, the event data including at least the cumulative number of triggers of the target event corresponding to the target area, and/or the ranking of the target area category among all area categories based on the cumulative number of triggers.
步骤S1020C:根据事件数据,生成第二标识和/或第三标识。Step S1020C: Generate a second identifier and/or a third identifier based on the event data.
可选地,在生成第二标识和/或第三标识之后,还包括:响应于第二触发操作,在应用界面内显示第二标识和/或第三标识。即在一种可能的实现方式中,第二标识和/或第三标识在初始状态下生成后处于不可见状态;在终端设备接收并响应第二触发操作后,在应用界面内显示第二标识和/或第三标识。Optionally, after generating the second identifier and/or the third identifier, the method further includes: in response to a second trigger operation, displaying the second identifier and/or the third identifier in the application interface. That is, in a possible implementation, the second identifier and/or the third identifier is in an invisible state after being generated in the initial state; after the terminal device receives and responds to the second trigger operation, the second identifier and/or the third identifier is displayed in the application interface.
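Steps S1020A to S1020C describe a request/response round trip between the terminal device and the server. The sketch below stubs out the transport and uses illustrative field names for the first data request and the event data; none of these names come from the disclosure:

```python
def make_first_data_request(sticker_id: str, region_category: str) -> dict:
    # Step S1020A: carry the first information (target special effect sticker)
    # and the second information (target area category).
    return {"prop": sticker_id, "region": region_category}

def parse_event_data(event_data: dict) -> dict:
    # Step S1020C: pull out the fields needed to generate the second
    # identifier (cumulative count) and/or the third identifier (ranking).
    return {
        "cumulative_count": event_data.get("count"),
        "rank": event_data.get("rank"),
    }

# Step S1020B stubbed: a fake server response with illustrative values.
request = make_first_data_request("prop_1", "A区域")
parsed = parse_event_data({"count": 65535, "rank": 2})
```

Either field may be absent in the event data, matching the "and/or" wording of step S1020B.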
图9为本公开实施例提供的一种终端设备与服务器之间的交互流程图,如图9所示,示例性地,位于A区域(目标区域类别)的终端设备D1触发
上述目标事件后,通过目标应用向对应的服务器(即目标应用的服务端)发送第一数据请求,第一数据请求中包括表征目标特效贴纸prop_1的第一信息和表征A区域的第二信息,服务器接收到第一数据请求后,基于其中的第一信息和第二信息,获取该A区域内所有终端设备(包括目标终端设备和其他终端设备)发送的针对目标特效贴纸prop_1的通知信息,得到该A区域对应的目标事件(即基于目标特效贴纸prop_1生成特效视频的事件)的累计触发数量Data_1(图中示为Data_1),并将获取得到的累计触发数量Data_1发送至终端设备D1,终端设备D1通过累计触发数量Data_1生成第二标识。FIG. 9 is a flowchart of an interaction between a terminal device and a server provided by an embodiment of the present disclosure. As shown in FIG. 9, exemplarily, after a terminal device D1 located in area A (the target area category) triggers the above-mentioned target event, it sends a first data request to the corresponding server (i.e., the server of the target application) through the target application. The first data request includes first information representing the target special effect sticker prop_1 and second information representing area A. After receiving the first data request, the server obtains, based on the first information and the second information therein, the notification information for the target special effect sticker prop_1 sent by all terminal devices in area A (including the target terminal device and other terminal devices), obtains the cumulative trigger quantity Data_1 (shown as Data_1 in the figure) of the target event corresponding to area A (i.e., the event of generating a special effect video based on the target special effect sticker prop_1), and sends the obtained cumulative trigger quantity Data_1 to the terminal device D1. The terminal device D1 generates a second identifier through the cumulative trigger quantity Data_1.
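On the server side, the cumulative trigger quantity Data_1 is an aggregate over the notification messages that carry the same sticker identifier and region. A minimal sketch, assuming each notification arrives as a (sticker, region) pair (a hypothetical message shape):

```python
from collections import Counter

def cumulative_counts(notifications):
    # Aggregate trigger notifications per (sticker, region) pair; the count
    # for ("prop_1", "A区域") plays the role of Data_1 in FIG. 9.
    return Counter(notifications)

counts = cumulative_counts([
    ("prop_1", "A区域"),
    ("prop_1", "A区域"),
    ("prop_1", "B区域"),
])
```

Each new notification simply increments the matching counter, which corresponds to the "add 1" behavior described for each triggered target event.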
图10为本公开实施例提供的另一种终端设备与服务器之间的交互流程图,示例性地,服务器在获取该A区域对应的目标事件的累计触发数量Data_1的同时,通过其他区域的终端设备发送的针对目标特效贴纸prop_1的通知信息,异步获取其他区域对应的目标事件的累计触发数量,例如B区域对应的累计触发数量Data_2(图中示为Data_2)、C区域对应的累计触发数量Data_3(图中示为Data_3)。并对多个全部区域的累计触发数量进行排名,得到事件数据,之后将事件数据发送给终端设备D1,终端设备D1通过事件数据生成第二标识和/或第三标识。其中,事件数据至少包括目标区域对应的目标事件的累计触发数量,和/或目标区域类别在全部区域类别中基于累计触发数量的排名,一种可能的实现方式中,事件数据包括排名序列Array_data,该排名序列Array_data中存储有区域排名序列以及各区域对应的累计触发数量,例如排名序列Array_data={A=a;B=b;C=c},其中,A、B、C分别为表征A区域、B区域、C区域的区域标识,a、b、c为不同区域的累计触发数量,且a>=b>=c(或者a<=b<=c)。当然,在其他可能的实现方式中,排名序列Array_data中也可以仅包括区域排名序列,例如,Array_data={A;B;C}。服务器将目标事件对应的排名序列Array_data(事件数据)发送至终端设备,终端设备通过排名序列Array_data生成第二标识和/或第三标识。FIG10 is another interactive flow chart between a terminal device and a server provided in an embodiment of the present disclosure. Exemplarily, while obtaining the cumulative trigger quantity Data_1 of the target event corresponding to the A area, the server asynchronously obtains the cumulative trigger quantity of the target event corresponding to other areas, such as the cumulative trigger quantity Data_2 corresponding to the B area (shown as Data_2 in the figure) and the cumulative trigger quantity Data_3 corresponding to the C area (shown as Data_3 in the figure) through the notification information for the target special effect sticker prop_1 sent by the terminal devices in other areas. And the cumulative trigger quantities of all multiple areas are ranked to obtain event data, and then the event data is sent to the terminal device D1, and the terminal device D1 generates a second identifier and/or a third identifier through the event data. Among them, the event data at least includes the cumulative triggering number of the target event corresponding to the target area, and/or the ranking of the target area category in all area categories based on the cumulative triggering number.
In a possible implementation method, the event data includes a ranking sequence Array_data, which stores the area ranking sequence and the cumulative triggering number corresponding to each area, for example, the ranking sequence Array_data = {A = a; B = b; C = c}, wherein A, B, and C are the area identifiers representing area A, area B, and area C, respectively, a, b, and c are the cumulative triggering numbers of different areas, and a >= b >= c (or a <= b <= c). Of course, in other possible implementation methods, the ranking sequence Array_data may also only include the area ranking sequence, for example, Array_data = {A; B; C}. The server sends the ranking sequence Array_data (event data) corresponding to the target event to the terminal device, and the terminal device generates a second identifier and/or a third identifier through the ranking sequence Array_data.
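The ranking sequence Array_data can be produced by sorting the per-region cumulative trigger quantities from most to least. A sketch with illustrative counts (function names are assumptions):

```python
def build_ranking(counts: dict) -> list:
    # Sort regions by cumulative trigger quantity, descending, yielding the
    # region ranking sequence together with each region's count.
    return sorted(counts.items(), key=lambda item: item[1], reverse=True)

def rank_of(region: str, ranking: list) -> int:
    # 1-based position of the target area category in the ranking sequence.
    return 1 + [r for r, _ in ranking].index(region)

ranking = build_ranking({"A区域": 90, "B区域": 120, "C区域": 40})
```

Dropping the counts from each pair would give the count-free variant of the sequence that contains only the region identifiers.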
之后,在应用界面内显示目标特效贴纸后,基于用户输入的生成指令,终端设备即可将目标特效贴纸渲染至对应的初始图片或初始视频中,从而生成特效视频,该渲染过程为已有技术,此处不再赘述。之后,可选地,终端设备在生成特效视频后,基于之前的介绍,可以进一步向服务器发送通知信息,以使服务器能够对累计触发数量和排名序列进行更新,实现特效视频中目标特效贴纸的特效内容的动态更新。
After that, after the target special effect sticker is displayed in the application interface, based on the generation instruction input by the user, the terminal device can render the target special effect sticker into the corresponding initial picture or initial video, thereby generating a special effect video. The rendering process is an existing technology and will not be described here. After that, optionally, after the special effect video is generated, based on the previous introduction, the terminal device can further send a notification message to the server so that the server can update the cumulative trigger quantity and ranking sequence, thereby realizing the dynamic update of the special effect content of the target special effect sticker in the special effect video.
通过显示应用界面;响应于针对特效组件的第一触发操作,在应用界面内显示目标特效贴纸,其中,目标特效贴纸的特效内容是基于区域信息确定的;基于目标特效贴纸,生成特效视频。通过在触发特效组件后,在应用界面内显示由终端设备触发目标特效贴纸时的区域信息确定的特效内容,从而实现了区域信息,与终端设备所触发的目标特效贴纸的特效内容之间的动态映射,进而使基于目标特效贴纸生成的特效视频能够与区域信息相匹配,无需用户手动编辑,提高了视频生成效率,丰富了特效视频中的信息量。By displaying an application interface; in response to a first trigger operation on a special effect component, displaying a target special effect sticker in the application interface, wherein the special effect content of the target special effect sticker is determined based on the regional information; and generating a special effect video based on the target special effect sticker. After the special effect component is triggered, the special effect content determined by the regional information when the terminal device triggers the target special effect sticker is displayed in the application interface, thereby achieving a dynamic mapping between the regional information and the special effect content of the target special effect sticker triggered by the terminal device, thereby enabling the special effect video generated based on the target special effect sticker to match the regional information, without the need for manual editing by the user, thereby improving the efficiency of video generation and enriching the amount of information in the special effect video.
参考图11,图11为本公开实施例提供的特效视频生成方法的流程示意图二。本实施例中在图2所示实施例的基础上,进一步增加对目标特效贴纸进行修改和设置的过程,该特效视频生成方法包括:Referring to FIG. 11 , FIG. 11 is a second schematic flowchart of the special effect video generation method provided in an embodiment of the present disclosure. In this embodiment, based on the embodiment shown in FIG. 2 , a process of modifying and setting the target special effect sticker is further added. The special effect video generation method includes:
步骤S201:显示应用界面。Step S201: Display the application interface.
步骤S202:响应于针对特效组件的第一触发操作,在应用界面内显示目标特效贴纸,其中,目标特效贴纸的特效内容是基于区域信息确定的。Step S202: In response to a first trigger operation on the special effect component, a target special effect sticker is displayed in the application interface, wherein the special effect content of the target special effect sticker is determined based on the area information.
步骤S203:响应于针对目标特效贴纸的第一编辑操作,将目标特效贴纸的特效内容设置为第四标识,其中,第四标识表征自定义区域类别。Step S203: In response to the first editing operation on the target special effect sticker, the special effect content of the target special effect sticker is set to a fourth identifier, wherein the fourth identifier represents a custom area category.
步骤S204:在应用界面内显示第四标识。Step S204: Displaying a fourth logo in the application interface.
Exemplarily, after responding to the first trigger operation on the special effect component, the terminal device first displays the corresponding target special effect sticker in the application interface according to the region information (the specific method for obtaining the region information is introduced in the previous embodiment), such as the first identifier, the second identifier, and the third identifier in the embodiment shown in FIG. 2. The specific implementation has been described in detail in the embodiment shown in FIG. 2 and will not be repeated here. Because the target area category corresponding to region information expressed as position coordinates is multi-level, the same device may simultaneously be located in city A, in district A_1 within city A, and in park A_1_1 within district A_1 (that is, the region information can correspond to multiple regional levels). Therefore, the regional level corresponding to the first identifier, which the terminal device derives from the device's region information, may not match the regional level the user is interested in, causing a regional-level mismatch. To solve this problem, in this embodiment the terminal device responds to a first editing operation input by the user to modify the special effect content of the target special effect sticker, thereby changing the regional level. Specifically, for example, suppose that, following the previous steps, the displayed target special effect sticker includes the first identifier whose content is the string "A City"; the terminal device then responds to the first editing operation and modifies the content of the first identifier to "A_1_1 Park" (a fourth identifier). Since the fourth identifier is generated by the user's operation and corresponds to the first identifier that represents the target area category of the region information, the fourth identifier represents a custom area category. The specific content of the custom area category is determined by the user's input and will not be elaborated here.
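The multi-level region relationship and the user override described above can be sketched as follows. This is a minimal illustration only; the data structure and function names are assumptions for explanation, not part of the disclosed implementation:

```python
# Hypothetical sketch: the same position coordinates map to several nested
# region levels (city -> district -> park). The terminal's automatic pick
# yields the first identifier; the first editing operation replaces it with
# a fourth identifier representing a custom area category.

REGION_HIERARCHY = ["A City", "A_1 District", "A_1_1 Park"]  # outermost to innermost

def default_region_label(hierarchy):
    """Default first identifier: here, the outermost level is chosen."""
    return hierarchy[0]

def apply_first_edit(current_label, user_input):
    """Responding to the first editing operation replaces the sticker's
    first identifier with the user-supplied custom area category."""
    return user_input

label = default_region_label(REGION_HIERARCHY)  # first identifier: "A City"
label = apply_first_edit(label, "A_1_1 Park")   # fourth identifier
```

The point of the sketch is that the automatic choice and the user's level of interest may differ, which is exactly the regional-level mismatch the first editing operation resolves.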
Step S205: Based on the custom area category, display a fifth identifier and/or a sixth identifier in the application interface, wherein the fifth identifier represents the cumulative number of triggers of the target event corresponding to the custom area category, and the sixth identifier represents the ranking of the custom area category among all areas based on the cumulative number of triggers.
Further, after obtaining the custom area category based on the first editing operation, the terminal device can also display, in the application interface, a fifth identifier and/or a sixth identifier corresponding to the fourth identifier, wherein the fifth identifier represents the cumulative number of triggers of the target event corresponding to the custom area category, and the sixth identifier represents the ranking of the custom area category among all areas based on the cumulative number of triggers. The fifth identifier is equivalent to the second identifier in the previous embodiment, and the sixth identifier is equivalent to the third identifier in the previous embodiment; the difference is that the second and third identifiers are generated based on the target area category, while the fifth and sixth identifiers are generated based on the custom area category. Accordingly, the fifth and sixth identifiers are likewise generated, on the basis of the custom area category, by sending a data request to the server and receiving the event data returned by the server. Specifically, step S205 may be implemented as follows:
Step S2051: Send a second data request, where the second data request includes first information representing the target special effect sticker and second information representing the custom area category.
Step S2052: Receive event data returned by the server, where the event data includes at least the cumulative number of triggers of the target event corresponding to the custom area category, and/or the ranking of the custom area category among all areas based on the cumulative number of triggers.
Step S2053: Generate the fifth identifier and/or the sixth identifier according to the event data, and display them.
For the specific method of generating the fifth identifier and/or the sixth identifier, reference may be made to the implementation for generating the second identifier and/or the third identifier in the embodiment shown in FIG. 8, which will not be repeated here.
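Steps S2051-S2053 can be sketched as a simple request/response exchange. This is a minimal illustration under stated assumptions: the request and event-data field names, the sticker id, and the identifier formatting are hypothetical, not taken from the disclosure:

```python
# Hypothetical sketch of steps S2051-S2053: the terminal sends a second data
# request carrying the sticker (first information) and the custom area
# category (second information); the server returns event data; the fifth
# and sixth identifiers are then formatted from that event data.

def build_second_data_request(sticker_id, custom_region):
    # First information: target special effect sticker; second information:
    # custom area category. Field names are assumed for illustration.
    return {"sticker_id": sticker_id, "region_category": custom_region}

def handle_event_data(event_data, custom_region):
    # Fifth identifier: cumulative trigger count of the target event.
    fifth = f"{event_data['cumulative_triggers']} check-ins"
    # Sixth identifier: ranking among all areas by cumulative triggers.
    sixth = f"{custom_region}: Nationally ranked {event_data['rank']}"
    return fifth, sixth

request = build_second_data_request("sticker_001", "A_1_1 Park")
# Simulated server reply; in practice this arrives over the network.
event_data = {"cumulative_triggers": 305, "rank": 1402}
fifth_id, sixth_id = handle_event_data(event_data, "A_1_1 Park")
```

With the example values from FIG. 12, this yields the identifiers "305 check-ins" and "A_1_1 Park: Nationally ranked 1402".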
FIG. 12 is a schematic diagram of a process of responding to the first editing operation provided by an embodiment of the present disclosure. As shown in FIG. 12, when the first identifier (content: "I am in area A"), the second identifier (content: "65,535 check-ins"), and the third identifier (content: "Area A: nationally ranked 3") are displayed in the application interface, the terminal device responds to the first editing operation input by the user and modifies the first identifier into the fourth identifier (content: "I am in A_1_1 Park"). The terminal device then obtains the corresponding custom area category "A_1_1 Park" from the content of the fourth identifier and, based on this custom area category, modifies the second identifier into the fifth identifier (content: "305 check-ins") and the third identifier into the sixth identifier (content: "A_1_1 Park: nationally ranked 1402"), displaying both in the application interface, thereby setting and updating the content of the target special effect sticker. In the above process, only the first identifier needs to be modified; after the modification, the second and third identifiers are automatically matched and corrected based on the change of area (the target area category becomes the custom area category), with no manual modification required, which improves image editing efficiency.
Step S206: Generate a special effect video based on the target special effect sticker.
In this embodiment, the specific implementation of steps S201-S202 and step S206 is consistent with that of steps S101-S103 in the embodiment shown in FIG. 2. For a detailed discussion, please refer to the discussion of steps S101-S103, which will not be repeated here.
In this embodiment, the special effect content of the target special effect sticker is modified based on the first editing operation, so that the regional level represented by the target special effect sticker matches the regional level the user is interested in. This avoids the regional-level mismatch problem, improves the accuracy of the information presented in the special effect video, and meets users' personalized needs.
Referring to FIG. 13, FIG. 13 is the third flowchart of the special effect video generation method provided by an embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in FIG. 2 or FIG. 11, the scheme for setting the position and content of the target special effect sticker is further refined. The special effect video generation method includes:
Step S301: Display an application interface.
Step S302: In response to a first trigger operation on the special effect component, display a target special effect sticker in the application interface, wherein the special effect content of the target special effect sticker is determined based on region information.
Step S303: In response to a second editing operation on the target special effect sticker, set the display position of the target special effect sticker in the special effect video, and/or the special effect content of the target special effect sticker in the special effect video.
Exemplarily, the second editing operation is an operation for setting the display position and/or the special effect content of the target special effect sticker in the special effect video. The second editing operation may include coordinate information, through which the display position of the target special effect sticker in the special effect video is set; the second editing operation may also include content information, such as a string, through which the special effect content of the target special effect sticker is set, for example, changing the special effect content of the target special effect sticker from the first identifier to the fourth identifier.
In a possible implementation, the special effect video is generated based on at least two frames of initial pictures and the target special effect sticker: the terminal device obtains the at least two pre-generated frames of initial pictures and the target special effect sticker and renders them to obtain the special effect video. In this case, exemplarily, the second editing operation includes at least two first sub-editing operations on the initial pictures, each first sub-editing operation being used to edit a single frame of initial picture, so that the target special effect sticker appears at different positions in the at least two frames of initial pictures. A possible implementation of step S303 includes:
Step S3031: In response to the first sub-editing operations on the initial pictures, set the in-frame display position corresponding to each initial picture, so that when the special effect video is played, the target special effect sticker is displayed at the in-frame display position corresponding to each initial picture.
FIG. 14 is a schematic diagram of a process for setting the in-frame display position corresponding to each initial picture provided by an embodiment of the present disclosure. As shown in FIG. 14, the initial pictures used to generate the special effect video include initial picture P1, initial picture P2, and initial picture P3, and the second editing operation includes three first sub-editing operations: operation op_01 on initial picture P1, operation op_02 on initial picture P2, and operation op_03 on initial picture P3. Each first sub-editing operation corresponds to one click operation. After responding to each first sub-editing operation, the terminal device obtains the corresponding click coordinates and, from each set of click coordinates, obtains the corresponding in-frame display positions o1, o2, and o3. After these in-frame display positions are set, in the video editing stage, that is, when the application interface is the video editing interface, while the video editing interface displays initial pictures P1, P2, and P3 in sequence or in a loop at a preset interval, the target special effect sticker is simultaneously displayed at the in-frame display position corresponding to each initial picture. The content displayed in the video editing interface amounts to a preview of the special effect video: when the special effect video is played, the target special effect sticker is displayed at the in-frame display position corresponding to each initial picture.
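The per-frame position setting of step S3031 can be sketched as a mapping from click operations to display positions. This is a minimal illustration; the dictionary representation and the example coordinates are assumptions for explanation only:

```python
# Hypothetical sketch of step S3031: each first sub-editing operation is a
# click on one initial picture; the click coordinates become that picture's
# in-frame display position for the target special effect sticker.

def record_frame_positions(click_ops):
    """click_ops maps an initial-picture id to the click coordinates of the
    corresponding first sub-editing operation (op_01, op_02, op_03)."""
    return dict(click_ops)

def sticker_position_at(frame_positions, picture_id):
    """During playback, the sticker is drawn at the stored per-frame
    position of the picture currently shown."""
    return frame_positions[picture_id]

# Example coordinates standing in for positions o1, o2, o3 in FIG. 14.
ops = {"P1": (40, 60), "P2": (120, 60), "P3": (200, 180)}
positions = record_frame_positions(ops)
```

Because each initial picture stores its own position, the sticker appears to move across frames when the video is played, which is the frame-dimension dynamic movement described below.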
In this step of the embodiment, the terminal device responds to the second editing operation and sets the appearance position of the target special effect sticker in each initial picture, so that in the generated special effect video the target special effect sticker can move dynamically at the granularity of image frames. This prevents the target special effect sticker from blocking key targets in the image (such as the human torso or face) and improves the visual effect of the special effect video.
Exemplarily, the second editing operation includes at least two second sub-editing operations on the initial pictures. Another possible implementation of step S303 includes:
Step S3032: In response to the second sub-editing operations on the initial pictures, set the special effect content of the target special effect sticker in at least one frame of initial picture, so that when the special effect video is played, the target special effect sticker is displayed based on the special effect content corresponding to each initial picture.
Exemplarily, similar to the first sub-editing operation, the second sub-editing operation targets an initial picture and is used to change the special effect content of the target special effect sticker in that initial picture. In a possible implementation, the second sub-editing operation is implemented in the same way as the first editing operation in the previous embodiment, for example, by modifying the first identifier corresponding to the target special effect sticker into the fourth identifier, so that each frame of initial picture corresponds to different special effect content of the target special effect sticker.
FIG. 15 is a schematic diagram of a process for setting the special effect content corresponding to each initial picture provided by an embodiment of the present disclosure. As shown in FIG. 15, the initial pictures used to generate the special effect video include initial picture P1, initial picture P2, and initial picture P3, and the second editing operation includes three second sub-editing operations: operation op_11 on initial picture P1, operation op_12 on initial picture P2, and operation op_13 on initial picture P3. Each second sub-editing operation corresponds to one string input operation. After responding to each second sub-editing operation, the terminal device sets the corresponding special effect content in each initial picture. As shown in the figure, the special effect content of the target special effect sticker in initial picture P1 includes the string "A City", that in initial picture P2 includes the string "A_1 District", and that in initial picture P3 includes the string "A_1_1 Park". When the special effect video, generated by rendering initial pictures P1, P2, and P3 together with the target special effect sticker, is played, the target special effect sticker is displayed based on the special effect content corresponding to each initial picture: the string "A City" is displayed in the video frame corresponding to initial picture P1, the string "A_1 District" in the video frame corresponding to initial picture P2, and the string "A_1_1 Park" in the video frame corresponding to initial picture P3.
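The per-frame content setting of step S3032 can be sketched in the same style. This is a minimal illustration; the dictionary representation and function names are assumptions, with the example strings taken from FIG. 15:

```python
# Hypothetical sketch of step S3032: each second sub-editing operation is a
# string input bound to one initial picture, so playback shows per-frame
# special effect content on the target special effect sticker.

def set_frame_contents(string_ops):
    """string_ops maps an initial-picture id to the string entered by the
    corresponding second sub-editing operation (op_11, op_12, op_13)."""
    return dict(string_ops)

def render_playback(frame_order, frame_contents):
    """Return the sticker text shown on each successive video frame."""
    return [frame_contents[picture_id] for picture_id in frame_order]

ops = {"P1": "A City", "P2": "A_1 District", "P3": "A_1_1 Park"}
contents = set_frame_contents(ops)
timeline = render_playback(["P1", "P2", "P3"], contents)
```

During playback the sticker's text thus changes frame by frame, which is the frame-dimension dynamic content transformation described below.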
In this step of the embodiment, the terminal device responds to the second editing operation and sets the special effect content of the target special effect sticker in each initial picture, so that in the generated special effect video the target special effect sticker can change its content dynamically at the granularity of image frames, improving the richness and display efficiency of the information in the special effect video.
Step S304: Generate a special effect video based on the target special effect sticker.
In this embodiment, the specific implementation of steps S301-S302 and step S304 is consistent with that of steps S101-S103 in the embodiment shown in FIG. 2. For a detailed discussion, please refer to the discussion of steps S101-S103, which will not be repeated here. It should also be noted that the solution provided in this embodiment can be implemented on the basis of the embodiment shown in FIG. 11: after steps S201-S205 in the embodiment shown in FIG. 11 are executed in sequence, steps S303-S304 of this embodiment are executed to further set the display position and/or special effect content of the target special effect sticker, which can be configured according to specific needs and will not be repeated here.
Corresponding to the special effect video generation method of the above embodiments, FIG. 16 is a structural block diagram of the special effect video generation apparatus provided by an embodiment of the present disclosure. For ease of explanation, only the parts related to the embodiments of the present disclosure are shown. Referring to FIG. 16, the special effect video generation apparatus 4 includes:
a display module 41, configured to display an application interface;
a processing module 42, configured to obtain region information in response to a first trigger operation on the special effect component, the region information representing a target area having target area characteristics; and
a generation module 43, configured to display a target special effect sticker in the application interface and generate a special effect video based on the target special effect sticker, wherein the special effect content of the target special effect sticker is determined based on the region information.
In an embodiment of the present disclosure, the processing module 42 is specifically configured to: acquire image data in response to the first trigger operation on the special effect component; and perform feature extraction on the image data to obtain the region information, wherein the region information includes the area location and/or topographic features of the target area.
In an embodiment of the present disclosure, the special effect content of the target special effect sticker includes a first identifier; when displaying the target special effect sticker in the application interface, the generation module 43 is specifically configured to: determine the target area category of the target area according to the region information; generate the first identifier according to the target area category; and display the first identifier in the application interface.
In an embodiment of the present disclosure, the special effect content of the target special effect sticker further includes a second identifier, the second identifier representing the cumulative number of triggers of the target event corresponding to the target area category, the target event including a terminal device generating the corresponding video special effect based on the target special effect sticker; when displaying the target special effect sticker in the application interface, the generation module 43 is further configured to display the second identifier in the application interface.
In an embodiment of the present disclosure, the special effect content of the target special effect sticker further includes a third identifier, the third identifier representing the ranking of the target area category among all area categories based on the cumulative number of triggers; when displaying the target special effect sticker in the application interface, the generation module 43 is further configured to display the third identifier in the application interface.
In an embodiment of the present disclosure, the generation module 43 is further configured to: send a first data request to a server, the first data request including first information representing the target special effect sticker and second information representing the target area category; receive event data returned by the server, the event data including at least the cumulative number of triggers of the target event corresponding to the target area, and/or the ranking of the target area among all areas based on the cumulative number of triggers; and generate the second identifier and/or the third identifier according to the event data.
In an embodiment of the present disclosure, the generation module 43 is further configured to: in response to a first editing operation on the target special effect sticker, set the special effect content of the target special effect sticker to a fourth identifier, the fourth identifier representing a custom area category; and display the fourth identifier in the application interface.
In an embodiment of the present disclosure, the generation module 43 is further configured to: based on the custom area category, display a fifth identifier and/or a sixth identifier in the application interface, wherein the fifth identifier represents the cumulative number of triggers of the target event corresponding to the custom area category, and the sixth identifier represents the ranking of the custom area category among all area categories based on the cumulative number of triggers.
In an embodiment of the present disclosure, the generation module 43 is further configured to: in response to a second editing operation on the target special effect sticker, set the display position of the target special effect sticker in the special effect video, and/or the special effect content of the target special effect sticker in the special effect video.
In an embodiment of the present disclosure, the special effect video is generated based on at least two frames of initial pictures and the target special effect sticker, and the second editing operation includes at least two first sub-editing operations on the initial pictures; when setting the display position of the target special effect sticker in the special effect video in response to the second editing operation on the target special effect sticker, the generation module 43 is specifically configured to: in response to the first sub-editing operations on the initial pictures, set the in-frame display position corresponding to each initial picture, so that when the special effect video is played, the target special effect sticker is displayed at the in-frame display position corresponding to each initial picture.
In an embodiment of the present disclosure, the special effect video is generated based on at least two frames of initial pictures and the target special effect sticker, and the second editing operation includes at least two second sub-editing operations on the initial pictures; the generation module 43 is further configured to: in response to the second sub-editing operations on the initial pictures, set the special effect content of the target special effect sticker in at least one frame of initial picture, so that when the special effect video is played, the target special effect sticker is displayed based on the special effect content corresponding to each initial picture.
In an embodiment of the present disclosure, the generation module 43 is further configured to: obtain pre-generated image data, the image data including a video or at least one frame of picture; and display the content of the image data on a first layer in the application interface; when displaying the target special effect sticker in the application interface, the generation module 43 is further configured to display the target special effect sticker on a second layer of the application interface, wherein the first layer is located below the second layer.
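The layer ordering described above (image data on a first layer, the sticker on a second layer above it) can be sketched with a bottom-to-top draw order. This is a minimal, hypothetical illustration; the list representation stands in for real layer compositing:

```python
# Hypothetical sketch: layers are drawn bottom-to-top, so a layer placed
# later in the stack is rendered over earlier ones where they overlap.
# The first layer (image data) sits below the second layer (sticker).

def composite(layers):
    """layers is ordered bottom-to-top; the returned draw order preserves
    that ordering, so the last entry is visible on top."""
    return list(layers)

stack = composite(["image data (first layer)", "target sticker (second layer)"])
top = stack[-1]  # the sticker layer is rendered above the image data
```

Keeping the image data strictly below the sticker layer guarantees the target special effect sticker remains visible over the displayed picture or video content.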
In an embodiment of the present disclosure, the application interface is a camera viewfinder interface, and the generation module 43 is further configured for at least one of the following: in response to a third trigger operation on the camera viewfinder interface, capturing image data, the image data being used to generate the special effect video in combination with the target special effect sticker; and, during the capturing of the image data, displaying the target special effect sticker in the viewfinder interface.
The display module 41, the processing module 42, and the generation module 43 are connected. The special effect video generation apparatus 4 provided in this embodiment can execute the technical solutions of the above method embodiments; its implementation principle and technical effect are similar and will not be repeated here.
FIG. 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. As shown in FIG. 17, the electronic device 5 includes:
a processor 51, and a memory 52 communicatively connected to the processor 51;
the memory 52 stores computer-executable instructions; and
the processor 51 executes the computer-executable instructions stored in the memory 52 to implement the special effect video generation method in the embodiments shown in FIG. 2 to FIG. 15.
Optionally, the processor 51 and the memory 52 are connected via a bus 53.
The related description can be understood with reference to the corresponding descriptions and effects of the steps in the embodiments corresponding to FIG. 2 to FIG. 15, and will not be elaborated here.
An embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the special effect video generation method provided by any of the embodiments corresponding to FIG. 2 to FIG. 15 of the present disclosure.
An embodiment of the present disclosure provides a computer program product including a computer program which, when executed by a processor, implements the special effect video generation method in the embodiments shown in FIG. 2 to FIG. 15.
Referring to FIG. 18, it shows a schematic structural diagram of an electronic device 900 suitable for implementing the embodiments of the present disclosure; the electronic device 900 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (Portable Android Devices, PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (such as vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 18 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 18, the electronic device 900 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.) 901, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage apparatus 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data required for the operation of the electronic device 900. The processing apparatus 901, the ROM 902, and the RAM 903 are connected to one another via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Typically, the following apparatuses may be connected to the I/O interface 905: an input apparatus 906 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 907 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage apparatus 908 including, for example, a magnetic tape and a hard disk; and a communication apparatus 909. The communication apparatus 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 18 shows the electronic device 900 with various apparatuses, it should be understood that implementing or providing all of the illustrated apparatuses is not required; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 909, installed from the storage device 908, or installed from the ROM 902. When the computer program is executed by the processing device 901, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which may send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The computer-readable medium may be included in the electronic device, or may exist separately without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above embodiments.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partially on the user's computer, as a stand-alone software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, via the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified function or operation, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the first acquisition unit may also be described as "a unit for acquiring at least two Internet Protocol addresses".
The functions described herein above may be performed at least in part by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and the like.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, a special effect video generation method is provided, including:
displaying an application interface; in response to a first trigger operation on a special effect component, obtaining region information, where the region information represents target region features of a target region; displaying a target special effect sticker in the application interface, where special effect content of the target special effect sticker is determined based on the region information; and generating a special effect video based on the target special effect sticker.
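The four steps above can be sketched end to end. This is a minimal illustrative sketch, not the patented implementation: all names (`RegionInfo`, `Sticker`, `on_effect_trigger`, `render_video`) and the string-based "compositing" are hypothetical stand-ins for the real trigger handler and renderer.

```python
from dataclasses import dataclass

@dataclass
class RegionInfo:
    location: str   # e.g. a place name recovered from the image data
    terrain: str    # e.g. "lakeside", "mountain"

@dataclass
class Sticker:
    content: str    # the special effect content shown in the interface

def on_effect_trigger(region: RegionInfo) -> Sticker:
    # The sticker's special effect content is determined by the region information.
    return Sticker(content=f"{region.location} ({region.terrain})")

def render_video(frames: list[str], sticker: Sticker) -> list[str]:
    # Composite the sticker onto every frame to produce the special effect video.
    return [f"{frame}+[{sticker.content}]" for frame in frames]

sticker = on_effect_trigger(RegionInfo("West Lake", "lakeside"))
video = render_video(["f0", "f1"], sticker)
```

The key property of the flow is that the sticker is derived from the region information before any frame is rendered, so every frame of the generated video carries the same region-dependent content.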
According to one or more embodiments of the present disclosure, the obtaining region information in response to the first trigger operation on the special effect component includes: in response to the first trigger operation on the special effect component, acquiring image data; and performing feature extraction on the image data to obtain the region information, where the region information includes a region location and/or topographic features of the target region.
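A toy version of this extraction step, assuming the image data has already been reduced to a list of tags: the keyword matching below stands in for a real vision or geolocation model, which the disclosure does not specify, and the `geo:` prefix convention is invented for illustration.

```python
# Map image-derived tags to region information (location and/or terrain).
def extract_region_info(image_tags: list[str]) -> dict:
    TERRAINS = {"mountain", "coast", "desert", "lakeside"}
    info: dict = {}
    for tag in image_tags:
        if tag.startswith("geo:"):
            info["location"] = tag.removeprefix("geo:")  # region location
        elif tag in TERRAINS:
            info["terrain"] = tag                        # topographic feature
    return info

info = extract_region_info(["geo:Hangzhou", "lakeside", "boat"])
```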
According to one or more embodiments of the present disclosure, the special effect content of the target special effect sticker includes a first identifier; the displaying the target special effect sticker in the application interface includes: determining a target region category of the target region according to the region information; generating the first identifier according to the target region category; and displaying the first identifier in the application interface.
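The two-stage mapping (region information to region category, category to first identifier) can be sketched as below; the category table and the `#`-prefixed identifier format are hypothetical, since the disclosure leaves both unspecified.

```python
# Stage 1: region info -> region category.
CATEGORY_BY_TERRAIN = {"lakeside": "scenic-lake", "mountain": "scenic-mountain"}

def region_category(region_info: dict) -> str:
    return CATEGORY_BY_TERRAIN.get(region_info.get("terrain", ""), "general")

# Stage 2: region category -> first identifier shown in the interface.
def first_identifier(category: str) -> str:
    return f"#{category}"

cat = region_category({"location": "Hangzhou", "terrain": "lakeside"})
ident = first_identifier(cat)
```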
According to one or more embodiments of the present disclosure, the special effect content of the target special effect sticker further includes a second identifier, the second identifier representing a cumulative trigger count of a target event corresponding to the target region category, the target event including a terminal device generating a corresponding video special effect based on the target special effect sticker; and the displaying the target special effect sticker in the application interface further includes: displaying the second identifier in the application interface.
According to one or more embodiments of the present disclosure, the special effect content of the target special effect sticker further includes a third identifier, the third identifier representing a ranking of the target region category among all region categories based on the cumulative trigger count; and the displaying the target special effect sticker in the application interface further includes: displaying the third identifier in the application interface.
According to one or more embodiments of the present disclosure, the method further includes: sending a first data request to a server, the first data request including first information representing the target special effect sticker and second information representing the target region category; receiving event data returned by the server, the event data including at least a cumulative trigger count of the target event corresponding to the target region, and/or a ranking of the target region among all regions based on the cumulative trigger count; and generating the second identifier and/or the third identifier according to the event data.
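The client/server exchange can be sketched with an in-memory stand-in for the server. Everything here is illustrative: the count table, the request handler's shape, and the identifier wording are assumptions, since the disclosure only specifies what the request and the event data must carry.

```python
# Hypothetical server-side counts of target events per region category.
EVENT_COUNTS = {"scenic-lake": 1024, "scenic-mountain": 87}

def handle_first_data_request(sticker_id: str, category: str) -> dict:
    # The request carries sticker info (first information) and the region
    # category (second information); the response is the event data.
    counts = sorted(EVENT_COUNTS.values(), reverse=True)
    total = EVENT_COUNTS.get(category, 0)
    return {"cumulative": total, "rank": counts.index(total) + 1}

def build_identifiers(event_data: dict) -> tuple[str, str]:
    # Second identifier: cumulative trigger count; third identifier: ranking.
    second = f"{event_data['cumulative']} effects generated here"
    third = f"No.{event_data['rank']} of all regions"
    return second, third

data = handle_first_data_request("sticker-01", "scenic-lake")
second_id, third_id = build_identifiers(data)
```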
According to one or more embodiments of the present disclosure, the method further includes: in response to a first editing operation on the target special effect sticker, setting the special effect content of the target special effect sticker to a fourth identifier, where the fourth identifier represents a custom region category; and displaying the fourth identifier in the application interface.
According to one or more embodiments of the present disclosure, the method further includes: based on the custom region category, displaying a fifth identifier and/or a sixth identifier in the application interface, where the fifth identifier represents a cumulative trigger count of the target event corresponding to the custom region category, and the sixth identifier represents a ranking of the custom region category among all region categories based on the cumulative trigger count.
According to one or more embodiments of the present disclosure, the method further includes: in response to a second editing operation on the target special effect sticker, setting a display position of the target special effect sticker in the special effect video, and/or the special effect content of the target special effect sticker in the special effect video.
According to one or more embodiments of the present disclosure, the special effect video is generated based on at least two frames of initial pictures and the target special effect sticker, and the second editing operation includes at least two first sub-editing operations on the initial pictures; the setting, in response to the second editing operation on the target special effect sticker, the display position of the target special effect sticker in the special effect video includes: in response to the first sub-editing operations on the initial pictures, setting an in-frame display position corresponding to each initial picture, so that when the special effect video is played, the target special effect sticker is displayed at the in-frame display position corresponding to each initial picture.
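The per-frame positioning described above can be sketched as follows; the `(x, y)` coordinate representation and the default position for untouched frames are assumptions made for illustration.

```python
# Each first sub-editing operation targets one frame index and assigns the
# sticker an in-frame display position; untouched frames keep a default.
def set_frame_positions(
    frames: list[str],
    edits: dict[int, tuple[int, int]],
    default: tuple[int, int] = (0, 0),
) -> list[tuple[str, tuple[int, int]]]:
    return [(frame, edits.get(i, default)) for i, frame in enumerate(frames)]

placed = set_frame_positions(["f0", "f1", "f2"], {0: (10, 20), 2: (30, 40)})
```

When the video is played back, the sticker is drawn at the stored position of whichever frame is current, so per-frame edits let it move across the picture.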
According to one or more embodiments of the present disclosure, the special effect video is generated based on at least two frames of initial pictures and the target special effect sticker, the second editing operation includes at least two second sub-editing operations on the initial pictures, and the method further includes: in response to the second sub-editing operations on the initial pictures, setting the special effect content of the target special effect sticker in at least one frame of the initial pictures, so that when the special effect video is played, the target special effect sticker is displayed based on the special effect content corresponding to each initial picture.
According to one or more embodiments of the present disclosure, the method further includes: obtaining pre-generated image data, the image data including a video or at least one frame of a picture; and displaying the content of the image data on a first layer within the application interface; and the displaying the target special effect sticker in the application interface includes: displaying the target special effect sticker on a second layer of the application interface, where the first layer is located below the second layer.
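The layer ordering in this embodiment reduces to a painter's-algorithm rule: later layers are drawn over earlier ones, so placing the image data on the first layer and the sticker on the second keeps the sticker visible on top. The string-based "compositing" below is purely illustrative.

```python
# Painter's-algorithm sketch: the last layer in the stack is drawn on top.
def composite(layers: list[str]) -> str:
    return " over ".join(reversed(layers))

stack = composite(["image-data (layer 1)", "sticker (layer 2)"])
```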
According to one or more embodiments of the present disclosure, the application interface is a camera viewfinder interface, and the method further includes at least one of the following: in response to a third trigger operation on the camera viewfinder interface, capturing image data, the image data being used to generate the special effect video in combination with the target special effect sticker; and during the capture of the image data, displaying the target special effect sticker within the viewfinder interface.
In a second aspect, according to one or more embodiments of the present disclosure, a special effect video generation apparatus is provided, including:
a display module, configured to display an application interface; a processing module, configured to obtain region information in response to a first trigger operation on a special effect component, where the region information represents a target region having target region features; and a generation module, configured to display a target special effect sticker in the application interface and generate a special effect video based on the target special effect sticker, where special effect content of the target special effect sticker is determined based on the region information.
According to one or more embodiments of the present disclosure, the processing module is specifically configured to: acquire image data in response to the first trigger operation on the special effect component; and perform feature extraction on the image data to obtain the region information, where the region information includes a region location and/or topographic features of the target region.
According to one or more embodiments of the present disclosure, the special effect content of the target special effect sticker includes a first identifier; when displaying the target special effect sticker in the application interface, the generation module is specifically configured to: determine a target region category of the target region according to the region information; generate the first identifier according to the target region category; and display the first identifier in the application interface.
According to one or more embodiments of the present disclosure, the special effect content of the target special effect sticker further includes a second identifier, the second identifier representing a cumulative trigger count of a target event corresponding to the target region category, the target event including the terminal device generating a corresponding video special effect based on the target special effect sticker; when displaying the target special effect sticker in the application interface, the generation module is further configured to: display the second identifier in the application interface.
According to one or more embodiments of the present disclosure, the special effect content of the target special effect sticker further includes a third identifier, the third identifier representing a ranking of the target region category among all region categories based on the cumulative trigger count; when displaying the target special effect sticker in the application interface, the generation module is further configured to: display the third identifier in the application interface.
According to one or more embodiments of the present disclosure, the generation module is further configured to: send a first data request to a server, the first data request including first information representing the target special effect sticker and second information representing the target region category; receive event data returned by the server, the event data including at least a cumulative trigger count of the target event corresponding to the target region, and/or a ranking of the target region among all regions based on the cumulative trigger count; and generate the second identifier and/or the third identifier according to the event data.
According to one or more embodiments of the present disclosure, the generation module is further configured to: in response to a first editing operation on the target special effect sticker, set the special effect content of the target special effect sticker to a fourth identifier, where the fourth identifier represents a custom region category; and display the fourth identifier in the application interface.
According to one or more embodiments of the present disclosure, the generation module is further configured to: based on the custom region category, display a fifth identifier and/or a sixth identifier in the application interface, where the fifth identifier represents a cumulative trigger count of the target event corresponding to the custom region category, and the sixth identifier represents a ranking of the custom region category among all region categories based on the cumulative trigger count.
According to one or more embodiments of the present disclosure, the generation module is further configured to: in response to a second editing operation on the target special effect sticker, set a display position of the target special effect sticker in the special effect video, and/or the special effect content of the target special effect sticker in the special effect video.
According to one or more embodiments of the present disclosure, the special effect video is generated based on at least two frames of initial pictures and the target special effect sticker, and the second editing operation includes at least two first sub-editing operations on the initial pictures; when setting, in response to the second editing operation on the target special effect sticker, the display position of the target special effect sticker in the special effect video, the generation module is specifically configured to: in response to the first sub-editing operations on the initial pictures, set an in-frame display position corresponding to each initial picture, so that when the special effect video is played, the target special effect sticker is displayed at the in-frame display position corresponding to each initial picture.
According to one or more embodiments of the present disclosure, the special effect video is generated based on at least two frames of initial pictures and the target special effect sticker, the second editing operation includes at least two second sub-editing operations on the initial pictures, and the generation module is further configured to: in response to the second sub-editing operations on the initial pictures, set the special effect content of the target special effect sticker in at least one frame of the initial pictures, so that when the special effect video is played, the target special effect sticker is displayed based on the special effect content corresponding to each initial picture.
According to one or more embodiments of the present disclosure, the generation module is further configured to: obtain pre-generated image data, the image data including a video or at least one frame of a picture; and display the content of the image data on a first layer within the application interface; when displaying the target special effect sticker in the application interface, the generation module is further configured to: display the target special effect sticker on a second layer of the application interface, where the first layer is located below the second layer.
According to one or more embodiments of the present disclosure, the application interface is a camera viewfinder interface, and the generation module is further configured for at least one of the following: in response to a third trigger operation on the camera viewfinder interface, capturing image data, the image data being used to generate the special effect video in combination with the target special effect sticker; and during the capture of the image data, displaying the target special effect sticker within the viewfinder interface.
In a third aspect, according to one or more embodiments of the present disclosure, an electronic device is provided, including: a processor, and a memory communicatively connected to the processor;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory to implement the special effect video generation method described in the first aspect and the various possible designs of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, a computer-readable storage medium is provided, the computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the special effect video generation method described in the first aspect and the various possible designs of the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product, including a computer program which, when executed by a processor, implements the special effect video generation method described in the first aspect and the various possible designs of the first aspect.
The above description is only of preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by a specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.
In addition, although the operations are described in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be interpreted as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments individually or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.
Although the subject matter has been described in language specific to structural features and/or methodological logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. On the contrary, the specific features and actions described above are merely example forms of implementing the claims.
Claims (18)
- A special effect video generation method, comprising: displaying an application interface; in response to a first trigger operation on a special effect component, obtaining region information, wherein the region information represents target region features of a target region; displaying a target special effect sticker in the application interface, wherein special effect content of the target special effect sticker is determined based on the region information; and generating a special effect video based on the target special effect sticker.
- The method according to claim 1, wherein the obtaining region information in response to the first trigger operation on the special effect component comprises: in response to the first trigger operation on the special effect component, acquiring image data; and performing feature extraction on the image data to obtain the region information, wherein the region information comprises at least one of the following: a region location of the target region, topographic features of the target region, and target object information within the target region.
- The method according to claim 1 or 2, wherein the special effect content of the target special effect sticker comprises a first identifier; and the displaying of the target special effect sticker in the application interface comprises: determining a target area category of the target region according to the region information; generating the first identifier according to the target area category; and displaying the first identifier in the application interface.
- The method according to claim 3, wherein the special effect content of the target special effect sticker further comprises a second identifier, the second identifier representing a cumulative number of triggers of a target event corresponding to the target area category, the target event comprising a terminal device generating a corresponding video special effect based on the target special effect sticker; and the displaying of the target special effect sticker in the application interface further comprises: displaying the second identifier in the application interface.
- The method according to claim 4, wherein the special effect content of the target special effect sticker further comprises a third identifier, the third identifier representing a ranking of the target area category among all area categories based on the cumulative number of triggers; and the displaying of the target special effect sticker in the application interface further comprises: displaying the third identifier in the application interface.
- The method according to any one of claims 3 to 5, further comprising: sending a first data request to a server, the first data request including first information representing the target special effect sticker and second information representing the target area category; receiving event data returned by the server, the event data including at least the cumulative number of triggers of the target event corresponding to the target area category and/or the ranking of the target area category among all area categories based on the cumulative number of triggers; and generating the second identifier and/or the third identifier according to the event data.
- The method according to any one of claims 1 to 6, further comprising: in response to a first editing operation on the target special effect sticker, setting the special effect content of the target special effect sticker to a fourth identifier, wherein the fourth identifier represents a custom area category; and displaying the fourth identifier in the application interface.
- The method according to claim 7, further comprising: displaying, based on the custom area category, a fifth identifier and/or a sixth identifier in the application interface; wherein the fifth identifier represents a cumulative number of triggers of the target event corresponding to the custom area category, and the sixth identifier represents a ranking of the custom area category among all area categories based on the cumulative number of triggers.
- The method according to any one of claims 1 to 8, further comprising: in response to a second editing operation on the target special effect sticker, setting a display position of the target special effect sticker in the special effect video and/or the special effect content of the target special effect sticker in the special effect video.
- The method according to claim 9, wherein the special effect video is generated based on at least two frames of initial pictures and the target special effect sticker, and the second editing operation comprises at least two first sub-editing operations on the initial pictures; and the setting of the display position of the target special effect sticker in the special effect video in response to the second editing operation on the target special effect sticker comprises: in response to the first sub-editing operations on the initial pictures, setting a display position in the frame corresponding to each of the initial pictures, so that when the special effect video is played, the target special effect sticker is displayed at the display position in the frame corresponding to each of the initial pictures.
- The method according to claim 9 or 10, wherein the special effect video is generated based on at least two frames of initial pictures and the target special effect sticker, and the second editing operation comprises at least two second sub-editing operations on the initial pictures; and the method further comprises: in response to the second sub-editing operations on the initial pictures, setting the special effect content of the target special effect sticker in at least one frame of the initial pictures, so that when the special effect video is played, the target special effect sticker is displayed based on the special effect content corresponding to each of the initial pictures.
- The method according to any one of claims 1 to 11, further comprising: acquiring pre-generated image data, the image data including a video or at least one frame of a picture; and displaying content of the image data in a first layer within the application interface; wherein the displaying of the target special effect sticker in the application interface comprises: displaying the target special effect sticker in a second layer of the application interface, the first layer being located below the second layer.
- The method according to any one of claims 1 to 12, wherein the application interface is a camera viewfinder interface, and the method further comprises at least one of the following: in response to a third trigger operation on the camera viewfinder interface, capturing image data, the image data being used to generate the special effect video in combination with the target special effect sticker; and during the process of capturing the image data, displaying the target special effect sticker in the viewfinder interface.
- A special effect video generation apparatus, comprising: a display module, configured to display an application interface; a processing module, configured to obtain region information in response to a first trigger operation on a special effect component, the region information representing a target region having a target region feature; and a generation module, configured to display a target special effect sticker in the application interface and generate a special effect video based on the target special effect sticker, wherein special effect content of the target special effect sticker is determined based on the region information.
- An electronic device, comprising: a processor, and a memory communicatively connected to the processor; wherein the memory stores computer-executable instructions, and the processor executes the computer-executable instructions stored in the memory to implement the special effect video generation method according to any one of claims 1 to 13.
- A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the special effect video generation method according to any one of claims 1 to 13.
- A computer program product, comprising a computer program which, when executed by a processor, implements the special effect video generation method according to any one of claims 1 to 13.
- A computer program which, when executed by a processor, implements the special effect video generation method according to any one of claims 1 to 13.
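For orientation only, and forming no part of the claims, the flow of claims 1, 4 and 12 (obtain region information, derive sticker identifiers from a category plus server event data, then composite the sticker layer over the image layer) can be sketched in Python. Every class, function, and field name below is hypothetical and illustrative; none corresponds to a real API of the claimed apparatus.

```python
# Illustrative sketch only: all names are hypothetical and chosen for this
# example; they are not part of the claimed subject matter or any real API.
from dataclasses import dataclass, field

@dataclass
class RegionInfo:
    location: str                                  # area location of the target region
    terrain: str                                   # geomorphic features
    objects: list = field(default_factory=list)    # target objects in the region

@dataclass
class EffectSticker:
    first_id: str       # identifier derived from the target area category (claim 3)
    second_id: int = 0  # cumulative trigger count for the category (claim 4)
    third_id: int = 0   # ranking of the category by cumulative triggers (claim 5)

def classify_region(info: RegionInfo) -> str:
    """Map extracted region features to a target area category."""
    return f"{info.terrain}:{info.location}"

def build_sticker(info: RegionInfo, event_data: dict) -> EffectSticker:
    """Determine sticker content from region info plus server event data (claim 6)."""
    category = classify_region(info)
    return EffectSticker(
        first_id=category,
        second_id=event_data.get("trigger_count", 0),
        third_id=event_data.get("rank", 0),
    )

def generate_effect_video(frames: list, sticker: EffectSticker) -> list:
    """Composite the sticker (second layer) over each frame (first layer, claim 12)."""
    return [(frame, sticker) for frame in frames]

info = RegionInfo(location="lakeside", terrain="mountain", objects=["boat"])
sticker = build_sticker(info, {"trigger_count": 42, "rank": 3})
video = generate_effect_video(["frame0", "frame1"], sticker)
```

In this sketch the `event_data` dict stands in for the event data that the server returns in claim 6; a real implementation would fetch it with the first data request described there.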
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310485260.1 | 2023-04-28 | ||
CN202310485260.1A CN118869901A (en) | 2023-04-28 | 2023-04-28 | Special effect video generation method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024222447A1 (en) | 2024-10-31 |
Family
ID=93173593
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2024/086757 WO2024222447A1 (en) | 2023-04-28 | 2024-04-09 | Special effect video generation method and apparatus, electronic device, and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN118869901A (en) |
WO (1) | WO2024222447A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007013787A (en) * | 2005-07-01 | 2007-01-18 | Matsushita Electric Ind Co Ltd | Image special effect device and image special effect system |
CN108174099A (en) * | 2017-12-29 | 2018-06-15 | 光锐恒宇(北京)科技有限公司 | Method for displaying image, device and computer readable storage medium |
CN110650304A (en) * | 2019-10-23 | 2020-01-03 | 维沃移动通信有限公司 | Video generation method and electronic equipment |
CN111770298A (en) * | 2020-07-20 | 2020-10-13 | 珠海市魅族科技有限公司 | Video call method and device, electronic equipment and storage medium |
CN112422844A (en) * | 2020-09-23 | 2021-02-26 | 上海哔哩哔哩科技有限公司 | Method, device and equipment for adding special effect in video and readable storage medium |
CN115841583A (en) * | 2021-09-18 | 2023-03-24 | 北京字跳网络技术有限公司 | Special effect testing method, device, equipment and storage medium |
-
2023
- 2023-04-28 CN CN202310485260.1A patent/CN118869901A/en active Pending
-
2024
- 2024-04-09 WO PCT/CN2024/086757 patent/WO2024222447A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
CN118869901A (en) | 2024-10-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113190314B (en) | Interactive content generation method and device, storage medium and electronic equipment | |
WO2022007724A1 (en) | Video processing method and apparatus, and device and storage medium | |
US12112772B2 (en) | Method and apparatus for video production, device and storage medium | |
CN110070496B (en) | Method and device for generating image special effect and hardware device | |
CN110070593B (en) | Method, device, equipment and medium for displaying picture preview information | |
WO2021197024A1 (en) | Video effect configuration file generation method, and video rendering method and device | |
JP2023540753A (en) | Video processing methods, terminal equipment and storage media | |
US12019669B2 (en) | Method, apparatus, device, readable storage medium and product for media content processing | |
US20240040069A1 (en) | Image special effect configuration method, image recognition method, apparatus and electronic device | |
CN110070592B (en) | Generation method and device of special effect package and hardware device | |
CN112000267A (en) | Information display method, device, equipment and storage medium | |
WO2023088006A1 (en) | Cloud game interaction method and apparatus, readable medium, and electronic device | |
CN113365010B (en) | Volume adjusting method, device, equipment and storage medium | |
WO2024131621A1 (en) | Special effect generation method and apparatus, electronic device, and storage medium | |
WO2024046360A1 (en) | Media content processing method and apparatus, device, readable storage medium, and product | |
WO2024222447A1 (en) | Special effect video generation method and apparatus, electronic device, and storage medium | |
CN115309317B (en) | Media content acquisition method, apparatus, device, readable storage medium and product | |
CN115033136A (en) | Material display method, device, equipment, computer readable storage medium and product | |
CN117082292A (en) | Video generation method, apparatus, device, storage medium, and program product | |
CN111199519B (en) | Method and device for generating special effect package | |
WO2024078409A1 (en) | Image preview method and apparatus, and electronic device and storage medium | |
GB2600341A (en) | Image special effect processing method and apparatus, electronic device and computer-readable storage medium | |
US20230215065A1 (en) | Method, apparatus, device and medium for image special effect processing | |
CN111290995A (en) | Resource management method and device | |
WO2024123244A1 (en) | Text video generation method and apparatus, electronic device and storage medium |