

CN116450861A - Creative data generation method, delivery method, system, server and storage medium

Info

Publication number
CN116450861A
CN116450861A
Authority
CN
China
Prior art keywords: data, mode, creative, material data, voice
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202310336452.6A
Other languages: Chinese (zh)
Inventors: 陈佳榕 (Chen Jiarong), 牛也 (Niu Ye)
Current Assignee: Alibaba China Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd
Priority application: CN202310336452.6A
Publication: CN116450861A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/483 Retrieval characterised by using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/41 Indexing; Data structures therefor; Storage structures
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application provide a creative data generation method, a delivery method, a system, a server, and a storage medium. The method includes: acquiring multi-mode material data, where the multi-mode material data includes material data of at least two of the following modes: image, video, voice, and text; for the material data of any mode, preprocessing the material data and determining a material element set of the material data; for the material data of any mode, processing the material elements in the material element set of the material data to obtain a processed material element set, which forms the processed material data, where the processed material element set is different from the material element set before processing; and obtaining creative data according to the processed multi-mode material data. With the technical solution provided by the embodiments of the present application, the material elements of multi-mode material data can be adjusted automatically and creative data can be generated from them, which improves the generation efficiency of creative data and enables personalized delivery of creative data.

Description

Creative data generation method, delivery method, system, server and storage medium
Technical Field
Embodiments of the present application relate to the technical field of data processing, and in particular to a creative data generation method, a delivery method, a system, a server, and a storage medium.
Background
Creative data can be used by creative data publishers to promote objects such as goods and services; creative data can be understood as scripted content that is disseminated mainly with text, sound, images, video, and the like as carriers. Against this background, how to provide a technical solution that improves the generation efficiency of creative data has become a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of this, the embodiments of the present application provide a creative data generation method, a delivery method, a system, a server and a storage medium, so as to improve the generation efficiency of creative data.
In order to achieve the above purpose, the embodiments of the present application provide the following technical solutions.
In a first aspect, an embodiment of the present application provides a creative data generation method, including:
acquiring multi-mode material data, where the multi-mode material data includes material data of at least two of the following modes: image, video, voice, and text;
for the material data of any mode, preprocessing the material data and determining a material element set of the material data;
for the material data of any mode, processing the material elements in the material element set of the material data to obtain a processed material element set, which forms the processed material data, where the processed material element set is different from the material element set before processing;
and obtaining creative data according to the processed multi-mode material data.
In a second aspect, an embodiment of the present application provides a creative data delivery method, including:
acquiring creative data generated by the creative data generation method of the first aspect;
scoring the creative data with a pre-trained scoring model to obtain the score of the creative data;
and determining the creative data to be delivered according to the scores of the creative data.
In a third aspect, an embodiment of the present application provides a creative data delivery system, including: the system comprises a client, a creative data generation platform and a creative data delivery platform;
the client displays a creative data material input interface; the creative data material input interface is used by creative data publishers to input multi-mode material data; the client sends the multi-mode material data to the creative data generation platform;
The creative data generation platform is configured to perform the creative data generation method as described in the first aspect above;
the creative data delivery platform is configured to perform the creative data delivery method as set forth in the second aspect above;
the creative data generation platform and the creative data delivery platform are integrated or deployed independently.
In a fourth aspect, an embodiment of the present application provides a server, including a memory, and a processor, where the memory stores a program, and the processor invokes the program stored in the memory to implement the creative data generation method according to the first aspect, or implement the creative data delivery method according to the second aspect.
In a fifth aspect, an embodiment of the present application provides a storage medium storing a computer program, where the computer program implements the creative data generation method according to the first aspect or the creative data delivery method according to the second aspect when executed.
In a sixth aspect, an embodiment of the present application provides a computer program, which when executed implements the creative data generation method according to the first aspect or the creative data delivery method according to the second aspect.
The creative data generation method provided by the embodiments of the present application supports adjusting the material elements in material data, so that new creative data is generated by combining the multi-mode material data whose material elements have been adjusted. On this basis, the embodiments of the present application can acquire multi-mode material data; then, for the material data of any mode, preprocess the material data and determine a material element set of the material data; and, for the material data of any mode, process the material elements in the material element set to obtain a processed material element set, which forms the processed material data. The processed material element set differs from the set before processing, that is, the material elements in the processed material data have been adjusted, so the embodiments of the present application can generate new creative data from the processed multi-mode material data with adjusted material elements, realizing automatic generation of new creative data.
As can be seen, the technical solution provided by the embodiments of the present application can adjust the material elements of the material data of each mode in the multi-mode material data, automatically generate new creative data by combining the multi-mode material data with the adjusted material elements, and thereby rapidly obtain different creative data. Because the material data used to generate the creative data is multi-modal, once its material elements have been adjusted, creative data combining different multi-mode materials can be obtained from combinations of material data of different modes, so the material modes of the generated creative data are diversified; in addition, by changing the modes of the material data used to generate the creative data, creative data combining materials of different modes can be generated flexibly. The solution provided by the embodiments of the present application therefore improves the generation efficiency of creative data.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a creative data generation method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a framework of a creative data generation device according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of another method for generating creative data according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of another creative data generation method according to an embodiment of the present application.
Fig. 5 is another flow chart of the creative data generation method provided in the embodiment of the application.
Fig. 6 is a flow chart of a creative data delivery method according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a framework of a creative data generation system provided in an embodiment of the present application.
Fig. 8 is a schematic diagram of a server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the protection scope of the present disclosure.
In order to increase the exposure of objects such as goods and services, the providers of those objects can act as creative data publishers; creative data publishers deliver creative data for objects such as goods and services on a creative data delivery platform. For example, the provider of a commodity can act as an advertiser and place advertisements for the commodity on an advertisement delivery platform to increase the commodity's exposure.
A creative data publisher (e.g., an advertiser) can design creative data together with a creative data designer (e.g., an advertisement designer) to obtain creative data for promotion: the publisher provides materials to the designer, the designer designs based on those materials, and the two continuously coordinate over the publisher's design requirements until creative data that satisfies the publisher and can be delivered is finally obtained. Designing creative data in this way increases the time and labor invested in manual design.
Based on the above, embodiments of the present application provide a creative data generation method that adjusts the material elements of the provided multi-mode material data, so that new creative data is automatically generated by combining the multi-mode material data with the adjusted material elements, improving the generation efficiency of creative data.
Referring to fig. 1, fig. 1 is a flow chart illustrating a creative data generation method according to an embodiment of the present application. The method flow can be applied to a server, and the server can be a server corresponding to the creative data generation platform.
As shown, the process may include the following steps.
Step S100, multi-mode material data is obtained, where the multi-mode material data includes material data of at least two of the following modes: image, video, voice, and text.
Optionally, the creative data publisher may provide the multi-mode material data, or the creative data publisher may select the multi-mode material data from a creative data material library on the server.
The multi-mode material data may include material data of any single mode among the image, text, video, and voice modes; combinations of modes can also be used to generate creative data, for example: image-mode material data combined with text-mode material data; video-mode material data combined with voice-mode material data; video-mode, text-mode, and voice-mode material data; image-mode, text-mode, and video-mode material data; or image-mode, video-mode, voice-mode, and text-mode material data together.
In an optional implementation, when the material data provided by the creative data publisher is a combination of multiple modes, embodiments of the present application can use a feature extraction model (such as a multi-modal extraction model) to identify and extract the modes of the combined material data when acquiring it, obtaining individual pieces of material data classified by mode type and thereby facilitating subsequent processing. For example, when the acquired multi-mode material data is a combination of image-mode and text-mode material data, the multi-modal extraction model can identify and extract according to the text features and image features contained in the data, yielding the image-mode material data and the text-mode material data separately. Here, multi-modal means that the data has multiple forms of existence or information sources; each form of existence or information source can be called a mode, and data composed of two or more modes is called multi-modal data (such as multi-mode material data combining text, image, voice, and video).
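For concreteness, the mode identification and splitting step can be sketched as follows. This is a minimal stand-in, not the patent's multi-modal extraction model: it routes material files by extension instead of by learned text/image features, and the mapping and function names are illustrative assumptions.

```python
# Minimal sketch: split combined material into single-mode buckets.
# Stand-in for the multi-modal extraction model described above; routes
# items by file extension rather than by learned features (assumption).
from collections import defaultdict
from pathlib import Path

MODE_BY_EXT = {  # assumed mapping, not from the patent
    ".jpg": "image", ".png": "image",
    ".mp4": "video", ".wav": "voice", ".mp3": "voice",
    ".txt": "text",
}

def split_by_mode(material_paths):
    """Group mixed material files into single-mode buckets."""
    buckets = defaultdict(list)
    for p in material_paths:
        mode = MODE_BY_EXT.get(Path(p).suffix.lower(), "unknown")
        buckets[mode].append(p)
    return dict(buckets)

print(split_by_mode(["ad.png", "jingle.wav", "copy.txt"]))
# {'image': ['ad.png'], 'voice': ['jingle.wav'], 'text': ['copy.txt']}
```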
When text features are selected, a suitable text feature item should be able to identify the text content reliably, should distinguish the target text from other texts, should not be too numerous, and should be relatively easy to separate. For example, in Chinese text a character, word, or phrase representing the text can be used as a feature item, i.e., as a text feature.
The image features may include geometric features of the image-mode material data (e.g., position and orientation, perimeter, area, major and minor axes, and distances, where distance may include Euclidean, Manhattan, Chebyshev, and cosine distance); shape features (geometric analysis: squareness, circularity, invariant moments, eccentricity, polygon descriptions, curve descriptions); amplitude features (moments, projections); histogram features (statistical features such as mean, variance, energy, and entropy); and color features (color histogram, color moments). Any of these kinds of features can be selected as the image features.
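A few of the features named above can be computed directly. The sketch below is a hedged example using NumPy: it computes histogram statistics (mean, variance, entropy) and a coarse per-channel color histogram for an H x W x 3 image array; geometric and shape descriptors would require segmentation and are omitted.

```python
import numpy as np

def image_features(img):
    """Toy extraction of a few image features named above: histogram
    statistics (mean, variance, entropy) and a color histogram.
    `img` is an H x W x 3 uint8 array."""
    gray = img.mean(axis=2)
    hist, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
    nonzero = hist[hist > 0]
    entropy = float(-np.sum(nonzero * np.log2(nonzero)))
    color_hist = np.concatenate(
        [np.histogram(img[..., c], bins=8, range=(0, 256))[0] for c in range(3)]
    )
    return {"mean": float(gray.mean()), "variance": float(gray.var()),
            "entropy": entropy, "color_hist": color_hist}

rng = np.random.default_rng(0)
feats = image_features(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))
print(feats["mean"], feats["entropy"])
```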
Of course, in other embodiments, when the acquired multi-mode material data is a combination of video-mode and voice-mode material data, the multi-modal extraction model can identify and extract according to the video features and voice features contained in the data, yielding the video-mode material data and the voice-mode material data.
For the material data of the voice mode, voice features can be extracted by taking as voice features the characteristic parameters that represent the nature of the voice-mode material data. The voice signal contained in the voice-mode material data can be analyzed in the time domain, the frequency domain, and the cepstral (inverse frequency) domain, and the characteristic parameters acquired in these domains constitute the voice features.
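As a hedged illustration of the time- and frequency-domain parameters mentioned above, the sketch below computes energy and zero-crossing rate (time domain) and the spectral centroid (frequency domain) with NumPy; cepstral features such as MFCCs would cover the inverse-frequency domain and are omitted here.

```python
import numpy as np

def speech_features(signal, sr=16000):
    """Toy voice features: energy and zero-crossing rate (time domain)
    plus spectral centroid (frequency domain)."""
    energy = float(np.mean(signal ** 2))                        # time domain
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal)))) / 2)  # time domain
    spectrum = np.abs(np.fft.rfft(signal))                      # frequency domain
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = float((freqs * spectrum).sum() / (spectrum.sum() + 1e-12))
    return {"energy": energy, "zcr": zcr, "spectral_centroid": centroid}

t = np.linspace(0, 1, 16000, endpoint=False)
print(speech_features(np.sin(2 * np.pi * 220 * t)))
```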
Video feature extraction for the material data of the video mode can be divided into extracting features of the sound and extracting features of the images (key-frame extraction).
Of course, when the material data provided by the creative data publisher is material data of a single mode, that single-mode material data can be passed directly to the subsequent processing.
Step S101, for the material data of any mode, the material data is preprocessed and a material element set of the material data is determined.
In an optional implementation, for the material data of any mode, embodiments of the present application may extract the constituent elements of the material data (called material elements) and use the collection of these material elements as the material element set of the material data, thereby preprocessing the material data.
After mode identification of the multi-mode material data yields the material data of each mode, the material data of each mode is preprocessed to facilitate subsequent processing: it is split to obtain the material element sets that constitute the material data of each mode.
In one example, the material elements of text-mode material data are text vectors, and embodiments of the present application can preprocess the text-mode material data to obtain a text vector set composed of text vectors. The material elements of image-mode material data are image elements, and preprocessing yields an image element set composed of image elements. The material elements of voice-mode material data are voice segments, and preprocessing yields a voice segment set composed of voice segments. The material elements of video-mode material data are video segments, and preprocessing yields a video segment set composed of video segments.
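One plausible shape for these per-mode element sets is sketched below; the `ElementSet` container and the toy word-level text split are assumptions for illustration, since the patent does not fix a concrete data structure.

```python
from dataclasses import dataclass, field

@dataclass
class ElementSet:
    """Per-mode material element set produced by preprocessing.
    Elements follow the example above: text vectors, image elements,
    voice segments, or video segments."""
    mode: str                  # "text" | "image" | "voice" | "video"
    elements: list = field(default_factory=list)
    weights: list = field(default_factory=list)  # importance weights (see step S102 below)

def preprocess_text(text):
    """Toy text preprocessing: one element per word (a real system
    would produce text vectors from an encoder)."""
    tokens = text.split()
    return ElementSet("text", tokens, [1.0] * len(tokens))

print(preprocess_text("limited time offer"))
```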
Preprocessing the material data of each mode into a material element set allows subsequent processing to operate on the individual material elements in the set.
Step S102, for the material data of any mode, the material elements in the material element set of the material data are processed to obtain a processed material element set, which forms the processed material data; the processed material element set is different from the material element set before processing.
In the foregoing step, the material data of each mode has been preprocessed into a material element set. The material element set of each mode can therefore be processed further: some or all of the material elements in the set are adjusted so that the processed set differs noticeably from the set before processing, changing the expressiveness of the creative data. For example, after the text-mode material data is preprocessed into a text vector set, one of the text vectors can be multiplied by a factor, so that the text content corresponding to that vector is amplified within the overall text-mode material data and thereby highlighted.
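The multiplication example above can be sketched as follows; the factor of 2 and the function name are illustrative assumptions.

```python
import numpy as np

def amplify_element(vectors, index, factor=2.0):
    """Multiply one text vector so that the content it encodes is
    emphasised when the description is regenerated (toy version of
    the multiplication example above)."""
    out = vectors.copy()
    out[index] = out[index] * factor
    return out

vecs = np.ones((4, 3))            # 4 text vectors of dimension 3
print(amplify_element(vecs, 2))   # the third vector is doubled
```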
In one embodiment, the material elements may be processed according to their importance in the overall material data, where the importance can be represented by weight values assigned to the material elements in the material element set.
Step S103, obtaining creative data according to the processed multi-mode material data.
After the material element set of each mode's material data has been processed, the processed material element set differs noticeably from the set before processing for the material data of any mode, so the processed material data differs noticeably from the material data before processing. Therefore, by combining the processed multi-mode material data, embodiments of the present application can generate new creative data whose content has a new presentation effect, expressiveness, and appeal.
As can be seen, the technical solution provided by the embodiments of the present application can adjust the material elements of the material data of each mode in the multi-mode material data, automatically generate new creative data by combining the multi-mode material data with the adjusted material elements, and thereby rapidly obtain different creative data. Because the material data used to generate the creative data is multi-modal, once its material elements have been adjusted, creative data combining different multi-mode materials can be obtained from combinations of material data of different modes, so the material modes of the generated creative data are diversified; in addition, by changing the modes of the material data used to generate the creative data, creative data combining materials of different modes can be generated flexibly. The solution provided by the embodiments of the present application therefore improves the generation efficiency of creative data.
To facilitate understanding of the implementation of the creative data generation method, refer to fig. 2, a schematic frame diagram of a creative data generation device according to an embodiment of the present application.
As shown, the apparatus includes:
the material data acquisition module 200 is used for acquiring multi-mode material data;
the preprocessing module 201 is configured to, for the material data of any mode, preprocess the material data and determine a material element set of the material data;
the processing module 202 is configured to, for the material data of any mode, process the material elements in the material element set of the material data to obtain a processed material element set, which forms the processed material data; the processed material element set is different from the material element set before processing;
and the creative data generation module 203 is configured to obtain creative data according to the processed multi-mode material data.
In order to identify and divide the modes of the multi-modal material data, the apparatus may further include a multi-modal extraction model 210 configured to extract the material data of each mode acquired by the material data acquisition module 200, facilitating the subsequent preprocessing. Of course, when the material data provided by the creative data publisher is single-mode material data, it can be preprocessed directly, without the multi-modal extraction model 210.
Alternatively, the preprocessing module 201 may be a model for processing material data of various modes built on a Transformer model as its base.
As can be seen, the technical solution provided by the embodiments of the present application can adjust the material elements of the material data of each mode in the multi-mode material data, automatically generate new creative data by combining the multi-mode material data with the adjusted material elements, and thereby rapidly obtain different creative data. Because the material data used to generate the creative data is multi-modal, once its material elements have been adjusted, creative data combining different multi-mode materials can be obtained from combinations of material data of different modes, so the material modes of the generated creative data are diversified; in addition, by changing the modes of the material data used to generate the creative data, creative data combining materials of different modes can be generated flexibly. The solution provided by the embodiments of the present application therefore improves the generation efficiency of creative data.
Based on the foregoing, in the creative data generation method provided by the embodiments of the present application, in order to diversify the form of the creative data and personalize its content, creative data is generated from material data of any one mode, or of at least two modes, among video, text, image, and voice. In one embodiment, when the material data includes text-mode material data and image-mode material data, the text-mode material data and the image-mode material data can be processed separately to generate creative data.
Optionally, referring to fig. 3, fig. 3 is a schematic flow chart of a creative data generation method according to an embodiment of the present application.
As shown, the process may include the steps of:
Step S300, multi-mode material data is obtained, where the multi-mode material data includes material data of an image mode and material data of a text mode.
Step S301, for the material data of the text mode, the text in the material data is vectorized to obtain a text vector set;
Step S302, for the material data of the image mode, image splitting is performed on the material data to obtain an image element set.
Alternatively, step S300 may be implemented by the material data obtaining module 200 in the creative data generation device shown in fig. 2.
The material data of the text mode and the material data of the image mode are preprocessed so that each element in the resulting element sets can be processed subsequently, yielding creative data with different expressiveness.
Alternatively, step S301 may be implemented by the text model 2011 included in the preprocessing module 201 in the creative data generation device shown in fig. 2, and step S302 may be implemented by the image model 2012 in the preprocessing module 201 in the creative data generation device shown in fig. 2.
After the preprocessed element sets are obtained, they are processed. Continuing with fig. 3, the creative data generation method may further include the following steps:
Step S303, for the material data of the text mode, the text vector set is decoded and a text description is regenerated from the decoding result; the regenerated text description forms the processed text-mode material data;
Step S304, for the material data of the image mode, at least one processing operation among rearranging, modifying, and adding image elements is performed on the image element set to obtain a processed image element set, which forms the processed image-mode material data.
As an alternative implementation, step S303 may be implemented using the text processing model 2021 included in the processing module 202 in the creative data generation device shown in fig. 2, and step S304 may be implemented using the image processing model 2022 included in the processing module 202 in the creative data generation device shown in fig. 2.
Step S305, combining the processed material data of the image mode and the processed material data of the text mode to obtain image-text combined creative data.
As an alternative implementation, step S305 may be implemented by the image-text multi-modal model 2031 included in the creative data generation module 203 in the creative data generation device shown in fig. 2.
In one embodiment, each text vector in the text vector set is decoded to obtain the processed text description; for example, a column of text vectors is deleted, or a column of text vectors is multiplied, so that a word in the text-mode material data is deleted or amplified, realizing different expressions of the creative data and enhancing its personalization.
For the material data of the image mode, the processed image-mode material data is obtained by processing the image elements in the image element set; for example, the contour lines of an image element are changed, or the color rendering of an image element is changed, so that the contour of the element changes from a circle to a triangle, or its color changes from blue to red, achieving a change in visual effect.
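The color change described above can be sketched as a mask-based repaint; the mask construction here is a toy assumption (a real system would first segment the image element).

```python
import numpy as np

def recolor_element(img, mask, new_rgb):
    """Repaint the pixels of one image element (given by a boolean
    mask), e.g. from blue to red, as in the example above."""
    out = img.copy()
    out[mask] = new_rgb
    return out

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (0, 0, 255)                    # a blue square "element"
mask = (img == (0, 0, 255)).all(axis=2)        # toy segmentation
print(recolor_element(img, mask, (255, 0, 0))[1, 1])  # now [255 0 0]
```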
On this basis, the processed image-mode material data and the processed text-mode material data are synthesized into image-text creative data. For example, a prompt banner enlarged threefold can be displayed over the image material data showing the red triangle, enhancing the expressiveness of the creative data.
By decoding each text vector of the preprocessed text vector set into a new text description, and applying any of the rearrangement, modification, and addition operations to the image elements of the preprocessed image element set, the finally generated creative data acquires prominent characteristics and distinct expressiveness, making it appealing to its audience and thus able to meet the delivery requirements of creative data publishers.
In other embodiments, the modes of the material data may further include the voice mode and the video mode, so that creative data is generated from voice-mode material data and video-mode material data. Referring to fig. 4, fig. 4 is a schematic flow chart of a creative data generation method according to an embodiment of the present application.
As shown, the process may include the steps of:
step S400, multi-mode material data is obtained, wherein the multi-mode material data comprises material data of a voice mode and material data of a video mode.
Step S401, for the material data of the voice mode, the voice in the material data is split into voice segments to obtain a voice segment set;
Step S402, for the material data of the video mode, the video in the material data is split into video segments to obtain a video segment set.
As an optional implementation, the step S400 is implemented by the material data obtaining module 200 in the creative data generation device shown in fig. 2; the step S401 may be implemented by using the voice model 2013 included in the preprocessing module 201 in the creative data generation device shown in fig. 2; step S402 may be implemented using the video model 2014 included in the preprocessing module 201 in the creative data generation device shown in fig. 2.
The material data of the voice mode and the material data of the video mode are preprocessed so that each element in the resulting element sets can be processed subsequently, yielding creative data with different expressiveness. Please continue to refer to fig. 4.
Step S403, for the material data of the voice mode, at least one of the following processing operations is performed on the voice segments in the voice segment set to obtain processed voice segments, where the processed voice segments form the processed voice-mode material data:
performing intonation recognition on the voice segments in the voice segment set, and adjusting the intonation of the voice segments according to the intonation recognition result;
performing volume recognition on the voice segments in the voice segment set, and adjusting the volume of the voice segments according to the volume recognition result;
and identifying key voice segments in the voice segment set, and highlighting the key voice segments.
In the above processing of the voice segments, identifying key voice segments in the voice segment set and highlighting them may be done based on the intonation recognition result or the volume recognition result, highlighting a stressed intonation or an emphasized volume within a segment; of course, the intonation and volume recognition results can also be used together to highlight both. For example, a voice segment whose intonation suddenly rises, or whose volume suddenly increases, can be marked as a key voice segment and highlighted. In other embodiments, voice segments can also be highlighted based on voice content containing emphasis-type wording: for example, when the content contains prompts that signal a key point, such as "mainly" or "focused on", the corresponding voice segment can be highlighted.
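A minimal volume-based version of this key-segment detection is sketched below; the jump threshold and gain are assumed values, and intonation-based detection is omitted for brevity.

```python
import numpy as np

def highlight_key_segments(segments, jump=1.5, gain=1.3):
    """Toy key-segment detection: a segment whose RMS volume exceeds
    `jump` times the average RMS is treated as emphasised and boosted
    by `gain` (both thresholds are assumptions)."""
    rms = np.array([np.sqrt(np.mean(s ** 2)) for s in segments])
    baseline = rms.mean() + 1e-12
    return [s * gain if r > jump * baseline else s
            for s, r in zip(segments, rms)]

quiet = 0.1 * np.ones(100)
loud = 0.9 * np.ones(100)
out = highlight_key_segments([quiet, quiet, loud])
print(round(float(np.abs(out[2]).max()), 2))  # loud segment boosted to 1.17
```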
Step S404, for the material data of the video mode, at least one of the following processing operations is performed on the video segments in the video segment set to obtain processed video segments, where the processed video segments form the processed video-mode material data:
rearranging the order of the video segments in the video segment set;
and identifying key video segments in the video segment set, and highlighting the key video segments.
When rearranging the order of the video segments in the video segment set, the playing order of the segments may be rearranged, or the order of the video frames within each segment may be rearranged; that is, rearrangement may refer either to reordering the frames within a video segment or to reordering all the video segments in the set.
Key video segments in the video segment set may be identified from the voice volume and intonation of the segments, or from the content of the speech in the video, and then highlighted. For example, a rising intonation can mark the corresponding segment as a key video segment, and the brightness of that segment can then be raised by adjusting its brightness parameter, realizing the highlighting of the key video segment.
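The brightness-parameter adjustment can be sketched as a simple pixel scaling; the factor of 1.2 is an assumed value.

```python
import numpy as np

def brighten_segment(frames, factor=1.2):
    """Raise the brightness of a key video segment by scaling pixel
    values, as in the highlighting example above. `frames` is a
    T x H x W x 3 uint8 array."""
    out = frames.astype(np.float32) * factor
    return np.clip(out, 0, 255).astype(np.uint8)

frames = np.full((8, 2, 2, 3), 100, dtype=np.uint8)  # a dim segment
print(brighten_segment(frames)[0, 0, 0])             # [120 120 120]
```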
As an alternative implementation, step S403 may be implemented using the voice processing model 2023 included in the processing module 202 in the creative data generation device shown in fig. 2; step S404 may be implemented using the video processing model 2024 included in the processing module 202 in the creative data generation device shown in fig. 2.
After the processed video segment set and the processed voice segment set are obtained, the creative data can be formed. With continued reference to fig. 4, the process may further include the following steps:
step S405, aligning the voice segment of the processed voice mode material data with the video segment of the processed video mode material data to obtain the creative data combining the audio and the video.
As an alternative implementation, the step S405 may be implemented by using the audio/video multimodal model 2032 included in the creative data generation module 203 in the creative data generation device shown in fig. 2.
By adjusting and optimizing each voice segment in the voice segment set obtained from preprocessing the voice-mode material data, and each video segment in the video segment set obtained from preprocessing the video-mode material data, the finally generated creative data acquires prominent characteristics and distinct expressiveness, making it appealing to its audience and thus able to meet the delivery requirements of creative data publishers.
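Since the patent does not detail the alignment criterion of step S405, the sketch below shows only one plausible reading: pairing processed voice segments with processed video segments in order on a shared timeline, with segment names and durations as assumed inputs.

```python
def align_av(video_segments, voice_segments):
    """Toy audio-video alignment: pair voice segments with video
    segments in order and place them on one timeline."""
    timeline, t = [], 0.0
    for (v_name, v_dur), (a_name, _) in zip(video_segments, voice_segments):
        timeline.append({"start": t, "video": v_name,
                         "audio": a_name, "duration": v_dur})
        t += v_dur
    return timeline

video = [("clip1", 3.0), ("clip2", 2.5)]   # (name, duration in seconds)
voice = [("line1", 2.8), ("line2", 2.5)]
for entry in align_av(video, voice):
    print(entry)
```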
In other embodiments, the modes of the material data may further include the image mode, the text mode, the voice mode, and the video mode together, so that creative data is generated from material data of these four modes. Referring to fig. 5, fig. 5 is another flow chart of the creative data generation method according to an embodiment of the present application.
As shown, the process may include the steps of:
step S500, multi-mode material data is obtained, wherein the multi-mode material data comprises image-mode material data, text-mode material data, voice-mode material data and video-mode material data.
Step S501, for the material data of the text mode, the text in the material data is vectorized to obtain a text vector set;
Step S502, for the material data of the image mode, image splitting is performed on the material data to obtain an image element set;
Step S503, for the material data of the voice mode, the voice in the material data is split into voice segments to obtain a voice segment set;
Step S504, for the material data of the video mode, the video in the material data is split into video segments to obtain a video segment set.
As an optional implementation, step S500 is implemented by the material data acquisition module 200 in the creative data generation device shown in fig. 2; step S501 may be implemented by the text model 2011 included in the preprocessing module 201; step S502 by the image model 2012; step S503 by the voice model 2013; and step S504 by the video model 2014.
The material data of each mode is preprocessed so that each element in the resulting element sets can be processed subsequently, yielding creative data with different expressiveness. Please continue to refer to fig. 5.
As shown in fig. 5, the process further includes the steps of:
Step S505, for the material data of the text mode, the text vector set is decoded and a text description is regenerated from the decoding result; the regenerated text description forms the processed text-mode material data;
Step S506, for the material data of the image mode, at least one processing operation among rearranging, modifying, and adding image elements is performed on the image element set to obtain a processed image element set, which forms the processed image-mode material data;
Step S507, for the material data of the voice mode, at least one of the following processing operations is performed on the voice segments in the voice segment set to obtain processed voice segments, which form the processed voice-mode material data:
performing intonation recognition on the voice segments in the voice segment set, and adjusting the intonation of the voice segments according to the intonation recognition result;
performing volume recognition on the voice segments in the voice segment set, and adjusting the volume of the voice segments according to the volume recognition result;
identifying key voice segments in the voice segment set, and highlighting the key voice segments;
Step S508, for the material data of the video mode, at least one of the following processing operations is performed on the video segments in the video segment set to obtain processed video segments, which form the processed video-mode material data:
rearranging the order of the video segments in the video segment set;
identifying key video segments in the video segment set, and highlighting the key video segments.
As an alternative implementation, step S505 may be implemented using the text processing model 2021 included in the processing module 202 in the creative data generation device shown in fig. 2; step S506 by the image processing model 2022; step S507 by the voice processing model 2023; and step S508 by the video processing model 2024.
Step S509, combining the processed material data of the image mode and the processed material data of the text mode to obtain the material data with combined graphics and texts;
step S510, aligning the voice segment of the processed voice mode material data with the video segment of the processed video mode material data to obtain the audio and video combined material data;
and S511, synthesizing the image-text combined material data with the audio and video combined material data, and performing layout arrangement adjustment on the material positions in the synthesized result to obtain creative data.
As an alternative implementation, step S509 may be implemented by the image-text multi-modal model 2031 included in the creative data generation module 203 in the creative data generation device shown in fig. 2, step S510 by the audio-video multi-modal model 2032, and step S511 by the unified multi-modal model 2033.
In some embodiments, at least one operation among intonation adjustment, volume adjustment, and highlighting of key audio content is performed on each voice segment in the voice segment set to obtain the processed voice segments; for example, the order in which two voice segments occur may be swapped, the volume of a voice segment may be increased, or the key audio content in a voice segment may be highlighted, so that the processed voice-mode material data formed by the processed voice segments conveys new voice information or plays the key audio content with a correspondingly amplified intonation.
For the material data of the video mode, the video segments in the video segment set are processed to obtain the processed video-mode material data; for example, the frame order of a video segment is changed, or the key video content of a segment is highlighted, so that the order of the video content in the segment changes (e.g., a door being opened becomes a door being closed) or the key video content is magnified (e.g., the fish in the river is enlarged). This changes the visual effect and feel during playback and adjusts the appeal of the video-mode material data.
Thus, based on the processed voice-mode material data and the processed video-mode material data, multiple pieces of audio-video material data are synthesized; these are further combined with the image-text material data to form creative data. Generating creative data from such diverse, processed material data adjusts the overall expressiveness of the creative data and enhances the personalization of the generated creative data.
Alternatively, the step of adjusting the layout of the material positions in the synthesis result may be implemented by the beautification model 204 included in the creative data generation device shown in fig. 2, so as to obtain the final creative data.
When adjusting the synthesis result of the image-text combined material data and the audio-video combined material data, the audio-video combined material data can, for example, be embedded into a blank area of the image-text combined material data, or arranged side by side with it, and a fusion operation can be performed so that the audio-video combined material data appears naturally embedded in the image-text combined material data. This lets the final creative data meet the basic format requirements of creative data and makes the content display of the creative data more suitable.
In other embodiments, after the processed image-mode material data and the processed text-mode material data are synthesized, the resulting image-text combined material data is combined with voice-mode material data: the voice-mode material data is embedded into the image-text material data so that its spoken content matches the processed text content in the image-text combined material data, improving the listening effect of the image-text combined material data.
Alternatively, after the processed image-mode material data and the processed text-mode material data are synthesized, the resulting image-text combined material data is combined with video-mode material data: the video-mode material data is embedded into the image-text combined material data so that its spoken content and displayed video match the description of the processed text content in the image-text combined material data, further improving the audiovisual effect of the image-text combined material data.
Material data of the four modes (image, text, voice, and video) can be combined arbitrarily; based on the delivery requirements of different creative data publishers and the material data they provide, creative data of different modes can be generated rapidly, improving the generation efficiency of creative data.
To further meet the delivery requirements of creative data publishers, the finally obtained creative data can be used directly for their delivery. The embodiments of the present application therefore also provide a creative data delivery method.
Referring to fig. 6, fig. 6 is a flow chart illustrating a creative data delivery method according to an embodiment of the present application. The method can be applied to a server, which may be the server corresponding to the creative data delivery platform. In an alternative implementation, the server corresponding to the creative data generation platform and the server corresponding to the creative data delivery platform may be integrated (e.g., servers in the same server cluster implement both creative data generation and creative data delivery) or deployed independently.
As shown, the process may include the steps of:
step S600, creative data is acquired.
The creative data is generated by the creative data generation method of the preceding embodiments. Rapid generation of creative data provides a reliable data basis for subsequent creative data delivery and meets the basic requirements of delivery.
Step S601, the creative data is scored with a pre-trained scoring model to obtain the score of the creative data.
As an optional implementation, embodiments of the present application can form platform historical data from the delivery channels of creative data and the click-through/conversion rates of the audiences the creative data faces, train a scoring model on this platform historical data, and then score the creative data with the model to obtain its score.
To realize personalized delivery in different ways and meet the different requirements of creative data publishers, the scoring model can score the creative data obtained by the creative data generation method provided by the embodiments of the present application. In an optional implementation, the scoring model may include a first-class scoring model and a second-class scoring model: the first-class scoring model scores the creative data itself, while the second-class scoring model scores the creative data together with the users to whom the creative data is to be pushed.
Optionally, the first-class scoring model can score each piece of creative data, so that creative data can be delivered automatically according to a set ranking of the scores. For example, suppose 5 pieces of creative data are obtained by the creative data generation method provided in the embodiments of the present application, namely creative data 01, 02, 03, 04, and 05. After each piece is scored, the pieces can be ranked from high score to low, and the pieces within a set ranking range (e.g., the top three) can be pushed; the first-class scoring model thus scores the 5 pieces, and the top three are delivered automatically. Of course, in other embodiments, the set ranking may be to push only the first-ranked piece of creative data.
In other implementations, the second-class scoring model scores creative data together with the users to whom it is to be pushed. For example, the second-class scoring model can score the same piece of creative data separately for different users (e.g., different user types), obtaining different scores for the same creative data across different users; on this basis, the creative data delivery platform can select, for each user, the creative data with the highest corresponding score and deliver it automatically. For a user type to be targeted (e.g., an audience for the creative data), the delivery platform can deliver creative data to that user type according to the score of each piece for that type: the second-class scoring model determines the score of each piece of creative data for the user type, and the pieces meeting the set ranking are automatically selected and delivered. For example, if the pieces are ranked from high score to low for the target audience and only the first-ranked piece is delivered, the delivery platform selects the top-scoring piece according to the second-class scoring model's results and delivers it to the target audience.
In a further example, suppose 5 pieces of creative data are obtained by the creative data generation method provided in the embodiments of the present application, namely creative data 01, 02, 03, 04, and 05, and the targeted users are divided into type A users and type B users. Facing type A users, the second-class scoring model may score the 5 pieces as follows: creative data 01 scores 8, creative data 02 scores 6, creative data 03 scores 7, creative data 04 scores 5, and creative data 05 scores 8.5, so creative data 05 is selected and pushed to type A users. Facing type B users, the scores may be: creative data 01 scores 7, creative data 02 scores 8, creative data 03 scores 6, creative data 04 scores 4, and creative data 05 scores 3, so creative data 02 is selected and pushed to type B users. The second-class scoring model can therefore give the same creative data different scores for different users, realizing personalized delivery of creative data.
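The per-user-type selection in this worked example reduces to picking the top-scored piece for each type; the sketch below reproduces the numbers from the text (the scores themselves are the illustrative values above, not real model outputs).

```python
def pick_creative(scores_by_user_type, user_type):
    """Select the top-scored creative data for a user type."""
    scores = scores_by_user_type[user_type]
    return max(scores, key=scores.get)

scores_by_user_type = {
    "A": {"creative01": 8, "creative02": 6, "creative03": 7,
          "creative04": 5, "creative05": 8.5},
    "B": {"creative01": 7, "creative02": 8, "creative03": 6,
          "creative04": 4, "creative05": 3},
}
print(pick_creative(scores_by_user_type, "A"))  # creative05
print(pick_creative(scores_by_user_type, "B"))  # creative02
```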
Step S602: determining the creative data to be delivered according to the scores of the creative data.
Determining the delivered creative data according to the scores means that, after each piece of creative data is scored using the first-class scoring model, the pieces of creative data falling within the set ordering range are automatically delivered in score order; alternatively, the second-class scoring model may be used to score each piece of creative data for different user types, and the pieces within the set ordering range are then automatically delivered according to the score ordering under each user type.
Thus, in the creative data delivery method provided by the embodiments of the present application, the plurality of pieces of creative data generated by the creative data generation method are obtained, each piece is then scored by the scoring model, and the creative data within the set ordering is automatically delivered according to those scores (for example, the highest-ranked creative data is automatically delivered). In other words, automatic delivery can be performed in the ranking order corresponding to the score of each piece of creative data. Even though the material data input by the creative data deliverer is the same, the types and presentation forms of the generated creative data differ once the material data has been processed by the creative data generation method provided by the embodiments of the present application.
It should be noted that, in an alternative implementation, because different users (here, the audience of the creative data) exhibit different click-through and conversion rates for each piece of creative data, the scoring model may determine a score per piece of creative data for each user (e.g., each user type); for any user type, the embodiments of the present application can then decide which creative data to deliver automatically to that user type based on those scores. The creative data delivery method provided by the embodiments of the present application therefore enables personalized, targeted delivery of creative data to different users.
To enable the same creative data to receive different scores for different users, in one embodiment the step of scoring the creative data with a pre-trained scoring model to obtain the score of the creative data may include:
for any piece of creative data, scoring the creative data with a pre-trained scoring model (e.g., the second-class scoring model) according to the group characteristics of the user group the creative data faces, the channel characteristics, and the characteristics of the creative data, to obtain the score of the creative data for that user group.
Optionally, the scoring model is trained on the click-through rate and conversion rate of the user groups the creative data faces. The trained scoring model can therefore score creative data based on the group characteristics of the target user group, the channel characteristics, and the creative data characteristics, so that the same piece of creative data receives different scores for different user groups. In this way, suitable creative data can be delivered automatically to different users (for example, for each user, the creative data whose corresponding score ranks first is delivered automatically), making the delivery of creative data targeted.
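As a hedged illustration of such a model, one could train a click/conversion predictor over the concatenated user-group, channel, and creative features; the feature dimensions, the randomly generated training data, and the choice of logistic regression below are all assumptions made for the sketch, not the model specified by this embodiment:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: one row per (user group, channel, creative)
# combination, labeled 1 if the impression led to a click/conversion.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8 + 4 + 16))  # 8 group + 4 channel + 16 creative dims
y = rng.integers(0, 2, size=1000)

model = LogisticRegression(max_iter=1000).fit(X, y)

def score(group_feats, channel_feats, creative_feats):
    """Predicted click/conversion probability for one creative and one user
    group; the group features are part of the input, so the same creative
    scores differently for different groups."""
    x = np.concatenate([group_feats, channel_feats, creative_feats])[None, :]
    return float(model.predict_proba(x)[0, 1])

# Example: score the same creative for two different user groups.
creative = rng.normal(size=16)
channel = rng.normal(size=4)
print(score(rng.normal(size=8), channel, creative))
print(score(rng.normal(size=8), channel, creative))
```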
Based on the foregoing, a creative data deliverer can provide the material data used to generate creative data, and the creative data delivery platform can automatically generate deliverable creative data directly from the material data input by the deliverer. For ease of understanding, please refer to fig. 7, which is a schematic framework diagram of the creative data delivery system provided in the embodiments of the present application.
As shown in fig. 7, the creative data delivery system for generating creative data may include: a client 10, a creative data generation platform 20, and a creative data delivery platform 30.
The client 10 may be regarded as the terminal device used by a creative data deliverer, such as a laptop computer, smartphone, or tablet. The client 10 may interact with the creative data delivery platform 30 by accessing a platform web page of the creative data delivery platform 30 or through an application for delivering creative data.
The creative data delivery platform 30 may be a server platform providing a creative data delivery service, and may be implemented by a single server or by a server system composed of multiple servers.
In the creative data delivery system provided in the embodiments of the present application, the client 10 displays a creative data material input interface through which a creative data deliverer inputs multi-modal material data, and transmits the multi-modal material data to the creative data generation platform 20;
the creative data generation platform 20 is configured to perform the creative data generation method of any one of the previous embodiments;
the creative data delivery platform 30 is configured to perform the creative data delivery method according to any of the previous embodiments;
The creative data generation platform 20 and the creative data delivery platform 30 provided in the embodiments of the present application may be integrated with each other or set up independently; they may be integrated to suit a given service requirement, or configured independently so that each can be targeted at different users.
According to the technical solution provided by the embodiments of the present application, creative data can be generated directly from the acquired multi-modal material data, so that a plurality of different pieces of creative data can be obtained quickly. The multi-modal material data may include at least two of the image, voice, text, and video modalities, so creative data of different modalities can be obtained from material data combined across modalities; the generated creative data therefore exhibits modal diversity, and different creative data can be generated flexibly as the modalities of the material data change. When creative data is generated from material data of several modalities, the material data of each modality is first preprocessed, the material elements in the resulting per-modality material element sets are then processed, and the material elements of the originally acquired multi-modal material data are thereby changed, further enhancing the individuality of the generated creative data. Meanwhile, because different users may exhibit different click-through and conversion rates for the same creative data, the same creative data can be scored differently for different users; corresponding creative data is thus selected and delivered automatically per user based on those scores, making the delivery of creative data targeted.
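To make the generation flow above concrete, here is a deliberately minimal Python sketch; every splitting and transformation rule in it is a placeholder assumption standing in for the modality-specific preprocessing and element-processing operations described in the embodiments:

```python
def preprocess(modality, data):
    """Split raw material data into a material element set (sketch rules:
    text -> words; other modalities -> fixed-length segments)."""
    if modality == "text":
        return data.split()
    return [data[i:i + 4] for i in range(0, len(data), 4)]

def process(modality, elements):
    """Alter the element set so it differs from the original set; reversing
    the order stands in for rearrangement/modification/addition here."""
    return list(reversed(elements))

def generate_creative(materials):
    """materials: dict mapping modality name -> raw material data.
    Returns one creative assembled from all processed modalities; a real
    system would also lay out the result and align voice/video segments."""
    return {m: process(m, preprocess(m, d)) for m, d in materials.items()}

creative = generate_creative({"text": "spring sale on new shoes",
                              "voice": list(range(12))})
# Varying which modalities appear in `materials` varies the creative's form.
```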
The embodiments of the present application further provide a server including a memory and a processor, wherein the memory stores a program of the creative data generation method or the creative data delivery method described in the above embodiments, and the processor calls the program stored in the memory to implement the creative data generation method or the creative data delivery method described in the above embodiments.
Optionally, fig. 8 is a schematic diagram of an optional hardware device architecture provided in an embodiment of the present application; the hardware device architecture may be that of the server described above. Referring to fig. 8, the hardware architecture of the electronic device may include: at least one memory 3 and at least one processor 1, wherein the memory stores a program and the processor calls the program to execute the creative data generation method or the creative data delivery method; and further, at least one communication interface 2 and at least one communication bus 4. The processor 1 and the memory 3 may be located in the same electronic device, e.g., in a server device or a terminal device, or may be located in different electronic devices.
As an alternative implementation of the disclosure of the embodiments of the present application, the memory 3 may store a program, and the processor 1 may call the program to execute the creative data generation method or the creative data delivery method provided in the foregoing embodiments of the present application.
In the embodiments of the present application, the hardware device may be a tablet computer, a laptop computer, a smart speaker, or another device capable of executing the creative data generation method or the creative data delivery method.
In the embodiments of the present application, there is at least one each of the processor 1, the communication interface 2, the memory 3, and the communication bus 4, and the processor 1, the communication interface 2, and the memory 3 communicate with one another through the communication bus 4; clearly, the communication connection of the processor 1, the communication interface 2, the memory 3, and the communication bus 4 shown in fig. 8 is only one alternative arrangement;
alternatively, the communication interface 2 may be an interface of a communication module, such as an interface of a GSM module;
the processor 1 may be a CPU, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 3 may include a high-speed RAM memory, and may further include a non-volatile memory, such as at least one disk memory.
It should be noted that the device implementing the method may further include other components (not shown) that may not be necessary to the disclosure of the embodiments of the present application; such other components are not described in detail here, as they may not be necessary to understanding the present disclosure.
The embodiments of the present application also provide a storage medium storing a computer program which, when executed, implements the creative data generation method or the creative data delivery method described in the above embodiments.
The embodiments of the present application also provide a computer program which, when executed, implements the creative data generation method or the creative data delivery method described above.
Although the embodiments of the present application are disclosed above, the present application is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention shall be defined by the appended claims.

Claims (10)

1. A creative data generation method, comprising:
acquiring multi-modal material data, wherein the multi-modal material data comprises data of at least two of the image, video, voice, and text modalities;
for the material data of any modality, preprocessing the material data and determining a material element set of the material data;
for the material data of any modality, processing the material elements in the material element set of the material data to obtain a material element set after material data processing, so as to form processed material data, wherein the material element set after material data processing is different from the material element set before processing;
and obtaining creative data according to the processed multi-modal material data.
2. The creative data generation method of claim 1, wherein the multi-modal material data includes material data of a text modality and material data of an image modality, and the preprocessing the material data for the material data of any modality and determining the material element set of the material data comprises:
for the material data of the text modality, vectorizing the characters in the material data of the text modality to obtain a character vector set;
and for the material data of the image modality, performing image splitting on the material data of the image modality to obtain an image element set.
3. The creative data generation method of claim 2, wherein the processing the material elements in the material element set of the material data for the material data of any modality, to obtain the material element set after material data processing and form the processed material data, comprises:
for the material data of the text modality, decoding the character vector set and regenerating a text description according to the decoding result, the regenerated text description forming the processed material data of the text modality;
and for the material data of the image modality, performing at least one processing operation of rearranging, modifying, and adding image elements on the image element set to obtain a processed image element set, so as to form the processed material data of the image modality;
and the obtaining creative data according to the processed multi-modal material data comprises:
combining the processed material data of the image modality and the processed material data of the text modality to obtain creative data with combined graphics and text.
4. The creative data generation method of claim 1, wherein the material data includes material data of a voice modality and material data of a video modality, and the preprocessing the material data for the material data of any modality and determining the material element set of the material data comprises:
for the material data of the voice modality, splitting the voice segments in the material data of the voice modality to obtain a voice segment set;
and for the material data of the video modality, splitting the video segments in the material data of the video modality to obtain a video segment set.
5. The creative data generation method of claim 4, wherein the processing the material elements in the material element set of the material data for the material data of any modality, to obtain the material element set after material data processing and form the processed material data, comprises:
for the material data of the voice modality, performing at least one of the following processing operations on the voice segments in the voice segment set to obtain processed voice segments, the processed voice segments forming the processed material data of the voice modality:
performing intonation recognition on the voice segments in the voice segment set, and adjusting the intonation of the voice segments in the voice segment set according to the intonation recognition result;
performing volume recognition on the voice segments in the voice segment set, and adjusting the volume of the voice segments in the voice segment set according to the volume recognition result;
identifying key voice segments in the voice segment set, and highlighting the key voice segments;
and for the material data of the video modality, performing at least one of the following processing operations on the video segments in the video segment set to obtain processed video segments, the processed video segments forming the processed material data of the video modality:
rearranging the order of the video segments in the video segment set;
identifying key video segments in the video segment set, and highlighting the key video segments;
and the obtaining creative data according to the processed multi-modal material data comprises:
aligning the voice segments of the processed material data of the voice modality with the video segments of the processed material data of the video modality to obtain creative data combining audio and video.
6. The creative data generation method of claim 1, wherein the material data includes material data of a text modality, material data of an image modality, material data of a voice modality, and material data of a video modality, and the processed multi-modal material data includes: processed material data of the text modality, processed material data of the image modality, processed material data of the voice modality, and processed material data of the video modality;
and the obtaining creative data according to the processed multi-modal material data comprises:
combining the processed material data of the image modality and the processed material data of the text modality to obtain graphic-text combined material data;
aligning the voice segments of the processed material data of the voice modality with the video segments of the processed material data of the video modality to obtain audio-video combined material data;
and synthesizing the graphic-text combined material data and the audio-video combined material data, and adjusting the layout of the material positions in the synthesis result to obtain the creative data.
7. A creative data delivery method, comprising:
acquiring creative data generated according to the creative data generation method of any one of claims 1-6;
scoring the creative data by using a pre-trained scoring model to obtain a score of the creative data;
and determining the creative data to be delivered according to the score of the creative data.
8. A creative data delivery system, comprising: a client, a creative data generation platform, and a creative data delivery platform, wherein:
the client displays a creative data material input interface, the creative data material input interface being used for a creative data deliverer to input multi-modal material data, and sends the multi-modal material data to the creative data generation platform;
the creative data generation platform is configured to perform the creative data generation method of any one of claims 1-6;
the creative data delivery platform is configured to perform the creative data delivery method of claim 7;
and the creative data generation platform and the creative data delivery platform are integrated or independently arranged.
9. A server, comprising a memory and a processor, wherein the memory stores a program and the processor invokes the program stored in the memory to implement the creative data generation method of any one of claims 1-6 or the creative data delivery method of claim 7.
10. A storage medium storing a computer program which, when executed, implements the creative data generation method of any one of claims 1-6 or the creative data delivery method of claim 7.