CN114581124A - Propaganda page generation method based on natural language processing and related device - Google Patents
- Publication number
- CN114581124A (application number CN202210168361.1A)
- Authority
- CN
- China
- Prior art keywords
- text
- image
- target
- propaganda
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0276—Advertisement creation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
- G06F40/295—Named entity recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/08—Insurance
Abstract
The application relates to the field of artificial intelligence and provides a method and related device for generating a promotional page based on natural language processing. The method comprises the following steps: receiving a promotional-page generation request sent by a user, the request comprising a promotional file; parsing the promotional file to obtain character information and campaign information of the promotional character; acquiring a target text corresponding to the character information and the campaign information, text features of the target text, a target image, and image features of the target image; obtaining, from a page style model, a page style corresponding to the text features and the image features; and generating, in that page style, the promotional page corresponding to the target text and the target image. The method improves the efficiency with which promotional pages are generated, their display quality, and their promotional effect.
Description
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to a method and related device for generating a promotional page based on natural language processing.
Background
With the rapid growth of the internet, the marketing channels of enterprises and merchants have gradually shifted from offline to online: brand and product marketing, customer service, and similar goals are now pursued over the internet. In the insurance field, for example, a promotional page for a campaign can be published online to widen its reach. However, pages produced with existing image-template software are visually monotonous and highly repetitive, and do little to improve promotional impact.
Disclosure of Invention
Embodiments of the present application provide a method and related device for generating a promotional page based on natural language processing, which improve both the efficiency with which pages are generated and their display quality, and thereby their promotional effect.
In a first aspect, an embodiment of the present application provides a method for generating a promotional page based on natural language processing, comprising:
receiving a promotional-page generation request sent by a user, the request comprising a promotional file;
parsing the promotional file to obtain character information and campaign information of the promotional character;
acquiring a target text corresponding to the character information and the campaign information, text features of the target text, a target image, and image features of the target image;
obtaining, from a page style model, a page style corresponding to the text features and the image features; and
generating, in that page style, the promotional page corresponding to the target text and the target image.
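The five claimed steps can be sketched as a minimal pipeline. All names here (`PageRequest`, `parse_promo_file`, the length-based "style model") are hypothetical stand-ins, not from the patent; the parser assumes a simple `key: value` promotional file.

```python
from dataclasses import dataclass

@dataclass
class PageRequest:
    promo_file: str  # raw promotional material submitted by the user (step 1)

def parse_promo_file(promo_file):
    # Step 2: split the material into character info and campaign info.
    # A real implementation would use NLP; here we assume a "key: value" format.
    info = {}
    for line in promo_file.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            info[key.strip()] = value.strip()
    character = {k: v for k, v in info.items() if k in ("name", "role")}
    campaign = {k: v for k, v in info.items() if k not in character}
    return character, campaign

def generate_page(request):
    character, campaign = parse_promo_file(request.promo_file)
    # Step 3: derive the target text (feature/image extraction elided).
    target_text = " ".join([*character.values(), *campaign.values()])
    # Step 4: stub "page style model" choosing a style from the text features.
    style = "banner" if len(target_text) < 80 else "article"
    # Step 5: render the page in that style.
    return {"style": style, "text": target_text}

page = generate_page(PageRequest("name: Li Hua\nrole: insurance agent\ntheme: pension plan"))
```

The real style model is learned from text and image features; the stub above only shows where it plugs into the flow.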
In a second aspect, an embodiment of the present application provides a promotional-page generating apparatus based on natural language processing, comprising:
a receiving unit, configured to receive a promotional-page generation request sent by a user, the request comprising a promotional file;
a parsing unit, configured to parse the promotional file to obtain character information and campaign information of the promotional character;
an acquiring unit, configured to acquire a target text corresponding to the character information and the campaign information, text features of the target text, a target image, and image features of the target image, and to obtain, from a page style model, a page style corresponding to the text features and the image features; and
a generating unit, configured to generate, in that page style, the promotional page corresponding to the target text and the target image.
In a third aspect, an embodiment of the present application provides a computer device comprising a processor, a memory, a communication interface, and a computer program stored in the memory and configured to be executed by the processor, the computer program including instructions for some or all of the steps described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that causes a computer to perform some or all of the steps described in the first aspect.
Embodiments of the application have the following beneficial effects:
With the above method and related device, when a promotional-page generation request sent by a user is received, the promotional file in the request is parsed to obtain the character information and campaign information of the promotional character. A target text corresponding to that information is then acquired, together with its text features, a target image, and the image features of the target image. A page style corresponding to the text features and image features is obtained from a page style model, and the promotional page for the target text and target image is generated in that style. The page is thus generated directly from the promotional file, which improves generation efficiency; and because its style is derived from both text and image features, the page presents the character and the associated campaign more effectively, improving both its display quality and its promotional effect.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required for describing them are briefly introduced below. The drawings described below show only some embodiments of the application; those skilled in the art can derive other drawings from them without creative effort.
Wherein:
fig. 1 is a schematic flowchart of a promotional-page generation method based on natural language processing according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a promotional-page generating apparatus based on natural language processing according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To help readers understand the technical solutions of the present application, the solutions in its embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the application; all other embodiments obtained by a person of ordinary skill in the art from them without inventive work fall within the scope of the application.
The terms "first", "second", and the like in the description, claims, and drawings are used to distinguish different objects, not to describe a particular order. The terms "include" and "have", and their variants, cover non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those listed, and may include other steps or elements not expressly listed or inherent to it.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. Its appearance in various places in the specification does not necessarily refer to the same embodiment, nor to a separate or alternative embodiment mutually exclusive of others. Those skilled in the art will understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The network architecture used by the embodiments comprises a server and one or more electronic devices. The numbers of devices and servers are not limited; a server may serve multiple devices simultaneously. The server may be an independent server or a cloud server providing basic cloud-computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain-name services, security services, a content delivery network (CDN), and big-data and artificial-intelligence platforms. It may also be implemented as a cluster of servers.
The electronic device may be a personal computer (PC), a notebook computer, a smartphone, an all-in-one machine, a palmtop computer, a tablet, a smart-TV playback terminal, a vehicle-mounted terminal, or another portable device. The operating system of a PC-class device may include, but is not limited to, Linux, Unix, the Windows family (e.g., Windows XP, Windows 7), and Mac OS X (the operating system of Apple computers). The operating system of a mobile device such as a smartphone may include, but is not limited to, Android, iOS (the operating system of Apple phones), and Windows.
The electronic device may install and run an application program, and the server may be the server corresponding to that application, providing it with application services. The application may be standalone application software, an applet embedded in another application, or a web-based system; this is not limited here.
The promotional content referred to in this application may belong to the insurance, real-estate, medical, internet, or any other field; this is not limited here.
For example, an insurance agent who needs to promote the insurance business for which they are responsible can send the business information to the server through an electronic device. The server generates a promotional page from the agent's character information and the business information and returns it to the device, so that the agent can use the page to promote both the business and themselves.
Embodiments of the application provide a promotional-page generation method based on natural language processing, executed by a corresponding generation apparatus. The apparatus can be implemented in software and/or hardware and is generally integrated in an electronic device or a server. It improves the efficiency with which promotional pages are generated and their display quality, and thereby their promotional effect.
Referring to fig. 1, fig. 1 is a schematic flowchart of the promotional-page generation method based on natural language processing provided by the present application. The method is described below as applied to a server and comprises the following steps S101 to S105:
S101, receiving a promotional-page generation request sent by a user, the request comprising a promotional file.
The user is not limited by this application and may be, for example, the planner of the campaign. In one possible example, the user is the promotional character. In this application, the promotional character is the person being promoted in the current campaign: an instructor of the campaign, such as a host, or an object to be promoted in the campaign, such as its central figure.
For example, in the insurance field, an insurance agent can act as the promotional character and hold small-scale exchange events, introducing themselves while introducing the insurance business, so as to raise their visibility and influence in the field.
The promotional-page generation request is used to trigger page generation and may include a promotional file. The promotional file may be a planning or discussion document for a campaign, in text, image, audio, or video form; this is not limited here. Notably, when the promotional file is an audio or video recording of a discussion, the step of manually collating the promotional content can be omitted, further improving generation efficiency.
The information in the promotional file is not limited by this application. It may consist of items entered against prompts before the generation request is submitted, or of an imported document. For example, the campaign information for the page is entered item by item according to the prompts of a text input box, and the character image of the promotional character is imported. There may be one or more items of character information and campaign information.
In an embodiment of the present application, the promotional file may include the character information and campaign information of the promotional character. The character information may include identification such as the character's name and employee number, as well as introductory information: basic details such as a character image, age, sex, and work experience. The character image is usually one suitable for promotion, such as an ID photo, a work photo, or an award photo.
If an uploaded character image does not meet the promotional requirements (for example, it is low-resolution, has a blurred background, or is a casual snapshot), target recognition can be performed on the image in the promotional file to obtain a face image and a limb image, which are then adjusted against a preset image template to produce the character image used on the page.
The preset image template may be a background image, in which case the background, face, and limb images are composited into the page's character image. It may instead be an image-generation model obtained by algorithmic or neural-network training; this is not limited here. Either way, the display quality of the character image, and hence the promotional effect, is improved.
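The compositing path above can be sketched with images represented as 2-D pixel lists. A real system would use detection boxes from a face/body detector and an imaging library; `paste` is an illustrative helper, not part of the patent.

```python
def paste(background, patch, top, left):
    """Composite a detected face/limb crop onto a background template.

    `background` and `patch` are 2-D lists of pixel values; `top`/`left`
    give the position chosen for the crop. Returns a new image so the
    template can be reused.
    """
    result = [row[:] for row in background]
    for i, row in enumerate(patch):
        for j, px in enumerate(row):
            result[top + i][left + j] = px
    return result

template = [[0] * 4 for _ in range(4)]   # plain background template
face = [[7, 7], [7, 7]]                  # stands in for a cropped face region
portrait = paste(template, face, 1, 1)   # character image for the page
```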
In one possible example, the character image is a cartoon image; displaying a cartoon likeness can improve the promotional effect. Generating it may comprise: first obtaining the character features corresponding to the face and limb images, then generating a cartoon face image matching those features from a preset image template. This helps the generated cartoon resemble the character.
In this embodiment, the campaign information may include the campaign theme, time, place, parties, and format, among others; this is not limited here. The theme briefly introduces the content, e.g., a pension plan. The campaign time may include when the event is held and when it can be joined: for an entry-based event, the time may cover both the entry period and the participation period. Events may be classified as upcoming, ongoing, scheduled, or periodic.
The campaign parties may include those who can participate (the target audience) and those who hold the event (the organizers). For example, the organizer may be an insurance agent, an insurance company, or a financial specialist, and the audience may be potential customers, existing customers, or referral customers. Campaign formats include online and offline events, as well as modes of participation such as liking, forwarding, voting, and entering.
The campaign information may also include the business information of the services to be promoted. For example, if the service is a life-insurance product, its attribute information may include the investment amount, investment mode, and payment period.
In this embodiment, the character information is divided into character text and character image, and the campaign information into campaign text and campaign image, according to the type of each item. Introducing the campaign with both text and images improves the readability of the promotional page.
S102, parsing the promotional file to obtain the character information and campaign information of the promotional character.
The method of parsing the promotional file is not limited by this application. If the file contains speech, then in one possible example step S102 may comprise: performing voiceprint recognition on the file to obtain at least two speech segments, each associated with an identity and time information; performing semantic recognition on the segments to obtain at least two texts; and classifying those texts into the character information and campaign information of the promotional character.
A speech file may contain multiple utterances from the same person. In this example, the file is therefore first divided into per-speaker segments based on differences in voiceprint features; each segment is then transcribed, and the resulting texts are classified into character information and campaign information. This improves both the efficiency and the accuracy of parsing.
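The speech branch above — diarize, transcribe, classify — can be sketched with the diarization and ASR stages stubbed out as pre-labelled segments, so only the control flow remains. The keyword classifier is illustrative; a real system would use trained models for all three stages.

```python
# Output of the stubbed diarization + transcription stages: each segment
# carries the speaker identity and time information named in the claims.
segments = [
    {"speaker": "agent-01", "time": "00:01",
     "text": "My name is Li Hua, ten years in insurance."},
    {"speaker": "planner", "time": "00:20",
     "text": "The pension-plan event runs next Friday."},
]

def classify(segment):
    # Classify each transcript as character info vs. campaign info.
    campaign_cues = ("event", "activity", "plan", "friday")
    text = segment["text"].lower()
    return "campaign" if any(cue in text for cue in campaign_cues) else "character"

character_info = [s["text"] for s in segments if classify(s) == "character"]
campaign_info = [s["text"] for s in segments if classify(s) == "campaign"]
```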
If the promotional file already contains text, the classification step can be applied directly. In one possible example, where the character information includes the character text of the promotional character and the campaign information includes the campaign text, step S102 may comprise: classifying the texts in the promotional file to obtain the character text and the campaign text.
If the promotional file contains images, the character information includes the character image of the promotional character and the campaign information includes the campaign image, and step S102 may comprise: classifying the images in the promotional file to obtain the character images and campaign images.
S103, acquiring a target text corresponding to the character information and the campaign information, text features of the target text, a target image, and image features of the target image.
In this embodiment, the target text is text usable on the promotional page. Its text features may include campaign information such as the theme, time, place, audience, and product, or the business information of the services to be promoted. They may also include the character's introductory information, such as work experience relevant to the campaign: if the promotional character is a fund manager, the character features may be name, education, research direction, and return history. They may further include information linking the character to the promoted services, such as the character's commissions and rebates on them. Introducing the campaign in text improves the readability of the page.
The target image is an image usable on the promotional page. Its image features may include a product image of the promoted services, an illustration answering a related question, or an image of the promotional character. Including images likewise improves the readability of the page.
In one possible example, step S103 may comprise the following steps A1 to A4:
A1, acquiring a first text and a first image corresponding to the campaign information.
In this embodiment, the first text may be contained in the campaign information or derived from it, and is usable on the promotional page: it may be text selected from the campaign information, or text found by search or produced by processing it. Likewise, the first image may be contained in or derived from the campaign information: an image selected from it, or one found or generated from it. Neither the first text and first image nor the methods of obtaining them are limited by this application.
In one possible example, where the campaign information includes campaign text and campaign images, step A1 may comprise the following steps A11 to A15:
A11, performing named-entity recognition on the campaign text to obtain a first sub-text corresponding to the campaign text.
Named-entity recognition (NER) is a basic tool in information extraction, question answering, syntactic analysis, machine translation, and similar applications. It identifies entities of three broad categories (entity, time, and number) and seven narrow categories (person name, organization name, place name, time, date, currency, and percentage) in the text to be processed. For example, for the sentence "Xiaoming goes to school for class at 8 am", NER should extract {person: Xiaoming; time: 8 am; place: school}.
The first sub-text may be contained in or derived from the campaign text, and is usable on the page. It may be a text vector produced by running NER on the campaign text, or one in which each entity is weighted by its importance in the sentence or segment. For example, in the title "A brief talk on the pension plan, lock in a quality life", the entities may be "pension" with weight 0.8 and "quality" with weight 0.6. How the weights are determined is not limited; they may be based on the entity's part of speech, its position in the sentence, and so on.
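Entity weighting of a campaign title can be sketched with a toy lexicon carrying the weights from the example above. The lexicon and tokenizer are illustrative stand-ins; a production system would use a trained NER model rather than a lookup table.

```python
# Hypothetical entity lexicon with the importance weights from the example.
ENTITY_WEIGHTS = {"pension": 0.8, "quality": 0.6}

def extract_weighted_entities(title):
    # Naive tokenization; a real pipeline would run NER over the title.
    tokens = title.lower().replace(",", " ").split()
    return {t: ENTITY_WEIGHTS[t] for t in tokens if t in ENTITY_WEIGHTS}

entities = extract_weighted_entities(
    "A brief talk on the pension plan, lock in a quality life")
```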
And A12, acquiring a second sub-text corresponding to the activity image.
The second sub-text is text acquired from the activity image and used for promoting the page. The second sub-text may include summary information of the activity image, or introduction information of the activity image, and the like; the second sub-text and the method for obtaining it are not limited in the present application.
In one possible example, step A12 may include steps A121 and A122, where:
and A121, carrying out target identification on the moving image to obtain the image attribute of the moving object.
And A122, generating a second sub-text corresponding to the activity image based on the image attributes.
In this embodiment of the application, the activity object may be a service to be advertised or a product corresponding to the service to be advertised, and the like, which is not limited herein. The image attributes of the activity object include image features such as the size, shape, and color of the activity object, and may also include characters obtained by image recognition, which is not limited herein. The second sub-text may be obtained by combining the characters corresponding to the image attributes, or may be generated based on textual descriptions of the image features corresponding to the image attributes, and the like.
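Step A122 can be sketched as follows. The attribute names (`color`, `shape`, `ocr_text`) and the sentence template are illustrative assumptions, not the implementation of this application; they merely show how recognized attributes could be composed into a second sub-text.

```python
# Sketch of step A122: composing a second sub-text from the image
# attributes of the activity object. Attribute keys and the sentence
# template are hypothetical.
def attributes_to_text(attrs: dict) -> str:
    parts = []
    if "color" in attrs and "shape" in attrs:
        parts.append(f"a {attrs['color']} {attrs['shape']} product")
    if "ocr_text" in attrs:  # characters obtained by image recognition
        parts.append(f'labelled "{attrs["ocr_text"]}"')
    return " ".join(parts)
```

For instance, attributes `{"color": "red", "shape": "round", "ocr_text": "SALE"}` would yield the sub-text `a red round product labelled "SALE"`.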
It can be understood that, in steps A121 and A122, the second sub-text corresponding to the activity image is generated based on the image attributes of the activity object obtained by performing target recognition on the activity image, which can improve the accuracy of acquiring the text.
It should be noted that step A11 may be performed before, after, or simultaneously with step A12, which is not limited herein.
And A13, combining the first sub-text and the second sub-text to obtain a first text corresponding to the activity information.
In an embodiment of the present application, the first text includes the first sub-text and the second sub-text, covering both the features that differ between them and their similar features. The combination method is not limited; for example, classification may be performed based on the first sub-text and the second sub-text to obtain at least two text sets, and the text vectors corresponding to each text set may then be combined to obtain the first text corresponding to the activity information.
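The classify-then-merge combination left open above can be sketched as follows. Grouping sentences by a crude first-word key is a hypothetical stand-in for the unspecified classification method; a real implementation would cluster text vectors.

```python
# Sketch of step A13: classify sentences from the two sub-texts into
# topic sets, then merge each set into part of the first text. The
# first-word grouping key is a toy stand-in for real classification.
def combine_sub_texts(first_sub, second_sub):
    groups = {}
    for sentence in first_sub + second_sub:
        key = sentence.split()[0].lower()   # crude topic key
        groups.setdefault(key, []).append(sentence)
    return [" ".join(sents) for sents in groups.values()]
```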
And A14, acquiring other activity images based on the first text.
In the embodiment of the present application, the other activity images refer to activity images found based on the first text, which may be understood as activity images not uploaded by the user. The other activity images may include promotional scene images, such as scene graphs of previous promotions by the propaganda character. Alternatively, the other activity images may include a question-answer image of the service to be advertised, for explaining problems that may be encountered when transacting the service to be advertised or purchasing a product corresponding to the service to be advertised.
In a possible example, where the other activity images include a question-answer image of the service to be advertised, step A14 may include steps A141 and A142, where:
A141, determining the question of the service to be advertised and the reference answer to the question based on the first text.
In this embodiment of the application, the question of the service to be advertised may be determined from the service flow of the service to be advertised based on the first text, or may be a question obtained by performing statistics on service information, such as a frequently asked question. The reference answer to the question may be a preset standard answer, or an answer obtained by searching a network or a standard protocol, which is not limited herein.
And A142, generating the question-answer image based on the question and the reference answer.
In the embodiment of the present application, the question-answer image may be an image obtained by arranging and combining each question with its reference answer. The styles of the questions may be the same or different, as may the styles of the reference answers, and a reference answer's style may be the same as or different from that of its corresponding question. The order of the questions and reference answers may be determined based on, for example, the number of times each question was asked or the hierarchy of the questions, which is not limited herein.
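The ordering rule mentioned above can be sketched as follows. Ordering by times-asked is one of the two options the text names; the question-answer pairs and counts are invented for illustration.

```python
# Sketch of step A142's ordering rule: arrange question-answer pairs by
# how often each question was asked. A real system could alternatively
# order by question hierarchy, as the text notes.
def order_qa(qa_items):
    """qa_items: list of (question, answer, times_asked) tuples."""
    ranked = sorted(qa_items, key=lambda item: item[2], reverse=True)
    return [(q, a) for q, a, _ in ranked]
```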
It can be understood that, in steps A141 and A142, the question-answer image of the service to be advertised is generated based on the question of the service to be advertised determined from the first text and the reference answer to that question, so that the service to be advertised can be explained and its promotion effect can be improved.
And A15, combining the activity image and the other activity images to obtain a first image corresponding to the activity information.
In the embodiment of the present application, the first image includes the activity image and the other activity images, and may cover both the features that differ between them and their similar features. The method for combining the activity image and the other activity images is not limited; for example, similar images may be fused, or the image features may be adjusted to be uniform.
It is understood that, in steps A11 to A15, the first sub-text corresponding to the activity text and the second sub-text corresponding to the activity image are combined to obtain the first text corresponding to the activity information, and the other activity images acquired based on the first text are combined with the activity image to obtain the first image corresponding to the activity information. Text and images are thus acquired from both the textual and the visual perspective, which improves the comprehensiveness of acquiring the first text and first image corresponding to the activity information.
And A2, acquiring a second text and a second image corresponding to the character information.
In the embodiment of the present application, the second text may be included in the character information, or may be obtained based on the character information, and is text that may be used in the promotion page. That is, the second text may be text selected from the character information, or may be text searched for or generated by processing based on the character information. Similarly, the second image may be included in the character information, or may be obtained based on the character information, and is an image that may be used in the promotion page. That is, the second image may be an image selected from the character information, or may be an image searched for or generated by processing based on the character information. The second text and the second image, and the methods for acquiring them, are not limited in the present application.
In one possible example, where the character information includes a character text and a character image, and the character text includes at least two kinds of character sub-texts, step A2 may include the following steps A21 to A23, wherein:
and A21, selecting the target character sub-text from the character sub-texts.
In the embodiment of the present application, the target person sub-text is a person sub-text selected from the person text. The selection method is not limited in this application; in one possible example, step A21 may include the following steps A211 to A213, where:
and A211, searching a historical publicity page based on the first text.
In an embodiment of the present application, the historical publicity pages may include publicity pages having the same theme, type, etc. as the activity in the first text. It can be understood that selecting the target character sub-text based on historical publicity pages can improve the selection accuracy.
A212, acquiring historical character information of historical publicity characters from the historical publicity pages.
In the embodiment of the present application, the historical character information refers to information corresponding to a historical publicity character in a historical publicity page. It should be noted that the number of the history publicity pages may be 1 or more. The number of the history personal information may be 1 or more.
And A213, selecting a target person sub-text from the person sub-text based on the similarity value between the attribute of the historical person information and the attribute of the person sub-text.
In this embodiment of the present application, the target person sub-text may be a person sub-text having a similarity value with the attribute of the historical person information greater than a preset threshold. The preset threshold may be a fixed value, or may be determined based on the number of the historical personal information or the number of the personal sub-texts, which is not limited herein.
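The threshold-based selection in step A213 can be sketched as follows. Jaccard overlap of attribute sets stands in for the unspecified similarity measure, and the attribute sets and threshold are illustrative assumptions.

```python
# Sketch of step A213: keep the person sub-texts whose attribute
# similarity to the historical person information exceeds the preset
# threshold. Jaccard overlap is a hypothetical similarity measure.
def select_target_sub_texts(candidates, history_attrs, threshold=0.5):
    """candidates: {sub_text_name: set_of_attributes}."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return [name for name, attrs in candidates.items()
            if jaccard(attrs, history_attrs) > threshold]
```

As the text notes, the threshold could instead vary with the amount of historical person information or the number of person sub-texts.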
It can be understood that, in steps A211 to A213, the target character sub-text is selected according to the similarity value between the attributes of the historical character information of historical propaganda characters, taken from the historical propaganda pages searched based on the first text, and the attributes of each character sub-text. This can improve the accuracy of selecting the target character sub-text and thereby the display effect of the text in the propaganda page.
And A22, conducting named entity recognition on the target character sub-text to obtain a second text corresponding to the character information.
And A23, adjusting the character image based on a preset image template to obtain a second image corresponding to the character information.
For performing named entity recognition on the target character sub-text, reference may be made to the description of step A11, which will not be repeated herein. The preset image template may include a background image, or may be an image generation model obtained by training a publicity algorithm or a neural network, and the like. The method for adjusting the character image based on the preset image template is not limited; for example, a face image and a limb image may first be extracted from the character image and then adjusted based on the preset image template. Specifically, the shooting angle of the publicity character may be determined based on the face image and the limb image; a target image template corresponding to the shooting angle may be acquired from the preset image templates; and the background in the target image template may be combined with the face image and the limb image to obtain the second image. Alternatively, human features corresponding to the face image and the limb image may be obtained based on the preset image template, and the second image may then be generated based on those human features.
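The template-selection step above can be sketched as follows. The angle buckets and template names are illustrative assumptions; the sketch only shows choosing the preset template whose angle is closest to the estimated shooting angle.

```python
# Sketch of acquiring a target image template corresponding to the
# shooting angle estimated from the face and limb images. The angle
# buckets and template names are hypothetical.
def pick_template(shooting_angle, templates):
    """templates: {angle_in_degrees: template_name}; nearest bucket wins."""
    bucket = min(templates, key=lambda a: abs(a - shooting_angle))
    return templates[bucket]
```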
It is to be understood that, in steps A21 to A23, named entity recognition is performed on the target character sub-text selected from the character sub-texts to obtain the second text corresponding to the character information, and the character image is adjusted based on the preset image template to obtain the second image corresponding to the character information. The accuracy of acquiring the second text and the second image can thereby be improved.
A3, acquiring, based on the first text and the second text, a target text corresponding to the character information and the activity information and text features of the target text.
A4, acquiring, based on the first image and the second image, a target image corresponding to the character information and the activity information and image features of the target image.
In step A3, the first text and the second text may be classified so as to fuse similar texts and retain the texts with differences. Similarly, in step A4, the first image and the second image may be classified so as to fuse similar images and retain the images with differences.
It is to be understood that, in steps A1 to A4, the texts and images corresponding to the activity information and the character information are acquired first, and the target text and its text features, and the target image and its image features, are then acquired based on them. This can improve the accuracy of acquiring the images and texts, and hence the accuracy of acquiring the text features and image features.
It should be noted that step A1 may be performed before, after, or simultaneously with step A2, and step A3 may be performed before, after, or simultaneously with step A4, which is not limited herein.
And S104, acquiring a publicity page style corresponding to the text feature and the image feature based on a page style model.
In an embodiment of the application, the page style model is used for obtaining a publicity page style, so that the publicity page can be generated based on the publicity page style and the contents of the publicity page. The page style model is not limited in this application; in one possible example, the page style model is a Generative Adversarial Network (GAN).
The method for training the GAN may include the following steps: acquiring reference publicity pages from a network; classifying the reference publicity pages to obtain training data and test data; performing unsupervised learning based on the training data to obtain an initialization model of the GAN; and performing supervised learning on the initialization model of the GAN based on the test data to obtain the trained GAN.
The reference publicity pages may be publicity pages of various styles, each style carrying label information of image features and text features. The label information may be obtained by classification using a pre-trained convolutional neural network (CNN). Obtaining the publicity page style through the trained GAN is therefore beneficial to improving the display effect of the generated publicity page.
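The training pipeline described above can be sketched in outline. The learning phases are stubs standing in for real GAN updates, and the split ratio is an assumption; the sketch shows only the data flow the text gives (split, unsupervised phase, supervised phase).

```python
# Outline sketch of the GAN training pipeline described above. The two
# learning phases are stubs, not real generator/discriminator updates.
def split_pages(pages, train_ratio=0.8):
    cut = int(len(pages) * train_ratio)
    return pages[:cut], pages[cut:]

def train_page_style_model(pages):
    train, test = split_pages(pages)
    model = {"phase": "unsupervised", "trained_on": len(train)}
    model["phase"] = "supervised"      # fine-tune on the labelled test data
    model["evaluated_on"] = len(test)
    return model
```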
And S105, generating a publicity page corresponding to the target text and the target image based on the publicity page style.
In an embodiment of the present application, a propaganda page may be used before a campaign to increase its impact, or after the campaign is completed to enhance its effect. The propaganda page includes the target text and the target image. It can be understood that the propaganda page is generated based on the propaganda page style obtained from the text features of the target text and the image features of the target image, and is used for promoting the character and the related activity; the display effect of the propaganda page, and hence the propaganda effect, can thereby be improved.
The arrangement order between the target text and the target image in the propaganda page is not limited in the present application; it may be determined based on historical propaganda pages, or obtained by sorting based on the abstracts corresponding to the target text and the target image.
It should be noted that the publicity page style may include a plurality of sub-styles, i.e., styles corresponding to different text types and different image types. Therefore, after the publicity page style is obtained, the target text and the target image may be processed based on it so that they satisfy the corresponding sub-styles.
In the method shown in fig. 1, when a propaganda page generation request sent by a user is received, the propaganda file in the request is analyzed to obtain the character information and activity information of the propaganda character. The target text and its text features, and the target image and its image features, corresponding to the character information and the activity information are then acquired. A propaganda page style corresponding to the text features and image features is acquired based on the page style model, and a propaganda page corresponding to the target text and the target image is generated based on that style. The propaganda page can thus be generated from the propaganda file, which improves generation efficiency. Moreover, since the propaganda page is generated based on a style derived from the text features and image features and is used to promote the character and the related activities, the display effect of the propaganda page, and hence the propaganda effect, can be improved.
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a promotion page generation apparatus based on natural language processing according to the present application, consistent with the embodiment shown in fig. 1. As shown in fig. 2, the apparatus 200 for generating a publicity page based on natural language processing includes:
the receiving unit 201 is configured to receive a promotion page generation request sent by a user, where the promotion page generation request includes a promotion file;
the parsing unit 202 is configured to parse the promotion file to obtain character information and activity information of a promotion character;
the acquiring unit 203 is configured to acquire a target text corresponding to the person information and the activity information, a text feature of the target text, and a target image and an image feature of the target image; acquiring a publicity page style corresponding to the text feature and the image feature based on a page style model;
the generating unit 204 is configured to generate a promotion page corresponding to the target text and the target image based on the promotion page style.
In a possible example, the obtaining unit 203 is specifically configured to obtain a first text and a first image corresponding to the activity information; acquiring a second text and a second image corresponding to the character information; acquiring a target text corresponding to the character information and the activity information and text characteristics of the target text based on the first text and the second text; and acquiring a target image corresponding to the person information and the activity information and image characteristics of the target image based on the first image and the second image.
In a possible example, the activity information includes an activity text and an activity image, and the obtaining unit 203 is specifically configured to perform named entity recognition on the activity text to obtain a first sub-text corresponding to the activity text; acquire a second sub-text corresponding to the activity image; combine the first sub-text and the second sub-text to obtain a first text corresponding to the activity information; acquire other activity images based on the first text; and combine the activity image and the other activity images to obtain a first image corresponding to the activity information.
In a possible example, the obtaining unit 203 is specifically configured to perform target recognition on the activity image to obtain image attributes of the activity object; and generate a second sub-text corresponding to the activity image based on the image attributes.
In a possible example, the other activity images include a question-answer image of a service to be advertised, and the obtaining unit 203 is specifically configured to determine a question of the service to be advertised and a reference answer to the question based on the first text; and generate the question-answer image based on the question and the reference answer.
In a possible example, the personal information includes a personal text and a personal image, the personal text includes at least two kinds of personal sub-texts, and the obtaining unit 203 is specifically configured to select a target personal sub-text from the personal sub-texts; carrying out named entity recognition on the target character sub-text to obtain a second text corresponding to the character information; and adjusting the figure image based on a preset image template to obtain a second image corresponding to the figure information.
In a possible example, the obtaining unit 203 is specifically configured to search a history promotion page based on the first text; acquiring historical character information of historical propaganda characters from the historical propaganda page; and selecting a target person sub-text from the person sub-texts based on the similarity value between the attribute of the historical person information and the attribute of the person sub-text.
The detailed processes executed by each unit in the promotion page generation apparatus 200 based on natural language processing can refer to the execution steps in the foregoing method embodiments, and are not described herein.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure. As shown in fig. 3, the computer device 300 includes a processor 310, a memory 320, and a communication interface 330. The processor 310, memory 320, and communication interface 330 are interconnected by a bus 350. The related functions implemented by the receiving unit 201 shown in fig. 2 may be implemented by the communication interface 330, and the related functions implemented by the parsing unit 202, the obtaining unit 203 and the generating unit 204 shown in fig. 2 may be implemented by the processor 310.
The memory 320 has stored therein a computer program 340, the computer program 340 being configured to be executed by the processor 310, the computer program 340 comprising instructions for:
receiving a propaganda page generation request sent by a user, wherein the propaganda page generation request comprises a propaganda file;
analyzing the propaganda file to obtain character information and activity information of propaganda characters;
acquiring a target text corresponding to the character information and the activity information, text characteristics of the target text, a target image and image characteristics of the target image;
acquiring a publicity page style corresponding to the text feature and the image feature based on a page style model;
and generating the propaganda page corresponding to the target text and the target image based on the propaganda page style.
In one possible example, in the aspect of obtaining the target text corresponding to the person information and the activity information and the text feature of the target text, and the target image and the image feature of the target image, the computer program 340 includes instructions specifically configured to:
acquiring a first text and a first image corresponding to the activity information;
acquiring a second text and a second image corresponding to the character information;
acquiring a target text corresponding to the character information and the activity information and text characteristics of the target text based on the first text and the second text;
and acquiring a target image corresponding to the person information and the activity information and image characteristics of the target image based on the first image and the second image.
In one possible example, the activity information comprises an activity text and an activity image, and in the aspect of obtaining the first text and the first image corresponding to the activity information, the computer program 340 comprises instructions specifically configured to:
performing named entity recognition on the activity text to obtain a first sub-text corresponding to the activity text;
acquiring a second sub-text corresponding to the activity image;
combining the first sub-text and the second sub-text to obtain a first text corresponding to the activity information;
acquiring other activity images based on the first text;
and combining the activity image and the other activity images to obtain a first image corresponding to the activity information.
In one possible example, in said acquiring the second sub-text corresponding to the activity image, the computer program 340 comprises instructions for specifically performing the following steps:
performing target recognition on the activity image to obtain image attributes of the activity object;
and generating a second sub-text corresponding to the activity image based on the image attributes.
In one possible example, said other activity images comprise a question-answer image of a service to be advertised, and in said acquiring other activity images based on the first text, the computer program 340 comprises instructions for specifically performing the following steps:
determining a question of the service to be advertised and a reference answer to the question based on the first text;
generating the question-answer image based on the question and the reference answer.
In one possible example, the personal information includes a personal text and a personal image, the personal text includes at least two types of personal sub-texts, and in the obtaining of the second text and the second image corresponding to the personal information, the computer program 340 includes instructions specifically configured to:
selecting a target person sub-text from the person sub-texts;
carrying out named entity recognition on the target character sub-text to obtain a second text corresponding to the character information;
and adjusting the figure image based on a preset image template to obtain a second image corresponding to the figure information.
In one possible example, in the selecting the target person sub-text from the person sub-text, the computer program 340 includes instructions specifically for:
searching a historical publicity page based on the first text;
acquiring historical character information of historical propaganda characters from the historical propaganda page;
and selecting a target person sub-text from the person sub-texts based on the similarity value between the attribute of the historical person information and the attribute of the person sub-text.
Embodiments of the present application also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program for causing a computer to execute to implement part or all of the steps of any one of the methods described in the method embodiments, and the computer includes an electronic device or a server.
Embodiments of the application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform to implement some or all of the steps of any of the methods recited in the method embodiments. The computer program product may be a software installation package and the computer comprises an electronic device or a server.
In the above-described embodiments, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like. For example, the blockchain may store a page style model, a preset image template, a historical publicity page, information related to publicity characters, etc., which is not limited herein.
The blockchain in the embodiments of the present application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptography, where each data block contains information on a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art will also appreciate that the embodiments described in this specification are presently preferred and that no particular act or mode of operation is required in the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, at least one unit or component may be combined or integrated with another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may also be distributed on at least one network unit. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a hardware mode or a software program mode.
The integrated unit, if implemented in the form of a software program module and sold or used as a stand-alone product, may be stored in a computer readable memory. With such an understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods according to the embodiments of the present application. And the aforementioned memory comprises: various media that can store program codes, such as a usb disk, a read-only memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. A propaganda page generation method based on natural language processing, characterized by comprising the following steps:
receiving a propaganda page generation request sent by a user, wherein the propaganda page generation request comprises a propaganda file;
parsing the propaganda file to obtain person information of a propaganda person and activity information;
acquiring a target text corresponding to the person information and the activity information, a text feature of the target text, a target image, and an image feature of the target image;
acquiring a propaganda page style corresponding to the text feature and the image feature based on a page style model;
and generating the propaganda page corresponding to the target text and the target image based on the propaganda page style.
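The four-step flow of claim 1 can be sketched as follows. This is a purely illustrative toy: the patent discloses no concrete parser, feature extractor, or style model, so every function name and body here is a hypothetical stand-in.

```python
# Minimal sketch of the claimed pipeline; all names are illustrative
# placeholders, not disclosed implementations.

def parse_file(propaganda_file: str) -> tuple:
    # Toy parse: first line names the propaganda person, the rest
    # describes the activity.
    lines = propaganda_file.splitlines()
    return {"name": lines[0]}, {"text": " ".join(lines[1:])}

def build_target_text(person: dict, activity: dict) -> tuple:
    text = f"{person['name']}: {activity['text']}"
    return text, {"length": len(text)}  # trivial stand-in "text feature"

def page_style_model(text_feature: dict) -> str:
    # Stand-in for the learned page-style model: choose a layout
    # from the (toy) text feature.
    return "banner" if text_feature["length"] > 40 else "card"

def generate_page(propaganda_file: str) -> dict:
    person, activity = parse_file(propaganda_file)            # parse request payload
    text, text_feature = build_target_text(person, activity)  # target text + feature
    style = page_style_model(text_feature)                    # style from features
    return {"style": style, "text": text}                     # assembled page

page = generate_page("Dr. Li\nSpring health lecture, Saturday 10am")
```

A real system would replace each stub with a trained model; only the data flow (parse, extract, style, render) mirrors the claim.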
2. The method of claim 1, wherein the acquiring a target text corresponding to the person information and the activity information, a text feature of the target text, a target image, and an image feature of the target image comprises:
acquiring a first text and a first image corresponding to the activity information;
acquiring a second text and a second image corresponding to the person information;
acquiring the target text corresponding to the person information and the activity information and the text feature of the target text based on the first text and the second text;
and acquiring the target image corresponding to the person information and the activity information and the image feature of the target image based on the first image and the second image.
3. The method according to claim 2, wherein the activity information comprises an activity text and an activity image, and the acquiring a first text and a first image corresponding to the activity information comprises:
performing named entity recognition on the activity text to obtain a first sub-text corresponding to the activity text;
acquiring a second sub-text corresponding to the activity image;
combining the first sub-text and the second sub-text to obtain the first text corresponding to the activity information;
acquiring other activity images based on the first text;
and combining the activity image and the other activity images to obtain the first image corresponding to the activity information.
4. The method according to claim 3, wherein the acquiring a second sub-text corresponding to the activity image comprises:
performing target recognition on the activity image to obtain an image attribute of an activity object;
and generating the second sub-text corresponding to the activity image based on the image attribute.
5. The method according to claim 3, wherein the other activity images comprise a question-and-answer image of a service to be advertised, and the acquiring other activity images based on the first text comprises:
determining a question of the service to be advertised and a reference answer to the question based on the first text;
and generating the question-and-answer image based on the question and the reference answer.
6. The method of claim 2, wherein the person information comprises a person text and a person image, the person text comprises at least two types of person sub-texts, and the acquiring a second text and a second image corresponding to the person information comprises:
selecting a target person sub-text from the person sub-texts;
performing named entity recognition on the target person sub-text to obtain the second text corresponding to the person information;
and adjusting the person image based on a preset image template to obtain the second image corresponding to the person information.
7. The method of claim 6, wherein the selecting the target person sub-text from the person sub-texts comprises:
searching for a historical propaganda page based on the first text;
acquiring historical person information of a historical propaganda person from the historical propaganda page;
and selecting the target person sub-text from the person sub-texts based on similarity values between attributes of the historical person information and attributes of the person sub-texts.
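The selection step of claim 7 can be sketched as a similarity-ranked choice. Jaccard similarity over word sets is used here only as a toy stand-in; the patent does not specify the similarity measure, and all names are hypothetical.

```python
# Hedged sketch of claim 7: keep the candidate person sub-text most
# similar to the historical person information.

def jaccard(a: str, b: str) -> float:
    # Word-set overlap as a toy similarity value between attributes.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def select_target_sub_text(historical_info: str, sub_texts: list) -> str:
    # Rank person sub-texts by similarity to history; return the best.
    return max(sub_texts, key=lambda s: jaccard(historical_info, s))

historical = "chief physician cardiology 20 years experience"
candidates = [
    "award-winning marathon runner",
    "cardiology physician with 20 years of clinical experience",
]
best = select_target_sub_text(historical, candidates)
```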
8. A propaganda page generation apparatus based on natural language processing, characterized by comprising:
a receiving unit, configured to receive a propaganda page generation request sent by a user, wherein the propaganda page generation request comprises a propaganda file;
a parsing unit, configured to parse the propaganda file to obtain person information of a propaganda person and activity information;
an acquiring unit, configured to acquire a target text corresponding to the person information and the activity information, a text feature of the target text, a target image, and an image feature of the target image, and to acquire a propaganda page style corresponding to the text feature and the image feature based on a page style model;
and a generating unit, configured to generate the propaganda page corresponding to the target text and the target image based on the propaganda page style.
9. A computer device, characterized in that it comprises a processor, a memory and a communication interface, wherein the memory stores a computer program configured to be executed by the processor, the computer program comprising instructions for carrying out the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a computer, causes the computer to implement the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210168361.1A CN114581124A (en) | 2022-02-23 | 2022-02-23 | Propaganda page generation method based on natural language processing and related device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114581124A true CN114581124A (en) | 2022-06-03 |
Family
ID=81773533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210168361.1A Pending CN114581124A (en) | 2022-02-23 | 2022-02-23 | Propaganda page generation method based on natural language processing and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114581124A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117389559A (en) * | 2023-10-16 | 2024-01-12 | 百度在线网络技术(北京)有限公司 | Target page generation method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106096995A (en) * | 2016-05-31 | 2016-11-09 | 腾讯科技(深圳)有限公司 | Advertising creative processing method and advertising creative processing means |
CN113315998A (en) * | 2021-04-23 | 2021-08-27 | 浙江海鲤智慧科技有限公司 | Intelligent video propaganda system and method |
CN113377971A (en) * | 2021-05-31 | 2021-09-10 | 北京达佳互联信息技术有限公司 | Multimedia resource generation method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||