
CN113407815B - Method and device for generating scene theme - Google Patents

Method and device for generating scene theme

Info

Publication number
CN113407815B
CN113407815B (granted from application CN202010182006.0A)
Authority
CN
China
Prior art keywords
scene
combination
category
optional
category combination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010182006.0A
Other languages
Chinese (zh)
Other versions
CN113407815A (en)
Inventor
简晓容
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202010182006.0A
Publication of CN113407815A
Application granted
Publication of CN113407815B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9536Search customisation based on social or collaborative filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method and device for generating scene topics, relating to the field of computer technology. One embodiment of the method comprises the following steps: receiving a request to generate a scene topic, and determining, from the request, the target category combination corresponding to the scene topic to be generated; performing scene-matching analysis on the target category combination against a pre-built category mapping dictionary and category combination database, to obtain the candidate scene statements corresponding to the target category combination; and structurally processing the candidate scene statements into candidate scene phrases, then screening out from those phrases the target titles for the scene topic to be generated. In this way a scene topic can be generated intelligently from the target category combination entered by the user, overcoming the prior-art drawbacks of relying on dedicated writers, which is time-consuming, labor-intensive, costly, and highly subjective.

Description

Method and device for generating scene theme
Technical Field
The present invention relates to the field of computer technology, and in particular to a method and apparatus for generating a scene topic.
Background
In the intelligent age, more and more users select content of interest for themselves over the Internet, for example purchasing the items they need on shopping websites, or browsing the news that interests them on news sites. To attract users and make purchasing and browsing convenient, websites set up columns or channels that display or recommend items to users. The combinations of items presented in these columns or channels are accompanied by a textual scene description, which is referred to as a scene topic.
In the prior art, scene topics are written by specific groups of people (such as influencers). In implementing the present invention, the inventor found at least the following problems in the prior art: writing by dedicated people is time-consuming and labor-intensive, costly, and highly subjective.
Disclosure of Invention
In view of this, embodiments of the invention provide a method and device for generating scene topics that can intelligently generate a scene topic from a target category combination entered by a user, overcoming the prior-art drawbacks of requiring dedicated writers: time and labor cost, expense, and strong subjectivity.
To achieve the above object, according to a first aspect of an embodiment of the present invention, there is provided a method of generating a scene topic.
The method for generating a scene topic according to an embodiment of the invention comprises: receiving a request to generate a scene topic, and determining, from the request, the target category combination corresponding to the scene topic to be generated; performing scene-matching analysis on the target category combination against a pre-built category mapping dictionary and category combination database, to obtain the candidate scene statements corresponding to the target category combination; and structurally processing the candidate scene statements into candidate scene phrases, then screening out from those phrases the target titles for the scene topic to be generated.
Optionally, performing scene-matching analysis on the target category combination against the pre-built category mapping dictionary and category combination database to obtain the candidate scene statements comprises: querying the category mapping dictionary for the parent category combination corresponding to the target category combination, to generate the extended category combination corresponding to the target category combination; computing, according to predefined rules, the scene match values between the extended category combination and the material category combinations in the category combination database; screening selectable category combinations out of the material category combinations according to the scene match values and a preset number of scene statements; and retrieving from the category combination database the scene statements corresponding to the selected category combinations as the candidate scene statements.
Optionally, computing the scene match values between the extended category combination and the material category combinations according to predefined rules comprises, for each material category combination in the category combination database: obtaining the number of first elements in the extended category combination, the number of second elements in that material category combination, the number of elements in their intersection, and the number of collinear words they share; and deriving the scene match value for the pair from the number of first elements, the number of second elements, the number of intersection elements, and the number of collinear words.
Optionally, before performing the scene-matching analysis on the target category combination, the method further comprises: determining each material category combination and obtaining the scene statements corresponding to it; and constructing the category combination database from the material category combinations and their corresponding scene statements.
Optionally, after retrieving the scene statements corresponding to the selected category combinations as the candidate scene statements, the method further comprises: filtering the candidate scene statements with a pre-built filtering dictionary, wherein the filtering dictionary comprises at least one of a holiday dictionary, a personal-name dictionary, and a negative-vocabulary dictionary.
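This filtering step can be sketched as follows. A minimal sketch, assuming dictionary contents and substring matching for brevity; the patent only specifies that the filter dictionary contains holiday, personal-name, and negative-vocabulary entries.

```python
# Minimal sketch of the filtering step: drop candidate scene statements that
# contain any entry from the pre-built filter dictionaries. The dictionary
# contents below are illustrative assumptions, not from the patent.
FILTER_DICTS = {
    "holiday": {"Christmas", "Spring Festival"},
    "person_name": {"Alice"},
    "negative": {"terrible", "boring"},
}

def filter_statements(statements):
    banned = set().union(*FILTER_DICTS.values())
    # Substring matching keeps the sketch short; a production system would
    # match against segmented words to avoid false hits inside longer words.
    return [s for s in statements
            if not any(term in s for term in banned)]
```

For example, a statement mentioning "terrible" would be removed while neutral statements pass through unchanged.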
Optionally, structurally processing the candidate scene statements into candidate scene phrases comprises, for each candidate scene statement: extracting at least one feature word and at least one viewpoint word from the statement using its dependency syntax structure, and computing the relatedness and semantic similarity between the feature words and the viewpoint words; generating the feature-viewpoint phrase of the statement from the relatedness, the semantic similarity, a preset relatedness threshold, and a preset semantic-similarity threshold, in combination with the dependency syntax structure; and taking the generated feature-viewpoint phrase as the candidate scene phrase corresponding to the statement.
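The pairing logic above can be sketched as follows. This is a hedged sketch: a real implementation would extract the pairs with a dependency parser, while here the parsed tuples are assumed inputs; the threshold values and the direction of the similarity test are assumptions, since the patent states only that both thresholds are preset.

```python
# Sketch of turning extracted (feature, viewpoint) pairs into scene phrases.
# Inputs are assumed to come from a dependency parser; thresholds illustrative.
def build_scene_phrases(pairs, rel_threshold=0.5, sim_threshold=0.3):
    """pairs: iterable of (feature_word, viewpoint_word, relatedness, similarity)."""
    phrases = []
    for feature, viewpoint, relatedness, similarity in pairs:
        # Keep pairs the dependency structure links strongly enough, and whose
        # words are not near-synonyms of each other (assumed interpretation).
        if relatedness >= rel_threshold and similarity <= sim_threshold:
            phrases.append(f"{viewpoint} {feature}")
    return phrases
```

For instance, a strongly related pair like ("sneakers", "breathable") yields the phrase "breathable sneakers", while weakly related pairs are discarded.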
Optionally, screening out from the candidate scene phrases the target titles for the scene topic to be generated comprises: determining, from the generation request, the word counts of the main title, the short sub-title, and the long sub-title for the scene topic to be generated; computing the sentiment value of each candidate scene phrase with a classification model, and obtaining the scene match value of each candidate scene phrase; and screening the main title, short sub-title, and long sub-title out of the candidate scene phrases according to the main-title word count, short-sub-title word count, long-sub-title word count, sentiment values, and scene match values.
Optionally, this screening comprises: computing a weighted score for each candidate scene phrase from its sentiment value and scene match value with a preset weighting algorithm; screening the main title out of the candidate scene phrases according to the main-title word count and the weighted scores; computing, with a BERT model, a relation value between the main title and each alternative scene phrase, where the alternative scene phrases are the candidate scene phrases other than the main title; and screening the short sub-title and the long sub-title out of the alternative scene phrases according to the short-sub-title word count, the long-sub-title word count, the weighted scores, and the relation values.
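The title-screening flow can be sketched as follows. Everything not stated above is an assumption: the linear weighting, the weights, the `relation` callable (standing in for the BERT relation model), and the use of English word counts as a proxy for the patent's title word counts.

```python
# Illustrative sketch of title screening: rank phrases by a weighted score of
# sentiment and scene match value, pick a main title within its word count,
# then pick sub-titles from the remainder by relation to the main title.
def weighted_score(sentiment, match, w_sent=0.4, w_match=0.6):
    # Linear weighting is an assumption; the patent's algorithm is preset
    # but not disclosed in this excerpt.
    return w_sent * sentiment + w_match * match

def pick_titles(phrases, main_len, short_len, long_len, relation):
    """phrases: list of (text, sentiment, match_value) tuples.
    relation(a, b) -> float stands in for the BERT relation model."""
    ranked = sorted(phrases, key=lambda p: weighted_score(p[1], p[2]),
                    reverse=True)
    # Main title: best-scoring phrase within the main-title word count.
    main = next(p[0] for p in ranked if len(p[0].split()) <= main_len)
    # Alternative phrases: everything except the chosen main title,
    # re-ranked by how strongly they relate to it.
    rest = [p for p in ranked if p[0] != main]
    rest.sort(key=lambda p: relation(main, p[0]), reverse=True)
    short = [p[0] for p in rest if len(p[0].split()) <= short_len]
    long_ = [p[0] for p in rest
             if short_len < len(p[0].split()) <= long_len]
    return main, short[:1], long_[:1]
```

A usage example with a dummy relation function would pick the highest-scoring short phrase as the main title, then fill the short and long sub-title slots from the remaining phrases.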
To achieve the above object, according to a second aspect of an embodiment of the present invention, there is provided an apparatus for generating scene topics.
The apparatus for generating scene topics according to an embodiment of the invention comprises: a determining module for receiving a request to generate a scene topic and determining, from the request, the target category combination corresponding to the scene topic to be generated; an acquisition module for performing scene-matching analysis on the target category combination against a pre-built category mapping dictionary and category combination database, to obtain the candidate scene statements corresponding to the target category combination; and a screening module for structurally processing the candidate scene statements into candidate scene phrases and screening out from those phrases the target titles for the scene topic to be generated.
Optionally, the acquisition module is further configured to: query the category mapping dictionary for the parent category combination corresponding to the target category combination, to generate the extended category combination; compute, according to predefined rules, the scene match values between the extended category combination and the material category combinations in the category combination database; screen selectable category combinations out of the material category combinations according to the scene match values and a preset number of scene statements; and retrieve from the category combination database the scene statements corresponding to the selected category combinations as the candidate scene statements.
Optionally, the acquisition module is further configured to compute, for each material category combination in the category combination database, the scene match value between the extended category combination and that material category combination as follows: obtain the number of first elements in the extended category combination, the number of second elements in the material category combination, the number of elements in their intersection, and the number of collinear words they share; and derive the scene match value from these four counts.
Optionally, the apparatus further comprises a construction module for determining each material category combination, obtaining the scene statements corresponding to it, and constructing the category combination database from the material category combinations and their corresponding scene statements.
Optionally, the acquisition module is further configured to filter the candidate scene statements with a pre-built filtering dictionary, wherein the filtering dictionary comprises at least one of a holiday dictionary, a personal-name dictionary, and a negative-vocabulary dictionary.
Optionally, the screening module is further configured to structurally process each candidate scene statement as follows: extract at least one feature word and at least one viewpoint word from the statement using its dependency syntax structure, and compute the relatedness and semantic similarity between the feature words and the viewpoint words; generate the feature-viewpoint phrase of the statement from the relatedness, the semantic similarity, a preset relatedness threshold, and a preset semantic-similarity threshold, in combination with the dependency syntax structure; and take the generated feature-viewpoint phrase as the candidate scene phrase corresponding to the statement.
Optionally, the screening module is further configured to: determine, from the generation request, the word counts of the main title, the short sub-title, and the long sub-title for the scene topic to be generated; compute the sentiment value of each candidate scene phrase with a classification model and obtain its scene match value; and screen the main title, short sub-title, and long sub-title out of the candidate scene phrases according to the word counts, sentiment values, and scene match values.
Optionally, the screening module is further configured to: compute a weighted score for each candidate scene phrase from its sentiment value and scene match value with a preset weighting algorithm; screen the main title out of the candidate scene phrases according to the main-title word count and the weighted scores; compute, with a BERT model, a relation value between the main title and each alternative scene phrase, where the alternative scene phrases are the candidate scene phrases other than the main title; and screen the short sub-title and the long sub-title out of the alternative scene phrases according to the short-sub-title word count, the long-sub-title word count, the weighted scores, and the relation values.
To achieve the above object, according to a third aspect of the embodiments of the present invention, there is provided an electronic apparatus.
An electronic device according to an embodiment of the invention includes: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of generating scene topics according to an embodiment of the invention.
To achieve the above object, according to a fourth aspect of the embodiments of the present invention, there is provided a computer-readable medium.
A computer-readable medium according to an embodiment of the invention stores a computer program which, when executed by a processor, implements the method of generating scene topics according to an embodiment of the invention.
One embodiment of the invention has the following advantages or beneficial effects: from a received request to generate a scene topic, the target category combination entered by the user can be parsed out; combined with the pre-built category mapping dictionary and category combination database, the candidate scene statements corresponding to that combination can then be obtained, and from them the candidate scene phrases, out of which the target titles of the scene topic to be generated are screened. This achieves intelligent generation of a scene topic from a user-entered target category combination and overcomes the prior-art drawbacks of relying on dedicated writers, which is time-consuming, labor-intensive, costly, and highly subjective. In addition, because the method generates and then analyzes the extended category combination corresponding to the target category combination, the scope of the matching analysis is enlarged and its accuracy improved. Finally, filtering out candidate scene statements that are out of date or otherwise unusable with the pre-built filtering dictionary improves the accuracy of the generated scene topics and gives users a better experience.
Further effects of the above non-conventional alternatives are described below in connection with the specific embodiments.
Drawings
The accompanying drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting it. In the drawings:
FIG. 1 is a schematic diagram of the main steps of a method of generating scene topics according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the main flow of a method of obtaining the candidate scene statements corresponding to a target category combination according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the main flow of a method of generating the target titles of a scene topic from candidate scene statements according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the main flow of a method of generating scene topics according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the main modules of an apparatus for generating scene topics in accordance with an embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
Fig. 7 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings. Various details of the embodiments are included to aid understanding and should be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the invention. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a schematic diagram of the main steps of a method of generating a scene topic according to an embodiment of the invention. As shown in Fig. 1, the main steps may include:
Step S101: receiving a request to generate a scene topic, and determining, from the request, the target category combination corresponding to the scene topic to be generated;
Step S102: performing scene-matching analysis on the target category combination against a pre-built category mapping dictionary and category combination database, to obtain the candidate scene statements corresponding to the target category combination;
Step S103: structurally processing the candidate scene statements into candidate scene phrases, and screening out from those phrases the target titles for the scene topic to be generated.
In the method according to an embodiment of the invention, the received generation request contains the target category combination entered by the user. For example, the input may be "personal finance#stocks#finance theory#investment", with the categories separated by "#", and the scene topic corresponding to that combination is then generated; the input may equally take the form (personal finance, stocks, finance theory, investment). Other input formats are of course possible, and embodiments of the invention are not limited in this respect.
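Extracting the combination from a request can be sketched as follows. The request field name `category_combination` is an assumption; the text above only specifies that categories may be "#"-separated or given as a tuple.

```python
# Hypothetical sketch of parsing the target category combination from a
# generation request; accepts either a "#"-separated string or a sequence.
def parse_target_categories(request):
    raw = request["category_combination"]  # field name is an assumption
    if isinstance(raw, (list, tuple)):
        return [c.strip() for c in raw]
    return [c.strip() for c in raw.split("#") if c.strip()]
```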
After the target category combination is obtained, scene-matching analysis is performed on it using the pre-built category mapping dictionary and category combination database; that is, the scenes corresponding to the target category combination are found by matching, yielding the candidate scene statements. In an embodiment of the invention, the category mapping dictionary is built in advance and records each item, the level of its category, and its parent category. Suppose the dictionary contains three levels: the first level includes "electronic products"; under "electronic products", the second level includes "audio and video products"; and under "audio and video products", the third level includes audio-video accessories, microphones, speakers, TV boxes, headphones, and so on. Note that at least one candidate scene statement is obtained.
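The three-level dictionary just described can be sketched as a parent-pointer table. The flat layout is an assumption; the patent only requires that the dictionary record each item, its category level, and its parent category.

```python
# Minimal sketch of the category mapping dictionary: each category maps to
# its parent, following the electronics example above.
CATEGORY_PARENT = {
    # third level -> second level
    "audio-video accessories": "audio and video products",
    "microphones": "audio and video products",
    "speakers": "audio and video products",
    "TV boxes": "audio and video products",
    "headphones": "audio and video products",
    # second level -> first level
    "audio and video products": "electronic products",
}

def parent_of(category):
    # Returns None for first-level categories, which have no parent.
    return CATEGORY_PARENT.get(category)
```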
Once at least one candidate scene statement is obtained, the structure of each statement can be analyzed to derive the candidate scene phrase corresponding to it. There is accordingly at least one candidate scene phrase, and the target titles for the scene topic to be generated are then screened out of these phrases.
It follows from steps S101 to S103 that the target category combination entered by the user can be parsed out of the received generation request; combined with the pre-built category mapping dictionary and category combination database, the candidate scene statements corresponding to that combination are then obtained, and from them the candidate scene phrases, out of which the target titles of the scene topic to be generated are screened. This achieves intelligent generation of a scene topic from a user-entered target category combination and overcomes the prior-art drawbacks of relying on dedicated writers: time and labor cost, expense, and strong subjectivity.
As can be seen, scene-matching analysis using the category mapping dictionary and the category combination database is a key part of this solution, so the construction of both is significant. As described above, the category mapping dictionary records each item, the level of its category, and its parent category.
As a referenceable embodiment of the invention, before the scene-matching analysis is performed on the target category combination, the method may further include: determining each material category combination and obtaining the scene statements corresponding to it; and constructing the category combination database from the material category combinations and their corresponding scene statements. A material category combination is a category combination set by staff according to actual business needs; it is the kind of combination stored in the category combination database. For example, for the material category combination "suits#sports vests#T-shirts", a data query might return the scene statement "On the basketball court, sweat pours like rain and the blood runs hot; sweat is bound to soak our clothes, so for sports one needs …". The material category combination and its scene statement are then stored in the category combination database, completing its construction. Note that the category combination database built in embodiments of the invention can be continuously extended and updated.
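The database construction just described can be sketched as follows. The storage layout (an in-memory map keyed order-insensitively by the category set) is an assumption; the patent only requires that combinations and their scene statements be stored together and remain extensible.

```python
# Hedged sketch of the category combination database: each material category
# combination maps to its scene statements. Equivalent combinations (same
# categories in any order) share one entry.
class CategoryCombinationDB:
    def __init__(self):
        self._db = {}

    def add(self, combination, statements):
        # frozenset makes the key independent of category order.
        self._db.setdefault(frozenset(combination), []).extend(statements)

    def lookup(self, combination):
        return self._db.get(frozenset(combination), [])

    def all_combinations(self):
        return [sorted(k) for k in self._db]
```

For example, adding the "suits#sports vests#T-shirts" combination with its basketball-court statement makes that statement retrievable regardless of the order in which the categories are later queried.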
Since scene-matching analysis with the category mapping dictionary and category combination database is central to the solution, the accuracy of the matching results determines the accuracy of the candidate scene statements, which in turn affects the target titles of the scene topic. As a referenceable embodiment of the invention, performing scene-matching analysis on the target category combination against the pre-built category mapping dictionary and category combination database to obtain the candidate scene statements may include:
Step S1021: querying the category mapping dictionary for the parent category combination corresponding to the target category combination, to generate the extended category combination corresponding to the target category combination;
Step S1022: computing, according to predefined rules, the scene match values between the extended category combination and the material category combinations in the category combination database;
Step S1023: screening selectable category combinations out of the material category combinations according to the scene match values and a preset number of scene statements;
Step S1024: retrieving from the category combination database the scene statements corresponding to the selected category combinations as the candidate scene statements.
As described above, the category mapping dictionary records each item, the level of its category, and its parent category. In step S1021 it can therefore be queried for the parent category combination of the target category combination entered by the user, yielding the extended category combination. For example, for the target category combination "personal finance#stocks#finance theory#investment", the parent category of "personal finance", "stocks", and "investment" is "wealth management" and the parent category of "finance theory" is "finance"; the parent category combination is thus "wealth management#finance", and the generated extended category combination is "personal finance#stocks#finance theory#investment#wealth management#finance". By generating and then analyzing the extended category combination corresponding to the target category combination, the method enlarges the scope of the matching analysis and improves its accuracy.
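Step S1021 can be sketched as follows. The `PARENT` table follows the worked example above, with "wealth management" and "finance" standing in for the two parent categories in the translated text.

```python
# Sketch of generating the extended category combination: append the parent
# category of each target category, deduplicated, preserving order.
PARENT = {
    "personal finance": "wealth management",
    "stocks": "wealth management",
    "investment": "wealth management",
    "finance theory": "finance",
}

def extend_combination(target):
    extended = list(target)
    for category in target:
        parent = PARENT.get(category)
        if parent and parent not in extended:
            extended.append(parent)
    return extended
```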
After the extended category combination is generated, a scene matching value between the extended category combination and each material category combination in the category combination database may be generated according to the predefined rule. The specific calculation method is as follows: acquiring the number of first elements in the extended category combination, the number of second elements in each material category combination, the number of intersection elements between the extended category combination and each material category combination, and the number of collinear words between the extended category combination and each material category combination; and obtaining the scene matching value between the extended category combination and each material category combination according to the number of first elements, the number of second elements, the number of intersection elements and the number of collinear words.
The number of first elements is the number of elements included in the extended category combination; for example, if the extended category combination is personal financing # stock # finance theory # investment # financing # finance, the number of first elements is 6. The number of second elements is the number of elements included in a material category combination; for example, if the material category combination is personal financing # financing product # financing fund # economy, the number of second elements is 4. The number of intersection elements is the number of elements shared by the extended category combination and the material category combination; for the two combinations above, the only shared element is personal financing, so the number of intersection elements is 1. The number of collinear words refers to the number of identical words obtained after the elements of the two combinations are segmented into words: segmenting the extended category combination yields personal, financing, stock, finance, theory and investment, while segmenting the material category combination yields personal, financing, product, fund and economy, so the number of collinear words is 2 (personal and financing).
After the number of first elements, the number of second elements, the number of intersection elements and the number of collinear words are obtained, the scene matching value between the extended category combination and each material category combination can be computed. In the embodiment of the invention, the predefined rule may use a scene matching value calculation formula to generate the scene matching value; that is, the number of first elements, the number of second elements, the number of intersection elements and the number of collinear words are substituted into the formula to obtain the scene matching value. The scene matching value calculation formula may be:
match(querycates, keycates) = (x_num * x_num) / (q_num * min(max(k_num, penalty_bias), q_num)) + homo_word_match
where querycates is the extended category combination; keycates is a material category combination; x_num is the number of intersection elements of querycates and keycates; q_num is the number of first elements in querycates; k_num is the number of second elements in keycates; homo_word_match is the number of collinear words of querycates and keycates; and penalty_bias is a preset constant whose value can be set according to the actual situation.
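The predefined rule can be sketched directly from the formula above. This is a hedged illustration under simplifying assumptions: category combinations are encoded as "#"-separated strings, word segmentation is reduced to whitespace splitting, and the penalty_bias value is arbitrary.

```python
# Illustrative sketch of the scene matching value (steps S1022 / S203-S204).
# The string encoding, tokenization and penalty_bias value are assumptions.

def scene_match(querycates, keycates, penalty_bias=2):
    q = querycates.split("#")
    k = keycates.split("#")
    x_num = len(set(q) & set(k))                 # intersection elements
    q_num, k_num = len(q), len(k)                # first / second element counts
    q_words = {w for e in q for w in e.split()}  # words after segmentation
    k_words = {w for e in k for w in e.split()}
    homo_word_match = len(q_words & k_words)     # collinear words
    return (x_num * x_num) / (q_num * min(max(k_num, penalty_bias), q_num)) \
        + homo_word_match

query = "personal financing#stock#finance theory#investment#financing#finance"
key = "personal financing#financing product#financing fund#economy"
print(round(scene_match(query, key), 3))
# 2.042
```

With the worked example from the text (x_num = 1, q_num = 6, k_num = 4, homo_word_match = 2, penalty_bias = 2), the formula evaluates to 1/24 + 2 ≈ 2.042.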
After the scene matching values between the extended category combination and each material category combination in the category combination database are calculated, optional category combinations are screened from the material category combinations in step S1023 according to the calculated scene matching values and the preset scene statement number. The calculated scene matching values are sorted from high to low, and then the top-ranked material category combinations are selected as the optional category combinations according to the preset scene statement number. Assuming that the preset scene statement number is 100, the material category combination with the highest scene matching value corresponds to 30 scene statements, the combination with the second highest value corresponds to 40 scene statements, and the combination with the third highest value corresponds to 30 scene statements, then the screened optional category combinations are these three material category combinations. Furthermore, the scene statements corresponding to the optional category combinations can be acquired from the category combination database as optional scene statements.
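The screening of step S1023 can be sketched as a greedy selection over the ranked combinations. The match values, sentence counts and budget below mirror the worked example but are otherwise invented.

```python
# Illustrative sketch of step S1023: rank material category combinations by
# scene matching value and keep top-ranked ones until the preset number of
# scene statements is reached. All data values are made up.

def select_combinations(scored, sentence_counts, budget):
    """scored: {combination: match value}; sentence_counts: {combination: #statements}."""
    selected, total = [], 0
    for combo in sorted(scored, key=scored.get, reverse=True):
        if total >= budget:
            break
        selected.append(combo)
        total += sentence_counts[combo]
    return selected

scored = {"A": 2.04, "B": 1.7, "C": 1.1, "D": 0.3}
sentence_counts = {"A": 30, "B": 40, "C": 30, "D": 25}
print(select_combinations(scored, sentence_counts, budget=100))
# ['A', 'B', 'C']
```

As in the text's example, combinations contributing 30 + 40 + 30 statements fill the budget of 100, so the fourth combination is not selected.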
In addition, in the embodiment of the present invention, after the scene statements corresponding to the optional category combinations are obtained from the category combination database as optional scene statements, the method for generating a scene theme may further include: filtering the optional scene statements with a pre-built filtering dictionary, where the filtering dictionary includes at least one of the following: a holiday dictionary, a personal name dictionary and a negative vocabulary dictionary. The filtering is performed because the obtained optional scene statements may include statements that do not fit the current situation or cannot be used. The holiday dictionary may include holiday-related words, so that statements irrelevant to the current generation request can be filtered out; the personal name dictionary may include the names of influential people, and statements involving these names are filtered out to avoid infringement problems; the negative vocabulary dictionary may include negative words such as war, trafficking, falsehood and slander, which are unsuitable for appearing in a scene theme, so statements containing them need to be filtered out.
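A minimal sketch of this filtering step might look as follows, assuming the three dictionaries are plain word sets and matching is a simple substring test. The entries below are placeholders, not the patent's actual word lists.

```python
# Minimal sketch of the filtering step. The dictionary entries are invented
# placeholders; a real system would hold the patent's curated word lists.

FILTER_DICTIONARY = {
    "holiday": {"christmas"},          # holidays unrelated to the request
    "person_name": {"some celebrity"}, # influential personal names
    "negative": {"war", "false"},      # negative vocabulary
}

def filter_sentences(sentences, filter_dictionary):
    blocked = set().union(*filter_dictionary.values())
    return [s for s in sentences
            if not any(word in s.lower() for word in blocked)]

sentences = ["great christmas gift", "learn finance step by step", "a false promise"]
print(filter_sentences(sentences, FILTER_DICTIONARY))
# ['learn finance step by step']
```

Statements matching any dictionary entry are dropped; only the statement free of holiday, personal-name and negative vocabulary survives.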
Fig. 2 is a schematic diagram of the main flow of a method for obtaining the optional scene statements corresponding to a target category combination according to an embodiment of the present invention. As shown in fig. 2, the main flow of the method may include:
Step S201, querying the category mapping dictionary for the superior category combination corresponding to the target category combination, so as to generate the extended category combination corresponding to the target category combination;
Step S202, selecting any material category combination from the category combination database and denoting it as A;
Step S203, acquiring the number of first elements in the extended category combination, the number of second elements in the material category combination A, the number of intersection elements between the extended category combination and the material category combination A, and the number of collinear words between the extended category combination and the material category combination A;
Step S204, obtaining the scene matching value between the extended category combination and the material category combination A according to the number of first elements, the number of second elements, the number of intersection elements and the number of collinear words;
Step S205, judging whether the scene matching values between the extended category combination and each material category combination in the category combination database have all been calculated; if yes, executing step S206, otherwise returning to step S202;
Step S206, screening optional category combinations from the material category combinations according to the scene matching values between the extended category combination and each material category combination in the category combination database and the preset scene statement number;
Step S207, obtaining the scene statements corresponding to the optional category combinations from the category combination database as optional scene statements;
Step S208, filtering the optional scene statements with the pre-built filtering dictionary, where the filtering dictionary includes at least one of the following: a holiday dictionary, a personal name dictionary and a negative vocabulary dictionary.
In the above method for acquiring the optional scene statements corresponding to the target category combination, the extended category combination corresponding to the target category combination is generated and then analyzed, so that the range of matched category combinations can be enlarged and the matching accuracy can be improved.
After the optional scene statements are obtained, they need to be structurally processed to obtain optional scene phrases. As a referenceable embodiment of the present invention, structurally processing the optional scene statements to obtain optional scene phrases may include: for each optional scene statement, performing structural processing on the statement to obtain the optional scene phrase corresponding to it. The method for generating the optional scene phrase corresponding to each optional scene statement is as follows: extracting at least one feature word and at least one viewpoint word from each optional scene statement by means of the dependency syntax structure, and then calculating the association degree and semantic similarity between the at least one feature word and the at least one viewpoint word; generating the feature-viewpoint words of each optional scene statement according to the association degree, the semantic similarity, a preset association degree threshold and a preset semantic similarity threshold, in combination with the dependency syntax structure; and determining the generated feature-viewpoint words as the optional scene phrases corresponding to each optional scene statement.
Dependency syntax analysis interprets the syntactic structure of a sentence by analyzing the dependency relations between the components of its language units. The core verb of a sentence is the central component that governs the other components and is itself governed by none, and every governed component depends on its governor through some relation. Dependency syntax analysis automatically deduces the syntactic structure of a sentence according to a given grammar system and analyzes the syntactic units contained in the sentence and the relations between them. In the embodiment of the invention, the dependency syntax structure can be directly used to extract at least one feature word and at least one viewpoint word from each optional scene statement, where a feature word is generally a noun, a proper noun or a compound word, such as conversation quality, effect or use, and a viewpoint word is generally an adjective, a descriptive verb or a compound word, such as special, good or correct.
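As a simplified stand-in for a full dependency parser, the extraction of candidate feature and viewpoint words can be sketched over pre-tagged tokens; a real implementation would take the word classes from the parser's output. The POS tags and the example sentence are illustrative.

```python
# Simplified stand-in for the dependency-based extraction: feature-word
# candidates are nouns / proper nouns, viewpoint-word candidates are
# adjectives. Tags and the sample sentence are illustrative assumptions.

def extract_candidates(tagged_sentence):
    """tagged_sentence: list of (word, pos) pairs, e.g. from any POS tagger."""
    feature_words = [w for w, pos in tagged_sentence if pos in ("NOUN", "PROPN")]
    viewpoint_words = [w for w, pos in tagged_sentence if pos == "ADJ"]
    return feature_words, viewpoint_words

tagged = [("the", "DET"), ("conversation", "NOUN"), ("quality", "NOUN"),
          ("is", "VERB"), ("special", "ADJ")]
print(extract_candidates(tagged))
# (['conversation', 'quality'], ['special'])
```

A production system would also merge adjacent nouns into compound feature words (here, conversation quality), which this sketch leaves as separate candidates.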
For each optional scene statement, after its feature words and viewpoint words are extracted, the association degree and semantic similarity between each feature word and each viewpoint word are calculated. In the embodiment of the invention, the association degree between a feature word and a viewpoint word can be calculated using mutual information, which is a measure of the interdependence between variables; other means may also be used, and this is not limited here. The semantic similarity between a feature word and a viewpoint word can be calculated using the KL divergence, which is an asymmetric measure of the difference between two probability distributions P and Q; other means may also be used, and this is not limited here.
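The two scores can be sketched as follows, under stated assumptions: the association degree is approximated by pointwise mutual information over co-occurrence probabilities, and the KL divergence is computed between two discrete distributions. All probability values below are invented for illustration.

```python
# Sketches of the two scores. The probabilities are invented; in practice
# they would be estimated from corpus co-occurrence counts.
import math

def pmi(p_xy, p_x, p_y):
    # Pointwise mutual information of one (feature word, viewpoint word) pair.
    return math.log2(p_xy / (p_x * p_y))

def kl_divergence(p, q):
    # KL(P || Q) for discrete distributions over the same events.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(round(pmi(0.02, 0.1, 0.1), 3))               # 1.0 -> co-occur above chance
print(round(kl_divergence([0.7, 0.3], [0.5, 0.5]), 3))  # 0.119
```

A positive PMI indicates the pair co-occurs more often than independence would predict; a smaller KL divergence indicates the two word distributions are more similar.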
After the association degree and semantic similarity between the feature words and the viewpoint words are calculated, the feature-viewpoint words of each optional scene statement are generated according to the association degree, the semantic similarity, the preset association degree threshold and the preset semantic similarity threshold, in combination with the dependency syntax structure, and the generated feature-viewpoint words are determined as the optional scene phrases corresponding to each optional scene statement. For example, if the association degree between a feature word and a viewpoint word is greater than the preset association degree threshold, and the semantic similarity between them is greater than the preset semantic similarity threshold, the association between the two words is strong; in this case, a feature-viewpoint word composed of the feature word and the viewpoint word can be generated according to the dependency syntax structure and determined as an optional scene phrase. For example, if the feature words are effect and use, and the viewpoint words are good and correct, the generated feature-viewpoint words may be good effect and correct use; or, if the feature word is conversation quality and the viewpoint word is special, the generated feature-viewpoint word may be special conversation quality.
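The threshold-based pairing can be sketched as follows. The association and similarity scores are assumed to be precomputed (stand-ins for the mutual-information and KL-divergence computations described in the text), and the threshold values and word lists are illustrative.

```python
# Sketch of generating feature-viewpoint words: a (feature, viewpoint) pair
# is kept only when both scores exceed their preset thresholds. Scores and
# thresholds below are invented for illustration.

def feature_viewpoint_words(features, viewpoints, assoc, sim,
                            assoc_threshold=0.5, sim_threshold=0.5):
    """assoc, sim: dicts mapping (feature, viewpoint) -> score."""
    pairs = []
    for f in features:
        for v in viewpoints:
            if assoc[(f, v)] > assoc_threshold and sim[(f, v)] > sim_threshold:
                pairs.append(f + " " + v)  # joined per the dependency structure
    return pairs

features, viewpoints = ["effect", "use"], ["good", "correct"]
assoc = {("effect", "good"): 0.9, ("effect", "correct"): 0.2,
         ("use", "good"): 0.3, ("use", "correct"): 0.8}
sim = {("effect", "good"): 0.7, ("effect", "correct"): 0.6,
       ("use", "good"): 0.4, ("use", "correct"): 0.9}
print(feature_viewpoint_words(features, viewpoints, assoc, sim))
# ['effect good', 'use correct']
```

Only the strongly associated pairs survive both thresholds; weakly related combinations such as (use, good) are discarded.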
In addition, as a referenceable embodiment of the present invention, screening the target titles corresponding to the scene theme to be generated from the optional scene phrases may include: determining the number of main title words, the number of sub-short title words and the number of sub-long title words corresponding to the scene theme to be generated according to the generation request; calculating the emotion value of each optional scene phrase with a classification model, and obtaining the scene matching value of each optional scene phrase; and screening the main title, the sub-short title and the sub-long title corresponding to the scene theme to be generated from the optional scene phrases according to the number of main title words, the number of sub-short title words, the number of sub-long title words, the emotion values and the scene matching values.
In the embodiment of the invention, the generated scene theme may consist of a main title, a sub-short title and a sub-long title, through which the user can understand the scene theme. Accordingly, the received generation request includes the word count requirements of the main title, the sub-short title and the sub-long title. The optional scene phrases are obtained by structurally processing the optional scene statements, and in the method for acquiring the optional scene statements, the scene matching values between the optional scene statements and the extended category combination have already been calculated, which is equivalent to acquiring the scene matching values corresponding to the optional scene phrases. In the embodiment of the invention, when screening the optional scene phrases, the emotion value of each optional scene phrase can be used in addition to the scene matching value. The emotion value of an optional scene phrase can be calculated with a classification model, which is a mature natural language processing technique and is not described in detail here.
In the embodiment of the invention, after the number of main title words, the number of sub-short title words, the number of sub-long title words, the emotion values and the scene matching values are obtained, the main title, the sub-short title and the sub-long title corresponding to the scene theme to be generated can be screened from the optional scene phrases. This is specifically realized as follows: calculating the optional weighted score of each optional scene phrase from its emotion value and scene matching value with a preset scene phrase weighting algorithm; screening the main title from the optional scene phrases according to the number of main title words and the optional weighted scores; calculating the relation values between the main title and the alternative scene phrases based on the Bert model, where the alternative scene phrases consist of the optional scene phrases other than the main title; and screening the sub-short title and the sub-long title from the alternative scene phrases according to the number of sub-short title words, the number of sub-long title words, the optional weighted scores and the relation values. The full name of the Bert model is Bidirectional Encoder Representations from Transformers; its purpose is to obtain text representations containing rich semantic information by pre-training on a large-scale unlabeled corpus.
Assume that the number of main title words is 4, the number of sub-short title words is 4 to 5, and the number of sub-long title words is 6 to 10. The 4-word phrase with the highest combined emotion value and scene matching value is selected from the optional scene phrases as the main title of the scene theme, where the combined value can be calculated with weighting coefficients set according to the actual situation. Then a phrase of 4 to 5 words with the highest combined value and no overlap with the main title is selected. It should be noted that the relation value between the main title and the selected phrase is calculated based on the Bert model at this point; if the calculated relation value is greater than a preset relation value, for example 0.8, the selected phrase is consistent with the main title and is confirmed as the sub-short title; otherwise, a phrase is reselected. Then a phrase of 6 to 10 words with the highest combined value and no overlap with the main title or the sub-short title is selected; again, the relation value between the main title and the selected phrase is calculated based on the Bert model, and if it is greater than the preset relation value, for example 0.8, the selected phrase is confirmed as the sub-long title; otherwise, a phrase is reselected. For example, if the target category combination is personal financing # stock # finance theory, the main title of the finally generated scene theme may be "finance", the sub-short title "understand a little finance", and the sub-long title "there is a golden house in the books".
Fig. 3 is a schematic diagram of the main flow of a method for generating the target titles corresponding to a scene theme using the optional scene statements according to an embodiment of the invention. As shown in fig. 3, the main flow of the method may include:
Step S301, selecting any optional scene statement S from the optional scene statements;
Step S302, extracting at least one feature word and at least one viewpoint word from the optional scene statement S by means of the dependency syntax structure, and calculating the association degree and semantic similarity between the at least one feature word and the at least one viewpoint word;
Step S303, generating the feature-viewpoint words of the optional scene statement S according to the association degree, the semantic similarity, the preset association degree threshold and the preset semantic similarity threshold, in combination with the dependency syntax structure, and determining the generated feature-viewpoint words as the optional scene phrase corresponding to the optional scene statement S;
Step S304, calculating the emotion value of the optional scene phrase corresponding to S with the classification model, and obtaining its scene matching value;
Step S305, calculating the optional weighted score of the optional scene phrase corresponding to S from the emotion value and the scene matching value with the preset scene phrase weighting algorithm;
Step S306, judging whether the optional scene phrase corresponding to every optional scene statement has been analyzed; if yes, executing step S307, otherwise returning to step S301;
Step S307, determining the number of main title words, the number of sub-short title words and the number of sub-long title words corresponding to the scene theme to be generated;
Step S308, screening the main title from all the optional scene phrases according to the number of main title words and the optional weighted scores;
Step S309, calculating the relation values between the main title and the alternative scene phrases based on the Bert model, where the alternative scene phrases consist of all the optional scene phrases except the main title;
Step S310, screening the sub-short title and the sub-long title from the alternative scene phrases according to the number of sub-short title words, the number of sub-long title words, the optional weighted scores and the relation values.
In steps S309 and S310, the sub-short title and the sub-long title are screened from the alternative scene phrases, which are obtained by deleting the main title selected in step S308 from all the optional scene phrases. The execution order of step S304 and step S305 may be adjusted according to the actual situation, as long as they are executed before step S308, which is not limited by the present invention.
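The title-screening stage (steps S305 through S310) can be sketched as follows. This is a hedged illustration: the weighting coefficients, the relation values (standing in for the BERT-based sentence-pair computation) and all candidate phrases are invented for demonstration.

```python
# Illustrative sketch of title screening: score each candidate phrase by a
# weighted sum of emotion value and scene matching value, pick the main
# title by word count, then pick sub-titles whose relation value to the
# main title exceeds the threshold. All data values are invented.

def weighted_score(emotion, match, w_emotion=0.5, w_match=0.5):
    return w_emotion * emotion + w_match * match

def pick_title(phrases, scores, min_words, max_words, relation=None,
               relation_threshold=0.8, exclude=()):
    best, best_score = None, float("-inf")
    for p in phrases:
        if p in exclude or not (min_words <= len(p.split()) <= max_words):
            continue
        if relation is not None and relation.get(p, 0.0) <= relation_threshold:
            continue  # sub-titles must be consistent with the main title
        if scores[p] > best_score:
            best, best_score = p, scores[p]
    return best

phrases = ["read about finance", "a little finance knowledge goes far",
           "understand finance now"]
scores = {p: weighted_score(e, m) for p, (e, m) in
          zip(phrases, [(0.9, 2.0), (0.8, 1.9), (0.7, 1.5)])}
main = pick_title(phrases, scores, 3, 3)            # main title: exactly 3 words
relation = {"a little finance knowledge goes far": 0.85,
            "understand finance now": 0.9}          # stand-in BERT relation values
sub_long = pick_title(phrases, scores, 6, 10, relation, exclude=(main,))
print(main, "|", sub_long)
# read about finance | a little finance knowledge goes far
```

The main title is the highest-scoring phrase of the required length; the sub-long title is the highest-scoring remaining phrase in its length range whose relation value to the main title passes the 0.8 threshold.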
Fig. 4 is a schematic diagram of the main flow of a method of generating a scene theme according to an embodiment of the invention. As shown in fig. 4, the main flow of the method for generating a scene theme according to an embodiment of the present invention may include:
Step S401, querying the category mapping dictionary for the superior category combination corresponding to the target category combination, so as to generate the extended category combination corresponding to the target category combination;
Step S402, selecting any material category combination from the category combination database and denoting it as A;
Step S403, acquiring the number of first elements in the extended category combination, the number of second elements in the material category combination A, the number of intersection elements between the extended category combination and the material category combination A, and the number of collinear words between the extended category combination and the material category combination A;
Step S404, obtaining the scene matching value between the extended category combination and the material category combination A according to the number of first elements, the number of second elements, the number of intersection elements and the number of collinear words;
Step S405, judging whether the scene matching values between the extended category combination and each material category combination in the category combination database have all been calculated; if yes, executing step S406, otherwise returning to step S402;
Step S406, screening optional category combinations from the material category combinations according to the scene matching values between the extended category combination and each material category combination in the category combination database and the preset scene statement number;
Step S407, obtaining the scene statements corresponding to the optional category combinations from the category combination database as optional scene statements;
Step S408, filtering the optional scene statements with the pre-built filtering dictionary, where the filtering dictionary includes at least one of the following: a holiday dictionary, a personal name dictionary and a negative vocabulary dictionary;
Step S409, selecting any optional scene statement S from the optional scene statements;
Step S410, extracting at least one feature word and at least one viewpoint word from the optional scene statement S by means of the dependency syntax structure, and calculating the association degree and semantic similarity between the at least one feature word and the at least one viewpoint word;
Step S411, generating the feature-viewpoint words of the optional scene statement S according to the association degree, the semantic similarity, the preset association degree threshold and the preset semantic similarity threshold, in combination with the dependency syntax structure, and determining the generated feature-viewpoint words as the optional scene phrase corresponding to the optional scene statement S;
Step S412, calculating the emotion value of the optional scene phrase corresponding to S with the classification model, and obtaining its scene matching value;
Step S413, calculating the optional weighted score of the optional scene phrase corresponding to S from the emotion value and the scene matching value with the preset scene phrase weighting algorithm;
Step S414, judging whether the optional scene phrase corresponding to every optional scene statement has been analyzed; if yes, executing step S415, otherwise returning to step S409;
Step S415, determining the number of main title words, the number of sub-short title words and the number of sub-long title words corresponding to the scene theme to be generated;
Step S416, screening the main title from all the optional scene phrases according to the number of main title words and the optional weighted scores;
Step S417, calculating the relation values between the main title and the alternative scene phrases based on the Bert model, where the alternative scene phrases consist of all the optional scene phrases except the main title;
Step S418, screening the sub-short title and the sub-long title from the alternative scene phrases according to the number of sub-short title words, the number of sub-long title words, the optional weighted scores and the relation values.
The execution order of step S412 and step S413 may be adjusted according to the actual situation, as long as they are executed before step S416, which is not limited by the present invention.
According to the technical scheme for generating a scene theme in the embodiment of the invention, the target category combination input by the user can be obtained by parsing the received generation request for the scene theme; the optional scene statements corresponding to the target category combination can then be obtained by analysis in combination with the pre-built category mapping dictionary and category combination database; and the optional scene phrases are further obtained, from which the target titles corresponding to the scene theme to be generated are screened. This achieves the effect of intelligently generating scene themes from the target category combination input by the user, and overcomes the defects of the prior art that specific personnel are needed for writing, which is time-consuming, labor-intensive, costly and highly subjective. In addition, in the method for generating a scene theme, the extended category combination corresponding to the target category combination is generated and then analyzed, so that the range of analysis and matching can be enlarged and the matching accuracy can be improved. Moreover, in the embodiment of the invention, the statements in the optional scene statements that do not fit the current situation or cannot be used are filtered out with the pre-built filtering dictionary, which can improve the accuracy of the generated scene themes and bring a better experience to users.
Fig. 5 is a schematic diagram of the main modules of an apparatus for generating a scene theme according to an embodiment of the present invention. The apparatus 500 for generating a scene theme according to an embodiment of the present invention may include: a determining module 501, an acquiring module 502 and a screening module 503.
The determining module 501 may be configured to receive a generation request of a scene topic, and determine, according to the generation request, a target category combination corresponding to the scene topic to be generated; the obtaining module 502 may be configured to perform scene matching analysis on the target category combination based on the pre-built category mapping dictionary and the category combination database, to obtain an optional scene statement corresponding to the target category combination; the screening module 503 may be configured to perform structural processing on the optional scene sentence to obtain an optional scene phrase, and screen a target title corresponding to the scene theme to be generated from the optional scene phrase.
In the embodiment of the present invention, the obtaining module 502 may further be configured to: query the category mapping dictionary for the superior category combination corresponding to the target category combination, so as to generate the extended category combination corresponding to the target category combination; generate the scene matching values between the extended category combination and the material category combinations in the category combination database according to the predefined rule; screen optional category combinations from the material category combinations according to the scene matching values and the preset scene statement number; and obtain the scene statements corresponding to the optional category combinations from the category combination database as optional scene statements.
In the embodiment of the present invention, the obtaining module 502 may further be configured to: for each material category combination in the category combination database, calculate the scene matching value between the extended category combination and the material category combination as follows: acquiring the number of first elements in the extended category combination, the number of second elements in the material category combination, the number of intersection elements between the two combinations, and the number of collinear words between the two combinations; and obtaining the scene matching value between the extended category combination and the material category combination according to the number of first elements, the number of second elements, the number of intersection elements and the number of collinear words.
In the embodiment of the present invention, the device for generating a scene theme may further include: building blocks (not shown). The building block may be used to: determining each material category combination, and obtaining a scene statement corresponding to each material category combination; and constructing a category combination database according to each material category combination and the scene statement corresponding to each material category combination.
In the embodiment of the present invention, the obtaining module 502 may further be configured to: filter the optional scene sentences by using a pre-built filtering dictionary, wherein the filtering dictionary comprises at least one of a holiday dictionary, a personal name dictionary, and a negative vocabulary dictionary.
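The filtering step can be sketched as a simple substring screen against the union of the dictionaries; the function name and the substring-match policy are assumptions (a real system might match on tokens instead):

```python
def filter_sentences(sentences, filter_dicts):
    """Drop any optional scene sentence that contains a term from the
    holiday, personal-name, or negative-vocabulary dictionaries."""
    blocked = set().union(*filter_dicts) if filter_dicts else set()
    return [s for s in sentences if not any(term in s for term in blocked)]
```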
In the embodiment of the present invention, the screening module 503 may further be configured to: for each optional scene sentence, perform structural processing according to the following method to obtain the optional scene phrase corresponding to that sentence: extracting at least one feature word and at least one viewpoint word from the sentence by utilizing its dependency syntax structure, and then calculating the association degree and semantic similarity between the at least one feature word and the at least one viewpoint word; generating the characteristic viewpoint words of the sentence according to the association degree, the semantic similarity, a preset association degree threshold and a preset semantic similarity threshold, in combination with the dependency syntax structure; and determining the generated characteristic viewpoint words as the optional scene phrase corresponding to the sentence.
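The thresholding step above can be sketched as follows. In a full implementation the (feature word, viewpoint word) pairs and their association/similarity scores would come from a dependency parser and an embedding model; here they are passed in precomputed, and all names are illustrative:

```python
def build_feature_viewpoint_phrases(pairs, assoc_threshold, sim_threshold):
    """Keep (feature word, viewpoint word) pairs whose association degree
    and semantic similarity both clear the preset thresholds, and join
    each surviving pair into a candidate scene phrase."""
    phrases = []
    for feature, viewpoint, association, similarity in pairs:
        if association >= assoc_threshold and similarity >= sim_threshold:
            phrases.append(f"{feature} {viewpoint}")
    return phrases
```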
In the embodiment of the present invention, the screening module 503 may further be configured to: determining the number of main title words, the number of sub-short title words and the number of sub-long title words corresponding to the scene theme to be generated according to the generation request; calculating emotion values of the optional scene phrases by using the classification model, and obtaining scene matching values of the optional scene phrases; and screening the main title, the sub-short title and the sub-long title corresponding to the scene theme to be generated from the optional scene phrase according to the number of main title words, the number of sub-short title words, the number of sub-long title words, the emotion value and the scene matching value.
In the embodiment of the present invention, the screening module 503 may further be configured to: calculate an optional weighted score for each optional scene phrase from the emotion value and the scene matching value by using a preset scene phrase weighting algorithm; screen the main title from the optional scene phrases according to the number of main title words and the optional weighted scores; calculate a relation value between the main title and each alternative scene phrase based on a BERT model, wherein the alternative scene phrases consist of the optional scene phrases other than the main title; and screen the sub-short titles and sub-long titles from the alternative scene phrases according to the number of sub-short title words, the number of sub-long title words, the optional weighted scores, and the relation values.
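The title-screening logic can be sketched as below. The linear weighting and the length-based selection are assumptions, since the embodiment does not disclose the weighting algorithm, and the BERT-based relation value used to rank sub-titles is left out of this sketch:

```python
def weighted_score(emotion, match, w_emotion=0.5, w_match=0.5):
    """Illustrative linear combination of emotion value and scene
    matching value; the actual weighting algorithm is not disclosed."""
    return w_emotion * emotion + w_match * match


def pick_titles(phrases, main_len, short_len, long_len):
    """phrases: list of (phrase, emotion_value, scene_matching_value).
    The highest-scoring phrase within the main-title length limit becomes
    the main title; sub-titles come from the remaining phrases by length
    constraint and score. A production system would additionally rank
    sub-titles by a BERT relation value to the main title."""
    ranked = sorted(phrases, key=lambda p: weighted_score(p[1], p[2]), reverse=True)
    main = next((p[0] for p in ranked if len(p[0]) <= main_len), None)
    rest = [p for p in ranked if p[0] != main]
    sub_short = next((p[0] for p in rest if len(p[0]) <= short_len), None)
    sub_long = next(
        (p[0] for p in rest if short_len < len(p[0]) <= long_len), None
    )
    return main, sub_short, sub_long
```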
From the above description, it can be seen that the device for generating a scene theme according to the embodiment of the present invention can parse the target category combination input by a user from the received generation request for the scene theme, obtain the optional scene sentences corresponding to the target category combination by analysis in combination with the pre-constructed category mapping dictionary and category combination database, further obtain the optional scene phrases, and screen out the target title corresponding to the scene theme to be generated. This achieves the effect of intelligently generating a scene theme from the target category combination input by the user, and overcomes the defects of the prior art, in which scene themes must be written by specific personnel, a process that is time-consuming, labor-intensive, costly, and highly subjective. In addition, in the method for generating a scene theme, the extended category combination corresponding to the target category combination is generated and then analyzed, so that the range of analysis and matching can be expanded and the accuracy of matching improved. Furthermore, in the embodiment of the present invention, sentences among the optional scene sentences that do not suit the current situation or cannot be used are filtered out by using the pre-built filtering dictionary, which can improve the accuracy of generating scene themes and bring a better experience to users.
Fig. 6 illustrates an exemplary system architecture 600 to which the method of generating a scene topic or the apparatus of generating a scene topic of an embodiment of the invention may be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 is used as a medium to provide communication links between the terminal devices 601, 602, 603 and the server 605. The network 604 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 605 via the network 604 using the terminal devices 601, 602, 603 to receive or send messages, etc. Various communication client applications such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 601, 602, 603.
The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 605 may be a server providing various services, such as a background management server (by way of example only) providing support for shopping-type websites browsed by users using terminal devices 601, 602, 603. The background management server may analyze and process the received data such as the product information query request, and feedback the processing result (e.g., the target push information, the product information—only an example) to the terminal device.
It should be noted that, the method for generating a scene theme according to the embodiment of the present invention is generally executed by the server 605, and accordingly, the device for generating a scene theme is generally disposed in the server 605.
It should be understood that the number of terminal devices, networks and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 7, there is illustrated a schematic diagram of a computer system 700 suitable for use in implementing an embodiment of the present invention. The terminal device shown in fig. 7 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the system 700 are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output portion 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. The drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as necessary.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. The above-described functions defined in the system of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU) 701.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, for example, as: a processor includes a determination module, an acquisition module, and a screening module. The names of these modules do not in some cases limit the module itself, for example, the determining module may also be described as "a module that receives a generation request of a scene theme, and determines, according to the generation request, a target category combination corresponding to the scene theme to be generated".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: receive a generation request for a scene theme, and determine, according to the generation request, a target category combination corresponding to the scene theme to be generated; perform scene matching analysis on the target category combination based on a pre-built category mapping dictionary and a category combination database to obtain optional scene sentences corresponding to the target category combination; and perform structural processing on the optional scene sentences to obtain optional scene phrases, and screen out a target title corresponding to the scene theme to be generated from the optional scene phrases.
According to the technical scheme provided by the embodiment of the present invention, the target category combination input by the user can be parsed from the received generation request for the scene theme; the optional scene sentences corresponding to the target category combination can then be obtained by analysis in combination with the pre-constructed category mapping dictionary and category combination database; the optional scene phrases are further obtained, and the target title corresponding to the scene theme to be generated is screened out. This achieves the effect of intelligently generating a scene theme from the target category combination input by the user, and overcomes the defects of time consumption, labor consumption, high cost, and strong subjectivity in the prior art. In addition, in the method for generating a scene theme, the extended category combination corresponding to the target category combination is generated and then analyzed, so that the range of analysis and matching can be expanded and the accuracy of matching improved. Furthermore, in the embodiment of the present invention, sentences among the optional scene sentences that do not suit the current situation or cannot be used are filtered out by using the pre-built filtering dictionary, which can improve the accuracy of generating scene themes and bring a better experience to users.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. A method of generating a scene topic, comprising:
Receiving a generation request of a scene theme, and determining a target category combination corresponding to the scene theme to be generated according to the generation request;
Based on a pre-built category mapping dictionary and a category combination database, performing scene matching analysis on the target category combination to obtain optional scene sentences corresponding to the target category combination, which comprises the following steps: querying the superior category combination corresponding to the target category combination by using the category mapping dictionary to generate an extended category combination corresponding to the target category combination; generating scene matching values of the extended category combination and the material category combinations in the category combination database according to predefined rules; screening selectable category combinations from the material category combinations according to the scene matching values and a preset number of scene sentences; and acquiring, from the category combination database, the scene sentences corresponding to the selectable category combinations as the optional scene sentences; wherein the category mapping dictionary is pre-constructed and comprises articles, the type level corresponding to each article, and the superior classification corresponding to each article; and wherein each material category combination is determined, the scene sentences corresponding to each material category combination are obtained, and the category combination database is constructed from each material category combination and the scene sentences corresponding to each material category combination;
and carrying out structure processing on the optional scene sentences to obtain optional scene phrases, and screening out target titles corresponding to the scene topics to be generated from the optional scene phrases.
2. The method of claim 1, wherein the generating scene matching values of the extended category combination and the material category combinations in the category combination database according to predefined rules comprises:
for each material category combination in the category combination database, calculating a scene matching value of the extended category combination and each material category combination according to the following method:
Acquiring the number of first elements in the extension category combination, the number of second elements in each material category combination, the number of intersection elements of the extension category combination and each material category combination, and the number of collinear words of the extension category combination and each material category combination;
And obtaining scene matching values of the extended category combination and each material category combination according to the number of the first elements, the number of the second elements, the number of the intersection elements and the number of the collinear words.
3. The method according to claim 1, wherein after acquiring, from the category combination database, a scenario sentence corresponding to the selectable category combination as the selectable scenario sentence, the method further comprises:
Filtering the optional scene sentences by using a pre-built filtering dictionary, wherein the filtering dictionary comprises at least one of a holiday dictionary, a personal name dictionary, and a negative vocabulary dictionary.
4. The method of claim 1, wherein the performing structural processing on the alternative scene statement to obtain an alternative scene phrase comprises:
for each optional scene statement, carrying out structural processing on the optional scene statement according to the following method to obtain an optional scene phrase corresponding to the optional scene statement:
Extracting at least one feature word and at least one viewpoint word in each optional scene statement by utilizing a dependency syntax structure, and then calculating the association degree and semantic similarity between the at least one feature word and the at least one viewpoint word;
Generating characteristic viewpoint words of each optional scene statement according to the relevance, the semantic similarity, a preset relevance threshold and a preset semantic similarity threshold by combining a dependency syntax structure;
And determining the generated characteristic viewpoint words as the optional scene phrase corresponding to each optional scene statement.
5. The method of claim 1, wherein the screening out the target title corresponding to the scene topic to be generated from the selectable scene phrases comprises:
determining the number of main title words, the number of sub-short title words and the number of sub-long title words corresponding to the scene theme to be generated according to the generation request;
calculating emotion values of the optional scene phrases by using the classification model, and acquiring scene matching values of the optional scene phrases;
And screening the main title, the sub-short title and the sub-long title corresponding to the scene theme to be generated from the optional scene phrase according to the main title word number, the sub-short title word number, the sub-long title word number, the emotion value and the scene matching value.
6. The method of claim 5, wherein the selecting, from the selectable scene phrases, the main title, the sub-short title, and the sub-long title corresponding to the scene topic to be generated according to the number of main title words, the number of sub-short title words, the number of sub-long title words, the emotion value, and the scene matching value comprises:
calculating the optional weighted score of the optional scene phrase according to the emotion value and the scene matching value by using a preset scene phrase weighting algorithm;
Screening the main title from the optional scene phrases according to the main title word number and the optional weighted score;
calculating a relation value between the main title and each alternative scene phrase based on a BERT model, wherein the alternative scene phrases consist of the optional scene phrases other than the main title;
And screening the sub-short title and the sub-long title from the alternative scene phrase according to the number of sub-short title words, the number of sub-long title words, the optional weighted score and the relation value.
7. An apparatus for generating scene subjects, comprising:
The determining module is used for receiving a generating request of the scene theme and determining a target category combination corresponding to the scene theme to be generated according to the generating request;
the acquisition module is used for performing scene matching analysis on the target category combination based on a pre-built category mapping dictionary and a category combination database, and obtaining the optional scene sentences corresponding to the target category combination, which comprises the following steps: querying the superior category combination corresponding to the target category combination by using the category mapping dictionary to generate an extended category combination corresponding to the target category combination; generating scene matching values of the extended category combination and the material category combinations in the category combination database according to predefined rules; screening selectable category combinations from the material category combinations according to the scene matching values and a preset number of scene sentences; and acquiring, from the category combination database, the scene sentences corresponding to the selectable category combinations as the optional scene sentences; wherein the category mapping dictionary is pre-constructed and comprises articles, the type level corresponding to each article, and the superior classification corresponding to each article; and wherein each material category combination is determined, the scene sentences corresponding to each material category combination are obtained, and the category combination database is constructed from each material category combination and the scene sentences corresponding to each material category combination;
And the screening module is used for carrying out structure processing on the optional scene sentences to obtain optional scene phrases, and screening out target titles corresponding to the scene topics to be generated from the optional scene phrases.
8. An electronic device, comprising:
One or more processors;
Storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
9. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-6.
CN202010182006.0A 2020-03-16 2020-03-16 Method and device for generating scene theme Active CN113407815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010182006.0A CN113407815B (en) 2020-03-16 2020-03-16 Method and device for generating scene theme

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010182006.0A CN113407815B (en) 2020-03-16 2020-03-16 Method and device for generating scene theme

Publications (2)

Publication Number Publication Date
CN113407815A (en) 2021-09-17
CN113407815B (en) 2024-06-18

Family

ID=77676404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010182006.0A Active CN113407815B (en) 2020-03-16 2020-03-16 Method and device for generating scene theme

Country Status (1)

Country Link
CN (1) CN113407815B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766488A (en) * 2018-07-25 2020-02-07 北京京东尚科信息技术有限公司 Method and device for automatically determining theme scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090210411A1 (en) * 2008-02-15 2009-08-20 Oki Electric Industry Co., Ltd. Information Retrieving System

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766488A (en) * 2018-07-25 2020-02-07 北京京东尚科信息技术有限公司 Method and device for automatically determining theme scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A precise topic mining solution based on a business dictionary; Yang Zhi; Lin Feng; Hu Mu; Meng Qingqiang; Zheng Haoquan; Computer and Digital Engineering; 2018-08-20 (08); full text *

Also Published As

Publication number Publication date
CN113407815A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
JP6708717B2 (en) News recommendation method and device
US12039447B2 (en) Information processing method and terminal, and computer storage medium
CN108829822B (en) Media content recommendation method and device, storage medium and electronic device
CN107992585B (en) Universal label mining method, device, server and medium
WO2017114019A1 (en) Keyword recommendation method and system based on latent dirichlet allocation model
US11586689B2 (en) Electronic apparatus and controlling method thereof
WO2016197767A2 (en) Method and device for inputting expression, terminal, and computer readable storage medium
US8825661B2 (en) Systems and methods for two stream indexing of audio content
US9984687B2 (en) Image display device, method for driving the same, and computer readable recording medium
CN107291840B (en) User attribute prediction model construction method and device
US20150309988A1 (en) Evaluating Crowd Sourced Information Using Crowd Sourced Metadata
CN115982376B (en) Method and device for training model based on text, multimode data and knowledge
CN110879839A (en) Hot word recognition method, device and system
KR101541306B1 (en) Computer enabled method of important keyword extraction, server performing the same and storage media storing the same
CN112650842A (en) Human-computer interaction based customer service robot intention recognition method and related equipment
CN114861889A (en) Deep learning model training method, target object detection method and device
CN112926308B (en) Method, device, equipment, storage medium and program product for matching text
CN108153875B (en) Corpus processing method and device, intelligent sound box and storage medium
CN115062135B (en) Patent screening method and electronic equipment
CN113919424A (en) Training of text processing model, text processing method, device, equipment and medium
KR20180113444A (en) Method, apparauts and system for named entity linking and computer program thereof
CN113407815B (en) Method and device for generating scene theme
CN115858776B (en) Variant text classification recognition method, system, storage medium and electronic equipment
CN104376034B (en) Information processing equipment, information processing method and program
CN109960752A (en) Querying method, device, computer equipment and storage medium in application program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant