CN116070175B - Document generation method and electronic equipment - Google Patents
- Publication number: CN116070175B
- Application number: CN202310355671.9A
- Authority: CN (China)
- Prior art keywords: attribute, feature, target, weight, attribute feature
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
Abstract
The application discloses a document generation method and an electronic device, relates to the technical field of data processing, and solves the problem that a generated document cannot accurately describe an entity. The document generation method includes: acquiring a target material for generating a document, and extracting from it a target entity and a first attribute feature corresponding to the target entity; obtaining a second attribute feature corresponding to the target entity from a preset knowledge base; acquiring attribute information input by a user, and extracting from it a third attribute feature corresponding to the target entity; and performing feature fusion on the first, second, and third attribute features, obtained through these three different channels, to produce a fourth attribute feature that describes the target entity more comprehensively and accurately. The target document generated from the fourth attribute feature and the target entity, based on a preset document template, is therefore more accurate and more complete.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular to a document generation method and an electronic device.
Background
With the rapid development of the mobile internet and the popularization of mobile terminals, the marketing industry keeps growing, and its channels are gradually shifting from offline to online. Advertisement documents are an important promotional vehicle, and their volume has grown explosively; the traditional mode of writing advertisement documents by hand can no longer meet the advertising market's demand for rapid creation. Automatic document creation using machine learning algorithms has therefore been applied to current advertisement document production.
Generally, an advertisement service platform generates a document generation model in advance using a machine learning algorithm. When the platform receives advertisement materials, it inputs them into the model and obtains the corresponding advertisement document. However, how well the generated document matches the advertised commodity is limited by the credibility and accuracy of the advertisement materials. Current advertisement materials are strongly influenced by the advertiser's subjective factors, and their credibility and accuracy cannot be verified. As a result, the final advertisement document matches the advertised commodity poorly, which lowers the probability of successful advertisement recommendation and degrades the user experience.
Disclosure of Invention
The application provides a document generation method and an electronic device, which at least solve the problem that a generated document describes an entity inaccurately.
In a first aspect, a document generation method is provided. The method includes: acquiring a target material for generating a document, and extracting from it a target entity and a first attribute feature corresponding to the target entity; obtaining a second attribute feature corresponding to the target entity from a preset knowledge base; acquiring attribute information input by a user, and extracting from it a third attribute feature corresponding to the target entity; performing feature fusion on the first, second, and third attribute features, obtained through these three different channels, to obtain a fourth attribute feature; and generating a target document from the fourth attribute feature and the target entity, based on a preset document template.
In the above document generation method, the first attribute feature is extracted from the target material (i.e., the original material), so it stays closest to that material. The second attribute feature comes from a preset knowledge base, so it describes the target entity more accurately and comprehensively. The third attribute feature comes from attribute information input by the user, so it reflects the user's requirements most closely. The fourth attribute feature obtained by fusing the three therefore includes correct attribute features close to the original material, key attribute features the user omitted, and the control attribute features the user requires.
Therefore, with the fourth attribute feature and the target entity as its generation basis, the target document produced from the preset document template stays close to the original material, describes the target entity comprehensively and accurately, and satisfies the user's personalized control requirements. When advertisement documents are generated this way, the resulting copy describes the advertised commodity truthfully and matches it well, which improves the success rate of advertisement recommendation and preserves the user experience.
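The first-aspect flow can be sketched as follows. The function names, the dictionary representation (attribute name mapped to a descriptive feature), and the simple priority-based fusion are illustrative assumptions for this sketch, not the patented implementation.

```python
# Illustrative sketch of the first-aspect flow. The dict representation and
# the priority order (user input > knowledge base > material) are assumptions.

def fuse(first: dict, second: dict, third: dict) -> dict:
    """Fuse the three attribute-feature sets; later sources take priority."""
    fused = dict(first)
    fused.update(second)   # knowledge-base features correct the material's
    fused.update(third)    # user-supplied features override everything
    return fused

def generate_document(entity: str, features: dict, template: str) -> str:
    """Fill a preset document template with the entity and fused features."""
    desc = ", ".join(f"{attr}: {feat}" for attr, feat in sorted(features.items()))
    return template.format(entity=entity, features=desc)

first = {"origin": "farm A", "fat": "3.5%"}        # from the target material
second = {"origin": "farm B", "protein": "high"}   # from the knowledge base
third = {"flavor": "fresh"}                        # from user attribute input
fourth = fuse(first, second, third)
doc = generate_document("milk", fourth, "Try {entity}! ({features})")
```

In this toy run the knowledge base corrects the material's `origin`, the material's `fat` survives, and the user's `flavor` requirement is added, mirroring the three channels described above.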
In one possible implementation, the target entity includes at least one first attribute, and obtaining the fourth attribute feature corresponding to the target entity from the first, second, and third attribute features includes: for any first attribute of the target entity, correcting the first attribute feature describing that attribute with the second attribute feature describing it, to obtain a corrected feature result; and obtaining the fourth attribute feature describing that attribute from the feature result and the third attribute feature.
In this implementation, the second attribute feature, which has higher accuracy or confidence because it comes from the preset knowledge base, corrects the less accurate first attribute feature extracted from the target material, yielding a feature result of high accuracy or confidence. The feature result is then corrected with the user-supplied third attribute feature, fusing the user's required attribute features into it and producing a fourth attribute feature that reflects the user's requirements while preserving accuracy.
In another possible implementation, correcting the first attribute feature with the second attribute feature to obtain the corrected feature result includes: when the first and second attribute features are the same, determining either of them as the feature result; when they differ, determining the second attribute feature as the feature result; when the first attribute has a first attribute feature but no second attribute feature, determining the first attribute feature as the feature result; and when it has a second attribute feature but no first attribute feature, determining the second attribute feature as the feature result.
In this implementation: when the first and second attribute features describe the same first attribute consistently, the non-conflicting feature is retained as the feature result. When their descriptions conflict, the erroneous first attribute feature is replaced by the correct second attribute feature, so the feature result is the corrected feature. When a first attribute feature exists but no second attribute feature describes that attribute, the first attribute feature is retained, so correct features present only in the target material are not lost. When a second attribute feature exists but no first attribute feature describes that attribute, the second attribute feature supplements the missing feature. These four correction rules keep the correction of the first attribute features reasonable and each feature result accurate.
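The four correction rules above can be sketched as follows; the representation (a feature value per attribute, with `None` marking a missing feature) is an assumption for illustration.

```python
# A minimal sketch of the four correction rules; None marks a missing feature.

def correct(first_feat, second_feat):
    """Correct a material-derived feature with a knowledge-base feature."""
    if first_feat is not None and second_feat is not None:
        # consistent -> keep either; conflicting -> the knowledge base wins
        return second_feat if first_feat != second_feat else first_feat
    if second_feat is None:
        return first_feat        # only the material describes this attribute
    return second_feat           # supplement a feature missing in the material

assert correct("red", "red") == "red"       # same: keep
assert correct("red", "blue") == "blue"     # different: knowledge base wins
assert correct("red", None) == "red"        # material-only: retain
assert correct(None, "blue") == "blue"      # knowledge-base-only: supplement
```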
In another possible implementation, each first attribute corresponds to one feature result, and obtaining the fourth attribute feature describing the first attribute from the feature result and the third attribute feature includes: for any first attribute of the target entity, when the feature result and the third attribute feature are the same, determining either of them as the fourth attribute feature; when they differ, determining the third attribute feature as the fourth attribute feature; when the first attribute has a feature result but no third attribute feature, determining the feature result as the fourth attribute feature; and when it has a third attribute feature but no feature result, determining the third attribute feature as the fourth attribute feature.
In this implementation: when the feature result and the third attribute feature describe the same first attribute consistently, their common feature is retained, so the fourth attribute feature keeps a result that already reflects the user's requirement. When they conflict, the feature result inconsistent with the user's input is replaced by the required third attribute feature. When a feature result exists but no third attribute feature describes that attribute, the original feature result, which does not conflict with the user's requirements, is retained. When a third attribute feature exists but no feature result describes that attribute, the user-required feature missing from the feature results is supplemented. Setting a correction rule for each of these cases keeps the correction of the feature results by the third attribute features reasonable and the fourth attribute features accurate.
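These four fusion rules reduce to a simple priority: whenever a user-supplied third attribute feature exists it wins, otherwise the corrected feature result stands. A minimal sketch, with `None` marking a missing feature (an assumed representation):

```python
# Sketch of the feature-result / third-attribute-feature fusion rules.

def refine(result_feat, third_feat):
    """Fuse a corrected feature result with a user-supplied feature."""
    return third_feat if third_feat is not None else result_feat

assert refine("sweet", "sweet") == "sweet"   # same: keep
assert refine("sweet", "sour") == "sour"     # different: user input wins
assert refine("sweet", None) == "sweet"      # no user input: keep the result
assert refine(None, "sour") == "sour"        # user-only: supplement
```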
In another possible implementation, generating the target document from the fourth attribute feature, the target entity, and the preset document template corresponding to the target material includes: acquiring the preset entity in the preset document template and the preset attribute features associated with it, where the preset entity includes at least one attribute and the preset attribute features include descriptive words and/or descriptive sentences describing those attributes; correcting the preset entity with the target entity, and correcting the preset attribute features corresponding to at least one second attribute of the preset entity with the fourth attribute features corresponding to at least one first attribute of the target entity, to obtain target attribute features corresponding to the at least one first attribute; and generating the target document from those target attribute features.
In this implementation, the original target material is material the user provides for the target entity, so the target entity extracted from it is accurate. The preset entity is therefore checked against the target entity and, when judged inaccurate, corrected to the target entity, which guarantees that the entity appearing in the target document is correct. The fourth attribute features, which accurately reflect the user's requirements, then serve as the basis for correcting the preset attribute features, making the resulting target attribute features both accurate and aligned with those requirements. Finally, with the target attribute features of each first attribute as its generation basis, the generated target document describes the target entity's attribute information accurately and reflects the user's requirements. In this way the target document contains a correct target entity together with accurate, user-conforming attribute features, so it describes the target entity accurately.
In another possible implementation, correcting the preset attribute features corresponding to the at least one second attribute with the fourth attribute features corresponding to the at least one first attribute, to obtain the target attribute features, includes: for any first attribute of the target entity, when the first attribute matches a second attribute and the fourth attribute feature equals the preset attribute feature, determining either of them as the target attribute feature; when the first attribute matches a second attribute but the fourth attribute feature differs from the preset attribute feature, determining the fourth attribute feature as the target attribute feature; and when the first attribute matches no second attribute, determining the fourth attribute feature as the target attribute feature.

The first and second attributes may differ in that the first attributes of the fourth attribute features include no matching second attribute, and/or the second attributes of the preset attribute features include no matching first attribute.
In this implementation: when a fourth attribute feature and a preset attribute feature describe the same attribute consistently, the correct preset attribute feature is retained as the target attribute feature. When they conflict, the erroneous preset attribute feature is replaced by the fourth attribute feature, which becomes the target attribute feature. When the attributes themselves differ, the fourth attribute feature of the unmatched first attribute is taken as the target attribute feature, so preset attribute features of second attributes with no matching first attribute are excluded; the high-accuracy fourth attribute feature is retained while the low-accuracy preset attribute feature is eliminated. Correcting the preset attribute features with the high-accuracy fourth attribute features in this way retains correct preset features, replaces erroneous ones with the corresponding fourth attribute features, supplements fourth attribute features absent from the preset features, and removes low-accuracy preset features, so no low-accuracy attribute feature survives in the target attribute features.
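The three rules above can be sketched as follows; the dictionary representation (attribute name mapped to a descriptive feature) is an assumption. Note the net effect: the fourth attribute features always prevail, and preset features for attributes absent from the fourth set are discarded.

```python
# Sketch of the template-correction rules; the dict representation is assumed.

def target_features(fourth: dict, preset: dict) -> dict:
    """Correct preset attribute features with the fourth attribute features."""
    target = {}
    for attr, feat in fourth.items():
        if preset.get(attr) == feat:
            target[attr] = feat   # consistent: retain the shared feature
        else:
            target[attr] = feat   # conflicting or unmatched: fourth wins
    return target                 # preset-only attributes are dropped

preset = {"origin": "farm A", "package": "box"}   # from the document template
fourth = {"origin": "farm B", "protein": "high"}  # fused, user-aligned features
assert target_features(fourth, preset) == {"origin": "farm B", "protein": "high"}
```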
In another possible implementation, generating the target document from the target attribute features corresponding to the at least one first attribute includes: acquiring the weight of each target attribute feature, where the weight represents the degree of association between that feature and the target entity; sorting the target attribute features by weight and selecting a preset number of them from highest to lowest in the sorted result; and generating the target document from the selected target attribute features.
In this implementation, the target attribute features with larger weights are the ones included in the target document, so the document describes the target entity with its most strongly associated attribute features and is therefore more accurate.
In another possible implementation, the weight of a target attribute feature is determined from the weight of the corresponding fourth attribute feature, and the weight of the fourth attribute feature is determined from the first weight of the first attribute feature, the second weight of the second attribute feature, and the third weight of the third attribute feature. The first weight is determined from the frequency with which the descriptive words associated with the first attribute feature occur in the target material; the second weight is determined from the frequency with which the descriptive words associated with the second attribute feature occur in the preset knowledge base; and the third weight is determined from the attribute information.
In another possible implementation, for any first attribute of the target entity: when the feature result and the third attribute feature are the same, the weight of the fourth attribute feature is the third weight; when they differ, the weight of the fourth attribute feature is also the third weight; when a feature result exists but no third attribute feature, the weight of the fourth attribute feature is the weight of the feature result; and when a third attribute feature exists but no feature result, the weight of the fourth attribute feature is the third weight.
When the feature result and the third attribute feature describe the same first attribute consistently but their weights differ, and likewise when their descriptions conflict, the weight of the fourth attribute feature is taken from the third attribute feature, so that the controlling effect of the user's input attribute information is reflected. When a feature result exists that no third attribute feature covers, the fourth attribute feature can only be that feature result, so its weight is the weight of the feature result. Conversely, when a third attribute feature covers a first attribute that no feature result describes, the fourth attribute feature can only be that third attribute feature, so its weight is the third weight. Setting the weight of the fourth attribute feature case by case in this way keeps the weight determination reasonable and the fourth attribute features accurate.
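These weight rules reduce to one condition: the third weight applies whenever a third attribute feature exists, and the feature result's weight applies only when none does. A sketch, with `None` marking a missing feature (an assumed representation):

```python
# Sketch of the fourth-attribute-feature weight rules; None = missing feature.

def fourth_weight(third_feat, w3, w_result):
    """Weight of the fourth attribute feature for one first attribute."""
    return w3 if third_feat is not None else w_result

assert fourth_weight("sour", 0.8, 0.5) == 0.8   # user input present: third weight
assert fourth_weight(None, None, 0.5) == 0.5    # no user input: result's weight
```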
In another possible implementation, for any first attribute of the target entity: when the first and second attribute features are the same, the weight of the feature result is a target weight determined from the first weight and the second weight; when they differ, the weight of the feature result is the second weight; when a first attribute feature exists but no second attribute feature, the weight of the feature result is the first weight; and when a second attribute feature exists but no first attribute feature, the weight of the feature result is the second weight.
In the implementation manner, aiming at the condition that the first attribute feature and the second attribute feature are consistent in description of the same attribute and the first weight is different from the second weight, the weight of the feature result is determined by the first weight and the second weight together, so that the weight of the feature result considers the weights of the two feature dimensions of the first attribute feature and the second attribute feature, and the association degree of the feature result and the target entity in the scene is reflected more accurately.
When the first attribute feature and the second attribute feature describe the same attribute inconsistently, the weight of the feature result is determined by the second weight alone. The weight then reflects only the degree of association between the correct second attribute feature and the target entity, not that of the erroneous first attribute feature, so it more accurately reflects the association between the feature result and the target entity in this scenario.
When a first attribute feature describes a certain first attribute but no second attribute feature describes that attribute, the feature result can only be the first attribute feature, since no matching second attribute feature exists. Determining the weight of the feature result from the first weight in this scenario therefore better fits the scenario.
Conversely, when a second attribute feature describes a certain first attribute but no first attribute feature describes that attribute, the feature result can only be the second attribute feature. Determining the weight of the feature result from the second weight in this scenario therefore better fits the scenario.
Through this implementation, four ways of determining the weight of a feature result are set accordingly, ensuring that the weight-determination process is reasonable and that each feature result is accurate.
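The four weight rules above can be sketched as a small helper. This is a hypothetical illustration rather than the patented implementation: the function name is invented, and the choice of the mean as the "target weight" fused from the first and second weights is an assumption, since the text does not fix a specific fusion formula.

```python
# Hypothetical sketch of the four-way weight rule described above.
# The mean is one assumed way to fuse the first and second weights into
# the "target weight"; the text does not prescribe a specific formula.

def feature_result_weight(first_feature, first_weight, second_feature, second_weight):
    """Return (feature result, weight) for one first attribute of the target entity."""
    if first_feature is not None and second_feature is not None:
        if first_feature == second_feature:
            # Case 1: both sources agree -> target weight fused from both weights.
            return first_feature, (first_weight + second_weight) / 2
        # Case 2: sources disagree -> trust the knowledge-base (second) feature.
        return second_feature, second_weight
    if second_feature is None:
        # Case 3: only the material describes this attribute.
        return first_feature, first_weight
    # Case 4: only the knowledge base describes this attribute.
    return second_feature, second_weight
```

The four branches correspond one-to-one to the four cases in the implementation above.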
In another possible implementation, the preset knowledge base comprises a material base and a web page base. The material base comprises historical attribute features corresponding to the target entity, extracted from historical materials of the target entity; the web page base comprises attribute features associated with the target entity retrieved from the web page information of each target web page.
In a second aspect, an electronic device is provided, the electronic device comprising a memory, one or more processors; the memory is coupled with the processor; wherein the memory has stored therein computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the document generation method as in the first aspect or any of the possible implementations of the first aspect.
In a third aspect, there is provided a computer readable storage medium comprising: computer instructions; when executed in an electronic device, the computer instructions cause the electronic device to perform a document generation method as in the first aspect or any of the possible implementations of the first aspect.
In a fourth aspect, a computer program product is provided which, when run on a computer, causes the computer to perform the document generation method as in the first aspect or any one of the possible implementations of the first aspect.
It will be appreciated that the advantages achieved by the electronic device according to the second aspect, the computer-readable storage medium according to the third aspect, and the computer program product according to the fourth aspect may refer to the advantages of the first aspect and any possible implementation manner of the first aspect, and are not repeated here.
Drawings
FIG. 1 is a first schematic diagram of an advertisement document generation interface according to an embodiment of the present application;
FIG. 2 is a second schematic diagram of an advertisement document generation interface according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a document generation system according to an embodiment of the present application;
FIG. 4 is a schematic view of a scenario of a document generation method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a document generation system according to an embodiment of the present application;
FIG. 6 is a flowchart of a document generation method according to an embodiment of the present application;
FIG. 7 is a flowchart of an entity and attribute feature extraction method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of acquiring attribute information on a terminal device according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a fact material forming process according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a general knowledge forming process according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a user control signal forming process according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a feature fusion process according to an embodiment of the present application;
FIG. 13 is a first schematic diagram of a document generation process according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a document rewriting process according to an embodiment of the present application;
FIG. 15 is a second schematic diagram of a document generation process according to an embodiment of the present application;
FIG. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings. In the description of the present application, unless otherwise specified, "/" indicates an "or" relationship between the associated objects; for example, A/B may mean A or B. The term "and/or" merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. Unless otherwise indicated, "a plurality" means two or more. "At least one of" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be single or plural. In addition, to describe the technical solutions of the embodiments clearly, the words "first", "second", and the like are used to distinguish between identical or similar items having substantially the same function and effect. Those skilled in the art will appreciate that these words do not limit quantity or order of execution, and do not imply that the items necessarily differ. Meanwhile, in the embodiments of the present application, words such as "exemplary" or "such as" are used to serve as examples or illustrations; any embodiment or design described as "exemplary" or "for example" should not be construed as preferred or more advantageous than other embodiments or designs. Rather, such words are intended to present related concepts in a concrete and readily understood fashion.
In advertising, the advertisement document is vital information. It most directly describes the functions, characteristics, and advantages of the advertised entity, so the quality of the advertisement document directly determines whether a delivered advertisement is clicked and converted. Generally, advertisement documents are produced in one of two ways: manual writing, or intelligent generation by a machine algorithm model.
In the manual approach, an advertisement copywriter writes a targeted document based on an understanding of the advertised entity in order to attract users. This process is highly inefficient, however, and the writing ability of different copywriters is uneven, so the final advertisement delivery effect varies greatly.
To address the inefficiency of manually written advertisement documents and their strong dependence on the writer's skill, methods for intelligently generating documents with a machine algorithm model have been developed.
In the intelligent document generation approach, a pre-training corpus that takes a large amount of advertisement material as input and the corresponding high-quality advertisement documents as output is used to train a pre-trained document generation model, yielding the final document generation model. When the advertisement service platform receives target advertisement material input by a user and detects a document generation signal for that material, the material is input into the document generation model, which intelligently understands the content of the advertisement material and generates the target advertisement document following the generation paradigm learned from the rich pre-training corpus.
However, in practical applications of generating target advertisement documents with such a model, the generated document frequently describes the advertised article (i.e., the advertising entity) in a way that does not conform to its actual attribute features.
For example, as shown in FIG. 1, the user's terminal device responds to the advertisement material "peony fragrance, baby, shower gel" entered on the document generation application interface. The server inputs this advertisement material into the document generation model, and the corresponding advertisement document displayed on the terminal device reads: this shower gel is specially designed for babies and contains peony essence and milk protein extract. In fact, milk protein extract is never mentioned in the input; the document generation model has hallucinated, and the generated target advertisement document therefore risks introducing false information.
As another example, as shown in FIG. 2, the user's terminal device responds to the advertisement material "artificial intelligence (AI) photography, A operating system, model 1 mobile phone" entered on the document generation application interface, and the corresponding advertisement document displayed on the terminal device reads: the model 2 mobile phone has an ultra-smooth AI shooting experience and longer battery life. In the generated target advertisement document, the advertising entity is wrong, the model 1 mobile phone has become a model 2 mobile phone, and the key selling point of the A operating system has been lost.
Thus, current advertisement materials are strongly influenced by the subjective factors of the delivering party, and their credibility and accuracy cannot be verified. The resulting advertisement document matches the advertised commodity poorly, which reduces the probability of a successful advertisement recommendation and degrades the user experience.
In view of the foregoing, embodiments of the present application provide a document generation method. A target entity and first attribute features corresponding to the target entity are extracted from the target material from which the document is to be generated. Second attribute features corresponding to the target entity are obtained from a preset knowledge base. Meanwhile, third attribute features corresponding to the target entity are extracted from attribute information input by the user. The first, second, and third attribute features, obtained via three different approaches, are fused to obtain fourth attribute features that describe the target entity more comprehensively and accurately. The target document generated from the fourth attribute features and the target entity, based on a preset document template, is therefore more accurate and comprehensive.
In the above document generation method, the first attribute features are obtained from the target material (i.e., the original material) and are therefore closest to it. The second attribute features are obtained from the preset knowledge base and describe the target entity more accurately and comprehensively. The third attribute features are obtained from attribute information input by the user and describe the target entity in the way that best matches the user's needs. After the three are fused, the resulting fourth attribute features include not only the correct attribute features closest to the user's original material, but also the key attribute features the user missed and the control attribute features the user requires.
Using the fourth attribute features and the target entity as the basis for generation, the target document produced from the preset document template stays close to the original material, describes the target entity more comprehensively and accurately, and meets the user's personalized control requirements. When an advertisement document is generated with this method, the description of the advertised commodity is truthful, reliable, and well matched, which improves the success rate of advertisement recommendation and the user experience.
The document generation method provided by the embodiment of the application can be applied to a document generation system with a document generation function. As shown in fig. 3, the document generation system includes at least one first terminal device 11, at least one server 12 (e.g., a server for document generation), and at least one second terminal device 13.
In some embodiments, the target document may be an advertisement document, and the target material may be advertisement material (such as pictures, video, audio, etc. in the advertisement).
Taking the target material as an advertisement material as an example, the first terminal device 11 may send the advertisement material to the server 12 in response to a first input operation of the advertisement material input by the advertiser, and request the server 12 to generate an advertisement document corresponding to the advertisement material. The server 12 extracts the corresponding advertising entity and the first attribute feature of the advertising entity from the advertising material. The advertising entity may be an advertising commodity (e.g., a commodity sold by a merchant such as a mobile phone, a watch, or clothing). The server 12 uses the advertisement entity as a retrieval basis, and retrieves the second attribute features corresponding to the advertisement entity from the preset knowledge base. Further, the first terminal device 11 transmits the attribute information to the server 12 in response to a second input operation of the attribute information input by the advertiser. After the server 12 receives the attribute information, a third attribute feature corresponding to the advertising entity is extracted from the attribute information.
The server 12 performs feature fusion on the first attribute feature, the second attribute feature and the third attribute feature to obtain a fused fourth attribute feature, so that the advertisement document of the advertisement entity is generated based on a preset document template corresponding to the target material by taking the fourth attribute feature and the advertisement entity as knowledge bases for generating the advertisement document.
Optionally, when the first, second, and third attribute features are fused, the first weight of the first attribute feature, the second weight of the second attribute feature, and the third weight of the third attribute feature may also be fused, so that the weight of the fourth attribute feature is the fusion of the three. The knowledge base for generating the advertisement document then also includes the weights of the fourth attribute features. On this basis, the advertisement document is generated from the fourth attribute features with higher weights (i.e., those with high association with the advertising entity), ensuring a good match between the advertisement document and the advertising entity.
The weights are used to represent the degree of association of the corresponding attribute features with the target entity (e.g., advertising entity).
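The three-way feature and weight fusion described above could be sketched as follows. The data model (dicts keyed by attribute name), the precedence of the user's third feature over the second and first, and the mean-of-available-weights policy are assumptions for illustration only, not the patented method.

```python
# Minimal sketch of fusing first/second/third attribute features and their
# weights into fourth attribute features. Data model and precedence policy
# (third over second over first) are assumed for illustration.

def fuse_features(first, second, third):
    """Each argument maps attribute name -> (feature text, weight).
    Returns attribute name -> (fourth attribute feature, fused weight)."""
    fused = {}
    for attr in set(first) | set(second) | set(third):
        sources = [src[attr] for src in (first, second, third) if attr in src]
        # Feature text: prefer the user's (third) description, then the
        # knowledge base's (second), then the original material's (first).
        text = (third.get(attr) or second.get(attr) or first.get(attr))[0]
        # Fused weight: mean over all sources that describe this attribute,
        # so the weight reflects every available feature dimension.
        weight = sum(w for _, w in sources) / len(sources)
        fused[attr] = (text, weight)
    return fused
```

A feature present in several sources thus receives a weight blending all of them, while a feature known to only one source keeps that source's weight.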
The second terminal device 13 may send a play request for playing an advertisement document corresponding to the advertisement material to the server 12 during a process of running a certain APP. After receiving the play request, the server 12 transmits the generated advertisement document to the second terminal device 13. The advertisement document is displayed for the advertisement user on the display interface of the second terminal device 13.
As shown in fig. 4, in one possible implementation, the document generated by the above-mentioned document generation method may be applied to various scene advertisement delivery processes. Specifically, the second terminal device 13 installs thereon various service software such as a search engine, an applet, an APP, and the like. The second terminal device 13 responds to the starting instruction or the application instruction of the service software, and obtains the advertisement document corresponding to the service software from the server 12 while running the service software; and displaying the advertisement document corresponding to the service software on the display interface of the second terminal device 13 so as to deliver the corresponding advertisement document in the delivery process of the corresponding advertisement. Wherein, various scene advertisements can be show advertisements, shopping advertisements, search advertisements, application propaganda advertisements, audio advertisements and the like. The second terminal device 13 may be an electronic device such as a mobile phone, a tablet computer, and a computer.
In some embodiments, the server 12 may be a single cloud server, or may be a server cluster formed by a plurality of servers.
As shown in fig. 5, the above-mentioned document generating system further includes a plurality of first terminal devices 11 and a cloud server, wherein the plurality of first terminal devices 11 and the cloud server communicate through a wired network or a wireless network.
In some embodiments, the first terminal device 11 includes a document production module (e.g., an advertisement production module) and a document display module (e.g., an advertisement display module). The document making module is used for obtaining the document generated by the cloud server, and the document display module is used for displaying the generated document. The cloud server comprises a document generation module for generating a document.
The first terminal device 11 may be a mobile phone, a tablet computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (personal digital assistant, PDA), an augmented reality (augmented reality, AR)/Virtual Reality (VR) device, or an electronic device that supports user input of a target material, and the specific form of the first terminal device 11 is not particularly limited in this embodiment. And the first terminal device 11 may perform man-machine interaction with a user through one or more manners of a keyboard, a touch pad, a touch screen, a remote controller, a voice interaction or a handwriting device, etc.
The document generation method according to the embodiments of the present application will be described below, taking a server as the execution body of the method. As shown in FIG. 6, the document generation method may include S51 to S57.
S51, the server acquires target materials for generating the document.
As one possible material acquisition mode, the first terminal device responds to input confirmation operation of a target material input by a user on a display interface, and initiates a document generation request for generating a document based on the target material. Wherein the document generation request includes the target material. The server receives the document generation request and acquires the target material. The user may be an advertiser or advertiser.
In some embodiments, the preset document is also referred to as a candidate document.
As another possible material acquisition mode, the server receives a document rewrite request for rewriting a candidate document, and determines the material from which the candidate document was generated as the target material.
S52, the server extracts the target entity and one or more first attribute features corresponding to the target entity from the target material.
In some embodiments, the target entity may be an advertising commodity or an advertising object, or the like.
Correspondingly, the target material is a material for expressing a target entity, and the target entity extracted based on the target material is a real entity to be expressed by the target document.
The target entity corresponds to at least one first attribute, and the first attribute may be a functional attribute, a color attribute, a material attribute, an appearance attribute, or the like. The first attribute features are one or more, one first attribute corresponds to one first attribute feature, and the first attribute feature is a description word or a description sentence describing the first attribute. For example, the first attribute feature may be an ultra-thin design, small volume, a colorful appearance, and the like.
S53, the server acquires one or more second attribute features corresponding to the target entity from a preset knowledge base.
The second attribute feature is also a descriptor or description sentence describing any one of the first attributes of the target entity. The number of second attribute features is also typically one or more.
The preset knowledge base comprises attribute features with accuracy higher than preset accuracy. Since the second attribute features are obtained from the preset knowledge base, the accuracy of the second attribute features is higher than that of the first attribute features from the target material.
The accuracy may be reflected in terms of an evaluation index, i.e., the higher the evaluation index, the higher the accuracy of the attribute features.
In one implementation, the evaluation index of the attribute features is reflected according to the click rate or search frequency of the attribute features by the advertisement user. The larger the click rate or search frequency of the advertisement user on the attribute features, the higher the evaluation index of the attribute features.
In some embodiments, the preset knowledge base includes a material base and a web page base. The material library comprises historical attribute features corresponding to the target entity, and the historical attribute features are extracted from the historical materials of the target entity. The web page library includes attribute features associated with the target entities retrieved from web page information for each target web page.
The historical material is played on the second terminal device, and the evaluation index is higher than the first evaluation threshold. The historical materials in the material library are dynamic materials updated in real time, and the historical materials are stored in the material library after being associated with the corresponding entities. The rating index is determined based on rating information entered by a user viewing the material on the second terminal device. The higher the evaluation index, the higher the corresponding material confidence or accuracy.
Illustratively, the server retrieves the historical attribute features associated with the target entity from the material base, using the target entity as the search term, and determines the associated historical attribute features as second attribute features. For example, for the "X model cell phone", the server inputs "X model cell phone" into the material base and retrieves historical attribute features such as "2023 new model" and "suitable for children". These historical attribute features are second attribute features of the X model cell phone.
In some embodiments, the target web page may be one or more internet web pages. The target web page is an internet web page storing stories associated with the target entity.
Illustratively, upon determining the target entity, the server accesses the target web page, retrieves the attribute features associated with the target entity from its web page information, and determines those whose evaluation index is higher than the second evaluation threshold as second attribute features. For example, using "X model cell phone" as the keyword, attribute features such as "suitable for children" and "Beidou positioning" are retrieved from the target web page. These attribute features are second attribute features of the X model cell phone.
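The two S53 lookups can be sketched with assumed in-memory stand-ins for the material base and the web page base; the entity name, features, evaluation indices, and thresholds below are illustrative values echoing the examples above.

```python
# Hypothetical in-memory stand-ins for the material base and web page base;
# each maps an entity to (attribute feature, evaluation index) pairs.

MATERIAL_BASE = {
    "X model cell phone": [("2023 new model", 0.9), ("suitable for children", 0.8)],
}
WEB_PAGE_BASE = {
    "X model cell phone": [("suitable for children", 0.7), ("Beidou positioning", 0.6)],
}

def second_attribute_features(entity, first_threshold=0.5, second_threshold=0.5):
    """Collect second attribute features whose evaluation index clears the
    first threshold (material base) or second threshold (web page base)."""
    features = {f for f, idx in MATERIAL_BASE.get(entity, []) if idx > first_threshold}
    features |= {f for f, idx in WEB_PAGE_BASE.get(entity, []) if idx > second_threshold}
    return features
```

Features found in both bases are naturally deduplicated by the set union.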
S54, the server acquires attribute information input by a user.
As a way of acquiring attribute information, the first terminal device of the user responds to a document generation request initiated by the user on the first terminal device interface, and an "attribute information input box" as shown in fig. 8 is displayed on the display interface. The first terminal device receives parameter (such as 'middle-aged male' and 'new product marketing') setting operation instructions of users on 'attribute information input boxes' of different categories (such as target users and scene option categories), and sends the parameters set by the users on the 'attribute information input boxes' to the server. The server determines the parameters input by the user as attribute information.
In some embodiments, the attribute information determined as the above-described input parameter is referred to as user attribute control information. Specifically, the user attribute control information is divided into two categories: one type is suppression type attribute information, namely negative attribute information, of a specific attribute in a target document which is expected to be deleted by a user; the other is enhanced class attribute information of a specific attribute desired to be added in the target document, i.e., forward attribute information. The server configures corresponding labels for the attribute control information of different categories to distinguish whether the attribute control information belongs to the suppression type attribute information or the enhancement type attribute information.
In some embodiments, the server obtains a third attribute feature as follows. The server obtains the attribute control information "blood oxygen monitoring", which is a description of a first attribute, and the corresponding third attribute feature obtained is "blood oxygen monitoring".
As another acquisition mode of attribute information, the server determines user trend information of the user to the target entity according to the history attribute information associated with the user, and determines the user trend information as attribute information. The history attribute information may be obtained from a history document associated with the user.
Illustratively, the user tendency information includes, but is not limited to, targeting information for the advertisement placement (gender, age, education, hobbies, etc.) and sales scenario information for the advertisement placement (new product launches, promotions, time-limited offers), etc.
As one possible implementation, different user tendency information is stored, in a bucket-style mapping manner, in a pre-built tag library according to tendency-information label identifications, so that user tendency information can conveniently be obtained by category and a third attribute feature determined from the label identification.
The process of assigning label identifications by bucket-style mapping is as follows: user tendency information belonging to a label identification is stored in the storage space corresponding to that label identification. The tag library comprises a plurality of label identifications, and the label identifications correspond one-to-one with the storage spaces. Each storage space may be understood as a "bucket" storing the corresponding user tendency information.
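The bucket-style tag library could be sketched as one storage space ("bucket") per label identification. This minimal class is a hypothetical illustration; the class and method names are invented.

```python
from collections import defaultdict

class TagLibrary:
    """Bucket-style tag library: one bucket per label identification."""

    def __init__(self):
        self._buckets = defaultdict(list)  # label identification -> "bucket"

    def store(self, label, tendency_info):
        # Store the user tendency information in its label's bucket.
        self._buckets[label].append(tendency_info)

    def fetch(self, label):
        # Read back all tendency information stored under one label.
        return list(self._buckets[label])
```

For instance, `store("male", "male user")` places "male user" in the "male" bucket, and `fetch("male")` later retrieves it by category.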
Illustratively, the user tendency information is analyzed to obtain third attribute features as follows. The server obtains the user tendency information "male user" and "new product to be marketed". This user tendency information maps to the tendency-information label identifications "male" and "new": "male user" maps to "male", and "new product to be marketed" maps to "new". The text component extracted from the advertisement word stock for "male" is "necessary for mature men", and the text component extracted for "new" is "shocked to be on the market"; the third attribute features obtained are therefore "necessary for mature men" and "shocked to be on the market".
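The worked example above, in code: each piece of tendency information maps to a label identification, and the label selects a text component from an advertisement word stock. Both dictionaries are illustrative stand-ins, not a real data source.

```python
# Illustrative stand-ins for the label mapping and advertisement word stock
# from the example above.
LABEL_MAP = {
    "male user": "male",
    "new product to be marketed": "new",
}
AD_WORD_STOCK = {
    "male": "necessary for mature men",
    "new": "shocked to be on the market",
}

def third_features_from_tendency(tendency_infos):
    """Map each piece of user tendency information to its label, then pull
    the label's text component as a third attribute feature."""
    return [AD_WORD_STOCK[LABEL_MAP[t]] for t in tendency_infos if t in LABEL_MAP]
```

Unrecognized tendency information is simply skipped rather than raising an error.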
The attribute information may come from either or both of the two acquisition modes; its specific content is not limited here and is determined by the specific acquisition scenario.
S55, the server extracts one or more third attribute features corresponding to the target entity from the attribute information.
The attribute information may be user-entered information indicating the attribute features that the generated document should and/or should not include. Information on attribute features the document should include is positive attribute feature information; information on attribute features the document should not include is negative attribute feature information.
The attribute information can reflect the user's needs, and the third attribute feature extracted from the attribute information can also reflect the user's needs.
In some embodiments, the attribute information may include positive attribute information and/or negative attribute information. The positive attribute information indicates attribute features that are to be included in the target document; the negative attribute information indicates attribute features that cannot be included in the target document. Based on this, positive attribute features are extracted from the positive attribute information, and negative attribute features are extracted from the negative attribute information. The positive attribute features are attribute features included in the target document, and the negative attribute features are attribute features not included in the target document. Thus, each third attribute feature may be a positive attribute feature or a negative attribute feature.
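The split into positive and negative attribute features can be sketched as follows. The tuple representation with a "+"/"-" polarity marker is an assumption for illustration, not the patent's data format.

```python
# Minimal sketch: split user-entered attribute information into positive
# attribute features (must appear in the target document) and negative
# attribute features (must not appear).
def split_attribute_info(attribute_info):
    positive, negative = [], []
    for polarity, feature in attribute_info:
        # assumed convention: "+" marks positive info, "-" marks negative info
        (positive if polarity == "+" else negative).append(feature)
    return positive, negative

pos, neg = split_attribute_info([
    ("+", "for male users"),             # positive attribute information
    ("-", "blood pressure monitoring"),  # negative attribute information
])
print(pos, neg)  # → ['for male users'] ['blood pressure monitoring']
```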
Taking the generated target document as an advertisement document as an example, in one example, the attribute information input by the user is positive attribute information. For example, if the positive attribute information is that the advertisement commodity is aimed at male users, the advertisement document needs to include the positive attribute feature "male user", and the document of the advertisement commodity can include document content similar to "necessary articles for men".
In another example, the attribute information input by the user is negative attribute information. If the negative attribute information is that the selling point of the advertisement commodity is not blood pressure monitoring, the target document cannot include the negative attribute feature "blood pressure monitoring", and the advertisement document of the advertisement commodity cannot include document content similar to "blood pressure monitoring".
In yet another example, the attribute information input by the user includes both positive attribute information and negative attribute information. For example, if the positive attribute information is that the commodity is aimed at male users and the negative attribute information is that the selling point of the advertisement commodity is not blood pressure monitoring, the advertisement document includes document content similar to "necessary articles for men" but cannot include document content similar to "blood pressure monitoring".
The third attribute feature is also a descriptor or a description sentence describing any one of the first attributes of the target entity. The number of third attribute features is also typically one or more.
S56, obtaining one or more fourth attribute features corresponding to the target entity according to the one or more first attribute features, the one or more second attribute features and the one or more third attribute features.
In some embodiments, the fourth attribute features may be obtained by feature fusion. It can be understood that, for any one first attribute, the corresponding first attribute feature, second attribute feature and third attribute feature are fused. In this way, according to the second attribute feature with higher confidence and the third attribute feature reflecting the user requirement, an erroneous first attribute feature is corrected and a missing attribute feature is supplemented, so that the obtained fourth attribute features include the correct first attribute features as well as the second attribute features and/or third attribute features missing from the first attribute features.
In some embodiments, the specific feature fusion process is as follows: attribute features among the first attribute features that do not match the target entity are corrected into the correct attribute features, using the correct attribute features describing the target entity that are included in the second attribute features from the preset knowledge base. Meanwhile, attribute features not described by the first attribute features are supplemented using the key attribute features included in the second attribute features and/or the control attribute features included in the third attribute features.
In some embodiments, feature fusion is also referred to as knowledge fusion.
As a possible feature fusion manner, the first attribute features are modified based on the second attribute features and the third attribute features, so as to obtain fourth attribute features.
Modifying the first attribute features includes one or more of: feature supplementation, feature correction, and feature deletion.
Feature supplementation indicates supplementing attribute features that are not included; feature correction indicates correcting erroneous attribute features; feature deletion indicates deleting attribute features that cannot be included in the document.
S57, the server generates a target document according to one or more fourth attribute characteristics, the target entity and a preset document template corresponding to the target material.
The preset document template comprises a preset entity and corresponding preset attribute features. Wherein the preset entity comprises at least one second attribute; the preset attribute features include descriptors and/or descriptive sentences describing the second attributes of the preset entities.
As a possible implementation, the preset document template may include a preset document.
As a possible target document generation mode, the server inputs target materials into a preset document template to obtain a preset document corresponding to the target materials. And the server corrects the preset entity and the preset attribute characteristic in the preset document obtained based on the target material based on the fourth attribute characteristic and the target entity to obtain the target document.
In this embodiment, the fourth attribute features and the target entity serve as the generation basis of the target document, so that the target document generated from the preset document template stays close to the original material, describes the target entity more comprehensively and accurately, and meets the personalized control requirements of the user. When an advertisement document is generated based on this document generation method, an advertisement document that describes the advertisement commodity truthfully, reliably and with a high degree of matching can be obtained, thereby improving the success rate of advertisement recommendation and the user experience.
As an embodiment of acquiring the first attribute features, the method of extracting the target entity and the first attribute features from the target material may include S52a to S52d, as shown in fig. 7.
S52a, the server performs word segmentation and part-of-speech tagging on the target material to obtain a plurality of adjectives and a plurality of adjective bodies contained in the target material.
Based on this step, adjectives and adjective bodies included in the target material can be determined.
S52b, the server analyzes the dependency relationships between the adjectives and the adjective bodies to obtain the adjective body modified by each adjective, and obtains the first inter-word dependency relationships between the plurality of adjectives and the plurality of adjective bodies.
In some embodiments, the first inter-word dependencies may be represented in the form of a syntax tree, with adjectives pointing to corresponding adjective bodies.
S52c, the server identifies the target entity of the target material based on the entity extraction model of the depth semantics to obtain the target entity.
The execution order of step S52c relative to steps S52a and S52b is not fixed. Step S52c may be performed before step S52a, between steps S52a and S52b, or after step S52b.
S52d, the server determines, from the first inter-word dependency relationships between the adjectives and the adjective bodies, each target adjective subordinate to the target entity, and obtains, according to the first attribute described by each target adjective, the first attribute feature corresponding to that first attribute.
Illustratively, the server aligns the adjective bodies obtained in S52b with the target entity obtained in S52c, and determines, from the plurality of adjectives in the first inter-word dependency relationships, each target adjective corresponding to the target entity. The first attribute corresponding to each target adjective is then determined, and the target adjective is determined as the first attribute feature of that first attribute.
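The alignment step in S52d can be sketched as follows. This is a hedged sketch: a real implementation would run word segmentation, part-of-speech tagging, dependency parsing (S52a/S52b) and a deep-semantic entity extraction model (S52c); here their outputs are hard-coded so only the S52d alignment is shown, and all names and values are illustrative assumptions.

```python
# Assumed outputs of S52b: first inter-word dependencies as
# (adjective, adjective body) pairs, the adjective pointing to its body.
dependencies = [
    ("lightweight", "smartwatch"),
    ("accurate", "smartwatch"),
    ("bright", "screen"),
]
# Assumed output of the entity extraction model in S52c.
target_entity = "smartwatch"

# S52d: adjectives whose adjective body aligns with the target entity are
# the target adjectives, i.e. the first attribute features.
first_attribute_features = [adj for adj, body in dependencies if body == target_entity]
print(first_attribute_features)  # → ['lightweight', 'accurate']
```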
In one feature fusion manner, the above S56 may be specifically implemented by the following steps S56A and S56B.
S56A, the server corrects each first attribute feature by using each second attribute feature to obtain each feature result.
For any one first attribute: an erroneous first attribute feature inconsistent with the second attribute feature is corrected into the second attribute feature, which is taken as the feature result, so that the feature result does not include the erroneous first attribute feature. A correct first attribute feature consistent with the second attribute feature is retained as the feature result, so that the feature result includes the correct first attribute feature. A second attribute feature not included in the first attribute features is determined as the feature result, so that the second attribute features omitted from the first attribute features are included in the feature results. A first attribute feature not included in the second attribute features is determined as the feature result, so that the correct first attribute features missed by the second attribute features are included in the feature results.
Thus, each feature result not only retains the correct first attribute feature, but also includes the missing second attribute feature in each first attribute feature.
It is understood that the target entity includes at least one first attribute, one first attribute corresponding to one first attribute feature, one second attribute feature, and one feature result.
When the first attribute feature and/or the second attribute feature are plural, S56A may be implemented specifically by:
the server selects the first attributes in turn. For each selected first attribute, the first attribute feature describing it is corrected using the second attribute feature describing it, and a corrected feature result describing that first attribute is obtained. This continues until all first attributes corresponding to the first attribute features and the second attribute features have been selected, or until each first attribute feature and each second attribute feature has been corrected. The feature results corresponding to the first attributes are thereby obtained.
In some possible application scenarios, the manner in which the server corrects any one first attribute feature using the second attribute feature includes the following scenario one to scenario four.
In scenario one, when the first attribute feature and the second attribute feature are the same, the first attribute feature or the second attribute feature is determined as the feature result.
For example, for attribute 1, the first attribute feature is A1 and the second attribute feature is A2. If A1 is the same as A2, the feature result is A1 or A2.
This correction mode addresses the case where the first attribute feature and the second attribute feature describe the same first attribute consistently: the attribute feature on which the two do not conflict is retained, so that the feature result is the non-conflicting attribute feature.
In scenario two, when the first attribute feature and the second attribute feature are different, the second attribute feature is determined as the feature result.
Illustratively, for attribute 1, the first attribute feature is A1 and the second attribute feature is A2. If A1 is different from A2, the feature result is A2.
This correction mode addresses the case where the descriptions of the first attribute feature and the second attribute feature are inconsistent: the erroneous first attribute feature is corrected into the correct second attribute feature, so that the feature result is the corrected, correct attribute feature.
In scenario three, when the first attribute has a first attribute feature but no second attribute feature, the first attribute feature is determined as the feature result.
Illustratively, for attribute 2, the first attribute feature is A3, and no second attribute feature is described for attribute 2; the feature result is A3.
This correction mode addresses the case where a first attribute feature describes the first attribute but no second attribute feature does: the first attribute feature, which conflicts with no second attribute feature and is absent from the second attribute features, is retained in the feature result, so that the correct first attribute features in the target material are not omitted from the feature results.
In scenario four, when the first attribute has no first attribute feature but has a second attribute feature, the second attribute feature is determined as the feature result.
Illustratively, for attribute 3, the second attribute feature is A4, and no first attribute feature is described for attribute 3; the feature result is A4.
This correction mode addresses the case where no first attribute feature describes the first attribute but a second attribute feature does: the attribute feature missing from the first attribute features is supplemented, so that the feature result is the attribute feature that the first attribute features lack.
Combining scenario one to scenario four, the method in the embodiments of the present application sets, for the different situations in which the first attribute features and the second attribute features describe each first attribute, a matching mode for correcting the first attribute features, thereby ensuring the rationality of the correction process of the first attribute features and the accuracy of each feature result.
S56B, the server corrects each feature result using each third attribute feature to obtain the fourth attribute features describing each first attribute.
Wherein the fourth attribute feature may be one or more. A first attribute corresponds to a fourth attribute feature.
In some possible application scenarios, the manner in which the server corrects the feature result of any one first attribute using the third attribute feature includes the following scenario five to scenario eight.
In scenario five, when the feature result is the same as the third attribute feature, the third attribute feature or the feature result is determined as the fourth attribute feature, so that the common, non-conflicting attribute features are retained in the fourth attribute features.
Illustratively, for attribute 3, the feature result is A5 and the third attribute feature is A6. If A5 is the same as A6, A5 or A6 is the fourth attribute feature.
This correction mode addresses the case where the feature result and the third attribute feature describe the same first attribute consistently: the common attribute feature on which the two agree is retained, so that the fourth attribute feature retains the feature result reflecting the user requirement.
In scenario six, when the feature result and the third attribute feature are different, the third attribute feature is determined as the fourth attribute feature.
Illustratively, for attribute 3, the feature result is A5 and the third attribute feature is A6. If A5 is different from A6, the fourth attribute feature is A6.
This correction mode addresses the case where the third attribute feature and the feature result describe the same first attribute inconsistently: the feature result inconsistent with the attribute information input by the user is corrected into the third attribute feature required by the user, so that after correction the fourth attribute features include attribute features meeting the user requirement.
In scenario seven, when the first attribute has a feature result but no third attribute feature, the attribute feature in the feature result is determined as the fourth attribute feature.
Illustratively, for attribute 4, the feature result is A7, and the third attribute features do not include an attribute feature described for attribute 4; the fourth attribute feature is A7.
This correction mode addresses the case where the feature results include an attribute feature described for the first attribute but the third attribute features do not: the original attribute feature in the feature results that does not conflict with the user requirement is retained, so that the fourth attribute features include the original attribute features of the feature results that do not conflict with the user requirement.
In scenario eight, when the first attribute has a third attribute feature but no attribute feature in the feature results, the third attribute feature is determined as the fourth attribute feature.
Illustratively, for attribute 5, the third attribute feature is A8, and the feature results do not include an attribute feature described for attribute 5; the fourth attribute feature is A8.
This correction mode addresses the case where the feature results do not include an attribute feature described for the first attribute but the third attribute features do: the user-required attribute feature that is included in the third attribute features but absent from the feature results is retained, so that the fourth attribute features include the user-required attribute features that the feature results omit.
Combining scenario five to scenario eight, the method in the embodiments of the present application sets, for the different situations in which the feature results and the third attribute features describe each first attribute, corresponding modes in which the third attribute features correct the feature results, thereby ensuring the rationality of the correction process and the accuracy of the fourth attribute features.
In this feature fusion mode, the second attribute features with higher accuracy or confidence, obtained from the preset knowledge base, are used to correct the first attribute features with lower accuracy, obtained from the target material, yielding feature results with high accuracy or confidence. The feature results are then corrected using the third attribute features, which are input by the user and reflect the user requirements, so that the user-required attribute features are fused with the feature results, yielding fourth attribute features that reflect the user requirements while remaining accurate.
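The two-stage correction S56A/S56B can be sketched as follows, modeling each set of attribute features as an `{attribute: feature}` dict. The function name, attributes, and feature values are illustrative assumptions; only the per-attribute override rules come from scenarios one to eight above.

```python
def correct(base, override):
    """One correction pass. Where both sides describe the same attribute,
    the override wins when they differ (scenarios two/six) and either copy
    is kept when they agree (scenarios one/five); features present on only
    one side are retained or supplemented (scenarios three/four and
    seven/eight)."""
    result = dict(base)       # keep base-only features
    result.update(override)   # override wins on conflicts, fills gaps
    return result

first  = {"battery": "7 days", "screen": "LCD"}   # from the target material
second = {"screen": "AMOLED", "weight": "38 g"}   # from the preset knowledge base
third  = {"audience": "for male users"}           # from the user's attribute information

feature_results = correct(first, second)           # S56A
fourth_features = correct(feature_results, third)  # S56B
print(fourth_features)
# → {'battery': '7 days', 'screen': 'AMOLED', 'weight': '38 g', 'audience': 'for male users'}
```

Note how the erroneous "LCD" from the material is corrected by the knowledge base in S56A, while "weight" and "audience" are supplemented rather than overwritten.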
In another feature fusion mode, the priority of the first attribute feature, the priority of the second attribute feature and the priority of the third attribute feature are set to be sequentially higher in the server, namely, the first priority of the first attribute feature is lower than the second priority of the second attribute feature, and the second priority of the second attribute feature is lower than the third priority of the third attribute feature.
The higher the priority, the more accurately, or with higher confidence, the attribute feature describes the corresponding attribute of the target entity.
Correspondingly, for the same attribute, the first attribute feature has lower accuracy than the second attribute feature; the second attribute feature is less accurate than the third attribute feature.
Based on this, the above S56 can also be embodied by the following steps S561 to S564.
And S561, the server combines the first attribute features, the second attribute features and the third attribute features to obtain a fusion feature.
Each of the first attribute features described above may be understood as one or more first attribute features; each second attribute feature may be understood as one or more second attribute features; each third attribute feature may be understood as one or more third attribute features.
It is understood that the fusion features include respective first attribute features, respective second attribute features, and respective third attribute features.
Wherein each first attribute feature is an attribute feature described for a first number of different first attributes. Each second attribute feature is an attribute feature described for a second number of different first attributes. Each third attribute feature is an attribute feature described for a third number of different first attributes.
The first number of first attributes, the second number of first attributes and the third number of first attributes may be the same attribute or different attributes. For example, the first number of first attributes and the second number of first attributes may all be the same, all different, or only partially the same. Therefore, the first number, the second number, the third number, and the combination form among the respective different first attributes are not particularly limited in this application, and generally, the number and the combination form thereof are determined by the contents of the specific descriptions in the respective first attribute features, the respective second attribute features, and the respective third attribute features.
Illustratively, if each first attribute feature includes attribute feature a and attribute feature B; each second attribute feature comprises an attribute feature C and an attribute feature D; the third attribute features include attribute feature E and attribute feature F. The merged feature comprises an attribute feature A, an attribute feature B, an attribute feature C, an attribute feature D, an attribute feature E and an attribute feature F.
S562, the server judges whether the fusion feature has a plurality of attribute features described for the same attribute. If so, S563 is performed; if not, S564 is performed.
It is understood that the plurality of attribute features is any two or three of the first attribute feature, the second attribute feature, and the third attribute feature.
In S563, the server determines, as the fourth attribute feature, the attribute feature having the highest priority among the plurality of attribute features.
It can be understood that, for a certain attribute, when the fusion feature includes only one attribute feature corresponding to that attribute, that sole attribute feature has the highest priority. That is, when a certain attribute corresponds to exactly one attribute feature in the fusion feature, the attribute feature uniquely corresponding to that attribute is determined as the fourth attribute feature of the attribute.
Illustratively, each first attribute feature includes an attribute feature a described for attribute a, an attribute feature B1 described for attribute B, and an attribute feature C1 described for attribute C. Each of the second attribute features includes an attribute feature D described for attribute D, an attribute feature B2 described for attribute B, and an attribute feature C2 described for attribute C. Each third attribute feature includes an attribute feature E described for attribute E and an attribute feature B3 described for attribute B. After the three features are combined, the obtained fusion feature comprises the following attribute features: attribute feature a, attribute feature b1, attribute feature c1, attribute feature d, attribute feature b2, attribute feature c2, attribute feature e, and attribute feature b3.
Because attribute feature b1, attribute feature b2 and attribute feature b3 in the fusion feature all describe the same attribute B, and the first attribute feature b1 has a lower priority than the second attribute feature b2, which in turn has a lower priority than the third attribute feature b3, the attribute feature for attribute B among the fourth attribute features is attribute feature b3. Similarly, the attribute feature for attribute C among the fourth attribute features is attribute feature c2.
Moreover, attribute feature a is the only attribute feature in the fusion feature for attribute A, so the attribute feature for attribute A among the fourth attribute features is attribute feature a. Similarly, the attribute feature for attribute D is attribute feature d, and the attribute feature for attribute E is attribute feature e.
Based on the above, each of the fourth attribute features includes attribute feature a, attribute feature b3, attribute feature c2, attribute feature d, and attribute feature e.
Based on this feature fusion mode, the attribute feature with the highest priority is taken as the fourth attribute feature, so that the fourth attribute features do not include attribute features with low priority, i.e., low accuracy. This ensures the accuracy of the fourth attribute features and thus improves the accuracy of generating the target document.
S564, the server determines the fusion feature as each fourth attribute feature.
When each attribute in the fusion feature corresponds to one attribute feature, the fusion feature constitutes the fourth attribute features. In this feature fusion mode, when a plurality of attribute features exist in the fusion feature for any one attribute, the attribute feature with the highest priority among them is determined as the fourth attribute feature; when only one attribute feature exists in the fusion feature for an attribute, that sole attribute feature is determined as the fourth attribute feature. In this way, the fourth attribute feature corresponding to any attribute can be determined in one step, which speeds up the acquisition of the fourth attribute features.
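The priority-based fusion S561 to S564 can be sketched as follows, reproducing the worked example above. The function name and dict representation are illustrative assumptions.

```python
def fuse_by_priority(feature_sets):
    """S561: merge all attribute-feature sets into one fusion feature.
    S562/S563: if several features describe the same attribute, keep the
    one from the highest-priority set. S564: an attribute with a single
    feature keeps that feature as-is."""
    fused = {}
    for features in feature_sets:  # iterate in ascending priority order
        fused.update(features)     # higher priority overwrites lower
    return fused

# Lower list index = lower priority: first < second < third attribute features.
first  = {"A": "a", "B": "b1", "C": "c1"}
second = {"D": "d", "B": "b2", "C": "c2"}
third  = {"E": "e", "B": "b3"}
print(fuse_by_priority([first, second, third]))
# → {'A': 'a', 'B': 'b3', 'C': 'c2', 'D': 'd', 'E': 'e'}
```

As in the example, attribute B resolves to b3 (third priority), attribute C to c2 (second priority), and the uniquely described attributes A, D and E keep their sole features.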
As a way of generating the target document, the above S57 may be specifically realized by the following steps S571 to S573.
S571, the server acquires, from the preset document template, the preset entity corresponding to the target material and the preset attribute features related to the preset entity.
In some embodiments, a preset document corresponding to the target material is obtained from a preset document template. And the server analyzes the preset document to obtain a preset entity and preset attribute characteristics.
S572, the server corrects the preset entity by using the target entity.
The server determines whether the preset entity is consistent with the target entity. If so, the preset entity is retained; if not, the preset entity is corrected into the target entity. In other words, for the scenario in which the entity described in the preset document is erroneous, the erroneous entity is corrected into the correct target entity.
Illustratively, if the preset entity is A and the target entity is B, A is revised to B; if the preset entity is A and the target entity is A, A is retained.
The original target material is provided by the user for the target entity and is associated with it, so the target entity extracted from the original target material is accurate. Based on this, the server judges the accuracy of the preset entity against the target entity and, when the preset entity is judged inaccurate, corrects it into the target entity, thereby ensuring the accuracy of the entity included in the target document.
S573, the server corrects each preset attribute feature corresponding to the second attribute of the preset entity according to each fourth attribute feature corresponding to the first attribute of the target entity, and obtains at least one target attribute feature corresponding to the first attribute.
In one embodiment, each preset attribute feature is replaced with the corresponding fourth attribute feature as a target attribute feature. Meanwhile, the server identifies the attribute features among the target attribute features that differ from the preset attribute features corresponding to at least one second attribute.
In another embodiment, in different application scenarios, for any one of the plurality of first attributes corresponding to the fourth attribute features, the manner in which the server corrects each preset attribute feature using each fourth attribute feature includes the following three cases.
First, when the first attribute is the same as the second attribute and the fourth attribute feature is the same as the preset attribute feature, the fourth attribute feature or the preset attribute feature is determined as the target attribute feature.
This is based on the case where the fourth attribute feature and the preset attribute feature describe the same attribute consistently: the correct preset attribute feature consistent with the fourth attribute feature is retained as the target attribute feature.
Second, when the first attribute is the same as the second attribute and the fourth attribute feature is different from the preset attribute feature, the fourth attribute feature is determined as the target attribute feature.
This is based on the case where the fourth attribute feature and the preset attribute feature describe the same attribute inconsistently: the erroneous preset attribute feature inconsistent with the fourth attribute feature is corrected into the fourth attribute feature, i.e., the fourth attribute feature is the target attribute feature.
Third, when the first attribute is different from the second attribute, the fourth attribute feature is determined as the target attribute feature.
The first attribute and the second attribute differ in the following two cases: a second attribute that is not included in the first attributes corresponding to the fourth attribute features, and/or a first attribute that is not included in the second attributes corresponding to the preset attribute features. That is, the second attributes corresponding to the preset entity include an attribute outside the first attributes corresponding to the fourth attribute features, and/or the first attributes corresponding to the fourth attribute features include an attribute outside the second attributes corresponding to the preset entity.
For example, the first attributes corresponding to the fourth attribute features are A, B, C and D, and the second attributes corresponding to the preset attribute features are A, B and E. For attribute A, the first attribute is the same as the second attribute; for attribute B, the first attribute is the same as the second attribute; for attribute C, the first attribute is different from the second attribute; for attribute D, the first attribute is different from the second attribute.
As another example, the first attributes corresponding to the fourth attribute features are A, B and D, and the second attributes corresponding to the preset attribute features are A and B. For attribute A, the first attribute is the same as the second attribute; for attribute B, the first attribute is the same as the second attribute; for attribute D, the first attribute is different from the second attribute. The fourth attribute features are attribute features that describe the target entity accurately, whereas preset attribute features whose second attribute differs from every first attribute describe the target entity with low accuracy.
Specifically, the following description is made for the correction method of different attribute cases:
(1) And determining the first attribute which is not included in each second attribute corresponding to each preset attribute characteristic as a first candidate attribute, and determining the second attribute which is not included in each first attribute corresponding to each fourth attribute characteristic as a second candidate attribute.
(2) And correcting the preset attribute characteristics corresponding to the second candidate attributes into fourth attribute characteristics corresponding to the first candidate attributes.
In practical application, if the server detects that the second candidate attribute does not exist and the first candidate attribute exists, determining the fourth attribute features corresponding to the first candidate attribute as target attribute features. If the server detects that the first candidate attribute does not exist and the second candidate attribute exists, deleting the preset attribute feature corresponding to the second candidate attribute, namely, generating no target attribute feature under the condition.
For example, suppose the preset attribute features include an attribute feature a1 of attribute A and an attribute feature H1 of attribute H, and the fourth attribute features include an attribute feature a1 of attribute A, an attribute feature E1 of attribute E and an attribute feature F1 of attribute F. For attribute A, attribute feature a1 (the preset attribute feature) is the same as attribute feature a1 (the fourth attribute feature), so the target attribute feature for attribute A is a1. The second attributes do not include attribute E, but the fourth attribute feature E1 exists for it, so the target attribute feature is E1; likewise, the second attributes do not include attribute F, the fourth attribute feature F1 exists for it, and the target attribute feature is F1. The first attributes do not include attribute H, for which only the preset attribute feature H1 exists, and in this case no target attribute feature exists.
In the correction mode for the case where the first attribute differs from the second attribute, the high-accuracy fourth attribute features replace the low-accuracy preset attribute features: a fourth attribute feature whose first attribute is absent from the preset attribute features is supplemented as a target attribute feature, and the low-accuracy preset attribute features are eliminated, so that the target attribute features include no low-accuracy attribute features.
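The three correction types above, together with the candidate-attribute handling in (1) and (2), can be sketched as a merge keyed by attribute. This is a minimal illustration, not the patent's implementation; `correct_preset_features` is a hypothetical name, and representing each source as a dict mapping attribute to feature is an assumption:

```python
# Hypothetical sketch: fourth attribute features (keyed by first attribute)
# override and supplement preset attribute features (keyed by second attribute).
def correct_preset_features(fourth, preset):
    target = {}
    for attr, feat in fourth.items():
        # Same attribute with same or different feature, and first candidate
        # attributes (fourth-only) all resolve to the fourth attribute feature.
        target[attr] = feat
    # Second candidate attributes (preset-only) are deleted: no entry is
    # emitted, so no target attribute feature is generated for them.
    return target
```

With `fourth = {A: a1, E: E1, F: F1}` and `preset = {A: a1, H: H1}`, the result keeps a1, supplements E1 and F1, and drops H1, matching the worked example.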
Through S573, the server corrects the preset attribute features using the fourth attribute features, which are accurate and reflect the user requirement, so that the determined target attribute features are more accurate and better meet the user requirement.
S574, the server generates a target document based on the target attribute characteristics corresponding to the at least one first attribute.
The server takes the target attribute characteristics corresponding to each first attribute as the basis for generating the target document, so that the generated target document can describe the attribute information of the target entity more accurately and reflect the user requirements.
The S574 may include: S574A to S574C.
S574A, obtaining weights corresponding to the target attribute features corresponding to at least one first attribute.
Wherein the weight of each target attribute feature represents the degree of association of each target attribute feature with the target entity. That is, the weight is used to indicate the likelihood that the target document includes the target attribute feature.
The weight of the target attribute feature is determined based on the weight of the fourth attribute feature. For the same first attribute, the weight of the target attribute feature is the weight of the fourth attribute feature.
In one embodiment, the step S574A includes steps S574A-1 to S574A-5.
S574A-1, the server determines a first weight according to the frequency of the descriptors associated with the first attribute features included in the target material.
In the embodiment of the present application, the first weight is determined in the following manners S1-A1 and S1-A2; this process is illustrated by the schematic diagram of the material-fact formation process shown in fig. 9.
S1-A1, the server extracts first word weights corresponding to a plurality of adjectives from the target material in a word frequency-inverse text frequency index (term frequency-inverse document frequency, TF-IDF) mode.
TF-IDF is a common weighting technique used for information retrieval and data mining. Where TF is word frequency and IDF is the inverse text frequency index.
The first word weight characterizes the importance of the different adjectives to the entity description, i.e. also indicates the degree of association of the adjectives with the entity.
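The TF-IDF scoring in S1-A1 can be sketched as follows. This is a minimal stdlib-only illustration, assuming the common tf × log(N/df) formulation (the patent does not specify which TF-IDF variant it uses); the function and corpus names are hypothetical:

```python
import math
from collections import Counter

# Minimal TF-IDF sketch: score each token of the target material against a
# corpus given as a list of token lists.
def tfidf_weights(doc_tokens, corpus):
    n = len(corpus)
    tf = Counter(doc_tokens)
    weights = {}
    for word, count in tf.items():
        df = sum(1 for doc in corpus if word in doc)   # document frequency
        idf = math.log(n / df) if df else 0.0           # inverse text frequency
        weights[word] = (count / len(doc_tokens)) * idf
    return weights
```

An adjective that appears in the target material but is rare in the corpus receives a high first word weight; one that appears everywhere scores zero.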
S1-A2, the server selects the first weight of each first attribute feature from the first word weights corresponding to the adjectives.
Further, S1-A3, each first attribute feature and the first weight of each first attribute feature are associated with the target entity, and the material facts are obtained.
In some embodiments, the server associates each first attribute feature, a first weight of each first attribute feature, with the target entity according to a first direction, and constructs a first directed weighted graph including the first direction and the first weight, the first directed weighted graph being referred to as a facts of the material.
The first direction refers to a direction in which the first attribute feature points to the target entity.
For example, take the target material as "a certain third-model watch, obsidian color matching, blood oxygen monitoring, supporting intelligent voice". Correspondingly, the target entity is the third-model watch; the first attribute features are the adjectives "blood oxygen monitoring", "intelligent voice" and "obsidian color matching", with first word weights of 0.8, 0.7 and 0.9 respectively. The material facts of the third-model watch, formed as the first directed weighted graph, may be as shown in fig. 9.
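As an illustration of the first directed weighted graph, the material facts can be represented as weighted edges pointing from each first attribute feature to the target entity (the first direction). `build_material_facts` is a hypothetical name, and the features and weights mirror the watch example above:

```python
# Sketch: material facts as a directed weighted graph, stored as a list of
# (first attribute feature, target entity, first weight) edges.
def build_material_facts(entity, feature_weights):
    return [(feature, entity, weight) for feature, weight in feature_weights.items()]

facts = build_material_facts(
    "third-model watch",
    {"blood oxygen monitoring": 0.8, "intelligent voice": 0.7, "obsidian": 0.9},
)
```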
S574A-2, the server determines a second weight according to the frequency of occurrence of the descriptive words associated with the second attribute features in the preset knowledge base.
In some embodiments, the second weight is determined using the manners in S2-A1 to S2-A3; this process is illustrated by the schematic diagram of the common-sense formation process shown in fig. 10.
S2-A1, the server performs word segmentation processing and dependency relationship analysis on the historical materials related to the target entity and the webpage information related to the target entity in a preset knowledge base to obtain a second inter-word dependency relationship between the target entity and a plurality of adjectives.
S2-A2, the server extracts second word weights corresponding to a plurality of adjectives of the target entity from a preset knowledge base in a TF-IDF mode.
The second word weight is used to characterize the importance of the plurality of adjectives to the target entity description, i.e. also indicates the degree of association of the plurality of adjectives with the target entity.
S2-A3, the server determines, from the plurality of adjectives corresponding to the preset knowledge base, the adjectives corresponding to each first attribute as the second attribute features of that first attribute, and determines the second word weights of those adjectives as the second weights of the corresponding second attribute features.
Further, S2-A4, the server associates each second attribute feature and the second weight of each second attribute feature with the target entity to obtain the common sense of the target entity.
In some embodiments, the server associates each second attribute feature and the second weight of each second attribute feature with the target entity in a second direction, and constructs a second directed weighted graph comprising the second direction and the second weight. This second directed weighted graph may be referred to as the common sense.
The second direction refers to a direction in which the second attribute feature points to the target entity.
For example, take the target material as "a certain third-model watch, obsidian color matching, blood oxygen monitoring, supporting intelligent voice". The corresponding target entity is the third-model watch; the corresponding second attribute features are the adjectives "fall monitoring", "heart rate monitoring" and "sleep management", with second word weights of 0.8, 0.6 and 0.7 respectively. The common sense of the third-model watch, formed as the second directed weighted graph, may be as shown in fig. 10.
Therefore, the common sense supplements core information not mentioned in the material facts, such as fall monitoring, heart rate monitoring and sleep management, and provides a good supplement to the content of the target material.
Based on the material facts and common knowledge, the process of the server for correcting each first attribute feature by using each second attribute feature in S56A will be described as follows.
The server sequentially extracts each second attribute feature in the common sense and each first attribute feature in the material facts, and compares the attribute to which each feature belongs and the feature content. It then judges which of scene one to scene four each comparison result belongs to, and obtains the corresponding feature result according to the correction manner of the scene to which the comparison result belongs.
S574A-3, the server determines a third weight according to the attribute information.
Specifically, the attribute information input by the user includes a third weight of a third attribute feature.
In some embodiments, the third weight is determined using the manners in S3-A1 to S3-A3. This process is illustrated by the schematic diagram of the user control signal formation process shown in fig. 11.
S3-A1, analyzing the attribute control information to obtain first candidate weights of the third attribute features corresponding to the attribute control information.
Wherein, the first candidate weight is a positive number when the attribute control information belongs to the forward attribute characteristic information; the first candidate weight is negative when the attribute control information belongs to the negative attribute feature information.
In one method for obtaining the first candidate weight, after the attribute control information is parsed to obtain the corresponding third attribute feature, the user's first terminal device displays the third attribute feature together with a weight input box for the weight of that third attribute feature. In response to the user's confirmation operation on the weight input box, the first terminal device sends the entered weight value to the server, and the server determines that weight value as the first candidate weight.
S3-A2, analyzing the user tendency information to obtain the third attribute feature corresponding to the user tendency information and its second candidate weight.
Wherein the second candidate weight is a positive number when the user tendency information belongs to the forward attribute feature information; and the second candidate weight is negative when the user tendency information belongs to the negative attribute characteristic information.
The method for obtaining the second candidate weight may refer to the method for obtaining the first candidate weight in S3-A1, which is not described herein.
S3-A3, for any one of the first attributes corresponding to the attribute information, adding the first candidate weight and the second candidate weight, and taking the sum as the third weight.
For the same first attribute, when the third attribute features only correspond to the attribute control information, the weight of the third attribute features corresponding to the user tendency information is defaulted to 0.
In practical application, when the attribute control information and the user tendency information input by the user are for the same attribute, and user-entered weights exist for the third attribute features corresponding to both, the two weights are added, and the sum is the third weight of the third attribute feature for that attribute. If no user-entered weight exists for the third attribute features corresponding to the attribute control information and/or the user tendency information of that attribute, the third weight of the third attribute feature is set to 1.
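A minimal sketch of S3-A3 plus the defaults just described, reading them as: a missing candidate weight contributes 0 to the sum, and the third weight defaults to 1 only when neither candidate carries a user-entered weight (an interpretation of the "and/or" wording; the helper name is hypothetical):

```python
# Hypothetical helper: combine the two candidate weights into the third weight.
def third_weight(control_weight=None, tendency_weight=None):
    if control_weight is None and tendency_weight is None:
        return 1  # no user-entered weight exists for this attribute
    # A candidate without a user-entered weight contributes 0 (S3-A3 sum).
    return (control_weight or 0) + (tendency_weight or 0)
```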
In some embodiments, the server associates each third attribute feature, and a third weight of each third attribute feature, with the target entity according to a third direction, and constructs a third directed weighted graph including the third direction and the third weight, where the third directed weighted graph is referred to as a user control signal.
The third direction refers to a direction in which the third attribute feature points to the target entity.
By way of example, take the attribute control information as "blood oxygen monitoring" and the user tendency information as "male user" and "new product launch". From the attribute control information "blood oxygen monitoring", the extracted third attribute feature is "blood oxygen monitoring". Based on the user tendency information, the advertisement phrases extracted from the advertisement word stock are "necessary for mature men" and "shocking the market", and these are taken as third attribute features. The target entity is the third-model watch, the corresponding third attribute features are "necessary for mature men", "shocking the market" and "blood oxygen monitoring", and their weights are set to 1. The resulting user control signal, formed as the third directed weighted graph, may be as shown in fig. 11.
S574A-4, the server determines the weight of the fourth attribute feature according to the first weight of the first attribute feature, the second weight of the second attribute feature, and the third weight of the third attribute feature.
In some embodiments, the server associates each fourth attribute feature, the weight of each fourth attribute feature, and the target entity according to a fourth direction, and constructs a fourth directed weighted graph including the fourth direction and the weight of the fourth attribute feature, where the fourth directed weighted graph is referred to as a fused knowledge.
The fourth direction refers to the direction in which the fourth attribute feature points to the target entity.
Based on the fact materials, common sense and user control signals, the three are subjected to feature fusion, and the process of obtaining the fusion knowledge is as follows. This specific process is illustrated schematically in fig. 12 for a feature fusion process.
S211, extracting each first attribute feature and each first weight from the material facts, each second attribute feature and each second weight from the common sense, and each third attribute feature and each third weight from the user control signal.
S212, carrying out first feature fusion on each first attribute feature and each second attribute feature, and carrying out first weight fusion on each first weight and each second weight, so as to obtain each feature result and the weight of each feature result.
By way of example, referring to figs. 9 to 11, the material facts are (obsidian, 0.9), (blood oxygen monitoring, 0.8), (intelligent voice, 0.7), and the common sense is (fall monitoring, 0.8), (heart rate monitoring, 0.6), (sleep management, 0.7). Based on the common sense, no attribute feature in the material facts conflicts with the common sense and the material facts contain no erroneous information, i.e., neither features nor weights conflict, so the corrected material facts remain (obsidian, 0.9), (blood oxygen monitoring, 0.8), (intelligent voice, 0.7).
However, when the common sense supplements an attribute feature that does not exist in the material facts, the weight obtained by multiplying the second weight from the common sense by a scaling factor is taken as the weight of the supplemented attribute feature in the feature results. If the scaling factor is 0.5, the common-sense features added to the material facts are: (fall monitoring, 0.4), (heart rate monitoring, 0.3) and (sleep management, 0.35).
Wherein the weight of the fall monitoring is 0.4, namely 0.4 obtained by multiplying 0.8 by 0.5; the weight of heart rate monitoring is 0.3, i.e. 0.3 obtained by multiplying 0.6 by 0.5; the weight of sleep management is 0.35, i.e., 0.35 obtained by multiplying 0.7 by 0.5.
Finally, each feature result and its weight are (obsidian, 0.9), (blood oxygen monitoring, 0.8), (intelligent voice, 0.7), (fall monitoring, 0.4), (heart rate monitoring, 0.3) and (sleep management, 0.35).
The confidence of the second attribute features in the common sense is higher than that of the first attribute features in the material facts. Thus, in the first feature fusion process, when an attribute feature in the material facts conflicts with one in the common sense, the common-sense attribute feature prevails.
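The first feature fusion (S212) with the scaling-factor supplementation can be sketched as below. This is a simplification: on a conflict it simply keeps the common-sense weight, whereas the patent applies the mapping of equation (1) in scene one; the names and the dict representation are assumptions:

```python
# Sketch of the first fusion: common sense prevails on conflicts, and
# common-sense-only features are supplemented with a scaled-down weight.
def fuse_facts_and_common_sense(facts, common, scale=0.5):
    """facts/common: dicts mapping attribute feature -> weight."""
    result = dict(facts)
    for feature, weight in common.items():
        if feature in result:
            result[feature] = weight          # common sense wins the conflict
        else:
            result[feature] = weight * scale  # supplemented feature, scaled
    return result
```

With the figures above and a scaling factor of 0.5, fall monitoring enters at 0.8 × 0.5 = 0.4, heart rate monitoring at 0.3 and sleep management at 0.35.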
S213, performing second feature fusion on each third attribute feature and each feature result, and performing second weight fusion on the weight of each feature result and the weight of each third attribute feature to obtain a fourth attribute feature and the weight of the fourth attribute feature.
For example, referring to figs. 9 to 11, the user control signals are (blood oxygen monitoring, 1), (shocking the market, 1) and (necessary for mature men, 1). The weight of the user control signal "blood oxygen monitoring" is 1 while the weight of the feature result "blood oxygen monitoring" is 0.8; the two weights differ, i.e., conflict, and the fused weight of "blood oxygen monitoring" is 1.
The confidence of the third attribute features in the attribute information is higher than that of the second attribute features in the common sense. Therefore, in the second feature fusion process, when an attribute feature of a certain attribute in the material facts or the common sense conflicts with an attribute feature in the attribute information, the attribute feature in the attribute information prevails.
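The second feature fusion (S213) can be sketched as an override merge in which the user control signal always prevails; a minimal illustration with hypothetical names, assuming dicts of feature to weight:

```python
# Sketch of the second fusion: third attribute features and weights from the
# user control signal override the first-fusion feature results.
def fuse_with_user_control(feature_results, user_control):
    fused = dict(feature_results)
    fused.update(user_control)  # user-controlled entries win all conflicts
    return fused
```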
And S214, associating the fourth attribute features and the weights of the fourth attribute features with the target entity to obtain the fusion knowledge.
For example, take the target material as "a certain third-model watch, obsidian color matching, blood oxygen monitoring, supporting intelligent voice", the attribute control information as "blood oxygen monitoring", and the user tendency information as "male user" and "new product launch". The target entity is the third-model watch, and the fourth attribute features and their weights corresponding to the target entity are: (obsidian, 0.9), (blood oxygen monitoring, 1), (intelligent voice, 0.7), (fall monitoring, 0.4), (heart rate monitoring, 0.3), (sleep management, 0.35), (shocking the market, 1) and (necessary for mature men, 1). The resulting fused knowledge may be as shown in fig. 12.
Specifically, the determination process of the weight of the fourth attribute feature is described below in connection with the determination process of the fourth attribute feature in the above-described scenes one to eight.
In the first scene, when the first attribute feature and the second attribute feature are the same, the weight of the feature result is a target weight determined according to the first weight and the second weight.
As one method for determining the target weight in this scene, a mapping relationship exists among the first weight, the second weight and the target weight, and the weight to which the first weight and the second weight map under that relationship is determined as the target weight. The mapping relationship may be expressed as:

q = f(q1, q2)    equation (1)

In equation (1), q1 is the first weight, q2 is the second weight, and q is the target weight; the mapping f depends on a fact validity threshold θ (a constant) and a scaling factor α (a constant).
Based on the example in scene one, for attribute 1, if the first weight of the first attribute feature A1 is q1 and the second weight of the second attribute feature A2 is q2, the weight of the feature result is f(q1, q2).
Aiming at the condition that the first attribute feature and the second attribute feature are consistent in description of the same attribute and the first weight is different from the second weight, the weight of the feature result is determined by the first weight and the second weight together so as to ensure that the weight of the feature result considers the weight of two feature dimensions of the first attribute feature and the second attribute feature, thereby enabling the weight of the feature result to reflect the association degree of the feature result and the target entity in the scene more accurately.
In the second scene, when the first attribute feature and the second attribute feature are different, the weight of the feature result is the second weight.
Based on the example in scene two, for attribute 1, the first weight of the first attribute feature A1 is q1, the second weight of the second attribute feature A2 is q2, and then the weight of the feature result is q2.
Aiming at the condition that the first attribute feature and the second attribute feature are inconsistent with the same attribute description, the weight of the feature result is determined by the second weight, so that the weight of the feature result only reflects the association degree of the correct second attribute feature and the target entity, but does not reflect the association degree of the incorrect first attribute feature and the target entity, and the weight of the feature result can reflect the association degree of the feature result and the target entity in the scene more accurately.
In the third scene, when the first attribute has the first attribute feature and the second attribute feature does not exist, the weight of the feature result is the first weight.
For attribute 2, there is a first attribute feature A3, A3 described for attribute 2, weighted q3; and there is no second attribute feature described for attribute 2, then the weight of the feature result is the weight q3 of the first attribute feature A3.
When a first attribute feature describes a certain first attribute but no second attribute feature describing that first attribute exists, the feature result is a first attribute feature not included in the second attribute features. The feature result in this case cannot be a second attribute feature, so determining the weight of the feature result from the first weight in scene three better fits the scene.
In the fourth scene, when the first attribute does not exist in the first attribute and the second attribute exists in the second attribute, the weight of the feature result is the second weight.
For attribute 3, the first attribute features do not include the first attribute features described for attribute 3; and the second attribute feature includes a second attribute feature A4, A4 described for attribute 3 with a weight q4, then the weight of the feature result is the weight q4 of the second attribute feature A4.
When a second attribute feature describes a certain first attribute but no first attribute feature describing that first attribute exists, the feature result is a second attribute feature not included in the first attribute features. The feature result in this case cannot be a first attribute feature, so determining the weight of the feature result from the second weight in scene four better fits the scene.
In the fifth scenario, when the feature result is the same as the third attribute feature, the weight of the fourth attribute feature is the third weight.
Based on the example in scene five, the weight of attribute feature A5 is q5 and the weight of attribute feature A6 is q6. If attribute feature A5 is the same as attribute feature A6, then the fourth attribute feature has a weight of q6.
In practical application, when the third attribute feature is a forward attribute feature, the third weight of the third attribute feature is set to a number within a larger numerical range, for example, the weight of the forward attribute feature is 1. When the third attribute is a negative attribute, the third weight of the third attribute is set to a number within a smaller range of values, e.g., the weight of the negative attribute is-1.
For the case that the feature result is consistent with the description of the third attribute feature on the same first attribute and the weights are inconsistent, the weights of the fourth attribute feature are determined by the weights of the third attribute feature in order to reflect the control action of the user input attribute information through the third attribute feature.
In the sixth scenario, when the feature result and the third attribute feature are different, the weight of the fourth attribute feature is the third weight.
Based on the example in scene six, the attribute feature A5 has a weight q5 and the attribute feature A6 has a weight q6. If attribute feature A5 is not the same as attribute feature A6, then the weight of the fourth attribute feature is q6.
For the case that the feature result is inconsistent with the description of the same first attribute by the third attribute feature, in order to reflect the control effect of the user input attribute information through the third attribute feature, the weight of the fourth attribute feature is determined by the weight of the third attribute feature.
In scenario seven, when a feature result exists for the first attribute and no third attribute feature exists, the weight of the fourth attribute feature is the weight of the feature result.
Based on the example in scene seven, for attribute 4, the third attribute features include no attribute feature described for attribute 4, while the feature results include an attribute feature A7 described for attribute 4 with weight q7; the weight of the fourth attribute feature is then q7.
When a feature result describes a certain first attribute but no third attribute feature describing that first attribute exists, the fourth attribute feature is a feature result not included in the third attribute features. The fourth attribute feature in this case cannot be a third attribute feature, so determining the weight of the fourth attribute feature from the weight of the feature result in scene seven better fits the scene.
In scenario eight, when the third attribute feature exists for the first attribute and no feature result exists, the weight of the fourth attribute feature is the third weight.
Based on the example in scene eight, for attribute 5, the attribute feature described for attribute 5 is not included in each feature result, and the weight of attribute feature A8 described for attribute 5 is q8, and the weight of the fourth attribute feature is q8.
When a third attribute feature describes a certain first attribute but no feature result describing that first attribute exists, the fourth attribute feature is a third attribute feature not included in the feature results. The fourth attribute feature in this case cannot be a feature result, so determining the weight of the fourth attribute feature from the third weight in scene eight better fits the scene.
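Scenes one to eight can be consolidated into a single per-attribute weight-resolution rule; `resolve_weight` is a hypothetical sketch, and `f` is a stand-in for the unspecified mapping of equation (1) (max is used here purely for illustration):

```python
def resolve_weight(first, second, third, f=lambda w1, w2: max(w1, w2)):
    """first/second/third: optional (feature, weight) pairs for one attribute,
    from the material facts, the common sense and the user control signal."""
    if first and second:
        if first[0] == second[0]:
            result = (first[0], f(first[1], second[1]))  # scene one
        else:
            result = second                              # scene two
    elif first:
        result = first                                   # scene three
    else:
        result = second                                  # scene four (or nothing)
    # Scenes five to eight: the third attribute feature always takes precedence.
    return third or result
```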
S574A-5, the server determines the weight of the target attribute feature according to the weight of the fourth attribute feature.
For the same first attribute, the weight of the target attribute feature is the weight of the fourth attribute feature.
S574B, the server sorts all target attribute features by weight and obtains a preset number of target attribute features from the sorted result, from highest to lowest.
The preset number may be determined by the number of preset attribute features and/or the number of target attribute features. It may also be determined by the number of attributes that the attribute information input by the user indicates should be included in the target document. For example, the preset number may be the number of preset attribute features. As another example, the preset number may be the number of target attribute features whose weights are greater than a weight threshold. As another example, when the number of attributes included in the target document is 5, the preset number is also 5.
In this step, the target attribute features with larger weights are selected as the attribute features included in the target document.
S574C, generating a target document according to the preset number of target attribute features.
With this method, by adopting the target attribute features with a high degree of association, the resulting target document describes the target entity more accurately.
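Steps S574B and S574C, ranking by weight and keeping a preset number, can be sketched as below; the function name, the (feature, weight) pair shape, and the threshold handling are assumptions, not taken from the patent.

```python
def select_target_features(features, preset_count=None, weight_threshold=None):
    """Sort target attribute features by weight, high to low, and keep a
    preset number of them (S574B).

    The preset number may be given directly, or derived from a weight
    threshold as one of the determinations described above.
    """
    ranked = sorted(features, key=lambda fw: fw[1], reverse=True)
    if weight_threshold is not None:
        # preset number = count of features whose weight exceeds the threshold
        preset_count = sum(1 for _, w in ranked if w > weight_threshold)
    if preset_count is None:
        preset_count = len(ranked)
    # the kept features are the ones used to generate the target document (S574C)
    return [f for f, _ in ranked[:preset_count]]
```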
In another document generation manner, as shown in fig. 13, taking the case where the preset document template is a document generation model plus a document rewriting model as an example, the specific implementation of S57 may include S57A, S57F and S57G, where the document rewriting model is coupled to the document output end of the document generation model.
S57A, the server inputs the target material into a document generation model to obtain a preset document.
The document generation model is a model obtained by training multiple times using a plurality of sample materials and a plurality of target sample documents. One sample material corresponds to one target sample document. The sample material is the input of the document generation model, and the corresponding target sample document is its output.
Based on the document generation model, the server can intelligently convert the target material into the preset document, so that the preset document can be acquired quickly. Moreover, because the document generation model is trained on a large number of target sample documents, its generation paradigm is quite rich, which ensures the richness and diversity of the generated preset documents, and in turn the richness and diversity of the target documents obtained after the preset documents are corrected.
Optionally, S57B, the server sends the preset document to the first terminal device.
Optionally, S57C, the first terminal device displays a preset document to the user, and simultaneously displays a "document rewrite control".
For example, the document rewrite control is used to modify a preset document into a target document. If the user is not satisfied with the currently displayed preset document, the user triggers the "document rewrite control" on the first terminal device.
Optionally, in S57D, the first terminal device sends a document rewrite request to the server in response to a trigger operation of the user on the "document rewrite control".
Optionally, S57E, the server receives the document rewrite request.
S57F, the server acquires each fourth attribute feature and each target entity corresponding to the target material, and associates the target entity with each fourth attribute feature to obtain the association relationship between the target entity and each fourth attribute feature.
The association relationship may be an association graph formed by the associated target entity and each fourth attribute feature; it may also be the knowledge-fusion association graph shown in fig. 12, formed by the target entity, each fourth attribute feature, and the weight of each fourth attribute feature.
S57G, the server inputs the association relation and the preset document into the document rewriting model to call the document rewriting model to correct the preset entity and the preset attribute characteristic in the preset document by adopting the association relation, so as to obtain the target document.
The server corrects the preset entity in the preset document using the target entity in the association relationship, ensuring the accuracy of the target entity described in the target document, and corrects the preset attribute features in the preset document using the fourth attribute features in the association relationship, ensuring that the attributes of the target entity are described accurately. The document rewriting model rewrites the preset document in one step; the logic is simple and the rewriting process is fast, so the target document can be acquired quickly.
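A string-level sketch of S57F/S57G, building the association relationship and using it to correct the preset document, is shown below. The real correction is performed by the document rewriting model; the dictionary shape and the naive `str.replace` correction are illustrative assumptions only.

```python
def build_association(target_entity, fourth_features):
    """Associate the target entity with each fourth attribute feature (S57F).

    fourth_features: list of (feature, weight) pairs; the returned mapping is
    one possible serialization of the association graph.
    """
    return {"entity": target_entity,
            "features": [{"feature": f, "weight": w} for f, w in fourth_features]}


def correct_entity(preset_document, preset_entity, association):
    """Correct the preset entity using the association's target entity (part
    of S57G); a rewrite model would do this jointly with feature correction."""
    return preset_document.replace(preset_entity, association["entity"])
```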
In some embodiments, the above-mentioned document rewriting model is obtained by training the pre-training rewriting model multiple times by taking multiple sets of sample documents as training samples and taking the first sample entity of each set of sample documents and the corresponding first sample attribute features as constraints for rewriting the sample documents.
Wherein a set of sample documents includes a positive sample document and a negative sample document. The quality of the negative sample document is lower than that of the positive sample document. The positive sample document is a document describing the first sample entity based on the respective first sample attribute features.
Illustratively, the negative sample document and the corresponding first sample attribute features and associated first sample entity are input to the pre-training rewrite model. The negative sample document is rewritten according to each first sample attribute feature and the associated first sample entity; correspondingly, the pre-training rewrite model outputs the positive sample document.
The quality of the document is positively correlated with the accuracy of the description of the corresponding entity with respect to each attribute feature included in the document, and/or the quality of the document is positively correlated with the degree of association of each attribute feature included in the document with the entity.
As one way of obtaining the sample documents, the server first determines the positive sample document, and then obtains the negative sample document by modifying the positive sample document.
In practice, a historical document of high quality is used as the positive sample document, and each first sample attribute feature is extracted from it.
The server modifies the positive sample document by one or more of the following modifications to obtain the negative sample document.
In a first mode, the first sample entity of the positive sample document is replaced with a second sample entity.
After the first sample entity of the positive sample document is replaced by the second sample entity, the entity included in the obtained negative sample document is the wrong second sample entity.
In a second mode, each first sample attribute feature of the positive sample document is modified to each second sample attribute feature.
The respective second sample attribute feature comprises an attribute feature other than the respective first sample attribute feature and/or the respective first sample attribute feature comprises an attribute feature other than the respective second sample attribute feature.
The server obtaining each second sample attribute feature includes one or more of:
(1) At least one of the first sample attribute features of the positive sample document is deleted such that the resulting negative sample document does not include the deleted at least one first sample attribute feature.
(2) At least one of the first sample attribute features of the positive sample document is modified to an attribute feature other than the respective first sample attribute feature such that the resulting negative sample document includes the attribute feature other than the respective first sample attribute feature.
(3) Attribute features other than the respective first sample attribute features are added to the positive sample document such that the resulting negative sample document includes added attribute features other than the respective first sample attribute features.
(4) The word order of the descriptors of each first sample attribute feature in the positive sample document is modified.
The four ways of obtaining negative samples described above may also be referred to as negative sampling. Through negative sampling, the positive sample document serves directly as the source of negative sample documents, and the obtained negative sample documents serve as training corpus for the document rewriting model, which addresses the problems that training corpora for generative models are difficult to acquire and costly to use.
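The four negative-sampling modifications can be sketched as a single helper. The string-level edits and parameter names are assumptions for illustration; a production pipeline would operate on the parsed sample materials rather than raw text.

```python
def negative_sample(positive_doc, entity, features, mode,
                    wrong_entity=None, wrong_feature=None):
    """Derive a negative sample document from a positive one.

    mode selects one of the four modifications described above:
      swap_entity - replace the first sample entity with a second sample entity
      delete      - drop one first sample attribute feature
      replace     - swap one attribute feature for a foreign one
      add         - append a foreign attribute feature
    """
    if mode == "swap_entity":
        return positive_doc.replace(entity, wrong_entity)
    if mode == "delete":
        return positive_doc.replace(features[0], "")
    if mode == "replace":
        return positive_doc.replace(features[0], wrong_feature)
    if mode == "add":
        return positive_doc + " " + wrong_feature
    raise ValueError(f"unknown mode: {mode}")
```

For instance, applying the `replace` mode to a positive document mentioning "blood oxygen monitoring" with the foreign feature "blood pressure measurement" yields exactly the kind of low-quality document constructed in S142 below.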
In some embodiments, the pre-training rewrite model and the document rewriting model have non-autoregressive model structures. The input of the pre-training rewrite model is a negative sample document and its output is a positive sample document; the input of the document rewriting model is a preset document and its output is a target document. In both cases the input and the output are documents containing correspondences between entities and attribute features, so the inputs and outputs of the two models are aligned. A non-autoregressive model structure is suitable for models whose input and output are aligned, and unsuitable for models where they are not. Moreover, compared with an autoregressive structure, a non-autoregressive structure, constrained to stay aligned with the input, does not overfit to the already-decoded document content during rewriting. This avoids the problem of autoregressive decoding, where overfitting alters the original material, omits its key features from the output document, and adds wrong attribute features. Setting the pre-training rewrite model and the document rewriting model to non-autoregressive structures therefore makes the outputs of both models more accurate and ensures the accuracy of the target document.
When decoding the target material, a model with an autoregressive structure may overfit to the target material, introducing irrelevant material variables in place of important original ones, so that the output preset document cannot accurately describe the target entity, or even includes an incorrect entity; that is, the document generation model hallucinates. To address this, the erroneous preset attribute features and preset entity in the preset document are corrected according to the fourth attribute features and the target entity, so that the generated target document includes more accurate attribute features and entities.
Correspondingly, during rewriting, the entity in the input document is aligned and compared with the entity in the constraint, and the attribute features in the input document are aligned and compared with the attribute features in the constraint, so that the document can be modified.
Meanwhile, both the negative sample document and the constraint input to the pre-training rewrite model derive from the positive sample document that the model outputs, so the training of the pre-training rewrite model is a self-supervised learning process of regressing to the positive sample document, which improves the controllability of the process from training to obtaining the document rewriting model.
In some embodiments, the backbone of the pre-training rewrite model is set to a 3-layer BERT structure and its hidden size is set to 512, constructing a lightweight pre-training rewrite model structure that speeds up training so the document rewriting model can be acquired quickly.
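The lightweight configuration named here (3 BERT layers, hidden size 512) might be captured as below. Only the layer count and hidden size come from the text; the head count and the other field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class RewriteModelConfig:
    """Lightweight non-autoregressive rewrite-model settings (a sketch)."""
    num_hidden_layers: int = 3     # 3-layer BERT backbone, per the text
    hidden_size: int = 512         # hidden size 512, per the text
    num_attention_heads: int = 8   # assumed; must divide hidden_size evenly
    autoregressive: bool = False   # non-autoregressive structure

    def head_dim(self) -> int:
        # per-head dimension of the attention layers
        return self.hidden_size // self.num_attention_heads
```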
In a specific implementation, S57A is an optional step. In case a preset document already exists in the preset document template, this step may be omitted.
In some embodiments, the preset document acquired by the server may also be a preset document sent by the first terminal device received by the server. Specifically, the user performs a selection operation on the interface of the first terminal device on the document in the preset document template, and the first terminal device responds to the selection operation and sends the document selected by the user to the server.
Meanwhile, S57B to S57E described above are optional steps; whether to execute them may be determined according to the specific application scenario.
The server directly calls the trained document rewriting model to correct the preset document, so that the target document can be obtained more quickly and more accurately.
The following describes a construction process of the document rewriting model and a rewriting process of a preset document by the document rewriting model, with reference to a specific example, as shown in fig. 14.
S141, acquiring negative sample materials for constructing a negative sample document according to the positive sample materials of the positive sample document.
Specifically, positive sample material is extracted from the positive sample document, and negative sample material is constructed based on the positive sample material.
The positive sample material is the fusion knowledge of the entity and the attribute feature obtained by analyzing the positive sample document.
In some embodiments, the positive sample document may be a document that was historically placed by the advertiser.
Illustratively, the advertiser's historical document is: a certain third-model watch supports two-week battery life, a brand-new obsidian color scheme, and a brand-new upgraded heart-rate monitoring technology with newly added plateau blood-oxygen monitoring, bringing the brand-new "healthy clover" healthy-life model and helping users actively manage a healthy life.
Positive sample materials such as the entity word "a certain third-model watch" and the attribute words "obsidian, heart rate monitoring, blood oxygen monitoring, fall monitoring, sleep management" are extracted from the advertiser's historically published documents and used as the source of negative sample materials for subsequent negative sample documents. For example, negative sample materials related to the "certain third-model watch" entity are "a certain second-model watch", "a certain fourth-model watch", "a certain fifth-model watch", and the like. Negative sample materials related to its attribute words are "starry blue", "ceramic white", "blood pressure measurement", and the like.
S142, constructing a negative sample document according to the negative sample material.
The negative sample document may be understood as a low-quality document.
Based on the negative sample materials exemplified in step S141, the obtained negative sample document is: a certain third-model watch supports four-week battery life, a brand-new starry blue color scheme, and a brand-new heart-rate monitoring technology with added blood pressure measurement, bringing the brand-new "healthy clover" healthy-life model and helping users actively manage a healthy life.
S143, constructing a structure of the pre-training rewrite model to obtain the pre-training rewrite model.
For example, the pre-training rewrite model may be constructed as a lightweight, non-autoregressive model structure.
S144, training the pre-training rewrite model with the negative sample document as the input corpus, the labeled positive sample materials as decoding constraints, and the positive sample document as the training target of the output document, to obtain the document rewriting model.
During training, the input of the pre-training rewrite model is a negative sample document; when the pre-training rewrite model decodes the negative sample document, the document it finally outputs is the positive sample document.
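One training example for S144 then bundles the three roles above; the dictionary keys are illustrative, not an API from the patent.

```python
def make_training_example(negative_doc, labeled_positive_materials, positive_doc):
    """Assemble one S144 training example: the negative sample document is the
    input corpus, the labeled positive sample materials are the decoding
    constraint, and the positive sample document is the training target."""
    return {
        "input": negative_doc,
        "constraint": labeled_positive_materials,
        "target": positive_doc,
    }
```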
S145, obtaining a low-quality preset document and a target material corresponding to the low-quality preset document.
The low-quality preset document is: a certain third-model watch adopts a classic black dial design, supports intelligent voice control and blood pressure measurement, and protects and escorts your health.
S146, generating fusion knowledge of the preset document according to the target material, the preset knowledge base and the attribute information.
The method for generating the fusion knowledge of the preset document in step S146 is the same as that of the above embodiment, and will not be described here again.
Take the target material "a certain third-model watch, obsidian color scheme, blood oxygen monitoring, intelligent voice support" as an example. Inputting this target material into the document generation model yields the low-quality preset document "a certain third-model watch adopts a classic black dial design, supports intelligent voice control and blood pressure measurement, and protects and escorts your health". Combining the examples in fig. 9 to fig. 12, according to the preset document's target material, the preset knowledge base and the user control signal, the fusion knowledge corresponding to the preset document is obtained and is consistent with the fusion knowledge in fig. 12, specifically: (obsidian, 0.9), (blood oxygen monitoring, 1), (intelligent voice, 0.7), (fall monitoring, 0.4), (heart rate monitoring, 0.3), (sleep management, 0.35), (shocked to market, 1), (the top choice for mature men, 1).
Optionally, S147, marking the fusion knowledge corresponding to the preset document.
Illustratively, the fusion knowledge is labeled by tokenizing it, e.g. (obsi, 0.9), (dian, 0.9) and so on, where the word-segmentation granularity of the labeled fusion knowledge may be at the single-character (sub-word) level.
S148, inputting the low-quality preset document and the marked fusion knowledge into a document rewriting model to obtain the high-quality target document.
Based on the example in fig. 15, when decoding, the document rewriting model uses the tokenized fusion knowledge, (obsi, 0.9), (dian, 0.9) and so on, as a decoding constraint; this labeled fusion knowledge suppresses the erroneous "blood pressure measurement" information in the preset document and adds the user control signals "blood oxygen monitoring", "shocked to market" and "the top choice for mature men".
The preset document is therefore rewritten into the target document: "a certain third-model watch adopts a classic black dial design, the top choice for mature men, supports intelligent voice control and blood oxygen monitoring, protects and escorts your health, shocked to market."
Based on this example, the obtained target document not only retains the correct attribute features of the preset document, such as the classic black dial design, intelligent voice control support, and protecting and escorting your health, but also removes the erroneous "blood pressure measurement" and adds the functional feature "blood oxygen monitoring". From the rewriting result, based on the extracted material facts and the generalized common knowledge, the erroneous attribute features mistakenly added to the preset document can be corrected (e.g., deleting "blood pressure measurement") and the attribute features missing from it can be supplemented (e.g., adding "blood oxygen monitoring"); and based on the user control signal, the third attribute features "the top choice for mature men" and "shocked to market" are added to the target document as advertising phrases, meeting the user's requirements.
In some embodiments, when the document rewriting model is decoding, the labeled fusion knowledge serves as a decoding constraint: positive knowledge in the fusion knowledge (i.e., attribute features whose weights are higher than the weight threshold) promotes decoding, and negative knowledge (i.e., attribute features whose weights are less than or equal to the weight threshold) suppresses decoding, so that the generated target document satisfies the fusion knowledge.
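Splitting labeled fusion knowledge into a decoding-promoting set and a decoding-suppressing set by a weight threshold can be sketched as follows; the threshold value 0.5 and the function name are assumptions.

```python
def split_fusion_knowledge(fusion_knowledge, weight_threshold=0.5):
    """Split labeled fusion knowledge into positive knowledge (weights above
    the threshold, promotes decoding) and negative knowledge (weights at or
    below the threshold, suppresses decoding)."""
    positive = [f for f, w in fusion_knowledge if w > weight_threshold]
    negative = [f for f, w in fusion_knowledge if w <= weight_threshold]
    return positive, negative
```

With the fig. 12 example weights, "obsidian" (0.9) and "blood oxygen monitoring" (1) would promote decoding while "fall monitoring" (0.4) and "heart rate monitoring" (0.3) would suppress it.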
In some aspects, the various embodiments of the present application may be combined, and the combined aspects implemented. Optionally, some operations in the flows of the method embodiments may be combined, and/or the order of some operations may be changed. The execution order of the steps in each flow is merely exemplary and does not constitute a limitation; other execution orders are possible between the steps, and the described order is not the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described in the embodiments of the present application. In addition, details of the processes involved in one embodiment apply in a similar manner to other embodiments, and different embodiments may be used in combination.
Moreover, some steps in method embodiments may be equivalently replaced with other possible steps. Alternatively, some steps in method embodiments may be optional and may be deleted in some usage scenarios. Alternatively, other possible steps may be added to the method embodiments.
Moreover, the method embodiments may be implemented alone or in combination.
It will be appreciated that, in order to achieve the above-described functionality, the electronic device described above comprises corresponding hardware and/or software modules that perform the respective functionality. The steps of an algorithm for each example described in connection with the embodiments disclosed herein may be embodied in hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation is not to be considered as outside the scope of this application.
The present embodiment may divide the functional modules of the electronic device according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules described above may be implemented in hardware. It should be noted that, in this embodiment, the division of the modules is schematic, only one logic function is divided, and another division manner may be implemented in actual implementation.
Embodiments of the present application also provide an electronic device, as shown in fig. 16, which may include one or more processors 1001, memory 1002, and a communication interface 1003.
Wherein a memory 1002, a communication interface 1003, and a processor 1001 are coupled. For example, the memory 1002, the communication interface 1003, and the processor 1001 may be coupled together by a bus 1004.
Wherein the communication interface 1003 is used for data transmission with other devices. The memory 1002 has computer program code stored therein. The computer program code comprises computer instructions which, when executed by the processor 1001, cause the electronic device to perform the document rewriting method in the embodiments of the present application.
The processor 1001 may be a processor or a controller, for example, a central processing unit (Central Processing Unit, CPU), a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an Application-specific integrated circuit (ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that performs the function of a computation, e.g., a combination comprising one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
The bus 1004 may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The bus 1004 may be classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 16, but this does not mean there is only one bus or only one type of bus.
The present application also provides a computer readable storage medium having computer program code stored therein, which when executed by the above processor, causes the electronic device to perform the relevant method steps of the method embodiments described above.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the relevant method steps of the method embodiments described above.
The electronic device, the computer storage medium or the computer program product provided in the present application are configured to perform the corresponding methods provided above, and therefore, the advantages achieved by the electronic device, the computer storage medium or the computer program product may refer to the advantages of the corresponding methods provided above, which are not described herein.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on this understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, where the software product includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (12)
1. A document generation method, comprising:
acquiring target materials for generating a document and attribute information input by a user; the attribute information is used for indicating the information of positive attribute characteristics included in the generated document and/or the information of negative attribute characteristics included in the generated document;
extracting a target entity and a first attribute feature corresponding to the target entity from a target material, acquiring a second attribute feature corresponding to the target entity from a preset knowledge base, and extracting a third attribute feature corresponding to the target entity from the attribute information;
obtaining a fourth attribute feature corresponding to the target entity according to the first attribute feature, the second attribute feature and the third attribute feature; the second attribute feature is used for correcting the first attribute feature, the first attribute feature is corrected to obtain a feature result, the third attribute feature is used for correcting the feature result, and the feature result is corrected to obtain the fourth attribute feature;
Generating a target document according to the fourth attribute characteristics, the target entity and a preset document template corresponding to the target material;
the generating a target document according to the fourth attribute feature, the target entity and the preset document template corresponding to the target material includes:
acquiring a preset entity in the preset document template and preset attribute features related to the preset entity; the preset entity comprises at least one second attribute; the preset attribute features comprise descriptive words and/or descriptive sentences for describing the attributes of the preset entity;
correcting the preset entity by using the target entity, and correcting the preset attribute feature corresponding to the at least one second attribute of the preset entity according to the fourth attribute feature corresponding to the at least one first attribute of the target entity to obtain a target attribute feature corresponding to the at least one first attribute;
and generating a target document based on the target attribute characteristics corresponding to the at least one first attribute.
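The template-filling step at the end of claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the placeholder syntax (`{entity}`, `{features}`) and the function name are hypothetical, standing in for however the preset document template marks its preset entity and preset attribute features.

```python
def fill_template(template, target_entity, target_features):
    # Replace the preset-entity slot with the target entity, and the
    # preset-attribute-feature slot with the corrected (fourth) attribute
    # features, yielding the target document.
    doc = template.replace("{entity}", target_entity)
    doc = doc.replace("{features}", ", ".join(target_features))
    return doc
```

For example, filling `"Buy {entity}: {features}."` with entity `"Phone X"` and features `["ultra-thin", "long battery life"]` yields `"Buy Phone X: ultra-thin, long battery life."`.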
2. The method of claim 1, wherein the target entity comprises at least one first attribute; the obtaining a fourth attribute feature corresponding to the target entity according to the first attribute feature, the second attribute feature and the third attribute feature includes:
for any first attribute of the target entity,
correcting the first attribute feature for describing the first attribute by using the second attribute feature for describing the first attribute to obtain a corrected feature result for describing the first attribute;
and obtaining a fourth attribute feature for describing the first attribute according to the feature result and the third attribute feature.
3. The method according to claim 2, wherein said modifying the first attribute feature describing the first attribute with the second attribute feature describing the first attribute to obtain a modified feature result describing the first attribute includes:
determining the first attribute feature or the second attribute feature as the feature result when the first attribute feature and the second attribute feature are the same;
determining the second attribute feature as the feature result when the first attribute feature and the second attribute feature are different;
determining the first attribute feature as the feature result when the first attribute feature exists for the first attribute and the second attribute feature does not exist;
and determining the second attribute feature as the feature result when the first attribute feature does not exist for the first attribute and the second attribute feature exists.
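The four cases of claim 3 collapse to one precedence rule: the second (knowledge-base) attribute feature wins whenever it exists. A minimal sketch, with `None` standing for a feature that does not exist and a hypothetical function name:

```python
def merge_first_second(first, second):
    # Claim-3 correction rule (sketch): if the second attribute feature
    # exists it is the feature result (identical to the first when the two
    # are the same); otherwise the first attribute feature, if any, is kept.
    if second is not None:
        return second
    return first
```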
4. A method according to claim 3, wherein one of the first attributes corresponds to a feature result, and the obtaining a fourth attribute feature describing the first attribute based on the feature result and the third attribute feature comprises:
for any first attribute of the target entity,
when the feature result is the same as the third attribute feature, determining the third attribute feature or the feature result as the fourth attribute feature;
determining that the third attribute feature is the fourth attribute feature when the feature result and the third attribute feature are different;
determining that the feature result is the fourth attribute feature when the feature result exists for the first attribute and the third attribute feature does not exist;
and determining the third attribute feature as the fourth attribute feature when the third attribute feature exists for the first attribute and the feature result does not exist.
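Claims 3 and 4 together form a two-stage correction in which later sources take precedence: the knowledge base corrects the material-extracted feature, then the user's attribute information corrects that result. A sketch of the combined flow, with `None` for an absent feature and a hypothetical function name:

```python
def fourth_feature(first, second, third):
    # Stage 1 (claim 3): the second attribute feature, if present,
    # corrects the first attribute feature to give the feature result.
    result = second if second is not None else first
    # Stage 2 (claim 4): the third attribute feature, if present,
    # corrects the feature result to give the fourth attribute feature.
    return third if third is not None else result
```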
5. The method according to claim 4, wherein the correcting the preset attribute feature corresponding to the at least one second attribute of the preset entity according to the fourth attribute feature corresponding to the at least one first attribute of the target entity, to obtain the target attribute feature corresponding to the at least one first attribute, comprises:
for any first attribute of the target entity,
determining the fourth attribute feature or the preset attribute feature as the target attribute feature when the first attribute is the same as the second attribute and the fourth attribute feature is the same as the preset attribute feature;
determining the fourth attribute feature as the target attribute feature when the first attribute is the same as the second attribute and the fourth attribute feature is different from the preset attribute feature;
and determining the fourth attribute feature as the target attribute feature when the first attribute is different from the second attribute.
6. The method of claim 4, wherein generating a target document based on the target attribute feature corresponding to the at least one first attribute comprises:
acquiring weights corresponding to the target attribute features corresponding to the at least one first attribute; wherein the weight represents the degree of association of each of the target attribute features with the target entity;
sorting the target attribute features by weight, and selecting a preset number of target attribute features from the sorting result in descending order of weight;
and generating the target document according to the preset number of target attribute features.
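The selection step of claim 6 is a standard weight-ranked top-N cut. A minimal sketch, assuming features arrive as `(feature, weight)` pairs; the function name is hypothetical:

```python
def select_top_features(weighted_features, n):
    # Sort target attribute features by weight (degree of association with
    # the target entity), descending, and keep the preset number n.
    ranked = sorted(weighted_features, key=lambda fw: fw[1], reverse=True)
    return [feature for feature, _ in ranked[:n]]
```

With `[("light", 0.9), ("cheap", 0.3), ("fast", 0.7)]` and `n=2`, this keeps `["light", "fast"]`.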
7. The method of claim 6, wherein the weight corresponding to the target attribute feature is determined based on the weight of the fourth attribute feature; the weight of the fourth attribute feature is determined according to a first weight of the first attribute feature, a second weight of the second attribute feature, and a third weight of the third attribute feature; the first weight is determined according to the frequency of occurrence, in the target material, of descriptive words associated with the first attribute feature; the second weight is determined according to the frequency of occurrence, in the preset knowledge base, of descriptive words associated with the second attribute feature; and the third weight is determined from the attribute information.
8. The method of claim 7, wherein, for any first attribute of the target entity,
when the feature result is the same as the third attribute feature, the weight of the fourth attribute feature is the third weight;
when the feature result is different from the third attribute feature, the weight of the fourth attribute feature is the third weight;
when the feature result exists for the first attribute and the third attribute feature does not exist, the weight of the fourth attribute feature is the weight of the feature result;
and when the third attribute feature exists for the first attribute and the feature result does not exist, the weight of the fourth attribute feature is the third weight.
9. The method of claim 8, wherein, for any first attribute of the target entity,
when the first attribute feature and the second attribute feature are the same, the weight of the feature result is a target weight determined according to the first weight and the second weight;
when the first attribute feature and the second attribute feature are different, the weight of the feature result is the second weight;
when the first attribute feature exists for the first attribute and the second attribute feature does not exist, the weight of the feature result is the first weight;
and when the first attribute feature does not exist for the first attribute and the second attribute feature exists, the weight of the feature result is the second weight.
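The stage-1 weight rule of claim 9 can be sketched as follows. Features are modeled as `(feature, weight)` pairs or `None`; since the claim only says the target weight is "determined according to" the first and second weights when the features agree, `max` is used here purely as an assumed combination, and the function name is hypothetical:

```python
def result_weight(first, second, combine=max):
    # Weight of the feature result after knowledge-base correction (claim 9).
    if first is not None and second is not None:
        if first[0] == second[0]:
            # Same feature from both sources: combined target weight
            # (combination function assumed, not specified by the claim).
            return combine(first[1], second[1])
        # Features differ: the second (knowledge-base) weight wins.
        return second[1]
    # Only one source has a feature: its weight carries over.
    return (first if first is not None else second)[1]
```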
10. The method according to any one of claims 1-9, wherein the preset knowledge base comprises a material base and a web page base; the material base comprises historical attribute features corresponding to the target entity, the historical attribute features being extracted from historical materials of the target entity; and the web page base comprises attribute features associated with the target entity that are retrieved from web page information of target web pages.
11. An electronic device, comprising a memory and one or more processors, the memory being coupled to the one or more processors; wherein the memory stores computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the document generation method of any one of claims 1 to 10.
12. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the document generation method of any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310355671.9A CN116070175B (en) | 2023-04-06 | 2023-04-06 | Document generation method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116070175A CN116070175A (en) | 2023-05-05 |
CN116070175B true CN116070175B (en) | 2024-03-01 |
Family
ID=86170131
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310355671.9A Active CN116070175B (en) | 2023-04-06 | 2023-04-06 | Document generation method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116070175B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110321537A (en) * | 2019-06-11 | 2019-10-11 | 阿里巴巴集团控股有限公司 | A kind of official documents and correspondence generation method and device |
CN111782784A (en) * | 2020-06-24 | 2020-10-16 | 京东数字科技控股有限公司 | File generation method and device, electronic equipment and storage medium |
CN112446214A (en) * | 2020-12-09 | 2021-03-05 | 北京有竹居网络技术有限公司 | Method, device and equipment for generating advertisement keywords and storage medium |
CN113129051A (en) * | 2020-01-15 | 2021-07-16 | 百度在线网络技术(北京)有限公司 | Method, device, equipment and storage medium for generating advertisement file |
WO2021164425A1 (en) * | 2020-02-19 | 2021-08-26 | 京东方科技集团股份有限公司 | Method and device for data processing, electronic device, and storage medium |
CN115496820A (en) * | 2022-08-31 | 2022-12-20 | 阿里巴巴(中国)有限公司 | Method and device for generating image and file and computer storage medium |
CN115577693A (en) * | 2022-10-11 | 2023-01-06 | 西窗科技(苏州)有限公司 | Method and system for intelligently generating file |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11544177B2 (en) | Mapping of test cases to test data for computer software testing | |
CN108932335B (en) | Method and device for generating file | |
CN110134931B (en) | Medium title generation method, medium title generation device, electronic equipment and readable medium | |
CN106709040B (en) | Application search method and server | |
US20180365212A1 (en) | Computerized system and method for automatically transforming and providing domain specific chatbot responses | |
CN106682169B (en) | Application label mining method and device, application searching method and server | |
CN110325986B (en) | Article processing method, article processing device, server and storage medium | |
US20210118035A1 (en) | Generation device and non-transitory computer readable medium | |
US11264006B2 (en) | Voice synthesis method, device and apparatus, as well as non-volatile storage medium | |
CN106970991B (en) | Similar application identification method and device, application search recommendation method and server | |
CN106682170B (en) | Application search method and device | |
US11074595B2 (en) | Predicting brand personality using textual content | |
CN111400584A (en) | Association word recommendation method and device, computer equipment and storage medium | |
CN107798622B (en) | Method and device for identifying user intention | |
CN113570413A (en) | Method and device for generating advertisement keywords, storage medium and electronic equipment | |
CN111737961B (en) | Method and device for generating story, computer equipment and medium | |
CN116821324A (en) | Model training method and device, electronic equipment and storage medium | |
CN115345669A (en) | Method and device for generating file, storage medium and computer equipment | |
CN110909154A (en) | Abstract generation method and device | |
CN116070175B (en) | Document generation method and electronic equipment | |
CN113297520A (en) | Page design auxiliary processing method and device and electronic equipment | |
CN107665442A (en) | Obtain the method and device of targeted customer | |
CN117609612A (en) | Resource recommendation method and device, storage medium and electronic equipment | |
CN110929513A (en) | Text-based label system construction method and device | |
WO2022147746A1 (en) | Intelligent computer search engine removal of search results |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||