
CN111382266A - User portrait generation method, device and equipment - Google Patents

User portrait generation method, device and equipment Download PDF

Info

Publication number
CN111382266A
CN111382266A
Authority
CN
China
Prior art keywords
user
characteristic information
portrait
representation
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811626669.6A
Other languages
Chinese (zh)
Other versions
CN111382266B (en)
Inventor
解威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Mxnavi Co Ltd
Original Assignee
Shenyang Mxnavi Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Mxnavi Co Ltd filed Critical Shenyang Mxnavi Co Ltd
Priority to CN201811626669.6A priority Critical patent/CN111382266B/en
Publication of CN111382266A publication Critical patent/CN111382266A/en
Application granted granted Critical
Publication of CN111382266B publication Critical patent/CN111382266B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers
    • G01C21/3679 - Retrieval, searching and output of POI information, e.g. hotels, restaurants, shops, filling stations, parking facilities

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method, a device, and equipment for generating a user portrait. The method comprises: for the user's use of each function item, collecting user characteristic information used for describing the user portrait to form a temporary user portrait; for each function item, extracting user characteristic information from the temporary user portrait for use, and acquiring accurate user characteristic information corresponding to each function item; and positioning the user along multiple dimensions using the accurate user characteristic information to generate the user portrait. The user portrait generation method provided by the invention collects user characteristic information from multiple function items, screens it within each function item, and then positions the user along multiple dimensions, so that the individual characteristics of the user are better embodied; one-sided user characteristic information is removed through cross-function learning and screening, so that the description of the user's behavior is enriched continuously, comprehensively, and in three dimensions.

Description

User portrait generation method, device and equipment
Technical Field
The present invention relates to the field of information processing, and in particular, to a method, an apparatus, and a device for generating a user portrait.
Background
With the development of science and technology, navigation technology has matured, and navigation retrieval contains many function items. For each function item, query technology has been continuously optimized, making query results more accurate and the user experience better and better. However, in applying navigation technology, since each person has different usage habits, each person uses the query function items differently, and the prior art does not take the "personality" of each individual into account. The prior art mainly has the following disadvantages:
Firstly, when the prior art learns from the function items the user uses, it does not effectively collect information according to each individual's behavioral habits. For example, for input recommendation optimization, only the words entered in user sessions are continuously analyzed, and the subject words used for recommendation are generated through certain processing. The ranking of these words depends on how many times most users have entered them in queries, which ultimately translates into a degree of query popularity. Such obvious dependencies are easy to identify, but some indirect dependencies may also affect the final result.
Secondly, the information collected is relatively one-sided. For example, search ranking optimization considers only the click-through rate, without effectively combining information from the other function items. Analyzing a single aspect of user behavior in isolation is unreliable: data that appears valid under a single-behavior analysis may prove to be invalid noise once the user's other behaviors are observed.
Disclosure of Invention
In view of the technical drawbacks and disadvantages of the prior art, embodiments of the present invention provide a method, apparatus, and device for generating a user representation that overcome or at least partially solve the above problems.
As a first aspect of the embodiments of the present invention, a method for generating a user portrait includes:
for the user's use of each function item, collecting user characteristic information used for describing a user portrait to form a temporary user portrait; for each function item, extracting user characteristic information from the temporary user portrait for use, and acquiring accurate user characteristic information corresponding to each function item; and positioning the user along multiple dimensions using the accurate user characteristic information, generating the user portrait on the basis of the temporary user portrait.
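The three steps above can be sketched as a minimal pipeline. This is purely illustrative: all function names and record fields below are placeholders not given in the specification, and the "validity" filter stands in for the noise-removal rules described later.

```python
# Illustrative sketch of the claimed three-step method; names are placeholders.

def collect_temporary_portrait(usage_records):
    """Step 1: gather feature info from every function item into a temporary portrait."""
    portrait = {}
    for record in usage_records:
        portrait.setdefault(record["function_item"], []).append(record["feature_info"])
    return portrait

def refine_feature_info(temporary_portrait):
    """Step 2: per function item, keep only feature info judged valid (here: non-empty)."""
    return {item: [f for f in feats if f]
            for item, feats in temporary_portrait.items()}

def generate_user_portrait(refined):
    """Step 3: position the user on multiple dimensions from the refined info."""
    return {"dimensions": sorted(refined), "traits": refined}

records = [
    {"function_item": "poi_search", "feature_info": "prefers cafes"},
    {"function_item": "route_planning", "feature_info": ""},  # invalid, filtered out
    {"function_item": "route_planning", "feature_info": "avoids highways"},
]
portrait = generate_user_portrait(refine_feature_info(collect_temporary_portrait(records)))
```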
In an alternative embodiment, after the user uses each of the function items, an operation sequence is formed according to time and the user characteristic information.
In an alternative embodiment, the temporary user representation includes: an initial user representation and a user representation prior to generating a final user representation.
In an alternative embodiment, the initial user representation corresponds to the user's first use of the various function items: user characteristic information characterizing the user representation is collected for the first time, the user is initially positioned, and the initial user representation is formed through a user representation model.
In an alternative embodiment, prior to forming the initial user representation, further comprising:
detecting whether the user representation model exists;
when the user representation model is detected to exist, determining whether the user representation model is available and/or whether the user representation model needs to be updated;
performing the step of forming the initial user representation when the model is detected as available and/or when the user representation model does not need to be updated;
upon detecting that the user representation model does not exist, and/or that the user representation model is unavailable, and/or that the user representation model needs to be updated, connecting to a server to download the user representation model, and then performing the step of forming the initial user representation.
In an alternative embodiment, the user representation model comprises: a user representation frame portion and a user characteristic information filling portion;
the user representation frame portion is used to provide support for the user representation model, and comprises: user preference information describing the user's preference selection order; and
model solidification information for restricting the use authority of the user;
and the user characteristic information filling portion consists of user characteristic information from at least one operation sequence.
In an optional embodiment, using the user characteristic information in each function item and acquiring the accurate user characteristic information corresponding to each function item further includes: performing noise removal processing according to a preset rule, and collecting and using only the information judged to be valid.
In an optional embodiment, the noise removal processing according to the preset rule is: judging whether the user characteristic information is valid according to whether the operation sequence is reasonable and/or according to empirical data.
As a second aspect of the embodiments of the present invention, there is provided a user portrait creation apparatus including:
the temporary user portrait forming module is used for collecting user characteristic information used for describing the user portrait aiming at the use of each function item by a user to form the temporary user portrait;
the user characteristic information selection module is used for extracting user characteristic information from the temporary user portrait for each functional item to use, and acquiring accurate user characteristic information corresponding to each functional item;
and the user portrait generation module is used for carrying out multi-dimensional positioning on the user by using the accurate user characteristic information to generate the user portrait.
In an alternative embodiment, the device forms an operation sequence according to time and the user characteristic information after the user uses each function item.
In an alternative embodiment, the temporary user representation includes: an initial user representation and a user representation prior to generating a final user representation.
In an alternative embodiment, the initial user representation corresponds to the user's first use of the various function items: user characteristic information characterizing the user representation is collected for the first time, the user is initially positioned, and the initial user representation is formed through a user representation model.
In an optional embodiment, the apparatus further comprises:
a detection module that detects whether the user representation model exists;
when the user representation model is detected to exist, determining whether the user representation model is available and/or whether the user representation model needs to be updated;
performing the step of forming the initial user representation when the model is detected as available and/or when the user representation model does not need to be updated;
upon detecting that the user representation model does not exist, and/or that the user representation model is unavailable, and/or that the user representation model needs to be updated, connecting to a server to download the user representation model, and then performing the step of forming the initial user representation.
In an alternative embodiment, the user representation model comprises: a user representation frame portion and a user characteristic information filling portion;
the user representation frame portion is used to provide support for the user representation model, and comprises: user preference information describing the user's preference selection order; and
model solidification information for restricting the use authority of the user;
and the user characteristic information filling portion consists of user characteristic information from at least one operation sequence.
In an optional embodiment, using the user characteristic information in each function item and acquiring the accurate user characteristic information corresponding to each function item further includes: performing noise removal processing according to a preset rule, and collecting and using only the information judged to be valid.
In an optional embodiment, the noise removal processing according to the preset rule is: judging whether the user characteristic information is valid according to whether the operation sequence is reasonable and/or according to empirical data.
As a third aspect of the embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the above-described user representation generation method.
As a fourth aspect of the embodiments of the present invention, there is provided a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above-described user representation generation method when executing the program.
The technical solutions provided by the embodiments of the present invention have at least the following beneficial effects:
the embodiment of the invention provides a method, a device and equipment for generating a user portrait, wherein the method comprises the following steps: aiming at the use of each functional item by a user, collecting user characteristic information used for describing a user portrait to form a temporary user portrait; extracting user characteristic information from the temporary user portrait for each function item to use, and acquiring accurate user characteristic information corresponding to each function item; and performing multi-dimensional positioning on the user by using the accurate user characteristic information to generate the user portrait. The user portrait generation method provided by the invention collects the user characteristic information from a plurality of functional items, screens the inside of the functional items, and then positions the user by a plurality of dimensions, so that the individual characteristics of the user can be embodied better, and the user characteristic information of the user with one side removed is removed through cross-functional learning and screening, so that the behavior description of the user is enriched in a continuous, comprehensive and three-dimensional manner.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow diagram of generating a user representation provided in an embodiment of the present invention;
fig. 2 is a schematic structural diagram of collecting user characteristic information provided in the embodiment of the present invention;
FIG. 3 is a flow diagram for generating a temporary user representation provided in an embodiment of the present invention;
FIG. 4 is a flow chart illustrating a user profile model detection process according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a user characteristic information screening structure provided in an embodiment of the present invention;
FIG. 6 is a flow chart of a specific user representation generation provided in an embodiment of the present invention;
FIG. 7 is a schematic diagram of a user representation generation apparatus according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The following respectively describes the detailed implementation of the method, device and apparatus for generating a user portrait according to the embodiments of the present invention.
An embodiment of the present invention provides a user portrait generation method, which is shown in fig. 1 and may include the following steps:
step 100, collecting user characteristic information used for describing a user portrait aiming at the use of each function item by a user to form a temporary user portrait;
the function items refer to software and/or systems specifically operated by the user, and the user forms an operation sequence according to time and the user characteristic information after using each function item. The user characteristic information refers to any one or combination of a plurality of user characteristic information of the user: the method comprises the following steps of user identity, user age, travel mode, gender, vehicle attribute, activity range number, common place, schedule, function item use frequency, co-occurrence frequency of a plurality of function items and function item operation times.
For example, when the user uses the retrieval function to search for several places, analysis of the user's operations can yield user characteristic information such as usage habits, the user's activity range, and certain retrieval preferences. As another example, if the user often sets a certain place as a destination or waypoint in the search function, it can be indirectly inferred that the user is interested in that place, thereby acquiring knowledge of the user's habits in the search function, i.e. user characteristic information. Through machine learning and a relevance model, the collection of such places can be converted into knowledge for the retrieval function, where they are preferentially recommended to the current user.
The process of generating the temporary user representation is shown in fig. 3 and may specifically include the following steps:
step 110, the user uses each function item;
referring to fig. 3, there are a plurality of function items (function item 1, function item 2, … …, and function item n) used by a user, each of which can provide user feature information, in the embodiment of the present invention, the information is actively pushed to a user portrait through an interface, the user portrait filters the information, and the information retained after filtering generates a temporary user portrait. The information is actively pushed to the user portrait, the triggering time can be set according to needs, the information can be pushed to the user portrait by triggering after the user uses one function item, or the information can be pushed to the user portrait in a fixed period, which is not specifically limited in the embodiment of the present invention.
Step 120, collecting user characteristic information used for describing a user portrait;
the user characteristic information refers to any one or combination of a plurality of user characteristic information of the user: the method comprises the following steps of user identity, user age, travel mode, gender, vehicle attribute, activity range number, common place, schedule, function item use frequency, co-occurrence frequency of a plurality of function items and function item operation times.
Specifically, the user identity refers to the unique identifier of the user, i.e. the identifier that globally distinguishes the user within the user representation model. The user identity is mainly used to associate the user's related information, so that, in turn, knowledge obtained by analyzing the user's habitual behavior can be applied back to that user. The user representation model needs to define an algorithm for generating the unique user identifier, for example through user registration and generation of a registration password.
The user age refers to the age range to which the user's age belongs; for example, the ranges may be: (0, 18], [19, 30], [31, 60], [61, 100].
The travel mode may be: driving, public transportation, walking, etc.
The gender includes: male and female.
The vehicle attribute refers to the model of the vehicle, such as a seven-seater or a five-seater; it may also include the brand of the vehicle, and so on.
The number of activity ranges refers to the number of places the user frequently visits, and may be: [1, 3], [4, 6], [7, +∞).
The common place refers to a place the user visits repeatedly, or a place with special meaning, and may be: a home address, workplace, restaurant, cafe, etc.;
the schedule refers to important information including places and time when a user obtains a plan for traveling, and can help the model to feed back to functional performance. For example: the time period for dividing the users is 6: 00-9: 00, 9: 00-17: 00,17: 00-22: 00,22: 00-6: 00.
The use frequency of the function items refers to the use frequency of one of the function items used by the user, such as: the frequency with which the user uses the "point of interest retrieval" function.
The co-occurrence frequency of a plurality of function items refers to the frequency of association between function items, i.e. how often one function item is used together with another. Such combinations may be, for example, pairs of function items or groups of three function items. In theory, the co-occurrence frequency of more function items could be considered, but based on the consumption of local and server resources, calculating combinations of up to three function items is considered sufficient for the current demand.
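Counting co-occurrence capped at three function items, as described above, can be sketched as follows. The session format and function names are illustrative assumptions; the `max_size=3` cap follows the resource consideration stated in the text.

```python
from collections import Counter
from itertools import combinations

def co_occurrence_counts(sessions, max_size=3):
    """Count how often groups of 2..max_size function items appear together
    in one session. The cap of 3 follows the resource limit described above."""
    counts = Counter()
    for session in sessions:
        items = sorted(set(session))  # deduplicate; canonical order for keys
        for size in range(2, max_size + 1):
            for combo in combinations(items, size):
                counts[combo] += 1
    return counts

sessions = [
    ["search", "route", "favorites"],
    ["search", "route"],
]
counts = co_occurrence_counts(sessions)
```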
The function item operation sequence refers to the sequence of functions between an operation start point and an operation end point. For each function item the user can operate, acquisition points of the function operations are defined in detail and numbered; when the user uses a function item, the model obtains the numbers recorded for that function item. In addition, at the acquisition points of each function item, the operations that can serve as operation start points are also defined; after the function item is used, recording of the operation sequence begins, and the numbers and parameters of the operations are recorded.
An acquisition point corresponds to a relatively fine-grained operation within a function item. For example, in the search function, each step of an input operation can be defined as an acquisition point.
The end point of an operation can be described with reference to the function dwell time and the co-occurrence frequency of multiple functions: exiting one function item is not necessarily regarded as the end of the operation, and the co-occurrence of function items is recommended to cover at most 3 function items, that is, if the user performs consecutive operations, the operation sequence of at most 3 function items is recorded. If the user's operation is interrupted, for example there is no operation for 3 s, this is regarded as the end of an operation.
The function item dwell time refers to the time consumed in using one function item. The dwell time needs to be defined separately according to the behavior of each function item. For example, for the search function, the dwell time is counted from the moment characters are entered until the search interface is exited. As a special case, if no operation occurs during the process and the idle time exceeds 3 s, counting also stops.
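The dwell-time rule above, counting from the first event and stopping once an idle gap exceeds 3 s, can be sketched as a small function. The event representation (a list of timestamps in seconds) is an illustrative assumption.

```python
def dwell_time(timestamps, idle_cutoff=3.0):
    """Dwell time for one function item: accumulate the gaps between consecutive
    events, stopping once a gap exceeds idle_cutoff seconds (the 3 s rule above)."""
    if not timestamps:
        return 0.0
    total = 0.0
    for prev, cur in zip(timestamps, timestamps[1:]):
        gap = cur - prev
        if gap > idle_cutoff:
            break  # user stopped operating; counting stops here
        total += gap
    return total

# Events at 0, 1, and 2.5 s, then a 5 s pause before 7.5 s: counting stops at 2.5 s.
t = dwell_time([0.0, 1.0, 2.5, 7.5])
```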
Step 130, temporarily positioning the user;
the positioning refers to analyzing the type of the user through user characteristic information, such as: the user belongs to type A, sports type, likes driving by oneself, etc. By positioning the user in multiple dimensions, the type of the user can be judged, and the information of the type is collected mainly.
In step 140, a temporary user representation is generated from the user representation model.
Wherein the temporary user representation comprises: an initial user representation and a user representation prior to generating a final user representation.
Specifically, for the initial user portrait, when the user uses each function item for the first time, the user characteristic information used for describing the user portrait is collected, the user is initially positioned, and the initial user portrait is formed through a user portrait model.
In an alternative embodiment, as shown with reference to FIG. 4, prior to forming the initial user representation, further comprising:
step 401, detecting whether the user portrait model exists; when the presence of the user portrait model is detected, performing step 402 and/or step 403; when the absence of the user portrait model is detected, performing step 404;
step 402, determining whether the user representation model is available; when the user representation model is available, performing step 405, otherwise performing step 404;
step 403, determining whether the user portrait model needs to be updated; when the user representation model needs to be updated, go to step 404, otherwise go to step 405;
step 404, connecting a server to download the model, and then executing step 405;
in step 405, an initial user representation is formed.
For example, there are two ways to obtain a user representation model. First, the user representation model can be deployed in the cloud, and the local terminal downloads the requested model according to a protocol agreed with the cloud: after starting, the terminal checks whether the user representation model exists locally, and if not, tries to connect to the cloud and download it. Second, the user representation model can be pre-installed locally, so that the local user representation can be initialized with the local model without requiring an immediate network connection. This, however, raises some problems; for example, if the user's current initial representation model is too old, it should not be used. In general, even when a user representation model exists locally, the terminal still tries to connect to the cloud whenever a network connection is available, to judge whether the local model needs to be updated to the latest state.
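The detection flow of steps 401-405 can be sketched as a single decision function. The callables `is_usable`, `needs_update`, and `download` are illustrative stand-ins for the availability check, update check, and server download described above.

```python
def ensure_portrait_model(local_model, is_usable, needs_update, download):
    """Illustrative sketch of the Fig. 4 flow: use the local model when it
    exists, is usable, and is current; otherwise download it from the server."""
    if local_model is None or not is_usable(local_model) or needs_update(local_model):
        local_model = download()  # step 404: connect to the server, download model
    return local_model            # step 405: form the initial portrait from this model

# No local model: must download (step 401 -> 404 -> 405).
fresh = ensure_portrait_model(
    local_model=None,
    is_usable=lambda m: True,
    needs_update=lambda m: False,
    download=lambda: {"version": 2},
)
# Local model usable and current: kept as-is (step 401 -> 402/403 -> 405).
kept = ensure_portrait_model(
    local_model={"version": 2},
    is_usable=lambda m: True,
    needs_update=lambda m: False,
    download=lambda: {"version": 3},
)
```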
In an alternative embodiment, the user representation model comprises: a user image frame part and a user characteristic information filling part;
the user representation frame portion for providing support to the user representation model, comprising: user preferences describing a user's preference selection order; and the combination of (a) and (b),
model curing information for restricting the use authority of the user;
and the user characteristic information filling portion consists of user characteristic information from at least one operation sequence, comprising any one or a combination of the following items of information about the user: user identity, user age, travel mode, gender, vehicle attributes, number of activity ranges, common places, schedule, function item use frequency, co-occurrence frequency of multiple function items, and function item operation sequence.
User preference information is used to describe the user's preference selection order; it is model setting information that the user can modify. Model solidification information is used to restrict the user's use authority; it is model setting information that the user cannot change.
The user preference refers to the order of selection tendencies presented by a user; the user presets these items before using the model. Data for judging the tendencies of user behavior can be obtained by analyzing the behavior of user groups.
For example, when a navigation device is used, the preferences include: the appearance of the navigation interface, such as interface color, interface brightness, and the sound volume of the device; the interaction mode between the user and the device, such as touch interaction or voice interaction; and the display of function items, such as the priority order in which they are offered, switching function items on or off, switching notifications on or off, language settings, customized event reminders, and so on. These items can be changed according to the user's personal preference; if they are not changed, the system's default settings are used.
In an optional embodiment, the user preference information further includes: feedback information obtained from the user representation frame portion after the user has used the user representation model.
The feedback information refers to the degree and content of the changes that users make to the preference settings in the model after using it. For example, the preference options set by users of each age group can be counted, as can the options set by male and female users, and by male and female users of each age group; the statistical results are then applied to the user representation model. After a large number of users have used the representation model, it can be analyzed according to the number of options users change. If the proportion of changes is high, for example reaching 50%, the existing model is considered to have problems, and the user preference information needs to be adjusted or optimized.
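The feedback analysis above can be sketched as computing the fraction of users who changed any default option. The user records are illustrative; the 50% threshold follows the example figure in the text.

```python
def preference_change_ratio(users):
    """Fraction of users who changed at least one default preference option.
    Records are illustrative dicts with a 'changed_options' count."""
    if not users:
        return 0.0
    changed = sum(1 for u in users if u["changed_options"] > 0)
    return changed / len(users)

users = [{"changed_options": 3}, {"changed_options": 0},
         {"changed_options": 1}, {"changed_options": 2}]
ratio = preference_change_ratio(users)
# Per the text, a high change proportion (e.g. 50%) suggests the model has problems.
model_needs_review = ratio >= 0.5
```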
Step 200, extracting user characteristic information from the temporary user portrait for each function item to use, and acquiring accurate user characteristic information corresponding to each function item;
As shown in fig. 5, each function item extracts user characteristic information from the generated user representation (at this point, the temporary user representation) and uses it. Because the user characteristic information at this stage contains a great deal of dirty data, erroneous data, and the like, the data must be filtered so that only accurate user characteristic information is retained. This filtering of the user characteristic information is performed per function item, shown in fig. 5 as function item 1 learning, function item 2 learning, ..., function item n learning.
In an optional embodiment, the user characteristic information is denoised according to a preset rule before being collected and used, and is collected and used only after it has been judged valid. The reason is that some user characteristic information appears valid when analyzed from one aspect, but may turn out to be noise that must be rejected once other behaviors of the user are observed.
Specifically, the denoising processing according to the preset rule means judging whether the user characteristic information is valid according to a reasonable operation sequence and/or empirical data. Through this denoising analysis, the embodiment of the invention can identify easily recognized erroneous data, abnormal data, overwritten dirty data, missing data and the like. For example, the embodiment of the present invention specifies a start point and an end point for each operation sequence; if a collected record contains only the start point of an operation and no end point, the collected data is considered unusable. As for empirical data, the embodiment of the present invention may allow a user to submit at most 10 operations within 1 hour; if this number is exceeded, malicious data-refreshing behavior may exist, so such a collection result is not adopted. In addition, the collected user characteristic information is adopted only after it has been denoised several times in different ways. For example, after the user characteristic information of an operation is submitted, it is necessary to check: is the operation sequence reasonable? Does any collected item contain easily recognized errors, judged by the rationality of that individual item — for example, is the identity ID legal, is the location position incorrect? Has any item been overwritten? Is any item missing? Has any item been marked as irregular data? Only information that passes all these checks can be used.
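The two preset rules described above — a record must have both an operation start point and an end point, and a user may submit at most 10 operations per hour — can be sketched as a validity filter. The record fields, time units (seconds) and function name are illustrative assumptions:

```python
from collections import defaultdict

MAX_OPS_PER_HOUR = 10  # empirical limit stated in this embodiment

def filter_valid_operations(records):
    """Keep only records passing the sequence-completeness and rate rules."""
    counts = defaultdict(int)              # operations per (user, hour) bucket
    valid = []
    for r in records:
        if r.get("start") is None or r.get("end") is None:
            continue                       # incomplete operation sequence: unusable
        bucket = (r["user_id"], r["start"] // 3600)
        counts[bucket] += 1
        if counts[bucket] > MAX_OPS_PER_HOUR:
            continue                       # likely malicious data-refreshing behavior
        valid.append(r)
    return valid

records = [
    {"user_id": "u1", "start": 0, "end": 30},
    {"user_id": "u1", "start": 40, "end": None},   # missing operation end point
]
kept = filter_valid_operations(records)
```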
Step 300: performing multi-dimensional positioning on the user by using the accurate user characteristic information, and generating the user portrait on the basis of the temporary user portrait.
Suppose that a user portrait has N dimensions {D1, D2, D3, …, DN}, and each dimension takes a finite number of values {s1, s2, …, sm}; the numbers of possible values of the dimensions are m1, m2, …, mN. All the value items of dimension i can be represented as Di = {s1, s2, …, smi} (1 ≤ i ≤ N), and a value of a dimension can be represented as Di(sj) (1 ≤ i ≤ N, 1 ≤ j ≤ mi).
For a positioning set T of the N-dimensional space, T consists of {t1, t2, …, tk}. Any one positioning can be represented as tk = (Di1(sj1), Di2(sj2), Di3(sj3), …, Din(sjn)), where i1, i2, i3, …, in ≤ N; j1 ≤ mi1, j2 ≤ mi2, …, jn ≤ min; and n ≤ N. Each tk is one positioning type.
As another example, the embodiment of the present invention considers a positioning set of a 3-dimensional space, selecting 3 dimensions (gender, age, range of motion), where:
range of values for gender: male and female
Age: (0,18],[19,30],[31,60],[61,100]
Number of moving ranges: [1,3], [4, +∞)
For the above positioning set of the 3-dimensional space, 2 × 4 × 2 = 16 elements are included in total. One possible positioning is (male, [19,30], [1,3]); this positioning information is the user portrait of that user.
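The 3-dimensional example above can be enumerated directly: the positioning set is the Cartesian product of the three dimensions, and a user portrait is one tuple drawn from it. The string labels for the value ranges are illustrative:

```python
from itertools import product

# The three dimensions from the example: gender x age range x number of moving ranges.
gender = ["male", "female"]
age = ["(0,18]", "[19,30]", "[31,60]", "[61,100]"]
moving_ranges = ["[1,3]", "[4,+inf)"]

# The positioning set T is the Cartesian product: 2 * 4 * 2 = 16 positionings.
positioning_set = list(product(gender, age, moving_ranges))

# One positioning from the set, serving as a user's portrait.
user_portrait = ("male", "[19,30]", "[1,3]")
```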
The user portrait generation method provided by the invention collects user characteristic information from a plurality of function items, screens it inside each function item, and then positions the user in multiple dimensions, so that the individual characteristics of the user are embodied better; one-sided user characteristic information is removed through cross-function learning and screening, so that the behavior description of the user is enriched in a continuous, comprehensive and three-dimensional manner.
In a specific embodiment, referring to fig. 6, after a user starts to use each function item in the terminal, the user portrait side receives a usage signal from the client and determines whether a user portrait (the temporary user portrait or the user portrait model) is available; if not, the device connects to the network to download a template or gives another reminder. If the user portrait is available, it is used to collect user characteristic information on the user's behaviors, and after collection the information is fed back in time to each function item for screening. Once the screened accurate user characteristic information is obtained, the user portrait is positioned, thereby generating the user portrait. The generation of the user portrait provided by the embodiment of the invention is an iterative process: the user portrait may be depicted on the basis of the user portrait generated the previous time, or on an initial user portrait template. By continuously depicting the multiple dimensions of the user portrait, one-sided information is removed and accurate information is retained, forming a comprehensive and systematic model that describes the user's characteristics, i.e. the user portrait of the user.
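The iterative flow described in this embodiment can be sketched as a single refinement step: collect feature information from the current portrait, let every function item filter it, then reposition the portrait on top of the previous one. The function names and the toy collect/filter/position callables are assumptions, not the patent's actual API:

```python
def refine_portrait(portrait, collect, filters, position):
    """One iteration: collect -> per-function-item filtering -> positioning."""
    raw = collect(portrait)            # user characteristic information (may be dirty)
    accurate = raw
    for f in filters:                  # function item 1..n learning/screening
        accurate = f(accurate)
    return position(portrait, accurate)  # new portrait built on the previous one

# Toy stand-ins to show the shape of one iteration.
collect = lambda p: [1, -2, 3]                      # raw collected data
filters = [lambda xs: [x for x in xs if x > 0]]     # drop dirty/erroneous data
position = lambda p, xs: p + xs                     # extend the prior portrait
new_portrait = refine_portrait([], collect, filters, position)
```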
A second aspect of the embodiments of the present invention relates to a user portrait generation apparatus, as shown in fig. 7, including:
a temporary user portrait forming module 10, configured to collect user feature information describing a user portrait according to the use of each function item by the user, and form a temporary user portrait;
a user characteristic information selection module 20, configured to extract user characteristic information from the temporary user representation for each function item to use, and obtain accurate user characteristic information corresponding to each function item;
and a user representation generation module 30 for performing multi-dimensional positioning on the user by using the accurate user characteristic information to generate the user representation.
Optionally, after the user uses each of the function items in the device, an operation sequence is formed according to time and the user feature information.
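Forming an operation sequence according to time, as described above, amounts to ordering the collected feature records by timestamp. The record layout below is an illustrative assumption:

```python
# Collected feature records for two function-item uses, in arrival order.
events = [
    {"t": 12.0, "item": "route_search"},
    {"t": 3.0,  "item": "voice_command"},
]

# The operation sequence is the same records ordered by their timestamps.
operation_sequence = sorted(events, key=lambda e: e["t"])
```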
Optionally, the temporary user representation includes: an initial user representation and a user representation prior to generating a final user representation.
Optionally, the initial user portrait is obtained by, for the user's first use of each function item, initially collecting user characteristic information used for depicting the user portrait, initially positioning the user, and forming the initial user portrait through a user portrait model.
Optionally, referring to fig. 7, the apparatus further includes:
a detection module 40 for detecting whether the user portrait model exists;
when the user representation model is detected to exist, determining whether the user representation model is available and/or whether the user representation model needs to be updated;
performing the step of forming the initial user representation when the model is detected as available and/or when the user representation model does not need to be updated;
upon detecting the absence of the user representation model, and/or upon the unavailability of the user representation model, and/or upon the need for updating the user representation model, a connection server downloads the user representation model and then performs the step of forming the initial user representation.
Optionally, the user portrait model includes: a user portrait framework portion and a user characteristic information filling portion;
the user portrait framework portion is used for providing support for the user portrait model, and includes: user preferences describing the user's preference selection order; and,
model solidification information for restricting the user's usage rights;
and the user characteristic information filling portion is the user characteristic information in at least one operation sequence.
Optionally, using the user characteristic information in each function item and obtaining the accurate user characteristic information corresponding to each function item further includes: performing denoising processing according to a preset rule, and collecting and using the information after it has been judged valid.
Optionally, the denoising processing according to the preset rule is: judging whether the user characteristic information is valid according to a reasonable operation sequence and/or empirical data.
For specific descriptions and examples in this embodiment, reference may be made to the contents of the above method embodiments, which are not described herein again.
Based on the same inventive concept, the embodiment of the present invention further provides a user portrait generation device, including: a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the processor implements the above user portrait generation method when executing the program.
Based on the same inventive concept, the embodiment of the present invention further provides a computer-readable storage medium, on which computer instructions are stored, and the instructions, when executed by a processor, implement the user representation generation method as described above.
Unless specifically stated otherwise, terms such as processing, computing, calculating, determining, displaying, or the like, may refer to an action and/or process of one or more processing or computing systems or similar devices that manipulates and transforms data represented as physical (e.g., electronic) quantities within the processing system's registers and memories into other data similarly represented as physical quantities within the processing system's memories, registers or other such information storage, transmission or display devices. Information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. Of course, the processor and the storage medium may reside as discrete components in a user terminal.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification of the claims is intended to mean a "non-exclusive or".

Claims (10)

1. A method for generating a user representation, comprising:
aiming at the use of each functional item by a user, collecting user characteristic information used for describing a user portrait to form a temporary user portrait;
extracting user characteristic information from the temporary user portrait for each function item to use, and acquiring accurate user characteristic information corresponding to each function item;
and performing multi-dimensional positioning on the user by using the accurate user characteristic information, and generating the user portrait on the basis of the temporary user portrait.
2. The method of claim 1, wherein, after the user uses each of the function items, an operation sequence is formed according to time and the user characteristic information.
3. The method of claim 2, wherein the temporary user representation comprises: an initial user representation and a user representation prior to generating a final user representation.
4. The method of claim 3, wherein the initial user representation is formed by, for the user's first use of the various function items, first collecting user characteristic information characterizing the user representation, first positioning the user, and forming the initial user representation from a user representation model.
5. The method of claim 4, prior to forming the initial user representation, further comprising:
detecting whether the user representation model exists;
when the user representation model is detected to exist, determining whether the user representation model is available and/or whether the user representation model needs to be updated;
performing the step of forming the initial user representation when the model is detected as available and/or when the user representation model does not need to be updated;
upon detecting the absence of the user representation model, and/or upon the unavailability of the user representation model, and/or upon the need for updating the user representation model, a connection server downloads the user representation model and then performs the step of forming the initial user representation.
6. The method of claim 5, wherein the user representation model comprises: a user image frame part and a user characteristic information filling part;
the user representation frame portion for providing support to the user representation model, comprising: user preferences describing a user's preference selection order; and the combination of (a) and (b),
model curing information for restricting the use authority of the user;
and the user characteristic information filling part is user characteristic information in at least one operation sequence.
7. The method of claim 1, wherein using the user characteristic information in each function item and obtaining the accurate user characteristic information corresponding to each function item further comprises: performing denoising processing according to a preset rule, and collecting and using the information after it has been judged valid.
8. An apparatus for generating a user representation, comprising:
the temporary user portrait forming module is used for collecting user characteristic information used for describing the user portrait aiming at the use of each function item by a user to form the temporary user portrait;
the user characteristic information selection module is used for extracting user characteristic information from the temporary user portrait for each functional item to use, and acquiring accurate user characteristic information corresponding to each functional item;
and the user portrait generation module is used for carrying out multi-dimensional positioning on the user by using the accurate user characteristic information to generate the user portrait.
9. A computer-readable storage medium having computer instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method of any one of claims 1 to 7.
CN201811626669.6A 2018-12-28 2018-12-28 User portrait generation method, device and equipment Active CN111382266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811626669.6A CN111382266B (en) 2018-12-28 2018-12-28 User portrait generation method, device and equipment


Publications (2)

Publication Number Publication Date
CN111382266A true CN111382266A (en) 2020-07-07
CN111382266B CN111382266B (en) 2024-09-13

Family

ID=71222158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811626669.6A Active CN111382266B (en) 2018-12-28 2018-12-28 User portrait generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN111382266B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933049A (en) * 2014-03-17 2015-09-23 华为技术有限公司 Method and system for generating digital human
CN105574159A (en) * 2015-12-16 2016-05-11 浙江汉鼎宇佑金融服务有限公司 Big data-based user portrayal establishing method and user portrayal management system
CN105657003A (en) * 2015-12-28 2016-06-08 腾讯科技(深圳)有限公司 Information processing method and server
CN108154401A (en) * 2018-01-15 2018-06-12 网易无尾熊(杭州)科技有限公司 User's portrait depicting method, device, medium and computing device
CN108156146A (en) * 2017-12-19 2018-06-12 北京盖娅互娱网络科技股份有限公司 A kind of method and apparatus for being used to identify abnormal user operation
CN108268547A (en) * 2016-12-29 2018-07-10 北京国双科技有限公司 User's portrait generation method and device
CN108415952A (en) * 2018-02-02 2018-08-17 北京腾云天下科技有限公司 User data storage method, label computational methods and computing device
CN108764663A (en) * 2018-05-15 2018-11-06 广东电网有限责任公司信息中心 A kind of power customer portrait generates the method and system of management
CN108920160A (en) * 2018-05-31 2018-11-30 深圳壹账通智能科技有限公司 Upgrade method, device, server and the computer storage medium of application APP




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 110000 No. 861-6, shangshengou village, Hunnan District, Shenyang City, Liaoning Province

Applicant after: Shenyang Meihang Technology Co.,Ltd.

Address before: 110167 12th and 13th floors of Debao building, No.1 Jinhui street, Hunnan New District, Shenyang City, Liaoning Province

Applicant before: SHENYANG MXNAVI Co.,Ltd.

GR01 Patent grant