WO2024114034A1 - Content recommendation method, apparatus, device, medium and program product - Google Patents
- Publication number: WO2024114034A1 (PCT/CN2023/118248)
- Authority: WIPO (PCT)
- Prior art keywords: user, feature representation, cluster, cluster center, domain
Classifications
- G06F16/9535 — Search customisation based on user profiles and personalisation
- G06F16/958 — Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
- G06F18/23213 — Non-hierarchical clustering techniques using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the embodiments of the present application relate to the field of computer technology, specifically to the field of machine learning, and in particular to a content recommendation method, apparatus, device, medium, and program product.
- Data sparsity refers to the lack of historical interaction data of users
- cold start refers to the lack of historical interaction data after new users enter the system.
- the recommendation system cannot accurately analyze users' interests and preferences, and therefore cannot accurately push content to users, which wastes the resources supporting the content push function and leads to low resource utilization.
- a content recommendation method, apparatus, device, medium, and program product are provided.
- a content recommendation method, which is executed by a computer device and includes:
- acquiring attribute data of a first user in a first function platform, extracting features based on the attribute data, and obtaining a first feature representation of the first user;
- obtaining a second feature representation of a second user in the first function platform, performing cluster analysis on the first feature representation and the second feature representation, and obtaining a cluster center corresponding to the first user;
- obtaining second historical interaction data of the first user in a second function platform, extracting features based on the second historical interaction data, and obtaining an out-of-domain feature representation of the first user;
- acquiring a mapping relationship function corresponding to the cluster center, and mapping the out-of-domain feature representation through the mapping relationship function to obtain an in-domain feature representation of the first user, wherein the mapping relationship function is used to indicate a mapping relationship between the feature representations of the second function platform and the first function platform;
- determining a target feature representation of the first user, determining, based on the target feature representation, target content matching the first user from a candidate content recommendation pool, and pushing the target content to the first user.
- a content recommendation device comprising:
- an extraction module which obtains attribute data of a first user in a first function platform, extracts features based on the attribute data, and obtains a first feature representation of the first user;
- a cluster analysis module which obtains a second feature representation of a second user in the first functional platform, performs cluster analysis on the first feature representation and the second feature representation, and obtains a cluster center corresponding to the first user;
- the extraction module is used to obtain second historical interaction data of the first user in the second function platform, extract features based on the second historical interaction data, and obtain an out-of-domain feature representation of the first user;
- an acquisition module configured to acquire a mapping relationship function corresponding to the cluster center, and map the out-of-domain feature representation through the mapping relationship function to obtain the in-domain feature representation of the first user, wherein the mapping relationship function is used to indicate a mapping relationship between the feature representations of the second function platform and the first function platform;
- a recommendation module configured to determine a target feature representation of the first user according to the in-domain feature representation and the first feature representation;
- the target feature representation is used to determine target content matching the first user from a candidate content recommendation pool, and the target content is pushed to the first user.
- a computer device including a processor and a memory, wherein the memory stores computer-readable instructions, and the computer-readable instructions are loaded and executed by the processor to implement the content recommendation method of each embodiment of the present application.
- a computer-readable storage medium in which computer-readable instructions are stored.
- the computer-readable instructions are loaded and executed by a processor to implement the content recommendation method of each embodiment of the present application.
- a computer program product including computer-readable instructions, which, when executed by a processor, implement the content recommendation method of each embodiment of the present application.
- FIG1 is a schematic diagram of performing personalized content recommendation for a specified user provided by an exemplary embodiment of the present application
- FIG2 is a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application.
- FIG3 is a flow chart of a content recommendation method provided by an exemplary embodiment of the present application.
- FIG4 is a schematic diagram of performing personalized content recommendation for a first user based on intra-domain features provided by an exemplary embodiment of the present application
- FIG5 is a flow chart of a training method for a personalized mapping relationship function provided by an exemplary embodiment of the present application
- FIG6 is a flow chart of a cluster analysis method provided by an exemplary embodiment of the present application.
- FIG7 is a schematic diagram of obtaining a second clustering distribution result after performing discrete analysis processing on a first clustering distribution result provided by an exemplary embodiment of the present application;
- FIG8 is a flow chart of a method for obtaining an out-of-domain feature representation of a first user provided by an exemplary embodiment of the present application
- FIG9 is a schematic diagram of a heterogeneous graph provided by an exemplary embodiment of the present application.
- FIG10 is a schematic diagram of a meta-path-based heterogeneous graph convolution provided by an exemplary embodiment of the present application.
- FIG11 is a structural block diagram of a content recommendation device provided by an exemplary embodiment of the present application.
- FIG12 is a structural block diagram of a content recommendation device provided by another exemplary embodiment of the present application.
- FIG13 is a structural block diagram of a computer device provided by an exemplary embodiment of the present application.
- Artificial Intelligence is the theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain the best results.
- artificial intelligence is a comprehensive technology in computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can respond in a similar way to human intelligence.
- Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning and decision-making.
- Artificial intelligence technology is a comprehensive discipline that covers a wide range of fields, including both hardware-level and software-level technologies.
- the basic technologies of artificial intelligence generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, mechatronics, etc.
- Artificial intelligence software technology mainly includes several major directions such as computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
- Personalized recommendation system is the product of the development of the Internet and e-commerce. It is an advanced business intelligence platform based on massive data mining, providing users with personalized information services and decision support.
- the personalized recommendation system collects data on user attributes and historical behavior, uses a designed recommendation algorithm to build a user interest model, generates a specific recommendation list for each user, and pushes it to them, thereby achieving the purpose of personalized recommendation.
- Data sparsity refers to the fact that there are few interaction records between users and items, making it difficult to capture user interests or item characteristics
- cold start refers to the situation where new users or new items have just entered the system and have no interaction records.
- Traditional recommendation algorithms make recommendations based on the interaction data between users and items, and it is difficult to make appropriate recommendations in these two situations.
- Transfer learning is a term in machine learning, which refers to the influence of one learning on another, or the influence of acquired experience on the completion of other activities. Transfer is widely present in the learning of various knowledge, skills and social norms.
- Transfer learning uses the rich knowledge and information in the source domain to improve the performance of the target domain and reduce the number of samples required in the target domain. It is widely used in the fields of vision and natural language processing. For example, the knowledge (or model) used to identify cars can also be used to improve the ability to identify trucks.
- the source domain (Source Domain, SD) refers to the existing knowledge domain, which means a domain different from the target sample, and usually has rich supervision information and labeled data
- the target domain (Target Domain, TD) refers to the domain that needs to be learned, which means the domain where the target sample is located, and usually has only a small amount of labeled data, or no labeled data.
- the source domain can be the domain that serves as the source of the transferred knowledge in transfer learning
- the target domain can be the domain that serves as the destination of the transferred knowledge in transfer learning.
- cross-domain recommendation aims to combine data from multiple fields and introduce information from other domains (source domains) to assist, so that better recommendations can be made in the target domain or even multiple domains.
- There is generally some overlapping information between different domains, such as common users shared by different domains, or the same items appearing in different domains; all of these belong to the category of overlapping information.
- overlapping information is required to migrate information between different domains.
- mapping-based cross-domain recommendation algorithm shares the same mapping function for all users.
- Each user has individual differences, and the interest mapping from the source domain to the target domain varies greatly from user to user. If all users share the same mapping function, this complex mapping relationship cannot be modeled well, resulting in low accuracy of the mapping result and poor performance when recommending personalized content to users based on that result.
- the out-of-domain features of the cold user in the source domain are mapped to the target domain through the personalized mapping function, and the in-domain features of the cold user are obtained, and personalized content recommendations are made to the cold user based on the in-domain features.
- a cold user is a user whose relevant information is missing, and is generally a new user.
- the attribute data of the sample user 101 in the first functional platform is obtained, and feature extraction is performed on the attribute data to obtain the second feature representation 102 of the sample user;
- cluster analysis is performed on the second feature representation 102 to obtain a cluster distribution result 103, wherein the cluster distribution result 103 includes multiple clusters, and each cluster corresponds to its own cluster center.
- the candidate mapping function is trained based on the cluster center to obtain a mapping module 104, wherein the mapping module 104 includes multiple mapping relationship functions.
- Each mapping relationship function in the mapping module 104 corresponds to a cluster, and the cluster center of the cluster is obtained, and the corresponding mapping relationship function can be indexed according to the cluster center.
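As an illustrative sketch only (the patent does not specify the form of each mapping relationship function), the mapping module can be thought of as a table of per-cluster mapping functions indexed by the nearest cluster center. The class name `MappingModule` and the linear form of each mapping function are assumptions made here for clarity:

```python
import numpy as np

class MappingModule:
    """Hypothetical mapping module: one mapping function per cluster,
    indexed by the cluster center closest to the user's first feature."""

    def __init__(self, cluster_centers, out_dim, in_dim):
        # cluster_centers: (K, d) array of cluster centers from cluster analysis
        self.centers = np.asarray(cluster_centers, dtype=float)
        k = self.centers.shape[0]
        rng = np.random.default_rng(0)
        # One (W, b) pair per cluster; in practice these would be trained,
        # not randomly initialized.
        self.params = [
            (rng.normal(scale=0.01, size=(in_dim, out_dim)), np.zeros(in_dim))
            for _ in range(k)
        ]

    def lookup(self, first_feature):
        # Index the mapping function by the nearest cluster center.
        dists = np.linalg.norm(self.centers - np.asarray(first_feature), axis=1)
        return int(np.argmin(dists))

    def map(self, first_feature, out_of_domain_feature):
        # Map the out-of-domain feature to an in-domain feature using the
        # mapping function of the user's cluster.
        W, b = self.params[self.lookup(first_feature)]
        return W @ np.asarray(out_of_domain_feature) + b
```

The key design point mirrored from the text is that the cluster center serves as the index: users in the same cluster share a mapping function, while users in different clusters get different ones.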
- the attribute data of the designated user 111 in the first function platform is obtained, and the feature extraction is performed on the attribute data of the designated user 111 to obtain the first feature representation 112 of the designated user.
- the designated user 111 may be a cold user, that is, a user with no historical interaction data in the first function platform; the designated user 111 may also be a returning user, that is, a user with no historical interaction data in the first function platform in the historical period, but with historical interaction data in the most recent period.
- the second historical interaction data of the designated user 111 in the second functional platform is obtained, and features are extracted from the historical interaction data of the designated user 111 to obtain an out-of-domain feature representation 113 of the designated user.
- the similarity between the first feature representation 112 of the designated user and the cluster center in the cluster distribution result 103 is calculated, and the cluster center corresponding to the designated user 111 is obtained based on the similarity.
- a target mapping relationship function 114 suitable for the designated user 111 is obtained in the mapping module 104.
- the out-of-domain feature representation 113 of the specified user is input into the target mapping relationship function 114, which maps it to the in-domain feature 115 of the specified user 111.
- the in-domain feature 115 of the specified user 111 and the first feature representation 112 of the specified user are concatenated together to form the target feature representation 116 of the specified user.
- based on the target feature representation 116, the personalized content 117 that the designated user 111 may be interested in is screened out, and the personalized content 117 is recommended to the designated user 111.
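The final matching step described above (concatenating the in-domain feature with the first feature representation, then screening the candidate pool) might be sketched as follows. The function name `recommend` and the use of cosine similarity for matching are illustrative assumptions, not the patent's stated method:

```python
import numpy as np

def recommend(in_domain_feat, first_feat, candidate_feats, top_k=3):
    # Concatenate the mapped in-domain feature with the first feature
    # representation to form the target feature representation.
    target = np.concatenate([in_domain_feat, first_feat])
    # Score each candidate against the target representation; cosine
    # similarity is one plausible matching criterion.
    t = target / np.linalg.norm(target)
    c = candidate_feats / np.linalg.norm(candidate_feats, axis=1, keepdims=True)
    scores = c @ t
    # Return the indices of the top-k matching candidates.
    return np.argsort(-scores)[:top_k]
```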
- user attribute data and historical interaction data are data actively uploaded by the user; or, are data obtained after the user's separate authorization.
- the information involved in this application, including but not limited to user attribute information and the historical interaction information between users and the first and second functional platforms, and the data involved, including but not limited to data used for analysis, stored data and displayed data, are all obtained with full authorization.
- the implementation environment involves a terminal 210 and a server 220.
- the terminal 210 and the server 220 are connected via a communication network 230.
- the terminal 210 is used to send at least one of the first feature representation or the second feature representation of the user and the out-of-domain feature representation to the server 220.
- the terminal 210 is installed with an application having a feature mapping function (e.g., a function of mapping an out-of-domain feature representation to an in-domain feature representation).
- the terminal 210 is installed with an application having a personalized mapping function.
- the terminal 210 is installed with a search engine application, a travel application, a life-assisting application, an instant messaging application, a video application, a game application, a news application, a content recommendation application, etc., which is not limited in the embodiments of the present application.
- after obtaining the user's first feature representation, second feature representation, and out-of-domain feature representation, the server 220 performs feature analysis on them to obtain the user's in-domain feature representation, filters out personalized content that the user may be interested in based on the in-domain feature representation, and applies the result to downstream applications, such as user aggregation based on in-domain features and personalized content recommendation.
- alternatively, after acquiring the user's first feature representation, second feature representation, and out-of-domain feature representation and computing the in-domain feature representation, the server 220 returns the in-domain feature representation to the terminal 210 corresponding to the user, and the terminal 210 selects personalized content based on the in-domain feature representation and recommends it to the user.
- the personalized content includes in-domain information flow content that the user may be interested in, such as information flow articles, videos, music, etc.
- the above-mentioned terminal can be a mobile phone, a tablet computer, a desktop computer, a portable laptop computer, a smart TV, a car terminal, a smart home device and other terminal devices in various forms, which is not limited in the embodiments of the present application.
- the above server can be an independent physical server, a combination of multiple physical servers, or a server cluster or distributed system that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (CDNs), as well as big data and artificial intelligence platforms.
- cloud technology refers to a hosting technology that unifies hardware, software, network and other resources in a wide area network or local area network to realize data computing, storage, processing and sharing.
- Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, etc. based on the cloud computing business model. It can form a resource pool, which is used on demand and is flexible and convenient. Cloud computing technology will become an important support.
- the background services of technical network systems, such as video websites, picture websites, and other portal websites, require a large amount of computing and storage resources. With the rapid development of the Internet industry, each item may in the future have its own identification mark, all of which will need to be transmitted to the background system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong backing system support, which can only be achieved through cloud computing.
- the above server can also be implemented as a node in a blockchain system.
- the content recommendation method provided by the present application is explained.
- the method can be executed by a server or a terminal, or can be executed by a server and a terminal together.
- the method is explained by taking the execution of the server as an example. As shown in Figure 3, the method includes the following steps.
- Step 310 Acquire attribute data of the first user in the first functional platform, extract features based on the attribute data, and obtain a first feature representation of the first user.
- the first user is a specific user to whom content needs to be pushed.
- the first user can be called a designated user or a target user.
- Attribute data is data used to describe user characteristics.
- the attribute data of the first user is data describing the user characteristics of the first user.
- the attribute data of the first user in the first functional platform is attribute data generated or stored by the first user in the first functional platform.
- the first feature representation is a feature representation of the first user.
- the first functional platform includes different types of platform elements, and the first user can interact with the platform elements in the first functional platform.
- the type of the first functional platform includes but is not limited to a game platform, a social platform or a shopping platform.
- the types of platform elements included in the first functional platform include but are not limited to: video elements, such as movies, TV series, animation videos, etc.; image elements, which refer to images containing information flow content; music elements, such as songs, accompaniments, etc.; text elements, such as journal articles, e-books, etc.
- the services that the first functional platform can provide include but are not limited to: online live broadcast push, video account content push, subscription account article push or communication community push.
- the first user has no effective interaction with the platform elements in the first functional platform, that is, the first user has no effective historical interaction data in the first functional platform, including but not limited to the following situations:
- the first user is a newly registered user on the first functional platform, i.e., a cold user with no historical interaction data;
- the first user is a returning user of the first functional platform and has not logged into the first functional platform for a preset period of time, and the historical interaction data is cleared;
- the first user's login frequency on the first functional platform is lower than the frequency threshold, and the amount of historical interaction data is lower than the threshold;
- the first user interacts with the first functional platform to generate historical interaction data in the following ways, including but not limited to:
- the first user browses the content pushed by the first functional platform: at least one of articles, videos, or live broadcast content;
- the first user performs transaction-related activities on the first functional platform: shopping, selling or commenting on products, etc.;
- the first user actively uploads content on the first functional platform: at least one of articles, pictures or videos.
- the attribute data of the first user in the first functional platform includes, but is not limited to: the age information of the first user, the IP address (Internet Protocol Address) of the first user, the gender information of the first user, or the device model of the first user.
- a feature extraction network is used to perform feature extraction on attribute data of the first user in the first function platform to obtain a first feature representation of the first user.
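The patent does not fix the architecture of the feature extraction network. As a hedged illustration, a minimal two-layer network over a numerically encoded attribute vector (age, encoded IP region, gender, device model, etc.) could look like this; all names and shapes here are hypothetical:

```python
import numpy as np

def extract_first_feature(attr_vec, W1, b1, W2, b2):
    """Illustrative two-layer feature extraction network: the attribute
    data, encoded as a numeric vector, is passed through a hidden layer
    with ReLU activation to produce the first feature representation."""
    h = np.maximum(0.0, attr_vec @ W1 + b1)  # hidden layer with ReLU
    return h @ W2 + b2                       # output feature representation
```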
- Step 320 Perform cluster analysis on the first feature representation and the second feature representation of the second user in the first functional platform to obtain a cluster center corresponding to the first user.
- the first functional platform includes a first user and at least one second user, each of which has corresponding attribute data.
- the second feature representation is a feature representation of the second user.
- the computer device may extract a second feature representation of the second user based on the attribute data of the second user in the first function platform.
- the second feature representation of the second user and the first feature representation of the first user belong to the same type of feature representation, or are feature representations extracted through the same feature extraction network, and are used to represent the characteristics of the user's attribute data in the first function platform.
- the computer device may perform cluster analysis on the second user based on the second feature representation of the second user to obtain a plurality of clusters.
- Each second user corresponds to a respective cluster, the center point of the cluster is the cluster center, and each cluster includes the respective corresponding cluster center.
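One common way to realize the cluster analysis described above is k-means, where each cluster center is the mean of its member users' feature representations. This is only one possible choice of clustering algorithm, sketched minimally here:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means sketch: cluster the second users' feature
    representations; the mean of each cluster is its cluster center."""
    features = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize centers from randomly chosen user features.
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each user to the nearest cluster center.
        labels = np.argmin(
            np.linalg.norm(features[:, None] - centers[None], axis=2), axis=1)
        # Recompute each center as the mean of its assigned users.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return centers, labels
```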
- the computer device may obtain the similarity between the first feature representation and the cluster center, and determine the cluster center corresponding to the first user based on the similarity.
- obtaining the similarity between the first feature representation and the cluster center, and determining the cluster center corresponding to the first user based on the similarity includes: calculating the distance between the first feature representation and each cluster center, and using the cluster center with the smallest distance from the first feature representation as the cluster center corresponding to the first user.
- the distance between the first feature representation and the cluster center is used to measure the similarity between the first feature representation and the cluster center, which makes it easy to find a more suitable cluster center, so that accurate push can be performed, avoiding waste of resources supporting the content push function.
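The distance-based assignment in this embodiment reduces to a nearest-centroid lookup, for example:

```python
import numpy as np

def nearest_cluster_center(first_feature, cluster_centers):
    # The distance between the first feature representation and each
    # cluster center measures their (dis)similarity; the center with
    # the smallest distance is taken as the user's cluster center.
    dists = np.linalg.norm(
        np.asarray(cluster_centers) - np.asarray(first_feature), axis=1)
    return int(np.argmin(dists))
```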
- the similarity between the first feature representation and the cluster center is obtained, and the cluster center corresponding to the first user is determined based on the similarity, including: calculating the similarity between the first feature representation and each cluster center; determining the probability that the first user belongs to each cluster based on the similarity; screening the maximum probability from the determined probabilities, and determining the cluster center corresponding to the first user based on the maximum probability.
- the probability that the first user belongs to each cluster is calculated using the similarity, and the maximum probability is screened out, so that the cluster center of the cluster to which the first user belongs when the probability is the maximum is used as the cluster center corresponding to the first user.
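- As an illustrative sketch of the two variants above (smallest-distance assignment, and similarity-based probabilities), where using a softmax over negative Euclidean distances is an assumed concrete choice of similarity, not mandated by this embodiment:

```python
import numpy as np

def assign_cluster(first_feature, cluster_centers):
    """Assign a first user's feature representation to a cluster center."""
    # Distance between the first feature representation and each cluster center
    dists = np.linalg.norm(cluster_centers - first_feature, axis=1)
    # Variant 1: the center with the smallest distance is the user's cluster center
    nearest = int(np.argmin(dists))
    # Variant 2: turn similarities (negative distances) into per-cluster
    # probabilities; the cluster with the maximum probability is selected
    sims = -dists
    probs = np.exp(sims - sims.max())
    probs = probs / probs.sum()
    return nearest, probs

centers = np.array([[0.0, 0.0], [5.0, 5.0]])
idx, probs = assign_cluster(np.array([0.5, 0.2]), centers)
# idx is 0: the user is closest to the first center, which also has max probability
```

Both variants agree here; they can differ only in how ties or soft memberships are handled downstream.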
- This embodiment can also perform content recommendation more accurately and avoid wasting resources supporting the content push function.
- Step 330 Acquire second historical interaction data of the first user in the second functional platform, extract features based on the second historical interaction data, and obtain an out-of-domain feature representation of the first user.
- the second functional platform also includes different types of platform elements, and the first user can interact with the platform elements in the second functional platform.
- the relationship between the first function platform and the second function platform includes but is not limited to: (1) different function modules in the same software, the first user is the account logged in to the software; (2) different function modules in the same website, the first user is the account logged in to the website, or the first user is the user corresponding to the terminal identification code of the currently browsing website; (3) the first user's account is associated with the first function platform and the second function platform, such as: the first user uses the first account to log in to the first function platform, and uses the second account to log in to the second function platform, and a binding relationship is established between the first function platform and the second function platform.
- Second historical interaction data of the first user in the second functional platform is obtained, and feature extraction processing is performed on the historical interaction data to obtain an out-of-domain feature representation of the first user.
- out-of-domain refers to the range outside the first functional platform
- the out-of-domain feature representation is extracted from the historical interaction data generated by the first user on functional platforms other than the first functional platform.
- the first user can interact with platform elements in any type of functional platform, that is, the first user may have historical interaction data in any type of functional platform.
- the features obtained after feature extraction processing of the historical interaction data generated by the first user interacting with platform elements outside the first functional platform are used as the out-of-domain feature representation, which is not limited in this embodiment.
- Step 340 Obtain a mapping relationship function corresponding to the cluster center, and map the out-of-domain feature representation through the mapping relationship function to obtain the in-domain feature representation of the first user.
- the mapping relationship function is used to indicate the mapping relationship between the feature representations of the second functional platform and the first functional platform.
- the mapping relationship function corresponding to the cluster center is a mapping relationship function adapted to the cluster center.
- the computer device can map the out-of-domain feature representation input to the mapping relationship function to obtain the in-domain feature representation of the first user output by the mapping relationship function.
- a mapping relationship function corresponding to the cluster center is obtained, and the out-of-domain feature representation is mapped through the mapping relationship function to obtain the in-domain feature representation of the first user, including: based on the cluster center corresponding to the first user, a parameter replacement is performed on a pre-generated parameter-containing mapping function to obtain a mapping relationship function corresponding to the cluster center; the out-of-domain feature representation is mapped through the mapping relationship function to obtain the in-domain feature representation of the first user.
- the computer device can replace parameters of the parameter-containing mapping function based on the cluster center corresponding to the first user to obtain a mapping relationship function corresponding to the cluster center; and map the out-of-domain feature representation through the mapping relationship function to obtain the in-domain feature representation of the first user.
- a pre-generated parameter-containing mapping function is replaced with parameters to obtain a mapping relationship function corresponding to the cluster center, including: obtaining a parameter-containing mapping function, the parameter-containing mapping function includes a specified parameter position in a to-be-filled state; substituting the cluster center as a parameter into the specified parameter position to obtain a mapping relationship function corresponding to the cluster center, wherein the cluster center is used as a search keyword to query the mapping relationship function.
- the computer device can obtain sample data in advance, perform training based on the sample data, determine a number of discrete cluster centers, and determine a mapping relationship function corresponding to each cluster center. Based on the structure of the parameter-containing mapping function, that is, based on the specified parameter position in a state to be filled, the parameter value is extracted from the mapping relationship function, so that the extracted parameter value is corresponded to the corresponding cluster center, forming a preset mapping relationship between the cluster center and the parameter value.
- the computer device can determine the parameter value to be replaced based on the cluster center corresponding to the first user and the preset mapping relationship, and substitute the parameter value into the parameter-containing mapping function to obtain the mapping relationship function corresponding to the cluster center. Furthermore, the computer device can map the out-of-domain feature representation input to the mapping relationship function, and output the in-domain feature representation of the first user.
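- As a minimal sketch of the parameter replacement described above, assuming the parameter-containing mapping function is a linear map whose weight and bias slots are the specified parameter positions in a to-be-filled state (the preset mapping and all parameter values are hypothetical):

```python
import numpy as np

# Preset mapping between cluster-center keys and parameter values, as would be
# extracted from the per-cluster trained mapping functions (hypothetical values)
preset_params = {
    0: (np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([0.1, 0.1])),
    1: (np.array([[0.5, 0.5], [0.5, 0.5]]), np.array([0.0, 0.0])),
}

def mapping_function(out_of_domain_feat, cluster_id):
    # The weight matrix W and bias b are the "specified parameter positions";
    # parameter replacement fills them from the preset mapping for this cluster
    W, b = preset_params[cluster_id]
    # Map the out-of-domain feature representation to an in-domain one
    return out_of_domain_feat @ W + b

in_domain = mapping_function(np.array([1.0, 2.0]), 0)
# With identity weights and bias 0.1 per dimension: [1.1, 2.1]
```

In practice the parameter-containing mapping function may be a neural network whose replaceable model parameters play the role of W and b here.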
- mapping relationship function may be a neural network model
- parameter-containing mapping function may be a neural network model with replaceable model parameters
- the cluster center can be used as a search keyword to query the mapping relationship function.
- Each mapping relationship function can correspond to a Key value for indexing.
- the computer device can use the current cluster center as a search keyword to match the target cluster center corresponding to the search key (Key). When the match is successful, the search value (Value) corresponding to the target cluster center is the mapping relationship function corresponding to the current cluster center.
- the type of parameter-containing mapping function can be arbitrary, and the method of replacing parameters of the parameter-containing mapping function based on the cluster center can be arbitrary, which is not limited in this embodiment.
- Step 350 determining the target feature representation of the first user according to the intra-domain feature representation and the first feature representation, and based on the target feature representation, determining the target content matching the first user from the candidate content recommendation pool, and pushing the target content to the first user.
- the target feature representation is a feature representation determined based on the intra-domain feature representation and the first feature representation, and is the basis for matching content from the candidate content recommendation pool.
- the target content is the content pushed to the first user.
- the computer device can combine the intra-domain feature representation and the first feature representation to obtain a target feature representation of the first user, and match the first user with the candidate content recommendation pool based on the target feature representation, match the target content, and push the target content in the candidate content recommendation pool to the first user.
- the in-domain feature representation of the first user obtained through the above step 340 can be used to represent the possible interaction behavior of the first user on the first functional platform, even though there is no historical interaction data of the first user on the first functional platform. That is, the in-domain feature representation maps the characteristics of the first user's interaction with platform elements on the second functional platform into the first functional platform.
- the computer device may combine the in-domain feature representation and the first feature representation by splicing to obtain a target feature representation of the first user, and the target feature representation is used to recommend personalized content to the first user.
- a first feature representation 402 and an out-of-domain feature representation 403 of a first user 401 are obtained, a graph is constructed, and the first feature representation 402 and the out-of-domain feature representation 403 are represented in the form of a graph.
- the out-of-domain feature representation 403 is mapped using a mapping function to obtain an in-domain feature representation 404 of the first user 401, a candidate content recommendation pool 405 is obtained, and similarity matching is performed between the in-domain feature representation 404 and the contents in the candidate content recommendation pool 405 to obtain a similarity matching result 406, and the contents ranked in the top M in terms of similarity corresponding numerical values in the similarity matching result 406 are used as target content 407, and the target content 407 is recommended to the first user 401.
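- A minimal sketch of combining the two representations by splicing and ranking the candidate content pool, assuming cosine similarity as the matching measure (the embodiment leaves both the combination and the matching method arbitrary):

```python
import numpy as np

def recommend_top_m(in_domain_feat, first_feat, pool, m=2):
    # Combine the in-domain feature representation and the first feature
    # representation by splicing (concatenation) into the target feature
    target = np.concatenate([in_domain_feat, first_feat])
    # Cosine similarity between the target feature and each candidate content
    sims = pool @ target / (np.linalg.norm(pool, axis=1) * np.linalg.norm(target) + 1e-12)
    # Indices of the top-M contents ranked by similarity: the target content
    return list(np.argsort(-sims)[:m])

pool = np.array([[1.0, 0.0, 0.0, 1.0],
                 [0.0, 1.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0, 0.9]])
top = recommend_top_m(np.array([1.0, 0.0]), np.array([0.0, 1.0]), pool, m=2)
```

The returned indices identify the target content to push to the first user.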
- the way of combining the intra-domain feature representation and the first feature representation can be arbitrary, and the types and quantities of content included in the candidate content recommendation pool can be arbitrary; the way of performing similarity matching between the intra-domain feature representation and the content in the candidate content recommendation pool can be arbitrary, and when selecting target content based on the similarity matching results, the number and types of target content can be arbitrary, and this embodiment does not limit this.
- the method provided in the present application performs cluster analysis on all users in the first functional platform to obtain the cluster to which each user belongs and the cluster center corresponding to each user, and obtains a personalized mapping relationship function based on the cluster center.
- the personalized mapping relationship function can realize the mapping process from out-of-domain feature representation to in-domain feature representation for different users, thereby improving the accuracy of the mapping results.
- a first feature representation of the first user and an out-of-domain feature representation obtained by feature extraction of the first user's historical interaction data on the second functional platform are obtained, the cluster center corresponding to the first user is found based on the first feature representation, a mapping relationship function corresponding to the first user is obtained according to the cluster center corresponding to the first user, the out-of-domain feature representation of the first user is input into the mapping relationship function, and the in-domain feature representation of the first user is obtained after mapping.
- the interaction characteristics of the first user on the first functional platform can be obtained, thereby performing personalized content recommendation on the first user on the first functional platform based on the in-domain feature representation and the first feature representation of the first user, solving the cold user problem and the data sparsity problem, making the recommended content more in line with the real interests of the first user, improving the recommendation effect, and avoiding the waste of resources supporting the content push function.
- the method provided in this embodiment replaces the parameters of the parameter-containing mapping function by replacing the parameters in the parameter-containing mapping function with the cluster center corresponding to the first user, finds a personalized mapping relationship function that meets the mapping requirements of the first user, and uses the mapping relationship function to map the out-of-domain feature representation of the first user to obtain the in-domain feature representation of the first user.
- the in-domain feature representation of the first user can be obtained, thereby realizing the feature migration of the first user.
- the personalized mapping function can perform feature mapping for each different first user, and also improves the accuracy and efficiency of feature mapping, so that the target content pushed to the first user is more in line with the needs of the first user, and can further avoid the waste of resources supporting the content push function.
- the method provided in this embodiment obtains the attribute data of all second users in the first functional platform and performs feature extraction to obtain a second feature representation that can represent each second user, performs cluster analysis on the second users based on the second feature representation, generates multiple clusters, and can quickly classify the second users in the first functional platform.
- Each cluster has its corresponding cluster center. Based on the similarity between the first feature representation of the first user and the cluster center, the cluster to which the first user belongs and the corresponding cluster center can be obtained, thereby obtaining a cluster analysis result with a high accuracy rate. This makes the target content pushed to the first user more in line with the needs of the first user, and can further avoid wasting resources supporting the content push function.
- steps 510-550 can be a process of obtaining a mapping function containing parameters.
- the mapping function containing parameters includes a specified parameter position in a state to be filled;
- Step 510 obtaining first historical interaction data of the sample user in the first functional platform, extracting features based on the first historical interaction data, and obtaining a sample in-domain feature representation of the sample user, where the sample user corresponds to a sample cluster center.
- the sample user corresponds to the sample cluster center.
- the first function platform includes at least one sample user, and there is interaction between the sample user and the platform elements in the first function platform, that is, there is historical interaction data of the sample user in the first function platform.
- a feature extraction network is used to extract features from the first historical interaction data of the sample user in the first functional platform to obtain a sample in-domain feature representation of the sample user.
- Step 520 obtaining second historical interaction data of the sample user in the second functional platform, extracting features based on the second historical interaction data, and obtaining a sample out-of-domain feature representation of the sample user.
- The feature extraction method here is the same as in step 510 above.
- the sample users have historical interaction data in both the first functional platform and the second functional platform.
- the method of extracting features from sample users to obtain in-domain feature representations and out-of-domain feature representations can be arbitrary, including but not limited to the above-mentioned method of using a feature extraction network; when using a feature extraction network to extract features from historical interaction data of sample users, the feature extraction network used can be arbitrary, and this embodiment does not limit this.
- Step 530 obtain a candidate mapping function, input the sample user's sample out-of-domain feature representation into the candidate mapping function, and obtain the sample in-domain mapping feature corresponding to the sample user through mapping.
- Each sample user corresponds to its own cluster
- the center point of the cluster is the cluster center
- each cluster contains its own cluster center.
- the distance between the sample feature representation of the sample user and the cluster center is calculated, and the cluster center with the smallest distance to the sample feature representation is used as the sample cluster center corresponding to the sample user.
- Obtain a candidate mapping function, which is a preset function capable of mapping the out-of-domain features of the sample user into in-domain features. The sample out-of-domain feature representation of the sample user is input into the candidate mapping function; before training, the accuracy of the sample in-domain mapping features obtained after mapping is still low.
- Step 540 Obtain a reconstruction loss based on the sample in-domain feature representation and the sample in-domain mapping feature of the sample user.
- the reconstruction loss (L_reconstruction) is obtained based on the difference between the sample user's sample in-domain feature representation and the sample in-domain mapping features.
- the reconstruction loss is obtained using the mean square error (MSE), that is, by calculating the sum of the squared distances between the sample in-domain feature representation of the sample user and the sample in-domain mapping feature.
- the method of obtaining the reconstruction loss may be arbitrary, including but not limited to the above-mentioned mean square error loss method, which is not limited in this embodiment.
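- A minimal sketch of the MSE-style reconstruction loss described above (averaging over sample users is an assumed detail; the embodiment only requires a squared-distance loss):

```python
import numpy as np

def reconstruction_loss(in_domain_repr, mapped_repr):
    # Sum of squared distances between the sample in-domain feature
    # representations and the mapped features, averaged over sample users
    diff = in_domain_repr - mapped_repr
    return float(np.mean(np.sum(diff ** 2, axis=1)))

perfect = reconstruction_loss(np.array([[1.0, 2.0]]), np.array([[1.0, 2.0]]))
off = reconstruction_loss(np.array([[1.0, 2.0]]), np.array([[0.0, 2.0]]))
# perfect is 0.0 (exact reconstruction); off is 1.0 (one unit of squared error)
```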
- Step 550 Train the candidate mapping functions based on the reconstruction loss to obtain the mapping relationship function corresponding to the sample cluster center.
- the sample user's out-of-domain feature representation is input into the candidate mapping function, and the sample user's in-domain mapping feature is output.
- the candidate mapping function is trained based on the reconstruction loss between the in-domain mapping feature and the sample user's in-domain feature representation.
- the obtained candidate mapping function has a high accuracy when performing feature mapping on all sample users belonging to the target cluster.
- the candidate mapping functions obtained by training with sample users in different clusters, that is, the mapping relationship functions, have a corresponding relationship with the sample cluster centers of the sample users used in training.
- The mapping relationship function obtained by training on each cluster is recorded in a table, as shown in Table 1 below:
- Value represents different mapping relationship functions, and each mapping relationship function corresponds to a Key value, which is used to represent the index corresponding to the mapping relationship function.
- the candidate mapping function is trained using sample users in the first cluster, wherein the cluster center of the first cluster is the first sample cluster center, and the first mapping function obtained through the above training process is suitable for feature mapping of the first user corresponding to the first sample cluster center.
- the mapped in-domain mapping features are used to approximate the sample user's original in-domain feature representation, and training stops when the reconstruction loss satisfies either of the following conditions:
- (1) the reconstruction loss is lower than a preset threshold; (2) the reconstruction loss converges.
- the parameter-containing mapping function obtained through loss training includes the specified parameter position in a state to be filled.
- when different parameters are filled into the specified parameter position, the mapping characteristics and mapping effects of the obtained mapping function also differ; that is, after adjusting the parameters in the specified parameter position, a personalized mapping relationship function can be obtained, and different mapping relationship functions are suitable for different types of users.
- the method of training the parameter-containing mapping function based on the reconstruction loss can be arbitrary, and the condition for determining the end of training can be arbitrary, which is not limited in this embodiment.
- Step 560 Substitute the cluster center as a parameter into the designated parameter position to obtain a mapping relationship function corresponding to the cluster center.
- the cluster center is used as a search keyword to query the mapping relationship function.
- each mapping relationship function corresponds to a Key value for indexing.
- the current cluster center is used as the search keyword and matched with the target cluster center corresponding to the Key value in Table 1.
- the Value corresponding to the target cluster center is the mapping relationship function corresponding to the current cluster center.
- the parameter-containing mapping function obtained through the training in the above steps 510 to 550 can be subjected to parameter replacement operations to obtain different types of mapping relationship functions. These different types of mapping relationship functions constitute a mapping module.
- Each cluster has its corresponding cluster center.
- the cluster center of each cluster is taken as a parameter into the specified parameter position in the parameter-containing mapping function to obtain a personalized mapping relationship function.
- the mapping module includes the same number of mapping relationship functions as the cluster centers.
- the mapping relationship function corresponding to the cluster center can perform feature mapping on the out-of-domain feature representations of all users in the cluster to which the cluster center belongs, and obtain the in-domain mapping features of these users. That is, when obtaining the in-domain features of the user on the first functional platform, the cluster center of the cluster can be obtained based on the cluster in which the user is located, and the cluster center can be used as an index to find the mapping relationship function corresponding to the cluster center in the mapping module.
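- A minimal sketch of the mapping module as a Key-Value lookup, where using a tuple of the cluster center coordinates as the Key, and the toy functions themselves, are hypothetical illustrations:

```python
import numpy as np

# Mapping module: one mapping relationship function per cluster center,
# indexed by a key derived from that center (hypothetical key scheme)
mapping_module = {
    (0.0, 0.0): lambda x: x * 2.0,   # function for the first cluster
    (5.0, 5.0): lambda x: x + 1.0,   # function for the second cluster
}

def lookup_mapping(cluster_center):
    # The cluster center acts as the search keyword (Key); the matched
    # Value is the mapping relationship function for that cluster
    return mapping_module[tuple(cluster_center)]

f = lookup_mapping(np.array([0.0, 0.0]))
mapped = f(np.array([1.0, 2.0]))  # out-of-domain -> in-domain mapping features
```

The number of entries in the module equals the number of cluster centers, as stated above.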
- the method provided by the present application obtains a personalized mapping relationship function by performing cluster analysis to obtain the cluster centers, and substituting each cluster center as the parameter at the specified position in the parameter-containing mapping function.
- the obtained personalized mapping relationship function can realize the mapping process from the out-of-domain feature representation to the in-domain feature representation for different types of users, thereby improving the accuracy of the mapping result.
- the out-of-domain feature representation of the user is input into the mapping relationship function, and the in-domain feature representation of the user is obtained after mapping.
- the interaction characteristics of the user on the first functional platform are obtained, so as to make personalized content recommendations for the user on the first functional platform based on the user's in-domain feature representation and the first feature representation, thereby solving the cold user problem and the data sparsity problem, making the recommended content more in line with the user's real interests, improving the recommendation effect, and avoiding the waste of resources supporting the content push function.
- the method provided in this embodiment obtains multiple sample users who have historical interaction data on both the first function platform and the second function platform, performs feature analysis on the historical interaction data, obtains the in-domain feature representation of the sample users on the first function platform, and the out-of-domain feature representation of the sample users on the second function platform, presets a candidate mapping function with a mapping function, inputs the out-of-domain feature representation into the candidate mapping function, obtains mapped in-domain mapping features, trains the candidate mapping function based on the reconstruction loss between the in-domain mapping features and the in-domain feature representation, so that the in-domain mapping features are close to the real in-domain feature representation of the sample users, thereby obtaining an accurate mapping parameter-containing mapping function, which can migrate the interest features of the sample users in one domain to another domain, thereby improving the accuracy and effect of mapping.
- the method provided in this embodiment replaces the parameters at the specified position in the parameter-containing mapping function, brings the cluster center as a parameter into the specified parameter position, and obtains a personalized mapping relationship function with targeted mapping effect.
- the obtained in-domain mapping features are close to the user's real in-domain feature representation, and can accurately represent the interaction characteristics between the user and the platform elements in the first functional platform.
- the user's in-domain feature representation can also be obtained, which improves the accuracy of mapping, solves the cold user problem and data sparsity problem, and makes content recommendations based on the in-domain mapping features, so that the recommended content is more in line with the user's real interests, improves the recommendation effect, and avoids the waste of resources supporting the content push function.
- the first functional platform includes multiple users, and cluster analysis is performed on the first user and the second user on the platform so that each user can find the cluster to which he/she belongs and obtain the corresponding cluster center, as shown in Figure 6.
- Figure 6 is a flow chart of a cluster analysis method provided by an exemplary embodiment of the present application, and the method includes the following steps.
- Step 610 Acquire clustering information, where the clustering information is used to indicate the location information of the initial cluster center.
- the methods for obtaining the location information of the initial cluster center include but are not limited to the following methods: 1. Random initialization; 2. Specifying the location of the initial cluster center.
- clustering analysis is performed on all users in the first functional platform based on the location information of the initial cluster centers
- batch training is used to learn the cluster centers.
- Batch training refers to dividing the entire set of training data into several batches for training: each batch selects n_num (total number of samples) / n_batch (number of batches) samples from the data, until the entire data set has been trained.
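- A minimal sketch of the batch split described above (equal batch sizes are assumed, with any remainder dropped for simplicity):

```python
def make_batches(data, n_batch):
    # Split the full training set into n_batch batches; each batch takes
    # n_num (total number of samples) / n_batch samples
    n_num = len(data)
    size = n_num // n_batch
    return [data[i * size:(i + 1) * size] for i in range(n_batch)]

batches = make_batches(list(range(10)), 5)
# 5 batches of 2 samples each, covering the whole data set
```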
- the number of cluster centers is K, and K is a positive integer.
- the number of cluster centers corresponds to the number of cluster clusters, and the number of cluster centers can be any specified value, which is not limited in this embodiment.
- Step 620 obtaining the similarity between the second feature representation and the initial cluster center, and determining the first cluster distribution result based on the similarity between the second feature representation and the initial cluster center, wherein the first cluster distribution result includes the feature distribution corresponding to each initial cluster center.
- the first cluster distribution result includes the characteristic distribution corresponding to each initial cluster center.
- In step 610 above, the initial cluster centers are obtained.
- Cluster analysis is then performed on the second users in the platform based on the initial cluster centers to find the cluster to which each user belongs.
- the similarity between the second feature representation of the second user and the initial cluster center is obtained using Student's T distribution; based on the similarity, the probability that the second user belongs to each cluster is obtained, and the cluster with the highest probability, together with its initial cluster center, is used as the initial classification result of the second user. These initial classification results together constitute the first cluster distribution result.
- Student's T distribution (t-distribution) is used to estimate the mean of a normally distributed population whose variance is unknown. If the population variance is known (for example, when the sample size is large enough), the normal distribution should be used instead to estimate the population mean.
- q_ij refers to the probability that the second user belongs to a certain cluster, and is calculated as shown in the following formula 1:
- q_ij = (1 + ||h_i − μ_j||²/α)^(−(α+1)/2) / Σ_j′ (1 + ||h_i − μ_j′||²/α)^(−(α+1)/2) (Formula 1)
- h_i refers to the second feature representation of the second user, μ_j refers to the initial cluster center, α represents the degree of freedom of Student's T distribution, and μ_j′ refers to any initial cluster center.
- For example, the second feature representation corresponding to the first second user is denoted h_1, and the similarity between h_1 and each initial cluster center is calculated to obtain a similarity array array1[j].
- All similarity values in the similarity array array1[j] are summed to obtain the similarity sum sum, and each similarity value in array1[j] is divided by sum to obtain the probability q_1j that this second user belongs to each cluster; the cluster with the largest probability is taken as the cluster corresponding to this second user, and the initial cluster center of that cluster is the initial cluster center corresponding to this second user.
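- A minimal sketch of the Student's t-distribution soft assignment q_ij described above, with α = 1 assumed as the default degree of freedom:

```python
import numpy as np

def soft_assignment(h, centers, alpha=1.0):
    # Squared distances ||h_i - mu_j||^2, shape (n_users, n_clusters)
    d2 = ((h[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    # Student's t kernel with alpha degrees of freedom
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    # Normalize over all cluster centers j' so each user's row sums to 1
    return q / q.sum(axis=1, keepdims=True)

q = soft_assignment(np.array([[0.0, 0.0]]), np.array([[0.0, 0.0], [3.0, 4.0]]))
# the user sits on the first center, so q[0, 0] is close to 1
```

The per-row normalization plays the role of dividing each similarity value by the similarity sum in the example above.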
- the method for obtaining the similarity between the second feature representation of the second user and the initial cluster center can be arbitrary, including but not limited to the above-mentioned method based on Student's T distribution; the method for obtaining the probability of the cluster to which each user belongs can also be arbitrary, and this embodiment does not limit this.
- Step 630 Perform discrete analysis on the first cluster distribution result to obtain a second cluster distribution result, and determine multiple clusters based on the second cluster distribution result, wherein the second cluster distribution result includes a cluster center corresponding to each second feature representation.
- the second cluster distribution result includes the cluster center corresponding to each second feature representation.
- the confidence of the first cluster distribution result obtained in step 620 is low, and the probability that each second user belongs to the corresponding cluster is low.
- the first cluster distribution result is made closer to the target cluster distribution result.
- the discrete analysis process of the first cluster distribution result includes the following steps:
- p_ij refers to the updated probability that the second user belongs to a certain cluster, that is, the target cluster distribution result, and is calculated as shown in the following formula 2:
- p_ij = (q_ij²/f_j) / Σ_j′ (q_ij′²/f_j′) (Formula 2)
- f_j = Σ_i q_ij represents the sum, over all second users, of the probability of belonging to the j-th cluster center.
- KL divergence is used to make the first cluster distribution result closer to the target cluster distribution result, as shown in the following formula 3:
- L_clustering = KL(P‖Q) = Σ_i Σ_j p_ij log(p_ij/q_ij) (Formula 3)
- P represents the target cluster distribution result, Q represents the first cluster distribution result, and L_clustering refers to the KL divergence.
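- A minimal sketch of the target cluster distribution p_ij and the KL divergence loss L_clustering described above, operating on a soft-assignment matrix q of shape (n_users, n_clusters):

```python
import numpy as np

def target_distribution(q):
    # f_j = sum_i q_ij: soft frequency of each cluster
    f = q.sum(axis=0)
    # p_ij is proportional to q_ij^2 / f_j, normalized over clusters j';
    # squaring sharpens the assignment toward high-confidence clusters
    p = (q ** 2) / f
    return p / p.sum(axis=1, keepdims=True)

def clustering_loss(q):
    # L_clustering = KL(P || Q) = sum_ij p_ij * log(p_ij / q_ij),
    # pulling the first cluster distribution Q toward the target P
    p = target_distribution(q)
    return float(np.sum(p * np.log(p / q)))

q = np.array([[0.9, 0.1], [0.2, 0.8]])
p = target_distribution(q)
loss = clustering_loss(q)
```

Minimizing this loss compacts each cluster around its center, matching the higher-confidence second cluster distribution result described below.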
- the target cluster distribution result is the second cluster distribution result. After the second cluster distribution result is obtained based on the above steps, multiple clusters are obtained based on the second cluster distribution result, and each cluster corresponds to its own cluster center.
- FIG. 7 is a schematic diagram of obtaining a second clustering distribution result after performing discrete analysis processing on the first clustering distribution result.
- the first cluster distribution result 701 includes clusters formed based on multiple initial cluster centers. After a dispersion analysis is performed on the first cluster distribution result 701 based on the KL divergence 702, a second cluster distribution result 703 with a higher confidence level is obtained.
- the first cluster distribution result 701 includes an initial cluster center 704 and a second feature representation 705 belonging to the same cluster.
- the distance between the initial cluster center 704 and the second feature representation 705 is large, and the formed clusters are relatively dispersed.
- the second cluster distribution result 703 includes a cluster center 706 and an updated second feature representation 707 belonging to the same cluster.
- the distance between the cluster center 706 and the updated second feature representation 707 is relatively close, and the formed clusters are relatively compact.
- the method provided by the present application performs cluster analysis on all users on the first functional platform to obtain the cluster to which each user belongs and the cluster center corresponding to each user, and obtains a personalized mapping relationship function based on the cluster center.
- the personalized mapping relationship function can realize the mapping process from the out-of-domain feature representation to the in-domain feature representation for different users, thereby improving the accuracy of the mapping result.
- the first feature representation of the first user is obtained, along with the out-of-domain feature representation obtained by feature extraction on the first user's historical interaction data on the second functional platform, and the cluster center corresponding to the first user is found based on the first feature representation.
- the mapping relationship function corresponding to the first user is obtained according to that cluster center; the out-of-domain feature representation of the first user is input into the mapping relationship function, and the in-domain feature representation of the first user is obtained after mapping.
- based on the in-domain feature representation, personalized content recommendation is performed on the first functional platform for the first user. This solves the cold user problem and the data sparsity problem, makes the recommended content more in line with the real interests of the first user, improves the recommendation effect, and avoids wasting the resources that support the content push function.
- the method provided in this embodiment obtains the position information of the initial cluster centers by random initialization and performs cluster analysis on all users in the first functional platform based on those centers to obtain a first cluster distribution result. A discreteness analysis is then performed on the first cluster distribution result to obtain a second cluster distribution result with higher confidence, from which the cluster to which each user belongs and the corresponding cluster center are found. This classifies all users accurately, and the personalized mapping function found from the classification result makes the mapping result more accurate.
- the method provided in this embodiment performs a dispersion analysis on each cluster in the first cluster distribution result.
- the discrete value corresponding to each initial cluster center is obtained, and the initial cluster centers are updated based on those values to obtain a second cluster distribution result with higher confidence. The feature extraction network is also updated to obtain more accurate second feature representations and the relationship between each second feature representation and the cluster to which it belongs.
- Figure 8 is a flow chart of a method for obtaining an out-of-domain feature representation of a first user. Specifically, obtaining the second historical interaction data of the first user in the second functional platform, extracting features based on the second historical interaction data, and obtaining the out-of-domain feature representation of the first user includes the following steps.
- Step 810: Obtain the second historical interaction data of the first user in the second functional platform, and obtain a target heterogeneous graph based on the second historical interaction data.
- the target heterogeneous graph includes multiple meta-paths, and the target heterogeneous graph is used to represent the historical interaction relationship between the first user and the platform elements in the second functional platform.
- Heterogeneous graphs are also called heterogeneous networks.
- in heterogeneous graphs, the types of nodes and edges are not single but diverse.
- a meta-path can be understood as a path connecting nodes of different types. Different meta-paths have different path types, and a path type is usually represented by a sequence of node types.
- FIG9 is a schematic diagram of a heterogeneous graph.
- in the heterogeneous graph 900, there are a target domain 910, a source domain 920, and platform users 930.
- the target domain 910 refers to the first functional platform
- the source domain 920 refers to the second functional platform
- the platform user 930 includes a first user and a second user
- the first user is represented as a first user node 931 in the heterogeneous graph 900 .
- there are first platform elements in the target domain 910, which are represented as first element nodes 911 in the heterogeneous graph 900.
- there are second platform elements in the source domain 920, which are represented as second element nodes 921 in the heterogeneous graph 900.
- the first element node 911, the second element node 921 and the first user node 931 are of different types.
- the first user node 931 and the second element node 921 are connected by a straight line, indicating that there is an interaction relationship between the first user node 931 and the second element node 921.
- the first user node 931, the second element node 921 and the straight line used for connecting therebetween together constitute a meta-path in the heterogeneous graph 900 belonging to the first user.
- the meta-paths generated based on the historical interaction data between the source domain 920 and the first user node 931 include but are not limited to the following:
- u1 refers to the first user node 931
- i2 is the second element node 921 in the source domain 920
- u2, u3, u4 and u5 represent the user nodes corresponding to the second user in the platform users 930.
- i2 is the first-order neighbor of u1
- u2 and u4 are the second-order neighbors of u1, and so on.
- if the number of nodes on the path from u1 to the target node (counting both u1 and the target node) is N, the target node is the (N-1)-order neighbor of u1.
- the target nodes that can be reached through the meta-path are all neighbor nodes of u1, where the target node is any specified node on the path.
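- The neighbor orders described above (i2 as a first-order neighbor of u1; u2 and u4 as second-order neighbors) can be computed with a breadth-first search over the graph's adjacency. The adjacency list below is a toy mirror of the example nodes, not the patent's actual data:

```python
from collections import deque

def neighbor_orders(adj, start):
    """Return the order (hop count) of every node reachable from start:
    first-order neighbors are 1 hop away, second-order are 2 hops, etc."""
    order = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in order:          # first visit gives the shortest hop count
                order[nxt] = order[node] + 1
                queue.append(nxt)
    return order

# Toy heterogeneous-graph adjacency: u1 interacted with item i2,
# which was also interacted with by users u2 and u4.
adj = {
    "u1": ["i2"],
    "i2": ["u1", "u2", "u4"],
    "u2": ["i2"],
    "u4": ["i2"],
}
orders = neighbor_orders(adj, "u1")
```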
- in addition to the meta-paths centered on the first user node 931, the heterogeneous graph 900 also includes meta-paths centered on the user nodes corresponding to the second users among the platform users 930.
- the meta-paths centered on the first user node 931 together constitute the target heterogeneous graph, that is, the target heterogeneous graph is a part of the heterogeneous graph 900.
- the number and type of meta-paths contained in a heterogeneous graph can be arbitrary; the number of nodes and edges contained in a heterogeneous graph can be arbitrary; the types of nodes can be arbitrary; the number of nodes and edges contained in a meta-path can be arbitrary; and the order and number of neighbor nodes of the central node in a meta-path can be arbitrary. This embodiment does not limit any of these.
- the number and type of user nodes included in the target heterogeneous graph can be arbitrary, the number and type of source domains can be arbitrary, the number and type of target domains can be arbitrary, the platform that can serve as the source domain includes but is not limited to the second functional platform, the number and type of first element nodes in the first functional platform can be arbitrary, and the number and type of second element nodes in the second functional platform can be arbitrary. This embodiment does not limit this.
- Step 820: Extract the path feature representation corresponding to each meta-path in the target heterogeneous graph.
- Graph Neural Network (GNN) is a general term for algorithms that use neural networks to learn from graph-structured data, extracting and discovering its features and patterns to meet the needs of graph learning tasks such as clustering, classification, prediction, segmentation, and generation.
- a graph neural network is used to analyze the target heterogeneous graph.
- Graph Attention Network (GAT) is a graph neural network that uses a method similar to the self-attention in the Transformer to calculate the attention of a node in the graph relative to each neighboring node, concatenates the node's own features with the attention features as the node's features, and performs tasks such as node classification on this basis.
- the second historical interaction data of the first user in the second functional platform is represented in the target heterogeneous graph in the form of meta-paths, and each meta-path has semantic information for representing the characteristics and interest preferences of the first user in interacting on the second functional platform.
- meta-paths there are different types of meta-paths in the target heterogeneous graph.
- heterogeneous graph convolution is used to capture the rich semantic information contained in each meta-path, and a node-level attention mechanism is added to distinguish the importance of each neighbor node to the central node (the first user node).
- extracting a path feature representation corresponding to a meta-path in a heterogeneous graph includes: obtaining node attention of each path node in the meta-path, where the path node is used to represent a platform element that has a historical interaction relationship with the first user; and aggregating the node attention to obtain a path feature representation of the meta-path.
- FIG10 is a schematic diagram of heterogeneous graph convolution based on meta-path.
- the first user node 1000 corresponding to the first user is taken as the center, and the first-order neighbor nodes 1010 and the second-order neighbor nodes 1020 of the first user node 1000 are obtained in sequence in the meta-path.
- the order of obtaining the node-level attention of each neighbor node is opposite to the order of obtaining the neighbor nodes.
- the node attention of the second-order neighbor node 1020 is first obtained, then the node attention of the first-order neighbor node 1010 is obtained, and finally the node attention of the first user node 1000 is obtained.
- the node attention of the second-order neighbor node 1020 is first aggregated to obtain the embedding of the first-order neighbor node 1010, and then the node attention of the first-order neighbor node 1010 is aggregated to obtain the embedding of the first user node 1000.
- each neighbor node (first-order neighbor node 1010 and second-order neighbor node 1020) has different degrees of importance to the central node (first user node 1000).
- a node representation is finally formed, as shown in the following formula 4:
- α_ui represents the correlation (attention weight) between node u and node i;
- h_u and h_i represent the representations of node u and node i, respectively;
- N_u represents the neighbor set of node u.
- Each meta-path has multiple nodes, so there are multiple node representations. All the node representations on the meta-path are aggregated to obtain the path feature representation of the meta-path, as shown in the following formula 5: h′_u = σ(Σ_{i∈N_u} α_ui · h_i);
- h′_u represents the path feature representation of the meta-path;
- σ(·) represents the activation function.
- the above examples only involve first-order neighbor nodes and second-order neighbor nodes.
- the node order of the meta-path can be arbitrary.
- attention analysis can be performed only on specified neighbor nodes, or attention analysis can be performed on all neighbor nodes.
- the method used when performing attention analysis and obtaining node attention can be arbitrary, including but not limited to the above-mentioned graph attention network method, which is not limited in this embodiment.
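- The node-level attention aggregation sketched by formulas 4 and 5 can be illustrated as follows. A plain dot-product score stands in for the graph attention network's scoring function, and tanh stands in for the activation σ(·); both are assumptions for illustration, as are all shapes and names:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # shift for numerical stability
    return e / e.sum()

def aggregate_neighbors(h_u, h_neighbors):
    """Node-level attention aggregation in the spirit of formulas 4 and 5:
    alpha_ui is a softmax over a score of (h_u, h_i), and the aggregated
    representation is sigma(sum_i alpha_ui * h_i)."""
    scores = h_neighbors @ h_u                     # one score per neighbor i in N_u
    alpha = softmax(scores)                        # attention weights alpha_ui
    h_agg = (alpha[:, None] * h_neighbors).sum(0)  # weighted sum of neighbor features
    return np.tanh(h_agg)                          # sigma(.) as a tanh activation

rng = np.random.default_rng(1)
h_u = rng.normal(size=4)               # central (first user) node representation
h_neighbors = rng.normal(size=(3, 4))  # three neighbor-node representations
h_prime = aggregate_neighbors(h_u, h_neighbors)
```

In the hierarchical scheme of FIG. 10, this aggregation would be applied first to the second-order neighbors to form the first-order embeddings, then again to form the central node's embedding.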
- Step 830 Aggregate the path feature representations corresponding to the meta-paths to obtain the out-of-domain feature representations of the first user.
- the path feature representation of the meta-path is aggregated to obtain the out-of-domain feature representation of the first user.
- the out-of-domain refers to the source domain, that is, the area other than the first functional platform.
- the out-of-domain can be any functional platform, provided the first user has historical interaction data in that source domain; this historical interaction data reflects the interaction characteristics and interest preferences of the first user in the source domain.
- the number of meta-paths contained in the target heterogeneous graph corresponding to the first user is at least one.
- each meta-path is subjected to meta-path-based heterogeneous graph convolution to perform feature extraction to obtain multiple path feature representations.
- by aggregating these path feature representations, the feature representation of the first user on the second functional platform is obtained, that is, the out-of-domain feature representation of the first user.
- pooling comes from the visual mechanism and is the process of abstracting information.
- the essence of pooling is sampling.
- the pooling layer selects a certain method to reduce the dimension of the input Feature Map to speed up the operation.
- average pooling can be understood as averaging the content of the input pooling layer.
- for example, suppose the path feature representations are arranged in a 10*10 grid, where each path feature representation corresponds to one cell.
- compressing the 10*10 grid into a 2*2 grid means dividing the 100 cells into 4 groups of 25 cells each; the average of the path feature representations within each group is then used to represent the corresponding cell of the 2*2 grid. This is the average pooling process.
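- The 10*10-to-2*2 average pooling example above can be sketched directly; the grid values below are arbitrary illustrative numbers standing in for path feature values:

```python
import numpy as np

def average_pool(grid, factor):
    """Average pooling: compress an (n, n) grid into (n//factor, n//factor)
    by averaging each factor*factor block, as in the 10*10 -> 2*2 example."""
    n = grid.shape[0]
    blocks = grid.reshape(n // factor, factor, n // factor, factor)
    return blocks.mean(axis=(1, 3))   # average within each block

grid = np.arange(100, dtype=float).reshape(10, 10)  # 100 cells of path feature values
pooled = average_pool(grid, 5)  # 2*2 result; each cell averages one group of 25 cells
```

Because every group has the same size, the mean of the pooled grid equals the mean of the original grid, so only resolution, not overall scale, is lost.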
- steps 810 to 830 take the first user as an example to obtain the out-of-domain feature representation of the first user.
- the out-of-domain feature representation or in-domain feature representation of the first user on other functional platforms can also be obtained according to the methods of the above steps 810 to 830.
- Any domain can be used as a target domain, and any domain can be used as a source domain.
- the method for obtaining in-domain feature representation and out-of-domain feature representation is the same, and is applicable to each user, including but not limited to the first user and the second user.
- the domain in which the current user has sparse historical interaction data, or no historical interaction data at all, is used as the target domain; the domain in which the user has historical interaction data, or a large amount of it, is used as the source domain.
- mapping the sample user's out-of-domain feature representation into in-domain mapping features, and learning to make the in-domain mapping features approximate the in-domain feature representation, involves the reconstruction loss (L_reconstruction).
- the learning process of content recommendation for sample users involves a personalized recommendation module.
- before recommending content to the first user based on the similarity matching result, the personalized recommendation module needs to be trained based on the loss between the recommended target content and the content the user is actually interested in, as shown in the following formula 6: L_rec = −Σ ln σ(h_u · h_vi − h_u · h_vj), summed over pairs of positive samples v_i and negative samples v_j;
- h_u is the representation of user u;
- h_vi is the representation of the positive sample v_i of user u;
- h_vj is the representation of the negative sample v_j of user u;
- σ represents the activation function;
- L_rec is the recommendation loss.
- the personalized recommendation mechanism is trained based on the recommendation loss, so that when similarity matching is performed between the in-domain feature representation of the first user and the elements in the candidate content recommendation pool, the similarity matching result can accurately represent the content that the first user is truly interested in, and that content is recommended to the first user.
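- The pairwise recommendation loss of formula 6 can be sketched as follows, assuming dot-product scoring between user and item representations and the sigmoid as the activation σ; the sample vectors are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def recommendation_loss(h_u, positives, negatives):
    """L_rec: for each (positive v_i, negative v_j) pair, penalize the model
    unless the user representation scores the positive sample higher:
    L_rec = -sum ln sigma(h_u . h_vi - h_u . h_vj)."""
    loss = 0.0
    for h_vi, h_vj in zip(positives, negatives):
        loss -= math.log(sigmoid(dot(h_u, h_vi) - dot(h_u, h_vj)))
    return loss

h_u = [0.5, -0.2, 0.8]           # user representation
positives = [[0.6, -0.1, 0.9]]   # representation of an interacted item v_i
negatives = [[-0.7, 0.3, -0.5]]  # representation of a non-interacted item v_j
loss = recommendation_loss(h_u, positives, negatives)
```

When the positive sample already scores higher than the negative one, the loss falls below ln 2 and shrinks toward zero as the margin grows.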
- training ends when either of the following conditions is met: (1) the reconstruction loss is lower than a preset threshold; (2) the reconstruction loss converges.
- the above three components adopt a joint training method; that is, training based on the recommendation loss, the clustering loss, and the reconstruction loss proceeds synchronously.
- the method of training the personalized recommendation module based on the recommendation loss can be arbitrary, and the condition for determining the end of training can be arbitrary, which is not limited in this embodiment.
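- Under the joint training described above, the three losses are combined into a single objective. The sketch below assumes a simple weighted sum with illustrative weights; the patent does not specify the combination weights:

```python
def joint_loss(l_rec, l_clustering, l_reconstruction,
               w_rec=1.0, w_clu=1.0, w_recon=1.0):
    """Joint training objective: the recommendation, clustering, and
    reconstruction losses are optimized together in one backward pass.
    The weights are illustrative hyperparameters, not from the patent."""
    return w_rec * l_rec + w_clu * l_clustering + w_recon * l_reconstruction

# Illustrative per-step loss values for the three components.
total = joint_loss(0.15, 0.40, 0.25)
```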
- the method provided by the present application expresses the historical interaction data of the first user on the second functional platform in the form of a heterogeneous graph, so that the characteristics of the first user's interactions on the second functional platform can be observed intuitively. Each piece of historical interaction data is represented by a meta-path in the heterogeneous graph; the interest preferences of the first user on the second functional platform are obtained from the path feature representations of the meta-paths, and the out-of-domain feature representation of the first user is obtained in turn. This provides a reliable basis for feature mapping, improves the accuracy of feature migration, allows the in-domain feature representation to be obtained from the out-of-domain feature representation, and improves the effect of personalized content recommendation for the first user.
- the method provided in this embodiment obtains a target heterogeneous graph through the second historical interaction data of the first user in the second functional platform.
- the target heterogeneous graph contains multiple meta-paths, which can intuitively and concisely represent the historical interaction relationship between the first user and the elements in the second functional platform.
- the path feature representation corresponding to the meta-path in the target heterogeneous graph is extracted and the path feature representation corresponding to the meta-path is aggregated, so that the out-of-domain feature representation of the first user is obtained with high accuracy.
- the method provided in this embodiment uses a graph attention network to perform attention analysis on the path nodes of each meta-path in the heterogeneous graph, obtaining an attention representation for each path node and, based on it, the importance of each path node to the central node. The node attentions of the path nodes in a meta-path are aggregated to obtain the path feature representation of the entire meta-path. This makes it possible to understand the characteristics of the first user's interactions on the second functional platform and the interest preferences the first user shows there, so that the out-of-domain feature representation obtained from the path feature representations of the meta-paths more accurately reflects the interest characteristics of the first user on the second functional platform.
- FIG. 11 is a structural block diagram of a content recommendation device provided by an exemplary embodiment of the present application. As shown in FIG. 11 , the device includes:
- An extraction module 1110 is used to obtain attribute data of a first user in a first function platform, extract features based on the attribute data, and obtain a first feature representation of the first user;
- the cluster analysis module 1120 is used to obtain a second feature representation of the second user in the first functional platform, perform cluster analysis on the first feature representation and the second feature representation, and obtain a cluster center corresponding to the first user;
- the extraction module 1110 is further used to obtain second historical interaction data of the first user in the second function platform, extract features based on the second historical interaction data, and obtain an out-of-domain feature representation of the first user;
- the acquisition module 1130 is used to acquire a mapping relationship function corresponding to the cluster center, and map the out-of-domain feature representation through the mapping relationship function to obtain the in-domain feature representation of the first user, wherein the mapping relationship function is used to indicate the mapping relationship between the feature representations of the second function platform and the first function platform;
- the recommendation module 1140 is used to determine the target feature representation of the first user according to the intra-domain feature representation and the first feature representation, and based on the target feature representation, determine the target content matching the first user from the candidate content recommendation pool, and push the target content to the first user.
- the acquisition module 1130 further includes:
- a parameter replacement unit 1131 is used to replace the parameters of the pre-generated parameter-containing mapping function based on the cluster center corresponding to the first user, so as to obtain a mapping relationship function corresponding to the cluster center;
- the mapping unit 1132 is used to map the out-of-domain feature representation through a mapping relationship function to obtain the in-domain feature representation of the first user.
- the parameter replacement unit 1131 is also used to obtain a parameter-containing mapping function, which includes a specified parameter position in a to-be-filled state; the cluster center is substituted as a parameter into the specified parameter position to obtain a mapping relationship function corresponding to the cluster center, wherein the cluster center is used as a retrieval keyword to query the mapping relationship function.
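- The parameter substitution described above, with the cluster center acting as a retrieval keyword, can be sketched as a lookup table of per-cluster parameters. The linear form W·x + b of the mapping relationship function and all names here are assumptions for illustration:

```python
import numpy as np

# Hypothetical parameter table: each cluster center id maps to the trained
# parameters (W, b) of that cluster's mapping relationship function.
PARAMS = {
    0: (np.eye(3) * 0.9, np.zeros(3)),
    1: (np.eye(3) * 1.1, np.ones(3) * 0.1),
}

def mapping_function(cluster_id):
    """Substitute the cluster center's parameters into the parameter-containing
    mapping function; the cluster id acts as the retrieval keyword.
    A linear map W x + b is assumed here for illustration."""
    W, b = PARAMS[cluster_id]
    return lambda x: W @ x + b

f = mapping_function(0)                      # mapping function for cluster 0
out_of_domain = np.array([1.0, 2.0, 3.0])    # out-of-domain feature representation
in_domain = f(out_of_domain)                 # mapped in-domain feature representation
```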
- the acquisition module 1130 is also used to: obtain first historical interaction data of the sample user in the first functional platform, extract features based on it, and obtain the sample in-domain feature representation of the sample user, the sample user corresponding to a sample cluster center; obtain second historical interaction data of the sample user in the second functional platform, extract features based on it, and obtain the sample out-of-domain feature representation of the sample user; obtain a candidate mapping function, input the sample out-of-domain feature representation into the candidate mapping function, and obtain the sample in-domain mapping feature after mapping; obtain a reconstruction loss based on the sample in-domain feature representation and the sample in-domain mapping feature; and train the candidate mapping function based on the reconstruction loss to obtain the mapping relationship function corresponding to the sample cluster center.
- the clustering analysis module 1120 is also used to obtain attribute data of a second user in the first functional platform, extract features based on the attribute data of the second user, and obtain a second feature representation of the second user; perform cluster analysis on the second user based on the second feature representation to obtain multiple clusters, wherein each cluster contains a cluster center; obtain the similarity between the first feature representation and the cluster center, and determine the cluster center corresponding to the first user based on the similarity.
- the clustering analysis module 1120 is also used to obtain clustering information, which is used to indicate the location information of the initial clustering cluster center; obtain the similarity between the second feature representation and the initial clustering cluster center; determine the first clustering cluster distribution result based on the similarity between the second feature representation and the initial clustering cluster center, and the first clustering cluster distribution result includes the feature distribution corresponding to each initial clustering cluster center; perform discrete analysis on the first clustering cluster distribution result to obtain the second clustering cluster distribution result, and determine multiple clusters based on the second clustering cluster distribution result, and the second clustering cluster distribution result includes the clustering cluster center corresponding to each second feature representation.
- the clustering analysis module 1120 is also used to perform a discreteness analysis on the feature distribution corresponding to the i-th initial cluster center in the first cluster distribution result to obtain a discrete value corresponding to the i-th initial cluster center; update the i-th initial cluster center based on the discrete value corresponding to the i-th initial cluster center to obtain a second cluster distribution result; and update the feature extraction network, wherein the feature extraction network is used to extract feature representations of users; and obtain multiple clusters based on the second cluster distribution result and the updated feature extraction network.
- the cluster analysis module 1120 is further configured to calculate the distance between the first feature representation and each cluster center, and use the cluster center with the smallest distance to the first feature representation as the cluster center corresponding to the first user.
- the cluster analysis module 1120 is also used to calculate the similarity between the first feature representation and each cluster center; based on the similarity, determine the probability that the first user belongs to each cluster; select the maximum probability from the determined probabilities, and determine the cluster center corresponding to the first user based on the maximum probability.
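- The two assignment strategies just described, minimum distance and maximum membership probability, can be sketched as follows; the softmax over negative distances is an illustrative way to turn similarities into probabilities, not necessarily the patent's exact formula:

```python
import numpy as np

def nearest_center(x, centers):
    """Assign x to the cluster center with the smallest Euclidean distance."""
    d = np.linalg.norm(centers - x, axis=1)
    return int(d.argmin())

def most_probable_center(x, centers):
    """Assign x via a softmax of negative distances: the probability that
    the user belongs to each cluster, then take the maximum probability."""
    d = np.linalg.norm(centers - x, axis=1)
    p = np.exp(-d) / np.exp(-d).sum()
    return int(p.argmax())

centers = np.array([[0.0, 0.0], [5.0, 5.0]])  # two cluster centers
x = np.array([0.5, 0.2])  # first feature representation of the first user
```

Because the softmax of negative distances is monotone in the distance, both strategies pick the same center here; they can differ only when the probability is derived from a different similarity measure.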
- the extraction module 1110 is also used to: obtain the second historical interaction data of the first user in the second functional platform and obtain a target heterogeneous graph based on it, wherein the target heterogeneous graph includes multiple meta-paths and is used to represent the historical interaction relationship between the first user and the platform elements in the second functional platform; extract the path feature representation corresponding to each meta-path in the target heterogeneous graph; and aggregate the path feature representations to obtain the out-of-domain feature representation of the first user.
- the extraction module 1110 is also used to obtain the node attention of each path node in the meta-path, where the path node is used to represent the platform element that has a historical interaction relationship with the first user; the node attention is aggregated to obtain the path feature representation of the meta-path.
- the device provided in this embodiment performs cluster analysis on all users in the first functional platform to obtain the cluster to which each user belongs and the cluster center corresponding to each user, and obtains a personalized mapping relationship function based on the cluster center.
- the personalized mapping relationship function can realize the mapping process from out-of-domain feature representation to in-domain feature representation for different users, thereby improving the accuracy of the mapping results.
- The first feature representation of the first user and the out-of-domain feature representation obtained by feature extraction on the first user's historical interaction data on the second functional platform are obtained, and the cluster center corresponding to the first user is found based on the first feature representation and the second feature representation. A mapping relationship function corresponding to the first user is obtained according to that cluster center; the out-of-domain feature representation of the first user is input into the mapping relationship function, and the in-domain feature representation of the first user is obtained after mapping. In this way, even when the first user has historical interaction data only on the second functional platform and none on the first functional platform, the interaction characteristics of the first user on the first functional platform can be obtained. Personalized content recommendation is then performed for the first user on the first functional platform based on the in-domain feature representation and the first feature representation, which solves the cold user problem and the data sparsity problem, makes the recommended content more in line with the real interests of the first user, improves the recommendation effect, and avoids wasting the resources that support the content push function.
- the content recommendation device provided in the above embodiment is illustrated only by the division of the above functional modules.
- the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
- FIG13 shows a block diagram of a computer device 1300 provided by an exemplary embodiment of the present application.
- the computer device 1300 may be a laptop computer or a desktop computer.
- the computer device 1300 may also be called a user device, a portable terminal, a laptop terminal, a desktop terminal, or other names.
- the computer device 1300 includes a processor 1301 and a memory 1302 .
- the processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc.
- the processor 1301 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
- the processor 1301 may also include a main processor and a coprocessor.
- the main processor is a processor for processing data in the awake state, also known as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state.
- the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen.
- the processor 1301 may also include an AI processor, which is used to process computing operations related to machine learning.
- the memory 1302 may include one or more computer-readable storage media, which may be non-transitory.
- the memory 1302 may also include a high-speed random access memory, and a non-volatile memory, such as one or more disk storage devices, flash memory storage devices.
- the non-transitory computer-readable storage medium in the memory 1302 is used to store at least one instruction, which is used to be executed by the processor 1301 to implement the content recommendation method provided in the method embodiment of the present application.
- the computer device 1300 also includes other components. Those skilled in the art will understand that the structure shown in FIG. 13 does not constitute a limitation on the computer device 1300, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
- An embodiment of the present application also provides a computer device, which includes a processor and a memory, wherein the memory stores at least one instruction, computer-readable instruction, code set or instruction set, and the at least one instruction, computer-readable instruction, code set or instruction set is loaded and executed by the processor to implement the content recommendation method provided by the above-mentioned method embodiments.
- An embodiment of the present application also provides a computer-readable storage medium, on which is stored at least one instruction, computer-readable instruction, code set or instruction set, and the at least one instruction, computer-readable instruction, code set or instruction set is loaded and executed by a processor to implement the content recommendation method provided by the above-mentioned method embodiments.
- the embodiments of the present application also provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium.
- a processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs any of the content recommendation methods described in the above embodiments.
- the computer-readable storage medium may include: a read-only memory (ROM), a random access memory (RAM), a solid-state drive (SSD), or an optical disk.
- the random access memory may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM).
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A content recommendation method, performed by a computer device, comprising: acquiring attribute data of a first user on a first functional platform, and extracting features from the attribute data to obtain a first feature representation of the first user (310); acquiring second feature representations of second users on the first functional platform, and performing cluster analysis on the first feature representation and the second feature representations to obtain a cluster center corresponding to the first user (320); acquiring second historical interaction data of the first user on a second functional platform, and extracting features from the second historical interaction data to obtain an out-of-domain feature representation of the first user (330); acquiring a mapping relation function corresponding to the cluster center, and mapping the out-of-domain feature representation through the mapping relation function to obtain an in-domain feature representation of the first user, wherein the mapping relation function indicates the mapping relationship of feature representations between the second functional platform and the first functional platform (340); and determining a target feature representation of the first user from the in-domain feature representation and the first feature representation, determining, based on the target feature representation, target content matching the first user from a candidate content recommendation pool, and pushing the target content to the first user (350).
Description
Related application:
This application claims priority to Chinese patent application No. 202211516062.9, titled "Content recommendation method, apparatus, device, medium and program product", filed with the China Patent Office on November 29, 2022, the entire contents of which are incorporated herein by reference.
The embodiments of this application relate to the field of computer technology, specifically to machine learning, and in particular to a content recommendation method, apparatus, device, medium and program product.
Users obtain the information they need from different platforms over the network, but the volume of content on a platform is huge, making it hard for a user to filter it and find the content they want.
In the related art, a personalized recommendation system, after obtaining authorization, collects a user's attributes and historical interaction data on the platform, captures the user's interest features, and uses a designed recommendation algorithm to generate a specific recommendation list, recommending personalized content to the user.
However, recommendation systems suffer from the data sparsity problem, where a user has little historical interaction data, and the cold-start problem, where a new user enters the system with no historical interaction data. In these cases the recommendation system cannot accurately analyze the user's interests and preferences and cannot push content accurately, so the resources supporting the content-push function are wasted and resource utilization is low.
Summary
According to various embodiments of this application, a content recommendation method, apparatus, device, medium and program product are provided.
In one aspect, a content recommendation method is provided, performed by a computer device, comprising:
acquiring attribute data of a first user on a first functional platform, and extracting features from the attribute data to obtain a first feature representation of the first user;
acquiring second feature representations of second users on the first functional platform, and performing cluster analysis on the first feature representation and the second feature representations to obtain a cluster center corresponding to the first user;
acquiring second historical interaction data of the first user on a second functional platform, and extracting features from the second historical interaction data to obtain an out-of-domain feature representation of the first user;
acquiring a mapping relation function corresponding to the cluster center, and mapping the out-of-domain feature representation through the mapping relation function to obtain an in-domain feature representation of the first user, wherein the mapping relation function indicates the mapping relationship of feature representations between the second functional platform and the first functional platform; and
determining a target feature representation of the first user from the in-domain feature representation and the first feature representation, determining, based on the target feature representation, target content matching the first user from a candidate content recommendation pool, and pushing the target content to the first user.
In another aspect, a content recommendation apparatus is provided, comprising:
an extraction module, configured to acquire attribute data of a first user on a first functional platform and extract features from the attribute data to obtain a first feature representation of the first user;
a cluster analysis module, configured to acquire second feature representations of second users on the first functional platform and perform cluster analysis on the first feature representation and the second feature representations to obtain a cluster center corresponding to the first user;
the extraction module being further configured to acquire second historical interaction data of the first user on a second functional platform and extract features from the second historical interaction data to obtain an out-of-domain feature representation of the first user;
an acquisition module, configured to acquire a mapping relation function corresponding to the cluster center and map the out-of-domain feature representation through the mapping relation function to obtain an in-domain feature representation of the first user, wherein the mapping relation function indicates the mapping relationship of feature representations between the second functional platform and the first functional platform; and
a recommendation module, configured to determine a target feature representation of the first user from the in-domain feature representation and the first feature representation, determine, based on the target feature representation, target content matching the first user from a candidate content recommendation pool, and push the target content to the first user.
In another aspect, a computer device is provided, comprising a processor and a memory storing computer-readable instructions which, when loaded and executed by the processor, implement the content recommendation method of the embodiments of this application.
In another aspect, a computer-readable storage medium is provided, storing computer-readable instructions which, when loaded and executed by a processor, implement the content recommendation method of the embodiments of this application.
In another aspect, a computer program product is provided, comprising computer-readable instructions which, when executed by a processor, implement the content recommendation method of the embodiments of this application.
Details of one or more embodiments of this application are set forth in the drawings and description below; other features and advantages of this application will become apparent from the specification, drawings and claims.
To describe the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of this application; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of personalized content recommendation for a specified user according to an exemplary embodiment of this application;
FIG. 2 is a schematic diagram of an implementation environment according to an exemplary embodiment of this application;
FIG. 3 is a flowchart of a content recommendation method according to an exemplary embodiment of this application;
FIG. 4 is a schematic diagram of personalized content recommendation for a first user based on in-domain features according to an exemplary embodiment of this application;
FIG. 5 is a flowchart of a training method for personalized mapping relation functions according to an exemplary embodiment of this application;
FIG. 6 is a flowchart of a cluster analysis method according to an exemplary embodiment of this application;
FIG. 7 is a schematic diagram of obtaining a second cluster distribution result by discreteness analysis of a first cluster distribution result according to an exemplary embodiment of this application;
FIG. 8 is a flowchart of a method for obtaining the out-of-domain feature representation of a first user according to an exemplary embodiment of this application;
FIG. 9 is a schematic diagram of a heterogeneous graph according to an exemplary embodiment of this application;
FIG. 10 is a schematic diagram of meta-path-based heterogeneous graph convolution according to an exemplary embodiment of this application;
FIG. 11 is a structural block diagram of a content recommendation apparatus according to an exemplary embodiment of this application;
FIG. 12 is a structural block diagram of a content recommendation apparatus according to another exemplary embodiment of this application;
FIG. 13 is a structural block diagram of a computer device according to an exemplary embodiment of this application.
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of this application; all other embodiments obtained by those of ordinary skill in the art without creative effort based on these embodiments fall within the scope of protection of this application.
Artificial Intelligence (AI) is the theory, method, technology and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, AI is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce intelligent machines that react in ways similar to human intelligence; it studies the design principles and implementation methods of machines endowed with perception, reasoning and decision-making.
AI technology is a comprehensive discipline covering a wide range of fields, with both hardware-level and software-level techniques. Basic AI technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, large-scale feature representation extraction, operation/interaction systems and mechatronics; AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
A personalized recommendation system is a product of the development of the Internet and e-commerce: an advanced business-intelligence platform built on large-scale data mining that provides users with personalized information services and decision support. It collects users' attributes and historical behavior data, uses a designed recommendation algorithm to build a user interest model, and generates a specific recommendation list for each user, pushing it to them to achieve personalized recommendation.
However, recommendation systems face two long-standing problems: data sparsity and cold start. Data sparsity means there are few user-item interaction records, making it hard to capture user interests or item characteristics well; cold start means a new user or new item has just entered the system and has no interaction records. Traditional recommendation algorithms rely on user-item interaction data, so it is hard to make suitable recommendations in either case.
Transfer Learning (TL) is a machine-learning term referring to the influence of one kind of learning on another, or of acquired experience on the completion of other activities; transfer exists widely in the learning of knowledge, skills and social norms. Transfer learning uses the rich knowledge and information of a source domain to improve performance in a target domain and reduce the number of samples the target domain requires, and is widely used in computer vision and natural language processing. For example, knowledge (or a model) for recognizing cars can be used to improve the ability to recognize trucks.
Here, the source domain (SD) is an existing knowledge domain, different from that of the target samples, usually with rich supervision information and labeled data; the target domain (TD) is the domain to be learned, where the target samples reside, usually with only a small amount of labeled data, or none. The source domain is the domain from which knowledge is transferred in transfer learning; the target domain is the domain to which the knowledge is transferred.
Inspired by transfer learning, the cold-start and data-sparsity problems can be addressed by acquiring a user's interaction information in another domain (the source domain) and analyzing it to capture the user's preferences in certain respects, enriching the target-domain data; or by providing additional information the recommendation system can use when a new user starts. This alleviates data sparsity and cold start and enables cross-domain recommendation of personalized content.
Cross-domain recommendation aims to combine data from multiple domains, introducing information from other (source) domains as assistance so that better recommendations can be made in the target domain or even across multiple domains. Some overlapping information is generally required between domains, e.g. common users across domains or identical items in different domains; such overlap is generally what makes the transfer of information between domains possible.
For the user cold-start problem, a mainstream branch of cross-domain recommendation algorithms builds a suitable mapping function to transfer user interests, i.e. a mapping from source-domain interests to target-domain interests. The assumption is that a user's interests in different domains have a certain mapping relationship, so the target-domain interest can be obtained by mapping the user's source-domain interest; even a user with no behavior in the target domain can obtain target-domain interests through the mapping function, enabling suitable recommendations and alleviating user cold start.
However, in the related art, mapping-based cross-domain recommendation algorithms share a single mapping function across all users. Users differ individually, and their source-to-target interest mappings differ greatly; a single shared mapping function cannot model this complex mapping relationship well, so the mapping results have low accuracy and personalized recommendation based on them performs poorly.
In the embodiments of this application, a personalized mapping function is designed: a cold user's out-of-domain features in the source domain are mapped into the target domain through the personalized mapping function to obtain the cold user's in-domain features, based on which personalized content is recommended to the cold user. A cold user is a user whose relevant information is missing, typically a new user.
Illustratively, as shown in FIG. 1, attribute data of sample users 101 on the first functional platform is first acquired and feature extraction is performed on it to obtain the sample users' second feature representations 102. Cluster analysis of the second feature representations 102 yields a cluster distribution result 103 containing multiple clusters, each with its own cluster center. Candidate mapping functions are trained based on the cluster centers to obtain a mapping module 104 containing multiple mapping relation functions, each corresponding to one cluster; given a cluster's center, the corresponding mapping relation function can be indexed.
Attribute data of a specified user 111 on the first functional platform is acquired and feature extraction is performed on it to obtain the specified user's first feature representation 112. The specified user 111 may be a cold user, i.e. a user with no historical interaction data on the first functional platform, or a returning user, i.e. one with no historical interaction data on the first functional platform in past periods but with interaction data in the recent period.
Second historical interaction data of the specified user 111 on the second functional platform is acquired and feature extraction is performed on it to obtain the specified user's out-of-domain feature representation 113.
The similarity between the first feature representation 112 and the cluster centers in the cluster distribution result 103 is computed, the cluster center corresponding to the specified user 111 is obtained from the similarity, and, based on that cluster center, the target mapping relation function 114 suited to the specified user 111 is retrieved from the mapping module 104.
The out-of-domain feature representation 113 is input into the target mapping relation function 114 and mapped into the specified user's in-domain features 115, which are concatenated with the first feature representation 112 to form the specified user's target feature representation 116. Based on the target feature representation 116, personalized content 117 the specified user 111 may be interested in is screened out and recommended to the specified user 111.
It should be noted that the above user attribute data and historical interaction data are either data actively uploaded by the user or data obtained after the user's separate authorization.
All information involved in this application (including but not limited to user attribute information and historical interaction information between users and the first and second functional platforms) and data (including but not limited to data for analysis, stored data and displayed data) are separately authorized by the user or fully authorized by all parties, and the collection, use and processing of such data must comply with the relevant laws, regulations and standards of the relevant countries and regions. For example, the attribute data involved in this application is obtained with full authorization.
Next, the implementation environment of the embodiments is described. Referring to FIG. 2, the environment involves a terminal 210 and a server 220, connected via a communication network 230.
In some embodiments, the terminal 210 sends the server 220 at least one of a user's first or second feature representation together with the out-of-domain feature representation. The terminal 210 is installed with an application having a feature-mapping function (e.g. mapping an out-of-domain feature representation into an in-domain one), such as a search engine, travel application, life-assistance application, instant-messaging application, video program, game program, news application or content-recommendation application, which this embodiment does not limit.
After acquiring the user's first feature representation, second feature representation and out-of-domain feature representation, the server 220 analyzes them to obtain the user's in-domain feature representation and, based on it, screens out personalized content the user may be interested in, applying it in downstream applications such as user aggregation based on in-domain features or personalized content recommendation. Alternatively, the server 220 returns the in-domain feature representation to the terminal 210, which finally screens out personalized content from it and recommends content to the user; the personalized content includes in-domain information-stream content the user may be interested in, such as feed articles, videos and music.
The terminal may be a mobile phone, tablet computer, desktop computer, portable laptop, smart TV, in-vehicle terminal, smart-home device or other form of terminal device, which this embodiment does not limit.
The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud-computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain-name services, security services, Content Delivery Network (CDN), and big-data and AI platforms.
Cloud technology is a hosting technology that unifies hardware, software, network and other resources within a wide-area or local-area network to realize the computation, storage, processing and sharing of data. It is the general term for the network, information, integration, management-platform and application technologies applied under the cloud-computing business model, and can form resource pools used on demand with flexibility and convenience. Cloud computing will become an important support: the back-end services of technical network systems, such as video websites, image websites and portal websites, require massive computing and storage resources. With the development of the Internet industry, every item may come to carry its own identifier that must be transmitted to a back-end system for logical processing; data of different levels will be processed separately, and all kinds of industry data will need strong back-end support, which can only be realized through cloud computing.
In some embodiments, the server may also be implemented as a node in a blockchain system.
With the above term introductions and application scenarios in mind, the content recommendation method provided by this application is described. The method may be performed by a server, by a terminal, or jointly by both; in the embodiments of this application, execution by a server is taken as an example. As shown in FIG. 3, the method comprises the following steps.
Step 310: acquire attribute data of a first user on a first functional platform, and extract features from the attribute data to obtain a first feature representation of the first user.
The first user is a specific user, the one to whom content is to be pushed, and may be called the specified user or target user. Attribute data is data describing user characteristics; the first user's attribute data on the first functional platform is attribute data the first user generated or stored on that platform. The first feature representation is the feature representation of the first user.
The first functional platform contains platform elements of different kinds, with which the first user can interact.
Optionally, the type of the first functional platform includes but is not limited to a game platform, social platform or shopping platform.
Optionally, the kinds of platform elements on the first functional platform include but are not limited to: video elements, such as films, TV dramas and animation videos; image elements, i.e. images carrying feed content; music elements, such as songs and accompaniments; and text elements, such as journal articles and e-books.
Optionally, the services the first functional platform can provide include but are not limited to: live-stream push, video-channel content push, subscription-account article push, or community push.
Optionally, the first user has no effective interaction with the platform elements of the first functional platform, i.e. no effective historical interaction data there, including but not limited to the following cases:
(1) the first user is a newly registered (cold) user of the first functional platform, with no historical interaction data;
(2) the first user is a returning user of the first functional platform who has not logged in for longer than a preset duration, so the historical interaction data has been cleared;
(3) the first user's login frequency on the first functional platform is below a frequency threshold, so the amount of historical interaction data is below a threshold;
(4) the first user's registration duration on the first functional platform has not reached a duration threshold, so the historical interaction data within that duration has low validity.
Optionally, the ways in which the first user interacts with the first functional platform to produce historical interaction data include but are not limited to:
(1) browsing content pushed by the first functional platform: at least one of articles, videos or live streams;
(2) transaction-related behavior on the first functional platform: buying, selling or commenting on goods;
(3) actively uploading content to the first functional platform: at least one of articles, images or videos.
Optionally, the first user's attribute data on the first functional platform includes but is not limited to: the first user's age information, IP address (Internet Protocol Address), gender information, or device model.
Illustratively, a feature extraction network is used to extract features from the first user's attribute data on the first functional platform to obtain the first user's first feature representation.
Step 320: perform cluster analysis on the first feature representation and the second feature representations of second users on the first functional platform to obtain the cluster center corresponding to the first user.
The first functional platform contains the first user and at least one second user, each with their own attribute data. The second feature representation is the feature representation of a second user.
The computer device can extract a second user's second feature representation from that second user's attribute data on the first functional platform.
The second feature representations are of the same type as the first feature representation, or are extracted by the same feature extraction network, and characterize the users' attribute data on the first functional platform.
The computer device can perform cluster analysis on the second users based on their second feature representations to obtain multiple clusters. Each second user belongs to its own cluster; the central point of a cluster is its cluster center, and every cluster contains its own cluster center.
The computer device can obtain the similarity between the first feature representation and the cluster centers and determine the first user's cluster center based on the similarity.
In one embodiment, this comprises: computing the distance between the first feature representation and each cluster center, and taking the cluster center with the smallest distance from the first feature representation as the first user's cluster center. In this embodiment, the distance between the first feature representation and a cluster center measures their similarity, making it easy to find a more suitable cluster center, enabling accurate pushing and avoiding waste of the resources supporting the content-push function.
In another embodiment, this comprises: computing the similarity between the first feature representation and each cluster center; determining, based on the similarities, the probability that the first user belongs to each cluster; and selecting the largest of the determined probabilities, the cluster center of the cluster with the largest probability being taken as the first user's cluster center. This embodiment likewise enables more accurate content recommendation and avoids waste of the resources supporting the content-push function.
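The two variants described above (minimum distance, and cluster-membership probability) can be sketched as follows. This is a minimal illustration with NumPy; the function names, the use of Euclidean distance as the similarity measure, and the softmax over negative distances are all assumptions for illustration, not part of the patent.

```python
import numpy as np

def nearest_cluster_center(h1, centers):
    """First variant: index of the cluster center closest to the first feature h1."""
    dists = np.linalg.norm(centers - h1, axis=1)   # Euclidean distance to each center
    return int(np.argmin(dists))

def cluster_membership_probs(h1, centers):
    """Second variant: similarity-derived probability of belonging to each cluster."""
    sims = -np.linalg.norm(centers - h1, axis=1)   # higher similarity = smaller distance
    e = np.exp(sims - sims.max())                  # numerically stable softmax
    return e / e.sum()

centers = np.array([[0.0, 0.0], [10.0, 10.0]])     # two toy cluster centers
h1 = np.array([1.0, 1.0])                          # first feature representation
idx = nearest_cluster_center(h1, centers)
probs = cluster_membership_probs(h1, centers)      # largest probability picks the center
```

In both variants, the cluster whose center wins (index 0 here) supplies the cluster center used to look up the first user's mapping relation function.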
Step 330: acquire second historical interaction data of the first user on a second functional platform, and extract features from the second historical interaction data to obtain the first user's out-of-domain feature representation.
The second functional platform also contains platform elements of different kinds, with which the first user can interact.
The relationship between the first and second functional platforms includes but is not limited to: (1) different functional modules of the same software, the first user being an account logged into that software; (2) different functional modules of the same website, the first user being an account logged into that website, or the user corresponding to the terminal identifier currently browsing the website; (3) the first user's accounts being associated across the platforms, e.g. the first user logs into the first functional platform with a first account and into the second functional platform with a second account, the two accounts being bound to each other across the platforms.
Optionally, the first user has effective interaction behavior with the second functional platform's elements, i.e. effective historical interaction data there, including but not limited to the following cases:
(1) the first user's registration duration on the second functional platform exceeds a duration threshold;
(2) the first user's login frequency on the second functional platform exceeds a frequency threshold;
(3) the amount of the first user's historical interaction data on the second functional platform exceeds a threshold.
The first user's second historical interaction data on the second functional platform is acquired, and feature extraction is performed on it to obtain the first user's out-of-domain feature representation.
Here, "out-of-domain" refers to the range outside the first functional platform; the out-of-domain feature representation is derived from historical interaction data the first user generated on other functional platforms rather than on the first functional platform.
Notably, the first user may interact with the platform elements of any type of functional platform, i.e. may have historical interaction data on any type of functional platform; the feature representation obtained by feature extraction from the historical interaction data produced by the first user's interactions with platform elements outside the first functional platform is the out-of-domain feature representation, which this embodiment does not limit.
Step 340: acquire the mapping relation function corresponding to the cluster center, and map the out-of-domain feature representation through the mapping relation function to obtain the first user's in-domain feature representation.
The mapping relation function indicates the mapping relationship of feature representations between the second functional platform and the first functional platform; the mapping relation function corresponding to a cluster center is the one adapted to that cluster center. The computer device can input the out-of-domain feature representation into the mapping relation function and obtain the in-domain feature representation of the first user output by it.
In some embodiments, this comprises: performing parameter substitution on a pre-generated parameterized mapping function based on the first user's cluster center to obtain the mapping relation function corresponding to that cluster center; and mapping the out-of-domain feature representation through the mapping relation function to obtain the first user's in-domain feature representation.
In one embodiment, the parameter substitution comprises: acquiring the parameterized mapping function, which includes a specified parameter position in a to-be-filled state; and substituting the cluster center as a parameter into the specified parameter position to obtain the cluster center's mapping relation function, wherein the cluster center serves as a search key for querying the mapping relation function.
The computer device can acquire sample data in advance and train on it to determine a number of discrete cluster centers and the mapping relation function corresponding to each. Based on the structure of the parameterized mapping function, i.e. the specified to-be-filled parameter position, parameter values are extracted from each mapping relation function and associated with the corresponding cluster center, forming a preset mapping between cluster centers and parameter values.
The computer device can then determine, from the first user's cluster center and the preset mapping, the parameter value to substitute, substitute it into the parameterized mapping function to obtain the mapping relation function corresponding to the cluster center, and input the out-of-domain feature representation into the mapping relation function to output the first user's in-domain feature representation.
In other embodiments, the mapping relation function may be a neural network model, and the parameterized mapping function may be a neural network model with replaceable model parameters.
In one embodiment, the cluster center serves as a search key for querying the mapping relation function. Each mapping relation function corresponds to an index Key; the computer device matches the current cluster center, as the search key, against the target cluster centers corresponding to the Keys, and on a successful match, the Value corresponding to that target cluster center is the mapping relation function corresponding to the current cluster center.
Notably, the kind of parameterized mapping function and the manner of parameter substitution based on the cluster center may be arbitrary, which this embodiment does not limit.
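The Key/Value lookup and parameter substitution described above can be sketched as follows. This is a toy illustration, not the patent's implementation: the registry structure, the additive "mapping body", and the choice of using the center itself as the substituted parameter are all assumptions.

```python
import numpy as np

def make_registry(centers):
    """Key/Value table: the cluster center (Key) indexes the parameter value (Value)
    to be filled into the parameterized mapping function's open slot."""
    return {tuple(c): c for c in centers}   # toy choice: the parameter IS the center

def parameterized_mapping(out_domain_feat, param):
    """Shared parameterized mapping skeleton with one to-be-filled parameter slot.
    A trivial additive body stands in for the trained mapping here."""
    return out_domain_feat + param

def map_to_in_domain(out_domain_feat, user_center, registry):
    """Use the user's cluster center as search key, then apply the mapping."""
    param = registry[tuple(user_center)]
    return parameterized_mapping(out_domain_feat, param)

centers = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
reg = make_registry(centers)
mapped = map_to_in_domain(np.array([0.5, 0.5]), centers[0], reg)
```

The point of the sketch is the control flow: one shared function body, per-cluster parameters retrieved by the cluster center acting as the retrieval key.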
Step 350: determine the first user's target feature representation from the in-domain feature representation and the first feature representation, determine, based on the target feature representation, target content matching the first user from a candidate content recommendation pool, and push the target content to the first user.
The target feature representation, determined from the in-domain feature representation and the first feature representation, is the basis for matching content from the candidate content recommendation pool; the target content is the content pushed to the first user.
The computer device can combine the in-domain feature representation with the first feature representation to obtain the first user's target feature representation, match the first user against the candidate content recommendation pool based on it, obtain the target content, and push the target content from the pool to the first user.
The in-domain feature representation obtained in step 340 can represent interaction behavior that the first user, who has no historical interaction data on the first functional platform, might exhibit there; that is, the in-domain feature representation maps onto the first functional platform the characteristics of the first user's interactions with the elements of the second functional platform.
Optionally, the computer device combines the in-domain feature representation and the first feature representation by concatenation to obtain the first user's target feature representation, which is used to recommend personalized content to the first user.
Illustratively, as shown in FIG. 4, the first feature representation 402 and out-of-domain feature representation 403 of the first user 401 are acquired; a graph is constructed to express the first feature representation 402 and out-of-domain feature representation 403 in graph form, and a mapping function maps the out-of-domain feature representation 403 into the first user 401's in-domain feature representation 404. A candidate content recommendation pool 405 is acquired, similarity matching between the in-domain feature representation 404 and the contents of the pool 405 yields a similarity matching result 406, the contents ranked in the top M by similarity value in the result 406 are taken as target contents 407, and the target contents 407 are recommended to the first user 401.
Notably, the manner of combining the in-domain feature representation and the first feature representation may be arbitrary; the kinds and number of contents in the candidate content recommendation pool may be arbitrary; the manner of similarity matching between the in-domain feature representation and the pool's contents may be arbitrary; and the number and kinds of target contents selected from the similarity matching result may be arbitrary, which this embodiment does not limit.
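The concatenate-then-match flow of step 350 can be sketched as follows. Cosine similarity and the top-M ranking are one concrete choice among the arbitrary matching schemes the text allows; all names are illustrative.

```python
import numpy as np

def recommend_top_m(in_domain, first_feat, pool, m=2):
    """Concatenate the in-domain and first feature representations into the
    target feature representation, then rank candidate contents by cosine similarity."""
    target = np.concatenate([in_domain, first_feat])
    sims = pool @ target / (np.linalg.norm(pool, axis=1) * np.linalg.norm(target) + 1e-12)
    return np.argsort(-sims)[:m]            # indices of the top-M target contents

in_d = np.array([1.0, 0.0])                 # mapped in-domain feature representation
first = np.array([0.0, 1.0])                # first feature representation
pool = np.array([[1.0, 0.0, 0.0, 1.0],     # candidate 0: aligned with the target
                 [-1.0, 0.0, 0.0, -1.0],   # candidate 1: opposite direction
                 [1.0, 0.0, 0.0, 0.9]])    # candidate 2: close but not identical
top = recommend_top_m(in_d, first, pool, m=2)
```

Candidates 0 and 2 are returned, mirroring FIG. 4's selection of the top-M similarity matches as target contents 407.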
In summary, in the method provided by this application, cluster analysis of all users on the first functional platform yields each user's cluster and corresponding cluster center, and a personalized mapping relation function is obtained from the cluster center. The personalized mapping relation function realizes, for different users, the mapping from out-of-domain feature representation to in-domain feature representation, improving the accuracy of the mapping results. The first user's first feature representation is acquired, together with the out-of-domain feature representation extracted from the first user's historical interaction data on the second functional platform; the first user's cluster center is found from the first feature representation, the mapping relation function corresponding to the first user is obtained from that cluster center, and the first user's out-of-domain feature representation is input into the mapping relation function and mapped into the in-domain feature representation. Thus, even when the first user has historical interaction data only on the second functional platform and none on the first, the first user's interaction characteristics on the first functional platform can be obtained, and personalized content can be recommended to the first user on the first functional platform based on the in-domain feature representation and the first feature representation. This solves the cold-user and data-sparsity problems, makes the recommended content better match the first user's true interests, improves the recommendation effect, and avoids waste of the resources supporting the content-push function.
In the method of this embodiment, parameter substitution on the parameterized mapping function replaces its parameter with the cluster center corresponding to the first user, finding the personalized mapping relation function that meets the first user's mapping requirements; using it to map the first user's out-of-domain feature representation yields the in-domain feature representation, so the first user's in-domain features can be understood even without in-domain historical interaction data, realizing feature transfer for the first user. The personalized mapping function performs feature mapping for each different first user, improving the accuracy and efficiency of feature mapping and making the target content pushed to the first user better fit the first user's needs, further avoiding waste of the resources supporting the content-push function.
In the method of this embodiment, the attribute data of all second users on the first functional platform is acquired and feature extraction performed to obtain the second feature representation characterizing each second user; clustering the second users based on the second feature representations generates multiple clusters, quickly classifying the second users on the first functional platform. Each cluster has its corresponding cluster center, and from the similarity between the first user's first feature representation and the cluster centers, the first user's cluster and corresponding cluster center are obtained, giving a highly accurate cluster analysis result, making the target content pushed to the first user better fit the first user's needs and further avoiding waste of the resources supporting the content-push function.
When mapping the first user's out-of-domain feature representation into an in-domain one, the personalized mapping function adapted to that first user must be selected. FIG. 5 shows a flowchart of a training method for personalized mapping relation functions according to an exemplary embodiment of this application, comprising steps 510-560 below. Steps 510-550 may constitute the process of obtaining the parameterized mapping function, which includes a specified parameter position in a to-be-filled state.
Step 510: acquire first historical interaction data of a sample user on the first functional platform, and extract features from the first historical interaction data to obtain the sample user's sample in-domain feature representation; the sample user corresponds to a sample cluster center.
The first functional platform contains at least one sample user, and interaction behavior exists between the sample user and the first functional platform's elements, i.e. the sample user has historical interaction data on the first functional platform.
Illustratively, a feature extraction network extracts features from the sample user's first historical interaction data on the first functional platform to obtain the sample user's sample in-domain feature representation.
Step 520: acquire second historical interaction data of the sample user on the second functional platform, and extract features from the second historical interaction data to obtain the sample user's sample out-of-domain feature representation.
This proceeds as in step 510; sample users have historical interaction data on both the first and second functional platforms.
Notably, in some embodiments, the manner of extracting the sample user's in-domain and out-of-domain feature representations may be arbitrary, including but not limited to the feature extraction network above; when a feature extraction network is used on the sample user's historical interaction data, the network used may be arbitrary, which this embodiment does not limit.
Step 530: acquire a candidate mapping function, and input the sample user's sample out-of-domain feature representation into it to obtain, through mapping, the sample in-domain mapped feature corresponding to the sample user.
Feature analysis of the sample users' attribute data yields their sample feature representations; cluster analysis of the sample users based on the sample feature representations yields multiple clusters. Each sample user belongs to its own cluster; the central point of a cluster is its cluster center, and every cluster contains its own cluster center. The distance between a sample user's sample feature representation and the cluster centers is computed, and the cluster center with the smallest distance is taken as the sample user's sample cluster center.
The candidate mapping function is a preset function capable of mapping a sample user's out-of-domain features into in-domain features; when the sample out-of-domain feature representation is input into the candidate mapping function before training, the in-domain mapped feature obtained has low accuracy, differing from the sample user's own in-domain feature representation.
Notably, the kind of candidate mapping function may be arbitrary, which this embodiment does not limit.
Step 540: obtain a reconstruction loss based on the sample user's sample in-domain feature representation and the sample in-domain mapped feature.
The reconstruction loss (Lreconstruction) is obtained from the difference between the sample user's sample in-domain feature representation and the sample in-domain mapped feature.
Illustratively, the reconstruction loss uses mean square error (MSE), i.e. the sum of squared distances between the sample in-domain feature representation and the sample in-domain mapped feature.
Notably, in some embodiments, the manner of obtaining the reconstruction loss may be arbitrary, including but not limited to the MSE above, which this embodiment does not limit.
Step 550: train the candidate mapping function based on the reconstruction loss to obtain the mapping relation function corresponding to the sample cluster center.
Cluster analysis of the sample users yields the sample cluster center corresponding to each sample user. For the target cluster corresponding to one sample cluster center, the sample users in that target cluster are selected to train the candidate mapping function.
The sample users' sample out-of-domain feature representations are input into the candidate mapping function, the sample users' mapped in-domain features are output, and the candidate mapping function is trained on the reconstruction loss between the in-domain mapped features and the sample users' sample in-domain feature representations; the resulting candidate mapping function performs feature mapping with high accuracy for all sample users belonging to that target cluster.
The candidate mapping functions trained with sample users of different clusters, i.e. the mapping relation functions, correspond to the sample cluster centers of the sample users used in training.
The mapping relation function trained for each cluster is recorded in table form, as in Table 1 below (the original table is not reproduced in this text; its Key/Value structure, described below, is shown schematically):
Table 1
Key (sample cluster center) | Value (mapping relation function)
---|---
sample cluster center 1 | mapping relation function 1
... | ...
sample cluster center K | mapping relation function K
Here, Value denotes the different mapping relation functions, each corresponding to a Key value that serves as that mapping relation function's index.
The candidate mapping function is trained with the sample users of a first cluster, whose cluster center is a first sample cluster center; the first mapping function obtained through the above training process is suited to feature mapping for first users corresponding to the first sample cluster center. Based on the reconstruction loss, the mapped in-domain features are driven to approach the sample users' original in-domain feature representations, and training stops when the reconstruction loss satisfies either of the following conditions:
(1) the reconstruction loss is below a preset threshold; (2) the reconstruction loss converges.
The parameterized mapping function obtained through this loss-based training includes a specified parameter position in a to-be-filled state. When different contents are filled into the specified parameter position, the resulting parameterized mapping function has different mapping characteristics and effects; that is, by adjusting the parameter at the specified position, personalized parameterized mapping functions are obtained, different ones being suited to different types of users.
Notably, the manner of training the parameterized mapping function on the reconstruction loss and the condition for deciding to stop training may be arbitrary, which this embodiment does not limit.
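Steps 530-550 can be sketched as a small training loop: a candidate mapping (a toy linear map here, standing in for whatever function family is actually used) is fitted by gradient descent on the MSE reconstruction loss, stopping when the loss falls below a preset threshold (condition 1). All names, the linear form, and the learning rate are assumptions for illustration.

```python
import numpy as np

def mse_reconstruction_loss(mapped, true_in_domain):
    """Lreconstruction as MSE between mapped and true sample in-domain features."""
    return float(np.mean((mapped - true_in_domain) ** 2))

def train_mapping(W, x_out, h_in, lr=0.1, tol=1e-4, max_steps=1000):
    """Fit the toy linear candidate mapping h ~ x_out @ W by gradient descent,
    stopping when the reconstruction loss drops below the preset threshold."""
    losses = []
    for _ in range(max_steps):
        pred = x_out @ W                            # sample in-domain mapped features
        loss = mse_reconstruction_loss(pred, h_in)
        losses.append(loss)
        if loss < tol:                              # stop condition (1): below threshold
            break
        grad = 2 * x_out.T @ (pred - h_in) / len(x_out)
        W = W - lr * grad
    return W, losses

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 3))        # sample out-of-domain feature representations
W_true = rng.normal(size=(3, 2))    # hidden "true" mapping for this cluster
h = x @ W_true                      # sample in-domain feature representations
W_fit, losses = train_mapping(np.zeros((3, 2)), x, h)
```

One such fit would be run per cluster, the fitted functions forming the Key/Value table above.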
Step 560: substitute the cluster center as a parameter into the specified parameter position to obtain the mapping relation function corresponding to the cluster center.
Here, the cluster center serves as a search key for querying the mapping relation function.
In Table 1, each mapping relation function corresponds to an index Key; the current cluster center, as search key, is matched against the target cluster centers corresponding to the Key values in Table 1, and on a successful match, the Value corresponding to that target cluster center is the mapping relation function corresponding to the current cluster center.
The parameterized mapping function trained in steps 510-550 can undergo parameter-substitution operations to obtain mapping relation functions of different types; these different kinds of mapping relation functions together form the mapping module.
Each cluster has its corresponding cluster center; substituting each cluster's center as a parameter into the specified parameter position of the parameterized mapping function yields a personalized mapping relation function, so the mapping module contains as many mapping relation functions as there are cluster centers.
The mapping relation function corresponding to a cluster center can perform feature mapping on the out-of-domain feature representations of all users in that cluster center's cluster, obtaining those users' in-domain mapped features. That is, to obtain a user's in-domain features on the first functional platform, the cluster the user belongs to is determined, the cluster's center obtained, and that cluster center used as an index to find the corresponding mapping relation function in the mapping module.
In summary, in the method provided by this application, a personalized mapping relation function is obtained through the cluster center, which is used as the parameter at the specified position of the candidate mapping function; the resulting personalized mapping relation function realizes, for different types of users, the mapping from out-of-domain feature representation to in-domain feature representation, improving the accuracy of the mapping results. Inputting a user's out-of-domain feature representation into the mapping relation function yields, through mapping, the user's in-domain feature representation; thus, even when the user has historical interaction data only on the second functional platform and none on the first, the user's interaction characteristics on the first functional platform can be obtained, and personalized content can be recommended to the user on the first functional platform based on the in-domain feature representation and the first feature representation, solving the cold-user and data-sparsity problems, making the recommended content better match the user's true interests, improving the recommendation effect, and avoiding waste of the resources supporting the content-push function.
In the method of this embodiment, multiple sample users with historical interaction data on both the first and second functional platforms are acquired, and feature analysis of that data yields the sample users' in-domain feature representations on the first functional platform and out-of-domain feature representations on the second. A candidate mapping function with mapping capability is preset, the out-of-domain feature representations are input into it to obtain mapped in-domain features, and the candidate mapping function is trained on the reconstruction loss between the in-domain mapped features and the in-domain feature representations so that the former approach the sample users' true in-domain feature representations, obtaining a parameterized mapping function that maps accurately and can transfer a sample user's interest features from one domain to another, improving the accuracy and effect of the mapping.
In the method of this embodiment, the parameter at the specified position of the parameterized mapping function is replaced, the cluster center being substituted into the specified parameter position to obtain a personalized mapping relation function with targeted mapping effects. When mapping the out-of-domain feature representations of different types of users, the resulting in-domain mapped features approach the users' true in-domain feature representations and accurately represent the users' interaction characteristics with the first functional platform's elements, so a user's in-domain feature representation can be obtained even when the user has no historical interaction data on the first functional platform. This improves mapping accuracy, solves the cold-user and data-sparsity problems, and, when recommending content based on the in-domain mapped features, makes the recommended content better match the user's true interests, improving the recommendation effect and avoiding waste of the resources supporting the content-push function.
The first functional platform contains multiple users; cluster analysis of the first and second users on the platform lets each user find the cluster it belongs to and obtain its corresponding cluster center. FIG. 6 is a flowchart of a cluster analysis method according to an exemplary embodiment of this application, comprising the following steps.
Step 610: acquire clustering information indicating the position information of the initial cluster centers.
Optionally, the position information of the initial cluster centers is obtained in ways including but not limited to: 1. random initialization; 2. specifying the initial cluster centers' positions.
When performing cluster analysis on all users of the first functional platform based on the initial cluster centers' position information, batch training is used to learn the cluster centers.
Batch training divides the whole training dataset into several batches, each batch selecting n_num (total data) / n_batch (number of batches) items from the data, until the whole dataset has been trained.
Illustratively, random initialization yields the cluster centers μj, j = 1, 2, 3, ..., K, where the number of cluster centers is K, a positive integer.
Notably, the number of cluster centers corresponds to the number of clusters and may be any specified value, which this embodiment does not limit.
Step 620: obtain the similarity between the second feature representations and the initial cluster centers, and determine from it the first cluster distribution result, which includes the feature distribution corresponding to each initial cluster center.
After step 610 yields the initial cluster centers, cluster analysis is performed on the platform's second users based on them to find each user's cluster.
Illustratively, Student's t-distribution is used to obtain the similarity between a second user's second feature representation and the initial cluster centers, and from the similarity the probability that the second user belongs to a given cluster is derived; the cluster with the highest probability value and its initial cluster center form the second user's initial classification result, and these results together compose the first cluster distribution result.
In probability theory and statistics, Student's t-distribution (the t-distribution) is used to estimate, from a small sample, the mean of a normally distributed population with unknown variance; if the population variance is known (e.g. when the sample is large enough), the normal distribution should be used to estimate the population mean.
As shown in Formula 1 below (the formula is missing from this text and is reconstructed here from the symbol definitions that follow, in the standard Student's-t soft-assignment form):
Formula 1:
q_ij = (1 + ||h_i - μ_j||^2 / α)^(-(α+1)/2) / Σ_j' (1 + ||h_i - μ_j'||^2 / α)^(-(α+1)/2)
where q_ij is the probability that the second user belongs to a given cluster, h_i is the second user's second feature representation, μ_j is an initial cluster center, α is the degrees of freedom of the Student's t-distribution, and μ_j' ranges over all initial cluster centers.
Illustratively, taking the first of the second users as an example, with second feature representation h1: h1 is compared for similarity with each initial cluster center in turn, giving that user's similarity array array1[j], j = 1, 2, 3, ..., K.
All similarity values in array1[j] are summed to obtain the similarity sum sum; dividing each similarity value in array1[j] by sum gives the probability q_ij that this user belongs to each cluster, and the cluster with the largest probability value is taken as this user's cluster, its initial cluster center also being this user's corresponding initial cluster center.
Repeating the above steps for every second user yields each second user's cluster, and together they compose the first cluster distribution result.
Notably, in some embodiments, the manner of obtaining the similarity between a second user's second feature representation and the initial cluster centers may be arbitrary, including but not limited to the Student's t-distribution above; the manner of obtaining the probability that each user belongs to a cluster may be arbitrary, which this embodiment does not limit.
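The soft assignment of Formula 1 can be computed directly. This is a minimal NumPy sketch of the reconstructed formula, assuming α = 1 as a default degree of freedom; names are illustrative.

```python
import numpy as np

def soft_assignment(H, centers, alpha=1.0):
    """q_ij of Formula 1: Student's-t kernel similarity between each second
    feature representation h_i and each initial cluster center mu_j,
    normalised over the clusters."""
    d2 = ((H[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # squared distances
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

H = np.array([[0.0, 0.0], [5.0, 5.0]])          # two second feature representations
centers = np.array([[0.1, 0.0], [5.0, 4.9]])    # K = 2 initial cluster centers
Q = soft_assignment(H, centers)                  # rows sum to 1
```

Each row of `Q` is the probability of one user belonging to each cluster; the argmax per row gives the user's initial classification, as described above.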
Step 630: perform discreteness analysis on the first cluster distribution result to obtain the second cluster distribution result, and determine multiple clusters based on the second cluster distribution result, which contains the cluster center corresponding to each second feature representation.
The first cluster distribution result obtained in step 620 has low confidence: each second user's probability of belonging to its cluster is low. To obtain a sharper cluster distribution, i.e. to bring each second user's second feature representation closer to its corresponding initial cluster center, a higher-confidence target cluster distribution result is set, and discreteness analysis makes the first cluster distribution result approach the target cluster distribution result.
The discreteness analysis of the first cluster distribution result comprises the following steps:
(1) performing discreteness analysis on the feature distribution corresponding to the i-th initial cluster center in the first cluster distribution result to obtain the discreteness value corresponding to the i-th initial cluster center;
(2) updating the i-th initial cluster center based on its discreteness value to obtain the second cluster distribution result, and updating the feature extraction network, which is used to extract users' feature representations;
(3) obtaining multiple clusters from the second cluster distribution result and the updated feature extraction network.
Illustratively, as shown in Formula 2 below (reconstructed here from the symbol definitions that follow, in the standard sharpened target-distribution form):
Formula 2:
p_ij = (q_ij^2 / f_j) / Σ_j' (q_ij'^2 / f_j')
where p_ij is the updated probability that the second user belongs to a given cluster, i.e. the target cluster distribution result, and f_j = Σ_i q_ij denotes the soft frequency of the j-th cluster center, i.e. the summed probability mass of second users assigned to it.
Illustratively, KL divergence is used to make the first cluster distribution result approach the target cluster distribution result, as in Formula 3 below (likewise reconstructed):
Formula 3:
L_clustering = KL(P || Q) = Σ_i Σ_j p_ij log(p_ij / q_ij)
where P denotes the target cluster distribution result, Q denotes the first cluster distribution result, and L_clustering is the KL divergence.
The target cluster distribution result is the second cluster distribution result; after obtaining the second cluster distribution result through the above steps, multiple clusters are determined from it, each with its own cluster center.
Illustratively, as shown in FIG. 7, which depicts obtaining the second cluster distribution result by discreteness analysis of the first cluster distribution result:
The first cluster distribution result 701 contains clusters formed around multiple initial cluster centers; after discreteness analysis of the first cluster distribution result 701 based on the KL divergence 702, the higher-confidence second cluster distribution result 703 is obtained.
In the first cluster distribution result 701, an initial cluster center 704 and the second feature representations 705 belonging to the same cluster are relatively far apart, so the cluster formed is scattered.
In the second cluster distribution result 703, the cluster center 706 and the updated second feature representations 707 belonging to the same cluster are close together, so the cluster formed is compact.
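The sharpening of Formulas 2-3 can be sketched as follows, building on a given soft-assignment matrix Q. This is a minimal illustration of the reconstructed formulas; the function names are assumptions.

```python
import numpy as np

def target_distribution(Q):
    """p_ij of Formula 2: square the assignments and normalise by the soft
    cluster frequency f_j, then renormalise per user."""
    f = Q.sum(axis=0)                     # f_j = sum_i q_ij
    P = (Q ** 2) / f
    return P / P.sum(axis=1, keepdims=True)

def kl_clustering_loss(P, Q):
    """L_clustering of Formula 3: KL(P || Q)."""
    return float((P * np.log(P / Q)).sum())

Q = np.array([[0.8, 0.2],
              [0.6, 0.4]])               # a toy first cluster distribution result
P = target_distribution(Q)               # higher-confidence target distribution
loss = kl_clustering_loss(P, Q)          # minimised to pull Q toward P
```

Minimising `loss` with respect to the feature extraction network and the cluster centers is what tightens the clusters, as FIG. 7 illustrates.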
In summary, in the method provided by this application, cluster analysis of all users on the first functional platform yields each user's cluster and corresponding cluster center, and a personalized mapping relation function is obtained from the cluster center; the personalized function realizes, for different users, the mapping from out-of-domain to in-domain feature representation, improving mapping accuracy. The first user's first feature representation is acquired, together with the out-of-domain feature representation extracted from the first user's historical interaction data on the second functional platform; the first user's cluster center is found from the first feature representation, the corresponding mapping relation function obtained from it, and the out-of-domain feature representation mapped into the in-domain feature representation. Based on the first user's in-domain and first feature representations, personalized content is recommended to the first user on the first functional platform, solving the cold-user and data-sparsity problems, making the recommended content better match the first user's true interests, improving the recommendation effect, and avoiding waste of the resources supporting the content-push function.
In the method of this embodiment, random initialization gives the initial cluster centers' position information; clustering all users of the first functional platform based on the initial cluster centers gives the first cluster analysis result, and discreteness analysis of it gives the higher-confidence second cluster distribution result, finding each user's cluster and corresponding cluster center. All users can thus be classified accurately, and a personalized mapping function is further found from the classification result, making the mapping results more accurate.
In the method of this embodiment, discreteness analysis is performed separately on each cluster of the first cluster distribution result to obtain the discreteness value corresponding to each initial cluster center; updating the initial cluster centers based on the discreteness values gives the higher-confidence second cluster distribution result, and updating the feature extraction network gives more accurate second feature representations and the relationship between each second feature representation and its cluster.
When extracting a user's out-of-domain or in-domain feature representation from the user's historical interaction data on any functional platform, a heterogeneous graph is introduced to obtain the historical interaction data, from which the user's in-domain or out-of-domain feature representation is further extracted. FIG. 8 is a flowchart of a method for obtaining the first user's out-of-domain feature representation; specifically, acquiring the first user's second historical interaction data on the second functional platform and extracting features from it to obtain the first user's out-of-domain feature representation comprises the following steps.
Step 810: acquire the first user's second historical interaction data on the second functional platform, and obtain a heterogeneous graph based on the historical interaction data; the target heterogeneous graph contains multiple meta-paths and represents the historical interaction relationship between the first user and the second functional platform's elements.
A heterogeneous graph, also called a heterogeneous network, is a graph in which the types of nodes and edges are not single but diverse.
A meta-path can be understood as a path connecting nodes of different types; different meta-paths have different path types, a path type usually being expressed as a path of node types.
Illustratively, as shown in FIG. 9, which is a schematic diagram of a heterogeneous graph: the heterogeneous graph 900 contains a target domain 910, a source domain 920 and platform users 930.
In the embodiments of this application, the target domain 910 denotes the first functional platform, the source domain 920 denotes the second functional platform, and the platform users 930 include the first and second users, the first user appearing in the heterogeneous graph 900 as first user node 931.
The target domain 910 contains multiple first platform elements, appearing in the heterogeneous graph 900 as first element nodes 911; the source domain 920 likewise contains multiple second platform elements, appearing as second element nodes 921.
The first element nodes 911, second element nodes 921 and first user node 931 are of different kinds.
Historical interaction data exists between the first user and the second functional platform's second platform elements, represented in the heterogeneous graph 900 by meta-paths: the first user node 931 and a second element node 921 connected by a line indicates an interaction relationship between them, and the first user node 931, the second element node 921 and the connecting line together constitute one meta-path of the first user in the heterogeneous graph 900.
Meta-paths centered on the first user node 931 and generated from the historical interaction data with the source domain 920 include but are not limited to the following:
(1) u1-i2;
(2) u1-i2-u2;
(3) u1-i2-u2-i2-u3;
(4) u1-i2-u4;
(5) u1-i2-u2-i2-u5.
Here, u1 is the first user node 931, i2 is a second element node 921 in the source domain 920, and u2, u3, u4 and u5 are the user nodes corresponding to second users among the platform users 930.
i2 is a first-order neighbor of u1, and u2 and u4 are second-order neighbors of u1; by analogy, if the number of nodes u1 must pass through to reach a target node is N, the target node is an (N-1)-order neighbor of u1. All target nodes reachable from u1 along meta-paths, with u1 as center, are neighbor nodes of u1; a target node is a specifically designated node.
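The neighbor-order notion above can be sketched as a breadth-first walk over a toy version of FIG. 9's graph. The adjacency structure below only mirrors the listed meta-paths u1-i2, i2-u2, i2-u4; it is an illustrative assumption, not the full figure.

```python
from collections import deque

def neighbors_by_order(adj, start, max_order):
    """Hop count from the centre node: a node reached through N hops is an
    N-order neighbour, matching the (N-1)-order convention for traversed nodes."""
    order = {start: 0}
    q = deque([start])
    while q:
        node = q.popleft()
        if order[node] == max_order:
            continue                       # do not expand beyond the requested order
        for nxt in adj.get(node, []):
            if nxt not in order:
                order[nxt] = order[node] + 1
                q.append(nxt)
    return order

# toy heterogeneous graph fragment built from the meta-paths listed above
adj = {"u1": ["i2"], "i2": ["u1", "u2", "u4"], "u2": ["i2"], "u4": ["i2"]}
orders = neighbors_by_order(adj, "u1", 2)
```

With u1 as center, i2 comes out as a first-order neighbor and u2, u4 as second-order neighbors, as in the text.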
Besides the meta-paths centered on the first user node 931, the heterogeneous graph 900 also contains meta-paths centered on the user nodes corresponding to second users among the platform users 930; the meta-paths centered on the first user node 931 together form the target heterogeneous graph, i.e. the target heterogeneous graph is a part of the heterogeneous graph 900.
Notably, the number and kinds of meta-paths contained in a heterogeneous graph may be arbitrary; the numbers of nodes and edges in the graph, the kinds of nodes, the numbers of nodes and edges within a meta-path, and the orders and numbers of a central node's neighbor nodes in a meta-path may all be arbitrary, which this embodiment does not limit.
In some embodiments, the number and kinds of user nodes in the target heterogeneous graph may be arbitrary; the numbers and kinds of source and target domains may be arbitrary; platforms that can serve as the source domain include but are not limited to the second functional platform; and the numbers and kinds of first element nodes on the first functional platform and second element nodes on the second functional platform may be arbitrary, which this embodiment does not limit.
Step 820: extract the path feature representation corresponding to each meta-path in the heterogeneous graph.
To extract the path feature representations of the meta-paths, a graph attention network is used as the aggregation mechanism to perform feature extraction on every meta-path in the target heterogeneous graph.
Graph Neural Network (GNN) is the general term for algorithms that use neural networks to learn graph-structured data, extracting and mining its features and patterns to serve graph-learning tasks such as clustering, classification, prediction, segmentation and generation. In this embodiment, a graph neural network is used to analyze the target heterogeneous graph.
A Graph Attention Network (GAT) is a graph neural network that, in a manner similar to self-attention in the transformer, computes a node's attention relative to each of its neighbor nodes, concatenates the node's own features with the attention features as the node's representation, and performs tasks such as node classification on that basis.
The first user's second historical interaction data on the second functional platform is expressed in the target heterogeneous graph as meta-paths, each carrying semantic information that represents the characteristics and interest tendencies of the first user's interactions on the second functional platform.
The target heterogeneous graph contains meta-paths of different types; for these, heterogeneous graph convolution is used to capture the rich semantic information each meta-path contains, and a node-level attention mechanism is added to distinguish the importance of each neighbor node to the central node (the first user node).
In some embodiments, extracting the path feature representation of a meta-path in the heterogeneous graph comprises: obtaining the node attention of each path node in the meta-path, a path node representing a platform element having a historical interaction relationship with the first user; and aggregating the node attentions to obtain the meta-path's path feature representation.
Illustratively, as shown in FIG. 10, which depicts meta-path-based heterogeneous graph convolution:
With the first user node 1000 corresponding to the first user as center, the first-order neighbor nodes 1010 and second-order neighbor nodes 1020 of node 1000 are obtained from the meta-path in order.
The order in which each neighbor node's node-level attention is obtained is the reverse of the order in which the neighbor nodes are obtained.
Based on the graph attention network 1030, the node attention of the second-order neighbor nodes 1020 is obtained first, then that of the first-order neighbor nodes 1010, and finally that of the first user node 1000.
After all node attentions are obtained, the attentions of the second-order neighbor nodes 1020 are aggregated first to give the embeddings of the first-order neighbor nodes 1010, and then the attentions of the first-order neighbor nodes 1010 are aggregated to give the embedding of the first user node 1000.
Each neighbor node (the first-order neighbor nodes 1010 and second-order neighbor nodes 1020) has a different degree of importance to the central node (the first user node 1000); aggregating each neighbor node's node attention finally forms the node representation, as in Formula 4 below (the formula is missing from this text and is reconstructed here from the symbol definitions that follow, in the standard GAT softmax-attention form):
Formula 4:
α_ui = exp(att(h_u, h_i)) / Σ_{i'∈N_u} exp(att(h_u, h_i'))
where α_ui denotes the relevance of node u and node i, h_u and h_i denote the representations of nodes u and i, att(·,·) denotes the attention scoring function, and N_u denotes the neighbor set of node u.
Each meta-path has multiple nodes and thus corresponds to multiple node representations; aggregating all node representations along the meta-path gives that meta-path's path feature representation, as in Formula 5 below (likewise reconstructed):
Formula 5:
h'_u = σ( Σ_{i∈N_u} α_ui · h_i )
where h'_u denotes the meta-path's path feature representation and σ(·) denotes the activation function.
Notably, the example above involves only first-order and second-order neighbor nodes; in some embodiments, the node order of a meta-path may be arbitrary, attention analysis for node attention may be performed on only the specified neighbor nodes or on all neighbor nodes, and the method used for attention analysis may be arbitrary, including but not limited to the graph attention network above, which this embodiment does not limit.
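Formulas 4-5 can be sketched together as one attention-and-aggregate step. This is a toy illustration of the reconstructed formulas, not the patent's network: the dot-product scoring vector `a`, the concatenation-based score, and ReLU as σ(·) are all assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_aggregate(h_u, neighbors, a):
    """Formula 4: softmax attention alpha_ui over the neighbours of u,
    scored here by a dot product on the concatenated pair of representations.
    Formula 5: h'_u = sigma(sum_i alpha_ui * h_i), with sigma taken as ReLU."""
    scores = np.array([a @ np.concatenate([h_u, h_i]) for h_i in neighbors])
    alpha = softmax(scores)                      # relevance of each neighbour to u
    agg = (alpha[:, None] * np.asarray(neighbors)).sum(axis=0)
    return np.maximum(agg, 0.0)

h_u = np.array([1.0, 0.0])                       # central node representation
neigh = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
a = np.ones(4)                                   # toy attention vector (assumption)
h_new = attention_aggregate(h_u, neigh, a)
```

Applied from the outermost neighbors inward, repeated steps like this yield the first-order embeddings and finally the central node's embedding, as described for FIG. 10.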
Step 830: aggregate the path feature representations corresponding to the meta-paths to obtain the first user's out-of-domain feature representation.
Aggregating the path feature representations of the meta-paths obtained from the first user's second historical interaction data on the second functional platform gives the first user's out-of-domain feature representation. "Out-of-domain" means the source domain, i.e. regions other than the first functional platform; in some embodiments, the out-of-domain region may be that of any functional platform, but the first user necessarily has historical interaction data in the source domain, and that data reflects the first user's interaction characteristics and interest tendencies there.
The target heterogeneous graph corresponding to the first user contains at least one meta-path; following the method of step 820, each meta-path undergoes meta-path-based heterogeneous graph convolution for feature extraction, giving multiple path feature representations. Mean pooling of the path feature representations of the multiple meta-paths gives the first user's feature representation on the second functional platform, i.e. the first user's out-of-domain feature representation.
Pooling, an idea originating in the visual mechanism, is a process of abstracting information; its essence is sampling, a pooling layer compressing the dimensionality of an input feature map in some way to speed up computation. Commonly used pooling methods include max pooling and mean pooling; mean pooling can be understood as averaging the contents input to the pooling layer.
Illustratively, suppose 100 path feature representations are input to the pooling layer, arranged as a 10×10 grid with one representation per cell. Compressing the 10×10 grid into a 2×2 grid of large cells divides the 100 cells into 4 groups of 25 each; taking the mean of the path feature representations in each group, with the mean representing each large cell's path feature representation, is the mean-pooling process.
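The two pooling operations described above (averaging the meta-path representations into one out-of-domain feature, and the 10×10 → 2×2 grid illustration) can be sketched as follows; the function names are illustrative.

```python
import numpy as np

def mean_pool_paths(path_feats):
    """Average the per-meta-path representations into one out-of-domain feature."""
    return np.mean(path_feats, axis=0)

def grid_mean_pool(grid, k):
    """Compress an (n, n) grid into (n//k, n//k) by averaging each k x k block,
    mirroring the 10x10 -> 2x2 illustration above (there, k = 5)."""
    n = grid.shape[0]
    return grid.reshape(n // k, k, n // k, k).mean(axis=(1, 3))

paths = np.array([[1.0, 3.0], [3.0, 5.0]])   # two meta-path representations
out_feat = mean_pool_paths(paths)

g = np.arange(100, dtype=float).reshape(10, 10)
pooled = grid_mean_pool(g, 5)                # 2x2 grid of block means
```

`out_feat` is the single vector carried forward as the first user's out-of-domain feature representation.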
Notably, steps 810-830 take the first user as an example in obtaining the out-of-domain feature representation; in some embodiments, when the first user has historical interaction data in other domains, i.e. on other functional platforms, the method of steps 810-830 can likewise yield the first user's out-of-domain or in-domain feature representation on those platforms.
Any domain may serve as the target domain and any domain as the source domain; in-domain and out-of-domain feature representations are obtained in the same way, and the method applies to every user, including but not limited to the first and second users.
Whether in-domain or out-of-domain, a feature representation expresses the current user's interest features and interaction characteristics within some domain. Generally, a domain with sparse or no historical interaction data for the current user serves as the target domain, while a domain with historical interaction data, or abundant historical interaction data, serves as the source domain.
Notably, the content recommendation method proposed by this application comprises three parts:
(1) the learning process of content recommendation for sample users in the target domain, involving the recommendation loss (Lrec);
(2) the learning process of clustering based on the sample users' sample feature representations, involving the clustering loss (Lclustering);
(3) the learning process of mapping the sample users' sample out-of-domain feature representations into in-domain mapped features via the mapping relation function, driving the in-domain mapped features toward the in-domain feature representations, involving the reconstruction loss (Lreconstruction).
The learning process of content recommendation for sample users involves the personalized recommendation module.
In some embodiments, before recommending content to the first user based on the similarity matching result, the personalized recommendation module must be trained on the loss between the recommended target content and the content the user is truly interested in, as in Formula 6 below (the formula is missing from this text and is reconstructed here from the symbol definitions that follow, in the standard pairwise-ranking form):
Formula 6:
L_rec = - Σ_{(u, vi, vj)} ln σ( h_u · h_vi - h_u · h_vj )
where h_u is the representation of user u, h_vi is the representation of u's positive sample vi, h_vj is the representation of u's negative sample vj, σ denotes the activation function, and L_rec is the recommendation loss.
Training the personalized recommendation mechanism on the recommendation loss ensures that, when similarity matching is performed between the first user's in-domain feature representation and the elements of the candidate content recommendation pool, the similarity matching result accurately represents the content the first user is truly interested in, and content is recommended to the first user accordingly.
Optionally, training stops when the recommendation loss satisfies either of the following conditions:
(1) the recommendation loss is below a preset threshold; (2) the recommendation loss converges.
The three parts above are trained jointly, i.e. training the processes on the recommendation loss, clustering loss and reconstruction loss proceeds synchronously.
Notably, the manner of training the personalized recommendation module on the recommendation loss and the condition for deciding to stop training may be arbitrary, which this embodiment does not limit.
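The reconstructed Formula 6 can be computed as follows, taking σ as the logistic sigmoid; the batch shapes and names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recommendation_loss(h_u, h_pos, h_neg):
    """L_rec of Formula 6: -ln sigma(h_u . h_vi - h_u . h_vj),
    summed over (user, positive sample, negative sample) triples."""
    diff = (h_u * h_pos).sum(-1) - (h_u * h_neg).sum(-1)
    return float(-np.log(sigmoid(diff)).sum())

h_u = np.array([[1.0, 0.0]])      # user representation
h_pos = np.array([[2.0, 0.0]])    # positive sample: content the user interacted with
h_neg = np.array([[-2.0, 0.0]])   # negative sample: content the user ignored
loss = recommendation_loss(h_u, h_pos, h_neg)
```

The loss is small when the positive item scores above the negative one and grows when the ranking is inverted, which is what drives the similarity matching result toward the user's true interests.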
In summary, in the method provided by this application, expressing the first user's historical interaction data on the second functional platform as a heterogeneous graph allows the characteristics of the first user's interactions with the second functional platform to be observed intuitively; each piece of historical interaction data is represented by a meta-path in the graph, and from the meta-paths' path feature representations the first user's interest tendencies on the second functional platform are obtained, further giving the first user's out-of-domain feature representation. This provides a reliable basis for feature mapping, improving the accuracy of feature transfer and the effect of obtaining the in-domain feature representation from the out-of-domain one and recommending personalized content to the first user.
In the method of this embodiment, the target heterogeneous graph obtained from the first user's second historical interaction data on the second functional platform contains multiple meta-paths and can intuitively and concisely represent the historical interaction relationship between the first user and the second functional platform's elements; extracting the path feature representations of the meta-paths in the target heterogeneous graph and aggregating them gives an out-of-domain feature representation of high accuracy.
In the method of this embodiment, a graph attention network performs attention analysis on the path nodes of each meta-path in the heterogeneous graph, giving each path node's attention representation and, from it, each path node's degree of importance to the central node; aggregating the node attentions of all path nodes in a meta-path gives the whole meta-path's path feature representation, revealing the characteristics of the first user's interactions on the second functional platform and the interest tendencies the first user exhibits there, so that the out-of-domain feature representation obtained from the meta-paths' path feature representations more accurately reflects the first user's interest characteristics on the second functional platform.
FIG. 11 is a structural block diagram of a content recommendation apparatus according to an exemplary embodiment of this application. As shown in FIG. 11, the apparatus comprises:
an extraction module 1110, configured to acquire attribute data of a first user on a first functional platform and extract features from the attribute data to obtain the first user's first feature representation;
a cluster analysis module 1120, configured to acquire second feature representations of second users on the first functional platform and perform cluster analysis on the first feature representation and the second feature representations to obtain the cluster center corresponding to the first user;
the extraction module 1110 being further configured to acquire second historical interaction data of the first user on a second functional platform and extract features from it to obtain the first user's out-of-domain feature representation;
an acquisition module 1130, configured to acquire the mapping relation function corresponding to the cluster center and map the out-of-domain feature representation through it to obtain the first user's in-domain feature representation, the mapping relation function indicating the mapping relationship of feature representations between the second and first functional platforms;
a recommendation module 1140, configured to determine the first user's target feature representation from the in-domain feature representation and the first feature representation, determine, based on the target feature representation, target content matching the first user from a candidate content recommendation pool, and push the target content to the first user.
In an optional embodiment, as shown in FIG. 12, the acquisition module 1130 further comprises:
a parameter substitution unit 1131, configured to perform parameter substitution on the pre-generated parameterized mapping function based on the first user's cluster center to obtain the mapping relation function corresponding to that cluster center;
a mapping unit 1132, configured to map the out-of-domain feature representation through the mapping relation function to obtain the first user's in-domain feature representation.
In an optional embodiment, the parameter substitution unit 1131 is further configured to acquire the parameterized mapping function, which includes a specified parameter position in a to-be-filled state, and substitute the cluster center as a parameter into the specified parameter position to obtain the cluster center's mapping relation function, the cluster center serving as a search key for querying the mapping relation function.
In an optional embodiment, the acquisition module 1130 is further configured to: acquire first historical interaction data of a sample user on the first functional platform and extract features from it to obtain the sample user's sample in-domain feature representation, the sample user corresponding to a sample cluster center; acquire second historical interaction data of the sample user on the second functional platform and extract features from it to obtain the sample user's sample out-of-domain feature representation; acquire a candidate mapping function and input the sample out-of-domain feature representation into it to obtain, through mapping, the sample in-domain mapped feature corresponding to the sample user; obtain a reconstruction loss from the sample in-domain feature representation and the sample in-domain mapped feature; and train the candidate mapping function on the reconstruction loss to obtain the mapping relation function corresponding to the sample cluster center.
In an optional embodiment, the cluster analysis module 1120 is further configured to: acquire the second users' attribute data on the first functional platform and extract features from it to obtain the second users' second feature representations; perform cluster analysis on the second users based on the second feature representations to obtain multiple clusters, each containing a cluster center; and obtain the similarity between the first feature representation and the cluster centers, determining the first user's cluster center based on the similarity.
In an optional embodiment, the cluster analysis module 1120 is further configured to: acquire clustering information indicating the position information of the initial cluster centers; obtain the similarity between the second feature representations and the initial cluster centers; determine from it the first cluster distribution result, which includes the feature distribution corresponding to each initial cluster center; and perform discreteness analysis on the first cluster distribution result to obtain the second cluster distribution result, determining multiple clusters from it, the second cluster distribution result containing the cluster center corresponding to each second feature representation.
In an optional embodiment, the cluster analysis module 1120 is further configured to: perform discreteness analysis on the feature distribution corresponding to the i-th initial cluster center in the first cluster distribution result to obtain the discreteness value corresponding to the i-th initial cluster center; update the i-th initial cluster center based on that value to obtain the second cluster distribution result, and update the feature extraction network used to extract users' feature representations; and obtain multiple clusters from the second cluster distribution result and the updated feature extraction network.
In an optional embodiment, the cluster analysis module 1120 is further configured to compute the distance between the first feature representation and each cluster center, taking the cluster center with the smallest distance from the first feature representation as the first user's cluster center.
In an optional embodiment, the cluster analysis module 1120 is further configured to: compute the similarity between the first feature representation and each cluster center; determine, based on the similarity, the probability that the first user belongs to each cluster; and select the largest of the determined probabilities, determining the first user's cluster center based on the largest probability.
In an optional embodiment, the extraction module 1110 is further configured to: acquire the first user's second historical interaction data on the second functional platform and obtain the heterogeneous graph based on the historical interaction data, the target heterogeneous graph containing multiple meta-paths and representing the historical interaction relationship between the first user and the second functional platform's elements; extract the path feature representations corresponding to the meta-paths in the heterogeneous graph; and aggregate the path feature representations to obtain the first user's out-of-domain feature representation.
In an optional embodiment, the extraction module 1110 is further configured to obtain the node attention of each path node in a meta-path, a path node representing a platform element having a historical interaction relationship with the first user, and to aggregate the node attentions to obtain the meta-path's path feature representation.
In summary, the apparatus of this embodiment performs cluster analysis on all users of the first functional platform to obtain each user's cluster and corresponding cluster center, and obtains a personalized mapping relation function from the cluster center; the personalized function realizes, for different users, the mapping from out-of-domain to in-domain feature representation, improving the accuracy of the mapping results. It acquires at least one of the first user's first or second feature representations together with the out-of-domain feature representation extracted from the first user's historical interaction data on the second functional platform, finds the first user's cluster center from the first and second feature representations, obtains the corresponding mapping relation function from that cluster center, and maps the first user's out-of-domain feature representation into the in-domain feature representation. Thus, even when the first user has historical interaction data only on the second functional platform and none on the first, the first user's interaction characteristics on the first functional platform can be obtained, and personalized content recommended to the first user there based on the in-domain and first feature representations, solving the cold-user and data-sparsity problems, making the recommended content better match the first user's true interests, improving the recommendation effect, and avoiding waste of the resources supporting the content-push function.
It should be noted that the content recommendation apparatus of the above embodiments is illustrated only by the division of the above functional modules; in practical applications, the above functions may be assigned to different functional modules as needed, i.e. the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
FIG. 13 shows a structural block diagram of a computer device 1300 according to an exemplary embodiment of this application. The computer device 1300 may be a laptop or desktop computer, and may also be called a user device, portable terminal, laptop terminal, desktop terminal or other names.
Generally, the computer device 1300 includes a processor 1301 and a memory 1302.
The processor 1301 may include one or more processing cores, e.g. a 4-core or 8-core processor, and may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor: the main processor, also called the CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1301 may also include an AI processor for computing operations related to machine learning.
The memory 1302 may include one or more computer-readable storage media, which may be non-transitory, and may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1302 stores at least one instruction, which is executed by the processor 1301 to implement the content recommendation method provided in the method embodiments of this application.
In some embodiments, the computer device 1300 also includes other components. Those skilled in the art will understand that the structure shown in FIG. 13 does not limit the computer device 1300, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
Embodiments of this application also provide a computer device comprising a processor and a memory storing at least one instruction, computer-readable instruction, code set or instruction set, loaded and executed by the processor to implement the content recommendation method provided by the above method embodiments.
Embodiments of this application also provide a computer-readable storage medium storing at least one instruction, computer-readable instruction, code set or instruction set, loaded and executed by a processor to implement the content recommendation method provided by the above method embodiments.
Embodiments of this application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the medium and executes them, so that the computer device performs any of the content recommendation methods described in the above embodiments.
Optionally, the computer-readable storage medium may include: read-only memory (ROM), random access memory (RAM), a solid-state drive (SSD) or an optical disk; the random access memory may include resistive random access memory (ReRAM) and dynamic random access memory (DRAM). The serial numbers of the above embodiments are for description only and do not indicate their relative merits.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, magnetic disk, optical disk, etc.
The above are only optional embodiments of this application and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this application shall be included in its scope of protection.
Claims (15)
- 一种内容推荐方法,由计算机设备执行,包括:获取第一用户在第一功能平台中的属性数据,基于所述属性数据提取特征,得到所述第一用户的第一特征表示;获取所述第一功能平台中第二用户的第二特征表示,将所述第一特征表示与所述第二特征表示进行聚类分析,得到所述第一用户对应的聚类簇中心;获取所述第一用户在第二功能平台中的第二历史交互数据,基于所述第二历史交互数据提取特征,获得所述第一用户的域外特征表示;获取与所述聚类簇中心对应的映射关系函数,通过所述映射关系函数对所述域外特征表示进行映射,得到所述第一用户的域内特征表示,其中,所述映射关系函数用于指示所述第二功能平台和所述第一功能平台之间特征表示的映射关系;及根据所述域内特征表示和所述第一特征表示,确定所述第一用户的目标特征表示,并基于所述目标特征表示,从候选内容推荐池中确定与所述第一用户匹配的目标内容,向所述第一用户推送所述目标内容。
- 根据权利要求1所述的方法,所述获取与所述聚类簇中心对应的映射关系函数,通过所述映射关系函数对所述域外特征表示进行映射,得到所述第一用户的域内特征表示,包括:基于所述第一用户对应的所述聚类簇中心,对预生成的含参映射函数进行参数替换,得到与所述聚类簇中心对应的映射关系函数;通过所述映射关系函数对所述域外特征表示进行映射,得到所述第一用户的所述域内特征表示。
- 根据权利要求2所述的方法,所述基于所述第一用户对应的所述聚类簇中心,对预生成的含参映射函数进行参数替换,得到与所述聚类簇中心对应的映射关系函数,包括:获取含参映射函数,所述含参映射函数包括处于待填充状态的指定参数位置;将所述聚类簇中心作为参数代入所述指定参数位置,得到所述聚类簇中心对应的映射关系函数,其中,所述聚类簇中心用于作为检索关键字,对所述映射关系函数进行查询。
- 根据权利要求2或3所述的方法,还包括:获取样本用户在所述第一功能平台中的第一历史交互数据,基于所述第一历史交互数据提取特征,得到所述样本用户的样本域内特征表示,所述样本用户对应有样本聚类簇中心;获取所述样本用户在所述第二功能平台中的第二历史交互数据,基于所述第二历史交互数据提取特征,得到所述样本用户的样本域外特征表示;获取候选映射函数,将所述样本用户的样本域外特征表示输入至所述候选映射函数,经过映射得到与所述样本用户对应的样本域内映射特征;基于所述样本用户的样本域内特征表示和所述样本域内映射特征,得到重构损失;基于所述重构损失对所述候选映射函数进行训练,得到所述样本聚类簇中心对应的所述映射关系函数。
- 根据权利要求1至4任一项所述的方法,所述获取所述第一功能平台中第二用户的第二特征表示,将所述第一特征表示与所述第二特征表示进行聚类分析,得到所述第一用户对应的聚类簇中心,包括:获取所述第一功能平台中第二用户的属性数据,基于所述第二用户的属性数据提取特征,得到所述第二用户的所述第二特征表示;基于所述第二特征表示对所述第二用户进行聚类分析,得到多个聚类簇,其中,每个所述聚类簇包含聚类簇中心;获取所述第一特征表示和所述聚类簇中心之间的相似度,基于所述相似度确定所述第 一用户对应的所述聚类簇中心。
- The method according to claim 5, wherein performing cluster analysis on the second users based on the second feature representations to obtain the plurality of clusters comprises: obtaining clustering information indicating position information of initial cluster centers; obtaining similarities between the second feature representations and the initial cluster centers; determining a first cluster distribution result based on the similarities between the second feature representations and the initial cluster centers, the first cluster distribution result comprising a feature distribution corresponding to each initial cluster center; and performing discreteness analysis on the first cluster distribution result to obtain a second cluster distribution result, and determining the plurality of clusters based on the second cluster distribution result, the second cluster distribution result containing the cluster center corresponding to each second feature representation.
- The method according to claim 6, wherein performing discreteness analysis on the first cluster distribution result to obtain the second cluster distribution result and determining the plurality of clusters based on the second cluster distribution result comprises: performing discreteness analysis on the feature distribution corresponding to an i-th initial cluster center in the first cluster distribution result to obtain a discreteness value corresponding to the i-th initial cluster center; updating the i-th initial cluster center based on the discreteness value corresponding to the i-th initial cluster center to obtain the second cluster distribution result; updating a feature extraction network, wherein the feature extraction network is configured to extract feature representations of users; and obtaining the plurality of clusters based on the second cluster distribution result and the updated feature extraction network.
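Claims 6 and 7 describe a soft assignment of second users to initial cluster centers followed by a sharpening step that drives each user toward one dominant cluster, a pattern reminiscent of deep embedded clustering. A hedged sketch under that reading; the similarity kernel and square-and-renormalize sharpening are assumptions standing in for the claimed "discreteness analysis":

```python
# Hedged sketch of claims 6-7: soft-assign a second user's feature to
# initial cluster centers by similarity, then sharpen the resulting
# assignment distribution. All formulas are illustrative choices.

def soft_assign(feat, centers):
    """Student-t style similarity between one feature and each center,
    normalized into a cluster-membership distribution."""
    sims = [1.0 / (1.0 + sum((f - c) ** 2 for f, c in zip(feat, ctr)))
            for ctr in centers]
    total = sum(sims)
    return [s / total for s in sims]

def sharpen(dist):
    """Square-and-renormalize: a simple sharpening that pushes each user
    toward one dominant cluster center."""
    sq = [p * p for p in dist]
    total = sum(sq)
    return [p / total for p in sq]
```

Alternating such sharpened targets with updates to the centers and the feature extraction network is the usual way this family of methods converges on the final clusters.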
- The method according to any one of claims 5 to 7, wherein obtaining the similarities between the first feature representation and the cluster centers and determining the cluster center corresponding to the first user based on the similarities comprises: calculating a distance between the first feature representation and each cluster center, and taking the cluster center with the smallest distance to the first feature representation as the cluster center corresponding to the first user.
- The method according to any one of claims 5 to 7, wherein obtaining the similarities between the first feature representation and the cluster centers and determining the cluster center corresponding to the first user based on the similarities comprises: calculating a similarity between the first feature representation and each cluster center; determining, based on the similarities, a probability that the first user belongs to each cluster; and selecting the largest of the determined probabilities, and determining, based on the largest probability, the cluster center corresponding to the first user.
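Claims 8 and 9 give two interchangeable assignment rules: nearest center by distance, or highest cluster-membership probability. A hedged sketch of both; the negative-squared-distance similarity and the softmax are illustrative choices (and with that similarity the two rules pick the same center):

```python
# Illustrative sketch of claims 8-9: hard vs. probabilistic assignment
# of the first user's feature representation to a cluster center.
import math

def assign_by_distance(feat, centers):
    """Claim 8: index of the center with the smallest squared distance."""
    return min(range(len(centers)),
               key=lambda i: sum((f - c) ** 2 for f, c in zip(feat, centers[i])))

def assign_by_probability(feat, centers, temperature=1.0):
    """Claim 9: softmax over similarities yields membership probabilities;
    the center with the largest probability wins."""
    sims = [-sum((f - c) ** 2 for f, c in zip(feat, ctr)) for ctr in centers]
    exps = [math.exp(s / temperature) for s in sims]
    total = sum(exps)
    probs = [e / total for e in exps]
    return max(range(len(probs)), key=lambda i: probs[i])
```

The probabilistic variant additionally exposes how confident the assignment is, which the hard nearest-center rule discards.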
- The method according to any one of claims 1 to 9, wherein obtaining the second historical interaction data of the first user on the second functional platform and extracting features from the second historical interaction data to obtain the out-of-domain feature representation of the first user comprises: obtaining the second historical interaction data of the first user on the second functional platform, and obtaining a heterogeneous graph based on the second historical interaction data, the heterogeneous graph containing a plurality of meta-paths and representing historical interaction relationships between the first user and platform elements of the second functional platform; extracting path feature representations corresponding to the meta-paths in the heterogeneous graph; and aggregating the path feature representations corresponding to the meta-paths to obtain the out-of-domain feature representation of the first user.
- The method according to claim 10, wherein extracting the path feature representations corresponding to the meta-paths in the heterogeneous graph comprises: obtaining node attention of each path node in a meta-path, the path nodes representing the platform elements having historical interaction relationships with the first user; and aggregating the node attention to obtain the path feature representation of the meta-path.
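Claims 10 and 11 describe two levels of aggregation: attention over the nodes of each meta-path, then pooling the per-meta-path features into the out-of-domain representation. A hedged sketch under that reading; the dot-product attention query and the mean pooling are assumptions, since the claims do not fix the exact operators:

```python
# Hedged sketch of claims 10-11: node-level attention along one meta-path,
# then pooling across meta-paths. All operator choices are illustrative.
import math

def attention_pool(node_feats, query):
    """Weight each path node by softmax(dot(node, query)) and sum,
    yielding one path feature representation for the meta-path."""
    scores = [sum(n_i * q_i for n_i, q_i in zip(n, query)) for n in node_feats]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(node_feats[0])
    return [sum(w * n[d] for w, n in zip(weights, node_feats)) for d in range(dim)]

def out_of_domain_repr(meta_paths, query):
    """Mean-pool the per-meta-path features into the user representation."""
    paths = [attention_pool(nodes, query) for nodes in meta_paths]
    dim = len(paths[0])
    return [sum(p[d] for p in paths) / len(paths) for d in range(dim)]
```

Nodes that align with the query (here, hypothetically, the user's own embedding) dominate the path feature, so frequently interacted platform elements shape the out-of-domain representation most.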
- A content recommendation apparatus, comprising: an extraction module, configured to obtain attribute data of a first user on a first functional platform, and extract features from the attribute data to obtain a first feature representation of the first user; a cluster analysis module, configured to obtain second feature representations of second users on the first functional platform, and perform cluster analysis on the first feature representation and the second feature representations to obtain a cluster center corresponding to the first user; the extraction module being further configured to obtain second historical interaction data of the first user on a second functional platform, and extract features from the second historical interaction data to obtain an out-of-domain feature representation of the first user; an obtaining module, configured to obtain a mapping function corresponding to the cluster center, and map the out-of-domain feature representation through the mapping function to obtain an in-domain feature representation of the first user, wherein the mapping function indicates a mapping relationship of feature representations between the second functional platform and the first functional platform; and a recommendation module, configured to determine a target feature representation of the first user according to the in-domain feature representation and the first feature representation, determine, based on the target feature representation, target content matching the first user from a candidate content recommendation pool, and push the target content to the first user.
- A computer device, comprising a processor and a memory, the memory storing computer-readable instructions that are loaded and executed by the processor to implement the content recommendation method according to any one of claims 1 to 11.
- A computer-readable storage medium storing computer-readable instructions that are loaded and executed by a processor to implement the content recommendation method according to any one of claims 1 to 11.
- A computer program product, comprising computer-readable instructions that, when executed by a processor, implement the content recommendation method according to any one of claims 1 to 11.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211516062.9 | 2022-11-29 | ||
CN202211516062.9A CN117216362A (zh) | 2022-11-29 | 2022-11-29 | Content recommendation method, apparatus, device, medium and program product |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024114034A1 (zh) | 2024-06-06 |
Family
ID=89043145
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/118248 WO2024114034A1 (zh) | Content recommendation method, apparatus, device, medium and program product | 2022-11-29 | 2023-09-12 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117216362A (zh) |
WO (1) | WO2024114034A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118396672A (zh) * | 2024-06-28 | 2024-07-26 | 深圳品阔信息技术有限公司 | Artificial-intelligence-based data analysis method and system |
CN118410362A (zh) * | 2024-07-02 | 2024-07-30 | 西安银信博锐信息科技有限公司 | Aggregation method for multi-dimensional user indicator data |
CN118674017A (zh) * | 2024-08-19 | 2024-09-20 | 腾讯科技(深圳)有限公司 | Model training method, content recommendation method, apparatus, and electronic device |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117874355A (zh) * | 2024-02-07 | 2024-04-12 | 北京捷报金峰数据技术有限公司 | Cross-domain data recommendation method and apparatus, electronic device, and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2860672A2 (en) * | 2013-10-10 | 2015-04-15 | Deutsche Telekom AG | Scalable cross domain recommendation system |
CN108717654A (zh) * | 2018-05-17 | 2018-10-30 | 南京大学 | Multi-e-commerce cross recommendation method based on clustering feature transfer |
CN111966914A (zh) * | 2020-10-26 | 2020-11-20 | 腾讯科技(深圳)有限公司 | Artificial-intelligence-based content recommendation method and apparatus, and computer device |
CN114266317A (zh) * | 2021-12-28 | 2022-04-01 | 中国农业银行股份有限公司 | Clustering method, apparatus, and server |
- 2022-11-29: CN application CN202211516062.9A filed (patent/CN117216362A/zh, active, Pending)
- 2023-09-12: WO application PCT/CN2023/118248 filed (patent/WO2024114034A1/zh, status unknown)
Also Published As
Publication number | Publication date |
---|---|
CN117216362A (zh) | 2023-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11593894B2 (en) | Interest recommendation method, computer device, and storage medium | |
WO2024114034A1 (zh) | Content recommendation method, apparatus, device, medium and program product | |
WO2016161976A1 (zh) | Method and apparatus for selecting data content to be pushed to a terminal | |
WO2023065859A1 (zh) | Item recommendation method and apparatus, and storage medium | |
CN112052387B (zh) | Content recommendation method and apparatus, and computer-readable storage medium | |
CN107871166B (zh) | Feature processing method and feature processing system for machine learning | |
CN110442790A (zh) | Method and apparatus for recommending multimedia data, server, and storage medium | |
US20140189525A1 (en) | User behavior models based on source domain | |
CN112765373B (zh) | Resource recommendation method and apparatus, electronic device, and storage medium | |
Silva et al. | A methodology for community detection in Twitter | |
CN110413867B (zh) | Method and system for content recommendation | |
CN112989169B (zh) | Target object identification method, information recommendation method, apparatus, device, and medium | |
CN110163703A (zh) | Classification model establishing method, copywriting push method, and server | |
US20240193402A1 (en) | Method and apparatus for determining representation information, device, and storage medium | |
CN113761359B (zh) | Data package recommendation method and apparatus, electronic device, and storage medium | |
US20230334314A1 (en) | Content recommendation method and apparatus, device, storage medium, and program product | |
Rai et al. | Using open source intelligence as a tool for reliable web searching | |
Pasricha et al. | A new approach for book recommendation using opinion leader mining | |
Zeng et al. | Uncovering the essential links in online commercial networks | |
Wei et al. | Multimedia QoE Evaluation | |
CN114357301B (zh) | Data processing method, device, and readable storage medium | |
CN116506498A (zh) | Precise data push method based on cloud computing | |
CN115795156A (zh) | Material recall and neural network training methods, apparatus, device, and storage medium | |
CN115168609A (zh) | Text matching method and apparatus, computer device, and storage medium | |
CN116091133A (zh) | Method and apparatus for identifying target object attributes, and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23896191; Country of ref document: EP; Kind code of ref document: A1 |