CN115438732B - Cross-domain recommendation method for cold start user based on classified preference migration - Google Patents
- Publication number: CN115438732B (application CN202211085488.3A)
- Authority
- CN
- China
- Prior art keywords: user, representation, preference, domain, source domain
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides a cross-domain recommendation method for cold-start users based on classified preference migration, comprising the following steps: S1, learning embedded representations of users and items in the source domain and in the target domain respectively through pre-training, and obtaining each user's preference representation in the source domain with a preference encoder; S2, clustering users with similar preferences into clusters through an unsupervised clustering algorithm to obtain the class label and centroid of each class of users, the centroid being the general preference representation of that class of users; S3, inputting the general preference representation into a meta-network to generate a class-type bridge function for that class of users, so that users of different classes use different bridge functions; S4, inputting the cold-start user's source-domain embedded representation into the class-type bridge function to obtain the predicted embedded representation in the target domain. The method effectively improves the accuracy and robustness of recommendation and avoids the overfitting problem.
Description
Technical Field
The invention relates to the technical field of recommendation, and in particular to a cross-domain recommendation method for cold-start users based on classified preference migration.
Background
Recommender systems are one way of alleviating information overload; they effectively realize personalized information filtering and have been applied in many real-world services such as Amazon, Netflix, and YouTube. Collaborative Filtering (CF) is an important milestone in the development of recommender systems: it learns user-user and item-item similarities from explicit interactions (i.e., ratings) between users and items, and has been very successful at modeling user preferences. However, CF-based models suffer from the long-standing user/item cold-start problem: it is difficult to recommend items to new users without an interaction history, and newly produced items, having no interaction information, cannot be accurately recommended to the users who need them. In short, the data sparsity problem hinders learning effective representations for recommendation.
To alleviate the cold-start problem, Cross-Domain Recommendation (CDR) has become a research hotspot in recent years. Its main aim is to transfer appropriate knowledge from a source domain to a target domain through transfer learning, thereby solving the target domain's cold-start problem. In this task, CDR builds the mapping relationship between the source and target domains from users that exist in both domains (i.e., overlapping users). In contrast to overlapping users, users that exist only in the source domain and not in the target domain are referred to as cold-start users. The goal of CDR is to provide better recommendations for the cold-start users of the target domain.
A promising line of recent CDR work tackles the challenging cold-start problem by learning a mapping function that links the source and target domains, thereby transferring appropriate knowledge from the information-rich source domain to the target domain. This idea was first realized by Embedding and Mapping (EMCDR). Fig. 1(a) gives an illustration of the EMCDR model. In EMCDR, all users share one common bridge function to migrate knowledge from the source domain to the target domain. The method first learns embedded representations of users and items in the two domains with a latent semantic model, then trains a mapping function mainly on active users and popular items to obtain a bridge function for predicting the embeddings of cold-start users in the target domain. The most commonly used latent semantic models are two basic ones, MF and BPR. Finally, given a cold-start user, the user's embedded representation in the source domain is fed into the bridge function to obtain the user's initial embedded representation in the target domain.
However, EMCDR still has serious problems. On the one hand, sharing one common bridge function among all users cannot reflect personalized recommendation and reduces recommendation accuracy; it is an overly coarse-grained treatment. On the other hand, training the bridge function using only some active users and trending items is unstable and ignores another large portion of important users and items, weakening the model's generalization ability. In real life, because of differences between individuals, it is difficult to capture a person's real interests through a general bridge function, which greatly impairs CDR performance. In addition, selecting only active users and popular items to train the bridge function inevitably introduces a certain popularity bias, degrading the user experience.
To alleviate these drawbacks, PTUPCDR proposes learning a personalized bridge function for each user, making up for the inadequacies of EMCDR and achieving a better effect. However, PTUPCDR's disadvantage is that while it learns a personalized bridge function for each user, this may be too fine-grained, leading to overfitting.
Therefore, how to provide accurate recommendation for the cold start user of the target domain and ensure to provide good processing granularity so as to avoid the over-fitting problem is a problem to be solved urgently.
Disclosure of Invention
The invention aims to at least solve the above technical problems in the prior art, and in particular creatively provides a cross-domain recommendation method for cold-start users based on classified preference migration.
In order to achieve the above object, the present invention provides a cross-domain recommendation method for cold-start users based on classified preference migration, comprising the following steps:
S1, learning embedded representations of users and items in the source domain and in the target domain respectively through pre-training, and obtaining each user's preference representation in the source domain with a preference encoder; the pre-training adopts MF.
S2, clustering users with similar preferences into clusters through an unsupervised clustering algorithm to obtain the class label and centroid of each class of users; the centroid is the general preference representation of that class of users. This step builds on preference encoding, which uses an attention MLP followed by Softmax normalization.
S3, inputting the general preference representation into a meta-network to generate a class-type bridge function for that class of users, so that users of different classes use different bridge functions. The parameters of the class-type bridge functions are learned with the meta-network, which is an MLP, so that the bridge functions of the different classes can be quickly learned and applied for prediction.
S4, inputting the cold-start user's source-domain embedded representation into the class-type bridge function to obtain the predicted embedded representation in the target domain, thereby completing task-oriented training.
Further, obtaining the user's preference representation in the source domain with the preference encoder includes:
s1-1, given a user's interaction sequence S_i^s in the source domain, extracting the overall preference characteristics of the source-domain user with a representation-learning method and migrating them to the target domain to predict the embedded representation of the target-domain cold-start user.
Given a sequence of historical interactions of a cold-start user in the source domain, it can be seen intuitively that the items in the sequence are of varying importance. In other words, each item contributes differently to the expression of the user's preferences.
The interaction sequence is encoded into a single representation of the source-domain user preference. We therefore add an attention mechanism to the sequence modeling and use a weighted sum as the integration operation:

p_i^s = f_Att(S_i^s) = Σ_j a_j · v_j^s

where p_i^s refers to the preference representation of the user in the source domain after preference encoding;

f_Att(·) represents the attention mechanism function;

a_j represents the attention weight of item v_j^s with respect to the user's overall preference, i.e., the importance of the item when the user's preference representation is encoded.
s1-2, taking the embedded-representation sequence of the items the user interacted with in the source domain as the input of an attention network, whose output is the attention score;

the attention network is expressed as:

a'_j = W_2 · ReLU(W_1 · v_j^s + b)

where a'_j denotes the output of the attention network: a scalar raw attention score, which is finally normalized;

W_1, W_2 are two learnable matrices;

b is a bias vector;

ReLU(·) denotes the ReLU activation function;
s1-3, normalizing the attention scores with a Softmax function to obtain the final attention scores:

a_j = exp(a'_j) / Σ_m exp(a'_m)

where a'_j denotes the raw attention score output by the attention network;

a_j denotes the final attention score, a scalar;

a'_m denotes the raw attention score of the m-th item in the sequence. The MLP has two hidden layers.

Normalization helps improve model performance, since it prevents attention scores from becoming too large or too small.
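To make the encoder concrete, here is a minimal NumPy sketch of steps s1-1 to s1-3; all weights are random stand-ins for learned parameters, and the shapes and names are illustrative, not the patent's implementation:

```python
import numpy as np

def encode_preference(item_embs, W1, W2, b):
    """Attention-pooled user preference over source-domain item embeddings (s1-1..s1-3)."""
    hidden = np.maximum(0.0, item_embs @ W1 + b)   # ReLU(W1 . v_j + b), shape (n, d)
    raw = hidden @ W2                              # raw attention scores a'_j, shape (n,)
    raw = raw - raw.max()                          # numerical stability for Softmax
    a = np.exp(raw) / np.exp(raw).sum()            # final attention scores a_j (sum to 1)
    return a @ item_embs                           # weighted sum: preference p_i^s, shape (d,)

rng = np.random.default_rng(0)
d, n = 8, 5
items = rng.normal(size=(n, d))                    # embedded items the user interacted with
W1, W2, b = rng.normal(size=(d, d)), rng.normal(size=d), rng.normal(size=d)
p = encode_preference(items, W1, W2, b)
print(p.shape)  # (8,)
```

In a trained model W_1, W_2, and b would be learned jointly with the rest of the network; the weighted sum collapses a variable-length interaction sequence into one fixed-size preference vector.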
Further, the clustering includes: clustering the users according to the preference of the users to obtain class labels and centroids of each class of users;
the Euclidean distance is used as the measure of user-preference similarity, with the following formulas:

dist_ed(p_i^s, p_j^s) = sqrt( Σ_{k=1}^{d} (p_{i,k}^s − p_{j,k}^s)^2 )

L, C = f_cluster(p_1^s, p_2^s, …, p_m^s)

where p_i^s denotes the preference vector representation of user i in the source domain;

dist_ed(·) is the Euclidean distance function;

d denotes the dimension;

L denotes the label vector;

C denotes the matrix composed of the cluster-center vectors;

L, C are the results returned by the clustering function: the label vector and its corresponding centroid matrix;

f_cluster(·) refers to the clustering function;

p_1^s, p_2^s, …, p_m^s denote the preference vector representations of user 1, user 2, …, user m in the source domain, respectively.
Further, the clustering function is a k-means algorithm.
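As a concrete illustration of the clustering step, here is a small NumPy k-means sketch with Euclidean distance; the farthest-point initialization and the synthetic data are illustrative choices, not the patent's implementation:

```python
import numpy as np

def kmeans(prefs, k, iters=100):
    """Plain k-means with Euclidean distance over user preference vectors.

    prefs: (m, d) preference vectors p_1^s .. p_m^s.
    Returns the label vector L (m,) and centroid matrix C (k, d).
    """
    # Farthest-point initialization: deterministic and spreads the centroids out.
    C = [prefs[0]]
    for _ in range(k - 1):
        d2 = np.min([np.linalg.norm(prefs - c, axis=1) for c in C], axis=0)
        C.append(prefs[int(d2.argmax())])
    C = np.array(C)
    for _ in range(iters):
        dists = np.linalg.norm(prefs[:, None, :] - C[None, :, :], axis=2)  # dist_ed
        L = dists.argmin(axis=1)                      # nearest-centroid class labels
        newC = np.array([prefs[L == j].mean(axis=0) if np.any(L == j) else C[j]
                         for j in range(k)])
        if np.allclose(newC, C):
            break
        C = newC
    return L, C

rng = np.random.default_rng(1)
prefs = np.vstack([rng.normal(0.0, 0.1, size=(20, 4)),    # one preference "type" near 0
                   rng.normal(3.0, 0.1, size=(20, 4))])   # another type near 3
L, C = kmeans(prefs, k=2)
print(sorted(set(L.tolist())))  # [0, 1]
```

The returned centroids C are the general preference representations fed to the meta-network; the description later mentions a Mini-Batch variant, which would simply run each assignment/update step on a sampled batch instead of all preference vectors.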
Further, the meta-network is an MLP with two hidden layers, expressed as:

w_i = h(c_i; θ)

where h(·) refers to the meta-network;

θ refers to the parameter set of the meta-network, comprising weight matrices and bias vectors;

c_i refers to the centroid vector corresponding to user category l_i.

The class-type bridge function is then:

f_{l_i}(·; w_i)

where f_{l_i}(·; w_i) denotes the class-type bridge function of category l_i;

f(·) is a linear function;

the MLP has activation units using ReLU(·) as the activation function, which gives a better nonlinear activation effect and fits the data better.
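The meta-network-to-bridge construction can be sketched as follows: a two-hidden-layer MLP maps a class centroid to a flattened parameter vector, which is reshaped into the matrix of a linear bridge. All dimensions and weights below are illustrative stand-ins:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def meta_network(c, theta):
    """h(c_i; theta): two-hidden-layer MLP mapping a class centroid to bridge parameters."""
    h1 = relu(c @ theta["W1"] + theta["b1"])
    h2 = relu(h1 @ theta["W2"] + theta["b2"])
    w = h2 @ theta["W3"] + theta["b3"]          # flattened parameter vector w_i
    return w.reshape(len(c), len(c))            # parameters of the linear bridge

def bridge(u_src, W_bridge):
    """Linear class-type bridge f(u^s; w_i): source embedding -> predicted target embedding."""
    return u_src @ W_bridge

d, hdim = 4, 16
rng = np.random.default_rng(2)
theta = {"W1": rng.normal(size=(d, hdim)),     "b1": np.zeros(hdim),
         "W2": rng.normal(size=(hdim, hdim)),  "b2": np.zeros(hdim),
         "W3": rng.normal(size=(hdim, d * d)), "b3": np.zeros(d * d)}
W_i = meta_network(rng.normal(size=d), theta)  # bridge parameters for one user class
u_hat_t = bridge(rng.normal(size=d), W_i)      # predicted target-domain embedding
print(W_i.shape, u_hat_t.shape)  # (4, 4) (4,)
```

The key point is that θ (the meta-network weights) is shared globally, while w_i differs per class because it is generated from that class's centroid c_i.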
Further, step S4 includes:

for a cold-start user u_i, inputting the user's embedded representation in the source domain into the class-type bridge function to obtain the predicted embedded representation in the target domain; the cold-start user exists only in the source domain, not in the target domain.

The predicted embedding of the cold-start user in the target domain is expressed as:

û_i^t = f(u_i^s; w_i) = f(u_i^s; h(c_i; θ))

where f(·) is a linear function;

û_i^t refers to the predicted embedded representation in the target domain obtained through the class-type bridge transformation.
Further, the method comprises optimizing the global parameters with the following loss function:

L(θ) = Σ_{(u_i, v_j)} | r_ij − ⟨f(u_i^s; h(c_i; θ)), v_j^t⟩ |

where |·| denotes absolute value;

r_ij denotes the true rating of the cold-start user in the target domain;

f(·) is a linear function.
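The task-oriented loss can be exercised numerically; the absolute-error form follows the text (squared error is another common choice), and the embeddings and ratings below are synthetic stand-ins:

```python
import numpy as np

def task_loss(r_true, u_hat_t, item_embs_t):
    """Sum over observed target ratings of | r_ij - <u_hat_i^t, v_j^t> |."""
    preds = item_embs_t @ u_hat_t                  # dot-product rating predictions
    return float(np.abs(np.asarray(r_true) - preds).sum())

rng = np.random.default_rng(3)
d, n = 4, 6
u_hat = rng.normal(size=d)                         # predicted target embedding of one user
items = rng.normal(size=(n, d))                    # target-domain item embeddings
ratings = items @ u_hat + rng.normal(0, 0.1, n)    # synthetic "true" ratings
loss = task_loss(ratings, u_hat, items)
print(loss >= 0.0)  # True
```

In training, this loss would be backpropagated through the bridge and the meta-network to update θ, rather than fitting the bridge to embeddings directly as mapping-based methods do.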
Further, the method comprises testing the training result:

a bridge function is obtained from the globally trained parameters, and RMSE and MAE are computed between the predicted values and the true values to obtain the test result.

The predicted values are the labeled user ratings predicted from the user and item information;

the true values are the actual labeled user ratings.
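The two test metrics can be sketched directly; these are the standard RMSE/MAE definitions, and the sample ratings are illustrative:

```python
import numpy as np

def rmse_mae(y_true, y_pred):
    """RMSE and MAE between true and predicted ratings."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt((err ** 2).mean())), float(np.abs(err).mean())

rmse, mae = rmse_mae([4.0, 3.0, 5.0], [3.5, 3.0, 4.0])
print(round(rmse, 4), mae)  # 0.6455 0.5
```

RMSE penalizes large individual errors more heavily than MAE, which is why the two are usually reported together.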
In summary, with the above technical scheme, embedded representations of users and items in the source and target domains are obtained through pre-training, and clustering plus a meta-network then generate a class-type bridge function for each class of users, which effectively improves the accuracy and robustness of recommendation and avoids the overfitting problem.

The model is trained with a task-oriented optimization procedure that takes the user rating directly as the optimization target, replacing the usual mapping-based approach. This avoids the influence of poorly learned user embeddings and enlarges the set of training samples.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
fig. 1 is a schematic diagram of a model.
Wherein (a) in fig. 1 is a schematic diagram of an EMCDR model;
FIG. 1 (b) is a schematic representation of the CDRCPT model of the present invention.
Fig. 2 is a schematic structural diagram of the CDRCPT model of the present invention.
Fig. 3 is a schematic diagram showing the impact of the number of clusters at different test (cold start) user scales in scenario 2 of the present invention.
Fig. 4 is the result of generalization experiments performed by the present invention using CMF, EMCDR, PTUPCDR and CDRCPT on both the MF and GMF semantic models, respectively.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
The invention provides a novel method, CDRCPT, for cold-start-user cross-domain recommendation based on classified preference migration, shown in Fig. 2. The method first learns embedded representations of the users and items of the source domain and the target domain, respectively, through pre-training. Then, users with similar preferences are clustered into clusters by an unsupervised clustering algorithm, and each cluster center serves as the general preference representation of that class of users. Next, the general preference embedding of a class is input into the meta-network via meta-learning, generating the class-type bridge function used by that class of users. Finally, the initialized embedded representation of a cold-start user in the target domain is obtained using the class-type bridge function.
Here k-means is one of the clustering algorithms; we adopt Mini-Batch clustering, which does not use all samples but selects one batch of samples at a time for clustering.
A specific embodiment of the invention is shown in Fig. 1(b): at the very beginning, the movie sequences that four users watch in the source domain differ. After clustering, we find that the movie preferences of Mary and Angor are similar (both like romance-type movies), and the movie preferences of Waylon and Johnny are similar (both like action-type movies); users of the same class can then use the same bridge function when they migrate. This example also illustrates the effect of clustering on CDR from two aspects. On the one hand, the order of movies viewed by users differs significantly, and similar users tend to watch the same series of movies, which lays a good foundation for clustering user preferences. On the other hand, users with similar preferences can be clustered into one cluster with the cluster center as the general preference representation of such users, thereby incorporating the collaborative-filtering signal into the class-type bridge function.
Note that the generation of a class-type bridge function depends on the cluster-center representation obtained for each class of users. In practice, the generated bridge functions can be regarded as models whose parameters are learned by the meta-network.
In reality, controlling the degree of clustering is a complex problem, especially the selection of hyper-parameters: different data sets have different sensitivity to the number of clusters, and higher-order meta-networks are also difficult to optimize. The degree of clustering is typically measured by the inertia and silhouette coefficients; the smaller the inertia or the larger the silhouette coefficient, the better the clustering. For model optimization we employ a task-oriented optimization process rather than the mapping-based approach used by most methods, because mapping-oriented optimization is very sensitive to the quality of the learned user embeddings, and in a real recommender system it is difficult to learn compact and comprehensive embedded representations for all users. We therefore train the meta-network with a task-oriented optimization procedure and take the user rating directly as the optimization objective. This optimization is in fact similar to the prediction problem in linear regression.
This work in the cross-domain recommendation field addresses single-target CDR, so there is one source domain and one target domain. The purpose is to enhance the recommendation performance of the target domain with appropriate information from the source domain. Each domain has a user set U, an item set V, and a rating matrix R. r_ij refers to the interaction rating between user u_i and item v_j; the rating represents the user's feedback on the item. To distinguish the two domains, we use U^s, V^s, R^s to denote the user set, item set, and rating matrix of the source domain, and U^t, V^t, R^t for the target domain. We define the users existing in both domains as the overlapping users U^o = U^s ∩ U^t. In this work, the items of the source domain and the target domain are unassociated, meaning that the overlapping users are the only hub for building the bridge function.
In the context of representation learning, it is critical to construct a complete, compact representation for users and items. Typically, users and items are converted into dense vectors, also known as embeddings or latent factors, and the user set and item set each form a corresponding vector space. Within any one vector space the dimensions are the same, which facilitates transforming the user's embedded representation between the two domains. In this work, we define the embedded representations of user u_i and item v_j as u_i^domain ∈ R^d and v_j^domain ∈ R^d, where d denotes the dimension of the embedded representation and domain ∈ {s, t} is a domain label specifying the user, item, and rating matrix of the source domain or target domain. For each user u_i from the source domain, we later encode the user's historical interactions to form the user's embedded representation in the source domain. The user's historical interaction item sequence in the source domain can be denoted S_i^s = {v_{t_1}^s, v_{t_2}^s, …, v_{t_n}^s}, where t_n is the timestamp of an interaction between the user and an item in the source domain, and v_{t_1}^s represents the item record the user interacted with in the source domain at timestamp t_1.
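The MF pre-training that produces these embedding spaces can be sketched with a tiny full-batch gradient loop; all hyper-parameters and the synthetic rating matrix are illustrative, and a real system would use a proper MF implementation:

```python
import numpy as np

def mf_pretrain(R, mask, d=3, lr=0.02, reg=0.01, epochs=500, seed=0):
    """Minimal MF: full-batch gradient steps on observed entries only.

    R: (m, n) rating matrix; mask: (m, n), 1 where a rating is observed.
    Returns user embeddings U (m, d) and item embeddings V (n, d).
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.normal(0, 0.1, size=(m, d))
    V = rng.normal(0, 0.1, size=(n, d))
    for _ in range(epochs):
        err = mask * (R - U @ V.T)          # residual on observed entries
        U = U + lr * (err @ V - reg * U)    # gradient step for user embeddings
        V = V + lr * (err.T @ U - reg * V)  # gradient step for item embeddings
    return U, V

rng = np.random.default_rng(4)
true_U, true_V = rng.normal(size=(10, 3)), rng.normal(size=(8, 3))
R = true_U @ true_V.T                       # synthetic low-rank "ratings"
mask = (rng.random(R.shape) < 0.8).astype(float)
U, V = mf_pretrain(R, mask)
print(U.shape, V.shape)  # (10, 3) (8, 3)
```

One such factorization is run per domain; the resulting u_i^s vectors are what the preference encoder and bridge function consume downstream.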
S1, preference encoding is carried out on users
To cluster user preferences and generate a class-type bridge function, a comprehensive and compact representation of user preferences needs to be generated for each user. As previously mentioned, user preferences originate primarily from the user's historical interaction sequence. Since the cold-start user does not have any interactive items in the target domain, it is important to make full use of the preference information of the source domain user.
Given a user's interaction sequence S_i^s in the source domain, the overall preference characteristics of the source-domain user can be extracted using a representation-learning method and migrated to the target domain to predict the embedded representation of the target-domain cold-start user. Given the historical interaction sequence of a cold-start user in the source domain, it is intuitively clear that the items in the sequence are of varying importance; in other words, each item contributes differently to the expression of the user's preference. We therefore encode the interaction sequence into a single representation of the source-domain user preference, adding an attention mechanism to the interaction-sequence coding and using a weighted sum as the integration operation:

p_i^s = f_Att(S_i^s) = Σ_j a_j · v_j^s

where f_Att(·) represents the attention mechanism function; a_j represents the attention weight of item v_j^s with respect to the user's overall preference, i.e., the importance of the item when the user's preference representation is encoded; p_i^s refers to the preference representation of the user in the source domain after preference encoding, and v_j^s denotes the embedded representation of item v_j in the source domain. Next, we learn the attention score of each item with an attention network, whose input is the embedded-representation sequence of the items the user interacted with in the source domain. Formally, the attention network is defined as:

a'_j = W_2 · ReLU(W_1 · v_j^s + b)

where a'_j, the output of the attention network, is a scalar raw attention score that is finally normalized; v_j^s denotes an item in the source-domain interaction sequence; W_1 and W_2 are two learnable matrices and b is the bias vector; d denotes the dimension of the feature embedding. The attention score is normalized by a Softmax function, following common practice in previous work. We use ReLU(·) as the activation function, which experiments show gives better results.
S2, clustering the users
We have observed that the preferences of some users in the source domain are similar. This intuition is closely related to the notion of like attracting like. Learning the bridge function used by each class of users helps alleviate the target domain's cold-start problem. On the one hand, the collaborative-filtering signal can be well integrated into the construction of class-type bridge functions through clustering. On the other hand, clustering can be used to construct the bridge functions used by different classes of users, so that different classes of users use different bridge functions. The class-type bridge function has better granularity and is very flexible: it can be adjusted according to user characteristics to achieve better performance. Based on this idea, we propose to cluster users according to their preferences to obtain the class label and centroid of each class of users, where the centroid is the cluster center. Specifically, we use the Euclidean distance as the measure of user-preference similarity. The clustering returns the cluster centers (centroids) and the corresponding labels, and can be expressed as:

dist_ed(p_i^s, p_j^s) = sqrt( Σ_{k=1}^{d} (p_{i,k}^s − p_{j,k}^s)^2 )

L, C = f_cluster(p_1^s, p_2^s, …, p_m^s)

where dist_ed(·) is the Euclidean distance function, used to evaluate the similarity of any two user preference vectors p_i^s and p_j^s, each composed of scalars of dimension d; p_i^s denotes the preference vector representation of user i in the source domain and p_j^s that of user j. L = {l_1, l_2, …, l_r} and C = {c_1, c_2, …, c_r} refer, respectively, to the label vector (categories) and the matrix composed of the cluster-center vectors; r denotes the total number of categories, and after clustering there are r categories with r corresponding class-type bridge functions. p_1^s, …, p_m^s denote the preference vectors used in training; the training mainly clusters the source-domain preferences of the overlapping users.

Specifically, the centroid vector corresponding to users of class l_i is c_i, which also serves as the unified embedded representation of such users. f_cluster(·) refers to the clustering function. In this work we use the fast and efficient k-means algorithm as the clustering function; besides this, Gaussian mixture clustering, the k-center method, and the CLARANS, DIANA, and BIRCH algorithms can also be used as the clustering function. The k-means algorithm has only one hyper-parameter, k, which is discussed in detail in the experimental section.
S3, inputting the general preference representation into the meta-network
Intuitively, user preferences in the source domain and the target domain are similar. After obtaining a source-domain user's preference, a bridge function can be constructed to generate the predicted preference embedding of that user in the target domain, and this bridge is related to the user's characteristics. Based on this intuition, we construct the class-type bridge functions using a meta-network: feeding the general preference representation of a user class into the meta-network generates the corresponding bridge function, which maps from the source domain to the target domain. The proposed meta-network is an MLP with two hidden layers, expressed as follows:
$$w_{l_i} = h(c_i; \theta)$$

where $h(\cdot)$ refers to the meta-network, a two-layer MLP; $\theta$ refers to the parameter set of the meta-network, mainly comprising weight matrices and bias vectors; and $c_i$ is the centroid vector corresponding to user class $l_i$. Feeding $c_i$ into the meta-network yields the output $w_{l_i}$, the parameter vector shared by all users of class $l_i$. From it we derive the class-type bridge function:
$$f_{l_i}(\cdot) = f(\cdot\,; w_{l_i})$$

where $f_{l_i}(\cdot)$ denotes the class-type bridge function for class $l_i$, and $f(\cdot)$ is a simple linear layer, following EMCDR. $w_{l_i}$ is the key parameter in the construction of the class-type bridge function. For a cold-start user, we input the user's source-domain embedded representation into the class-type bridge function to obtain the predicted embedded representation in the target domain. The predicted embedding of the cold-start user in the target domain is expressed as:
$$\hat{v}_u^t = f(v_u^s; w_{l_i})$$

where $v_u^s$ refers to user $u$'s embedded representation in the source domain, and $\hat{v}_u^t$ refers to the predicted embedded representation of the user in the target domain obtained through the class-type bridge function transformation. $\hat{v}_u^t$ can then be used for rating prediction via its inner product with item embeddings in the target domain. The overall structure of CDRCPT is shown in figure 2.
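A minimal NumPy sketch of the pipeline just described (a meta-network mapping a cluster centroid to the weights of a linear bridge, then applying that bridge to a source-domain embedding) follows. The dimension value, random parameters, and variable names are illustrative assumptions; only the hidden width 2·d and the d×d output shape follow the implementation details given later in the text:

```python
import numpy as np

d = 4                                   # embedding dimension (example value)
rng = np.random.default_rng(0)

# Meta-network parameters: two-layer MLP, hidden width 2*d, output d*d.
W1, b1 = rng.normal(size=(d, 2 * d)) * 0.1, np.zeros(2 * d)
W2, b2 = rng.normal(size=(2 * d, d * d)) * 0.1, np.zeros(d * d)

def meta_network(c):
    """Map a cluster centroid c (d,) to bridge-function weights (d x d)."""
    h = np.maximum(c @ W1 + b1, 0.0)    # ReLU hidden layer
    return (h @ W2 + b2).reshape(d, d)  # weights of the linear bridge

def bridge(v_src, W_bridge):
    """Linear class-type bridge: source embedding -> predicted target embedding."""
    return v_src @ W_bridge

centroid = rng.normal(size=d)           # general preference of one user class
W_bridge = meta_network(centroid)
v_user_src = rng.normal(size=d)         # cold-start user's source-domain embedding
v_user_tgt = bridge(v_user_src, W_bridge)
```

Every user of the same class reuses the same `W_bridge`, which is what distinguishes the class-type bridge from EMCDR's single shared bridge and PTUPCDR's per-user bridge.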
S4, training with tasks as guidance
To improve CDR performance and increase the number of training samples, we train the objective function with a task-oriented training strategy. Prior mapping-based methods suffer from the limited interactions of overlapping users in the target domain; with so little training data, some bias is unavoidable. Moreover, for cold-start users of the target domain, learning a reasonable embedded representation is challenging, and an unreasonable embedded representation negatively impacts the model.
Thus, we train the meta-network with a task-oriented strategy: the ultimate goal of the training paradigm is to optimize the model parameters against the user's actual ratings. Essentially, the strategy is a linear regression task that optimizes the model parameters using the error between the final predicted rating and the ground truth. This task-oriented optimization strategy is expressed as:
$$\min_{\theta} \sum_{(u_i, v_j) \in R^t} \left| r_{ij} - \hat{v}_{u_i}^t \cdot v_j^t \right|$$

where $\theta$ is the parameter set of the optimization function; the training process updates the weight matrices and bias vectors with the back-propagation algorithm. $R^t$ refers to the interaction rating matrix of the overlapping users in the target domain, and $|\cdot|$ denotes the absolute value. $r_{ij}$ refers to the true rating value in the target domain, and $v_j^t$ is the embedded representation of item $v_j$ within the target domain.
From the optimization function it can be seen that the task-oriented optimization procedure effectively avoids the influence of unreasonable user embeddings. Moreover, it greatly increases the number of training samples. A simple argument shows why: if there are N overlapping users and a mapping-based approach is adopted, there are only N samples, and these N samples must be split into training and test sets, which leads to insufficient training data. With a task-oriented optimizer, however, the samples are all the ratings in the target domain, not just the users' embedded representations. If there are N overlapping users and M items in the target domain, then approximately N×M rating samples are available for training and testing. The task-oriented training strategy therefore improves model performance to a certain extent and enlarges the training and test sets, which helps avoid over-fitting.
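The sample-count argument above can be made concrete with a toy calculation (the numbers N and M here are illustrative, not from the patent):

```python
# Mapping-based training: only the N overlapping users yield samples
# (embedding pairs), which must further be split into train and test.
# Task-oriented training: every (overlapping user, target-domain item)
# rating is a sample, roughly N * M in total.
N, M = 5000, 2000                  # illustrative counts, not from the patent
mapping_samples = N
task_oriented_samples = N * M
print(task_oriented_samples)       # a factor of M more data than mapping-based
```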
Experiment and analysis
We answer the following research questions experimentally: RQ1: Why introduce CDR, and how does CDRCPT compare with other models in a cold-start scenario? RQ2: What is the effect of the number of clusters k on CDRCPT? RQ3: Why does CDRCPT perform better, and how well does it generalize?
Table 1: data in cross-domain scenarios
Where Overlap refers to the number of overlapping users and Ratio refers to the percentage of overlapping users.
1 experiment set-up
Data sets. Following some previous work, the dataset we use is the well-known large public Amazon dataset. In data preprocessing, we use the same partitioning method as prior work to construct the cross-domain datasets of this work. Specifically, unlike most previously used cross-domain data, we adopt the newly published 2018 Amazon recommendation dataset and conduct all experiments on it. The dataset is already cleaned and mainly comprises 24 disjoint domain datasets. Each domain dataset contains the user ID, item ID, rating, and other related information; in preprocessing, only the user ID, item ID, and rating are retained to build our cross-domain training and test sets. This work selects two real CDR dataset pairs: Scenario 1: Music-Movie and Scenario 2: Game-Video, taking Music and Game as source domains and Movie and Video as the corresponding target domains. In a CDR scenario the source domain typically has more data than the target domain. Since this work addresses asymmetric CDR, the goal is to use only the rich information of the source domain to obtain the target-domain preferences of cold-start users who have interactions only in the source domain and recommend target-domain items to them. We filter to users with more than 5 interactions and items with more than 10 interactions. For dataset partitioning, we construct training and test sets with different ratios: β denotes the test-set ratio, with values of 20%, 30%, 50%, and 80%; it also represents the percentage of cold-start users in the total data sample. Table 1 presents the specific data in the CDR scenarios.
Evaluation metrics. Since training is task-oriented, the training process is also a form of linear regression. Following previous work, we use the error between the predicted rating and the ground truth to evaluate the performance of CDRCPT. In this study we use RMSE (root mean square error) and MAE (mean absolute error), two effective error metrics.
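As a small sketch, the two metrics can be computed as follows (these are the standard definitions, not code from the patent):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between true and predicted ratings."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error between true and predicted ratings."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))
```

Both are scale-dependent; RMSE penalizes large errors more heavily than MAE, which is why the two are reported together.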
Baseline models. We select two types of baselines, single-domain and cross-domain recommendation models, which allows a direct comparison between the two. Since this work addresses asymmetric cross-domain recommendation and achieves knowledge transfer between the source and target domains by building bridge functions, the CDR baselines are mainly asymmetric, bridge-function-based models. We therefore compare CDRCPT against the following baselines: 1) TGT: a single-domain recommendation model trained only on target-domain data. 2) CMF: shares the user embedding across both domains. 3) EMCDR: a CDR model based on embedding and mapping; it first learns user and item embeddings with a latent factor model and then builds a single bridge function shared by all users. 4) PTUPCDR: the most recent method, an asymmetric, bridge-function-based model that uses a meta-network to learn a per-user bridge function between the source and target domains, realizing personalized preference transfer.
Implementation details. Our model and the baseline models are implemented with the PyTorch framework. We mainly use an MF-based model for pre-training to obtain the embedded representations of users and items in the source domain; a GMF model is additionally selected for the model-generalization experiments. For each scenario and model we use a fixed random seed (10) to split the data and train the model. Adam is adopted to optimize internal parameters such as the learning rate and weight decay. The learning rate is tuned by grid search over {0.001, 0.005, 0.01, 0.02, 0.1}. We set the embedding dimension of all modules except the clustering to 10 and the mini-batch size to 256. For clustering, the fast and efficient k-means algorithm is applied to the user preferences, with a reasonable mini-batch size chosen according to the characteristics of the data; to balance performance and efficiency, the batch input size of the clustering is typically 5000 or 10000. The hyper-parameter values were obtained through repeated experiments and depend on both the data distribution and the batch-size setting. The meta-network has 2×d hidden units, and its output shape is d×d.
Table 2: cold start experimental result under two cross-domain scenes
Where RMSE and MAE represent the root mean square error and the mean absolute error, respectively; the best results are indicated in bold. β represents the proportion of test (cold-start) users.
For the user preference encoder, we employ an attention mechanism. The attention mechanism is a powerful tool that integrates the items a user has interacted with in the source domain into a single representation; the item-sequence length chosen for this study is 20. The attention scores are learned with a two-layer MLP with hidden units. For user preference clustering, different mini-batch sizes and training configurations are tested, and the more reasonable settings are selected from extensive experiments. To assess the quality of the clustering we use two common evaluation coefficients, the inertia and the silhouette coefficient: in general, the better the clustering, the lower the inertia and the higher the silhouette coefficient, and vice versa. In the cold-start experiments, different ratios of test users are used to evaluate model performance. In the cluster-number experiments, we mainly use Scenario 2 for testing and analyze the choice of the number of clusters in detail for each test-set ratio β. For each scenario we report results under the best number of clusters.
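The attention-based preference encoder described above (a two-layer MLP scoring each interacted item, Softmax normalization, then a weighted sum) can be sketched in NumPy as follows; the function and parameter names are illustrative assumptions:

```python
import numpy as np

def attention_pool(V, W1, b, W2):
    """Pool a user's interacted-item embeddings V (n x d) into one preference
    vector: a two-layer MLP scores each item, Softmax normalizes the scores,
    and the pooled representation is the weighted sum of the item embeddings."""
    s = np.maximum(V @ W1 + b, 0.0) @ W2     # raw attention scores, shape (n,)
    s = s - s.max()                          # stabilize the exponentials
    a = np.exp(s) / np.exp(s).sum()          # final scores, sum to 1
    return a @ V                             # weighted sum -> preference vector
```

With the sequence length of 20 and embedding dimension 10 from the text, `V` would be 20×10; setting `W2` to zeros recovers plain mean pooling, a useful sanity check.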
2 Cold Start experiment (RQ 1)
This section analyzes in detail the experimental results of the proposed CDRCPT in the cold-start scenario and explains why CDR outperforms single-domain recommendation. Since CDR mainly arose to address the cold-start problem, we follow previous work and evaluate CDR performance in a cold-start setting. Cold-start experiments verify the effectiveness of CDRCPT in different environments, and it obtains the best results on two real cross-domain datasets. The results are shown in Table 2, where the best results are in bold and β denotes the proportion of test (cold-start) users among all users. From the experimental data we draw the following conclusions: (1) TGT, the single-domain model trained directly on target-domain information, performs far from ideally. The likely reasons are sparse target-domain data, little available user information, and consequently poor user modeling. Comparing with the CDR models shows that enhancing the target domain with source-domain knowledge improves target-domain recommendation performance. (2) CMF, an extension of MF, directly merges the user information of the source and target domains into an embedded representation shared by both. It does not use a bridge function to connect the two domains, i.e. it ignores the differences between them when learning the user embedding, which may be why CMF is less effective, whereas bridge-function-based CDR models do account for the variability of user embeddings between the two domains. (3) EMCDR and PTUPCDR are cross-domain models built on bridge functions.
EMCDR has all users share one bridge function to transfer information from the source domain to the target domain, while PTUPCDR constructs a personalized bridge function for each user via meta-learning to realize personalized recommendation. Both models achieve good results in the cold-start experiments, yet both still have drawbacks. Having all users share one bridge function, as EMCDR does, is a coarse-grained approach that limits recommendation performance. PTUPCDR largely compensates for EMCDR's coarse granularity, but building a bridge function for every user may cause over-fitting and is more susceptible to noise and popularity bias. In other words, PTUPCDR may be too fine-grained; the cold-start experiments indicate that in certain scenarios PTUPCDR over-fits. (4) Based on this analysis of EMCDR and PTUPCDR, we propose CDRCPT, which sits between the two and offers a better processing granularity. The model clusters users by their preferences and then generates, via meta-learning, a class-type bridge function for each class, so that users with similar preferences share the same bridge function. The experiments show that CDRCPT achieves good results in the cold-start setting.
Discussion about the number of clusters (RQ 2)
This section discusses the choice of the number of clusters. Since this choice affects model performance, we run experiments on the number of clusters to answer RQ2. The k-means clustering algorithm from unsupervised learning groups users with similar preferences into the same cluster and yields a general preference representation for each class; this representation is then fed into the meta-network to generate the corresponding class-type bridge function. We discuss the effect of the number of clusters in Scenario 2 at four cold-start user ratios: 20%, 30%, 50%, and 80%. The results are shown in Fig. 3, which plots the RMSE of CDRCPT as a function of the number of clusters k; the performance of PTUPCDR does not depend on k and is drawn as a constant line for clear comparison. From Fig. 3 we conclude: (1) Clustering user preferences to build a class-type bridge function per class makes CDRCPT outperform PTUPCDR. (2) As k increases, the performance of CDRCPT gradually improves and eventually converges; at β = 50% CDRCPT is most visibly affected by the change in k. (3) The curves under the four settings follow a similar trend, first dropping and then flattening, which demonstrates the robustness of CDRCPT.
The above results show that clustering can construct a class-type bridge function shared by users with similar preferences and can effectively fuse collaborative filtering signals into the construction of the bridge function.
Discussion of generalization ability and related promotion (RQ 3)
In this section we perform a simple generalization experiment with different base models and analyze the improvements CDRCPT brings to CDR.
4.1 Generalization experiments: Note that the cold-start experiments in this work use the simple base model MF and do not examine whether other base models would affect our model. CDR model design focuses on how to bridge the two domains so as to migrate appropriate information from the source domain to the target domain. The generalization experiment in this section verifies whether changing the base model has a large impact on model performance. We run it in CDR Scenario 2 with the more complex GMF as the base model; GMF assigns different weights to different dimensions in the dot-product prediction function and can be viewed as a generalization of plain MF. All other control variables are kept consistent with the cold-start experiment. The results are shown in Fig. 4, which reports CMF, EMCDR, PTUPCDR, and CDRCPT on the two base models MF and GMF; CDRCPT achieves the best results. After replacing the base model, CDRCPT still performs well and outperforms the other three models, demonstrating its better performance and robustness across base models.
4.2 Explanation of the improvement: The effectiveness and generalization ability of CDRCPT have been verified through theoretical and experimental analysis, and the choice of the number of clusters has been discussed in detail. We attribute the better performance of CDRCPT to the following. First, clustering groups users with similar preferences into a cluster, and the class-type bridge function is built from the general preference embedding at the cluster center, so users with similar preferences share a common bridge function and collaborative filtering signals are incorporated into it. Second, CDRCPT has a better processing granularity. Finally, our model adopts a task-oriented strategy during training: the strategy directly targets the user's ratings, unlike mapping-based approaches that target the accuracy of the predicted embeddings of target-domain cold-start users. Task-oriented training not only follows the idea of linear regression but also greatly increases the number of training samples.
In summary, the present invention studies the asymmetric type of CDR, aiming to transfer user preferences from the source domain to the target domain and solve the target-domain cold-start problem. Most existing methods have all users share a common bridge function for preference transfer, which markedly reduces recommendation accuracy. We therefore propose a cross-domain recommendation algorithm based on classified preference migration (CDRCPT). The method uses clustering and meta-learning to construct a class-type bridge function for each class of users with similar preferences, so that different classes of users use different bridges. The class-type bridge functions transfer user preferences from the source domain to the target domain, and the unsupervised clustering fuses collaborative filtering signals into them.
To verify the effectiveness and robustness of CDRCPT, we conducted extensive experiments on the Amazon review datasets in two cross-domain scenarios, verified the method's generalization ability through generalization experiments, and analyzed the cluster-number hyper-parameter in detail. In the future we will study how to alleviate the biased representations caused by domain-specific factors arising from independent pre-training of the two domains, and will try to extend the approach to multi-domain recommendation.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
Claims (6)
1. A cross-domain recommendation method for cold start user based on classification preference migration is characterized by comprising the following steps:
s1, respectively learning embedded representations of users and items in a source domain and embedded representations of users and items in a target domain through pre-training, and obtaining a preference representation of the user in the source domain by using a preference encoder;
The obtaining, with a preference encoder, a preference representation of a user in a source domain includes:
s1-1, given a user's interaction sequence in the source domain, extracting the overall preference characteristics of the source-domain user with a representation learning method, and transferring them to the target domain to predict the embedded representation of the target-domain cold-start user;
adding an attention mechanism in sequence modeling, and adopting a weighted sum as the integration operation:

$$p_u^s = f_{Att}(v_1^s, v_2^s, \ldots, v_n^s) = \sum_j a_j v_j^s$$
wherein $p_u^s$ refers to the preference representation of the user in the source domain after preference encoding;
$f_{Att}(\cdot)$ represents the attention mechanism function;
s1-2, taking the embedded representation sequence of the items the user interacted with in the source domain as the input of the attention network, and obtaining the attention scores as its output;
the attention network is expressed as:

$$a_j' = W_2\,\mathrm{ReLU}(W_1 v_j^s + b)$$
wherein $a_j'$ represents the output of the attention network, i.e. the attention score;
$W_1$, $W_2$ are two learnable matrices;
$b$ is a bias vector;
$\mathrm{ReLU}(\cdot)$ represents the ReLU activation function;
s1-3, normalizing the attention scores through a Softmax function to obtain the final attention scores, expressed as:

$$a_j = \frac{\exp(a_j')}{\sum_m \exp(a_m')}$$
$a_j'$ represents the output of the attention network, i.e. the attention score;
$a_j$ represents the final attention score, which is a scalar;
$a_m'$ represents the attention score of the m-th item obtained by the MLP;
s2, clustering users with similar preference into a cluster through an unsupervised learning clustering algorithm to obtain class labels and centroids of each class of users; the centroid is a general preference representation of the type of user;
s3, inputting the general preference representation into a meta-network to generate the class-type bridge function of that class of users, so that different classes of users use different bridge functions;
the meta-network is an MLP structure with two hidden layers, expressed as:

$$w_{l_i} = h(c_i; \theta)$$
wherein h (·) refers to a meta-network;
θ refers to a set of parameters within the meta-network, the set of parameters including a weight matrix and a bias vector;
$c_i$ refers to the centroid vector corresponding to user class $l_i$;
thus, the class-type bridge function is:

$$f_{l_i}(\cdot) = f(\cdot\,; w_{l_i})$$
wherein $f_{l_i}(\cdot)$ denotes the class-type bridge function for class $l_i$;
f (·) is a linear function;
S4, inputting the embedded representation of the source domain where the cold-start user is located into the class-type bridge function to obtain the predicted embedded representation in the target domain.
2. The cold-start user-oriented cross-domain recommendation method based on classification preference migration of claim 1, wherein the clustering comprises: clustering the users according to the preference of the users to obtain class labels and centroids of each class of users;
the Euclidean distance is used as the measure of similarity of user preferences, with the formulas:

$$\mathrm{dist}_{ed}(p_i^s, p_j^s) = \sqrt{\sum_{k=1}^{d}\left(p_{i,k}^s - p_{j,k}^s\right)^2}, \qquad L, C = f_{cluster}(P^s)$$
wherein $p_i^s$ denotes the preference vector representation of user $i$ in the source domain;
$\mathrm{dist}_{ed}(\cdot)$ is the Euclidean distance function;
d represents a dimension;
l represents a label vector;
c represents a matrix composed of cluster center vectors;
l, C represents the result obtained by the clustering function, and comprises a label vector and a centroid matrix corresponding to the label vector;
$f_{cluster}(\cdot)$ refers to the clustering function;
3. The cold-start user-oriented cross-domain recommendation method based on classification preference migration of claim 2, wherein the clustering function is a k-means algorithm.
4. The cold-start user-oriented cross-domain recommendation method based on classification preference migration of claim 1, wherein S4 comprises:
for a cold-start user, inputting the embedded representation of the source domain where the user is located into the class-type bridge function to obtain the predicted embedded representation in the target domain;
the predicted embedding of the cold-start user in the target domain is expressed as:

$$\hat{v}_u^t = f(v_u^s; w_{l_i})$$
wherein f (·) is a linear function;
5. The cold-start user-oriented cross-domain recommendation method based on classification preference migration of claim 1, further comprising: optimizing the global parameters through the following loss function:

$$\min_{\theta} \sum_{(u_i, v_j)} \left| r_{ij} - \hat{v}_{u_i}^t \cdot v_j^t \right|$$
|·| represents absolute value;
$r_{ij}$ representing the true rating value in the target domain where the cold-start user is located;
f (·) is a linear function;
6. The cold-start user-oriented cross-domain recommendation method based on classification preference migration of claim 1, further comprising testing training results:
obtaining the bridge function from the global parameters obtained through training, and computing RMSE and MAE from the predicted values and true values to obtain the test results;
the predicted values are: the labeled user ratings obtained from the user and item information;
the true values are: the actual labeled user ratings.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211085488.3A CN115438732B (en) | 2022-09-06 | 2022-09-06 | Cross-domain recommendation method for cold start user based on classified preference migration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115438732A CN115438732A (en) | 2022-12-06 |
CN115438732B true CN115438732B (en) | 2023-05-26 |
Family
ID=84247628
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211085488.3A Active CN115438732B (en) | 2022-09-06 | 2022-09-06 | Cross-domain recommendation method for cold start user based on classified preference migration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115438732B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116244501B (en) * | 2022-12-23 | 2023-08-08 | 重庆理工大学 | Cold start recommendation method based on first-order element learning and multi-supervisor association network |
CN115757529B (en) * | 2023-01-06 | 2023-05-26 | 中国海洋大学 | Cross-domain commonality migration recommendation method and system based on multi-element auxiliary information fusion |
CN116028728B (en) * | 2023-03-31 | 2023-06-16 | 特斯联科技集团有限公司 | Cross-domain recommendation method and system based on graph learning |
CN116502271B (en) * | 2023-06-21 | 2023-09-19 | 杭州金智塔科技有限公司 | Privacy protection cross-domain recommendation method based on generation model |
CN116629983B (en) * | 2023-07-24 | 2023-09-22 | 成都晓多科技有限公司 | Cross-domain commodity recommendation method and system based on user preference |
CN116910375B (en) * | 2023-09-13 | 2024-01-23 | 南京大数据集团有限公司 | Cross-domain recommendation method and system based on user preference diversity |
CN117556148B (en) * | 2024-01-11 | 2024-08-09 | 南京邮电大学 | Personalized cross-domain recommendation method based on network data driving |
CN118332194B (en) * | 2024-06-13 | 2024-09-27 | 材料科学姑苏实验室 | Cross-domain cold start recommendation method, device, equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112149734A (en) * | 2020-09-23 | 2020-12-29 | 哈尔滨工程大学 | Cross-domain recommendation method based on stacked self-encoder |
CN113449205A (en) * | 2021-08-30 | 2021-09-28 | 四川省人工智能研究院(宜宾) | Recommendation method and system based on metadata enhancement |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10803386B2 (en) * | 2018-02-09 | 2020-10-13 | Twitter, Inc. | Matching cross domain user affinity with co-embeddings |
CN109711925A (en) * | 2018-11-23 | 2019-05-03 | 西安电子科技大学 | Cross-domain recommending data processing method, cross-domain recommender system with multiple auxiliary domains |
CN109635291B (en) * | 2018-12-04 | 2023-04-25 | 重庆理工大学 | Recommendation method for fusing scoring information and article content based on collaborative training |
US20210110306A1 (en) * | 2019-10-14 | 2021-04-15 | Visa International Service Association | Meta-transfer learning via contextual invariants for cross-domain recommendation |
US20210149671A1 (en) * | 2019-11-19 | 2021-05-20 | Intuit Inc. | Data structures and methods for enabling cross domain recommendations by a machine learning model |
CN113806630B (en) * | 2021-08-05 | 2024-06-14 | 中国科学院信息工程研究所 | Attention-based multi-view feature fusion cross-domain recommendation method and device |
CN113761389A (en) * | 2021-08-18 | 2021-12-07 | 淮阴工学院 | Cross-domain recommendation method based on subject label |
CN113918833B (en) * | 2021-10-22 | 2022-08-16 | 重庆理工大学 | Product recommendation method realized through graph convolution collaborative filtering of social network relationship |
CN114691988B (en) * | 2022-03-23 | 2024-08-06 | 华南理工大学 | Cross-domain recommendation method based on personalized migration of user preference |
CN114862514A (en) * | 2022-05-10 | 2022-08-05 | 西安交通大学 | User preference commodity recommendation method based on meta-learning |
- 2022-09-06 CN CN202211085488.3A patent/CN115438732B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN115438732A (en) | 2022-12-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115438732B (en) | Cross-domain recommendation method for cold start user based on classified preference migration | |
Sun et al. | Deep learning for industrial KPI prediction: When ensemble learning meets semi-supervised data | |
Liang et al. | A survey of recent advances in transfer learning | |
Zhu et al. | Personalized image aesthetics assessment via meta-learning with bilevel gradient optimization | |
CN113362131B (en) | Intelligent commodity recommendation method based on map model and integrating knowledge map and user interaction | |
Chiroma et al. | Progress on artificial neural networks for big data analytics: a survey | |
Wang et al. | Fine-grained learning performance prediction via adaptive sparse self-attention networks | |
Li et al. | A CTR prediction model based on user interest via attention mechanism | |
CN113378074A (en) | Social network user trajectory analysis method based on self-supervision learning | |
CN112967088A (en) | Marketing activity prediction model structure and prediction method based on knowledge distillation | |
Liu et al. | Multi-grained and multi-layered gradient boosting decision tree for credit scoring | |
CN113590965B (en) | Video recommendation method integrating knowledge graph and emotion analysis | |
Chen et al. | A survey on heterogeneous one-class collaborative filtering | |
CN111723285A (en) | Depth spectrum convolution collaborative filtering recommendation method based on scores | |
Zha et al. | Career mobility analysis with uncertainty-aware graph autoencoders: A job title transition perspective | |
CN116662564A (en) | Service recommendation method based on depth matrix decomposition and knowledge graph | |
Sun et al. | INGCF: an improved recommendation algorithm based on NGCF | |
Lin et al. | Transfer learning for collaborative recommendation with biased and unbiased data | |
Han et al. | AMD: automatic multi-step distillation of large-scale vision models | |
Zgurovsky et al. | Formation of Hybrid Artificial Neural Networks Topologies | |
Liu et al. | Learning a similarity metric discriminatively with application to ancient character recognition | |
Qiao et al. | SRS-DNN: a deep neural network with strengthening response sparsity | |
Wang et al. | Replacing self-attentions with convolutional layers in multivariate long sequence time-series forecasting | |
CN115310004A (en) | Graph nerve collaborative filtering recommendation method fusing project time sequence relation | |
CN115905617A (en) | Video scoring prediction method based on deep neural network and double regularization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 2024-09-14. Patentee after: HEFEI MINGLONG ELECTRONIC TECHNOLOGY Co.,Ltd., B-1015, Wo Yuan Garden, 81 Ganquan Road, Shushan District, Hefei, Anhui 230000, China. Patentee before: Chongqing University of Technology, No. 69 Hongguang Avenue, Banan District, Chongqing, China |