Article

Interactive, Enhanced Dual Hypergraph Model for Explainable Contrastive Learning Recommendation

1 School of Computer Science, Hubei University of Technology, Wuhan 430068, China
2 China Waterborne Transport Research Institute, Beijing 100080, China
3 Wuhan Second Ship Design & Research Institute, Wuhan 430064, China
4 China Chile ICT Belt and Road Joint Laboratory, Wuhan 430205, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(2), 216; https://doi.org/10.3390/electronics14020216
Submission received: 3 December 2024 / Revised: 30 December 2024 / Accepted: 2 January 2025 / Published: 7 January 2025

Abstract:
In recent years, combining graph neural networks with contrastive learning has become a hot topic; the approach has been applied not only to recommendation tasks but has also achieved impressive results in many fields, such as text processing and spatio-temporal modeling. However, existing methods are still constrained by several issues: (1) Most graph learning methods do not explore the imbalance of node and edge type distributions caused by different user–item interactions. (2) The randomness of the data-augmentation and sampling strategies in contrastive learning may obscure the importance of key items for users. To overcome these problems, in this paper we propose an explanation-guided contrastive recommendation model based on interactive message propagation and dual-hypergraph convolution (ECR-ID). Specifically, we design two different interactive propagation mechanisms for the user–item dual-hypergraph sets to promote comprehensive dynamic information propagation and exchange, which mitigates both the imbalance between hyperedges and nodes in the hypergraph convolution and the loss of collaborative information propagated between nodes. In addition, we develop an explanation-guided contrastive learning framework, which highlights the important items in user–item interactions through an explanation-based approach and guides the training of the contrastive learning framework using the differences in item importance scores, thus generating accurate positive and negative views and improving contrastive learning performance. Finally, we integrate the contrastive learning framework with the dual-hypergraph networks through joint training to further improve recommendation performance. Extensive experimental evaluations on real datasets show that ECR-ID outperforms state-of-the-art recommendation algorithms.
In the future, we will conduct in-depth tests on a wider range of real-world datasets to alleviate the limitation that the existing experimental datasets all come from single business services such as Alibaba and Amazon, thereby validating the effectiveness of our model more comprehensively.

1. Introduction

In the era of rapid Internet development, web data is exploding. It is difficult for users to select the content they are interested in from such a huge amount of data, which leads to information overload. A recommender system provides personalized recommendation services by analyzing users' historical activity data and mining their potential preferences. Recommender systems have become an effective means of solving the information overload problem, have attracted much attention in recent years, and are widely used in e-commerce, short-video sharing platforms, and social networks. Various approaches have been proposed to improve the performance of recommender systems. However, recommendation methods based on unsupervised learning are usually less accurate due to their reliance on predefined features [1,2]. With the rapid development of deep learning techniques, especially deep neural network models, the performance of recommendation methods has improved significantly [2,3,4].
In recent years, many researchers have achieved good performance by introducing graph learning approaches to the recommendation domain [5,6,7]. However, compared with recommendation methods based on neural network models, these graph convolutional network-based methods have not demonstrated a satisfactory performance improvement. One possible reason is that in real scenarios there are many-to-many, high-order relationships between users and items [8,9], which simple graphs obviously cannot describe. Therefore, finding a suitable graph learning method to address this problem is important and meaningful.
Fortunately, hypergraphs [10] have been introduced to solve this problem. A hypergraph extends the notion of edges in graph structures by proposing a hyperedge that can connect multiple nodes. Inspired by this, hypergraphs have been widely used in recommendation tasks [11,12,13]. Specifically, the potential higher-order relationships between user and item interaction data are effectively captured by utilizing the hypergraph convolution method to manipulate the hyperedge aggregation for propagating the information of nodes with the same interactions. However, user–item interaction data in real-world scenarios are usually sparse, which greatly hinders the application of hypergraph-based learning models in real-world recommendation scenarios [14]. As a special technique of machine learning, contrastive learning extracts rich information and transferable knowledge from a large amount of unlabeled data based on well-designed self-supervised tasks to mitigate the data sparsity in recommender systems [15,16,17]. Hence, many researchers have introduced the idea of contrastive learning in hypergraph-based recommender systems to further improve the performance of recommendation models and have achieved positive results [16,18,19], but there are still the following challenges:
(1)
The direct application of user–item interaction data from real-world scenarios in hypergraph convolutional training recommendation models may encounter significant performance degradation. This is due to the multiple types of interactions (such as clicks, queries, and purchases) and node attributes in user–item interactions in real scenarios, resulting in an imbalanced distribution of node and hyperedge types in the constructed hypergraph structure. When aggregating and propagating node information to generate potential feature representations for items, this imbalance issue causes a considerable amount of important collaborative information among different types of nodes to be overlooked, leading to inadequate utilization of the critical hidden information of various types of hypergraph nodes and an ineffective representation of the underlying semantics of users and items.
(2)
Existing hypergraph-based contrastive learning recommendation models that use randomized strategies (e.g., random pruning, random masking, and randomly adding and removing edges) for graph augmentation and sampling may suffer degraded recommendation performance. This is because such stochastic strategies can easily change the original topological semantics and introduce false-positive and false-negative examples, confusing “important” key items with “non-important” items for the user and thus generating inaccurate positive and negative views, which ultimately harms model training [20,21].
To address the aforementioned issues, we propose a novel hypergraph contrastive recommendation model. Specifically, we first use a user–item multiplex bipartite network to construct a dual-hypergraph neural network based on a double interactive information propagation mechanism. For the given user–item data, we construct the user hypergraph set and the item hypergraph set separately. In this network, multiple items purchased (clicked or queried) by the same user are connected to that user through multiple hyperedges; similarly, multiple users can connect to a single item through multiple hyperedges. On the one hand, we designed a dynamic information propagation mechanism that focuses on the interactions between different types of nodes, dynamically controlling the transmission of interactive information among different node types in the hypergraph convolution, thus effectively capturing local and higher-order interaction relationships. On the other hand, we also designed a novel interactive information propagation mechanism to dynamically adjust the information propagation of different user–item interaction types in the hypergraph convolution. This mechanism more efficiently exploits the synergistic signals in the various interaction types and comprehensively captures the key information of different user–item interaction types hidden across the hypergraphs, further mitigating the challenges brought by the imbalance of hyperedge and node types.
In addition, we developed a new explanation-guided contrastive learning framework based on the concept of importance scores. By introducing interpretive operations, the framework infers the ranking of items that are “important” to the user from the user–item interactions, helping to counteract the negative effects of false-positive and false-negative examples in the contrastive learning process. We then use the differences in item importance to guide contrastive learning, constructing more accurate positive and negative views and thereby further improving contrastive learning performance. Finally, we unified the recommendation task and the contrastive learning task, optimizing the model parameters through a two-stage training method. Extensive experiments on real datasets validate the high performance of our proposed model.
In summary, the main contributions of this paper are as follows:
  • We carefully designed a dual-hypergraph set-based convolutional interaction propagation learning framework, which uses two different interaction propagation strategies to comprehensively capture the dynamic propagation and filtering between user–item interaction information of multiple interaction types, as well as the latent semantic information between different interaction types, alleviating the imbalance problem of hyperedges and node types and achieving the optimization of the latent features of the users and items.
  • We propose an explanation-guided contrastive learning strategy to alleviate the false-positive and false-negative problems during view generation for the more efficient distinction between positive views and negative views, which is seamlessly coupled with a dual-hypergraph set convolutional network design.
  • We developed a new hypergraph variant with an interpretation-guided contrastive learning model for recommendation tasks. Experiments demonstrate that ECR-ID achieves varying degrees of improvement across several metrics compared to state-of-the-art methods. To the best of our knowledge, this is the first work to consider an interpretable approach for different items together with new variants of hypergraph embeddings in the item recommendation task.
  • We comprehensively demonstrate the superiority of ECR-ID over baselines based on several challenging datasets.

2. Related Work

In this section, we review the research works related to our topic in three aspects.

2.1. Graph Learning in Recommendation

Graphs as a data structure can accurately describe the complex interactions between things; thus, they have been widely used in many scientific and engineering fields in recent years. Most of the early graph neural networks in recommender systems were based on simple graph structures, i.e., static, homogeneous, pairwise graph structures, to define and represent the relationships between users and items. Nevertheless, in the real world, the interactions between users and items constitute a graph structure that is often heterogeneous, i.e., nodes and edges are of multiple types. Therefore, more complex and diverse graph structures have been considered. Hypergraph structures have significant advantages in expressing many-to-many data relationships through their property of connecting an arbitrary number of nodes by hyperedges. Neural network approaches have penetrated the field of hypergraph learning, utilizing their powerful learning capabilities to study hypergraph-based higher-order user–item interactions and thus accurately predict user preferences. Ji et al. [8] propose a hypergraph structure for modeling users and items, which is combined with jump-link operations to capture higher-order correlations and effective embedding propagation. Zhang et al. [22] present a combination of a user- and group-level hierarchical hypergraph convolutional network and a self-supervised learning strategy to capture inter- and intra-group user interactions in group recommendation, which can effectively mitigate the sparsity of data and improve the performance of the recommendation. Li et al. [23] combined a hypergraph convolutional network and hyperbolic embedding space in sequence recommendation to effectively alleviate the challenging sparsity problem and long-tailed data problem. Yu et al. [19] proposed a new multi-channel hypergraph convolutional network, which comprehensively captures user representations by aggregating multiple hypergraph-based convolutional network channels, thereby aggregating higher-order user relationships to enhance social recommendation.

2.2. Interpretability in Recommendation

Interpretability has become an important research direction in the field of machine learning, and many scholars and researchers have devoted themselves to exploring how to improve the interpretability of machine learning models. Zhou et al. [24] review and discuss different data-driven fuzzy modeling techniques for interpretability problems in terms of low-level and high-level interpretability, clearly differentiating the roles that components such as fuzzy sets and input variables play in realizing an interpretable fuzzy machine learning model. Chen et al. [25] designed a time-aware gated recurrent unit to model the user’s dynamic preferences and interpret the predicted user preferences based on valuable item-based review information highlighted by sentence-level convolutional neural networks. Cai et al. [26] propose a multi-objective interpretable social recommendation model based on a knowledge graph, which provides users with better recommendation results by jointly considering the interpretability, accuracy, novelty, and content quality of the social recommendation results, exploiting the indirect social relationships between users. Zhang et al. [27] introduced an interpretable point-in-time process model based on learning motivation for course recommendation, which increases the interpretability and persuasiveness of course recommendation by capturing the factors that affect students’ course selection based on learning motivation and the process of point-in-time change, improving the quality of recommendation and optimizing course recommendation.

2.3. Contrastive Learning in Recommendation

Contrastive learning methods learn meaningful node representations by comparing positive and negative samples. Inspired by the results of contrastive learning in areas such as computer vision, many scholars have applied contrastive learning to recommender systems. Since contrastive learning works in a self-supervised manner, it helps to mitigate data sparsity and the interference of data noise. Yi et al. [21] apply a contrastive learning mechanism to a multimodal recommender system to improve the recommendation quality, using two designed augmentation techniques, edge culling and modal masking, to generate more discriminative user/item views. Wang et al. [28] introduced a contrastive learning framework into session recommendation to alleviate data sparsity, improving the recommendation performance by maximizing the mutual information between different session representations. Wei et al. [29] innovatively incorporated the information bottleneck theory and contrastive learning into a recommendation model, which uses the information bottleneck principle to minimize the mutual information between the original and generated views, further removing the noise in the contrasted view and capturing more semantic synergistic information, thus enhancing the recommendation performance. Yang et al. [30] propose a multi-intent contrastive strategy to comprehensively characterize the association between user reviews and item reviews, enabling effective modeling of the user intent contained in the reviews and thus accurately predicting user preferences. Gu et al. [31] present a star-shaped contrastive learning framework for multi-behavioral recommendation to capture the embedded commonalities between the target behaviors and the auxiliary behaviors, which alleviates the sparsity of supervisory signals, reduces the auxiliary redundancy among behaviors, and extracts the most critical information in the behaviors.

3. Methodology

In this section, the overall architecture of the model is shown in Figure 1.
As shown in Figure 1, ECR-ID consists of three key modules: the hypergraph construction module, the hypergraph interactive learning module, and the explanation-guided contrastive learning module. In the hypergraph construction module, we use user–item interaction data to build a user–item multiplex bipartite network. Then, we initialize the feature representations of users and items to construct a user hypergraph set G U with the user set as nodes and an item hypergraph set G I with the item set as nodes. Each hypergraph in G U and G I is constructed separately according to the interaction type. In the hypergraph interactive learning module, we designed two interactive messaging mechanisms to dynamically control the filtering and propagation of comprehensive information in the dual-hypergraph sets, capturing the important synergistic information hidden between different types of nodes and thus ultimately obtaining accurate user and item representations. In the explanation-guided contrastive learning module, we use the explanatory methodology to derive the important items in the user’s interaction record, and on this basis we derive accurate positive and negative views and further improve the performance of contrastive learning.

3.1. Formalization

Input: We initialize the input information to obtain the user–item feature matrix $X = \{X_U, X_I\}$, where $X_U \in \mathbb{R}^{|U| \times F}$ denotes the user features, $X_I \in \mathbb{R}^{|I| \times F}$ denotes the item features, and $F$ is the feature dimension. Output: We predict user preferences for items based on the given user and item information.

3.2. Hypergraph Construction Module

In this section, as in the literature [32], we explain this construction process in detail, step by step, as follows:
Step 1: We construct an isomorphic user hypergraph set G U to characterize the complex user–item network in the input data.
$G_U = \{ g_{U,\mathrm{base}}, g_{U,1}, \ldots, g_{U,k} \}$
where $g_{U,j} = (U, \varepsilon_{U,j})$ denotes a user hypergraph in the hypergraph set $G_U$.
Step 2: We construct an isomorphic item hypergraph set G I to characterize the complex user–item network in the input data:
$G_I = \{ g_{I,\mathrm{base}}, g_{I,1}, \ldots, g_{I,k} \}$
where $g_{I,j} = (I, \varepsilon_{I,j})$ denotes an item hypergraph in the hypergraph set $G_I$.
Step 3: We let $\varepsilon_{U,j}$ and $\varepsilon_{I,j}$ denote the hyperedge sets of $g_{U,j}$ and $g_{I,j}$, respectively.
Step 4: We set all hypergraphs in $G_U$ to share the same user node set $U$.
Step 5: We set all hypergraphs in $G_I$ to share the same item node set $I$.
Step 6: For each item node $i \in I$, a hyperedge $\varepsilon_{U_i,j}$ is introduced in the hypergraph $g_{U,j}$ to connect the node set $U_i = \{ u \mid u \in U, (u,i) \in E_j \}$.
Step 7: For each user node $u \in U$, a hyperedge $\varepsilon_{I_u,j}$ is introduced in $g_{I,j}$ to connect the node set $I_u = \{ i \mid i \in I, (u,i) \in E_j \}$.
Step 8: We present a detailed example to further explain the above steps. For example, user $u_2$ clicks on three items $i_2, i_3, i_5$; the user node $u_2$ then corresponds to the hyperedge connecting items $i_2, i_3, i_5$ in the isomorphic hypergraph $g_{I,\mathrm{click}}$. Conversely, when an item is purchased by several users, the item node corresponds to the hyperedge connecting those users. For instance, item $i_1$ is purchased by three users $u_1, u_3, u_4$, so the item node $i_1$ corresponds to the hyperedge connecting users $u_1, u_3, u_4$ in the isomorphic hypergraph $g_{U,\mathrm{buy}}$.
Step 9: We define two special sub-hypergraphs $g_{U,1} \in G_U$ and $g_{I,1} \in G_I$ whose hyperedge sets are $\varepsilon_{U,1} = \bigcup_{j=1}^{k} \varepsilon_{U,j}$ and $\varepsilon_{I,1} = \bigcup_{j=1}^{k} \varepsilon_{I,j}$.
Step 10: In short, these two hypergraphs comprise all interaction types between users and items. Finally, we treat the set of dual hypergraphs constructed from the original user–item data as the original view.
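The construction steps above can be sketched in code. The following toy example is ours, not the authors' implementation: the interaction triples, matrix sizes, and the two helper functions are hypothetical illustrations of building one incidence matrix per interaction type, where each hyperedge of an item hypergraph groups the items one user interacted with, and vice versa.

```python
import numpy as np

# Hypothetical toy interaction log: (user_id, item_id, interaction_type).
# Types mirror the paper's example: "click" and "buy"; ids are 1-based.
interactions = [
    (2, 2, "click"), (2, 3, "click"), (2, 5, "click"),
    (1, 1, "buy"), (3, 1, "buy"), (4, 1, "buy"),
]
num_users, num_items = 5, 6  # assumed sizes for this sketch

def item_hypergraph_incidence(triples, itype, num_users, num_items):
    """Incidence of the item hypergraph for `itype`: one hyperedge per user,
    connecting all items that user interacted with under that type."""
    H = np.zeros((num_items, num_users))
    for u, i, t in triples:
        if t == itype:
            H[i - 1, u - 1] = 1.0  # hyperedge u contains item i
    return H[:, H.sum(axis=0) > 0]  # keep only non-empty hyperedges

def user_hypergraph_incidence(triples, itype, num_users, num_items):
    """Incidence of the user hypergraph for `itype`: one hyperedge per item,
    connecting all users who interacted with it under that type."""
    H = np.zeros((num_users, num_items))
    for u, i, t in triples:
        if t == itype:
            H[u - 1, i - 1] = 1.0
    return H[:, H.sum(axis=0) > 0]

H_I_click = item_hypergraph_incidence(interactions, "click", num_users, num_items)
H_U_buy = user_hypergraph_incidence(interactions, "buy", num_users, num_items)
print(H_I_click.shape)  # one hyperedge: the items user u2 clicked
print(H_U_buy.shape)    # one hyperedge: the users who bought item i1
```

Stacking the per-type matrices over all interaction types yields the hypergraph sets G U and G I described above.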

3.3. Hypergraph Interactive Learning Module

The designed hypergraph convolution model based on interactive information takes the initialized user–item features $X = \{X_U, X_I\}$ and the original, positive, and negative views as input. It captures the potential higher-order interaction information in the user–item interactions through two sub-modules, the dual-hypergraph convolution module and the interactive information propagation module. Finally, it outputs the final embeddings $Z$, $Z^{+}$, and $Z^{-}$.

3.3.1. Dual Hypergraph Convolution

The dual-hypergraph convolution module efficiently exploits the hyperedges and nodes in the two hypergraph sets to perform convolution operations for learning the representations of users and items. The process is as follows:
Given a hypergraph $g_{U,j} = (U, \varepsilon_{U,j})$, where $j \in \{1, \ldots, k\}$ indexes the interaction types between users and items and $k$ is the number of interaction types, the incidence matrix of $g_{U,j}$ is defined as follows:
$\zeta_{U,j}(u, e) = \begin{cases} 1, & u \in e, \; e \in \varepsilon_{U,j} \\ 0, & \mathrm{otherwise} \end{cases}$
where $\zeta_{U,j} \in \mathbb{R}^{|U| \times |\varepsilon_{U,j}|}$ denotes the incidence matrix of the hypergraph $g_{U,j}$, $j \in \{1, \ldots, k\}$; the incidence matrix of the item hypergraph $g_{I,j}$ is denoted by $\zeta_{I,j} \in \mathbb{R}^{|I| \times |\varepsilon_{I,j}|}$. Thus, the hypergraph convolution operator is as follows:
$X_{U,j}^{l+1} = \Lambda \big( \delta_{U,j}^{-1} \zeta_{U,j} W_U \varsigma_{U,j}^{-1} \zeta_{U,j}^{\top} X_{U,j}^{l} \psi_{U,j}^{l} \big)$
where $\Lambda$ is the ReLU function, $W_U$ is the hyperedge weight matrix (taken as a unit matrix), and $\psi_{U,j}^{l} \in \mathbb{R}^{F^{l} \times F^{l+1}}$ is a learnable transformation matrix. $\delta_{U,j}$ and $\varsigma_{U,j}$ denote the diagonal matrices of node degrees and hyperedge degrees, respectively. In line with the literature [32], all of the above hypergraph operators can also be applied independently to learn features for each hypergraph $g_{U,j}$ in the hypergraph set $G_U$ and each hypergraph $g_{I,j}$ in the hypergraph set $G_I$. Finally, we obtain the node representations of users and items, $X_{U,1}, \ldots, X_{U,k}$ and $X_{I,1}, \ldots, X_{I,k}$, respectively.
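The hypergraph convolution operator above can be sketched numerically. This is a minimal NumPy illustration, not the authors' implementation: the toy incidence matrix, feature sizes, and random weights are our own assumptions, with the hyperedge weight matrix taken as the identity.

```python
import numpy as np

def hypergraph_conv(X, H, W_e, Psi):
    """One hypergraph convolution layer:
    X_next = ReLU(D^-1 H W_e B^-1 H^T X Psi),
    where D and B are the node- and hyperedge-degree diagonal matrices."""
    D = np.diag(1.0 / np.maximum(H.sum(axis=1), 1e-12))  # inverse node degrees
    B = np.diag(1.0 / np.maximum(H.sum(axis=0), 1e-12))  # inverse hyperedge degrees
    out = D @ H @ W_e @ B @ H.T @ X @ Psi
    return np.maximum(out, 0.0)  # ReLU

rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # 3 nodes, 2 hyperedges (toy)
X = rng.normal(size=(3, 4))                          # node features, F = 4
W_e = np.eye(2)                                      # hyperedge weights (identity)
Psi = rng.normal(size=(4, 4))                        # learnable transform (random here)
X_next = hypergraph_conv(X, H, W_e, Psi)
print(X_next.shape)  # (3, 4)
```

The same operator is applied independently to every per-type hypergraph in the two hypergraph sets.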

3.3.2. Interactive Information Propagation Mechanism

The different types of interaction information and node synergies are ignored in most hypergraph convolutions, which may lead to imbalances in the topology with respect to hyperedge and node types. Therefore, to alleviate this problem, we designed two interactive information propagation mechanisms for the above hypergraph convolution process according to the different interaction types and the aggregation of information, which realize efficient transfer of information between different interaction types in the hypergraph set and efficiently capture the users' and items' semantic features.
First, as in the literature [33], we define the interaction message passed from node $i$ to node $u$ as follows:
$m(x_i, x_u) = \dfrac{x_u \odot x_i}{\langle x_u, x_i \rangle}, \qquad x_u = \sigma \big( (m(x_u, x_u) + m(x_i, x_i)) W \big)$
where $\sigma(\cdot)$ is the activation function; $x_i$ and $x_u$ are the feature representations of nodes $i$ and $u$, respectively; and $W \in \mathbb{R}^{F \times F}$ is the learnable feature transformation matrix. We use the element-wise product $\odot$ to construct the interaction information, and $\langle \cdot, \cdot \rangle$ denotes the inner product used for normalization.
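The interaction message can be sketched as follows. This is our own toy reading of the definition, with the inner product used as a divisor for normalization (an assumption), ReLU standing in for the activation, and an identity matrix standing in for the learnable W.

```python
import numpy as np

def message(x_i, x_u, eps=1e-12):
    """Interaction message from node i to node u: element-wise product
    normalized by the inner product (our reading of the formula above)."""
    return (x_u * x_i) / (np.dot(x_u, x_u * 0 + x_i) + eps) if False else \
           (x_u * x_i) / (np.dot(x_u, x_i) + eps)

def fuse(x_u, x_i, W):
    """Fused update x_u = sigma((m(x_u, x_u) + m(x_i, x_i)) W);
    sigma is taken as ReLU for this sketch."""
    m_sum = message(x_u, x_u) + message(x_i, x_i)
    return np.maximum(m_sum @ W, 0.0)

x_u = np.array([1.0, 2.0, 0.5])
x_i = np.array([0.5, 1.0, 2.0])
W = np.eye(3)  # identity stand-in for the learnable transformation
out = fuse(x_u, x_i, W)
print(out)
```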
Interactive information propagation mechanism between different interaction types within a hypergraph set (different interaction): To capture the correlation between different interaction types, we designed an interactive information propagation mechanism between the different interaction types within a hypergraph set. The hypergraph $g_{U,1}$ contains all the interaction information. For each user $u$, we pass the node feature information $x_{u,1}$ learned through $g_{U,1}$ to the feature representation $x_{u,j}$ of the corresponding user $u$ for each of the remaining interaction types. The propagation process is as follows:
$x_{u,j}^{l+1} = \sigma \big( \big( m(x_{u,j}^{l}, x_{u,j}^{l}) + m(x_{u,1}^{l}, x_{u,j}^{l}) \big) W_{u}^{l} \big)$
where $m(x_{u,1}^{l}, x_{u,j}^{l})$ denotes the information passed from $x_{u,1}$ to $x_{u,j}$, $x_{u,j}^{l+1}$ is the feature information of node $u$ for the hypergraph $g_{U,j}$ after feature fusion, and $W_{u}^{l}$ is a learnable parameter. The item hypergraph set is handled in the same way, and its information propagation process is as follows:
$x_{i,j}^{l+1} = \sigma \big( \big( m(x_{i,j}^{l}, x_{i,j}^{l}) + m(x_{i,\mathrm{base}}^{l}, x_{i,j}^{l}) \big) W_{i}^{l} \big)$
where $m(x_{i,\mathrm{base}}^{l}, x_{i,j}^{l})$ denotes the information transmitted from $x_{i,\mathrm{base}}$ to $x_{i,j}$, $x_{i,j}^{l+1}$ is the feature information of node $i$ for the hypergraph $g_{I,j}$ after feature fusion, and $W_{i}^{l}$ is a learnable parameter.
Interactive information propagation mechanism between different types of nodes in the hypergraph set (different node): For each user $u$ in hypergraph $g_{U,j}$, we aggregate the user features $x_{u,j}^{l}$ generated by its convolution with the item features from the item hypergraph set, and the information propagation process is as follows:
$x_{u,j}^{l+1} = \sigma \big( \big( m(x_{u,j}^{l}, x_{u,j}^{l}) + \sum_{i \in N_u} m(x_{i,j}^{l}, x_{u,j}^{l}) \big) W_{u}^{l} \big)$
where $m(x_{i,j}^{l}, x_{u,j}^{l})$ is the information transmitted from the connected node $x_{i,j}$ to $x_{u,j}$ for the class-$j$ interaction, $W_{u}^{l}$ is a learnable parameter, and $x_{u,j}^{l+1}$ is the feature information of node $u$ for the hypergraph $g_{U,j}$ after feature fusion. Similarly, for each item $i$ in the hypergraph $g_{I,j}$, the information transfer process is defined as follows:
$x_{i,j}^{l+1} = \sigma \big( \big( m(x_{i,j}^{l}, x_{i,j}^{l}) + \sum_{u \in N_i} m(x_{u,j}^{l}, x_{i,j}^{l}) \big) W_{i}^{l} \big)$
where $m(x_{u,j}^{l}, x_{i,j}^{l})$ is the information passed from the connected node $x_{u,j}$ to $x_{i,j}$ for the class-$j$ interaction, $W_{i}^{l}$ is a learnable parameter, and $x_{i,j}^{l+1}$ is the feature information of node $i$ for the hypergraph $g_{I,j}$ after feature fusion.
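The different-node propagation step for a single user can be sketched as follows. This is an illustrative toy, not the authors' code: the neighbor features, the ReLU activation, and the identity weight matrix are our assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def message(x_src, x_dst, eps=1e-12):
    # interaction message: element-wise product normalized by inner product
    return (x_dst * x_src) / (np.dot(x_dst, x_src) + eps)

def propagate_user(x_u, neighbor_items, W):
    """Different-node propagation for one user u in a per-type hypergraph:
    the user's self-message plus messages from all connected item nodes,
    transformed by W and passed through the activation."""
    agg = message(x_u, x_u)
    for x_i in neighbor_items:
        agg = agg + message(x_i, x_u)
    return relu(agg @ W)

x_u = np.array([1.0, 0.5, 2.0])
neighbors = [np.array([0.3, 1.0, 0.2]), np.array([2.0, 0.1, 0.4])]
W = np.eye(3)  # identity stand-in for the learnable W_u^l
out = propagate_user(x_u, neighbors, W)
print(out.shape)  # (3,)
```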
Thus, for the dual-hypergraph convolution process based on interaction information propagation, after $t$ iterations we obtain the embeddings of users and items, $\bar{X}_{U}^{t} = \{ X_{U,1}^{t}, \ldots, X_{U,k}^{t} \}$ with $\bar{X}_{U,j}^{t} \in \mathbb{R}^{|U| \times F^{t}}$ and $\bar{X}_{I}^{t} = \{ X_{I,1}^{t}, \ldots, X_{I,k}^{t} \}$ with $\bar{X}_{I,j}^{t} \in \mathbb{R}^{|I| \times F^{t}}$, where $k$ is the number of interaction types.
Then, we obtain the final embedding $Z = \{ \bar{X}_U, \bar{X}_I \}$ of users and items for the original view, the final user–item embedding $Z^{+} = \{ \bar{X}_U^{+}, \bar{X}_I^{+} \}$ for the positive view, and the final user–item embedding $Z^{-} = \{ \bar{X}_U^{-}, \bar{X}_I^{-} \}$ for the negative view.

3.4. Explanation-Guided Contrastive Learning Module

In this section, to further mitigate the randomness problem in the hypergraph contrastive learning process, we present an explanation-guided supervised contrastive learning framework built on an explanation-guided augmentation approach. An explanation method is first used to determine the importance ranking of items for each user, and explanation-guided supervised contrastive learning is then applied to the different views.

3.4.1. Explanation-Guided Importance Scores

In the recommendation process, distinguishing important from non-important items for a user is necessary to accurately predict the user’s interests. Therefore, computing item weights as a basis for item importance is effective. We employ the saliency method [32], a widely used interpretive method: the predicted probability of the next item is fed into the saliency method to obtain correlation scores, and the importance score of each item is then obtained through summation and normalization. The details are given below:
Firstly, to obtain the importance scores of the nodes, we input the embedding features of the item set $I_i = \{ i_1^u, i_2^u, \ldots, i_n^u, \ldots, i_{|I_i|}^u \}$ and the prediction probability $p_u$ into the interpretation method $\mathrm{Expl}(\cdot)$ [32]; we thus obtain the importance scores of the items, denoted as follows:
$\mathrm{score}_{I_i} = \mathrm{Expl}(p_u, I_i) = [ \mathrm{score}_{i_1^u}, \mathrm{score}_{i_2^u}, \ldots, \mathrm{score}_{i_n^u}, \ldots, \mathrm{score}_{i_{|I_i|}^u} ]$
where $\mathrm{score}_{i_n^u}$ represents the importance score of item $i_n$, and the prediction probability $p_u = \mathrm{softmax}(\bar{X}_U \bar{X}_I^{\top})$ is computed as the product of the user and item embeddings [12].
Next, we compute the importance score of the embedding vector $x_{i_n,f}^{u}$ of item $i_n$ in dimension $f$, which can be defined as follows:
$\mathrm{score}(x_{i_n,f}^{u}) = \dfrac{\partial p_u}{\partial x_{i_n,f}^{u}}$
where $F$ is the dimension of the embedding vector $x_{i_n}^{u}$. By summing and normalizing the importance scores over the $F$ dimensions based on saliency [34], we obtain the importance score of $i_n$ as follows:
$\mathrm{score}(x_{i_n}^{u}) = \dfrac{\sum_{f=1}^{F} \mathrm{score}(x_{i_n,f}^{u})}{\sum_{k=1}^{|I_i|} \sum_{f=1}^{F} \mathrm{score}(x_{i_k,f}^{u})}$
where $\mathrm{score}(x_{i_n}^{u})$ returns a value in $[0, 1]$ reflecting the importance of $i_n$ within the item set $I_i$.
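The sum-and-normalize step above can be sketched as follows. This is our own toy: the gradient matrix is fabricated for illustration (real saliency values would come from autograd on the trained model), and we take absolute values before summing, a common convention for saliency that the text does not state explicitly.

```python
import numpy as np

def importance_scores(grads):
    """Explanation-guided importance: given saliency values
    grads[k, f] for each of the |I_i| items and F feature dimensions,
    sum over dimensions and normalize over items."""
    per_item = np.abs(grads).sum(axis=1)  # sum over the F feature dims
    return per_item / per_item.sum()      # normalize: scores lie in [0, 1]

# toy gradients for 3 items, F = 2 dimensions (hypothetical values)
grads = np.array([[0.2, -0.1],
                  [0.05, 0.05],
                  [0.6, 0.3]])
scores = importance_scores(grads)
print(scores)  # sums to 1; the third item is the most important here
```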

3.4.2. Explanation-Guided Contrastive Learning for Different Views

In this module, we propose two operations to guide hypergraph augmentation operations, one for building a positive view (ecorp+) and one for building a negative view (ecorp-). Then, we introduce explanation-guided contrastive learning for different views, as follows:
Explanation-Guided Hypergraph Augmentation for Positive and Negative Views (ecorp+, ecorp-):
In $I_i = \{ i_1^u, i_2^u, \ldots, i_{|I_i|}^u \}$, we remove the $n$ nodes with the lowest importance scores to generate the positive view and remove the $|I_i| - n$ nodes with the highest importance scores to generate the negative view, where $n = \gamma |I_i|$, $\gamma \in (0, 1)$ is a hyperparameter, and $\{ i_1^u, i_2^u, \ldots, i_n^u \}$ are the $n$ items with the lowest importance scores in $I_i$. The explanation-guided pruning for the positive and negative views is then defined as follows:
$I_i^{\mathrm{prune}+} = I_i \setminus \{ i_1^u, i_2^u, \ldots, i_n^u \}, \qquad I_i^{\mathrm{prune}-} = \{ i_1^u, i_2^u, \ldots, i_n^u \}$
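The ecorp+/ecorp- pruning can be sketched as follows; the item ids, score values, and the helper name `explanation_guided_prune` are our own illustrative choices.

```python
def explanation_guided_prune(items, scores, gamma=0.3):
    """ecorp+/ecorp-: drop the gamma*|I_i| lowest-scoring items to form the
    positive view; keep only those lowest-scoring items as the negative view."""
    n = max(1, int(gamma * len(items)))
    order = sorted(range(len(items)), key=lambda k: scores[k])
    low = set(order[:n])  # indices of the n least important items
    positive = [it for k, it in enumerate(items) if k not in low]
    negative = [it for k, it in enumerate(items) if k in low]
    return positive, negative

items = ["i1", "i2", "i3", "i4", "i5"]
scores = [0.05, 0.40, 0.10, 0.30, 0.15]  # toy importance scores
pos, neg = explanation_guided_prune(items, scores, gamma=0.4)
print(pos, neg)  # pos keeps the high-importance items, neg the low ones
```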
The contrastive loss of the positive view, $l_{cl+}$:
We obtained the positive view based on the original view by explanation-guided augmentation of ecorp+. The user–item feature representations Z = X ¯ U , X ¯ I and Z + = X ¯ U + , X ¯ I + based on the two views and the user set U and the item set I are also available. Here, we separate the users and items and obtain the contrastive loss of the explanation-guided positive view l c l + u and l c l + i separately. For the user–item set U, the representations of the same nodes in both views are treated as the positive pair x u , x u + and x i , x i + , and the different nodes are treated as the negative pair x u , x u + and x i , x i + . In line with the literature [10], to maximize the mutual information between the two views, we define the contrastive loss of the explanation-guided positive view based on the interpretation guidance of the user set and item set, as follows:
l_{cl+}^{u} = -\log \frac{\exp(\mathrm{sim}(x_u, x_u^{+}))}{\exp(\mathrm{sim}(x_u, x_u^{+})) + \sum_{u' \in U \setminus \{u\}} \exp(\mathrm{sim}(x_u, x_{u'}^{+}))}
Similarly, the contrastive loss for the item-set based explanation-guided positive view is defined as follows:
l_{cl+}^{i} = -\log \frac{\exp(\mathrm{sim}(x_i, x_i^{+}))}{\exp(\mathrm{sim}(x_i, x_i^{+})) + \sum_{i' \in I \setminus \{i\}} \exp(\mathrm{sim}(x_i, x_{i'}^{+}))}
The explanation-guided positive view’s contrastive loss is defined as follows:
l_{cl+} = l_{cl+}^{u} + l_{cl+}^{i}
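A minimal sketch of this InfoNCE-style positive-view loss, written for one node set (users or items); the temperature `tau` is our assumption, since the equations above omit it:

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def positive_view_loss(x, x_pos, tau=1.0):
    """Contrastive loss l_{cl+} for one node set (users or items).

    Rows with the same index in x (original view) and x_pos (positive
    view) are positive pairs; all cross-index pairs act as negatives.
    """
    sim = np.exp(cosine_sim(x, x_pos) / tau)  # exp(sim(x_u, x_{u'}^+))
    pos = np.diag(sim)                        # same-node (positive) pairs
    denom = sim.sum(axis=1)                   # positive + all negatives
    return float(-np.log(pos / denom).mean())
```

The full l_{cl+} is the sum of this loss evaluated once on user embeddings and once on item embeddings.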
The contrastive loss of the negative view l c l :
In keeping with the procedure for l_{cl+}, we obtain the user–item feature representation of the negative view, Z^- = (\bar{X}_U^-, \bar{X}_I^-), through the ecorp- operation. In line with the literature [28], to maximize the mutual information between the two views, we define the contrastive loss of the negative view based on the explanation guidance of the user set and the item set, as follows:
l_{cl-}^{u} = -\frac{1}{|U| - 1} \sum_{u' \in U \setminus \{u\}} \log \frac{\exp(\mathrm{sim}(x_u, x_u^{+}))}{\sum_{u \in U} \left( \exp(\mathrm{sim}(x_u, x_u^{+})) + \exp(\mathrm{sim}(x_u, x_u^{-})) \right)}
Similarly, the contrastive loss for the item-set based explanation-guided negative view is defined as follows:
l_{cl-}^{i} = -\frac{1}{|I| - 1} \sum_{i' \in I \setminus \{i\}} \log \frac{\exp(\mathrm{sim}(x_i, x_i^{+}))}{\sum_{i \in I} \left( \exp(\mathrm{sim}(x_i, x_i^{+})) + \exp(\mathrm{sim}(x_i, x_i^{-})) \right)}
Finally, the explanation-guided negative view’s contrastive loss is defined as follows:
l_{cl-} = l_{cl-}^{u} + l_{cl-}^{i}
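The negative-view loss can be sketched similarly; here the temperature `tau` and the plain mean over anchors (in place of the 1/(|U|-1) normalization above) are simplifications of ours:

```python
import numpy as np

def negative_view_loss(x, x_pos, x_neg, tau=1.0):
    """Sketch of the negative-view loss l_{cl-} for one node set.

    Each anchor should stay close to its positive-view row x^+, while
    the denominator also accumulates similarity to the negative-view
    rows x^-, pushing them apart.
    """
    def row_sim(a, b):  # cosine similarity, row by row
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return (a * b).sum(axis=-1)

    s_pos = np.exp(row_sim(x, x_pos) / tau)   # exp(sim(x_u, x_u^+))
    s_neg = np.exp(row_sim(x, x_neg) / tau)   # exp(sim(x_u, x_u^-))
    denom = (s_pos + s_neg).sum()             # sum over all nodes
    return float(-np.log(s_pos / denom).mean())
```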

3.5. Optimization

We combine three loss objectives from the above modules to optimize ECR-ID: 1. the recommendation loss l_{rec}; 2. the contrastive loss of the explanation-guided positive view l_{cl+}; and 3. the contrastive loss of the explanation-guided negative view l_{cl-}. The total loss of ECR-ID is as follows:
l_{ECR\text{-}ID} = l_{rec} + \xi_{cl} \left( l_{cl+} + \beta_{cl} \, l_{cl-} \right)
where ξ c l , β c l are hyperparameters.
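The weighted combination itself is a one-liner; the defaults below mirror the best-performing settings reported in the sensitivity analysis (ξ_cl around 0.1–0.2, β_cl around 0.4–0.5):

```python
def total_loss(l_rec, l_cl_pos, l_cl_neg, xi_cl=0.1, beta_cl=0.4):
    """Joint objective: l_ECR-ID = l_rec + xi_cl * (l_cl+ + beta_cl * l_cl-)."""
    return l_rec + xi_cl * (l_cl_pos + beta_cl * l_cl_neg)
```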
Recommendation loss l r e c :
We calculate the scores of all candidate items i \in I as the inner product of the learned embeddings, obtaining \Phi, and then apply the softmax(\cdot) function to \Phi to compute the probability of the next item. Finally, the cross-entropy loss function [12] defines the recommendation loss, as follows:
\Phi = Z_u^{T} Z_i
l_{rec} = -\sum_{i=1}^{N} \left[ \Gamma_i \log(\hat{\Gamma}_i) + (1 - \Gamma_i) \log(1 - \hat{\Gamma}_i) \right]
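The scoring-and-loss pipeline can be sketched as follows; the argument names are illustrative, not taken from the paper's code:

```python
import numpy as np

def recommendation_loss(z_u, Z_I, targets):
    """Sketch of l_rec: inner-product scoring + softmax + cross-entropy.

    z_u: embedding of one user; Z_I: embeddings of all candidate items;
    targets: the 0/1 interaction labels Gamma.
    """
    phi = Z_I @ z_u                             # Phi = Z_u^T Z_i per item
    phi = phi - phi.max()                       # stabilize the softmax
    probs = np.exp(phi) / np.exp(phi).sum()     # next-item probabilities
    eps = 1e-12                                 # avoid log(0)
    return float(-np.sum(targets * np.log(probs + eps)
                         + (1 - targets) * np.log(1 - probs + eps)))
```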

4. Experiment

We conducted experiments to demonstrate the effectiveness of our model.

4.1. Experimental Dataset

The statistical information of the two datasets is shown in Table 1.
The Amazon dataset [35] was constructed from Amazon’s product catalog and records users’ purchasing and click-browsing behavior. The dataset is represented as a graph in which nodes represent users and products and edges represent user–product interactions. It contains two types of nodes (users and products) and two types of interactions (clicks and purchases). A total of 60,658 interactions are recorded, and the user–item matrix density is 0.279%.
The Alibaba dataset [35] comprises randomly sampled user and item behavior logs from the Alibaba e-commerce platform, covering users’ purchases, queries, and other item-related behaviors from 1 April 2020 to 30 April 2020. The dataset contains two types of nodes (users and items) and three types of interactions (clicks, queries, and contacts). In total, 27,036 interactions are recorded, and the user–item matrix density is 0.108%.
The user–item matrix is a two-dimensional matrix whose rows represent users and whose columns represent items; an element is non-empty if the corresponding user interacted with the corresponding item and empty otherwise. The user–item matrix density is the ratio of non-empty elements in this interaction matrix.
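The density is a single ratio, and the figures reported for both datasets can be reproduced directly:

```python
def matrix_density(num_users, num_items, num_interactions):
    """Ratio of non-empty cells in the user-item interaction matrix."""
    return num_interactions / (num_users * num_items)

# Reproducing the reported figures:
# Amazon:  60,658 / (3,781 * 5,749)   -> ~0.279%
# Alibaba: 27,036 / (1,869 * 13,349)  -> ~0.108%
```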

4.2. Experimental Setting

To evaluate the performance of our model, we conducted experiments on two real-world datasets, Amazon and Alibaba, published by Xue et al. [35]. According to the strategy in BGNN [36], we used the area under the ROC curve (AUROC), the area under the precision-recall curve (AUPRC), the prediction precision (precision), and recall (recall) as evaluation metrics. Meanwhile, a logistic regression classifier was used to predict whether user–item interaction occurred.
\mathrm{precision} = \frac{tp}{tp + fp}
\mathrm{recall} = \frac{tp}{tp + fn}
\mathrm{tpr} = \frac{tp}{tp + fn}
\mathrm{fpr} = \frac{fp}{fp + tn}
\mathrm{AUPRC} = \frac{1}{2} \sum_{i=1}^{k-1} \left( \mathrm{recall}_{i+1} - \mathrm{recall}_i \right) \times \left( \mathrm{precision}_{i+1} + \mathrm{precision}_i \right)
\mathrm{AUROC} = \frac{1}{2} \sum_{i=1}^{k-1} \left( \mathrm{fpr}_{i+1} - \mathrm{fpr}_i \right) \times \left( \mathrm{tpr}_{i+1} + \mathrm{tpr}_i \right)
Here, tp is the number of positive samples predicted to be positive, fn is the number of positive samples predicted to be negative, fp is the number of negative samples predicted to be positive, and tn is the number of negative samples predicted to be negative.
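Both AUPRC and AUROC above are trapezoidal-rule areas over a sequence of operating points, so a single generic helper suffices:

```python
def trapezoid_auc(xs, ys):
    """Trapezoidal-rule area used by both AUPRC and AUROC:
    0.5 * sum over i of (x_{i+1} - x_i) * (y_{i+1} + y_i)."""
    return 0.5 * sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i])
                     for i in range(len(xs) - 1))

# AUROC integrates (fpr, tpr) pairs sorted by fpr;
# AUPRC integrates (recall, precision) pairs sorted by recall.
```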
To fully evaluate our proposed method, we randomly selected 60% of the edges as the training set and the remaining edges as the test set. For each edge in the test set, the embeddings of the nodes (learned from the training set) were used as features. The performance of the logistic regression classifier was evaluated on the test set edges using 5-fold cross-validation. The whole process was repeated 5 times to obtain different random samples from the training set and the test set.

4.3. Baselines

To evaluate the performance of the model, we compared ECR-ID with the following baselines.
  • GraphSAGE [37] proposes a generalized inductive architecture that uses local neighborhood sampling and aggregation features of nodes to generate embeddings of nodes.
  • GCN [38] is an efficient method for graph learning using the spectral graph convolution operator.
  • GAT [39] is a new approach to graph learning that utilizes a mask self-attention mechanism to assign different weights to each node and its neighboring nodes based on their features.
  • HGNN [10] designs a hyperedge convolution operation to handle data correlation during representation learning. With this method, the hyperedge convolution can effectively capture implicit representations of higher-order data structures.
  • HyperGCN [13] is a new GCN method for hypergraph semi-supervised learning based on hypergraph theory.
  • DualHGCN [35] is a contrastive dual-hypergraph convolutional network model that converts a multilayer bipartite graph network into two sets of sub-hypergraphs.
  • SGL [18] utilizes three types of data augmentation based on different perspectives to complement/supervise the recommendation task with contrastive signals on user–item graphs.
  • HCCF [40] employs a hypergraph structure learning module and cross-view hypergraph comparison coding model based on contrastive learning to learn better user representations by characterizing both local and global collaborative relationships in the joint embedding space.

4.4. Experimental Results Analysis

We conducted comprehensive experiments from the following three different perspectives.

4.4.1. Comparison with Baselines

The results of experiments on both datasets are shown in Table 2, covering four evaluation metrics: AUROC, AUPRC, precision, and recall. We can conclude the following:
  • GraphSAGE, GCN, and GAT perform poorly on both datasets, which may be because these embedding methods, relying on simple homogeneous graphs, have weak interaction representations and do not deal well with non-planar relationships between nodes. The superiority of SGL over them is due to its contrastive learning with data augmentation in the recommendation task. GCN performs worst, probably because it suffers from the oversmoothing problem.
  • DualHGCN outperforms HGNN and HyperGCN, which may be because, when facing complex user–item interaction information of different types, HyperGCN and HGNN do not distinguish the interaction types well enough to effectively capture potential higher-order information about users and items based on different interaction types. In addition, the overall performance of DualHGCN is weaker than that of HCCF, which may be because the hypergraph-based contrastive learning model in HCCF utilizes local-to-global cross-view supervision to effectively alleviate the problem of sparse interaction data. HyperGCN and HGNN are essentially different variants of hypergraph networks, and it is clear that optimizing the hypergraph structure through the hyperedge convolution operation contributes more to final performance than improving the hypergraph training method.
  • The performance of SGL is weaker than that of HCCF, which shows the limitations of simple graphs in edge representation. Meanwhile, the inferior performance of HCCF compared with our model may be because its homogeneous hypergraph embedding method cannot handle the connections between different interaction types well.
In summary, ECR-ID outperforms the baselines. There are three possible reasons for this: (1) Compared with simple-graph methods, the introduction of dual-hypergraph convolution allows ECR-ID to capture the potential higher-order features of users and items across different types of data. (2) With the dual-hypergraph convolution as an encoder, the designed interactive information propagation method effectively mitigates the information loss in the hypergraph convolution process and captures more potential higher-order information for users and items with different interaction types; improving the propagation method also optimizes the hypergraph structure so that the model can learn richer information. (3) The designed augmentation and training strategy based on explanation learning effectively eliminates the randomness and data sparsity problems of existing graph augmentation methods and improves performance, while the saliency-based item importance scoring identifies the items that are most important to each user.

4.4.2. Ablation Analysis

In order to verify the different effects of different modules on ECR-ID, we conducted component experiments and comparative analysis. Init denotes only the feature initialization module. ECR denotes the introduction of the basic dual-hypergraph convolution module after the feature initialization, containing no interaction-based information propagation module. ECR-CM denotes the introduction of the two interaction-based information propagation modules to ECR.
As shown in Figure 2, the performance gap between Init and ECR illustrates the effectiveness and efficiency of the dual-hypergraph convolution module. The stronger performance of ECR-CM over ECR may be attributed to the two designed interactive information propagation mechanisms playing an active and effective role in the hypergraph convolution process, which greatly mitigates the detrimental effect of the topology-imbalance problem on the convolution. Finally, the fact that ECR-CM is weaker than ECR-ID suggests that introducing the explanation-guided contrastive learning methodology is a positive and effective way to improve the performance of the final model.

4.4.3. Sensitivity Analysis

This section focuses on the effect of different parameter settings, as follows:
The Learning Factor  ξ c l
As shown in Figure 3, the learning factor ξ_cl moderates the balance between the prediction task and the contrastive task. We increased the value of ξ_cl from 0.1 to 0.5; the results are shown in the figure. The model achieves better results when ξ_cl = 0.1 or 0.2, and the performance of ECR-ID decreases as ξ_cl increases. The decrease may be caused by a conflict between the learning losses of the recommendation task and the contrastive learning task when the contrastive loss is weighted too heavily.
The Learning Factor  β c l
As shown in Figure 4, the learning factor β_cl adjusts the weight of the negative view in the contrastive loss during training, regulating the balance between the positive and negative views in the contrastive learning task. We increased the value of β_cl from 0.1 to 0.6; the results are shown in the figure. The model performance gradually improves as β_cl increases, achieving its best results when β_cl = 0.4 or 0.5, and then slightly decreases as β_cl continues to increase. This may be because an excessive weight on the negative-view contrastive loss prevents the same nodes in the two views from being pulled together, hindering the maximization of mutual information between them and thus degrading model performance.

5. Conclusions

In this paper, we propose an explanation-guided contrastive learning model based on interactive dual-hypergraph convolution. Specifically, we designed a dual-hypergraph convolution model based on two interactive propagation mechanisms to further mitigate the adverse effects of topological imbalances and to comprehensively capture potential higher-order representations between users and items. In addition, we developed an explanation-guided contrastive learning module, which mainly consists of two parts: explanation-guided augmentation and explanation-guided contrastive learning. Finally, based on joint training, the contrastive learning and recommendation models were unified to further improve the recommendation performance. The superiority and validity of the proposed method were verified in experiments on real-world datasets. In the future, we will further optimize the dual-hypergraph networks and explanation-guided contrastive learning strategies. On the one hand, dual-hypergraph networks are complex, and we expect to design more effective optimization methods to reduce their complexity and speed up training. On the other hand, explanation-guided contrastive learning is also computationally demanding, and we expect to design appropriate techniques to reduce its overhead.

Author Contributions

Conceptualization, J.L. and R.G.; Methodology, J.L. and R.G.; Software, L.Y. and D.L.; Validation, X.W. (Xiang Wan) and D.L.; Formal Analysis, J.L. and L.Y.; Investigation, J.L. and R.G.; Resources, X.W. (Xinyun Wu) and D.L.; Data Curation, D.L. and X.W. (Xinyun Wu); Original Draft Preparation, R.G. and D.L.; Writing—Review & Editing, L.Y. and J.H.; Visualization, J.L. and L.Y.; Supervision, X.W. (Xiang Wan) and J.H.; Project Administration, D.L. and X.W. (Xinyun Wu); Funding Acquisition, R.G. and L.Y. All authors have read and agreed to the final version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (62472149, 62206116, 62402164), the Ministry of Education Chunhui Plan Cooperation Project (HZKY20220350), the Hubei Provincial Natural Science Foundation (2021CFB273), and the National College Student Innovation and Entrepreneurship Training Plan Program (202110500014).

Data Availability Statement

All data are available publicly online and on request from the corresponding author.

Conflicts of Interest

Author Donghua Liu was employed by China Waterborne Transport Research Institute, Beijing, China; author Xiang Wan was employed by Wuhan Second Ship Design & Research Institute, Wuhan, China; author Jiwei Hu was employed by China Chile ICT Belt and Road Joint Laboratory, Wuhan, China. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

  1. Zhang, W.; Wang, J. A collective bayesian poisson factorization model for cold-start local event recommendation. In Proceedings of the 21st ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Sydney, NSW, Australia, 10–13 August 2015; pp. 1455–1464. [Google Scholar]
  2. Hu, G.; Dai, X.; Song, Y.; Huang, S.; Chen, J. A Synthetic Approach for Recommendation: Combining Ratings, Social Relations, and Reviews. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015; pp. 1756–1762. [Google Scholar]
  3. Yu, Z.; Lian, J.; Mahmoody, A.; Liu, G.; Xie, X. Adaptive User Modeling with Long and Short-Term Preferences for Personalized Recommendation. In Proceedings of the International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; Volume 7, pp. 4213–4219. [Google Scholar]
  4. Ma, C.; Zhang, Y.; Wang, Q.; Liu, X. Point-of-interest recommendation: Exploiting self-attentive autoencoders with neighbor-aware influence. In Proceedings of the 27th ACM International Conference on Information & Knowledge Management, Torino, Italy, 22–26 October 2018; pp. 697–706. [Google Scholar]
  5. Wu, S.; Tang, Y.; Zhu, Y.; Wang, L.; Xie, X.; Tan, T. Session-based recommendation with graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 346–353. [Google Scholar]
  6. Wang, X.; He, X.; Wang, M.; Feng, F.; Chua, T.S. Neural graph collaborative filtering. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Paris, France, 21–25 July 2019; pp. 165–174. [Google Scholar]
  7. Yu, F.; Zhu, Y.; Liu, Q.; Wu, S.; Wang, L.; Tan, T. TAGNN: Target attentive graph neural networks for session-based recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Xi’an, China, 20–25 July 2020; pp. 1921–1924. [Google Scholar]
  8. Ji, S.; Feng, Y.; Ji, R.; Zhao, X.; Tang, W.; Gao, Y. Dual channel hypergraph collaborative filtering. In Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Virtual, 23–27 August 2020; pp. 2020–2029. [Google Scholar]
  9. Wang, J.; Ding, K.; Hong, L.; Liu, H.; Caverlee, J. Next-item recommendation with sequential hypergraphs. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Xi’an, China, 20–25 July 2020; pp. 1101–1110. [Google Scholar]
  10. Jiang, J.; Wei, Y.; Feng, Y.; Cao, J.; Gao, Y. Dynamic hypergraph neural networks. In Proceedings of the International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; pp. 2635–2641. [Google Scholar]
  11. Zhou, P.; Wu, Z.; Zeng, X.; Wen, G.; Ma, J.; Zhu, X. Totally dynamic hypergraph neural network. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Macao, China, 19–25 August 2023; pp. 2476–2483. [Google Scholar]
  12. Xia, X.; Yin, H.; Yu, J.; Wang, Q.; Cui, L.; Zhang, X. Self-supervised hypergraph convolutional networks for session-based recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 4503–4511. [Google Scholar]
  13. Yadati, N.; Nimishakavi, M.; Yadav, P.; Nitin, V.; Louis, A.; Talukdar, P. Hypergcn: A new method for training graph convolutional networks on hypergraphs. Adv. Neural Inf. Process. Syst. 2019, 32, 1511–1522. [Google Scholar]
  14. Wang, S.; Hu, L.; Wang, Y.; He, X.; Sheng, Q.Z.; Orgun, M.A.; Cao, L.; Ricci, F.; Yu, P.S. Graph Learning based Recommender Systems: A Review. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, Montreal, QB, Canada, 19–27 August 2021; pp. 4644–4652. [Google Scholar]
  15. Wei, Y.; Wang, X.; Li, Q.; Nie, L.; Li, Y.; Li, X.; Chua, T.S. Contrastive learning for cold-start recommendation. In Proceedings of the 29th ACM International Conference on Multimedia, Jinjiang, China, 20–24 October 2021; pp. 5382–5390. [Google Scholar]
  16. Xia, X.; Yin, H.; Yu, J.; Shao, Y.; Cui, L. Self-supervised graph co-training for session-based recommendation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Gold Coast, QLD, Australia, 1–5 November 2021; pp. 2180–2190. [Google Scholar]
  17. Zhu, Y.; Xu, Y.; Yu, F.; Liu, Q.; Wu, S.; Wang, L. Deep graph contrastive representation learning. arXiv 2020, arXiv:2006.04131. [Google Scholar]
  18. Wu, J.; Wang, X.; Feng, F.; He, X.; Chen, L.; Lian, J.; Xie, X. Self-supervised graph learning for recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual, 11–15 July 2021; pp. 726–735. [Google Scholar]
  19. Yu, J.; Yin, H.; Li, J.; Wang, Q.; Hung, N.Q.V.; Zhang, X. Self-supervised multi-channel hypergraph convolutional network for social recommendation. In Proceedings of the Web Conference, Ljubljana, Slovenia, 19–23 April 2021; pp. 413–424. [Google Scholar]
  20. Liu, M.; Lin, Y.; Liu, J.; Liu, B.; Zheng, Q.; Dong, J.S. B2-sampling: Fusing balanced and biased sampling for graph contrastive learning. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 1489–1500. [Google Scholar]
  21. Yi, Z.; Wang, X.; Ounis, I.; Macdonald, C. Multi-modal graph contrastive learning for micro-video recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; pp. 1807–1811. [Google Scholar]
  22. Zhang, J.; Gao, M.; Yu, J.; Guo, L.; Li, J.; Yin, H. Double-scale self-supervised hypergraph learning for group recommendation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Gold Coast, Australia, 1–5 November 2021; pp. 2557–2567. [Google Scholar]
  23. Li, Y.; Chen, H.; Sun, X.; Sun, Z.; Li, L.; Cui, L.; Yu, P.S.; Xu, G. Hyperbolic hypergraphs for sequential recommendation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Gold Coast, QLD, Australia, 1–5 November 2021; pp. 988–997. [Google Scholar]
  24. Zhou, S.M.; Gan, J.Q. Low-level interpretability and high-level interpretability: A unified view of data-driven interpretable fuzzy system modelling. Fuzzy Sets Syst. 2008, 159, 3091–3131. [Google Scholar] [CrossRef]
  25. Chen, X.; Zhang, Y.; Qin, Z. Dynamic explainable recommendation based on neural attentive models. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 53–60. [Google Scholar]
  26. Cai, X.; Guo, W.; Zhao, M.; Cui, Z.; Chen, J. A knowledge graph-based many-objective model for explainable social recommendation. IEEE Trans. Comput. Soc. Syst. 2023, 10, 3021–3030. [Google Scholar] [CrossRef]
  27. Zhang, W.; Zhou, X.; Zeng, X.; Zhu, S. Learning-Motivation-Boosted Explainable Temporal Point Process Model for Course Recommendation. IEEE Access 2024, 12, 93876–93888. [Google Scholar] [CrossRef]
  28. Wang, X.; Liu, N.; Han, H.; Shi, C. Self-supervised heterogeneous graph neural network with co-contrastive learning. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Virtual, 14–18 August 2021; pp. 1726–1736. [Google Scholar]
  29. Wei, C.; Liang, J.; Liu, D.; Wang, F. Contrastive graph structure learning via information bottleneck for recommendation. Adv. Neural Inf. Process. Syst. 2022, 35, 20407–20420. [Google Scholar]
  30. Yang, W.; Huo, T.; Liu, Z.; Lu, C. based Multi-intention Contrastive Learning for Recommendation. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, Taipei, Taiwan, 23–27 July 2023; pp. 2339–2343. [Google Scholar]
  31. Gu, S.; Wang, X.; Shi, C.; Xiao, D. Self-supervised Graph Neural Networks for Multi-behavior Recommendation. In Proceedings of the International Joint Conference on Artificial Intelligence, Vienna, Austria, 23–29 July 2022; pp. 2052–2058. [Google Scholar]
  32. Jing, B.; Park, C.; Tong, H. Hdmi: High-order deep multiplex infomax. In Proceedings of the Web Conference, Ljubljana, Slovenia, 19–23 April 2021; pp. 2414–2424. [Google Scholar]
  33. Bo, D.; Wang, X.; Shi, C.; Shen, H. Beyond low-frequency information in graph convolutional networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 3950–3957. [Google Scholar]
  34. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014, Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part I. [Google Scholar]
  35. Xue, H.; Yang, L.; Rajan, V.; Jiang, W.; Wei, Y.; Lin, Y. Multiplex bipartite network embedding using dual hypergraph convolutional networks. In Proceedings of the Web Conference, Ljubljana, Slovenia, 19–23 April 2021; pp. 1649–1660. [Google Scholar]
  36. He, C.; Xie, T.; Rong, Y.; Huang, W.; Li, Y.; Huang, J.; Ren, X.; Shahabi, C. Bipartite graph neural networks for efficient node representation learning. arXiv 2019, arXiv:1906.11994. [Google Scholar]
  37. Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. Adv. Neural Inf. Process. Syst. 2017, 30, 1–19. [Google Scholar]
  38. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
  39. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
  40. Xia, L.; Huang, C.; Xu, Y.; Zhao, J.; Yin, D.; Huang, J. Hypergraph contrastive collaborative filtering. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; pp. 70–79. [Google Scholar]
Figure 1. The framework of ECR-ID.
Figure 2. Impact of each module. (a) Impact of auroc; (b) impact of auprc; (c) impact of precision; (d) impact of recall.
Figure 3. Impact of ξ c l . (a) Impact of auroc; (b) impact of auprc; (c) impact of precision; (d) impact of recall.
Figure 4. Impact of β c l . (a) Impact of auroc; (b) impact of auprc; (c) impact of precision; (d) impact of recall.
Table 1. Statistics of datasets.
| Dataset | Users | Items | Interactions | Interaction Types | Density |
|---|---|---|---|---|---|
| Amazon | 3781 | 5749 | 60,658 | 2 | 0.279% |
| Alibaba | 1869 | 13,349 | 27,036 | 3 | 0.108% |
Table 2. Performance comparison for auroc, auprc, precision and recall.
| Methods | Amazon AUROC | Amazon AUPRC | Amazon Precision | Amazon Recall | Alibaba AUROC | Alibaba AUPRC | Alibaba Precision | Alibaba Recall |
|---|---|---|---|---|---|---|---|---|
| GraphSAGE | 66.99 | 69.39 | 63.47 | 52.74 | 66.49 | 60.36 | 63.47 | 52.74 |
| GCN | 64.93 | 77.45 | 69.46 | 71.53 | 56.87 | 77.66 | 69.46 | 71.53 |
| GAT | 66.70 | 70.16 | 63.34 | 51.39 | 55.38 | 54.49 | 63.34 | 51.39 |
| HGNN | 80.14 | 82.94 | 78.54 | 69.51 | 69.64 | 73.50 | 78.54 | 69.51 |
| HyperGCN | 68.42 | 73.78 | 67.12 | 61.61 | 61.38 | 65.21 | 67.12 | 61.61 |
| DualHGCN | 83.46 | 88.69 | 85.63 | 76.39 | 84.57 | 86.02 | 85.63 | 76.39 |
| SGL | 90.34 | 94.56 | 89.63 | 80.39 | 87.66 | 88.99 | 89.63 | 80.39 |
| HCCF | 94.27 | 95.32 | 91.56 | 91.67 | 90.13 | 89.32 | 90.27 | 82.13 |
| ECR-ID | 96.68 | 97.89 | 93.70 | 94.98 | 93.57 | 91.76 | 91.76 | 84.94 |