
Explainable Recommender With Geometric Information Bottleneck

Published: 01 July 2024

Abstract

Explainable recommender systems can explain their recommendation decisions, enhancing user trust in the systems. Most explainable recommender systems either rely on human-annotated rationales to train models for explanation generation or leverage the attention mechanism to extract important text spans from reviews as explanations. The extracted rationales are often confined to an individual review and may fail to identify the implicit features beyond the review text. To avoid the expensive human annotation process and to generate explanations beyond individual reviews, we propose to incorporate a geometric prior learnt from user-item interactions into a variational network which infers latent factors from user-item reviews. The latent factors from an individual user-item pair can be used for both recommendation and explanation generation, which naturally inherit the global characteristics encoded in the prior knowledge. Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender using the Wasserstein distance while achieving performance comparable to existing content-based recommender systems in terms of recommendation behaviours.
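The abstract outlines a variational setup in which a single latent representation, inferred from a user-item review and regularised towards a prior built from user-item interactions, drives both rating prediction and explanation generation. The sketch below is a minimal, hypothetical illustration of such a setup (Gaussian posterior and prior, bag-of-words reviews, a KL regulariser standing in for the information-bottleneck term); the class and layer names (GeoPriorRecommender, prior_mlp, word_head) are invented for this example and do not reflect the authors' implementation, which the abstract describes as using a geometric prior and the Wasserstein distance.

```python
# Minimal illustrative sketch (not the authors' code): a variational recommender
# whose review posterior is pulled towards a prior derived from user/item
# embeddings learnt from interactions. The same latent factors feed a rating
# head and a word head, so they support both recommendation and explanation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeoPriorRecommender(nn.Module):
    def __init__(self, vocab_size, n_users, n_items, d_latent=64):
        super().__init__()
        # Interaction-based embeddings (stand-ins for graph-learnt representations).
        self.user_emb = nn.Embedding(n_users, d_latent)
        self.item_emb = nn.Embedding(n_items, d_latent)
        # Posterior encoder over the review text (bag-of-words input) -> (mu, logvar).
        self.enc = nn.Linear(vocab_size, 2 * d_latent)
        # Prior network conditioned on the user-item pair -> (mu0, logvar0).
        self.prior_mlp = nn.Linear(2 * d_latent, 2 * d_latent)
        # Decoders: rating prediction and a word distribution for explanations.
        self.rate_head = nn.Linear(d_latent, 1)
        self.word_head = nn.Linear(d_latent, vocab_size)

    def forward(self, user, item, review_bow):
        mu, logvar = self.enc(review_bow).chunk(2, dim=-1)
        ui = torch.cat([self.user_emb(user), self.item_emb(item)], dim=-1)
        mu0, logvar0 = self.prior_mlp(ui).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        return z, (mu, logvar), (mu0, logvar0)

    def loss(self, user, item, review_bow, rating, beta=0.1):
        z, (mu, logvar), (mu0, logvar0) = self(user, item, review_bow)
        # Rating reconstruction from the shared latent factors.
        rec = F.mse_loss(self.rate_head(z).squeeze(-1), rating)
        # Word reconstruction: the same latent explains which words the pair evokes.
        expl = -(F.log_softmax(self.word_head(z), -1) * review_bow).sum(-1).mean()
        # KL between the review posterior and the interaction-based prior
        # (information-bottleneck-style regulariser).
        kl = 0.5 * (logvar0 - logvar
                    + (logvar - logvar0).exp()
                    + (mu - mu0).pow(2) / logvar0.exp()
                    - 1).sum(-1).mean()
        return rec + expl + beta * kl

# Toy usage with random data.
model = GeoPriorRecommender(vocab_size=500, n_users=10, n_items=20)
u = torch.randint(0, 10, (8,)); i = torch.randint(0, 20, (8,))
bow = torch.rand(8, 500); r = torch.rand(8) * 5
print(model.loss(u, i, bow, r).item())
```

In a setup like this, an explanation for a recommendation can be read off the word distribution produced by word_head from the same latent factors that produced the predicted rating, so the rationale is not confined to any single review span; swapping the KL term for a Wasserstein-based regulariser, as the abstract suggests, would change only the divergence used in the loss.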



Published In

IEEE Transactions on Knowledge and Data Engineering, Volume 36, Issue 7, July 2024, 876 pages

Publisher

IEEE Educational Activities Department, United States


Qualifiers

  • Research-article
