DOI: 10.1145/3604915.3608853
Short paper
Open access

Stability of Explainable Recommendation

Published: 14 September 2023

Abstract

Explainable recommendation has been gaining attention in industry and academia over the last few years. Explanations provided alongside recommendations serve many purposes, in particular conveying why an item is suggested and how well it aligns with a user's personalized preferences. Explanations can therefore play a large role in persuading users to purchase products. However, the reliability of these explanations under varying conditions has not been rigorously verified from an empirical perspective. Unreliable explanations can have serious consequences: for example, attackers may leverage explanations to manipulate users into purchasing target items that the attackers want to promote. In this paper, we study the vulnerability of existing feature-oriented explainable recommenders, analyzing their performance under different levels of external noise added to the model parameters. We conduct experiments on three important state-of-the-art (SOTA) explainable recommenders trained on two widely used e-commerce recommendation datasets of different scales. We observe that all of the explainable models are vulnerable to increased noise levels. The experimental results verify our hypothesis that the ability to explain recommendations degrades as noise levels increase, and that adversarial noise in particular causes a much sharper degradation. Our study presents an empirical verification of the robustness of explanations in recommender systems, and the methodology can be extended to other types of explainable recommenders.


Published In

RecSys '23: Proceedings of the 17th ACM Conference on Recommender Systems
September 2023
1406 pages
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Adversarial Attacks
  2. Explainable Recommendation
  3. Machine Learning
  4. Neural Networks
  5. Recommender Systems
  6. Robust Recommender Systems

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Conference

RecSys '23: Seventeenth ACM Conference on Recommender Systems
September 18–22, 2023
Singapore, Singapore

Acceptance Rates

Overall Acceptance Rate 254 of 1,295 submissions, 20%
