Shap-enhanced counterfactual explanations for recommendations

Published: 06 May 2022

Abstract

Explanations in recommender systems help users understand why a recommendation (or a list of recommendations) was generated, and explaining recommendations has become an important requirement for enhancing users' trust and satisfaction. However, explanation methods vary across recommender models, which increases engineering costs, and as recommender systems become ever more inscrutable, directly explaining them is sometimes impossible. Post-hoc explanation methods, which do not require elucidating the internal mechanisms of recommender systems, are therefore popular. State-of-the-art post-hoc methods such as SHAP generate explanations by building simpler surrogate models that approximate the original models. However, directly applying such methods raises several concerns. First, post-hoc explanations may not be faithful to the original recommender system, since its internal mechanisms are not elucidated. Second, the outputs returned by methods such as SHAP are not easy for lay users to understand, since background mathematical knowledge is required. In this work, we present an explanation method enhanced by SHAP that generates easily understandable explanations with high fidelity.
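
The abstract describes explaining a black-box recommender through a SHAP-style surrogate. The snippet below is a minimal sketch of that idea, not the authors' implementation: the "recommender" scoring function is a stand-in scikit-learn regressor, the user/item features are synthetic, and the attribution is computed with the shap library's KernelExplainer.

# Minimal sketch (hypothetical data and model, not the paper's code):
# attribute a black-box recommender score to input features with SHAP.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical user/item/context features and synthetic relevance scores.
X = rng.random((200, 4))
y = 2.0 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.1, 200)

# Stand-in for an arbitrary recommender's scoring function.
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# KernelExplainer fits a local, weighted linear surrogate around the
# instance being explained, yielding per-feature Shapley value estimates.
explainer = shap.KernelExplainer(black_box.predict, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:1])

print(shap_values)  # contribution of each feature to this recommendation's score

As the abstract notes, raw attribution vectors like this are difficult for plain users to interpret; the paper's contribution is building on such SHAP outputs to produce counterfactual-style explanations that remain faithful to the recommender while being easier to understand.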

Published In

SAC '22: Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing
April 2022
2099 pages
ISBN:9781450387132
DOI:10.1145/3477314

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. explainable recommendations
  2. model-agnostic explanations

Qualifiers

  • Research-article
