research-article · DOI: 10.1145/3583780.3614884 · CIKM Conference Proceedings

FINRule: Feature Interactive Neural Rule Learning

Published: 21 October 2023

Abstract

Though neural networks have achieved impressive prediction performance, it is still difficult for people to understand what neural networks have learned from the data. The black-box property of neural networks has become one of the main obstacles preventing them from being applied to many high-stakes applications, such as finance and medicine, which have critical requirements on model transparency and interpretability. To enhance the explainability of neural networks, we propose a neural rule learning method, Feature Interactive Neural Rule Learning (FINRule), which combines the expressivity of neural networks with the interpretability of rule-based systems. Specifically, we conduct rule learning as a differentiable discrete combination encoded by a feedforward neural network, in which each layer acts as a logical operator over explainable decision conditions. The first hidden layer acts as a set of sharable atomic conditions, which are connected to the next hidden layer to formulate decision rules. Moreover, we propose to represent both atomic conditions and rules with contextual embeddings, aiming to enrich the expressive power by capturing high-order feature interactions. We conduct comprehensive experiments on real-world datasets to validate both the effectiveness and the explainability of the proposed method.
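The abstract describes a network whose first hidden layer encodes atomic decision conditions and whose subsequent layers act as logical operators that combine them into rules. As a rough illustration of that general idea only (this is not FINRule's actual architecture, which additionally uses learned contextual embeddings; all function names, thresholds, and masks below are illustrative assumptions), a differentiable atomic condition can be modeled as a soft threshold test, and a rule layer as a soft conjunction of selected conditions:

```python
import numpy as np

def soft_atomic_conditions(x, thresholds, temperature=0.05):
    """First layer (sketch): soft threshold conditions.
    c_j(x) = sigmoid((x_j - t_j) / T) approximates the discrete test x_j > t_j;
    a small temperature T makes the approximation sharper."""
    return 1.0 / (1.0 + np.exp(-(x - thresholds) / temperature))

def soft_and(conditions, mask):
    """Conjunction layer (sketch): product of the selected atomic conditions.
    `mask` is a 0/1 vector choosing which conditions enter the rule;
    unselected conditions contribute a neutral factor of 1."""
    return float(np.prod(np.where(mask.astype(bool), conditions, 1.0)))

# Toy rule "x0 > 0.5 AND x2 > 0.3" on a single input vector.
x = np.array([0.9, 0.1, 0.8])
t = np.array([0.5, 0.5, 0.3])
c = soft_atomic_conditions(x, t)          # per-feature condition activations
rule = soft_and(c, np.array([1, 0, 1]))   # near 1.0: both selected tests hold
```

Because every operation is differentiable, thresholds (and, with a relaxation such as a sigmoid gate, the selection mask) could be learned by gradient descent while the trained layer still reads as an explicit IF-THEN rule.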


Cited By

  • (2024) Classification, Regression, and Survival Rule Induction with Complex and M-of-N Elementary Conditions. Machine Learning and Knowledge Extraction 6(1), 554–579. DOI: 10.3390/make6010026. Online publication date: 5 Mar 2024.
  • (2024) Feature-Enhanced Neural Collaborative Reasoning for Explainable Recommendation. ACM Transactions on Information Systems 43(1), 1–33. DOI: 10.1145/3690381. Online publication date: 28 Aug 2024.


Published In
      CIKM '23: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management
      October 2023
      5508 pages
      ISBN:9798400701245
      DOI:10.1145/3583780

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. interpretable machine learning
      2. neuro-symbolic network
      3. rule learning


Conference

CIKM '23

Acceptance Rates

Overall Acceptance Rate: 1,861 of 8,427 submissions, 22%

Article Metrics

  • Downloads (last 12 months): 141
  • Downloads (last 6 weeks): 6

Reflects downloads up to 04 Jan 2025

