DOI: 10.1145/3307339.3343250

PEARL: Prototype Learning via Rule Learning

Published: 04 September 2019

Abstract

Deep neural networks have demonstrated promising prediction and classification performance in many healthcare applications. However, these models often lack interpretability. In comparison, classical interpretable models such as decision rule lists do not reach the accuracy of deep neural networks and can themselves be too complex to interpret (e.g., due to large tree depths). In this work, we present PEARL, Prototype LeArNing via Rule Learning, which iteratively constructs a decision rule list to guide a neural network in learning representative prototypes. The resulting prototype neural network provides accurate predictions, and each prediction can be easily explained by a prototype and its corresponding rules. Combining the predictive power of neural networks with the interpretability of rules, PEARL achieves state-of-the-art accuracy against various neural network baselines and provides simple, interpretable decision rules to explain its predictions. Experimental results also show that PEARL's interpretations are simpler than those of a standard decision rule list while achieving much higher accuracy.
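The abstract describes an alternating scheme: learn a rule list, use it to guide prototype learning, and repeat. The following is a minimal, self-contained sketch of one such round, not the authors' implementation: a shallow decision tree stands in for the decision rule list, a nearest-prototype classifier stands in for the prototype neural network, and the names rule_guided_prototypes and nearest_prototype_predict are hypothetical.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def rule_guided_prototypes(X, y, max_depth=3, seed=0):
        # Step 1: fit a small, interpretable rule model (a shallow tree
        # standing in for the paper's decision rule list).
        rules = DecisionTreeClassifier(max_depth=max_depth, random_state=seed)
        rules.fit(X, y)
        # Step 2: each leaf (one conjunction of rules) defines a group of
        # samples; anchor a prototype at each group's mean and label it
        # with the group's majority class. Assumes y holds integer class
        # ids (our guess at what rule "guidance" means, not the paper's).
        leaves = rules.apply(X)
        prototypes, labels = [], []
        for leaf in np.unique(leaves):
            idx = leaves == leaf
            prototypes.append(X[idx].mean(axis=0))
            labels.append(np.bincount(y[idx]).argmax())
        return rules, np.vstack(prototypes), np.asarray(labels)

    def nearest_prototype_predict(prototypes, labels, X):
        # Each point inherits the label of its closest prototype, so every
        # prediction is explained by one prototype, which maps back to the
        # rule conjunction that produced it.
        dists = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
        return labels[dists.argmin(axis=1)]

    # Toy usage on synthetic data.
    X = np.random.default_rng(0).normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(int)
    rules, protos, proto_labels = rule_guided_prototypes(X, y)
    preds = nearest_prototype_predict(protos, proto_labels, X)

In PEARL itself, the rules and prototypes co-evolve over repeated rounds and the prototype model is a neural network; this single round only illustrates how rules partition the data and how each prediction traces back to one prototype and its rules.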


Cited By

  • (2023) Exploring Evaluation Methods for Interpretable Machine Learning: A Survey. Information 14(8), 469. DOI: 10.3390/info14080469. Online publication date: 21-Aug-2023.



Published In

BCB '19: Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics
September 2019
716 pages
ISBN:9781450366663
DOI:10.1145/3307339
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. deep learning
  2. healthcare
  3. interpretable machine learning

Qualifiers

  • Poster

Conference

BCB '19

Acceptance Rates

BCB '19 paper acceptance rate: 42 of 157 submissions (27%).
Overall acceptance rate: 254 of 885 submissions (29%).

Bibliometrics

Article Metrics

  • Downloads (last 12 months): 55
  • Downloads (last 6 weeks): 7

Reflects downloads up to 03 Jan 2025.

