Cited By
Alangari, N., El Bachir Menai, M., Mathkour, H., & Almosallam, I. (2023). Exploring Evaluation Methods for Interpretable Machine Learning: A Survey. Information, 14(8), 469. https://doi.org/10.3390/info14080469. Online publication date: 21-Aug-2023.