
Evaluating Saliency Methods for Neural Language Models

Shuoyang Ding, Philipp Koehn


Abstract
Saliency methods are widely used to interpret neural network predictions, but different variants of saliency methods often disagree even on the interpretations of the same prediction made by the same model. In such cases, how do we decide whether these interpretations are trustworthy enough to be used in analyses? To address this question, we conduct a comprehensive and quantitative evaluation of saliency methods on a fundamental category of NLP models: neural language models. We evaluate the quality of prediction interpretations from two perspectives, each representing a desirable property of these interpretations: plausibility and faithfulness. Our evaluation is conducted on four different datasets constructed from existing human annotations of syntactic and semantic agreement, at both the sentence and document level. Through our evaluation, we identify various ways in which saliency methods can yield low-quality interpretations. We recommend that future work deploying such methods to neural language models carefully validate their interpretations before drawing insights from them.
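As an illustration of the kind of interpretation under evaluation, below is a minimal sketch of one common saliency variant, gradient × input, applied to a language model's next-word prediction. It uses GPT-2 through the Hugging Face transformers library purely for concreteness; this is not the authors' evaluation code (their implementation is in the shuoyangd/tarsius repository linked below), and the example sentence is only illustrative.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative model choice; the paper's experiments are not assumed to use GPT-2.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

sentence = "The keys to the cabinet"  # hypothetical example prefix
inputs = tokenizer(sentence, return_tensors="pt")
input_ids = inputs["input_ids"]

# Embed the tokens explicitly so gradients can be taken with respect to the
# input embeddings rather than the discrete token ids.
embeddings = model.transformer.wte(input_ids)
embeddings.retain_grad()

outputs = model(inputs_embeds=embeddings, attention_mask=inputs["attention_mask"])
logits = outputs.logits  # shape: (1, sequence_length, vocab_size)

# Back-propagate the score of the most likely next word given the prefix.
next_token_logits = logits[0, -1]
predicted_id = int(next_token_logits.argmax())
next_token_logits[predicted_id].backward()

# Gradient x input, summed over the embedding dimension, yields one saliency
# score per prefix token; larger magnitude suggests more influence on the prediction.
saliency = (embeddings.grad[0] * embeddings[0]).sum(dim=-1)
for token, score in zip(tokenizer.convert_ids_to_tokens(input_ids[0].tolist()),
                        saliency.tolist()):
    print(f"{token:>12s}  {score:+.4f}")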
Anthology ID: 2021.naacl-main.399
Volume: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month: June
Year: 2021
Address: Online
Editors: Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 5034–5052
URL: https://aclanthology.org/2021.naacl-main.399
DOI: 10.18653/v1/2021.naacl-main.399
Cite (ACL): Shuoyang Ding and Philipp Koehn. 2021. Evaluating Saliency Methods for Neural Language Models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5034–5052, Online. Association for Computational Linguistics.
Cite (Informal): Evaluating Saliency Methods for Neural Language Models (Ding & Koehn, NAACL 2021)
PDF: https://aclanthology.org/2021.naacl-main.399.pdf
Video: https://aclanthology.org/2021.naacl-main.399.mp4
Code: shuoyangd/tarsius
Data: WikiText-2, WinoBias