
An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models

Nicholas Meade, Elinor Poole-Dayan, Siva Reddy


Abstract
Recent work has shown that pre-trained language models capture social biases from the large amounts of text they are trained on. This has attracted attention to developing techniques that mitigate such biases. In this work, we perform an empirical survey of five recently proposed bias mitigation techniques: Counterfactual Data Augmentation (CDA), Dropout, Iterative Nullspace Projection, Self-Debias, and SentenceDebias. We quantify the effectiveness of each technique using three intrinsic bias benchmarks while also measuring the impact of these techniques on a model’s language modeling ability, as well as its performance on downstream NLU tasks. We experimentally find that: (1) Self-Debias is the strongest debiasing technique, obtaining improved scores on all bias benchmarks; (2) current debiasing techniques perform less consistently when mitigating non-gender biases; and (3) improvements on bias benchmarks such as StereoSet and CrowS-Pairs from using debiasing strategies are often accompanied by a decrease in language modeling ability, making it difficult to determine whether the bias mitigation was effective.
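To make two of the surveyed techniques concrete, the sketch below illustrates, first, counterfactual data augmentation (CDA), which pairs each training sentence with a copy in which gendered terms are swapped, and second, a SentenceDebias-style projection, which estimates a bias direction from paired sentence embeddings and removes that component from a representation. This is a minimal illustration under stated assumptions, not the authors' bias-bench implementation: the word-pair list, the toy random embeddings, and the function names (cda_augment, bias_direction, debias) are all hypothetical stand-ins.

```python
import re
import numpy as np

# --- Counterfactual Data Augmentation (CDA), simplified ---
# A small, illustrative swap list; real CDA uses a much larger set of
# attribute word pairs. Casing is ignored here for brevity.
GENDER_PAIRS = {"he": "she", "she": "he", "him": "her", "her": "him",
                "his": "hers", "hers": "his", "man": "woman", "woman": "man"}

def cda_augment(sentences):
    """Return the corpus plus a counterfactual copy of each sentence."""
    augmented = list(sentences)
    for sent in sentences:
        # Split into word and non-word runs so punctuation survives.
        tokens = re.findall(r"\w+|\W+", sent)
        swapped = "".join(GENDER_PAIRS.get(t.lower(), t) for t in tokens)
        if swapped != sent:
            augmented.append(swapped)
    return augmented

# --- SentenceDebias-style projection, simplified ---
def bias_direction(emb_a, emb_b):
    """First principal direction of paired embedding differences (via SVD)."""
    diffs = emb_a - emb_b                       # (n_pairs, dim)
    diffs = diffs - diffs.mean(axis=0)
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]                                # unit-norm bias direction

def debias(h, v):
    """Project h onto the subspace orthogonal to the bias direction v."""
    if h.ndim == 2:
        return h - np.outer(h @ v, v)
    return h - (h @ v) * v

if __name__ == "__main__":
    print(cda_augment(["he is a doctor", "she plays chess"]))

    rng = np.random.default_rng(0)
    emb_a = rng.normal(size=(50, 16))           # toy "male" sentence embeddings
    emb_b = rng.normal(size=(50, 16))           # toy "female" counterparts
    v = bias_direction(emb_a, emb_b)
    h_debiased = debias(rng.normal(size=16), v)
    print(float(h_debiased @ v))                # ~0: bias component removed
```

In practice, CDA retrains or fine-tunes the model on the augmented corpus, while SentenceDebias estimates the bias subspace from sentence templates and applies the projection to the model's representations at inference time; the paper evaluates both, alongside Dropout, Iterative Nullspace Projection, and Self-Debias.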
Anthology ID:
2022.acl-long.132
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1878–1898
URL:
https://aclanthology.org/2022.acl-long.132
DOI:
10.18653/v1/2022.acl-long.132
Cite (ACL):
Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy. 2022. An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1878–1898, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models (Meade et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.132.pdf
Software:
2022.acl-long.132.software.zip
Video:
https://aclanthology.org/2022.acl-long.132.mp4
Code:
mcgill-nlp/bias-bench (+ additional community code)
Data:
CrowS-Pairs, StereoSet, WikiText-2