%0 Conference Proceedings
%T Not Wacky vs. Definitely Wacky: A Study of Scalar Adverbs in Pretrained Language Models
%A Lorge, Isabelle
%A Pierrehumbert, Janet B.
%Y Belinkov, Yonatan
%Y Hao, Sophie
%Y Jumelet, Jaap
%Y Kim, Najoung
%Y McCarthy, Arya
%Y Mohebbi, Hosein
%S Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
%D 2023
%8 December
%I Association for Computational Linguistics
%C Singapore
%F lorge-pierrehumbert-2023-wacky
%X Vector-space models of word meaning all assume that words occurring in similar contexts have similar meanings. Words that are similar in their topical associations but differ in their logical force tend to emerge as semantically close – creating well-known challenges for NLP applications that involve logical reasoning. Pretrained language models such as BERT, RoBERTa, GPT-2, and GPT-3 hold the promise of performing better on logical tasks than classic static word embeddings. However, reports are mixed about their success. Here, we advance this discussion through a systematic study of scalar adverbs, an under-explored class of words with strong logical force. Using three different tasks involving both naturalistic social media data and constructed examples, we investigate the extent to which BERT, RoBERTa, GPT-2 and GPT-3 exhibit knowledge of these common words. We ask: 1) Do the models distinguish amongst the three semantic categories of MODALITY, FREQUENCY and DEGREE? 2) Do they have implicit representations of full scales from maximally negative to maximally positive? 3) How do word frequency and contextual factors impact model performance? We find that despite capturing some aspects of logical meaning, the models still have obvious shortfalls.
%R 10.18653/v1/2023.blackboxnlp-1.23
%U https://aclanthology.org/2023.blackboxnlp-1.23
%U https://doi.org/10.18653/v1/2023.blackboxnlp-1.23
%P 296-316