Overinformative Question Answering by Humans and Machines

Creative Commons Attribution 4.0 (CC BY) license
Abstract

When faced with a polar question, speakers often provide overinformative answers that go beyond a simple “yes” or “no”. But what principles guide the selection of additional information? In this paper, we provide experimental evidence from two studies suggesting that overinformativeness in human answering is driven by considerations of relevance to the questioner’s goals, which speakers flexibly adjust to the functional context in which the question is uttered. We take these human results as a strong benchmark for investigating question-answering performance in state-of-the-art neural language models, conducting an extensive evaluation on items from the human experiments. We find that most models fail to adjust their answering behavior in a human-like way and tend to include irrelevant information. We show that GPT-3 is highly sensitive to the form of the prompt and only achieves human-like answer patterns when guided by an example and a cognitively motivated explanation.
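To make the prompting manipulation concrete, the sketch below shows the general shape of a one-shot prompt that pairs a worked example with an explanation of why the extra information is relevant, as the abstract describes. This is a minimal illustration assuming a generic GPT-3-style completion interface; the scenario text, example answer, explanation wording, and helper names are hypothetical placeholders, not the authors' actual experimental materials.

```python
# Hypothetical sketch of a one-shot prompt for polar questions.
# The example item and explanation are illustrative, NOT the paper's stimuli.

EXAMPLE = (
    "You are at a bakery that sells bagels and croissants, but no muffins.\n"
    "Customer: Do you have muffins?\n"
    "Answer: No, but we have bagels and croissants.\n"
    "Explanation: The customer likely wants some baked good, so mentioning\n"
    "close alternatives is relevant to their goal; unrelated details are not.\n"
)

def build_prompt(context: str, question: str) -> str:
    """Compose a one-shot prompt: worked example + explanation + target item."""
    return f"{EXAMPLE}\n{context}\nCustomer: {question}\nAnswer:"

prompt = build_prompt(
    context="You run a bookshop that stocks novels and poetry, but no textbooks.",
    question="Do you have textbooks?",
)
print(prompt)  # This string would then be sent to a completion-style model.
```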
