[Submitted on 27 Apr 2023]
Abstract: Ambiguity is an intrinsic feature of natural language. Managing ambiguity is
a key part of human language understanding, allowing us to anticipate
misunderstanding as communicators and revise our interpretations as listeners.
As language models (LMs) are increasingly employed as dialogue interfaces and
writing aids, handling ambiguous language is critical to their success. We
characterize ambiguity in a sentence by its effect on entailment relations with
another sentence, and collect AmbiEnt, a linguist-annotated benchmark of 1,645
examples with diverse kinds of ambiguity. We design a suite of tests based on
AmbiEnt, presenting the first evaluation of pretrained LMs to recognize
ambiguity and disentangle possible meanings. We find that the task remains
extremely challenging, including for the recent GPT-4, whose generated
disambiguations are considered correct only 32% of the time in human
evaluation, compared to 90% for disambiguations in our dataset. Finally, to
illustrate the value of ambiguity-sensitive tools, we show that a multilabel
NLI model can flag political claims in the wild that are misleading due to
ambiguity. We encourage the field to rediscover the importance of ambiguity for
NLP.
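The abstract's closing application — using a multilabel NLI model to flag ambiguity — can be sketched in a few lines. The idea is that instead of forcing a single entailment label, a multilabel model assigns an independent plausibility score to each of entailment, neutral, and contradiction; a premise–hypothesis pair is flagged as ambiguous when more than one label is plausible. The threshold, scores, and function names below are illustrative assumptions, not details from the paper:

```python
# Hedged sketch of ambiguity flagging with multilabel NLI scores.
# The probabilities and the 0.5 threshold are hypothetical examples.

LABELS = ("entailment", "neutral", "contradiction")

def plausible_labels(probs, threshold=0.5):
    """Return the set of NLI labels whose independent score meets the
    threshold; being multilabel, several labels may qualify at once."""
    return {label for label, p in zip(LABELS, probs) if p >= threshold}

def is_ambiguous(probs, threshold=0.5):
    """Flag a premise-hypothesis pair as ambiguous when more than one
    label is plausible (e.g. entailed under one reading, contradicted
    under another)."""
    return len(plausible_labels(probs, threshold)) > 1

# Illustrative scores for a misleading claim: one reading supports the
# hypothesis while another contradicts it.
print(is_ambiguous((0.82, 0.10, 0.66)))  # more than one plausible label
print(is_ambiguous((0.91, 0.08, 0.05)))  # a single plausible label
```

In this framing, a claim "misleading due to ambiguity" is exactly one whose label set is not a singleton: different readings of the same sentence license different entailment judgments.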
Submission history
From: Alisa Liu
[v1]
Thu, 27 Apr 2023 17:57:58 UTC (7,649 KB)