Bluesky Thread

Hallucinations are accidentally created by evals


They come from post-training. Reasoning models hallucinate more because we do more rigorous post-training on them

The problem is we reward them for being confident

cdn.openai.com/pdf/d04913be...
highlighted text: language models are optimized to be good test-takers, and guessing when uncertain improves test performance

full text: 

Like students facing hard exam questions, large language models sometimes guess when
uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such
“hallucinations” persist even in state-of-the-art systems and undermine trust. We argue that
language models hallucinate because the training and evaluation procedures reward guessing over
acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern
training pipeline. Hallucinations need not be mysterious—they originate simply as errors in binary
classification. If incorrect statements cannot be distinguished from facts, then hallucinations
in pretrained language models will arise through natural statistical pressures. We then argue
that hallucinations persist due to the way most evaluations are graded—language models are
optimized to be good test-takers, and guessing when uncertain improves test performance. This
“epidemic” of penalizing uncertain responses can only be addressed through a socio-technical
mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate
leaderboards, rather than introducing additional hallucination evaluations. This change may
steer the field toward more trustworthy AI systems.
e.g. MMLU tests knowledge, it's one of the core benchmarks that people care about

if it says "idk", its score goes down

if it guesses confidently regardless of its actual uncertainty, its score goes up (and so do hallucinations)
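
A rough back-of-the-envelope sketch of that incentive (Python, with made-up question counts and confidences, not from the paper): under binary 0/1 grading, "idk" scores the same as a wrong answer, so any nonzero chance of a lucky guess beats abstaining.

```python
# Sketch: expected benchmark score under binary grading, for a model that abstains
# when unsure vs. one that always guesses. Numbers are illustrative only.

def expected_score(answers, abstain_when_unsure):
    """answers: list of (p_correct, is_sure) pairs.
    Grading: 1 for a correct answer, 0 for a wrong answer, 0 for "idk"."""
    total = 0.0
    for p_correct, is_sure in answers:
        if abstain_when_unsure and not is_sure:
            total += 0.0          # abstaining is scored exactly like being wrong
        else:
            total += p_correct    # expected credit for guessing
    return total / len(answers)

# 10 questions: sure (and right) on 6, unsure on 4 (20% chance if it guesses)
questions = [(1.0, True)] * 6 + [(0.2, False)] * 4

print(expected_score(questions, abstain_when_unsure=True))   # 0.60 -- honest model
print(expected_score(questions, abstain_when_unsure=False))  # 0.68 -- guessing wins the leaderboard
```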
