Why Language Models Hallucinate
- Series: School of Mathematics Colloquium
- Time: Thursday, October 2, 2025, 11:00–11:50
- Location: Skiles 005
- Speaker: Santosh Vempala – Georgia Tech – vempala@cc.gatech.edu
Large language models often guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such "hallucinations" persist even in state-of-the-art systems. We analyze this phenomenon from a mathematical perspective and find that the statistical pressures of current training pipelines induce hallucinations; moreover, current evaluation procedures reward guessing over acknowledging uncertainty. The talk will be fact-based, and the speaker will readily admit ignorance.
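The claim that evaluations reward guessing can be illustrated with a small sketch (a hypothetical example, not code from the talk): under standard 0/1 accuracy grading, a guess that is correct with any probability p > 0 has higher expected score than answering "I don't know", which always scores 0 — so the grading scheme itself pushes models toward confident guessing.

```python
# Illustrative sketch (not from the talk): why binary accuracy
# grading rewards guessing over acknowledging uncertainty.

def expected_score(p_correct: float, guess: bool) -> float:
    """Expected score under 0/1 accuracy grading.

    Guessing earns 1 with probability p_correct, else 0,
    so its expectation is p_correct.
    Abstaining ("I don't know") always earns 0.
    """
    return p_correct if guess else 0.0

# Even a 10%-confident guess beats admitting uncertainty:
assert expected_score(0.10, guess=True) > expected_score(0.10, guess=False)
```

A grading scheme that instead penalized confident wrong answers (negative score for an incorrect guess) would change this calculus, which is one reason the abstract points at evaluation procedures rather than model capability alone.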