AIModels.fyi

LLMs will lie forever

This paper says hallucinations are never going away

aimodels-fyi
Sep 26, 2024

Can we ever really trust AI? As LLMs become more advanced, they still face a major issue: hallucinations—when they produce false or nonsensical information. A recent paper argues that this problem isn’t a temporary glitch, but a permanent feature of how AI works. If true, this could and probably should change how we approach AI in the future.

By the way, you can check out a short video summary of this paper and many others on the new YouTube channel!

Overview

The paper, titled "LLMs Will Always Hallucinate, and We Need to Live With This," makes a bold claim: hallucinations in AI are inevitable because of the way these systems are built. The authors argue that no matter how much we improve AI—whether through better design, more data, or smarter fact-checking—there will always be some level of hallucination.

Their argument is grounded in mathematical theory. Drawing on ideas from computability theory and Gödel's Incompleteness Theorem, they show that certain limitations are unavoidable no matter how the system is built. If they're right, we'll have to rethink our goals for AI systems, especially when it comes to making them completely reliable.
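The full proofs live in the paper itself, but to give a flavor of this style of argument, here is a rough sketch (my illustration, not the authors' construction) of the classic halting-problem diagonalization that undecidability results like these typically lean on. The `halts` and `paradox` functions below are hypothetical stand-ins used only to make the point that a perfect, fully general verifier cannot exist.

```python
# Illustrative sketch only, not the paper's proof: the classic diagonal
# argument showing that no program can perfectly decide halting -- the
# kind of undecidability result this style of argument builds on.

def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical perfect oracle: True iff the program halts on the input.
    The argument assumes such an oracle exists, then derives a contradiction."""
    raise NotImplementedError("no total computable halting oracle exists")


def paradox(program_source: str) -> None:
    """Diagonal construction: do the opposite of whatever the oracle predicts
    about this very program run on its own source code."""
    if halts(program_source, program_source):
        while True:   # oracle says "halts" -> loop forever
            pass
    # oracle says "loops forever" -> halt immediately


# Running paradox on its own source makes halts() wrong either way:
# if it answers True, paradox loops; if it answers False, paradox halts.
# So the assumed oracle cannot exist -- and, by similar reasoning, neither
# can a fully general, always-correct fact-checker bolted onto an LLM.
```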
