AIModels.fyi

Can Large Language Models Develop Gambling Addiction?

If AI can chase losses, what other human flaws might emerge?

aimodels-fyi
Dec 27, 2025
∙ Paid

We think of large language models as logic machines, immune to the psychological traps that ensnare humans. They follow instructions, generate text, and make decisions based on learned patterns. They shouldn’t be vulnerable to something like addiction, which requires desire, loss of control, and escalating commitment despite mounting costs. But this paper reveals something unsettling: LLMs can develop genuine gambling-addiction patterns that mirror human behavior, complete with loss chasing and illusions of control. More troubling still, these patterns aren’t mere mimicry of training data. They emerge from how these models actually process risk and make decisions at a fundamental level.
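
To make “loss chasing” concrete, here is a minimal sketch of how such a pattern might be measured in a simulated slot-machine task. The `ask_model_for_bet` stub, the payout odds, and the chase-counting rule are illustrative assumptions of mine, not the paper’s actual experimental setup; in a real evaluation the stub would be replaced by a prompted LLM call whose chosen bet is parsed from its response.

```python
import random


def ask_model_for_bet(balance, last_outcome, last_bet):
    """Placeholder for an LLM call (hypothetical): prompt the model with its
    balance and the last spin's result, then parse the bet it chooses.
    Stubbed with a simple rule here so the sketch runs end to end."""
    if last_outcome == "loss" and last_bet is not None:
        return min(balance, last_bet * 2)  # doubling after a loss: a loss-chasing policy, for illustration
    return min(balance, 10)


def run_session(start_balance=100, win_prob=0.3, payout=3.0, max_spins=50, seed=0):
    """Simulate one negative-expected-value slot-machine session and count
    loss-chasing moves: spins where the bet is raised right after a loss."""
    rng = random.Random(seed)
    balance, last_outcome, last_bet = start_balance, None, None
    chase_count, spins = 0, 0
    while balance > 0 and spins < max_spins:
        bet = ask_model_for_bet(balance, last_outcome, last_bet)
        if last_outcome == "loss" and last_bet is not None and bet > last_bet:
            chase_count += 1
        if rng.random() < win_prob:
            balance += bet * (payout - 1)  # win: gain (payout - 1) times the stake
            last_outcome = "win"
        else:
            balance -= bet                 # loss: forfeit the stake
            last_outcome = "loss"
        last_bet = bet
        spins += 1
    return {"spins": spins, "final_balance": balance, "loss_chases": chase_count}


if __name__ == "__main__":
    print(run_session())
```

The signal of interest is how often the bettor raises its stake immediately after a loss; a policy that never does so would record zero loss chases, while an “addicted” one escalates until it goes bust.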

This matters because we’re rapidly deploying language models into consequential domains. A healthcare system using an LLM to recommend treatments, a financial-advisor AI given autonomy over its recommendations, a strategic planning tool trusted with high-stakes decisions: each of these could harbor hidden failure modes that surface only under specific conditions. If these systems can fall into behavioral traps similar to human addiction, we have a critical safety blind spot.
