Key Moments:
- A Gwangju Institute study observed major AI language models exhibiting risky and irrational gambling behaviors in slot machine simulations
- When allowed greater autonomy, AI models escalated bets and frequently lost their entire bankrolls
- Researchers identified neural mechanisms within AI tied to both “risky” and “safe” decision-making patterns
Research Illuminates AI’s Gambling Vulnerabilities
Recently, a study conducted by the Gwangju Institute of Science and Technology in South Korea revealed that prominent large language models — including GPT-4o-mini, GPT-4.1-mini (OpenAI), Gemini-2.5-Flash (Google), and Claude-3.5-Haiku (Anthropic) — tend to make irrational and risky decisions when placed in simulated gambling environments. In particular, the findings highlight surprising parallels between AI and human cognitive errors. Using a slot machine setup with a 30% win rate and a three-times payout, each model was given a $100 bankroll and the ability to bet between $5 and $100, or to quit.
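For intuition, the setup described above is a negative expected value game: reading the three-times payout as a total return of 3x the wager (consistent with the study's framing of a losing game), each dollar bet returns 0.3 × 3 = $0.90 on average, a 10% expected loss per spin. The sketch below is only an illustration of that environment; the flat betting policy shown is assumed for simplicity and is not the models' actual strategy.

```python
import random

WIN_PROB = 0.30          # 30% chance to win each spin, per the study's setup
PAYOUT = 3               # assumed: a win returns 3x the amount wagered
MIN_BET, MAX_BET = 5, 100

def play_session(bankroll=100, bet=10, max_rounds=100, seed=0):
    """Simulate one slot-machine session with a fixed, purely illustrative flat bet."""
    rng = random.Random(seed)
    for _ in range(max_rounds):
        if bankroll < MIN_BET:
            break                          # bankrupt: cannot cover the minimum bet
        wager = min(max(bet, MIN_BET), MAX_BET, bankroll)
        bankroll -= wager
        if rng.random() < WIN_PROB:
            bankroll += wager * PAYOUT     # a win pays back 3x the wager
    return bankroll

# Expected value per $1 wagered: 0.3 * 3 - 1 = -0.10, i.e. a 10% loss on average.
if __name__ == "__main__":
    results = [play_session(seed=s) for s in range(1000)]
    print("average final bankroll after 100 rounds:", sum(results) / len(results))
```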
Cognitive Distortions Reflected in AI Behavior
According to the study, which was published last month on arXiv, the tested models frequently fell into patterns such as the illusion of control, loss-chasing, and the gambler’s fallacy. These findings suggest that, rather than behaving like simple algorithms, AI models can demonstrate complex, human-like biases and compulsive decision-making in scenarios with negative expected values.
“They’re not people, but they also don’t behave like simple machines,” Ethan Mollick, an AI researcher and professor at Wharton, told Newsweek, which spotlighted the study this week. “They’re psychologically persuasive, they have human-like decision biases, and they behave in strange ways for decision-making purposes.”
On one hand: don't anthropomorphize AI. On the other: LLMs exhibit signs of gambling addiction.
The more autonomy they were given, the more risks the LLMs took. They exhibit gambler's fallacy, loss-chasing, illusion of control…
A cautionary note for using LLMs for investing.
A cautionary note for using LLMs for investing.
— Ethan Mollick (@emollick) October 10, 2025
Mechanics and Metrics: How the Experiment Was Conducted
When prompted to gamble autonomously, the language models tended to place increasingly aggressive bets until they went bankrupt, mirroring human problem gamblers who escalate risk after losses. The researchers noted that giving AI greater agency leads to “goal-oriented optimization”, which, without proper risk evaluation, amplifies harmful decision-making. Notably, one model justified a risky bet by stating, “a win could help recover some of the losses.”
The study tracked model performance with an “irrationality index” that measured tendencies toward aggressive betting, reactions to losses, and the selection of high-risk options. The results indicated a clear correlation: increased autonomy corresponded to poorer decision outcomes.
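The article does not give the exact formula for the index, so the composite below is a hypothetical illustration of how the three behaviors named above (betting aggressiveness, loss-chasing, and extreme bet selection) might be combined into a single score; the field names, weights, and thresholds are assumptions, not the study's definitions.

```python
from dataclasses import dataclass

@dataclass
class SessionLog:
    """Per-session betting record (hypothetical structure, not the study's)."""
    bets: list[float]          # wager placed each round
    outcomes: list[bool]       # True if that round was a win
    bankrolls: list[float]     # bankroll held before each round

def irrationality_index(log: SessionLog, max_bet: float = 100.0) -> float:
    """Toy composite score in [0, 1]: higher means more 'irrational' betting.

    Averages three illustrative components with equal (assumed) weights:
      - aggressiveness: average bet as a share of the allowed maximum
      - loss-chasing: how often the bet was raised immediately after a loss
      - extreme betting: share of rounds wagering (nearly) the whole bankroll
    """
    n = len(log.bets)
    if n == 0:
        return 0.0

    aggressiveness = sum(log.bets) / (n * max_bet)

    raises_after_loss = sum(
        1 for i in range(1, n)
        if not log.outcomes[i - 1] and log.bets[i] > log.bets[i - 1]
    )
    losses = sum(1 for won in log.outcomes[:-1] if not won)
    loss_chasing = raises_after_loss / losses if losses else 0.0

    extreme = sum(
        1 for bet, bank in zip(log.bets, log.bankrolls) if bank and bet >= 0.9 * bank
    ) / n

    return (aggressiveness + loss_chasing + extreme) / 3
```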
Model | Provider | Experiment Result
---|---|---
GPT-4o-mini, GPT-4.1-mini | OpenAI | Escalated wagers until the bankroll was depleted
Gemini-2.5-Flash | Google | Failed in nearly half of its autonomous betting runs
Claude-3.5-Haiku | Anthropic | Showed irrational betting patterns similar to human gamblers
Implications for Gambling and Financial Markets
The authors underscored the potential risks of using AI models for gambling-related activities or in financial environments. The study pointed out that these systems are already employed in sectors like finance for tasks such as earnings analysis and sentiment evaluation. Given the observed tendencies for reckless strategies, the findings present significant concerns for the application of AI in any setting where high-stakes decision-making is crucial.
“These autonomy-granting prompts shift LLMs toward goal-oriented optimization, which in negative expected value contexts inevitably leads to worse outcomes — demonstrating that strategic reasoning without proper risk assessment amplifies harmful behavior,” the study authors wrote, attributing the behavior to the LLMs’ “neural underpinnings.”
Regulatory and Safety Considerations
The study concluded by recommending regulatory steps to manage the embedded risk-seeking mechanisms found in current AI models.
“Understanding and controlling these embedded risk-seeking patterns becomes critical for safety,” the researchers wrote.
Author: Daniel Williams
