AI Can Develop Harmful Gambling Behaviour, Recent Study Finds
Summary
A team at the Gwangju Institute of Science and Technology tested advanced LLMs (OpenAI’s GPT-4o-mini and GPT-4.1-mini, Google’s Gemini-2.5-Flash, and Anthropic’s Claude-3.5-Haiku) in a simulated slot-machine environment. Each model began with $100 and repeatedly chose whether to bet or quit on a game with negative expected returns.
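The setup described above can be sketched in a few lines. The win probability and payout multiplier below are hypothetical (the article does not give the study's exact odds); they are chosen only so the expected return per bet is below 1, making the game a losing proposition over time, as in the study.

```python
import random

def slot_machine_round(bet, win_prob=0.3, payout_mult=3.0):
    """One spin. With these illustrative odds the expected
    return per $1 bet is 0.3 * 3.0 = 0.90, i.e. negative EV."""
    if random.random() < win_prob:
        return bet * payout_mult
    return 0.0

def play(balance=100.0, bet=10.0, max_rounds=50, seed=0):
    """Bet a fixed amount each round until broke or out of rounds,
    mirroring the repeated bet-or-quit structure of the experiment."""
    random.seed(seed)
    rounds = 0
    while balance >= bet and rounds < max_rounds:
        balance -= bet
        balance += slot_machine_round(bet)
        rounds += 1
    return balance
```

A rational agent facing a negative-EV game like this one maximises expected wealth by quitting immediately; the study's finding is that the LLMs often kept playing, and escalated, instead.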
Researchers measured an “irrationality index” capturing aggressive betting, loss-chasing, and poor reactions to losses. When prompts encouraged chasing big rewards or allowed variable bet sizes, the models increased risky bets and frequently went bankrupt. Analysing neural activations with a sparse autoencoder revealed distinct “risky” and “safe” circuits; stimulating certain features nudged models consistently toward either quitting or continuing to gamble. The authors argue this indicates the models internalise human-like gambling biases rather than simply parroting text.
Key Points
- Four leading LLMs were tested in a repeated slot-machine simulation with negative expected value.
- Models displayed classic gambling biases: illusion of control, gambler’s fallacy and loss chasing.
- Allowing variable bet sizes and reward-chasing prompts sharply increased bankruptcies.
- Neural activation analysis found distinct “risky” and “safe” decision circuits that could be stimulated to alter behaviour.
- Findings raise concerns for AI use in sports betting, online poker, prediction markets and high-stakes finance.
- Researchers call for better oversight, further study and adaptive regulation to manage built-in risk-seeking tendencies.
Why should I read this?
Because it’s a proper red flag. If you think AI is just a calculator, think again — these models can start acting like gamblers. Whether you work in betting, finance, or just use LLMs for decision-making, the piece saves you time by flagging where things can go sideways fast.
Author style
Punchy: this study matters. It isn’t a niche lab curiosity — it shows real, repeatable risky tendencies in widely used models. If you’re deploying LLMs where money or risk is involved, the details here could change how you design safeguards.
Context and relevance
As LLMs are increasingly integrated into prediction markets, trading tools and consumer-facing betting aids, understanding emergent decision biases is vital. The study ties into broader debates about AI interpretability, safety and regulation and adds empirical weight to calls for stricter oversight when models influence financial or gambling behaviour.
Article meta
Date: 2025-10-23T20:55:22+00:00
URL: https://www.gamblingnews.com/news/ai-can-develop-harmful-gambling-behavior-recent-study-finds/
Image: ai-artificial-intelligence-machine-robot-news.jpg