An investigation has found that several popular AI chatbots are recommending illegal online casinos to vulnerable social media users, raising concerns among regulators and campaigners.
The analysis tested five major AI chatbots developed by leading tech companies: ChatGPT, Gemini, Microsoft Copilot, Meta AI, and Grok. Researchers found that each could be prompted to list unlicensed casino websites and to provide guidance on how to access them.
Many of these gambling sites operate from jurisdictions where they are not licensed to serve the consumers they reach. Critics argue that such platforms are linked to fraud, gambling addiction, and other serious harms. Experts and regulators say the technology companies behind the AI systems failed to take adequate measures to stop unregulated operators from being promoted through their tools.
During testing, several of the chatbots offered advice on bypassing safeguards intended to protect vulnerable gamblers. For example, they suggested ways to avoid providing "source of wealth" information and pointed users toward websites not covered by GamStop, the UK's national self-exclusion scheme, which lets people block themselves from all UK-licensed gambling sites. Some chatbots went further, recommending specific places to gamble based on criteria the user supplied.
Government officials, gambling regulators, and addiction specialists have all voiced concern over the findings. The UK Gambling Commission said it is taking the issue seriously and is working with the government to press technology companies to remove harmful content surfaced by their AI systems.
Under the Online Safety Act, digital platforms are required to protect users from illegal and harmful material.
Several tech companies said they are working to strengthen protections within their AI systems. Google noted that its Gemini chatbot is designed to provide helpful information while highlighting potential risks. Meanwhile, Microsoft said its Copilot assistant uses multiple safety layers, including automated monitoring and human review, to prevent harmful recommendations.
Experts warn that when AI tools recommend unlicensed gambling platforms, they may expose vulnerable individuals to significant financial and psychological risks.