An AI toy exposed 50,000 records of its conversations with children to anyone with a Gmail account


Even now that the data is secured, Margolis and Thacker say it raises questions about how many people inside companies making AI games have access to the data they collect, how their access is monitored, and how well their credentials are protected. “There are cascading privacy implications from this,” Margolis says. “All it takes is one employee with a bad password, and we’re back to the same place we started, with everything exposed to the public internet.”

Margolis adds that this kind of sensitive information about a child’s thoughts and feelings can be used in horrific forms of child abuse or manipulation. “To be honest, this is a kidnapper’s dream,” he says. “We’re talking about information that would allow someone to lure a child into a really dangerous situation, and it was basically accessible to anyone.”

Margolis and Thacker point out that, in addition to its inadvertent data exposure, Bondu also appears (based on what they saw inside its administrative console) to use Google’s Gemini and OpenAI’s GPT-5, and as a result may be sharing information about children’s conversations with those companies. Bondu’s Anam Ravid responded to this point in an email, noting that the company uses “third-party enterprise AI services to generate responses and perform certain health checks, which include securely transmitting relevant conversational content for processing.” But he adds that the company takes precautions “to minimize what is sent, use contractual and technical controls, and work within enterprise configurations where prompts/outputs defined by providers are not used to train their models.”

The researchers also warn that part of the risk for AI toy companies may be that they are more likely to use AI in the coding of their own products, tools, and web infrastructure. They suspect that the insecure Bondu console they discovered was itself “vibe-coded,” created with generative AI programming tools that often introduce security flaws. Bondu did not respond to WIRED’s question about whether the console was programmed with AI tools.

Warnings about the dangers of AI toys for children have grown in recent months, but they have largely focused on the threat that the toys’ conversations will raise inappropriate topics or even steer kids toward dangerous behavior or self-harm. NBC News, for example, reported last month that AI toys its reporters spoke with offered detailed explanations of sexual terms and tips on how to sharpen knives, and even seemed to echo Chinese government propaganda, stating for example that Taiwan is part of China.

By contrast, Bondu appears to have at least tried to build safeguards into the AI-powered chatbot it gives kids access to. The company offers a $500 reward to anyone who reports an “inappropriate response” from the toy. “We’ve had this program for over a year and no one has been able to get it to say anything inappropriate,” one line on the company’s website reads.

At the same time, however, Thacker and Margolis found that Bondu was leaving all of its users’ sensitive data completely exposed. “This is the perfect example of safety versus security,” Thacker says. “What does ‘AI safety’ matter when all the data is exposed?”

Thacker says that before he looked into Bondu’s security, he had considered giving AI-enabled toys to his own children, just as his neighbor had. Seeing Bondu’s data exposure firsthand changed his mind.

“Do I really want this in my house? No, I don’t,” he says. “It’s just a privacy nightmare.”
