Remember when kids played with wooden blocks and plastic dinosaurs that couldn’t talk back? Those toys just sat there, which now feels like a feature rather than a limitation. We’ve come a long way since then, strapping AI into stuffed animals that chat with children about their deepest thoughts. Whether that’s actually progress is suddenly a much harder question to answer.
Security researchers Joseph Thacker and Joel Margolis just discovered that security on Bondu's AI toys was essentially nonexistent. The company makes AI-powered stuffed animals that hold conversations with kids using Google's Gemini and OpenAI's GPT models. Bondu left its entire web console completely unprotected, and the researchers accessed it with nothing more than a Gmail account.
What they found inside was worse than anyone expected. Over 50,000 chat transcripts between children and their AI toys sat there for anyone to read. Kids’ full names, birthdates, family details, favorite snacks, pet names, dance moves. Every intimate thought a child shared with what they believed was a trusted friend.
Why this goes beyond a simple security mistake
Thacker told WIRED the experience felt “pretty intrusive and really weird” to know these things about strangers’ children. Margolis was more blunt, calling it “a kidnapper’s dream” because the exposed data included everything someone would need to manipulate or lure a child into a dangerous situation.
Bondu CEO Fateen Anam Rafid says the company took the console offline within minutes of being alerted and fixed the flaw the next day. The company claims it found no evidence anyone accessed the data besides the researchers who reported it. It has since hired a security firm to monitor its systems going forward.
But the damage isn’t just about this one breach. These AI toys build detailed psychological profiles of children by keeping complete chat histories. That’s how they personalize future conversations. It also creates exactly the kind of data treasure trove that shouldn’t exist in the first place, let alone sit behind security so weak that any Gmail user could access it.
The researchers suspect Bondu’s web console might have been built using AI coding tools, which often generate functional-looking code riddled with security holes. Bondu didn’t respond to questions about whether AI built the infrastructure meant to protect children’s data.
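For readers wondering what that kind of hole looks like in practice, the researchers' account matches a textbook broken-access-control pattern: code that checks whether a user is signed in, but never checks whether that user is actually allowed in. Here is a minimal, purely illustrative sketch of the mistake and its fix. The names and allowlist are hypothetical, not Bondu's actual code:

```python
# Illustrative only: authentication ("who are you?") is not
# authorization ("are you allowed here?").

AUTHORIZED_STAFF = {"admin@company.example"}  # hypothetical allowlist

def can_view_transcripts_broken(user_email: str, signed_in: bool) -> bool:
    # BROKEN: any signed-in account passes, including a random
    # Gmail user who just completed the sign-in flow.
    return signed_in

def can_view_transcripts_fixed(user_email: str, signed_in: bool) -> bool:
    # FIXED: being signed in is necessary but not sufficient; the
    # account must also appear on an explicit allowlist.
    return signed_in and user_email in AUTHORIZED_STAFF
```

In the broken version, `can_view_transcripts_broken("anyone@gmail.com", True)` returns `True`, which is exactly the failure mode the researchers described: a valid Google login was the only gate.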
Parents who thought parental controls and monitoring tools were enough to keep their kids safe online now have to worry about the toys themselves. Basic device security doesn’t help when the toy company leaves the back door wide open.