People are using Sora 2 to create disturbing AI videos of kids


On October 7, a TikTok account called @fujitiva48 posed a provocative question alongside its latest video: “What are your thoughts on this new toy for toddlers?” it asked the more than 2,000 viewers who stumbled upon what appeared to be a parody of a TV ad. The response was clear. “Hey, this isn’t funny,” one person wrote. “Whoever did this should be investigated.”

It’s easy to see why the video sparked such a strong reaction. The fake ad begins with a real-life little girl holding a toy: sparkling pink, with a bumblebee adorning the handle. It’s a pen, we’re told, as the girl and two others scribble on paper while a grown man’s voiceover narrates. But the pink body, the ability to buzz, and the name, Vibro Rose, all clearly evoke a sex toy. The “Add yours” button, TikTok’s feature that encourages people to share a video on their own feeds, paired with the phrase “I’m using my rose toy,” removes even the tiniest shred of doubt. (WIRED reached out to @fujitiva48 for comment but did not receive a response.)

The obnoxious clip was created using Sora 2, OpenAI’s latest video generator, which was initially released invite-only in the United States on September 30. Within a week, Sora-made videos like the Vibro Rose clip had gone viral and landed on TikTok’s For You page. Some of the other fake ads were even more obvious: WIRED found several accounts posting similar Sora 2-made videos featuring rose- or mushroom-shaped water toys and cake-decorating tools that spray “sticky milk” or “white foam” over realistic images of children.

In many countries, the above would be cause for an investigation if these were real children rather than digital amalgamations. But laws surrounding AI-generated fetish content involving minors remain unclear. New 2025 data from the Internet Watch Foundation (IWF) in the UK shows that reports of AI-generated child sexual abuse material, or CSAM, have more than doubled in a year, from 199 in January through October 2024 to 426 in the same period of 2025. Fifty-six percent of this content falls into Category A, the most serious category in the UK, which covers penetrative sexual activity, sexual activity with an animal, or sadism. Ninety-four percent of the illegal AI images tracked by the IWF were of girls. (Sora does not appear to be creating any Category A content.)

“All too often, we see real images of children being commoditized to create nude or sexualized images, and we overwhelmingly see AI being used to create images of girls. It’s another way in which girls are being targeted online,” Kerry Smith, CEO of the Internet Watch Foundation, told WIRED.

This influx of harmful AI-generated content has prompted the United Kingdom to introduce a new amendment to its Crime and Policing Bill that will allow “certified testers” to verify that AI tools are unable to generate child sexual abuse material. As the BBC reported, the amendment will ensure that models have safeguards around certain kinds of imagery, including extreme pornography and nonconsensual intimate images. In the United States, 45 states have enacted laws criminalizing AI-generated CSAM, most of them within the past couple of years, as AI generators continue to evolve.
