Grok, the chatbot developed by Elon Musk's artificial intelligence company xAI, welcomed the new year with an unenviable task: apologizing.
"Dear Community," began the December 31 post from Grok's account on Musk's social platform X, apologizing for generating sexualized images of two young girls and promising that the incident "will be reviewed to prevent future issues. Sincerely, Grok."
The two girls were not an isolated case. Kate Middleton, Princess of Wales, has been the target of similar AI photo-editing requests, as has a minor actress from the final season of Stranger Things. A disturbing number of images of women and children have been subjected to "nude" edits.
Despite the promise of intervention in Grok's response, the problem has not gone away. Quite the opposite: in the two weeks since that post, the number of non-consensual sexualized images has risen, as have calls for Musk's companies to rein in the behavior and for governments to take action.
According to data from independent researcher Genevieve Oh, cited by Bloomberg this week, during a 24-hour period in early January, Grok's account generated about 6,700 sexually suggestive or "nudify" images every hour. That compares with an average of just 79 images from the top five deepfake sites combined.
(We should note that Grok's December 31 post was in response to a user prompt that asked for a tone of contrition from the chatbot: "Write a sincere apology note explaining what happened to anyone who lacks context." Chatbots draw on a base of training material, but individual posts can vary.)
xAI did not respond to requests for comment.
Late Thursday, a post from the Grok AI account announced a change in access to the image creation and editing feature: instead of being open to everyone for free, it will be limited to paying subscribers.
Critics say this is not a credible response.
"I don't see this as a victory, because what we really needed was …," one critic told The Washington Post.
What is infuriating is not only the scale and ease of producing these images, but also that the alterations are made without the consent of the people in them.
These modified images are the latest development in one of the most troubling aspects of generative artificial intelligence: realistic but fake videos and photos. Tools such as OpenAI's Sora, Google's Nano Banana and xAI's Grok put powerful creative capabilities at everyone's fingertips, and all that's needed to produce explicit, non-consensual images is a simple text prompt.
Grok users can upload a photo, which doesn't have to be their own, and ask Grok to edit it. Many of the edited images came from users asking Grok to put someone in a bikini, sometimes revising the request to be more explicit, such as asking that the bikini be smaller or more transparent.
Governments and advocacy groups have spoken out about Grok's photo edits. The UK internet regulator, Ofcom, said this week that it had made an "urgent request" to xAI. The European Commission said it was looking into the matter, as did authorities in France, Malaysia and India.
"We cannot and will not allow these offensive images to spread," British Technology Minister Liz Kendall said earlier this week.
On Friday, U.S. Senators Ron Wyden, Ben Ray Luján and Edward Markey published an open letter to the CEOs of Apple and Google, asking them to remove both X and Grok from their app stores in response to X's "outrageous behavior" and Grok's "disgusting content creation."
In the United States, the Take It Down Act, signed into law last year, seeks to hold online platforms accountable for manipulated sexual images, but it gives those platforms until May of this year to put a process in place for removing such images.
"Even though these photos are fake, the damage is incredibly real," says Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms. She notes that those whose images are altered in sexual ways can face "psychological, physical and social harm, often with little recourse to the law."
Grok debuted in 2023 as Musk's less-restricted alternative to ChatGPT, Gemini and other chatbots. That approach has led to disturbing incidents; in July, for example, the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread hate online.
In December, xAI introduced an image-editing feature that lets users request specific adjustments to an image. This is what sparked the recent wave of sexualized images of adults and minors alike. In one request seen by CNET, a user responding to a photo of a young woman asked Grok to put her in a string bikini.
Grok also has a video generator that includes a "spicy mode," a subscription option for adults 18 and over that will show users content that is not safe for work. Users must include the phrase "create a spicy video of (description)" to activate the mode.
One of the main concerns about Grok's tools is whether they enable the creation of child sexual abuse material, or CSAM.
In response to a post by Woow Social suggesting that Grok simply "stop allowing user-uploaded photos to be changed," the Grok account replied that xAI was "evaluating features like photo altering to limit non-consensual harm," but did not say that a change would be made.
According to NBC News, some of the sexualized images created since December have been removed, and some accounts that requested them have been suspended.
Author and conservative influencer Ashley St. Clair, the mother of one of Musk's 14 children, told NBC News this week that Grok created several sexualized images of her, including some that used photos from when she was a minor. St. Clair told NBC News that Grok agreed to stop when she asked, but it didn't.
“xAI intentionally and recklessly puts people on its platform at risk, and hopes to avoid accountability simply because it is AI,” Ben Winters, director of artificial intelligence and data privacy at the nonprofit Consumer Federation of America, said in a statement this week. “AI is no different than any other product – the company chose to violate the law and must be held accountable.”
It is all too easy for bad actors to find source material for explicit, non-consensual edits among the photos people post of themselves or their children. But protecting yourself from such edits isn't as simple as never posting photos, says Brigham, the sociotechnical harms researcher.
“The unfortunate truth is that even if you don’t post your photos online, other public photos of you could theoretically be used for abuse,” she says.
Although not posting photos online is one preventative step people can take, doing so “risks reinforcing a culture of victim-blaming,” Brigham says. “Instead, we should focus on protecting people from abuse by building better platforms and holding X accountable.”
Surojit Ghosh, a sixth-year Ph.D. candidate at the University of Washington, researches how generative AI tools can cause harm and how to guide future AI professionals in designing and advocating for safer AI systems.
Ghosh says it is possible to build safeguards into AI. In 2023, he was one of the researchers investigating AI's capacity for sexualized imagery. He points out that the AI image-generation tool Stable Diffusion had a built-in not-safe-for-work filter: a prompt that violated the rules could cause a black box to appear over a questionable part of the image, although it didn't always work perfectly.
“The point I’m trying to make is that there are safeguards in place in other models,” says Ghosh.
He also points out that if users of ChatGPT or Gemini use certain words, the chatbots will tell the user that they are prohibited from responding to those prompts.
“All of this means there is a way to close this down very quickly,” Ghosh says.