
In summary
Attorney General Rob Bonta said his office is investigating whether a new artificial intelligence image-editing tool from Elon Musk’s company violates California law.
California Attorney General Rob Bonta today announced an investigation into whether Elon Musk’s X and xAI violated the law over the past few weeks by allowing the distribution of nonconsensual nude or sexual images.
xAI reportedly updated its Grok AI tool last month to allow image editing. Users of the social media platform X, which is affiliated with the tool, have begun using Grok to remove clothing from photos of women and children.
“The avalanche of reports detailing the non-consensual sexually explicit material that xAI has created and posted online in recent weeks is shocking,” Bonta said in a written statement. “This material, which depicts women and children in nude and overtly sexual situations, has been used to harass people online. I call on xAI to take immediate action to ensure this does not continue.”
Bonta urged Californians who want to report images depicting themselves or their children nude or performing sexual acts to visit oag.ca.gov/report.
Research obtained by Bloomberg found that X now produces more nonconsensual nude or sexual images than any other website. In a post on X, Musk promised “consequences” for people who made illegal content with the tool. On Friday, Grok restricted image editing to paying subscribers.
One potential avenue for Bonta to pursue xAI is a law that took effect just two weeks ago, creating legal liability for the creation and distribution of “deepfake” pornography.
X and xAI appear to violate provisions of that law, known as AB 621, said Sam Dordulian, a former prosecutor in the Los Angeles County District Attorney’s Office’s sex crimes division who now represents clients in deepfake and revenge porn cases in private practice.
Assemblywoman Rebecca Bauer-Kahan, who authored the law, told CalMatters in a statement last week that she has reached out to prosecutors, including the San Francisco district attorney’s office and city attorney, to remind them they can act under the law. What’s happening at X, Bauer-Kahan said, is exactly what AB 621 is designed to address.
“Images of real women are being manipulated without consent, and the psychological and reputational damage is devastating,” the San Ramon Democrat said in an emailed statement. “Images of underage children are being used to create child sexual abuse material, and these websites are knowingly facilitating it.”
Bonta’s inquiry also comes shortly after a call for an investigation by Governor Gavin Newsom, backlash from regulators in the European Union and India, and bans on X in Malaysia, Indonesia and potentially the UK. While downloads of the Grok app are growing in the Apple and Google app stores, lawmakers and advocates have called for the app to be banned.
Why xAI built the feature the way it did and how it will respond to the surrounding controversy is unclear, and answers may not be forthcoming: a recently completed analysis found that xAI is the least transparent of the major AI systems available today. xAI did not respond to questions about the investigation from CalMatters.
“The psychological and reputational damage is devastating.”
Rebecca Bauer-Kahan, Democratic Assemblywoman, San Ramon
Evidence of specific harm from deepfakes is accumulating. In 2024, the FBI warned that the use of deepfake tools to extort young people is a growing problem that has led to cases of self-harm and suicide. Multiple audits have found child sexual abuse material in the data used to train AI models, making them capable of generating explicit images. A 2024 survey by the Center for Democracy and Technology found that 15 percent of high school students had heard of or seen sexually explicit images of someone they knew at school in the past year.
The investigation announced today is the latest action by the attorney general to push AI companies to keep children safe. Late last year, Bonta backed a bill that would have barred chatbots that discuss self-harm or engage in sexually explicit conversations from interacting with people under 18. He also joined attorneys general from 44 other states in sending a letter questioning why companies like Meta and OpenAI allow their chatbots to have sexually inappropriate conversations with minors.
California has passed roughly half a dozen laws since 2019 to protect people from deepfakes. The latest, Assembly Bill 621, amends and strengthens a 2019 law, notably allowing district attorneys to bring cases against companies that “recklessly aid and abet” the distribution of deepfakes depicting a person nude or performing sexual acts without their consent. This means an average person can ask the state attorney general or the district attorney in the county where they live to file a lawsuit on their behalf. It also increases the maximum amount a judge can award a person from $150,000 to $250,000. Under the law, a prosecutor does not have to prove that a person depicted in an AI-generated nude or sexual image suffered actual harm in order to bring a lawsuit. Websites that refuse to comply within 30 days could face fines of up to $25,000 per violation.
In addition to these laws, two bills (AB 1831 and SB 1381), signed in 2024, expand the state’s definition of child pornography to make possession or distribution of artificially generated child sexual abuse material illegal. Another law requires social media platforms to give people an easy way to request immediate removal of deepfakes and defines the posting of such material as a form of digital identity theft. A California law limiting the use of deepfakes in elections was signed last year, but it was struck down by a federal judge last summer following a lawsuit by X and Elon Musk.
Each new state law gives lawyers like Dordulian a new avenue to address the harmful uses of deepfakes, but he said people still need new laws to protect themselves. His clients face challenges in proving violations of existing laws because those laws require the explicit material to be shared, such as through a messaging app or social media platform, before their protections apply. In his experience, the people who use nudifying apps and their targets usually know each other, so distribution doesn’t always happen, and when it does, it can be difficult to prove.
For example, he said, he has a client who works as a nanny and alleges that the father of the children she cares for made deepfake images of her using photos she posted on Instagram. The nanny found the images on his iPad. The discovery disturbed her and caused emotional trauma, but because he can’t use the deepfake laws, he has to sue based on negligence, emotional distress and laws that were never designed to deal with deepfakes. Similarly, victims told CNBC last year that the distinction between creating and distributing deepfakes has left a gap in the laws of a number of US states.
“The law needs to keep up with what’s really happening on the ground and what women are going through, which is that the simple act of creation is the problem,” Dordulian said.
California has been at the forefront of passing laws to protect people from deepfakes, but existing law falls short, said Jennifer Gibson, co-founder and director of Psst, a group formed just over a year ago that provides pro bono legal services to technology and artificial intelligence workers interested in whistleblowing. A California law that went into effect on January 1 protects whistleblowers at AI companies, but only if they work on a catastrophic risk that could kill more than 50 people or cause more than $1 billion in damage. If the law covered people who work on deepfakes, Gibson said, the ex-employees of X who last year described to Business Insider in detail how Grok generated illegal sexually explicit material would be protected if they shared that information with authorities.
“There needs to be a lot more protection for exactly this kind of scenario, where an insider sees that this is foreseeable, knows that this is going to happen, and has somewhere to go to report it, both to hold the company accountable and to protect the public,” Gibson said.