State attorneys general are warning Microsoft, OpenAI, Google, and other AI giants to fix their chatbots' "delusional" outputs.


After a series of disturbing incidents involving AI chatbots and users' mental health, a group of state attorneys general sent a letter to major companies in the AI industry warning them to fix "delusional outputs" or risk violating state law.

The letter, which dozens of U.S. state and territory attorneys general signed through the National Association of Attorneys General, calls on Microsoft, OpenAI, Google, and 10 other major AI companies to implement a variety of new internal safeguards to protect their users. Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI were also addressed in the letter.

The message comes as a battle over AI regulation brews between state governments and the federal government.

These safeguards include transparent third-party audits of large language models that look for signs of delusion-reinforcing or sycophantic behavior, as well as new incident reporting procedures designed to notify users when chatbots produce psychologically harmful output. The letter states that these third parties, which can include academic groups and civil society organizations, should be allowed to "evaluate systems pre-release without retaliation and publish their findings without prior approval from the company."

"GenAI has the potential to change the way the world works in a positive way. But it has also caused — and has the potential to cause — significant harm, especially to vulnerable populations," the letter reads, citing a number of well-publicized incidents over the past year, including suicides and killings, in which violence has been linked to heavy use of artificial intelligence. "In many of these incidents, GenAI products produced sycophantic and delusional outputs that either encouraged users' delusions or assured users that they were not delusional."

The AGs also suggest that companies handle mental health incidents the same way technology companies handle cybersecurity incidents — with clear, transparent policies and procedures for reporting incidents.

The letter states that companies should develop and publish "timelines for detecting and responding to sycophantic and delusional outputs." Much as data breaches are handled today, companies must also "promptly, clearly, and directly notify users" when they have been exposed to potentially harmful outputs, the letter says.


Another ask is for companies to conduct "reasonable and appropriate safety testing" of GenAI models "to ensure that the models do not produce sycophantic and delusional outputs that may be harmful." The letter adds that this testing must take place before the models are released to the public.

TechCrunch was unable to reach Google, Microsoft, or OpenAI for comment before publication. This article will be updated if the companies respond.

Technology companies developing artificial intelligence have received a warmer reception at the federal level.

The Trump administration has positioned itself as an unabashed supporter of AI. Over the past year, multiple attempts have been made to pass a national moratorium on state-level AI regulations. So far, those attempts have failed, thanks in part to pressure from state officials.

Undeterred, Trump announced on Monday that he plans to sign an executive order next week that would limit states' ability to regulate artificial intelligence. The president said in a post on Truth Social that he hopes the executive order will prevent AI from being "destroyed in its infancy."
