A coalition of nonprofits is urging the US government to immediately suspend the deployment of Grok, the chatbot developed by Elon Musk’s xAI, in federal agencies including the Department of Defense.
The open letter, shared exclusively with TechCrunch, tracks a slew of troubling behaviors from the large language model over the past year, most recently a trend of X users asking Grok to convert images of real women, and in some cases children, into sexual images without their consent. According to some reports, Grok generated thousands of non-consensual explicit images per hour, which were then widely shared on X, Musk’s social media platform, now owned by xAI.
“It is deeply concerning that the federal government continues to deploy an AI product with system-level failures that result in the generation of non-consensual sexual images and child sexual abuse material,” reads the letter, which was signed by advocacy groups such as Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America. “In view of the executive orders and directives issued by the administration, and the recently enacted TAKE IT DOWN Act supported by the White House, it is disturbing that the Office of Management and Budget has not yet directed federal agencies to discontinue the operation of Grok.”
xAI reached an agreement last September with the General Services Administration (GSA), the government’s procurement arm, to sell Grok to federal agencies under executive authority. Two months ago, xAI, along with Anthropic, Google, and OpenAI, was awarded a contract worth up to $200 million with the Department of Defense.
Amid the X scandals in mid-January, Defense Secretary Pete Hegseth said Grok would join Google’s Gemini in operating within the Pentagon network and handling classified and unclassified documents, which experts say poses a risk to national security.
The letter’s authors argue that Grok has proven inconsistent with the administration’s requirements for AI systems. According to OMB guidance, systems that pose severe and foreseeable risks that cannot be adequately mitigated should be discontinued.
“Our primary concern is that Grok has consistently demonstrated that it is an unsafe large language model,” JB Branch, Big Tech accountability advocate at Public Citizen and one of the letter’s authors, told TechCrunch. “But there is also a deep history of Grok having a variety of breakdowns, including antisemitic rants and sexualized images of women and children.”
Several governments have expressed an unwillingness to engage with Grok following its behavior in January, which built on a series of earlier incidents, including generating antisemitic posts on X and calling itself “MechaHitler.” Indonesia, Malaysia, and the Philippines all blocked access to Grok (the bans have since been lifted), and the European Union, the United Kingdom, South Korea, and India are actively investigating xAI and X over data privacy and the distribution of illegal content.
The letter also comes a week after Common Sense Media, a nonprofit that reviews media and technology for families, published a risk assessment that found Grok to be among the most unsafe products for children and teens. One could argue, based on the report’s findings, that Grok isn’t exactly safe for adults either, given its tendency to give unsafe advice, share information about drugs, generate violent and sexual images, spread conspiracy theories, and produce biased output.
“If you know that a large language model has been declared unsafe by AI safety experts, why would you want it handling our most sensitive data?” Branch said. “From a national security standpoint, this makes absolutely no sense.”
Andrew Christianson, a former NSA contractor and the founder of Gopi AI, a no-code AI agent platform for classified environments, says using closed-source LLMs in general is problematic, especially for the Pentagon.
“Closed weights mean you can’t see inside the model, and you can’t scrutinize how decisions are made,” he said. “Closed code means you can’t inspect the software or control where it runs. Grok is closed on both counts, which is the worst possible combination for national security.”
“These AI agents are not just chatbots,” Christianson added. “They can take actions, access systems, and transmit information. You have to be able to see exactly what they’re doing and how they’re making their decisions. Open source gives you that. Proprietary cloud AI doesn’t.”
The risks of using flawed or unsafe AI systems extend beyond national security use cases. Branch noted that an LLM shown to produce biased and discriminatory output can also lead to disproportionately negative outcomes for individuals, especially if it is used in departments handling housing, employment, or justice.
While the Office of Management and Budget has not yet published its consolidated federal inventory of AI use cases for 2025, TechCrunch reviewed several agencies’ use cases, most of which either do not use Grok or do not disclose their use of it. Aside from the Department of Defense, the Department of Health and Human Services also appears to be actively using Grok, primarily to schedule and manage social media posts and to create first drafts of documents, briefings, and other communication materials.
Branch pointed to what he sees as a philosophical affinity between Grok and the administration as a reason it overlooks the chatbot’s shortcomings.
Grok has branded itself as the “anti-woke” large language model, which tracks with the philosophy of this administration, Branch said. “If you have a department that has had several problems with people who were accused of being neo-Nazis or white supremacists, then a large language model associated with that kind of behavior is one I imagine they might have a tendency to use.”
This is the coalition’s third letter, after it raised similar concerns in August and October of last year. In August, xAI launched “spicy mode” in Grok Imagine, which generated a large number of non-consensual sexual deepfake videos. TechCrunch also reported in August that private conversations with Grok had been indexed by Google Search.
Ahead of the October letter, Grok was accused of providing misleading information about elections, including false deadlines for ballot changes, and of producing political deepfakes. xAI also launched Grokipedia, which researchers found legitimized scientific racism, HIV/AIDS denialism, and vaccine conspiracy theories.
Aside from immediately suspending Grok’s federal deployment, the letter demands that the Office of Management and Budget formally investigate Grok’s safety failures and whether appropriate oversight of the chatbot was conducted. It also asks the agency to publicly state whether Grok has been evaluated for compliance with Trump’s executive order requiring LLMs to be truth-seeking and impartial, and whether it meets OMB’s risk mitigation standards.
“The administration needs to pause and reevaluate whether or not Grok meets these thresholds,” Branch said.
TechCrunch has reached out to xAI and OMB for comment.