OpenAI and Anthropic researchers decry “reckless” safety culture at Elon Musk’s xAI


AI safety researchers from OpenAI, Anthropic, and other organizations are speaking out publicly against the “reckless” and “completely irresponsible” safety culture at xAI, the billion-dollar AI startup owned by Elon Musk.

The criticism follows weeks of scandals at xAI that have overshadowed the company’s technological advances.

Last week, the company’s AI chatbot, Grok, spouted antisemitic comments and repeatedly described itself as “MechaHitler.” Shortly after xAI took its chatbot offline to address the problem, it launched an increasingly capable frontier AI model, Grok 4, which TechCrunch and others found would consult Elon Musk’s personal politics to help answer hot-button issues. In the latest development, xAI launched AI companions that take the form of an anime girl and an overly aggressive panda.

Friendly jabbing among employees of competing AI labs is fairly normal, but these researchers appear to be calling for increased attention to xAI’s safety practices, which they claim are at odds with industry norms.

“I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition,” said Boaz Barak, a computer science professor on leave from Harvard to work on safety research at OpenAI, in a Tuesday post on X. “I appreciate the scientists and engineers at xAI, but the way safety was handled is completely irresponsible.”

Barak takes particular issue with xAI’s decision not to publish system cards, the industry-standard reports that detail training methods and safety evaluations in a good-faith effort to share information with the research community. As a result, Barak says, it is unclear what safety training was done on Grok 4.

OpenAI and Google have spotty track records themselves when it comes to promptly sharing system cards when unveiling new AI models. OpenAI decided not to publish a system card for GPT-4.1, claiming it was not a frontier model, and Google waited months after unveiling Gemini 2.5 Pro to publish a safety report. Historically, however, these companies publish safety reports for all frontier AI models before they enter full production.


Barak also notes that Grok’s AI companions “take the worst issues we currently have for emotional dependencies and try to amplify them.” In recent years, we have seen countless stories of unstable people developing concerning relationships with chatbots, and how AI’s overly agreeable answers can tip them over the edge of sanity.

Samuel Marks, an AI safety researcher at Anthropic, also took issue with xAI’s decision not to publish a safety report, calling the move “reckless.”

“Anthropic, OpenAI, and Google’s release practices have issues,” Marks wrote in a post on X. “But they at least do something, anything to assess safety pre-deployment and document findings. xAI does not.”

The truth is that we don’t really know what xAI did to test Grok 4. In a widely shared post on the online forum LessWrong, one anonymous researcher claims that Grok 4 has no meaningful safety guardrails, based on their testing.

Whether that is true or not, the world seems to be finding out about Grok’s shortcomings in real time. Several of xAI’s safety problems have since gone viral, and the company claims to have addressed them with tweaks to Grok’s system prompt.

OpenAI, Anthropic, and xAI did not respond to TechCrunch’s request for comment.

Dan Hendrycks, a safety adviser for xAI and director of the Center for AI Safety, posted on X that the company did “dangerous capability evaluations” on Grok 4, indicating that it conducted at least some pre-deployment safety testing. However, the results of those evaluations have not been shared publicly.

“It concerns me when standard safety practices aren’t upheld across the AI industry, like publishing the results of dangerous capability evaluations,” said Steven Adler, an independent AI researcher who previously led dangerous capability evaluations at OpenAI, in a statement to TechCrunch. “Governments and the public deserve to know how AI companies are handling the risks of the very powerful systems they say they’re building.”

What makes xAI’s questionable safety practices notable is that Musk has long been one of the AI safety industry’s most prominent advocates. The billionaire owner of xAI, Tesla, and SpaceX has repeatedly warned about the potential for advanced AI systems to cause catastrophic outcomes for humans, and he has praised an open approach to developing AI models.

And yet, AI researchers at competing labs claim that xAI is veering from industry norms around safely releasing AI models. In doing so, Musk’s startup may be inadvertently making a strong case for state and federal lawmakers to set rules around publishing AI safety reports.

There are several attempts at the state level to do just that. California state Sen. Scott Wiener is pushing a bill that would require leading AI labs, likely including xAI, to publish safety reports, while New York Gov. Kathy Hochul is currently considering a similar bill. Advocates of these bills note that most AI labs publish this type of information anyway, but evidently, not all of them do so consistently.

AI models today have yet to exhibit real-world scenarios in which they create truly catastrophic harms, such as deaths or billions of dollars in damages. However, many AI researchers say this could become a problem in the near future, given the rapid progress of AI models and the billions of dollars Silicon Valley is investing to further improve AI.

But even for skeptics of such catastrophic scenarios, there is a strong case that Grok’s misbehavior makes the products it powers today significantly worse.

Grok spread antisemitism around the X platform this week, just a few weeks after the chatbot repeatedly brought up “white genocide” in conversations with users. Musk has indicated that Grok will soon be more ingrained in Tesla vehicles, and xAI is trying to sell its AI models to the Pentagon and other enterprises. It is hard to imagine that people driving Musk’s cars, federal workers protecting the United States, or enterprise employees automating tasks will be any more receptive to these misbehaviors than users on X.

Several researchers argue that AI safety and alignment testing not only ensures that the worst outcomes don’t happen, but also protects against near-term behavioral issues.

At the very least, Grok’s incidents tend to overshadow xAI’s rapid progress in developing frontier AI models that rival the best technology from OpenAI and Google, just a couple of years after the startup was founded.


