Anthropic’s CEO fires back after Trump officials accuse the company of fear-mongering about artificial intelligence


Anthropic CEO Dario Amodei published a statement Tuesday to “set the record straight” about the company’s stance toward the Trump administration’s AI policy, in response to what the company called a “recent rise in inaccurate claims about Anthropic’s policy positions.”

“Anthropic is built on a simple principle: artificial intelligence should be a force for human progress, not a danger,” Amodei wrote. “This means making products that are truly useful, talking openly about the risks and benefits, and working with anyone who is serious about getting this right.”

Amodei’s response comes after a dogpile on Anthropic last week from AI leaders and senior members of the Trump administration, including AI czar David Sacks and White House senior AI policy advisor Sriram Krishnan, all of whom accused the AI giant of stoking fears to harm the industry.

The first hit came from Sacks after Anthropic co-founder Jack Clark shared his hopes and “appropriate fears” about AI, describing AI as a powerful, mysterious, and “somewhat unpredictable” creature rather than a reliable machine that can be easily mastered and operated.

Sacks responded: “Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.”

California State Senator Scott Wiener, author of the SB 53 artificial intelligence safety act, defended Anthropic, criticizing “President Trump’s efforts to block states from acting on artificial intelligence without strengthening federal protections.” Sacks then doubled down, claiming that Anthropic was working with Wiener to “impose the left’s vision of AI regulation.”

More commentary followed, with anti-regulation advocates like Groq COO Sunny Madra saying that Anthropic was “causing chaos for the entire industry” by calling for even minimal AI safety measures rather than unfettered innovation.


In his statement, Amodei said that managing the societal impacts of AI should be a matter of “policy, not politics,” and that he believes everyone wants to ensure America secures its leadership in AI development while also building technology that benefits the American people. He defended Anthropic’s alignment with the Trump administration on key AI policy areas and cited examples of times he has personally praised the president.

As an example, Amodei pointed to Anthropic’s work with the federal government, including the company’s offering of Claude to federal agencies and a $200 million Anthropic agreement with the Department of Defense (which Amodei called the “War Department,” reflecting Trump’s preferred term, though changing the name would require congressional approval). He also noted that Anthropic has publicly praised Trump’s AI Action Plan and has been supportive of Trump’s efforts to expand energy supply in order to “win the AI race.”

Despite these offers of collaboration, Anthropic has received criticism from industry peers for departing from the Silicon Valley consensus on some policy issues.

The company initially angered Silicon Valley-connected officials when it opposed a proposed 10-year moratorium on regulating artificial intelligence at the state level, a provision that faced widespread bipartisan opposition.

Many in Silicon Valley, including OpenAI leaders, have claimed that government AI regulation would slow the industry and hand China the lead. Amodei countered that the real danger is that the United States continues to fill Chinese data centers with powerful AI chips from Nvidia, adding that Anthropic restricts sales of its AI services to Chinese-controlled companies despite the revenue it forgoes.

“There are products we won’t build and risks we won’t take, even if they make money,” Amodei said.

Anthropic also fell out of favor with some powerful players when it supported California’s SB 53, a streamlined safety bill that would require the largest AI developers to make their frontier safety protocols public. Amodei noted that the bill applies only to companies with annual gross revenues above $500 million, which would spare most startups any undue burden.

“Some have suggested that we are somehow interested in hurting the startup ecosystem,” Amodei wrote, referring to Sacks’ post. “Startups are among our most important customers. We work with tens of thousands of startups and partner with hundreds of accelerators and VCs. Claude is powering a whole new generation of AI-driven companies. Damaging this ecosystem makes no sense to us.”

Amodei said in his statement that Anthropic’s revenue run rate has grown from $1 billion to $7 billion over the past nine months while the company has managed to deploy “AI thoughtfully and responsibly.”

“Anthropic is committed to constructive engagement on public policy issues. When we agree, we say so. When we disagree, we propose an alternative for consideration,” Amodei wrote. “We will remain honest and direct, and we will advocate for policies that we believe are right. The risks posed by this technology are too great to do otherwise.”
