Anthropic launches new think tank amid Pentagon blacklist battle


Amid a weeks-long conflict with the Pentagon, one that has led to a blacklisting and a lawsuit, Anthropic is shaking up its executive suite and research initiatives. The company announced Wednesday that it will launch a new internal think tank called the Anthropic Institute, which brings together three of Anthropic’s existing research teams. The institute will focus on research into the wide-ranging impacts of AI, such as “what happens to jobs and economies, whether AI makes us safer or introduces new risks, how its values can shape ours, and whether we can retain control,” according to the company.

The news comes with changes in the C-suite as well. Anthropic co-founder Jack Clark is moving into a new role leading the research center. His new title will be chief public interest officer, after more than five years as chief public policy officer. The public policy team — which tripled in size in 2025, according to Anthropic — will now be led by Sarah Hick, who was previously head of external affairs. Anthropic will also open its planned office in Washington, D.C., and the public policy team will continue to focus on issues such as national security, AI infrastructure, energy, and “democratic leadership in AI.”

Clark told The Verge that the Anthropic Institute’s debut had been in the works for a while, and that he had been contemplating a move into a role like this since November. But the timing comes just days after Anthropic filed a lawsuit against the US government over its designation as a supply chain risk, which would prevent its customers from using Anthropic’s technology at all in their work with the Department of Defense. The lawsuit alleges that the Trump administration illegally blacklisted the company over its “red lines” against mass domestic surveillance and fully autonomous lethal weapons.

When asked about this, Clark said: “Working in AI here at Anthropic is never boring – there’s always something going on… The pace of AI progress isn’t slowed by external events, and neither are we.” Clark said the situation did not “directly change” the planned research agenda, but he felt it “confirmed” Anthropic’s decision to release more information to the public. “What we’ve seen in the last few weeks kind of shows you how much desire there is among the public for a larger national conversation about this technology,” he said.

The Anthropic Institute is launching with about 30 people, including founding members Matt Botvinick, formerly of Google DeepMind; Anton Korinek, a professor emeritus of economics at the University of Virginia; and Zoe Hitzig, a researcher who left OpenAI following its decision to serve ads within ChatGPT. The new research center combines Anthropic’s Societal Impacts team, which studies the effects of AI on different areas of society; its Frontier Red Team, which tests AI systems for vulnerabilities and issues; and its economic research team, which tracks the effects of AI on the economy and labor market. The Anthropic Institute also plans to “incubate” new teams, such as one led by Botvinick studying how AI will impact the legal system. Hitzig and Korinek will lead major economic research projects. Clark said he expects the research center’s staff to double each year for the foreseeable future.

This year in particular, there is increasing pressure on highly valued AI companies like Anthropic, which reportedly plans to go public this year. Anthropic’s court filings revealed that the company has generated more than $5 billion in commercial revenue to date, and that it has spent $10 billion so far on model training and inference. The filings also said that the company “has received communications from numerous external partners… expressing confusion about what is being asked of them and concern about their ability to continue working with Anthropic” and that “dozens of companies have contacted Anthropic” for guidance and “in some cases, understanding of their termination rights.” Depending on the interpretation of what, exactly, the government will ban, Anthropic said, at least “hundreds of millions” of dollars in 2026 revenue are at risk — and in the most extreme case, several billion dollars.

Is Anthropic concerned about allocating more resources to long-term research when it may well lose part of its income in the short term? When The Verge put the question to Clark, he said he had “no concerns.”

“People tend to buy trust,” Clark said. “A lot of what we can produce is the kinds of research that helps companies trust us… Over the long term, Anthropic has always viewed its investment in safety – and studying and reporting on the safety of its systems – not as a cost centre, but as a profit centre.”

Clark also said he believes powerful AI (Anthropic’s term for AGI, or artificial general intelligence) will arrive by the end of this year or early 2027, and that he decided to change roles largely due to “the pace of advancement of artificial intelligence.” He added that when he looked back at his work over the past year, it focused more on policy issues, such as SB 53, and on AI research and development than on the other questions he wanted to pay attention to. The Anthropic Institute is specifically dedicated to answering “the toughest questions posed by powerful AI,” Anthropic said in a statement.

Of course, as The Verge wrote in December, many tech companies are pro-transparency until it becomes bad for business. So what happens if and when research teams at the Anthropic Institute uncover results that make the company look bad?

Anthropic’s founders have “similar values” about the importance of public disclosure, especially since the company is technically a public benefit corporation, meaning it can pursue goals “not just for profit,” Clark said. He added that in a conversation he had with CEO Dario Amodei last week, they agreed on the importance of transparency despite the public relations challenges that may come with it.

But the Anthropic Institute’s research may require significant compute at a time when companies are racing to prioritize commercial products. Clark said that outside of the resources allocated to pretraining frontier models, Anthropic allocates its compute on a week-by-week basis according to “what seems most important,” so no specific portion is set aside for the institute — but he doesn’t expect there to be any conflicts.

The Anthropic Institute also plans to study people’s emotional dependence on AI, an increasingly acute problem that has gained public awareness over the past year. So far, Anthropic’s research teams have studied the types of conversations that occur with Claude and measured the technology’s ability to persuade people of things or behave sycophantically, but they haven’t spent much time talking with people who use the technology about their individual experiences, Clark said. He said the research center plans to conduct large-scale social science research, including using Anthropic’s AI to conduct interviews with users.

“I think it was this: Social media had a huge impact on society, and it wasn’t just based on what was happening on social media platforms,” Clark said. “It was, ‘How was the use of social media changing people?’ We want to understand how the use of AI is changing people.”
