I've spent the past few days asking people at AI companies to convince me that the prospects for AI safety haven't dimmed. Just a few years ago there seemed to be near-universal agreement, among companies, legislators, and the general public alike, that serious regulation and oversight of AI was not only necessary but inevitable. There was speculation that international bodies would write rules to ensure AI was treated more seriously than other emerging technologies, which could at least pose obstacles to its riskiest applications. The companies pledged to prioritize safety over competition and profits. While pessimists continued to spin dystopian scenarios, a global consensus seemed to be forming to reduce the risks of AI while reaping its benefits.
The events of the past week have dealt a severe blow to those hopes, starting with a bitter feud between the Pentagon and Anthropic. All parties agree that the existing contract between the two specified, at Anthropic's insistence, that the Department of Defense (which now refers to itself as the War Department) would not use Anthropic's Claude AI models for autonomous weapons or mass surveillance of Americans. Now the Pentagon wants to erase those red lines, and Anthropic's refusal not only spelled the end of its contract but also prompted Defense Secretary Pete Hegseth to declare the company a supply chain risk, a classification that prevents government agencies from doing business with Anthropic. Without getting into the terms of the contract or the personal dynamics between Hegseth and Anthropic CEO Dario Amodei, the bottom line seems to be that the military is determined to resist any restrictions on how it uses AI, at least within the bounds of legality, by its own definition.
The bigger question is how we got to the point where launching killer robot drones and bombs that locate and eliminate human targets is something the US military might openly consider. Did I miss the international debate on the merits of creating swarms of autonomous killer drones to survey war zones, patrol borders, or track drug traffickers? Hegseth and his supporters complain about the absurdity of private companies limiting what the military can do. What strikes me as absurd is that it would take a single company risking existential penalties to put the brakes on a potentially uncontrollable technology. Meanwhile, the absence of international agreements means that every advanced military must pursue artificial intelligence in all its forms simply to keep pace with its adversaries. For now, an AI arms race seems inevitable.
The risks extend beyond the military. Overshadowed by the Pentagon drama was a troubling announcement Anthropic published on February 24. The company said it was making changes to its framework for mitigating catastrophic AI risks, known as its Responsible Scaling Policy. That policy was a founding commitment for Anthropic: the company promised to tie the release timeline of its AI models to its safety procedures, stating that models should not ship without guardrails preventing their worst uses. It served as an internal incentive to ensure that safety was not neglected in the rush to launch advanced technology. More importantly, Anthropic hoped that adopting the policy would inspire, or shame, other companies into doing the same, a process it called a "race to the top." The expectation was that embodying such principles would help shape industry-wide regulations putting limits on the chaos AI could cause.
At first, the approach seemed promising: both DeepMind and OpenAI adopted aspects of Anthropic's framework. But more recently, as investment dollars swelled, competition among AI labs intensified, and the prospect of federal regulation began to look more remote, Anthropic admitted that the Responsible Scaling Policy had fallen short. Its thresholds did not create the consensus about AI risks that the company had hoped for. As Anthropic noted in a blog post, "The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful momentum at the federal level."
At the same time, competition among AI companies is becoming fiercer. Instead of a race to the top, the contest feels more like a cutthroat game of king of the hill. When the Pentagon axed Anthropic, OpenAI rushed to fill the gap with its own Department of Defense contract. OpenAI CEO Sam Altman insisted that his hasty deal with the Pentagon was meant to take the pressure off Anthropic, but Amodei was having none of it. "Sam is trying to undermine our position while appearing to support it," Amodei said in an internal memo. "He is making it easier for officials to punish us by undermining our public support." (Amodei later apologized for his tone in the message.)