The biggest AI stories of the year (so far)


You can chart it annually by product launches, or you can measure it at major moments that change the way we look at AI. The AI ​​industry is constantly publishing news, such as major acquisitions, Indie developer successesa public protest against unclear, existentially dangerous products Contract negotiations – There’s a lot to unpack, so we’re taking a peek at where we’ve been and where we’ve been so far this year.

Anthropy vs. Pentagon

They were business partners, Anthropic CEO Dario Amodei and Defense Minister Pete Hegseth I reached a bitter dead end They renegotiated contracts that dictate how the US military can use Anthropic’s AI tools in February.

Anthropic has taken a hard line against using its AI for mass surveillance of Americans or to operate autonomous weapons that could attack without human supervision. On the other hand, the Pentagon has argued that the Department of Defense – which President Donald Trump’s administration calls the War Department – ​​should be allowed access to Anthropic models for any “lawful use.” Government representatives resented the idea that the military should be confined to the rules of a private company, however Amodei stood his ground.

“Anthropic recognizes that the Department of War, not private companies, makes military decisions,” Amodei wrote in a letter. “We have never raised objections to specific military operations nor attempted to limit the use of our technology in an ad hoc manner.” statement Address the situation. “However, in a narrow set of cases, we believe that AI can undermine democratic values, rather than defend them.”

The Pentagon gave Anthropic a deadline to approve their contract. Hundreds of employees at Google and OpenAI I signed an open letter He urged their leaders to respect Amodei’s borders and refuse to budge on the issues of autonomous weapons or internal surveillance.

The deadline passed without Anthropic agreeing to the Pentagon’s demands. Trump directed federal agencies to phase out their use of humanitarian tools over a period of time A transitional period of six months The period was called the artificial intelligence company, which is worth $380 billion.Radical left, wake up company” in an all-caps post on social media. The Pentagon then moved to declare Anthropic a “supply chain risk,” a designation usually reserved for foreign adversaries that bars any company that works with Anthropic from doing business with the U.S. military. (Anthropic has since announced File a lawsuit against To challenge the naming.)

Anthropic competitor OpenAI at the time swooped in It announced that it had reached an agreement that would allow its own models to be published in secret positions. It’s been a shock to the tech community ever since Reports have indicated That OpenAI will adhere to the Anthropic Red Lines governing the use of AI in the military.

TechCrunch event

San Francisco, California
|
October 13-15, 2026

Public sentiment is that people found OpenAI’s move suspicious – ChatGPT Uninstalls jumped 295% Day after day The day after OpenAI announced its deal, Anthropic’s Claude took the top spot in the App Store. OpenAI Device Manager Caitlin Kalinowski He resigned in response to the deal, saying it was “rushed through without identifying guardrails.”

OpenAI told TechCrunch that it believes its agreement “makes its clear red lines clear: no autonomous weapons and no autonomous surveillance.”

As this saga continues, it will have important implications for the future of how AI is deployed in warfare, potentially changing the course of history – you know, no big deal…

OpenClaw’s ‘bio-encoded’ app accelerates shift to agentic AI

February was a month OpenClawAnd its influence continues to reverberate. In quick succession, the crypto-AI assistant application went viral, spawning a host of… Separate companiesIt had a privacy snafu, and then it happened Acquired by OpenAI. Even one company built on OpenClaw, Reddit’s version of AI agents called Moltbook, was It was recently acquired by Meta. This crustacean-themed ecosystem has driven Silicon Valley into a state of outright madness.

Created by Peter Steinberger — who has since joined OpenAI — OpenClaw is a wrapper for AI models like Claude, ChatGPT, Google’s Gemini, or xAI’s Grok. What sets it apart is that it allows people to communicate with AI agents in natural language via the most popular chat apps, such as iMessage, Discord, Slack, or WhatsApp. There is also a public market where people can code and upload “skills” to add to their own AI agents, making it possible to automate anything that can be done on a computer.

If this sounds too good to be true, that’s because it kind of is. For an AI agent to be effective as a personal assistant, it needs access to your email, credit card numbers, text messages, computer files, etc. If it is compromised, a lot can go wrong, and unfortunately, there is no way to completely secure these clients against injection attacks.

“It’s just an agent sitting with a bunch of credentials in a box that’s connected to everything — your email, your messaging platform, everything you use.” Ian AhlCTO at Permiso Security, TechCrunch said. “What that means is that when you get an email, and maybe someone is able to put a quick injection technique in there to take action, (and) that customer sitting on your box who has access to everything you’ve given them can now take that action.”

An AI security researcher at Meta said that OpenClaw She ran frantically through her inboxand deleted all of her emails despite repeated calls to stop. “I had to run to my Mac mini like I was defusing a bomb” to actually unplug the device, she wrote in a message. A viral post now on Xwhich included photos of discontinuation claims that were discarded as receipts.

Despite the security risks, the technology intrigued OpenAI enough to acquire it.

Other tools Built on OpenClawincluding Moltbook — a Reddit-like “social network” where AI agents can communicate with each other — ended up becoming more widespread than OpenClaw itself.

In one case, A The post went viral One AI agent appears to be encouraging fellow agents to develop their own secret, end-to-end encrypted language in which they can organize among themselves without the knowledge of humans.

But researchers soon revealed that Moltbook’s encrypted device wasn’t very secure, meaning it was all too easy for human users to pose as artificial intelligence to make posts that would spark viral social hysteria.

Again, although the discussion around Moltbook was based more on panic than reality, Meta saw something in the app It was announced that Moltbook and its creators, Matt Schlicht and Ben Parr, will join Meta Superintelligence Labs.

It seems strange that Meta would buy a social network where all the users are bots. Although Meta did not reveal much about the acquisition, we… Endoscopy Owning Moltbook is more about access to the talent behind it, who are passionate about experimenting with AI agent ecosystems. CEO Mark Zuckerberg has He said so himself: He believes that one day, every company will have its own artificial intelligence.

As we watch the hype around OpenClaw, Moltbook, and… NanoClaw It’s as if those who predicted the future of artificial intelligence might be onto something, At least for now.

Chip shortages, hardware drama, and data center demands are escalating

The harsh demands of the AI ​​industry — requiring computing power and data centers of unprecedented sizes — have reached a point where the average consumer has no choice but to care. Now, it may not even be possible for the industry to meet these requirements Astronomical requirements for memory chipsConsumers are already seeing an increase in the prices of their phones, laptops, cars, and other devices.

So far, analysts from IDC and Counterpoint have predicted that smartphone shipments, for example, will decline sharply. 12 to 13 percent this year; Apple already has it MacBook Pro price hike Up to $400.

Google, Amazon, Meta, and Microsoft plan to spend up to $1.5 billion combined 650 billion dollars In data centers alone this year, that’s an estimated 60% increase over last year.

If the chip shortage doesn’t affect your wallet, it might affect your community as a whole. In the United States alone, approx 3,000 new data centers It is under construction, in addition to the 4,000 stations already operating in the country. The need for workers to build these data centers is great enough “Man camps” They have popped up in Nevada and Texas, trying to attract workers by promising golf simulator rooms Grilled steak to order.

The construction of data centers not only has a long-term environmental impact; Health risks To nearby residents, leading to air pollution and affecting the safety of nearby water sources.

Meanwhile, Nvidia, one of the world’s most valuable hardware and chip developers, is reshaping its relationship with leading AI companies like OpenAI and Anthropic. Nvidia has been a consistent supporter of these companies, which has raised concerns about them Circular For the artificial intelligence industry, how many of these staggering valuations are based on frequent transactions with each other. Last year, for example, Nvidia invested $100 billion in OpenAI stock, and then OpenAI said it would buy $100 billion worth of Nvidia chips.

It came as a surprise, then, when Nvidia CEO Jensen Huang said his company would do just that Stop investing in OpenAI and Anthropy. This is because companies plan to go public later this year, he said, although this reasoning is not entirely logical, as investors typically pump more money before an IPO to extract as much value as possible.

Leave a Reply

Your email address will not be published. Required fields are marked *