
Before throwing a Molotov cocktail into the home of OpenAI CEO Sam Altman, the 20-year-old accused attacker wrote about his fear that an AI race would cause humans to become extinct, the San Francisco Chronicle found. Two days later, Altman’s home appeared to have been targeted again, according to the San Francisco Standard. Just a week earlier, an Indianapolis city councilman reported that 13 shots had been fired at his door, along with a note that read “No Data Centers,” after he supported a data center developer’s rezoning petition.
These disturbing incidents have raised alarms in and around the AI industry. There has long been outright resistance to the technology, fueled by fears of job displacement, climate impact, and unfettered development in the absence of safety guardrails. AI workers themselves have been quick to caution about serious risks. The vast majority of criticism and demonstrations against artificial intelligence have been peaceful, including local resistance to energy-intensive AI data centers and protests demanding a slowdown of the rapidly accelerating technology. Protesters have directly targeted AI companies with tactics such as hunger strikes.
Groups that explicitly advocate for accelerating AI development denounced the violence after the attacks on Altman’s home. Further investigation will be needed to determine the attackers’ motives, but the limited information released so far suggests a mounting backlash against the technology, and perhaps risks to industry players themselves.
Over the past few years, there have been a few other high-profile incidents amounting to threats and harassment targeting local officials, according to a database of reports compiled by the Bridging Divides Initiative at Princeton University. Last year, for example, a board member of the Community Facilities Authority in Ypsilanti, Michigan, reported that masked demonstrators visited his home to protest a “high-performance computing facility.” According to MLive, one protester allegedly smashed a printer in his yard.
Shortly after the first attack on Altman’s home, the CEO appeared to partly blame critical media coverage for the violence. A few days earlier, The New Yorker had published a lengthy investigation, drawing on more than a hundred interviews, which found that many people who worked with Altman did not trust him and saw inconsistencies in his actions. “There was an inflammatory article about me a few days ago,” Altman wrote on his personal blog. “Someone said to me yesterday that they think it comes at a time of great concern about AI and that it makes things more dangerous for me. I brushed it off. Now I’m up in the middle of the night angry, and I think I underestimated the power of words and narrative.” (He later walked back his response to the article after criticism on X, writing, “That was a poor choice of words and I wish I hadn’t used them.”)
Others have taken up the topic as well. White House AI advisor Sriram Krishnan, for example, wrote on X: “I think the doomers need to take a serious look at what they helped incite, and not just rely on the phrase ‘We condemn this and this is not a rational response.’ This is the logical consequence of ‘If we build it everyone will die’” — referring to If Anyone Builds It, Everyone Dies, the 2025 book by AI researchers Eliezer Yudkowsky and Nate Soares.
“Much of the criticism of our industry comes from honest concern about the very high risks of this technology.”
But Altman also acknowledged the way his industry can elicit highly emotional responses from the general public. “Much of the criticism of our industry comes from honest concern about the very high risks of this technology,” he wrote. “This is absolutely true, and we welcome criticism and debate in good faith… As we have this discussion, we must de-escalate the rhetoric and tactics and try to have fewer homes blown up, both figuratively and literally.”
OpenAI itself was founded amid dire warnings about the technology’s impact. Co-founder Elon Musk warned in 2017 that artificial intelligence poses a “fundamental threat to the existence of civilization.” After ChatGPT’s release, Musk joined an open letter calling for a temporary halt to AI development, having already left the OpenAI board, before launching his own artificial intelligence company, xAI. After the attack on Altman’s home, Musk said he agreed with a post on X that read: “This is wrong. I don’t like Sam as much as the next guy but violence is unacceptable.”
Even beyond apocalyptic scenarios, AI is reshaping the social fabric of the world in unpredictable ways. Numerous reports have detailed the psychological spirals that talking to an AI system for days on end can cause, including claims of AI-induced psychosis, suicide, and killing. This is in addition to real-life experiences of job loss due to AI, as well as more existential anxiety about the world AI will create. “Take any labor movement that is potentially genuinely concerned about disruption and change, augment that with an AI apocalypse, and then augment that with adulation of chatbots and romantic partners asking you to kill your ex or asking you to marry your therapist or whatever,” says Daniel Schiff, an associate professor of political science at Purdue University. “It’s not a huge surprise to see scary acts like this.”
Schiff says that while we never want to see such violent attacks, he hopes recent events serve as a “constructive wake-up call” for companies and policymakers to be more thoughtful about the decisions they make about technology. “It doesn’t excuse people who behave badly, but it tells you that something is a little bit wrong, and not just in the heads of people who behave this way,” he says.
“A few commentators have seized on this incident to portray the broader AI safety movement as dangerous.”
The suspect in one of the attacks appears to have joined the open Discord server of PauseAI, a group that supports pausing frontier AI development until proven safety guardrails are in place. The organization issued a statement saying the suspect had no role in the group and had not attended any of its events. While PauseAI says it “unequivocally condemns this attack and all forms of violence, intimidation and harassment,” it also noted that “a handful of commentators have exploited this incident to portray the broader AI safety movement as dangerous or extreme.”
PauseAI organizes protests and town halls and encourages followers to contact policymakers with their concerns about AI. In its public statement, the group says its efforts give people with real concerns about the future a way to act peacefully. “The alternative to organized peaceful movements is not silence,” the group wrote. “It is isolated and desperate individuals acting alone, without community, without accountability and without anyone urging restraint or offering peaceful courses of action. This is a far more dangerous world, and exactly the world we strive to prevent.”
Although not specific to AI-related violence, there are proven approaches to building resilience in the face of political violence. The Bridging Divides Initiative recommends that community leaders and officials coordinate risk responses in advance and participate in de-escalation training.
While Schiff doesn’t expect extreme rhetoric about AI to subside, he suggests lowering the temperature by finding positive ways to collectively prepare for the changes AI could bring, such as identifying appropriate social safety nets to deal with job displacement. “We’ve unleashed a Pandora’s box,” Schiff says. “Let’s figure out how we’ll open this box more carefully in the future.”