
On his seventeenth day without eating, Guido Reichstadter said he was doing well: moving a little slower, but well.
Every day since September 2, Reichstadter has appeared outside the San Francisco headquarters of AI startup Anthropic, standing from about 11 am to 5 pm. His sign reads "Hunger Strike: Day 15," though he actually stopped eating on August 31. The sign calls for a halt to the race toward artificial general intelligence, or AGI: the concept of an AI system that equals or surpasses human cognitive capabilities.
AGI is a rallying cry for technology executives, with major companies and startups alike racing to get there first. To Reichstadter, it is an existential risk that these companies do not take seriously. "Trying to build AGI, systems at or beyond the human level, is the goal of all these frontier companies," he told this publication. "I think it's crazy. It's risky. Incredibly risky. I think it should stop now." A hunger strike is his clearest way of getting the attention of AI leaders, and now his is not the only one.
Reichstadter points to a 2023 interview with Anthropic CEO Dario Amodei, who acknowledged the recklessness of building advanced AI. "My chance that something goes wrong on the scale of human civilization may be somewhere between 10 and 25 percent," Amodei said. Amodei and others have concluded that the development of AGI is inevitable, and say their goal is simply to be its most responsible steward, a rationale Reichstadter calls self-serving.
From Reichstadter's point of view, companies bear a responsibility not to develop technology that will hurt people at scale, and anyone who understands the risks bears some responsibility as well.
"That's part of what I'm trying to do," he said. "I also have two children."
Anthropic did not immediately respond to a request for comment.
Each day, Reichstadter said, he waves to the security guards at Anthropic's office as he sets up, and watches Anthropic employees avert their eyes as they walk past him. He said at least one employee shared some of his fears of disaster, and he hopes to inspire the AI company's employees to have "the courage to act as human beings and not as tools" of their company, because they bear a deeper responsibility as the people "developing the most dangerous technology on Earth."
His fears are shared by countless others in the AI world. It is a divided community, with endless disagreements over the specific long-term risks AI poses and the best way to stop them; even the terminology itself is contested. One thing most can agree on, though, is that AI's current path bodes ill for humanity.
Reichstadter said he first became aware of the possibility of human-level AI during his university studies about 25 years ago. At the time, it seemed far off, but with the launch of ChatGPT in 2022, he sat up and took notice. He says he is particularly concerned about the role he believes AI is playing in the rise of authoritarianism in the United States.
"I am concerned about my community," he said. "I am concerned about my family, about their future. I am concerned about how AI is going to affect them. I am concerned that it is not being used morally. I am also concerned that there are realistic reasons to believe there are catastrophic and even existential risks."
In recent months, Reichstadter has increasingly tried to draw technology leaders' attention to an issue he believes is vital. He has worked in the past with a group called Stop AI, which seeks a permanent ban on artificial superintelligence systems "to prevent human extinction, mass job loss, and many other problems." In February, he and other members helped chain shut the doors of OpenAI's offices in San Francisco, and a few of them, including Reichstadter, were arrested for obstruction.
Reichstadter delivered a handwritten letter to Amodei via Anthropic's security desk on September 2, and a few days later published it online. The letter asks that Amodei stop trying to develop AGI, and do everything in his power to stop the global AGI race; and, if he is not willing to do so, to tell Reichstadter why not. In the letter, Reichstadter wrote: "For my children, with the urgency and gravity of our situation in my heart, I have begun a hunger strike outside Anthropic's offices ... while I await your response."
"I hope he has the basic decency to answer this request," said Reichstadter. "I don't think any of them have ever faced a truly personal challenge. It's one thing to think, in the abstract and anonymously, that the work you do may end up killing many people. It's another to have one of your potential future victims face to face, asking you to explain why, as a human being."
Soon after Reichstadter started his peaceful protest, it inspired a similar protest in London, where two others began maintaining a presence outside Google DeepMind's office. And someone joined in India, fasting on a livestream.
Michael Trazzi took part in the London hunger strike for seven days before choosing to stop, after two near-fainting episodes and a consultation with a doctor, but he still supports the other participant, Denys Sheremet, who is now on day 10. Trazzi and Reichstadter share similar concerns about humanity's future under continued AI progress, although both are reluctant to define themselves as part of any specific community or movement.
Trazzi said he has been thinking about the dangers of artificial intelligence since 2017. He wrote a letter to DeepMind CEO Demis Hassabis and published it publicly, as well as passing it along through an intermediary.
In the letter, Trazzi asked that Hassabis "take a first step today toward a future coordinated halt to AGI development, by publicly saying that DeepMind will agree to stop developing frontier AI models if all the other major AI companies in the West and China do the same. Once all the major companies agree to pause, governments can formalize an international agreement."
"If AI were not very dangerous, I don't think I would be ... very supportive of regulation," Trazzi told this publication. "But I think ... there are some things in the world where, by default, the incentives (point in) the wrong direction. I think with AI, we need regulation."
"AI is a fast-moving field, and there will always be a range of views on this technology. We believe in AI's potential to advance science and improve the lives of billions of people. Safety, security, and responsible governance have always been top priorities in building a future where people benefit from the technology rather than being put at risk by it," Google DeepMind said in a statement.
In a post on X, Trazzi wrote that the hunger strikes have sparked plenty of discussion with tech workers, claiming that a Meta employee asked him, "Why only Google DeepMind? We're doing a great job too. We're also in the race."
He also wrote in the post that one DeepMind employee said AI companies are unlikely to release models that could cause catastrophic damage because of the opportunity cost, while another "admitted that he believed extinction from AI was likely, but chose to work at DeepMind because it was still one of the more safety-conscious companies."
Reichstadter and Trazzi have not yet received responses to their letters to Hassabis and Amodei. (Google also declined to answer a question from this publication about why Hassabis has not responded to the letter.) Still, they hold out hope that their actions will lead to an acknowledgment, a meeting, or, ideally, a commitment from the executives to change course.
"We are in a reckless global race toward disaster," he said. "If there is a way out, it will depend on people being willing to tell the truth and say, 'We are not in control.' Ask for help."