
On Monday, more than 200 former heads of state, diplomats, Nobel laureates, AI leaders, scientists, and others agreed on one thing: there should be an international agreement on "red lines" that AI should never cross, such as not allowing AI to impersonate a human being or to self-replicate.
They, along with more than 70 organizations that work on AI, all signed the Global Call for AI Red Lines initiative, an appeal to governments to reach an "international political agreement on 'red lines' for AI by the end of 2026." Signatories include British-Canadian computer scientist Geoffrey Hinton, OpenAI cofounder Wojciech Zaremba, Anthropic CISO Jason Clinton, Google DeepMind research scientist Ian Goodfellow, and others.
"The goal is not to react after a major incident occurs … but to prevent large-scale, potentially irreversible risks before they happen," Charbel-Raphaël Segerie, executive director of the French Center for AI Safety (CeSIA), told reporters at a press briefing on Monday.
He added: "If nations cannot yet agree on what they want to do with AI, they must at least agree on what AI must never do."
The announcement comes ahead of the 80th United Nations General Assembly high-level week in New York, and the initiative is led by CeSIA, The Future Society, and UC Berkeley's Center for Human-Compatible Artificial Intelligence.
Maria Ressa, a Nobel Peace Prize laureate, mentioned the initiative during her opening remarks at the assembly, calling for efforts to "end Big Tech impunity through global accountability."
Some regional AI red lines already exist. For example, the European Union's AI Act bans certain uses of AI deemed "unacceptable" within the EU. There is also an agreement between the United States and China that nuclear weapons should remain under human, not AI, control. But there is no global consensus yet.
What is needed in the long run goes beyond "voluntary pledges," Niki Iliadis, director for global governance of AI at The Future Society, told reporters on Monday. Responsible scaling policies drawn up inside AI companies "fall short of actual enforcement." Eventually, an independent global institution "with teeth" will be needed to define, monitor, and enforce the red lines.
"They can comply by not building AGI until they know how to make it safe," said Stuart Russell, professor of computer science at UC Berkeley and a leading AI researcher. "Just as nuclear power developers did not build nuclear plants until they had some idea of how to stop them from exploding, the AI industry must choose a different technology path, one that builds in safety from the beginning, and we must know that they are doing it."
Russell also argued that red lines do not hinder economic development or innovation, as some critics of AI regulation contend. "You can have AI for economic development without having AGI that we don't know how to control," he said. "This supposed dichotomy, that if you want medical diagnosis you have to accept world-destroying AGI, I just think it's nonsense."