
Meta says it may stop developing AI systems it deems too risky.


Meta CEO Mark Zuckerberg has pledged to one day make artificial general intelligence (AGI) – roughly defined as AI that can accomplish any task a human can – openly available. But in a new policy document, Meta suggests there are certain scenarios in which it may not release a highly capable AI system it has developed internally.

The document, which Meta calls its Frontier AI Framework, identifies two types of AI systems the company considers too risky to release: "high-risk" and "critical-risk" systems.

As Meta defines them, both "high-risk" and "critical-risk" systems are capable of aiding in cybersecurity, chemical, and biological attacks. The difference is that "critical-risk" systems could lead to a "catastrophic outcome [that] cannot be mitigated in [the] proposed deployment context."

What kinds of attacks are we talking about here? Meta gives a few examples, such as the "automated end-to-end compromise of a best-practice-protected corporate-scale environment" and the "proliferation of high-impact biological weapons." The company acknowledges that the list of possible catastrophes in the document is far from exhaustive, but says it includes those Meta believes to be the "most urgent" and plausible to arise as a direct result of releasing a powerful AI system.

Somewhat surprisingly, the document states that Meta classifies system risk not on the basis of any single empirical test, but as informed by the input of internal and external researchers, subject to review by "senior-level decision-makers." Why? Meta says it does not believe the science of evaluation is "sufficiently robust as to provide definitive quantitative metrics" for deciding a system's riskiness.

If Meta determines a system is high-risk, the company says it will limit access to the system internally and will not release it until it implements mitigations that reduce the risk to moderate levels. If, on the other hand, a system is deemed critical-risk, Meta says it will put security protections in place to prevent the system from being exfiltrated, and will pause development until the system can be made less dangerous.

Meta's Frontier AI Framework, which the company says will evolve with the changing AI landscape, and which Meta had earlier committed to publishing ahead of the France AI Action Summit this month, appears to be a response to criticism of the company's "open" approach to system development. Meta has embraced a strategy of making its AI technology openly available – albeit not open source by the commonly accepted definition – in contrast to companies such as OpenAI, which gate their systems behind an API.

For Meta, the open-release approach has proven to be both a blessing and a curse. The company's family of AI models, called Llama, has racked up hundreds of millions of downloads. But Llama has also reportedly been used by at least one U.S. adversary to develop a defense chatbot.

In publishing its Frontier AI Framework, Meta may also be aiming to contrast its open AI strategy with that of the Chinese AI firm DeepSeek. DeepSeek likewise makes its systems openly available, but its AI has few safeguards and can easily be steered to produce toxic and harmful outputs.

"[W]e believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI," Meta writes in the document, "it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk."

