Anthropic has updated its Claude AI chatbot policy in response to growing safety concerns. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the most dangerous weapons that people should not develop using Claude.
Anthropic does not highlight the changes to its weapons policy in the post summarizing its updates, but a comparison between the company's old usage policy and the new one reveals a notable difference. While Anthropic previously prohibited using Claude to "produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life," the updated version expands on this by specifically prohibiting the development of high explosives.
In May, Anthropic implemented "AI Safety Level 3" protections alongside the launch of its new Claude Opus 4 model. The safeguards are designed to make the model more difficult to jailbreak, as well as to help prevent it from assisting in the development of CBRN (chemical, biological, radiological, and nuclear) weapons.
In its post, Anthropic also acknowledges the risks posed by agentic AI tools, including Computer Use, which lets Claude take control of a user's computer, as well as Claude Code, a tool that embeds Claude directly in a developer's terminal. "These powerful capabilities introduce new risks, including the potential for misuse, malware creation, and cyberattacks," Anthropic writes.
The AI startup is responding to these potential risks by folding a new "Do Not Compromise Computer or Network Systems" section into its usage policy. This section includes rules against using Claude to discover or exploit vulnerabilities, create or distribute malware, develop tools for denial-of-service attacks, and more.
Additionally, Anthropic is loosening its policy on political content. Instead of prohibiting the creation of all kinds of content related to political campaigns and lobbying, Anthropic will now only prohibit people from using Claude for use cases that are "deceptive or disruptive to democratic processes, or involve voter and campaign targeting." The company also clarified that its requirements for all its "high-risk" use cases, which come into play when people use Claude to make recommendations to individuals or customers, apply only to consumer-facing scenarios, not to business-to-business use.