“Agentic AI systems have been weaponized.”
That’s one of the first lines of Anthropic’s new Threat Intelligence report, released today, which details a wide range of cases in which Claude, and likely many other leading AI agents and chatbots, has been abused.
First up: “vibe-hacking.” One sophisticated cybercrime operation that Anthropic says it recently disrupted used Claude Code, Anthropic’s AI coding agent, to extort data from at least 17 different organizations around the world within a single month. The targets included healthcare organizations, emergency services, religious institutions, and even government entities.
“If you’re a sophisticated actor, what would otherwise have required maybe a team of sophisticated actors to conduct, like the vibe-hacking case, one individual can now conduct with the assistance of agentic systems,” Jacob Klein, Anthropic’s head of threat intelligence, said in an interview. He added that in this case, Claude was “executing the operation end to end.”
In the report, Anthropic writes that in cases like this, AI acts “as both a technical consultant and an active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.” For example, Claude was used specifically to write “psychologically targeted extortion demands.” The cybercriminals then determined what the stolen data, which included healthcare records, financial information, government credentials, and more, would be worth on the dark web, and made ransom demands exceeding $500,000, according to Anthropic.
“This is the most sophisticated use of agents I’ve seen … for cybercrime,” Klein said.
In another case study, Claude helped North Korean IT workers obtain jobs at Fortune 500 companies in the United States to fund the country’s weapons program. Typically in such cases, per Klein, North Korea tries to draw on people who have been to college, have IT experience, or have some ability to communicate in English, but he said that in this case the barrier is much lower for people in North Korea to pass technical interviews at large tech companies and then keep their jobs.
With Claude’s help, Klein said, “we’re seeing people who don’t know how to write code, don’t know how to communicate professionally, and know very little about the English language or culture, who are just asking Claude to do everything … and then once they land the job, most of the work they’re doing with Claude is keeping the job.”
Another case study involved a romance scam. A Telegram bot with more than 10,000 monthly users advertised Claude as a “high EQ” model for generating emotionally intelligent messages, ostensibly for scams. It enabled non-native English speakers to write persuasive, complimentary messages to gain the trust of victims in the United States, Japan, and Korea, and then ask them for money. One example in the report showed a user uploading an image of a man in a tie and asking how best to compliment him.
In the report, Anthropic itself acknowledges that although the company has “developed sophisticated safety and security measures to prevent the misuse” of its AI, and although those measures are “generally effective,” bad actors still manage to find ways around them. Anthropic says AI has lowered the barriers to sophisticated cybercrime, and that bad actors are using the technology to profile victims, automate their practices, create false identities, analyze stolen data, steal credit card information, and more.
Each of the case studies in the report adds to the growing evidence that AI companies, however hard they try, often cannot keep pace with the societal risks of the technology they are building and releasing into the world. “While specific to Claude, the case studies presented below likely reflect consistent patterns of behavior across all frontier AI models,” the report states.
Anthropic said that for each case study it banned the associated accounts, built new classifiers or other detection measures, and shared information with the appropriate government agencies, such as intelligence agencies or law enforcement. It also said that its team’s case studies are part of a broader shift in AI risk.
“There’s a shift happening where AI systems are no longer just chatbots, because they can now take multiple steps,” Klein said, adding, “They’re able to actually conduct actions or activities, as we’re seeing here.”