Anthropic's Claude takes control of a robot dog


As robots begin to appear in warehouses, offices, and even people's homes, the idea of large language models hacking complex systems sounds like the stuff of science fiction nightmares. So, of course, researchers at Anthropic were keen to see what would happen if Claude tried to control a robot, in this case a robotic dog.

In a new study, Anthropic researchers found that Claude was able to automate much of the work involved in programming a robot and getting it to perform physical tasks. On one level, their findings demonstrate the strong coding capabilities of modern AI models. On another, they show how these systems are beginning to extend into the physical world as models master more aspects of programming and get better at interacting with software and physical objects alike.

"We have a suspicion that the next step for AI models is to start reaching out into the world and affecting the world more broadly," Logan Graham, a member of Anthropic's Red Team, which studies models for potential risks, tells WIRED. "This will really require models to interface more with robots."

Courtesy of Anthropic

Anthropic was founded in 2021 by former OpenAI employees who believed that AI might become problematic, even dangerous, as it advances. Graham says today's models aren't smart enough to fully take control of a robot, but future models might be. Studying how people use LLMs to program robots could help the industry prepare for the idea of "models eventually embodying themselves," he says, referring to the possibility that AI might one day operate physical systems.

It's still unclear why an AI model would decide to take control of a robot, let alone do something harmful with it. But speculating about the worst-case scenario is part of Anthropic's brand, and it helps position the company as a key player in the responsible AI movement.

In the experiment, dubbed Project Fetch, Anthropic asked two groups of researchers with no prior robotics experience to take control of a robot dog, the quadruped Unitree Go2, and program it to perform specific activities. Each team was given access to a controller and asked to complete increasingly complex tasks. One group used Claude's coding model; the other wrote code without AI assistance. The group using Claude was able to complete some tasks, though not all, faster than the human-only programming group. For example, it managed to get the robot to walk around and find a beach ball, something the human-only group couldn't figure out.

Anthropic also studied the collaboration dynamics within both teams by recording and analyzing their interactions. The researchers found that the group without access to Claude exhibited more negative sentiment and confusion. This may be because Claude made it faster to connect to the robot and coded an easier-to-use interface.

Courtesy of Anthropic

The Go2 robot used in Anthropic's experiments costs $16,900, relatively cheap by robotics standards. It is typically deployed in industries such as construction and manufacturing to perform remote inspections and security patrols. The robot can walk autonomously but generally relies on high-level software commands or a person operating a controller. The Go2 is made by Unitree, based in Hangzhou, China, whose systems are currently the most popular on the market, according to a recent report by SemiAnalysis.

The large language models behind ChatGPT and other clever chatbots typically generate text or images in response to a prompt. More recently, these systems have become adept at generating code and operating software, turning them into agents rather than mere text generators.
