Meta and other tech companies ban OpenClaw due to cybersecurity concerns


Last month, Jason Grad issued a late-night warning to the 20 employees of his tech startup. "You've probably seen Clawdbot trending," he wrote in a message punctuated with a red siren emoji. "Please keep Clawdbot off all company devices and away from work-related accounts."

Grad isn't the only tech executive raising concerns with employees about the experimental AI tool, which was briefly known as Moltbot and is now called OpenClaw. One Meta executive says he recently asked his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive, who spoke on the condition of anonymity in order to talk frankly, told reporters that he believes the program is unpredictable and can lead to privacy violations if it isn't run in a secure environment.

Peter Steinberger, OpenClaw's sole creator, launched it as a free, open source tool last November. Its popularity soared last month as other programmers contributed features and began sharing their experiences using it on social media. Last week, Steinberger joined ChatGPT developer OpenAI, which says it will keep OpenClaw open source and support it through a foundation.

Setting up OpenClaw requires basic software engineering knowledge. Once running, it needs only limited guidance to control the user's computer and interact with other applications, helping with tasks such as organizing files, performing web searches, and shopping online.

Some cybersecurity professionals have publicly urged companies to strictly control how their workforces use OpenClaw. The recent bans show how quickly companies are moving to put security first, ahead of their eagerness to experiment with emerging AI technology.

"It is our policy to 'mitigate first, investigate second' when we encounter anything that could be harmful to our company, our users, or our customers," says Grad, co-founder and CEO of Massive, which provides internet proxy tools to millions of users and businesses. He says his warning to employees was issued on January 26, before any of them had installed OpenClaw.

At another technology company, Valere, which builds software for institutions including Johns Hopkins University, an employee posted about OpenClaw on January 29 in an internal Slack channel, suggesting the new technology for a potential trial. The company's president quickly responded that using OpenClaw was strictly prohibited, Valere CEO Jay Beston tells WIRED.

"If it gets access to one of our developer devices, it can access our cloud services and our customers' sensitive information, including credit card information and GitHub codebases," says Beston. "It's also very good at cleaning up after itself, which scares me."

A week later, Beston let Valere's research team run OpenClaw on an employee's old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised limiting who could give commands to OpenClaw and exposing its control panel to the internet only behind a password, to prevent unwanted access.
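The two recommendations, an allowlist of who may issue commands and a password gate on the internet-facing control panel, amount to a simple dual check before any command is accepted. A minimal sketch of that idea (hypothetical names and values; not OpenClaw's actual configuration or code):

```python
import hmac

# Hypothetical settings an operator would lock down before exposing the
# control panel to the internet.
ALLOWED_USERS = {"alice", "bob"}   # who may give the bot commands
PANEL_PASSWORD = "change-me"       # control-panel password

def authorize(user: str, password: str) -> bool:
    """Accept a command only if the sender is allowlisted AND supplies
    the panel password (compared in constant time)."""
    return user in ALLOWED_USERS and hmac.compare_digest(password, PANEL_PASSWORD)

assert authorize("alice", "change-me")          # allowlisted, correct password
assert not authorize("mallory", "change-me")    # not on the allowlist
assert not authorize("alice", "guess")          # wrong password
```

Either check alone is weak: a password without an allowlist lets anyone who learns it drive the bot, and an allowlist without a password trusts whatever identity the request claims.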

In a report shared with WIRED, Valere researchers added that users must "accept that the bot can be deceived." For example, if OpenClaw is set up to summarize a user's email, a hacker could send that person a malicious email that directs the AI to share copies of files on their computer.

But Beston is confident safeguards can be put in place to make OpenClaw more secure. He has given a team at Valere 60 days to investigate. "If we don't think we can do it in a reasonable time, we'll abandon it," he says. "Whoever figures out how to make it safe for businesses will definitely have a winner."
