
For about two hours last week, Meta employees gained unauthorized access to company and user data thanks to an AI agent that gave an employee inaccurate technical advice, as previously reported by Information. Meta spokeswoman Tracy Clayton said in a statement to Edge that “user data was not mishandled” during the incident.
A Meta engineer was using an internal AI agent, which Clayton described as “similar in nature to OpenClaw within a secure development environment,” to analyze a technical question posted by another employee on an internal company forum. But after analyzing the question, the agent also answered it publicly on its own, without getting consent first. The response was meant to be shown only to the requesting employee, not posted publicly.
An employee then acted on the AI’s advice, which “provided inaccurate information” and resulted in a “SEV1” security incident, the second-highest severity rating used by Meta. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.
According to Clayton, the AI agent itself took no technical action other than posting inaccurate technical advice, something a human could also have done. However, a human might have done more testing and exercised more careful judgment before sharing the information, and it’s not clear whether the employee who originally prompted the answer intended for it to be posted publicly.
“The employee interacting with the system was fully aware that he was communicating with an automated bot. This is indicated by the disclaimer in the footer and by the employee’s own response in the thread,” Clayton told Edge. “The agent took no action except to respond to a question. If the engineer who acted on this had known better, or performed other checks, this could have been avoided.”
Last month, an artificial intelligence agent from the open source platform OpenClaw went rogue at Meta more directly, when a staff member asked it to sort through the emails in her inbox and it deleted emails without permission. The whole idea behind agents like OpenClaw is that they can take action themselves, but like any other AI model, they don’t always interpret prompts and instructions correctly or provide accurate responses, a fact that Meta staffers have now discovered twice.