Anthropic just announced a new feature called “Dreaming” at the company’s developer conference in San Francisco. It is part of Anthropic’s recently launched agent infrastructure, designed to help users manage and deploy tools that automate software processes. The “dreaming” aspect sorts through the text of what an agent recently completed and attempts to gather insights that improve the agent’s performance.
People using AI agents often send them on multi-step journeys, such as visiting a few websites or reading multiple files, to complete online tasks. The new “Dreaming” feature allows agents to look for patterns in their activity logs and improve their abilities based on those insights.
The name of the feature immediately calls to mind the Philip K. Dick science fiction novel Do Androids Dream of Electric Sheep?, which explores the qualities that truly separate humans from powerful machines. While our current generative AI tools are nowhere near the machines in that book, I’m willing to draw the line here, right now: no more generative AI features with names borrowed from human cognitive processes.
“Memory and dreaming together form a powerful system for agent self-improvement,” reads the Anthropic blog post about the launch of this developer research preview. “Memory allows each agent to capture what it learns as it works. Dreaming refines that memory between sessions, pulls shared learnings across agents, and keeps it up to date.”
Since the chatbot boom sparked in 2022, AI company leaders have done their best to name aspects of generative AI tools after what happens in the human brain. OpenAI released its first “reasoning” model in 2024, a chatbot that needs time to “think”; the company described the release at the time as “a new series of AI models designed to spend more time thinking before responding.” Many startups also refer to their chatbots as having “memories” about the user. Rather than the fast storage commonly called computer “memory,” these are nuggets of information much like a human’s: he lives in San Francisco, enjoys afternoon baseball games, and hates eating cantaloupe.
It’s a consistent marketing approach from AI leaders, who have continued to rely on branding that blurs the line between what humans do and what machines can do. Even the way these companies develop chatbots such as Claude, with distinct “personalities,” can make users feel as if they are talking to something with the potential for a deep inner life, something that might have dreams even while my laptop is closed.
At Anthropic, this anthropomorphism runs deeper than marketing strategy. “We also discuss Claude in terms usually reserved for humans (e.g., ‘virtue,’ ‘wisdom’),” reads part of Anthropic’s constitution, which describes how the company wants Claude to act. “We do this because we expect Claude’s reasoning to be based on human concepts by default, given the role of human text in Claude’s training; and we believe that encouraging Claude to embrace some human-like qualities may be actively desirable.” The company even employs a resident philosopher to try to understand the chatbot’s “values.”