
As a journalist covering AI, I hear from countless people who seem completely convinced that ChatGPT, Claude, or some other chatbot has achieved “consciousness,” or, my personal favorite, “a mind of its own.” Yes, the Turing test was passed a while ago, but unlike raw intelligence, such things cannot be easily measured. Large language models will claim to think for themselves, describe inner torment, or even confess undying love, but such statements imply nothing about an inner life.
Could they? Many of AI’s actual builders don’t speak in these terms. They are too busy chasing a performance benchmark known as “artificial general intelligence,” a purely functional category that has nothing to do with a machine’s potential experience of the world. So, despite my skepticism, I thought it might be useful, perhaps even enlightening, to spend some time with a company that believes it can decode consciousness itself.
Conscium was founded in 2024 by British researcher and entrepreneur Daniel Hulme, and its advisors include an impressive lineup of neuroscientists, philosophers, and experts in animal consciousness. When we first spoke, Hulme was realistic: there are good reasons to doubt that language models are capable of consciousness. Crows, octopuses, and even amoebas interact with their environments in ways that chatbots cannot. Experiments also indicate that AI utterances do not reflect coherent or consistent internal states. As Hulme puts it, echoing the broad consensus: “Large language models are very primitive representations of the brain.”
And yet it all depends on what consciousness means in the first place. Some philosophers argue that consciousness is too subjective to ever be studied or recreated, but Conscium is betting that if it exists in humans and other animals, it can be identified, measured, and built into machines.
There are competing and overlapping ideas about what the basic features of consciousness are, including the ability to sense and “feel,” an awareness of oneself and one’s environment, and what is known as metacognition, the ability to reflect on one’s own thought processes. Hulme believes that the subjective experience of consciousness emerges when these phenomena combine, much as the illusion of movement is created when you flip through the sequential images of a flip book. But how do you recognize the components of consciousness, the individual frames, so to speak, as well as the force that binds them together? “You’re bringing the AI back to itself,” Hulme says.
Conscium aims to break conscious thought down to its most basic form and simulate it in the laboratory. “There has to be something out of which consciousness is built, and out of which it emerged in evolution,” says Mark Solms, a South African psychoanalyst and neuropsychologist involved in the Conscium project. In his 2021 book, The Hidden Spring, Solms proposed a new way of thinking about consciousness. He argued that the brain uses perception and action in a feedback loop designed to reduce surprise, generating hypotheses about the future that are updated as new information arrives. The idea builds on the “free energy principle” developed by Karl Friston, another noteworthy, if controversial, neuroscientist (and a fellow advisor to Conscium). Solms goes on to suggest that in humans this feedback loop evolved into a system mediated by emotions, and that it is these very feelings that give rise to sentience and awareness. The theory is reinforced by the fact that damage to the brainstem, which plays a crucial role in regulating emotions, appears to cause patients to lose consciousness.
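The perceive-predict-update loop described above can be sketched in a few lines of code. This is a minimal toy illustration of prediction-error minimization, not Solms’ or Friston’s actual model; the hidden state, learning rate, and noise level are all invented for the example. The agent holds a belief about a hidden quantity, receives noisy observations, and nudges its belief to reduce “surprise” (prediction error), which shrinks over time as its internal model improves.

```python
import random

random.seed(0)

def free_energy_loop(true_state=5.0, steps=50, lr=0.2, noise=0.5):
    """Toy predictive-processing loop: hold a belief about a hidden
    state, observe noisy samples, and update the belief to reduce
    prediction error ("surprise")."""
    belief = 0.0
    errors = []
    for _ in range(steps):
        observation = true_state + random.gauss(0, noise)
        prediction_error = observation - belief   # the "surprise" signal
        belief += lr * prediction_error           # revise the hypothesis
        errors.append(abs(prediction_error))
    return belief, errors

belief, errors = free_energy_loop()
print(f"final belief: {belief:.2f}")
print(f"early surprise: {sum(errors[:5]):.1f}, late surprise: {sum(errors[-5:]):.1f}")
```

In Solms’ framing, a falling surprise signal would be the seed of something like relief, and a rising one the seed of something like fear; this sketch tracks only the raw error, with no affective layer on top.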
At the end of his book, Solms proposes a way to test his theories in the laboratory. And now, he says, he has done just that. He has not yet published the paper, but he showed it to me. Did it bend my brain a little? Yes. Solms’ artificial agents live in a simple simulated environment and are governed by algorithms implementing the Friston-style feedback loop that, he proposes, mediates emotion and forms the basis of consciousness. “I have a few motivations for doing this research,” Solms said. “The first is that it’s interesting.”
Solms’ simulated world is constantly changing, requiring his agents to constantly model it and adjust. The agents’ experience of this world is mediated through affect-like responses of fear, excitement, and even pleasure; in short, they are robots that feel. Unlike the AI agents everyone talks about today, Solms’ creations have genuine desires to explore their environment. To understand them properly, one must try to imagine how they “feel” about their little world. Solms believes it should eventually be possible to combine the approach he is developing with a language model, creating a system capable of speaking about its own conscious experience.