In the race to make AI models look ever more impressive, technology companies have adopted a theatrical approach to language. They keep talking about AI as if it were a person. It’s not just AI “thinking” or “planning” (those words are already loaded) but the “soul” of an AI model, and how models “confess,” “want,” “scheme,” or “feel” uncertainty.
This is not a harmless marketing flourish. Anthropomorphizing AI is misguided, irresponsible, and ultimately erodes the general public’s understanding of a technology that already struggles with transparency, at a moment when clarity matters most.
Research from major AI companies that aims to shed light on generative AI behavior is often framed in ways that obscure more than they illuminate. Take, for example, a recent post from OpenAI detailing its work on getting its models to “admit” their mistakes or shortcuts. It’s a valuable experiment that explores how a chatbot self-reports certain misbehavior, such as hallucinating or scheming. But OpenAI’s description of the process as a “confession” suggests there is a psyche behind the output of a large language model.
This framing perhaps stems from a recognition of how challenging true transparency is for LLMs. We have seen, for example, that AI models cannot reliably show their work in tasks such as solving Sudoku puzzles.
There is a gap between what AI can produce and how it produces it, which is exactly why these humanlike terms are so dangerous. We can debate the true limits and risks of this technology, but terminology that casts AI as a sentient being only downplays the concerns or glosses over the risks.
AI systems have no souls, motives, feelings, or morals. They don’t “confess” because they feel compelled to be honest, just as a calculator doesn’t “apologize” when you press the wrong key. These systems generate patterns of text based on statistical relationships learned from large data sets.
That’s it.
Anything that appears human is a projection of our inner lives onto a highly sophisticated mirror.
Anthropomorphizing AI gives people the wrong idea about what these systems actually are, and that has consequences. When we start assigning consciousness and emotional intelligence to software that has neither, we start trusting AI in ways it was never meant to be trusted.
Today, more and more people are turning to “Dr. ChatGPT” for medical guidance instead of relying on qualified, licensed doctors. Others lean on AI-generated answers in areas such as finance, emotional health, and personal relationships. Some grow dependent on faux friendships with chatbots, deferring to them for guidance and assuming that whatever the LLM spits out is “good enough” to inform their decisions and actions.
When companies rely on anthropomorphic language, they blur the line between simulation and consciousness. The jargon inflates expectations, incites fear, and distracts from the real issues that actually deserve our attention: bias in data sets, misuse by bad actors, safety, reliability, and concentration of power. None of these topics require mystical metaphors.
Take, for example, the recent leak of Anthropic’s “soul document,” which was used to shape Claude Opus 4.5’s personality, self-conception, and identity. This ridiculous piece of internal documentation wasn’t meant as a metaphysical claim; it reads more like a debugging guide for its engineers. Still, the language these companies use behind closed doors inevitably seeps into how the general public talks about the technology. Once that language sticks, it shapes how we think about these tools, as well as how we act around them.
Or take OpenAI’s research into AI “scheming,” in which a collection of rare but deceptive responses led some researchers to conclude that the models were intentionally hiding certain abilities. Checking AI outputs is good practice; implying that chatbots have motives or strategies of their own is not. In fact, OpenAI’s report said these behaviors were the product of training data and certain incentive pressures, not signs of deception. But because the word “scheming” was used, the conversation shifted to fears of AI as some kind of conspiring agent.
There are better, more precise, more technical words. Instead of “soul,” talk about the model’s architecture or training. Instead of “confession,” call it error reporting or internal consistency checking. Instead of saying models “scheme,” describe their optimization process. We should refer to AI using terms such as tendencies, outputs, representations, optimizers, model updates, or training dynamics. It’s not as dramatic as “soul” or “confession,” but it has the advantage of being grounded in reality.
To be fair, there are reasons these LLM behaviors seem human: the companies have trained the models to mimic us.
As the authors of the 2021 paper “On the Dangers of Stochastic Parrots” point out, systems designed to replicate human language and communication will eventually mirror it: our verbiage, our phrasing, our tone, and our tenor. But similarity does not mean true understanding. It means the model is doing what it was optimized to do. When a chatbot imitates as convincingly as chatbots now can, we end up reading humanity into the machine, even though there is none there.
Language shapes public perception. When the words are vague, mystical, or anthropomorphic, the public ends up with a distorted picture. That distortion benefits only one group: AI companies that profit when LLMs appear more capable, useful, and human than they actually are.
If AI companies want to build public trust, the first step is simple. Stop treating language models like mysterious beings with souls. They don’t have feelings – we have them. Our words should reflect that, not obscure it.