
The outsized attention paid to how people turn to AI chatbots for emotional support, and sometimes even relationships, often leads to the belief that such behavior is common.
A new report from Anthropic, the company behind the popular AI chatbot Claude, reveals a different reality: people rarely seek companionship from Claude, turning to the bot for emotional support and personal advice only 2.9% of the time.
"Companionship and roleplay combined comprise less than 0.5% of conversations," the company explained in its report.
Anthropic says its study sought to uncover insights into the use of AI for "affective conversations," which it defines as personal exchanges in which people talk to Claude for coaching, counseling, companionship, roleplay, or relationship advice. Analyzing 4.5 million conversations from users on the Claude Free and Pro tiers, the company found that the vast majority of Claude usage is related to work or productivity, with people mostly using the chatbot for content creation.

That said, Anthropic found that people do often use Claude for interpersonal advice, coaching, and counseling, with users most frequently asking for guidance on improving mental health, personal and professional development, and communication and interpersonal skills.
However, the company notes that help-seeking conversations can sometimes shift into companionship-seeking when the user is facing emotional or personal distress, such as existential dread or loneliness, or when they find it difficult to form meaningful connections in their real lives.
"We also noticed that in longer conversations, counseling or coaching conversations occasionally morph into companionship, despite that not being the original reason someone reached out," Anthropic wrote, noting that such extensive conversations (with more than 50 human messages) were not the norm.
Anthropic also highlighted other insights, such as how Claude itself rarely resists user requests, except when its programming stops it from crossing safety boundaries, like providing dangerous advice or supporting self-harm. The company said conversations also tend to become more positive over time when people seek coaching or advice from the bot.
The report is certainly interesting, and it does a good job of reminding us once again that AI tools are being used for purposes beyond work. Still, it is important to remember that AI chatbots, across the board, remain a work in progress: they hallucinate, are known to readily provide wrong information or dangerous advice, and, as Anthropic itself has admitted, may even resort to blackmail.