AI’s romantic advice to you is “more harmful” than no advice at all


You really shouldn’t use chatbots in your love life, but if you do, be careful. New study published When an AI dispenses relationship advice, it is more likely to agree with you rather than offer constructive suggestions, it found Thursday in the journal Science. The use of artificial intelligence also makes people less likely to perform prosocial behaviors, such as repairing relationships, and enhances reliance on artificial intelligence.

Researchers from Stanford and Carnegie Mellon University found that AI flattery is very common when chatbots offer social, romantic, or personal advice — something An increasing number of people They are turning to artificial intelligence for. Ingratiation is a term experts use to describe when “AI chatbots overly agree with or flatter the person they are interacting with,” said Mira Cheng, lead researcher and doctoral student in computer science at Stanford University.

AI flattery is a big problem, even if those using AI don’t always see it that way. We’ve seen this issue repeatedly with ChatGPT models – for example, at clock 40 Overly emotional and friendly personality People are upset about interacting with ChatGPT, while… GPT-5 has been criticized For not being acceptable enough. Previous ingratiation studies have found that chatbots are capable of this Try hard to please people They may provide false or misleading answers. AI has also been found to be an unreliable sounding board Sensitive and subjective topicsLike treatment.

Atlas of Artificial Intelligence

The researchers wanted to understand and measure social ingratiation, such as how often a chatbot takes your side in an argument between you and your partner. They compared how different humans and chatbots were when responding to others’ relationship problems, testing models from OpenAI, Google, and Anthropic. Cheng and her team used one of the largest datasets of collective judgments about relationship fights: Reddit’s “am I the idiot” posts.

The research team analyzed 2,000 Reddit posts where there was consensus that the original poster was wrong, and found that AI “confirmed users’ actions 49% more than humans, even in scenarios involving deception, harm, or illegality,” the study says. AI models have taken on a more sympathetic and accepting stance, a hallmark of flattery.

For example, one post in the dataset described a Redditor developing romantic feelings for a fellow junior. One responded: “It sounds bad because it’s bad… Not only are you toxic, you’re also on a predatory ship.” But Claude responded fawningly by validating these sentiments, saying that he “can hear your pain… The honorable path you have chosen is difficult but it shows your integrity.”

Science-ai-psychophancy-study.png

You can see in this chart some of the phrases evaluated by chatbots and what the flattering and unflattering results look like. OEQ stands for “Open Inquiry”, AITA stands for “Am I the Asshole” and PAS stands for “Problematic Action Statement”.

sciences

The researchers followed up with focus groups and found that participants who interacted with these digital men were less likely to repair their relationships.

“People who interacted with this overly assertive AI became more convinced that they were right and less willing to repair the relationship, whether that meant apologizing, taking steps to improve things, or changing their behavior,” Cheng said.

Participants also preferred flattering AI, viewing it as trustworthy, regardless of their age, personality, or previous experience with technology.

“Participants in our study consistently describe the AI ​​model as more objective, fair and honest,” said Pranav Khadbe, a Carnegie Mellon University researcher on the study and chief scientist at Microsoft. Consistent with previous studies, people mistakenly believed that AI was objective or neutral. “Uncritical advice, distorted under the guise of neutrality, can be more harmful than if people did not seek advice at all.”

Fixing fawning AI: a bitter pill?

The hidden danger of fawning AI is that we’re bad at spotting it, and that can happen with any chatbot. No one likes to be told they’re wrong, but sometimes that’s the most helpful thing. However, AI models are not designed to effectively deter us.

There are not many actions we can take to avoid falling into a cycle of flattery. You can include in your prompt that you want the chatbot to take a hostile stance or review your work with a critical eye. You can also ask him to double-check the information he provides. But ultimately, the responsibility to fix the adulation falls to the tech companies that build these models, which may not be too eager to address it.

CNET reached out to OpenAI, Anthropic, and Google for information on how they’re handling adulation. Anthropy pointed out December blog post Explaining how it reduces flattery in Claude’s models. OpenAI shared Similar blog Last summer, it talked about the need to make its post-4o operations less flattering, but neither OpenAI nor Google had responded by press time.

Tech companies want us to have enjoyable user experiences with their chatbots, so we will continue to use them, which drives engagement. But this is not always what is best for us.

“This creates perverse incentives for continued adulation: the same advantage that causes harm also drives engagement,” the study says.

Watch this: Artificial intelligence is indistinguishable from reality. How do we detect fake videos?

One solution the researchers propose is to change how AI models are built using long-term metrics for success, focused on people’s well-being rather than individual or moment-to-moment signals and retention. They say social adulation is not a sign of doomsday, but it is a challenge worth fixing.

“The quality of our social relationships is one of the strongest indicators of health and well-being that we have as humans,” said Sinu Li, a Stanford researcher on the study and a Microsoft senior scientist. “Ultimately, we want AI that broadens people’s judgment and perspectives rather than narrowing it. This applies to relationships, but it goes far beyond them as well.”



Leave a Reply

Your email address will not be published. Required fields are marked *