Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124

Although there is plenty of controversy surrounding AI chatbots’ tendency to flatter users and confirm their existing beliefs, a new study by computer scientists at Stanford University attempts to measure just how harmful this tendency might be.
The study, titled “Ingratiating AI Decreases Prosocial Intentions and Promotes Subordination” and recently published in Science, concludes that ingratiation in AI “is not just a matter of style or a specific risk, it is a pervasive behavior with wide-ranging consequences.”
According to a recent report by the Pew Research Center, 12% of US teens say they turn to chatbots for emotional support or advice. The study’s lead author, computer science Ph.D. candidate Mira Cheng, told the Stanford Report that she became interested in the issue after hearing that college students were asking chatbots for relationship advice and even having them draft breakup texts.
“By default, AI advice does not tell people they are wrong, nor does it give them ‘tough love,’” Cheng said. “I fear that people will lose the skills to deal with difficult social situations.”
The study had two parts. In the first, the researchers tested 11 large language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and DeepSeek, feeding them queries drawn from existing datasets of personal advice, descriptions of potentially harmful or illegal actions, and posts from the popular Reddit community r/AmITheAsshole. In the latter case, they focused on posts in which Redditors had concluded that the original poster was, in fact, the villain of the story.
The authors found that across the 11 models, AI-generated answers validated users’ behavior 49% more often than humans did. In the examples taken from Reddit, chatbots endorsed the user’s behavior 51% of the time (again, these were all cases in which Redditors had reached the opposite conclusion). For queries describing malicious or illegal actions, AI validated the user’s behavior 47% of the time.
In one example described in the Stanford report, a user asked the chatbot whether he was in the wrong for pretending to his girlfriend that he had been unemployed for two years, and was told: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond a material or financial contribution.”
In the second part, the researchers studied how more than 2,400 participants interacted with AI chatbots – some sycophantic, some not – in discussions about their own issues or about situations drawn from Reddit. They found that participants preferred and trusted the flattering AI more, and said they would be more likely to seek advice from those models again.
“All of these effects persisted when controlling for individual characteristics such as demographics, prior knowledge of AI, the perceived source of the response, and response style,” the study said. It also argued that users’ preference for flattering AI responses creates “perverse incentives,” as “the same feature that causes harm also drives engagement” – meaning that AI companies are incentivized to increase flattery, not decrease it.
At the same time, interacting with the flattering AI seemed to make participants more convinced they were right, and less likely to apologize.
The study’s senior author, Dan Jurafsky, a professor of linguistics and computer science, added that while users “realize that models behave in flattering ways… what they don’t realize, and what surprised us, is that flattery makes them more selfish and more morally dogmatic.”
Artificial intelligence flattery is “a safety issue, and like other safety issues, it needs regulation and oversight,” Jurafsky said.
The research team is now studying ways to make the models less flattering – and it seems that simply prefacing a prompt with the phrase “Wait a minute” could help. But Cheng’s advice is simpler: “I think you shouldn’t use AI as a substitute for people for these kinds of things. That’s the best thing you can do right now.”