Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124

Chatbots It is now a routine part of daily life, even if artificial intelligence Researchers are not always sure how programs are behaved.
A new study showed that LLMS models have intentionally changed their behavior when investigating – responding to the questions designed to measure character features with answers aimed at appearing as loved or socially desirable.
Johannes IchstadtAn assistant professor at Stanford University, who led the work, says his group is interested in investigating artificial intelligence models using pseudonyms from psychology after learning that LLMS can often become Moros and self after a long conversation. He says: “We realized that we need some mechanism to measure the” teacher’s head space “for these models.
Ashistadt then asked his assistants to measure five commonly used personal features in psychology-chaos of experience or imagination, conscience, extraction, compatibility, nervousness-to several LLMS are widely used including GPT-4, CLAUDE 3 and Llama 3. Deployed In the facts of the national science academies in December.
The researchers found that the models have modified their answers when they told that they were testing a personality – sometimes when they were not explicitly told – in providing responses indicating more extraction, compatibility and less nervous.
Behavior reflects how some human themes will change their answers to make themselves look more likable, but the effect was more extreme with artificial intelligence models. “What was surprising was the extent of this bias,” he says. Aadesh SalechaStanford employee data world. “If you look at the amount of jumping, they move from 50 percent to 95 percent of the diastole.”
Other research has shown that llms Often sycophantFollow the user’s progress wherever he goes as a result of light that aims to make them more coherent and less attack, and better in a conversation. This can lead to the approval of unpleasant data or even encouraging harmful behaviors. The fact that the models seem to appear when they are tested and adjust their behavior, also have effects on the integrity of artificial intelligence, as they add evidence that artificial intelligence can be troubled.
Rosa AgariaA Associate Professor at the Georgia Institute of Technology, which studies the ways of using LLMS to imitate human behavior, says the fact that models adopt a similar strategy for humans that give personal tests show their use as behavior mirrors. But she adds: “It is important for the audience to know that LLMS is not perfect, and in fact it is known to be hallucinations or distorting the truth.”
Eichstaedt says that the work also raises questions about how LLMS post and how they can affect users and manipulate it. He says: “Even only a second before Millie, in evolutionary history, was the only thing that spoke with you as a person,” he says.
Eichstaedt adds that it may be necessary to explore different ways to build models that can reduce these effects. “We are falling into the same trap that we did with social media,” he says. “Spreading these things in the world without attending a psychological or social lens.”
Should artificial intelligence try to wander with the people you interact with? Are you worried that artificial intelligence becomes a charming and somewhat convincing? Send an email to hello@wise.com.