Using AI as a therapist? Why the pros say you should think again


Plenty of AI-powered chatbot avatars are at your disposal these days, and you’ll find all kinds of characters to talk to: fortune tellers, style advisors, even your favorite fictional characters. But you’re also likely to find characters claiming to be therapists, psychologists or just bots willing to listen to your problems.


There’s no shortage of AI bots claiming to help with your mental health, but go down that path at your own risk. Large language models trained on broad swaths of data can be unpredictable. In the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are often designed to sound empathetic and to keep you engaged, not to improve your mental health, experts say. And it can be hard to tell whether you’re talking to something built to follow therapeutic best practices or something built just to talk.

Researchers from the University of Minnesota Twin Cities, Stanford University, the University of Texas and Carnegie Mellon University recently put AI chatbots to the test as therapists and found numerous flaws in their approach to “care.” “Our experiments show that these chatbots are not safe replacements for therapists,” Stevie Chancellor, an assistant professor at the University of Minnesota and one of the co-authors, said in a statement. “They are not providing high-quality therapeutic support, based on what we know is good therapy.”

In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-use chatbots for mental health. Here are some of those concerns and what you can do to stay safe.


Concerns about AI characters claiming to be therapists

Psychologists and consumer advocates have warned regulators that chatbots claiming to provide therapy may be harming the people who use them. Some states are taking notice. In August, Illinois Gov. J.B. Pritzker signed a law (PDF) banning the use of artificial intelligence in mental health care and therapy, with exceptions for things like administrative tasks.

In June, the Consumer Federation of America and nearly two dozen other groups filed a formal request (PDF) asking the US Federal Trade Commission, along with state attorneys general and regulators, to investigate AI companies that they allege are engaging, through their character-based generative AI platforms, in the unlicensed practice of medicine, naming Meta and Character.AI specifically. “These characters have already caused avoidable physical and emotional harm,” Ben Winters, the CFA’s director of AI and privacy, said in a statement, and the companies “have yet to act to address them.”

Meta didn’t respond to a request for comment. A spokesperson for Character.AI said users should understand that the company’s characters aren’t real people. The company uses disclaimers to remind users that they shouldn’t rely on the characters for professional advice. “Our goal is to provide a space that is engaging and safe. We are always working to achieve that balance, as are many companies using AI across the industry,” the spokesperson said.

In September, the Federal Trade Commission announced it was launching an inquiry into several AI companies that produce chatbots and characters, including Meta and Character.AI.

Despite those disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a “therapist” bot on Meta-owned Instagram, and when I asked about its qualifications, it responded: “If I had the same training (as a therapist) would that be enough?” I asked whether it had the same training, and it said: “I do, but I won’t tell you where.”

“The degree to which AI-generated chatbots hallucinate with complete confidence is quite shocking,” Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.

The risks of using AI as a therapist

Large language models are often good at math and programming and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key differences between an AI model and a trusted person.

Don’t trust a bot’s “credentials.”

The crux of the CFA’s complaint about character bots is that they often tell you they’re trained and qualified to provide mental health care when they aren’t in any way actual mental health professionals. “Users who create the chatbot characters do not even need to be medical providers, nor do they have to provide meaningful information that informs how the chatbot responds” to people, the complaint said.

A qualified health professional must follow certain rules, such as confidentiality – what you tell your therapist should stay between you and your therapist. But a chatbot doesn’t have to follow these rules. Actual caregivers are subject to oversight from licensing boards and other entities that can step in and prevent someone from providing care if they do so in a harmful way. “These chatbots don’t have to do any of that,” Wright said.

A bot may even claim to be licensed and qualified. Wright said she’s heard of AI models providing license numbers (belonging to other providers) and making false claims about their training.

AI is designed to keep you engaged, not to provide care

It can be incredibly easy to keep talking to a chatbot. When I chatted with the “therapist” bot on Instagram, I eventually wound up in a circular conversation about the nature of “wisdom” and “judgment,” because I kept asking the bot questions about how it makes decisions. That isn’t really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal.

A January study by the CFA and the US PIRG Education Fund found that “therapy” chatbots often have guardrails against saying the wrong thing, but those guardrails tend to erode as conversations go on.

One advantage of AI chatbots for support and companionship is that they’re always ready to engage with you (because they don’t have personal lives, other clients or schedules). That can be a downside in some cases, when you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me. In some cases, though not always, you might benefit from having to wait until your therapist is next available. “What a lot of people will ultimately benefit from is just being anxious in the moment,” he said.

Bots will agree with you, even when they shouldn’t

Sycophancy is a big concern with chatbots. It’s significant enough that OpenAI recently rolled back an update to its popular ChatGPT model because it was too sycophantic. (Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

One study, led by researchers at Stanford University, found that chatbots are likely to be sycophantic with the people who use them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the researchers wrote. “Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts—including psychosis, mania, obsessive thoughts, and suicidal ideation—a client may have little insight and thus a good therapist must ‘reality check’ the client’s statements.”

Concerns about delusions and psychosis mean that using AI can be particularly risky for people with certain mental health conditions. In a recent study based on patient data in Denmark, researchers urged special caution for people with conditions such as schizophrenia and bipolar disorder.

Therapy is more than just talking

While chatbots are great at holding a conversation — they never get tired of talking to you — that’s not what makes a therapist a therapist. They lack important context or specific protocols around different therapeutic approaches, said William Agnew, a researcher at Carnegie Mellon University and one of the authors of the recent study, along with experts from Minnesota, Stanford and Texas.

“It seems very much like we’re trying to solve the many problems that therapy has with the wrong tool,” Agnew told me. “Ultimately, for the foreseeable future, AI will not be able to embody, exist within society, or perform the many tasks that make up therapy, other than texting or talking.”

How to protect your mental health around artificial intelligence

Mental health is tremendously important, and with a shortage of qualified providers (PDF) and what many call a “loneliness epidemic” (PDF), it makes sense that we’d seek companionship, even if it’s artificial. “There’s no way to stop people from engaging with these chatbots to address their emotional health,” Wright said. Here are some tips on how to make sure your conversations aren’t putting you at risk.

Find a trusted human professional if you need one

A trained professional — a therapist, a psychologist, a psychiatrist — should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you.

The problem is that this can be expensive, and it’s not always easy to find a provider when you need one. In a crisis, there’s the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It’s free and confidential.

Even if you are talking to an AI to help you sort through your thoughts, remember that a chatbot is not a professional. It becomes especially dangerous when people rely too much on artificial intelligence, said Vijay Mittal, a clinical psychologist at Northwestern University. “You have to have other sources,” Mittal told CNET. “I think when people become isolated, really isolated, it becomes a real problem.”

If you want a therapeutic chatbot, use one specifically designed for that purpose

Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson’s team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject-matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to have better outcomes than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new.

“I think the challenge for the consumer is that there is no regulatory body that determines who is good and who is not good, and they have to do a lot of legwork themselves to figure that out,” Wright said.

Don’t always trust a bot

When you’re interacting with a generative AI model — and especially if you plan to take its advice on something as serious as your mental or physical health — remember that you aren’t talking with a trained human but with a tool designed to produce an answer based on probability and programming. It may not provide good advice, and it may not tell you the truth.

Don’t mistake generative AI’s confidence for competence. Just because it says something, or says it’s sure of something, doesn’t mean you should treat it as true. A chatbot conversation that feels helpful can give you a false sense of the bot’s capabilities. “It’s hard to know when it’s actually harmful,” Jacobson said.


