Unsurprisingly, providing your healthcare information to a chatbot is a terrible idea


Every week, more than 230 million people ask ChatGPT for health and wellness tips, according to OpenAI. The company says many see the chatbot as an “ally” that can help them navigate the insurance maze, file paperwork, and become better self-advocates. In return, it hopes you’ll trust its chatbot with details about your diagnoses, medications, test results, and other private medical information. But although talking to a chatbot may feel like a doctor’s office, it’s not. Technology companies are not bound by the same obligations as medical providers, and experts tell The Verge it would be wise to carefully consider whether you want to hand over your records.

Health and wellness are quickly emerging as a key battleground for AI labs and a key test of how willing users are to welcome these systems into their lives. This month, two of the industry’s biggest players made a public push into medicine. OpenAI released ChatGPT Health, a dedicated tab within ChatGPT designed for users to ask health-related questions in what it says is a more secure and personalized environment. Anthropic launched Claude for Healthcare, a “HIPAA-ready” product that it says can be used by hospitals, health providers, and consumers. (Notably absent is Google, whose Gemini chatbot is one of the most widely used AI tools in the world, although the company did announce an update to its MedGemma medical AI model for developers.)

OpenAI actively encourages users to share sensitive information such as medical records, lab results, and health and wellness data from apps like Apple Health, Peloton, Weight Watchers, and MyFitnessPal with ChatGPT Health in exchange for deeper insights. It explicitly states that users’ health data will remain confidential and will not be used to train AI models, and that steps have been taken to keep the data secure and private. OpenAI says ChatGPT Health conversations will also take place in a separate part of the app, with users able to view or delete health “memories” at any time.

Complicating OpenAI’s assurances that it will keep users’ sensitive data secure is the fact that the company launched a near-identically named product with more stringent security protocols around the same time as ChatGPT Health. The tool, called ChatGPT for Healthcare, is part of a broader range of products sold to companies, hospitals, and doctors who work directly with patients. OpenAI’s suggested uses include streamlining administrative work, such as drafting clinical letters and discharge summaries, and helping doctors gather the latest medical evidence to improve patient care. As with other enterprise-level products the company sells, it comes with greater protections than those offered to general consumers, especially free users, and OpenAI says the products are designed to comply with the privacy obligations required of the medical sector. Given the similar names and launch dates — ChatGPT for Healthcare was announced the day after ChatGPT Health — it’s very easy to confuse the two and assume that the consumer-facing product has the same level of protection as the more clinically oriented one. Many of the people I spoke to while reporting this story did just that.

Even if you trust a company’s promise to protect your data… it may change its mind.

Whatever safeguards are in place, however, they are far from watertight. Users of tools like ChatGPT Health often have little protection against breaches or unauthorized use beyond what is stated in the terms of use and privacy policies, experts tell The Verge. Because most states have not enacted comprehensive privacy laws — and there is no comprehensive federal privacy law — data protection for AI tools like ChatGPT Health “depends largely on what companies promise in their privacy policies and terms of use,” says Sarah Gehrke, a law professor at the University of Illinois Urbana-Champaign.

Even if you trust the company’s pledge to protect your data — OpenAI says it encrypts health data by default — it may change its mind. “Although ChatGPT states in its current terms of use that it will keep this data confidential and will not use it to train its models, you are not protected by the law, and it is allowed to change its terms of use over time,” explains Hanna van Kolfschoten, a digital health law researcher at the University of Basel in Switzerland. “You have to trust that ChatGPT doesn’t do that.” Carmel Shachar, an associate professor of law at Harvard Law School, agrees: “There are very limited protections. A lot of it is just their word, and they can always go back and change their privacy practices.”

Assurances that the product complies with data protection laws governing the healthcare sector, such as the Health Insurance Portability and Accountability Act, or HIPAA, shouldn’t provide much comfort either, says Shachar. While that’s a welcome gesture, she explains, there’s not much at stake if a company that voluntarily complies fails to do so. Voluntary compliance is not the same as a binding obligation. “The value of HIPAA is that if you get it wrong, there is enforcement.”

There’s a reason why medicine is a highly regulated field

It’s about more than just privacy. There’s a reason medicine is a highly regulated field: mistakes can be serious, even fatal. There is no shortage of examples of chatbots confidently spreading false or misleading health information, such as when a man developed a rare condition after he asked ChatGPT about removing salt from his diet and the chatbot suggested he replace it with sodium bromide, a compound historically used as a sedative. Or when Google’s AI Overviews incorrectly advised people with pancreatic cancer to avoid high-fat foods, which is exactly the opposite of what they should do.

To head off this issue, OpenAI explicitly states that its consumer-facing tool is designed for use in close collaboration with clinicians and is not intended for diagnosis or treatment. Tools designed for diagnosis and treatment are classified as medical devices and are subject to more stringent regulations, such as clinical trials to prove their effectiveness and safety monitoring once they are deployed. Although OpenAI is well aware that one of the main use cases for ChatGPT is supporting users’ health and well-being — remember the 230 million people asking for advice every week — the company’s assertion that the tool is not intended to be a medical device carries significant weight with regulators, Gehrke explains. “The intended use stated by the manufacturer is a key factor in the classification of medical devices,” she says, meaning companies that say their tools are not for medical use will largely escape oversight even if the products end up being used for medical purposes. It highlights the regulatory challenges presented by technology like chatbots.

For now, at least, this disclaimer keeps ChatGPT Health out of the purview of regulators like the FDA, but van Kolfschoten says it’s perfectly reasonable to question whether tools like this should really be classified as medical devices and regulated as such. She explains that it’s important to look at how they are actually used, not just what the company says. When announcing the product, OpenAI suggested that people could use ChatGPT Health to interpret lab results, track health behavior, or help them think through treatment decisions. If a product does that, one could reasonably argue it falls under the US definition of a medical device, she says, suggesting that Europe’s stronger regulatory framework may be why the product isn’t yet available in the region.

“When the system feels personal and has an aura of authority, medical disclaimers will not necessarily challenge people’s trust in the system.”

Despite telling users not to rely on ChatGPT for diagnosis or treatment, OpenAI has put a great deal of effort into proving that ChatGPT is a rather capable doctor and encouraging users to turn to it for health queries. The company highlighted health as a key use case when GPT-5 launched, and CEO Sam Altman even brought a cancer patient and her husband on stage to discuss how the tool helped her understand her diagnosis. The company says it evaluates ChatGPT’s medical prowess against HealthBench, a benchmark it developed with more than 260 doctors across dozens of specialties that “tests how well AI models perform in real-life health scenarios,” though critics have noted it is not very transparent. Other studies – often small, limited, or run by the company itself – also point to ChatGPT’s medical potential, showing that in some cases it can pass medical licensing exams, communicate better with patients, outperform doctors at diagnosing disease, and help doctors make fewer errors when used as a supporting tool.

OpenAI’s efforts to present ChatGPT Health as a trusted source of health information could also undermine any disclaimers telling users not to rely on it for medical purposes, van Kolfschoten says. “When the system feels personal and has an aura of authority, medical disclaimers will not necessarily challenge people’s trust in the system.”

Companies like OpenAI and Anthropic hope to earn that trust as they vie for the lead in what they see as the next big AI market. The numbers showing how many people already use AI chatbots for health purposes suggest they may be onto something, and given the stark health disparities and the difficulties many face in accessing even basic care, that might be a good thing. At least, it could be, if the trust is well placed. We trust healthcare providers with our private information because the profession has earned that trust. It’s not yet clear whether an industry with a reputation for moving fast and breaking things has done the same.
