Dr. Seena Barry, a practicing surgeon and healthcare AI lead at Data Corporation, has seen first-hand how ChatGPT can mislead patients with incorrect medical advice.
“I had a patient visit me recently, and when I recommended a medication, they had a printed dialogue from ChatGPT saying this medication has a 45% chance of causing a pulmonary embolism,” Dr. Barry told TechCrunch.
When Dr. Barry investigated further, he found that the statistic came from research on the drug's effect in a specialized subgroup of people with tuberculosis, which did not apply to his patient.
However, when OpenAI announced its dedicated ChatGPT Health chatbot last week, Dr. Barry felt more excited than concerned.
ChatGPT Health, which will be rolled out in the coming weeks, allows users to talk to a chatbot about their health in a more private environment, where their messages will not be used as training data for the underlying AI model.
“I think it’s great,” Dr. Barry said. “It’s something that’s already happening, so formalizing it to protect patient information and putting some safeguards around it (…) will make its use more powerful for patients.”
Users can get more personalized guidance from ChatGPT Health by uploading their medical records and syncing them with apps like Apple Health and MyFitnessPal. For the security-minded, this raises immediate red flags.
“Suddenly, medical data is being transferred from HIPAA-compliant organizations to non-HIPAA-compliant vendors,” Itai Schwartz, co-founder of data loss prevention firm MIND, told TechCrunch. “So I’m curious to see how organizations will handle this.”
But in the view of some industry professionals, the cat is already out of the bag. Now, instead of searching for cold symptoms on Google, people are turning to AI-powered chatbots: 230 million people already talk to ChatGPT about their health every week.
“This was one of the biggest use cases for ChatGPT,” Andrew Bracken, a partner at Gradient who invests in health technology, told TechCrunch. “So it makes a lot of sense that they would want to create a more private, secure, and optimized version of ChatGPT for these healthcare issues.”
AI-powered chatbots continue to struggle with hallucinations, a particularly sensitive problem in health care. According to Vectara’s factual consistency benchmark, OpenAI’s GPT-5 is more susceptible to hallucinations than many Google and Anthropic models. But AI companies see the potential to correct shortcomings in health care (Anthropic also announced a health product this week).
For Dr. Nigam Shah, professor of medicine at Stanford University and chief data scientist at Stanford Healthcare, the inability of American patients to access care is more pressing than the threat of ChatGPT distributing bad advice.
“Right now, you go into any health system and you want to see your primary care doctor — and the wait time is going to be three to six months,” Dr. Shah said. “If your choice was to wait six months to find a real doctor, or to talk to someone who is not a doctor but can do some things for you, which would you choose?”
Dr. Shah believes that the most obvious path to introducing AI into healthcare systems comes from the provider side, not the patient side.
Medical journals have documented that administrative tasks can take up about half of a primary care doctor’s time, reducing the number of patients they can see on a given day. If this type of work could be automated, doctors would be able to see more patients, perhaps reducing the need for people to use tools like ChatGPT Health without input from a real doctor.
At Stanford, Dr. Shah leads a team developing ChatEHR, software integrated into the electronic health record (EHR) system that allows doctors to interact with a patient’s medical records in a simpler, more efficient way.
“Making the electronic medical record more user-friendly means that doctors can spend less time searching every nook and cranny of it to get the information they need,” Dr. Sneha Jain, one of ChatEHR’s early testers, said in a Stanford Medicine statement. “ChatEHR can help them get this information upfront so they can spend time on what matters — talking to patients and finding out what’s going on.”
Anthropic is also working on AI products for doctors and insurance companies, rather than just its public-facing chatbot Claude. The company this week announced Claude Healthcare, explaining how it can reduce time spent on tedious administrative tasks, such as submitting prior authorization requests to insurance providers.
“Some of you are seeing hundreds and thousands of these prior authorization cases weekly,” Mike Krieger, Anthropic’s chief product officer, said in a recent presentation at the JPMorgan Healthcare conference. “So imagine cutting twenty or thirty minutes off of each one — it’s a huge time saver.”
As AI and medicine become increasingly intertwined, there is an inevitable tension between the two worlds – a doctor’s primary motivation is to help their patients, while technology companies are ultimately accountable to their shareholders, even if their intentions are noble.
“I think that tension is important,” Dr. Barry said. “Patients depend on us to be skeptical and conservative in order to protect them.”