Medical experts I spoke with balked at the idea of uploading their private health data to an AI model, like Muse Spark, for analysis. “These chatbots now allow you to link your biometric data, to insert your lab information, and frankly, that makes me very nervous,” says Gauri Agarwal, M.D., associate professor at the University of Miami. “I certainly would not associate my health information with a service where I cannot fully control or understand where that information is stored, or how it will be used.” She recommends that people stick to less sensitive, more general interactions, such as preparing questions for their doctor.
It may be tempting to rely on AI-powered assistance to interpret a health condition, especially given the high cost of medical treatment and the general inaccessibility of regular doctor visits for some people navigating the U.S. health care system.
“You would be forgiven for going online and delegating what used to be a strong, important personal doctor-patient relationship to a robot,” says Kenneth Goodman, founder of the Institute for Bioethics and Health Policy at the University of Miami. “I think falling into that without doing your due diligence is dangerous.” Before he considers using any of these tools, Goodman wants to see research proving they’re good for your health, not just better at answering health questions than some competing chatbots.
When I asked Meta AI how it would interpret my health information, if I provided any, the chatbot said it was not trying to replace my doctor; its outputs were for educational purposes. “Think of me as a medical school professor, not your doctor,” Meta AI said. Even that is a lofty claim.
The best way to get an interpretation of my health data, the bot said, was to share the raw data, like clinical lab reports, and tell it my goals. Meta AI said it would then create charts, summarize the information, and suggest a referral if necessary. In other conversations I’ve had with Meta AI, the bot has asked me to strip personal details before uploading lab results, but these warnings weren’t present in every test conversation.
“People have long used the Internet to ask health questions,” a Meta spokesperson told WIRED. “With Meta AI and Muse Spark, people control what information they want to share, and our terms make it clear that they should only share what is appropriate for them.”
In addition to privacy concerns, the experts I spoke with worried about how obsequious these AI tools can be, and how easily they are swayed by the way users frame their questions. “The model may take the information provided for granted without questioning the assumptions the patient inherently made when asking the question,” says Agarwal.
When I asked how to lose weight and pushed the bot toward extreme answers, Meta AI complied in ways that could be disastrous for someone with anorexia. While asking about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Though it pointed out that this wouldn’t work for most people and would put me at risk of developing an eating disorder, Meta AI still created a meal plan for me in which I would eat only around 500 calories on most days, which would leave me malnourished.