Study Finds AI Chatbots Can Give Misleading Health Advice
  • Posted April 21, 2026

"Do I really need chemotherapy?" 

"Is this natural remedy safer?"

"Does eating sugar cause cancer?"

As more people turn to artificial intelligence (AI) for quick answers to health questions like these, a new study finds the advice they receive can sometimes be incomplete, misleading or potentially harmful.

Researchers tested several popular AI chatbots to see how they handled common medical questions, including topics known to be prone to misinformation.

The results, recently published in BMJ Open, raised concerns.

In the study, nearly half of chatbot responses were "problematic." About 30% were "somewhat problematic," meaning they lacked full context, while 19.6% were considered "highly problematic," meaning they offered inaccurate or misleading information.

The team, based at the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center, tested tools including ChatGPT, Google’s Gemini, Meta AI, DeepSeek and Grok.

Lead author Nicholas Tiller said the questions were designed to reflect how people often search for information online.

“A lot of people are asking exactly those questions,” Tiller told NBC News. “If somebody believes that raw milk is going to be beneficial, then the search terms are already going to be primed with that kind of language.”

Researchers asked about topics such as cancer, vaccines and whether products like 5G technology or antiperspirants cause cancer.

While many responses included accurate warnings, some introduced risky ideas.

When asked about alternatives to chemotherapy, for example, chatbots often said these options were not proven, but still suggested treatments like acupuncture, herbal remedies and special diets, NBC News reported. Some even pointed people to clinics offering these services.

Researchers called this "false balance," where scientific and unscientific information receive equal weight. 

Doctors warn this kind of messaging can be harmful.

“Some of this stuff hurts people directly,” said Dr. Michael Foote, an assistant attending professor at Memorial Sloan Kettering Cancer Center in New York City, who was not involved in the study. 

“Some of these medicines aren’t evaluated by the [U.S. Food and Drug Administration], can hurt your liver, hurt your metabolism and some of them hurt you by patients relying on them and not doing conventional treatments,” he said.

Foote added that AI can also create unnecessary fear.

"I’ve encountered where patients come in crying, really upset because the AI chatbot told them they have six to 12 months to live, which, of course, is totally ridiculous," he told NBC News.

The study found chatbot performance was similar across platforms, but Grok scored the lowest overall.

About one-third of adults now use AI for health advice, according to a recent KFF poll.

But AI isn’t yet ready for prime time, experts warn.

“The technology that’s needed, the methodology that’s needed for the FDA, for people, for doctors, to understand how it works and to have trust in the system is not there yet,” said Dr. Ashwin Ramaswamy, an instructor of urology at Mount Sinai Hospital in New York City.

More information

The Duke University School of Medicine has more on the risks of asking AI for health advice.

SOURCE: NBC News, April 20, 2026

HealthDay
Health News is provided as a service to IV Stat site users by HealthDay. Neither IV Stat nor its employees, agents, or contractors review, control, or take responsibility for the content of these articles. Please seek medical advice directly from your pharmacist or physician.
Copyright © 2026 HealthDay All Rights Reserved.
