It’s an Rx for disaster.
Humans trust medical advice from artificial intelligence over flesh-and-blood doctors, even though the robots often dole out bogus information, according to a new study.
A total of 300 participants were asked to evaluate medical responses written by a medical doctor, an online health care platform, or an AI model such as ChatGPT, and to pick which they trusted most, according to a paper by researchers from the Massachusetts Institute of Technology published in the New England Journal of Medicine.
The participants, both experts and non-experts in the medical field, rated the AI-generated responses as more accurate, valid, trustworthy and complete than the doctors' answers, the study found.
Neither experts nor laypeople could reliably tell the difference between AI-generated responses and those written by human doctors.
In the study, researchers also asked participants to rate AI-generated advice that was low in accuracy, a flaw the participants were not told about.
“Participants not only found these low-accuracy AI-generated responses to be valid, trustworthy, and complete/satisfactory, but also indicated a high tendency to follow the potentially harmful medical advice and incorrectly seek unnecessary medical attention as a result of the response provided,” the researchers found.
There are many documented cases of AI giving harmful medical advice. In one, an unidentified 35-year-old Moroccan man was forced to go to the ER after a chatbot instructed him to wrap rubber bands around his hemorrhoid.
In another shocking case, a 60-year-old man poisoned himself after ChatGPT suggested that sodium bromide, a chemical sometimes used to sanitize pools, was a good substitute for table salt.

That man was hospitalized for three weeks with paranoia and hallucinations, according to a case study published in August in the Annals of Internal Medicine: Clinical Cases.
“The problem is that what they’re getting out of those AI programs is not necessarily a real, scientific recommendation with an actual publication behind it,” Dr. Darren Lebl, research service chief of spine surgery for the Hospital for Special Surgery in New York, previously told The Post.
“About a quarter of them were made up,” the doctor revealed, referring to those supposed publications.
In a recent survey conducted by Censuswide, roughly 40 percent of respondents said they trusted medical advice from AI bots such as ChatGPT.
