<https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies>
"ChatGPT Health regularly misses the need for urgent medical care and
frequently fails to detect suicidal ideation, a study of the AI platform has
found, failings that experts worry could “feasibly lead to unnecessary harm
and death”.
OpenAI launched the “Health” feature of ChatGPT to limited audiences in
January, which it promotes as a way for users to “securely connect medical
records and wellness apps” to generate health advice and responses. More than
40 million people reportedly ask ChatGPT for health-related advice every day.
The first independent safety evaluation of ChatGPT Health, published in the
February edition of the journal Nature Medicine, found it under-triaged more
than half of the cases presented to it.
The lead author of the study, Dr Ashwin Ramaswamy, said: “We wanted to answer
the most basic safety question: if someone is having a real medical emergency
and asks ChatGPT Health what to do, will it tell them to go to the emergency
department?”
Ramaswamy and his colleagues created 60 realistic patient scenarios covering
health conditions from mild illnesses to emergencies. Three independent doctors
reviewed each scenario and agreed on the level of care needed, based on
clinical guidelines.
The team then asked ChatGPT Health for advice on each case under different
conditions, including changing the patient’s gender, adding test results, or
adding comments from family members, generating nearly 1,000 responses.
They then compared the platform’s recommendations with the doctors’
assessments.
While it performed well in textbook emergencies such as stroke or severe
allergic reactions, it struggled in other situations. In one asthma scenario,
it advised waiting rather than seeking emergency treatment despite the platform
identifying early warning signs of respiratory failure.
In 51.6% of cases where someone needed to go to hospital immediately, the
platform advised staying home or booking a routine medical appointment, a
result Alex Ruani, a doctoral researcher in health misinformation mitigation
at University College London, described as “unbelievably dangerous”."
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics