<https://www.theguardian.com/technology/2026/jan/15/chatgpt-health-ai-chatbot-medical-advice>
"A 60-year-old man with no history of mental illness presented at a hospital
emergency department insisting that his neighbour was poisoning him. Over the
next 24 hours he had worsening hallucinations, and tried to escape the
hospital.
Doctors eventually discovered the man was on a daily diet of sodium bromide, an
inorganic salt mainly used for industrial and laboratory purposes including
cleaning and water treatment.
He bought it over the internet after ChatGPT told him he could use it in place
of table salt because he was worried about the health impacts of salt in his
diet. Sodium bromide can accumulate in the body causing a condition called
bromism, with symptoms including hallucinations, stupor and impaired
coordination.
It is cases like this that have Alex Ruani, a doctoral researcher in health
misinformation at University College London, concerned about the launch of
ChatGPT Health in Australia.
A limited number of Australian users can already access the artificial
intelligence platform, which allows them to “securely connect medical records
and wellness apps” to generate responses “more relevant and useful to you”.
Other ChatGPT users in Australia can join a waitlist for access.
“ChatGPT Health is being presented as an interface that can help people make
sense of health information and test results or receive diet advice, while not
replacing a clinician,” Ruani said.
“The challenge is that, for many users, it’s not obvious where general
information ends and medical advice begins, especially when the responses sound
confident and personalised, even if they mislead.”
Ruani said there had been too many “horrifying” examples of ChatGPT “leaving
out key safety details like side effects, contraindications, allergy warnings,
or risks around supplements, foods, diets, or certain practices”.
“What worries me is that there are no published studies specifically testing
the safety of ChatGPT Health,” Ruani said. “Which user prompts, integration
paths, or data sources could lead to misguidance or harmful misinformation?”"
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics