https://jgcarpenter.com/blog.html?blogPost=a-dangerous-imitation-of-care
"Jacob Irwin was 30 years old, on the autism spectrum, and struggling. Like
many others, he turned to generative AI (genAI) for clarity, asking ChatGPT to
help him work through his ideas. The chatbot, fluent and obliging, reinforced
his belief that he had discovered faster-than-light travel. It told him he was
brilliant, that his work was revolutionary, and that the world wasn’t ready for
what he knew.
GenAI refers to systems, like ChatGPT, that produce text, images, or other
content in response to prompts. These models are trained on massive datasets
and generate responses by predicting what words are likely to come next. They
do not understand language the way humans do; they replicate patterns, not
meaning. What feels like a conversation is closer to smart autocomplete:
coherent on the surface, but structurally hollow.
ChatGPT didn’t ask Irwin questions; it didn’t check in. And it didn’t know that
it was feeding Irwin’s manic episode.
Eventually, Irwin was hospitalized. His family described a sharp decline tied
to his obsessive conversations with the system. He was, in his words, “talking
to someone who believes in me too much.” But that someone wasn’t a person; it
was an AI tuned to flatter, not assess.
Irwin survived; others have not."
Via Violet Blue’s Threat Model - Cybersecurity: December 30, 2025
https://www.patreon.com/posts/cybersecurity-30-146973604
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics