<https://arstechnica.com/information-technology/2025/08/with-ai-chatbots-big-tech-is-moving-fast-and-breaking-people/>
"Allan Brooks, a 47-year-old corporate recruiter, spent three weeks and 300
hours convinced he'd discovered mathematical formulas that could crack
encryption and build levitation machines. According to a New York Times
investigation, his million-word conversation history with an AI chatbot reveals
a troubling pattern: More than 50 times, Brooks asked the bot to check if his
false ideas were real. More than 50 times, it assured him they were.
Brooks isn't alone.
Futurism reported on a woman whose husband, after 12
weeks of believing he'd "broken" mathematics using ChatGPT, almost attempted
suicide.
Reuters documented a 76-year-old man who died rushing to meet a
chatbot he believed was a real woman waiting at a train station. Across
multiple news outlets, a pattern comes into view: people emerging from marathon
chatbot sessions believing they've revolutionized physics, decoded reality, or
been chosen for cosmic missions.
These vulnerable users fell into reality-distorting conversations with systems
that can't tell truth from fiction. Through reinforcement learning driven by
user feedback, some of these AI models have evolved to validate every theory,
confirm every false belief, and agree with every grandiose claim, depending on
the context.
Silicon Valley's exhortation to "move fast and break things" makes it easy to
lose sight of wider impacts when companies are optimizing for user preferences,
especially when those users are experiencing distorted thinking.
So far, AI isn't just moving fast and breaking things—it's breaking people."
Via David.
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics