<https://www.theverge.com/policy/665685/ai-therapy-meta-chatbot-surveillance-risks-trump>
"Mark Zuckerberg wants you to be understood by the machine. The Meta CEO has
recently been pitching a future where his AI tools give people something that
“knows them well,” not just as pals, but as professional help. “For people who
don’t have a person who’s a therapist,” he told Stratechery’s Ben Thompson,
“I think everyone will have an AI.”
The jury is out on whether AI systems can make good therapists, but this future
is already legible. A lot of people are anecdotally pouring their secrets out
to chatbots, sometimes in dedicated therapy apps, but often to big
general-purpose platforms like Meta AI, OpenAI’s ChatGPT, or xAI’s Grok. And
unfortunately, this is starting to seem extraordinarily dangerous — for reasons
that have little to do with what a chatbot is telling you, and everything to do
with who else is peeking in.
This might sound paranoid, and it’s still hypothetical. It’s a truism that someone
is always watching on the internet, but the worst thing that comes of it for
many people is some unwanted targeted ads. Right now in the US, though, we’re
watching the impending collision of two alarming trends. In one, tech
executives are encouraging people to reveal ever more intimate details to AI
tools, soliciting things users wouldn’t put on social media and may not even
tell their closest friends. In the other, the government is obsessed with
obtaining a nearly unprecedented level of surveillance and control over
residents’ minds: their gender identities, their possible neurodivergence,
their opinions on racism and genocide.
And it’s pursuing this war by seeking and weaponizing ever-increasing amounts
of information with little regard for legal or ethical restraints."
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics