<https://www.techdirt.com/2025/04/29/the-hallucinating-chatgpt-presidency/>
"We generally understand how LLM hallucinations work. An AI model tries to
generate what seems like a plausible response to whatever you ask it, drawing
on its training data to construct something that sounds right. The actual
truth of the response is, at best, a secondary consideration. It does not
involve facts. It does not involve thinking at all. It certainly doesn’t
involve comprehension or nuance. It’s just about generating based on prompts
and system setup.
This tendency toward hallucination has led to plenty of entertaining disasters,
including lawyers submitting nonexistent case citations and researchers quoting
imaginary sources. Sometimes these failures are hilariously embarrassing. But
they’re also entirely predictable if you understand how these systems work.
The key thing is this: These models are designed to give you what you want, not
what’s true. They optimize for plausibility rather than accuracy, delivering
confident-sounding answers because that’s what humans tend to find satisfying.
It’s a bit like having a very articulate friend who’s willing to hold forth on
any topic, regardless of whether they actually know anything about it.
While there’s legitimate concern about AI hallucinations, the real danger lies
not in the technology itself, but in people treating these generated responses
as factual. That danger is avoided when people recognize that these tools are
untrustworthy and should not be relied on without further investigation or
verification.
But over the last few months, it has occurred to me that, for all the hype
about generative AI systems “hallucinating,” we pay much less attention to
the fact that the current President does the same thing, nearly every day.
The more you look at the way Donald Trump spews utter nonsense answers to
questions, the more you begin to recognize a clear pattern — he answers
questions in a manner quite similar to early versions of ChatGPT. The facts
don’t matter, the language choices are a mess, but they are all designed to
present a plausible-sounding answer to the question, based on no actual
knowledge, nor any concern for whether or not the underlying facts are
accurate.
This pattern becomes impossible to unsee once you start looking for it. In his
recent Time Magazine interview, Trump demonstrates exactly how this works.
The process is remarkably consistent:
1. A journalist asks a specific question about policy or events
2. Trump, clearly unfamiliar with the actual details, activates his response
generator
3. Out comes a stream of confident-sounding words that maintain just enough
semantic connection to the question to seem like an answer
4. The response optimizes for what Trump thinks his audience wants to hear,
rather than for accuracy or truth"
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics