https://www.schneier.com/blog/archives/2023/04/llms-and-phishing.html
"Here’s an experiment being run by undergraduate computer science students
everywhere: Ask ChatGPT to generate phishing emails, and test whether these are
better at persuading victims to respond or click on the link than the usual
spam. It’s an interesting experiment, and the results are likely to vary wildly
based on the details of the experiment.
But while it’s an easy experiment to run, it misses the real risk of large
language models (LLMs) writing scam emails. Today’s human-run scams aren’t
limited by the number of people who respond to the initial email contact.
They’re limited by the labor-intensive process of persuading those people to
send the scammer money. LLMs are about to change that. A decade ago, one type
of spam email had become a punchline on every late-night show: “I am the son of
the late king of Nigeria in need of your assistance….” Nearly everyone had
gotten one or a thousand of those emails, to the point that it seemed everyone
must have known they were scams.
So why were scammers still sending such obviously dubious emails? In 2012,
researcher Cormac Herley offered an answer: It weeded out all but the most
gullible. A smart scammer doesn’t want to waste their time with people who
reply and then realize it’s a scam when asked to wire money. By using an
obvious scam email, the scammer can focus on the most potentially profitable
people. It takes time and effort to engage in the back-and-forth communications
that nudge marks, step by step, from interlocutor to trusted acquaintance to
pauper.
Long-running financial scams are now known as pig butchering, growing the
potential mark up until their ultimate and sudden demise. Such scams, which
require gaining trust and infiltrating a target’s personal finances, take weeks
or even months of personal time and repeated interactions. It's a high-stakes
and low-probability game that the scammer is playing.
Here is where LLMs will make a difference. Much has been written about the
unreliability of OpenAI’s GPT models and those like them: They “hallucinate”
frequently, making up things about the world and confidently spouting nonsense.
For entertainment, this is fine, but for most practical uses it’s a problem. It
is, however, not a bug but a feature when it comes to scams: LLMs’ ability to
confidently roll with the punches, no matter what a user throws at them, will
prove useful to scammers as they navigate hostile, bemused, and gullible scam
targets by the billions. AI chatbot scams can ensnare more people, because the
pool of victims who will fall for a more subtle and flexible scammer—one that
has been trained on everything ever written online—is much larger than the pool
of those who believe the king of Nigeria wants to give them a billion dollars."
Via Linux Weekly News:
https://lwn.net/Articles/928479/
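
Herley's filtering argument lends itself to a back-of-the-envelope
calculation. Here's a minimal Python sketch of it; all response rates,
conversion rates, and dollar figures below are invented purely for
illustration, not taken from the article or from real scam data. When every
responder costs hours of human follow-up, the obvious email wins on profit
per hour of labour; once an LLM handles the follow-up for nearly nothing,
the subtle email's much larger victim pool wins instead.

    # Toy model of Herley's filtering argument. All numbers are invented
    # for illustration only.

    def profit_per_hour(emails_sent, response_rate, payer_fraction,
                        payout, hours_per_responder):
        """Expected profit per hour of human follow-up labour."""
        responders = emails_sent * response_rate
        revenue = responders * payer_fraction * payout
        labour = responders * hours_per_responder
        return revenue / labour

    N = 1_000_000  # emails sent in one campaign (arbitrary)

    # Obvious "Nigerian prince" email: almost nobody replies, but half of
    # those who do are gullible enough to eventually wire money.
    obvious = profit_per_hour(N, response_rate=0.0001, payer_fraction=0.5,
                              payout=1_000, hours_per_responder=10)

    # Subtle, well-written email: 100x more replies, but only 1% of the
    # responders stay on the hook once money is requested.
    subtle = profit_per_hour(N, response_rate=0.01, payer_fraction=0.01,
                             payout=1_000, hours_per_responder=10)

    print(f"obvious email: ${obvious:,.2f} per hour of follow-up")  # $50.00
    print(f"subtle email:  ${subtle:,.2f} per hour of follow-up")   # $1.00

Per hour of scarce human labour the obvious email wins 50 to 1 in this toy
model, even though the subtle email brings in twice the total revenue
($100,000 vs $50,000 per campaign). Replace the human with an LLM and
hours_per_responder stops mattering: total revenue per campaign becomes the
objective, and the subtle email's far larger victim pool wins outright,
which is exactly the shift the article warns about.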
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics