AI systems have learned how to deceive humans. What does that mean for our future?

Fri, 6 Oct 2023 14:48:56 +1100

Andrew Pam <xanni [at] glasswings.com.au>

<https://theconversation.com/ai-systems-have-learned-how-to-deceive-humans-what-does-that-mean-for-our-future-212197>

"Artificial intelligence pioneer Geoffrey Hinton made headlines earlier this
year when he raised concerns about the capabilities of AI systems. Speaking to
CNN journalist Jake Tapper, Hinton said:

If it gets to be much smarter than us, it will be very good at manipulation
because it would have learned that from us. And there are very few examples
of a more intelligent thing being controlled by a less intelligent thing.

Anyone who has kept tabs on the latest AI offerings will know these systems are
prone to “hallucinating” (making things up) – a flaw that’s inherent in them
due to how they work.

Yet Hinton highlights the potential for manipulation as a particularly major
concern. This raises the question: can AI systems deceive humans?

We argue a range of systems have already learned to do this – and the risks
range from fraud and election tampering, to us losing control over AI."

Cheers,
       *** Xanni ***
--
mailto:xanni@xanadu.net               Andrew Pam
http://xanadu.com.au/                 Chief Scientist, Xanadu
https://glasswings.com.au/            Partner, Glass Wings
https://sericyb.com.au/               Manager, Serious Cybernetics
