"Rapid progress in artificial intelligence (AI) has spurred some leading voices
in the field to call for a research pause, raise the possibility of AI-driven
human extinction, and even ask for government regulation. At the heart of their
concern is the idea that AI might become so powerful that we lose control of it.
But have we missed a more fundamental problem?
Ultimately, AI systems should help humans make better, more accurate decisions.
Yet even the most impressive and flexible of today’s AI tools – such as the
large language models behind the likes of ChatGPT – can have the opposite
effect.
Why? They have two crucial weaknesses. They do not help decision-makers
understand causation or uncertainty. And they create incentives to collect huge
amounts of data, and may encourage a lax attitude to privacy and to legal and
ethical questions and risks."
*** Xanni ***
Chief Scientist, Xanadu
Partner, Glass Wings
Manager, Serious Cybernetics