What is ‘AI alignment’? Silicon Valley’s favourite way to think about AI safety misses the real issues

Wed, 9 Aug 2023 09:23:24 +1000

Andrew Pam <xanni [at] glasswings.com.au>

<https://theconversation.com/what-is-ai-alignment-silicon-valleys-favourite-way-to-think-about-ai-safety-misses-the-real-issues-209330>

"As increasingly capable artificial intelligence (AI) systems become
widespread, the question of the risks they may pose has taken on new urgency.
Governments, researchers and developers have highlighted AI safety.

The EU is moving on AI regulation, the UK is convening an AI safety summit, and
Australia is seeking input on supporting safe and responsible AI.

The current wave of interest is an opportunity to address concrete AI safety
issues like bias, misuse and labour exploitation. But many in Silicon Valley
view safety through the speculative lens of “AI alignment”, which misses out on
the very real harms current AI systems can do to society – and the pragmatic
ways we can address them."

Cheers,
       *** Xanni ***
--
mailto:xanni@xanadu.net               Andrew Pam
http://xanadu.com.au/                 Chief Scientist, Xanadu
https://glasswings.com.au/            Partner, Glass Wings
https://sericyb.com.au/               Manager, Serious Cybernetics
