<https://arstechnica.com/ai/2024/08/llms-have-a-strong-bias-against-use-of-african-american-english/>
'As far back as 2016, work on AI-based chatbots revealed that they have a
disturbing tendency to reflect some of the worst biases of the society that
trained them. But as large language models have become ever larger and
subjected to more sophisticated training, a lot of that problematic behavior
has been ironed out. For example, I asked the current iteration of ChatGPT for
five words it associated with African Americans, and it responded with things
like "resilience" and "creativity."
But a lot of research has turned up examples where implicit biases can persist
in people long after outward behavior has changed. So some researchers decided
to test whether the same might be true of LLMs. And was it ever.
By interacting with a series of LLMs using examples of the African American
English sociolect, they found that the AIs had an extremely negative view of
its speakers—something that wasn't true of speakers of another American English
variant. And that bias bled over into decisions the LLMs were asked to make
about those who use African American English.'
Via Muse.
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics