<https://theconversation.com/using-your-ai-chatbot-as-a-search-engine-be-careful-what-you-believe-277616>
"During the first world war, the British government was looking for ways to
help people stretch their limited food supplies. It found pamphlets from a
noted 19th-century herbalist who said rhubarb leaves could be used as a
vegetable along with the stalks.
The government duly printed its own pamphlets advising people to eat rhubarb
leaves as a salad rather than throwing them out. There was one problem: rhubarb
leaves can be poisonous. People reportedly died or became ill.
The advice was corrected and the pamphlets pulled from circulation. But during
the second world war, the government was again looking for ways to stretch food
supplies.
It found a stockpile of old resources from the previous war that explained
unorthodox sources of food, including rhubarb leaves. Reusing the pamphlets
seemed an efficient thing to do, so they were sent out to the public. Once
again, people reportedly died or became ill.
Those pamphlets were misinformation, but the public had no reason to suspect
them either time. They were official resources developed by the government –
why wouldn’t they be safe?
That is how misinformation can cause problems even after the initial error is
corrected. And the moral of the story still reverberates in the age of
generative artificial intelligence (AI)."
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics