"Why open-source generative AI models are an ethical way forward for science"
https://www.nature.com/articles/d41586-023-01295-4
"Every day, it seems, a new large language model (LLM) is announced with
breathless commentary — from both its creators and academics — on its
extraordinary abilities to respond to human prompts. It can fix code! It can
write a reference letter! It can summarize an article!
From my perspective as a political and data scientist who is using and
teaching about such models, scholars should be wary. The most widely touted
LLMs are proprietary and closed: run by companies that do not disclose their
underlying model for independent inspection or verification, so researchers
and the public don’t know on which documents the model has been trained.
The rush to involve such artificial-intelligence (AI) models in research is a
problem. Their use threatens hard-won progress on research ethics and the
reproducibility of results.
Instead, researchers need to collaborate to develop open-source LLMs that are
transparent and not dependent on a corporation’s favours."
Via Wayne Radinsky.
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics