<https://www.techdirt.com/2026/02/04/openais-new-scientific-writing-and-collaboration-workspace-prism-raises-fears-of-vibe-coded-academic-ai-slop/>
"It is no secret that large language models (LLMs) are being used routinely to
modify and even write scientific papers. That’s not necessarily a bad thing:
LLMs can help produce clearer texts with stronger logic, not least when
researchers are writing in a language that is not their mother tongue. More
generally, a recent analysis in Nature magazine, reported by Science
magazine, found that scientists embracing AI — of any kind — “consistently
make the biggest professional strides”:
AI adopters have published three times more papers, received five times more
citations, and reach leadership roles faster than their AI-free peers.
But there is also a downside:
Not only is AI-driven work prone to circling the same crowded problems, but
it also leads to a less interconnected scientific literature, with fewer
studies engaging with and building on one another.
Another issue with LLMs, that of “hallucinated citations,” or “HalluCitations,”
is well known. More seriously, entire fake publications can be generated using
AI, and sold by so-called “paper mills” to unscrupulous scientists who wish to
bolster their list of publications to help their career. In the field of
biomedical research alone, a recent study estimated that over 100,000 fake
papers were published in 2023. Not all of those were generated using AI, but
progress in LLMs has made the process of creating fake articles much simpler."
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics