"With the rise of ChatGPT over the past few months, the inevitable moral panics
have begun. We’ve seen plenty of people freaking out about how ChatGPT will be
used by students to do their homework, how it will replace certain jobs, and so
on. Most of these fears are totally overblown. While some cooler heads have
prevailed, arguing (correctly) that schools need to learn to teach with
ChatGPT rather than against it, the screaming about ChatGPT in schools is
likely to continue.
To help cut off some of that, OpenAI (the makers of ChatGPT) have announced a
classification tool that seeks to tell you whether something was written by an
AI or a human.
We’ve trained a classifier to distinguish between text written by a human
and text written by AIs from a variety of providers. While it is impossible
to reliably detect all AI-written text, we believe good classifiers can
inform mitigations for false claims that AI-generated text was written by a
human: for example, running automated misinformation campaigns, using AI
tools for academic dishonesty, and positioning an AI chatbot as a human.
And, to some extent, that’s great. Using the tech to deal with the problems
created by that tech seems like a good start."
*** Xanni ***
Chief Scientist, Xanadu
Partner, Glass Wings
Manager, Serious Cybernetics