<https://www.techdirt.com/2024/06/10/judge-experiments-with-chatgpt-and-its-not-as-crazy-as-it-sounds/>
"Would you freak out if you found out a judge was asking ChatGPT a question to
help decide a case? Would you think that it was absurd and a problem? Well, one
appeals court judge felt the same way… until he started exploring the issue in
one of the most thoughtful explorations of LLMs I’ve seen (while also being one
of the most amusing concurrences I’ve seen).
I recognize that the use of generative AI tools raises a lot of controversy in
many contexts, though I think the biggest complaints stem from ridiculously bad
and poorly-thought-out uses of the technology (usually involving over-reliance
on the tech when it is not at all reliable).
Back in April, I wrote about how I use LLMs at
Techdirt, not to replace
anyone or to do any writing, but as a brainstorming tool or a sounding board
for ideas. I continue to find it useful in that manner, mainly as an additional
tool (beyond my existing editors) to push me to really think through the
arguments I’m making and how I’m making them.
So I found it somewhat interesting to see Judge Kevin Newsom, of the 11th
Circuit, recently issue a concurrence in a case, solely for the point of
explaining how he used generative AI tools in thinking about the case, and how
courts might want to think (carefully!) about using the tech in the future.
The case itself isn’t all that interesting. It’s a dispute over whether an
insurance provider is required under its agreement to cover a trampoline injury
case after the landscaper who installed the trampoline was sued. The lower
court and the appeals court both say that the insurance agreement doesn’t cover
this particular scenario, and therefore, the insurance company has no duty to
defend the landscaper.
But Newsom’s concurrence is about his use of generative AI, which he openly
admits may be controversial, and he asks that people consider his entire
argument:
I concur in the Court’s judgment and join its opinion in full. I write
separately (and I’ll confess this is a little unusual) simply to pull back
the curtain on the process by which I thought through one of the issues in
this case—and using my own experience here as backdrop, to make a modest
proposal regarding courts’ interpretations of the words and phrases used in
legal instruments.
Here’s the proposal, which I suspect many will reflexively condemn as
heresy, but which I promise to unpack if given the chance: Those, like me,
who believe that “ordinary meaning” is the foundational rule for the
evaluation of legal texts should consider—consider—whether and how
AI-powered large language models like OpenAI’s ChatGPT, Google’s Gemini, and
Anthropic’s Claude might—might—inform the interpretive analysis. There,
having thought the unthinkable, I’ve said the unsayable.
Now let me explain myself."
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics