How do I plagiarize thee?

let me count the ways...

Monday, January 2, 2023

ChatGPT

From Darren Hudson Hick (Facebook), Dec. 15:

Today, I turned in the first plagiarist I’ve caught using A.I. software to write her work, and I thought some people might be curious about the details.
The student used ChatGPT (https://chat.openai.com/chat), an advanced chatbot that produces human-like responses to user-generated prompts. Such prompts might range from “Explain the Krebs cycle” to (as in my case) “Write 500 words on Hume and the paradox of horror.”
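
(For the curious: ChatGPT was web-only when this was written, but OpenAI later exposed the same family of models through a public API. As a minimal sketch, assuming the openai Python package (v1+) and an API key in the OPENAI_API_KEY environment variable, the essay prompt above would look something like this:)

```python
# Minimal sketch: sending the essay prompt to OpenAI's chat API.
# Assumes the openai package (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model family behind the original ChatGPT
    messages=[{
        "role": "user",
        "content": "Write 500 words on Hume and the paradox of horror.",
    }],
)
print(response.choices[0].message.content)
```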
This technology is about 3 weeks old.
ChatGPT responds in seconds with text that looks like it was written by a human—moreover, a human with a good sense of grammar and an understanding of how essays should be structured. In my case, the first indicator that I was dealing with A.I. was that, despite the syntactic coherence of the essay, it made no sense. The essay confidently and thoroughly described Hume’s views on the paradox of horror in a way that was thoroughly wrong. It did say some true things about Hume, and it knew what the paradox of horror was, but it was just bullshitting after that. To someone who didn’t know what Hume would say about the paradox, it was perfectly readable—even compelling. To someone familiar with the material, it raised any number of flags. ChatGPT also sucks at citing, another flag. This is good news for upper-level courses in philosophy, where the material is pretty complex and obscure. But for freshman-level classes (to say nothing of assignments in other disciplines, where one might be asked to explain the dominant themes of Moby Dick, or the causes of the war in Ukraine—both prompts I tested), this is a game-changer.
ChatGPT uses a neural network, a kind of artificial intelligence that is trained on a large set of data so that it can do exactly what ChatGPT is doing. The software essentially reprograms itself over and over until the testers are satisfied. As a result, however, the “programmers” won’t really know what’s going on inside it: the neural network takes in a whole mess of data and adds it to a soup, with data points connected in any number of ways. The more it trains, the better it gets. Essentially, ChatGPT is learning, and ChatGPT is an infant. In a month, it will be smarter.
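
(To make the "reprograms itself" idea concrete, here is a toy, single-neuron version of that training loop. This is a minimal illustrative sketch, not anything resembling ChatGPT's actual code: the network nudges its weights after every example until its outputs match the training data, and ChatGPT does the same kind of thing with billions of weights.)

```python
# Toy illustration of neural-network training: a single artificial neuron
# adjusting its own weights until its outputs match the training data.
import math
import random

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Tiny training set: learn to output 1 only when both inputs are 1 (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = random.random(), random.random(), random.random()
rate = 1.0  # how big each self-adjustment step is

for epoch in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        # Gradient descent: adjust each weight against its share of the error.
        grad = (out - target) * out * (1 - out)
        w1 -= rate * grad * x1
        w2 -= rate * grad * x2
        bias -= rate * grad

for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + bias), 3), "target:", target)
```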
Happily, the same team who developed ChatGPT also developed a GPT Detector (https://huggingface.co/openai-detector/), which uses the same methods that underlie ChatGPT to analyze text and estimate the likelihood that it was produced using GPT technology. I knew about the GPT Detector, used it to analyze samples of the student’s essay, and compared the results against other students’ responses to the same essay prompt. The Detector spits out a likelihood that the text is “Fake” or “Real”. Any random chunk of the student’s essay came back around 99.9% Fake, while any random chunk of any other student’s writing came back around 99.9% Real. This gave me some confidence in my hypothesis. The problem is that, unlike plagiarism-detecting software like TurnItIn, the GPT Detector can’t point at something on the Internet that one might use to independently verify plagiarism. The first problem is that ChatGPT doesn’t search the Internet—if something isn’t in its training data, it has no access to it. The second problem is that what ChatGPT draws on is the soup of data in its neural network, and there’s no way to check how it produces its answers. Again: its “programmers” don’t know how it comes up with any given response. As such, it’s hard to treat the GPT Detector’s “99.9% Fake” determination as definitive: there’s no way to know how it came up with that result.
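
(The detector behind that web page is publicly available, so the same check can be scripted. A minimal sketch, assuming the demo at huggingface.co/openai-detector is backed by the released roberta-base-openai-detector model, which was trained on GPT-2 output rather than ChatGPT's, and that the transformers package is installed:)

```python
# Minimal sketch: scoring a chunk of text with the GPT output detector.
# Assumes the web demo is backed by the public "roberta-base-openai-detector"
# model; scores may differ from the web page's exact numbers.
from transformers import pipeline

detector = pipeline("text-classification",
                    model="roberta-base-openai-detector")

chunk = "Hume's paradox of horror concerns the pleasure we take in..."
result = detector(chunk)[0]

# The model reports "Real" (human-written) or "Fake" (model-generated),
# together with a confidence score between 0 and 1.
print(f"{result['label']}: {result['score']:.1%}")
```

(As the post notes, a high "Fake" score is evidence rather than proof: the classifier is itself a neural network, and it cannot show its work.)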
For the moment, there are some saving graces. Although ChatGPT gives at least a slightly different answer every time you prompt it, I’ve noticed some consistencies in how it structures essays. In future, that will be enough to raise further flags for me. But, again, ChatGPT is still learning, so it may well get better. Remember: it’s about 3 weeks old, and it’s designed to learn.
Administrations are going to have to develop standards for dealing with these kinds of cases, and they’re going to have to do it FAST. In my case, the student admitted to using ChatGPT, but if she hadn’t, I can’t say whether all of this would have been enough evidence. This is too new. But it’s going to catch on. It would have taken my student about 5 minutes to write this essay using ChatGPT. Expect a flood, people, not a trickle. In future, I expect I’m going to institute a policy stating that if I believe material submitted by a student was produced by A.I., I will throw it out and give the student an impromptu oral exam on the same material. Until my school develops some standard for dealing with this sort of thing, it’s the only path I can think of.
