Artificial intelligence seems to have been progressing—to borrow a memorable phrase from a Hemingway character’s description of how he went bankrupt—“gradually, then suddenly.” The most recent revolution to make news has been ChatGPT, a machine learning model trained by OpenAI that has been all the talk of the academic world.
With ChatGPT, students now have access to a program that can instantly produce a paper on a given topic at their command. For example, students can input passages from a novel or poem and receive a detailed analysis of the rhetoric, themes, and symbolism in the text. Don’t like what ChatGPT wrote for you? Ask it to revise, to lengthen or shorten the piece, or to recast it in a new style, a different voice, something more or less complex. Anything is possible . . . or so it seems.
It didn’t take long for issues beyond the obvious (academic integrity) to be identified. For one, ChatGPT fabricates passages in texts—it invents quotations out of whole cloth and attributes them to real authors. You can search a given text, from Beowulf to Beloved, and never find them. Puzzled professors in our own department have already encountered this and quickly realized what had happened.
I ran an experiment myself and asked ChatGPT to produce a paper analyzing the “symbolism of computers in Hamlet.” Sure enough, it did, including fake lines from Hamlet himself asking to be “left alone at my computer.” I pointed out to ChatGPT that these were inaccurate lines—that none of this was in Shakespeare. The machine obediently learned and thanked me for correcting it. When I asked it again to produce that same paper, it said that it could not, for there were no computers in Hamlet, and in fact, computers did not exist in Shakespeare’s time. ChatGPT would count this as a success, an adaptation, an example of machine learning.
But our students won’t find much of a future in such a process. Similarly, ChatGPT promises to help students improve their writing skills by providing feedback and suggestions for improvement: notes on sentence structure, word choice, tone, voice, and more. Here again, the line between what the student produced and what the machine produced is blurred in novel and ethically uncertain ways. Online detection programs already exist that can help professors confirm their suspicions when students have relied overwhelmingly on AI. They are far from perfect, but they are a start.
In general, though, universities are playing catch-up in developing policies to deal with this new technology. One approach is asking students to disclose when they have used ChatGPT—but this raises other questions, such as why we don’t ask them to disclose when they’ve used human tutors to improve their writing or search engines to find the basic data points that buttress their papers. More to the point of what we do in our department, we are likely better off in the long run if we teach our students how to use AI responsibly; how to write with it; how to become the next generation of responsible, insightful, ethical writers who know how to keep us all from becoming dumbed-down bots replaced by language-generating machines.
To evolve with this rapidly changing technology, we’ll need to develop policies that address AI at the level of individual courses—depending on the topic and genre—as well as at both the school and University levels. Tech is moving in revolutions that leave us unsettled in their celerity, so we must adapt both quickly and diligently. Take a “random walk,” as they say on Wall Street, down the hallways of Pitt any day of the week, and you’ll find an array of presentations and workshops on AI, everywhere from the humanities to the health sciences. (In fact, a colleague and I recently attended one event on AI and, as we left, walked past another concurrent one on an adjacent AI topic in the room a floor below it.) We are moving as swiftly as we can, but the technology is moving even faster. So fast that leaders in the tech world called for a six-month pause on its development so that its effects could be studied.
We don’t know what is next, but we know what we have to offer for this conversation, as humanists and experts in writing, literary arts, professional communication, film and image generation and analysis, and much more. We will continue to contribute actively and with serious engagement so that AI is shaped by our voices rather than vice versa.
—Gayle Rogers