I’ll admit it: I’m getting tired. Tired of pushing against the current of “AI” hype. Tired of explaining that having a chatbot produce answers for you is not education. Tired of explaining that, indeed, taking ideas and words you didn’t make and submitting them as your own work (without attribution) is plagiarism. Tired of arguing that thinking for ourselves and not pursuing cognitive offloading to massive for-profit companies are genuine human goods.
Whatever, I sometimes say. If students want to stifle their own social, intellectual, and, dare I say, spiritual growth and have chatbots do their work, just let them. Like I said, I’m tired. I’m tired of flailing to explain how what I find precious in life is in fact precious. Maybe I’m just being a stick-in-the-techno-progress-myth mud. But then I see what some students submitted as their own work this term, academic work that dedicated instructors would have to read and evaluate, and I once again feel like I cannot give up on saying all this. Over and over.
Because, besides other misconduct, what I saw was students not bothering to secure a required book, not bothering to read it, and instead feeding prompts to ChatGPT, taking the output – I will not call it “writing” – and submitting it as their own considered thoughts on the matter. As the academic integrity director at my university, I saw far more of this in the spring term than in the one before. I’m assuming my institution is not an outlier in this regard. And I hear people say we need to get the discourse around academia and “generative AI” away from cheating. Would that we could. But what I saw in a large number of cases was not just disheartening with regard to academic success or integrity; it was also a gesture of vanity that took the quiet part of the “generative AI” moment and said it loud. Especially with OpenAI announcing their ChatGPT Edu application in May, those of us who see the problems with this new automated, corporate model of education can’t remain silent or passive.
It’s always difficult to see a student cheat or plagiarize or engage in fraud. But seeing these assignments was different. In case after case, I began reading what sounded like a college-level essay. As I proceeded, though, I noticed the vagueness, the fifty-cent words, and the hyperbolic claims of relevance that many of us have come to associate with chatbot outputs. But the primary problem was that, in these particular essays, the actions of the characters and the plot points “the students” discussed weren’t in the book. They weren’t in the book at all. They were, rather, what the discourse refers to as “hallucinations” (though that term smacks too much of actual cognition for my taste). That is, they didn’t exist until the chatbot the students had prompted brought them into existence. And my, how the chatbot went on about how important these characters were, the moral valence of their activities, etc., etc. But it was all utter bullshit, in the technical sense: discourse unhooked from any concern for the value of truth and falsehood. Bullshit that some students see as at the very least “good enough,” if not “better than I could do.”
Both of these evaluations are wrong. But it’s not that the chatbot “got it wrong” that’s so problematic. Humans get stuff wrong all the time. It was, rather, that students read material that had no reference to reality and found it convincing – maybe, again, seeing it as “better than what I could do.” Far from being better than what they could do, that output was a tissue of probabilistic text with no reference to reality, produced by something with no capacity to engage with reality – only with digital inputs that “code” the reality existing beyond the data system. We all know that chatbots can produce meaningless, inaccurate text (and apparently this still isn’t a strike against machine-learning applications in education for lots of people), yet some students remain convinced that these machines can do their work for them and do it better than they can.
As we all know, these products have massive hype behind them, convincing students (and plenty of more experienced people too) that, indeed, what the chatbot can do is “better than what I could do.” There’s the “no, it enhances what people do” angle in the hype, but let’s get serious: given the media ecology surrounding the tech sector in our contemporary neoliberal, Western culture; given the subconscious and not-so-subconscious narrative of progress that has glommed on to tech now that all other accounts of intrinsic progress have been stymied; given the incessant proclamation by social media influencers that there’s no point in doing one’s own work when machines can do it for you; and especially given that our students are entering a world of devices that the adults in their lives valorize and train them to use (often without explicit critique), devices that reinforce a worldview oriented toward efficiency, productivity, and profitability – given all this, the idea that our young people will resist the view that their work is of a wholly different order than what chatbots produce (and of intrinsically superior value) is strained at best. We are seeing the commodification of communication itself. We need to keep reminding ourselves that chatbots are not magical robot agents but, rather, products made by technicians working at powerful companies in an extractive capitalist system.
We have to help our students see this, and we’re not doing a good job.
But let’s get back to how these papers failed to engage with reality – and failed at length. Again, it’s not that one detail was off or one name was wrong. It was pages of prose about characters and plot points that don’t exist. And students took it at face value as solid work, solid enough to put their names at the top of and submit as their own. The vanity at work here is difficult to ignore after seeing all these examples. And that’s the word I came to when I sought to describe this situation. One day in May, I was talking through the difficulty of seeing so many students treat this as an acceptable way of doing their intellectual work, when our office’s coordinator put words to what I was trying to get at: “the emptiness of it.”
Yes, the emptiness of having the opportunity to engage in serious reading, serious dialogue, serious thinking, serious interpreting, and instead “offloading” that reading, dialogue, thought, and interpretation to a probabilistic engine: emptiness, vanity. Vanity in the sense of Ecclesiastes: “Vanity of vanities, all is vanity!” Vanity in the sense of the Oxford English Dictionary: “That which is vain, futile, or worthless; that which is of no value or profit.” But especially, vanity from the Latin vanitas: “emptiness, nothingness, nullity, want of reality” (Lewis and Short, A Latin Dictionary). Most especially that last one: “want of reality.”
And I already hear plenty of people saying, “But that’s not how you’re supposed to use these tools!” I get that. But we can wish that all we like; the essays I was reading showed what can happen when humans engage with these applications. In the parlance of our times, this vanity is a feature, not a bug, of these systems. When students work with a chatbot to produce text that is passable, whatever learning took place there took place despite the chatbot’s production of text, because the student had to know whether the text produced was grounded in reality, prudently stated, and relevant to the task. Otherwise, students don’t know what they don’t know about that output. If it happens that a student can’t judge these things and submits something that passes for reality anyway, it’s an accident; the crapshoot of probability worked out as far as assessment goes.
But education is not supposed to be a probabilistic crapshoot or an information-processing exercise to “solve the problem” of getting a degree. It’s supposed to be a formation of the mind (and, ideally, of the whole person). I think the chatbot moment is pointing up how our institutions and cultural expectations have come to treat education as the former rather than the latter. There is a difference between producing a text for your boss and learning how to craft a text that actually represents one’s own considered thought in light of one’s subjective engagement with the surrounding world.
And it is precisely here that the vanity of using these applications in education comes into view. While employing these systems for some tasks may have “value or profit” in terms of efficiency and production for markets, those are not (or, I hope, are not) the goals of a liberal arts education. The friction, the struggle, the discerning of such an education are precisely the point, for it is only in that friction and struggle that discernment is exercised and one undergoes what Saint John Henry Newman calls “enlargement [of the mind] or enlightenment.” For Newman, education is not an accumulation of knowledge. Rather,
The enlargement consists, not merely in the passive reception into the mind of a number of ideas hitherto unknown to it, but in the mind’s energetic and simultaneous action upon and towards and among those new ideas, which are rushing in upon it. It is the action of a formative power, reducing to order and meaning the matter of our acquirements; it is making the objects of our knowledge subjectively our own. (Knowledge Viewed in Relation to Learning, 153)
The activity of the intellect here is a kind of knowing that is non-instrumental. This does not mean that it cannot be used for something outside the human subject, but that its primary (and sine qua non) aspect is the formation of the human subject as one who understands the nature of the world and its parts in ever clearer and more complex ways.
Because the chatbot moment points up so starkly the difference between the instrumental production of text (and visuals and audio) on the one hand and learning that forms the individual’s intellect in the “enlarging” way Newman describes on the other (for some of us, at least), it is in fact a great catalyst for articulating exactly what we do want for our students. We want to invite them into work that is meaningful to them, challenges their current horizon, and leads them to ask more questions and to wonder how the world could be this way and how it could be different. And, perhaps, to do something about it. When I’m able to see this aspect of our collective situation, I’m reinvigorated, not tired. But I have to keep working things out, as I have in this essay, to see it again.