    So, a Chatbot Did Your Homework

    If education is nothing but an information-processing exercise to get a degree, why not?

    By Jacob Riyeff

    August 23, 2024

    I’ll admit it: I’m getting tired. Tired of pushing against the current of “AI” hype. Tired of explaining that having a chatbot produce answers for you is not education. Tired of explaining that, indeed, taking ideas and words you didn’t make and submitting them as your own work (without attribution) is plagiarism. Tired of arguing that thinking for ourselves and not pursuing cognitive offloading to massive for-profit companies are genuine human goods.

    Whatever, I sometimes say. If students want to stifle their own social, intellectual, and, dare I say, spiritual growth and have chatbots do their work, just let them. Like I said, I’m tired. I’m tired of flailing to explain how what I find precious in life is in fact precious. Maybe I’m just being a stick-in-the-techno-progress-myth mud. But then I see what some students submitted as their own work this term, academic work that dedicated instructors would have to read and evaluate, and I once again feel like I cannot give up on saying all this. Over and over.

    Because, besides other misconduct, what I saw was students not bothering to secure a required book, not bothering to read it, and instead feeding prompts to ChatGPT, taking the output – I will not call it “writing” – and submitting it as their own considered thoughts on the matter. As the academic integrity director at my university, I saw a great deal of this over the spring term – far more than in the term before – and I assume my institution is not an outlier in this regard. And I hear people say we need to get the discourse around academia and “generative AI” away from cheating. Would that we could. But what I saw in a large number of cases was not just disheartening regarding academic success or integrity; it was also a gesture of vanity that took the quiet part of the “generative AI” moment and said it loud. Especially with OpenAI announcing its ChatGPT Edu application in May, those of us who see the problems with this new automated, corporate model of education can’t remain silent or passive.

    A man studying in a coffee shop. Photograph by Hannah Wei / Unsplash.

    It’s always difficult to see a student cheat or plagiarize or engage in fraud. But seeing these assignments was different. In case after case, I began reading what sounded like a college-level essay. But as I proceeded, I noticed the vagueness, the fifty-cent words, and the hyperbolic relevance that many of us have come to associate with chatbot outputs. The primary problem, though, was that in these particular essays the actions of the characters, as well as the plot points, that “the students” discussed weren’t in the book. They weren’t in the book at all. They were, rather, what the discourse refers to as “hallucinations” (though that term smacks too much of actual cognition for my taste). That is, they didn’t exist until the chatbot that students had prompted brought them into existence. And my, how the chatbot went on about how important these characters were, the moral valence of their activities, etc., etc. But it was all utter bullshit, in the technical sense of discourse unhooked from any sense of the value of truth and falsehood. Bullshit that some students see as at the very least “good enough,” if not “better than I could do.”

    Both of these evaluations are wrong. But it’s not that the chatbot “got it wrong” that’s so problematic. Humans get stuff wrong all the time. It is, rather, that students read material with no reference to reality and found it convincing – maybe, again, seeing it as “better than what I could do.” Far from being better than what they could do, that output was a tissue of probabilistic text produced by something with no capacity to engage with reality – only with digital inputs that “code” the reality that exists beyond the data system. We all know that chatbots produce random, meaningless, inaccurate text (and apparently this still isn’t a strike against machine-learning applications in education for lots of people), but some students remain convinced that these machines can do their work for them and do it better than they can.

    As we all know, these products have massive hype behind them, convincing students (and plenty of more experienced people too) that, indeed, what the chatbot can do is “better than what I could do.” There’s the “no, it enhances what people do” angle in the hype, but let’s get serious: given the media ecology surrounding the tech sector in our contemporary neoliberal, Western culture; given the subconscious and not-so-subconscious ongoing narrative of progress that has glommed on to tech when all other accounts of intrinsic progress have been stymied; given the incessant proclamation by social media influencers that there’s no point in doing one’s own work when machines can do it for you; and especially given that our students are entering a world that reinforces a worldview oriented toward efficiency, productivity, and profitability with the devices the adults in their lives valorize and for which they train them (often without explicit critique), the idea that our young people will resist the view that their work is of a wholly different order than what chatbots produce (and of intrinsically superior value) is strained at best. We are seeing the commodification of communication itself. We need to keep reminding ourselves that chatbots are not magical robot agents but, rather, products made by technicians working at powerful companies in an extractive capitalist system.

    We have to help them see this, and we’re not doing a good job.

    But let’s get back to how these papers failed to engage with reality, and failed at length. Again, it’s not that one detail was off or one name was wrong. It was pages of prose about characters and plot points that don’t exist. And students took it at face value as solid work – solid enough to put their names at the top of and submit as their own. The vanity at work here is difficult to ignore after seeing all these examples. And that is the word I came to when I sought to describe this situation. One day in May, I was talking through the difficulty of seeing so many students treat this as an acceptable way of doing their intellectual work, when our office’s coordinator put words to what I was trying to get at: “the emptiness of it.”

    Yes, the emptiness of having the opportunity to engage in serious reading, serious dialogue, serious thinking, serious interpreting, and instead “offloading” that reading, dialogue, thought, and interpretation to a probabilistic engine: emptiness, vanity. Vanity in the sense of Ecclesiastes: “Vanity of vanities, all is vanity!” Vanity in the sense of the Oxford English Dictionary: “That which is vain, futile, or worthless; that which is of no value or profit.” But especially, vanity from the Latin vanitas: “emptiness, nothingness, nullity, want of reality” (Lewis and Short, A Latin Dictionary). Most especially that last one: “want of reality.”

    And I already hear plenty of people saying, “But that’s not how you’re supposed to use these tools!” I get that. But we can wish that all we like; the essays I was reading showed what can happen when humans engage with these applications. In the parlance of our times, this vanity is a feature, not a bug, of these systems. When students work with a chatbot to produce text that is passable, whatever learning took place there took place despite the chatbot’s production of text, because the student had to know whether the text produced was based in reality, prudently stated, and relevant to the task. Otherwise, students don’t know what they don’t know about that output. If a student doesn’t know these things and submits something that passes for reality anyway, it’s an accident; the crapshoot of probability worked out as far as assessment goes.

    But education is not supposed to be a probabilistic crapshoot or an information-processing exercise to “solve the problem” of getting a degree. It’s supposed to be a formation of the mind (and of the whole person, ideally). I think the chatbot moment is pointing up how far our institutions and cultural expectations have come toward seeing education as the former, not the latter. There is a difference between producing a text for your boss and learning how to craft a text that actually represents one’s own considered thought in light of one’s subjective engagement with the surrounding world.

    And it is precisely here that the vanity of using these applications in education comes into view. While employing these systems for some tasks may have “value or profit” in terms of efficiency and production for markets, these qualities are not (or, I hope they are not) the goals of a liberal arts education. The friction, the struggle, the discerning of such an education are precisely the point, for it is only in that friction and struggle that discernment is exercised and one undergoes, as Saint John Henry Newman says, “enlargement [of the mind] or enlightenment.” For Newman, education is not an accumulation of knowledge. Rather,

    The enlargement consists, not merely in the passive reception into the mind of a number of ideas hitherto unknown to it, but in the mind’s energetic and simultaneous action upon and towards and among those new ideas, which are rushing in upon it. It is the action of a formative power, reducing to order and meaning the matter of our acquirements; it is making the objects of our knowledge subjectively our own. (Knowledge Viewed in Relation to Learning, 153)

    The activity of the intellect here is a kind of knowledge that is non-instrumental. This does not mean that it cannot be used for something else outside the human subject, but that its primary (and sine qua non) aspect is the formation of the human subject as one who understands the nature of the world and its parts in ever clearer and more complex ways.

    Because the chatbot moment is pointing up so starkly the difference between instrumental production of text (and visuals and audio) on the one hand and learning that forms the individual’s intellect in the “enlarging” way Newman describes on the other, it is in fact a great catalyst (for some of us at least) for articulating exactly what we do want for our students. We want to invite them into work that is meaningful to them, challenges their current horizon, and leads them to ask more questions and wonder how the world could be this way and how it could be different. And, perhaps, to do something about it. When I’m able to see this aspect of our collective situation, I’m reinvigorated, not tired. But I have to keep working things out, as I have in this essay, to see it again.

    Contributed by Jacob Riyeff

    Jacob Riyeff is a teaching professor in English at Marquette University and has served as the university’s Academic Integrity Director.

    3 Comments

    • leshy

      The present author’s experience is that one had to confront people with a degree of severity to get them to think for themselves. They had to be driven into it by very powerful confrontation with deep insight. Nothing less would do it than a facing-off with someone far more knowledgeable than they were (or had been taught to be). You have to be hard on people for them to respect you -- in their own interests -- but not so much that they turn away!

    • Linda wilson

      This reminds me of a book I read in the 6th grade (1960, to you) called “Danny Dunn and the Homework Machine.” In the book, Danny Dunn’s father was a computer scientist who built his own computer. Danny Dunn and his friends programmed the computer to do their homework for them. They also designed a gizmo that wrote out four copies of the homework, changing the handwriting of each copy so that it looked like four different people’s handwriting. They got caught, if I remember correctly, because though the handwriting was different, the content was the same. When caught, they said that since they had programmed the computer, they had done the work. Danny and friends did not carry the day and had to stop using the computer. So, who really did the work?

      Technology requires us to confront difficult questions, but they should be confronted fairly. I remember when using online “resources” was questionable because of their doubtful reliability. Pierre Salinger, who was once JFK’s press secretary, gave an opinion on a controversial issue (I don’t remember the issue) and said he was sure he was correct because he found it on the internet. This was at a time when a comment like that became a joke. But that is not the case now. I teach in an online classroom, and that too was once a dubious educational platform; we learned to make it into a very viable one. I imagine that AI will go through a similar transformation, and the time will probably come, more quickly than we may think, when it will be a viable tool, if it isn’t already. Instead of running away from it, we need to train students to use the tool properly. My experience with AI is minimal (the AI component of MS Word). It keeps telling me to replace the word I used with a more common word that means the same thing. What it does is make my writing more pedestrian and remove what we call “style.” Cordially, J. D. Wilson, Jr.

    • Andree Koehler

      Thank you, Jacob, for this thoughtful piece. I am a doctoral faculty member, and the discussions we’ve had to “distinguish” AI from generative AI are in many ways maddening, like the snake swallowing its tail. We discuss appropriate and inappropriate uses as well as penalties. And I dare say the students retort with “so what?” because they know we can’t fully prove they’ve had a program write something for them: they’ve taken the time to craft detailed prompts for the bot to respond to (in many of our cases), and what comes out is not as deeply “wrong” as what you’ve described. At least not now ... it likely was back in their first-year course, when they got off with a warning, which provided them with the tools to use the bot better. My point to students is to ask, generally, how much time it took them to “teach the bot” what they wanted, because in that time they could have written the assignment themselves. I also remind them that when they get out in the world, people don’t function like bots and will ask questions: it is a terrible thing to be marching around with letters in front of your name that you can’t defend with your own scholarship.