Sunday, June 29, 2025

More on AI, Student Learning and Human Flourishing

One of the great things about last semester’s sabbatical was that I didn’t have to think about making my assignments harder to outsource to AI chatbots such as ChatGPT. I did, however, start collecting what others have to say about how Generative AI harms student learning.

Matteo Wong, writing for The Atlantic, puts it well:

A car that accelerates instead of braking every once in a while is not ready for the road. A faucet that occasionally spits out boiling water instead of cold does not belong in your home. Working properly most of the time simply isn’t good enough for technologies that people are heavily reliant upon. And two and a half years after the launch of ChatGPT, generative AI is becoming such a technology. ... Generative AI is a technology that works well enough for users to become dependent, but not consistently enough to be truly dependable. ... For now, the technology’s flaws are readily detected and corrected. But as people become more and more accustomed to AI in their life—at school, at work, at home—they may cease to notice. Already, a growing body of research correlates persistent use of AI with a drop in critical thinking; humans become reliant on AI and unwilling, perhaps unable, to verify its work.  

Alan Jacobs, from whom I got several of these links, agrees:

“Yes, students understand — they understand quite well, and vocally regret — that when they use chatbots they are not learning much, if anything.”

The problem, according to Jacobs, is that in today’s universities, “the acquisition of knowledge” competes with the desire for a good job and the desire for a good time. Students turn to AI because they value the “university experience” and the diploma more than education.

According to Jonathan Malesic, Generative AI apps such as ChatGPT are gimmicks that falsely promise a shortcut around the hard work that true teaching and learning require:

“I have found, to overcome students’ resistance to learning, you often have to trick them. There’s the old bait-and-switch of offering grades, then seeing a few students learn to love learning itself. Worksheets are tricks, as are small-group discussions and even a teacher’s charisma. … I don’t know anyone for whom it’s a straightforward task. It’s the challenge for any teacher, and AI offers a tempting illusion to students—and evidently to some teachers—that there could be a shortcut. ... Part of a teacher’s job—certainly in the humanities, but even in professional fields like business—is to help students break out of their prisons, at least for an hour, so they can see and enhance the beauty of their own minds. ... I will sacrifice some length of my days to add depth to another person’s experience of the rest of theirs. ... The work is slow. Its results often go unseen for years. But it is no gimmick.”  

More than gimmicks, GenAI tools and the companies that produce them are actively hostile to learning:

Josh Eyler, as quoted in The Chronicle of Higher Education (CHE): “Learning is very hard work. It’s a deeply complex process …. If you offload onto AI the very cognitively demanding aspects of the learning process, then like a muscle atrophying, you’re weakening that process over time.”

Joss Fong, as quoted in CHE: “Education researchers have this term ‘desirable difficulties,’ which describes this kind of effortful participation that really works but also kind of hurts. And the risk with AI is that we might not preserve that effort, especially because we already tend to misinterpret a little bit of struggling as a signal that we’re not learning.”

Lucas Ropek, in “Multiple Studies Now Suggest That AI Will Make Us Morons”: “[T]he conclusion that using an app to complete a homework assignment makes you less capable of thinking for yourself would appear to be self-evident. Outsourcing mental duties to a software program means you’re not performing those duties yourself, and, as is pretty well established, doing something yourself is often the best way to learn.”

James D. Walsh, in New York Magazine: “It’ll be years before we can fully account for what all of this is doing to students’ brains. Some early research shows that when students off-load cognitive duties onto chatbots, their capacity for memory, problem-solving, and creativity could suffer. Multiple studies published within the past year have linked AI usage with a deterioration in critical-thinking skills; one found the effect to be more pronounced in younger participants.”

And, in the best thing I’ve read on the topic, Megan Fritts writes:

“[T]he work that we bypass when using a calculator is less important than what we bypass when using an AI language generator for writing. To be a human self, a human agent, is to be a linguistic animal. … [T]o learn to use language just is to learn how to think and move about in the world. When we stop doing this—when our needs for communication are met by something outside of us, a detached mouthpiece to summon, describe and regale—the intimate connection between thought and language disappears. It is not only the capacity for speech that we will slowly begin to lose under such conditions, but our entire inward lives. ... [T]he real threat AI poses is not one of job replacement or grading frustration or having to reimagine assignments but something entirely different. ... [L]anguage-generating AI, whether it is utilized to write emails or dissertations, stands as an enemy to the human form of life by coming between the individual and her words. ... Preserving art, literature and philosophy will require no less than the creation of an environment totally and uncompromisingly committed to abolishing the linguistic alienation created by AI, and reintroducing students to the indispensability of their own voice.”

In sum, indiscriminate use of Generative AI—as encouraged by the likes of Microsoft, Google and OpenAI—is inimical to learning and antithetical to human flourishing.

For my earlier reflections on Generative AI and education, see “On Creativity, ChatGPT, and Doing Your Chores.”