The Argument

ChatGPT and the end of learning

My students kept using ChatGPT. I couldn't make them stop.

Lakshya Jain
Sep 30, 2025

I have taught an advanced computer science class at UC Berkeley nine times. The class, Introduction to Database Systems (CS 186), teaches how data is organized, stored, and managed by computer systems, and it is meant to prepare students for their first forays into the software engineering profession.

When you teach the same course multiple times, you get used to its rhythms and its tics: which questions trip students up, which topics bore people, and at what point in the semester students tend to check out. But this past spring, the normal patterns of CS 186 were completely disrupted.

From my first lecture, something about the course seemed different. Office-hour queues were at an all-time low, I was fielding far fewer questions than normal, and attendance at my lectures was as sparse as I had ever seen. Meanwhile, traffic on our class forum had plummeted. In other words, nobody seemed to have any questions or need any help with anything, no matter how complex it was.

But people kept getting perfect scores on their coding assignments anyway.

Maybe I just had an unusually brilliant batch of very self-sufficient students. But that didn’t square with the reality of the (handwritten) exam grades, which came in at an all-time low: the average score was 15% below what it normally was.

Instead, after talking to my students, I confirmed something far worse: Many of them were using AI tools like ChatGPT to finish their assignments. That let them get the homework done, but they weren’t actually learning what the assignments were trying to teach. So when the exam came around, and the chatbots were unavailable, they were caught out of their depth.

When I expressed my frustration and concern about their AI use, quite a few of my students were surprised at how upset I was. Some of them asked what the big deal was. The work was getting done anyway, so why did it matter how it was getting done? And in the end, they wouldn’t be blocked from using AI at work, so shouldn’t they be allowed to use it in school?

At its core, programming is about iterative problem-solving. In the olden times (like the 1970s), programmers would line up to submit physical punch cards to the handful of computers on a college campus. When something didn’t work, the cards would be handed back to them with an error log, and they would need to correct their code and do the entire process all over again. This forced people to think through their code extremely carefully.

Today, thankfully, things are exponentially faster and less painful, in part because we can do all of this by typing on our own personal computers. But the process is, in many ways, still quite similar: You have to think through the thing you want to do, write the code for it, execute it, watch it fail, learn what was wrong, fix it, and try again. That forces you to learn the guts of what’s happening from first principles, and the more you do it, the better you become.
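To make that loop concrete, here is a minimal, hypothetical Python sketch of one trip through it (the function and the bug are invented for illustration):

    # Attempt 1: average a list of exam scores.
    def average(scores):
        total = 0
        for s in scores:
            total += s
        return total / len(scores)

    # Running it surfaces an edge case the first draft missed:
    #   average([])  ->  ZeroDivisionError: division by zero
    # The error log says what broke; the programmer has to reason
    # out why, and decide on a fix.

    # Attempt 2: handle the empty list explicitly.
    def average(scores):
        if not scores:
            return 0.0  # design choice: "no scores yet" averages to zero
        return sum(scores) / len(scores)

    print(average([88, 92, 75]))  # 85.0
    print(average([]))            # 0.0

Each pass through that cycle, not the final answer, is where the learning happens; a chatbot that hands over attempt 2 directly skips the only part that teaches anything.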

I can’t overstate how damaging it is for students to use AI as a means of short-circuiting this process. Part of the purpose of a college course is in learning how to think about and work through complex problems; none of that is achieved if students trade the experience of coding for the convenience of chatbots. In doing this, my students weren’t just cheating the course. They were cheating themselves out of vital development as engineers, failing to make the connections that practice and hard work help form.

AI has its place. For experienced professionals, it can enhance productivity tenfold when used carefully. But students are nowhere near that stage, so the rules of engagement must be different. In the same way that a 6-year-old learning basic addition would never be allowed a calculator, a college junior learning their field of study should not be outsourcing critical thinking to ChatGPT. Otherwise, they risk stunting their intellectual development.

I see the implications of this firsthand in my job as a machine learning engineer in San Francisco. I am constantly shocked by the number of young programmers I interview who try to cheat with AI during their interviews and are then unable to actually demonstrate evidence of competence when grilled on the subject verbally. It leads to some extremely funny stories, like the person who didn’t realize he had a ChatGPT window reflecting off his glasses while he was confidently (and incorrectly) explaining his answer to me.

But it’s mostly just depressing — I find that many people have simply lost the ability to think and reason about complex technical concepts. (Or perhaps they never fully developed that skill in the first place.)

I wasn’t shy about telling my students this either, but it all fell on deaf ears. As the semester progressed and the content built on itself, the problems with AI only got worse. Students who relied on it for the initial assignments never fully learned the material, so they had to keep using it for later assignments as well. In fact, the second exam’s average score was even worse than the first one’s.

By not engaging with the content in ways that would further their learning, my students failed themselves. But in hindsight, I wonder if I also failed them by not recognizing and adapting my teaching style to a new world: one in which a significant chunk of the “internet generations” (millennials and Gen Z) were already using AI extensively.

This new reality is crystallized in the results of our latest poll.

In topic after topic from our newest survey at The Argument, it is clear that a primary line of division around AI is age, rather than ideology, race, or partisanship — and specifically, whether someone grew up in the internet era or not.

The divide really starts with usage — in our survey, millennial and Gen Z voters were far more likely to use chatbots than their older counterparts, with nearly a third of them reporting daily use.

It shouldn’t be a surprise that a notable share of the “internet generations” were also significantly more likely to use AI for productivity than their elders were. In our survey, 37% of voters under 30 who used AI said they used it for “getting their work done.” That dwarfed the rates of the 45-64 and 65+ age groups from which professors are disproportionately drawn. (No wonder all of academia was caught flat-footed when ChatGPT came about!)

Perhaps as a result, younger voters are also the group most likely to view AI as having a positive impact — both on their lives and on society.

And though every group of voters supports some guardrails around AI, such as age verification and liability laws, the youngest voters are still the most likely to want no limitations or consequences at all.

The data reinforces something I’ve often noticed in conversation. Many people (and especially older ones) view AI-based products like ChatGPT as agents. They are external entities people can interact with in order to learn, analyze, and get things done, but their processes are packed with unknowns that require some wariness toward their results.

But to a segment of Americans (and especially younger Americans who grew up in the internet era), those products are really tools to use frequently — just like calculators. And the guardrails around usage tend to fall very quickly when you begin to view AI in that manner, because the onus (and credit) for the work done by a tool often lands squarely on the human using it.

Incidentally, my conversations with my students made me realize that despite their relative support for AI, they aren’t blind to some of its potential complications. This finding is reinforced by our survey: Younger Americans are more fearful than older ones that AI can (or will) replace them in their professions.

Why? In my experience, it’s because many of them are (correctly) not seeing a potential replacement being done by a Skynet-esque robot. It’s still quite hard to imagine an independent agent replacing humans wholesale across a variety of fields. But they are seeing a future in which ChatGPT allows one human to do the work of five. And while older Americans don’t have as much time left in the workforce, it’s much easier to imagine that reality coming to fruition over the working lives of the millennial and Gen Z cohorts.

Given the yawning gap in our survey between the ways that young and old people interface with AI, I’m not quite convinced that our rather aged lawmakers in Washington are prepared to address the ways in which it is transforming society. A growing number of people are rapidly changing the ways they live, work, and think, and unless you’re in the thick of it, you can’t really grasp how profound that shift is becoming.

Maybe we don’t like it. Maybe it’s wrong. But as my colleague Derek Thompson detailed so expertly in his recent column for us, I don’t see it changing anytime soon.

Trust me. I’ve tried.
