My undergrad didn't grade homework. The university administered tests: your college would prepare you for them, and if you wanted to prepare poorly you could. While I already thought it had many advantages, AI is going to amplify the benefits of that model. In the adversarial relationship between teacher-as-evaluator and students, students have acquired decisive operational advantage in homework, take-home tests, and demonstrations.
FWIW I am seeing more universities do this — moving away from grading homework and making exams worth more, but the HW has to be done for anyone to really succeed on the exam.
Speaking as a fellow instructor at an R1 CS program, it's certainly what I am doing. That said, there has been and remains a well-intentioned push from administration (here and elsewhere) to have a more supportive, less intimidating assessment environment -- more take-home work, fewer (or no) high-stakes in-person assessments. At the same time there is a push to "integrate AI into the curriculum." Like approximately every other academic on the Internet, I Have Thoughts about these conflicting goals. I have my doubts that it's going to end well given the short-term incentives!
Yeah, I understand emotionally the objection to high-stakes in-person assessments...and I think the only possible alternative is a task long and complex enough that SOTA models can't do it for you, and that's non-trivial to design and know unless you're deeply enmeshed in what can be done with AI models.
I don't remember homework being graded. Homework was just practice for the midterm and final, which were all that mattered. Cheat all you want; you're the one who'll suffer when you're studying for the final.
Wouldn't you have to at least correct the homework so students would know where they've gone wrong?
Exactly the problem. How do you do this with CS, where you have to code?
It will probably end up being important to be capable of working both with and without AI in software engineering. On the one hand, if you're learning a new package, want to spit out some boilerplate to write a CSV, or need to convert a codebase from one common language to another, agents are invaluable. On the other hand, just yesterday I debugged a problem that would have required an unrealistically large context window and processing time. My work routinely involves knowledge that isn't written down and never will be, and anything that sees only my IDE will always have to guess at what I'm trying to do.
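To make the "boilerplate to write a csv" point concrete: this is the class of task where an agent is hard to beat because the whole program is convention. A minimal sketch of the kind of code meant here (the file name and fields are made up for illustration):

```python
import csv

# Hypothetical rows; in practice these would come from whatever data
# the agent was asked to export.
rows = [
    {"name": "Ada", "score": 95},
    {"name": "Grace", "score": 88},
]

# Standard-library boilerplate: newline="" avoids blank lines on Windows,
# and DictWriter keeps columns aligned with the dict keys.
with open("scores.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "score"])
    writer.writeheader()
    writer.writerows(rows)
```

There is exactly one reasonable way to write this, which is why delegating it costs nothing; the debugging example in the comment above is the opposite case, where the relevant knowledge never appears in any file the model can see.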
Honestly, if professors take the opportunity to make homework harder for agents to one-shot and use exams to test contextual or higher-level knowledge instead of syntax, students might end up better prepared than they were before 2022.
I certainly could make homework harder for AI to one-shot -- but usually that means increasing the scope and scale of a particular problem, and removing structure and support. My experience is that the very existence of signposts and "here's how to decompose the problem" text serves to make things "one-shot-able."
These larger problems are then much harder for students without background to solve -- and much harder to grade and provide meaningful feedback on, especially in the 200+ enrollment introductory courses. ("I know, you can simply use AI to..." and why that's probably not a great solution for a residential University is a separate post.)
Have you tried Claude code?
No, and on the day that it understands how to integrate and synthesize the data and knowledge needed to design a computer chip or an aircraft wing we can probably all go home.
I fear we might only be a few years away from that
I can see it going either way. Both Claude and ChatGPT went backward in their latest iterations. I could see the singularity, or AI petering out in a few years if the physical computing power necessary becomes out of reach.
I certainly hope it peters out. But given the size of the data centers and amount of investment capital, I’m not holding my breath.
Why would you want it to peter out? I can certainly see why we wouldn’t want it to become self aware and rule over us, Terminator style. But I can see it pushing developed countries into a post-need paradigm. New jobs managing AI will come into being, as they always do.
One of my old undergrad pals grew up to become a professor. Our most recent conversation, at our latest class reunion, featured complaints about the evolution of student expectations with regard to their time spent at university, their roles in classes, and the purpose of their commitment to university as students.
While my professor friend and I were students, we expected to learn things we didn't know, for the purpose of nurturing an expertise that would one day be valuable enough to become the foundation of the careers we'd pursue to build relationships and earn money. Students would sit up during lectures or seminars, quietly absorbed in the experience of the moment.
Student expectations gradually evolved from this instructional relationship, to more of a client-vendor relationship where students were clients, and professors (along with the university itself) were simply vendors who'd been hired to provide a service — a service to them.
The expectations of the students described in this essay seem to have evolved to yet another stage, where they view university as a strictly formal process whose function is to validate their credentials — that is, students today expect classes to enable them to get paid. This is different than *learning* valuable expertise, for which they may be paid.
If today's students are shortchanging themselves on the experience of learning during university — and of learning by way of experience — it must be because such experience is simply not valuable to them. The reward structure they've understood during their lives must ignore process in favor of results. This would explain the rationale for their sincere insistence on using ChatGPT on assignments and their choice to disregard the idea that learning is a function of process.
I suggest that this absence of motivation among students, to build functional expertise and proficiency, is therefore mostly a *sociological* issue having to do with what we're taught to value in our lives.
Never before has the *perception* of value been more highly emphasized over tangible metrics of value than it is today…
The other thing I'd point out is that students have a really hard time with the pressure of school. It's natural that they'd take a release valve when offered. The problem is that this particular release valve is perhaps the most damaging one they could use.
As much as it terrifies me, I really like this response.
I guess if we were to take it a step further, my question would be — why do they think they would be chosen to be paid (i.e., receive a job offer) over the next student?
I suppose this is what Altman talks about when he mentions “intelligence on tap”: the idea that the entirety of intelligence-based work would be commoditized.
I guess this could be seen as a transition step to AI taking jobs: maybe it won’t, and maybe Jevons paradox would lead to more work, but I don’t see how wages won’t plummet as a result of weak differentiation.
I'm starting to think the paradigm of homework is just not going to work in the future. If we want to see mastery of anything that can't be done in a single exam period, I think we'll likely have to make long weekly class sessions for doing that type of work, like lab sections. I think it'll make university even more labor-intensive, but I really don't see how else you'd do it.
Yeah, I sort of wonder about this too. It's definitely kept me up thinking about it, but I'm done with teaching and don't want to go back. Knowing that your teaching is being ignored in favor of ChatGPT is truly exhausting.
Yes, I've been talking to everyone I can to see if we can get some trials set up of "writing labs" where students can sign up for time at a non-internet computer to work on writing assignments outside of class hours (the idea would be that writing assignments would only be submitted from these computers).
Easier said than done. We can't even properly monitor students in our alternative testing center (used for those who have accommodations, mostly extra time). Because it has to be asynchronous, they often learn about the test from friends ahead of time (they all have massive group chats for every class). Furthermore, students have been caught using phones in them, but who knows if there are proctors who don't pay attention sometimes.
It sounds like a new 1st level of weed out classes or tests is necessary.
If they don't want to think they should find a different university or major.
I feel lucky that AI hit at this point in my career. I’m close enough to the end of it that I won’t see the full impact of AGI (assuming it’s not 18 months away).
And as a developer, AI’s coding ability is extraordinary. But I already have the basic reasoning skills. I know *how* the code needs to work, and I can tell when it’s made a mistake.
I also found that AI’s desire to please can backfire spectacularly. In one case, I made a design mistake halfway through, and told it that *it* had made a mistake, which it eagerly helped “fix.” I unwound half my code before realizing I was in error.
I also really enjoy having deep conversations on topics I’m curious about, and (I think) I’m savvy enough to see when it’s wrong. But it really is an amazing learning tool.
I will also say, I think we’re overcorrecting on the “no pain, no gain” aspect of learning. Some discomfort is necessary, but I don’t know that study sessions that leave you cross-eyed really are necessary. I learn best and fastest in small, almost effortless chunks, and AI seems ideally suited for that.
This is one where I'd really like to have seen a more detailed age breakout! For views about AI, I think that 18-44 is just way too wide a range. I suspect that 18-21 is quite different from 22-25 right now, because the younger group had AI tools for several years of schooling, while the 22-25 group were mostly already past college intro classes (or done with high school and no longer in education) by the time AI tools became relevant.
I'm not so sure if there's as big a difference between 25-30 and 30-35 or 35-45, but each of these age ranges does have a meaningfully different relationship to social media and smartphones as well, which might be relevant to their attitude to AI.
"Some of them asked what the big deal was. The work was getting done anyways, so why did it matter how it was getting done? And in the end, they wouldn’t be blocked from using AI at work, so shouldn’t they be allowed to use it in school?"
I've been trying to tell students that all their classwork assignments are like lifting weights at the gym. I really don't care if this work gets done any more than their trainer cares if the weights get lifted.
The point is to help you practice and build your muscles. Sure, you'll be allowed to use any machinery you want while you're working as a firefighter - but you still want some muscles to use for all the things the machines don't get to.
I am completely in agreement that students are learning less in a world with easy gen AI access, because they use it as a shortcut, and that this leads to a failure of assessments.
But that's also why I am a little frustrated that my fellow college educators are making galaxy-brained arguments about the meaning of ChatGPT (what is learning? what is capitalism? will we all be writing code in 50 years? blah blah) when there is a singularly achievable goal we could be working toward right now: an agreement between universities, learning management system (LMS) providers, and gen AI companies to implement watermarking so that use of AI is easily detectable. This would also leave instructors free to decide how they want AI to be used in their classes. It won’t be easy (lots of details to work out), but it is a socio-technical solution that is definitely within our grasp. It's also fairly apolitical, in that it should be something people can agree on regardless of their stance on things like Silicon Valley behemoths, capitalism, or the purpose of college. https://computingandsociety.substack.com/p/how-do-you-solve-a-problem-like-chatgpt
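For readers unfamiliar with what "watermarking" would mean technically: one published approach (the "green list" scheme of Kirchenbauer et al.) biases generation toward a pseudorandom subset of tokens keyed on context, so a detector can later test whether a suspiciously large share of tokens falls in that subset. A toy sketch of the detection side only — the hash rule and threshold here are made up for illustration, not any vendor's actual scheme:

```python
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    # Toy rule: hash the (previous token, token) pair so that roughly half
    # of all tokens are "green" in any context. A real scheme would key
    # this with a secret held by the model provider.
    h = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return h[0] % 2 == 0

def detect(tokens: list[str], z_threshold: float = 4.0) -> bool:
    # Count green tokens, then run a one-sided z-test against the ~50%
    # rate that unwatermarked (human) text would show by chance.
    n = len(tokens) - 1
    if n <= 0:
        return False
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    z = (greens - 0.5 * n) / math.sqrt(0.25 * n)
    return z > z_threshold
```

A generator that preferentially emits green tokens will push the z-score far above the threshold after a few dozen tokens, while human text hovers near 50% green; the hard open problems (paraphrase attacks, short texts, getting every vendor to participate) are exactly the "details to work out" mentioned above.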
A way I might approach this is to train students on learning processes -- fundamentals first, then incorporate technology and tools like AI. It's like learning to ski first without poles so you don't overrely on poles for things you should be using your lower body for. You do ultimately get poles, but there's a process. I often use AI for things I know a lot about myself to see how good the AI is, and sometimes it's excellent while other times it makes stupid mistakes that I can easily catch. Teaching the "why we teach how we teach" part goes beyond any one class and is really something that needs to be instilled throughout the schooling process. I suspect universities don't like addressing this question because trying to answer it honestly might call into question a lot of things they do. But it's something they have to face.
At my school, we have to do the programming for our quizzes and midterms and finals in a special computer lab. Like, we reserve a 50-minute slot during the exam period. It’s not so bad. I personally don’t use chatbots on homework because (obviously) doing the homework is good practice for the exam. But I did make an exception for one assignment that (1) the lecture didn’t prepare us for adequately, in my view, and (2) wasn’t going to be on the exam. I’ve also used chatbots to help with some of the fluff on group projects, like the presentation. I guess that, with an exam-heavy approach, you have to actually fail students who bomb an exam, which can be hard for social or administrative reasons.