I've become more optimistic about this topic in the medium term (say 5-15 years out), though I am pessimistic about the next 2-5 years.
I suspect that AI will change the nature of white collar jobs rather than eliminate them. There may eventually even be more demand for white collar work than there is now. That isn't to say that demand for certain kinds of roles and tasks won't dramatically decline, but that those will be balanced out by increases in demand for other kinds of roles and tasks. Many of these roles and tasks may be new and ones we can't even imagine right now.
We've seen this in the past with programming. Many people thought higher-level programming languages, and then low/no-code apps, would reduce demand for programmers. But that didn't happen; it just changed what projects programmers worked on and what skills they needed. Fewer programmers needed to understand the nuts and bolts of how hardware worked, for example. And in fact, because the low-level work was automated, programmers could spend more time on other kinds of projects.
The thing with AI is that it will only replace professionals when AI alone is as good as or better than a human working with AI tools. I don't see that happening anytime soon for most jobs.
As I said, though, that doesn't mean there won't be short-term pain. Organizations will slowly restructure to incorporate AI, and this will eliminate many jobs. For people in roles that get eliminated or see demand reduced due to AI, this will be painful. But eventually, some organizations will figure out ways to productively pair humans with AI in roles and tasks we can't even imagine right now, and we'll find a new equilibrium. We don't have filing clerks or old-school secretaries anymore, but those roles eventually evolved into the modern administrative assistant.
"…avoid letting AI de-skill you."
This is great advice. I submit that the same could be said for the internet, generally — including and especially the mobile internet.
Someone once said that human beings became cyborgs the moment our ancestors picked up their first tool, because every tool is simply a cybernetic extension of our bodies. As humans have grown more cybernetically enhanced with more sophisticated tools — including tools which, in turn, are designed to direct other tools (as computer operating systems direct the various software programs that run on them) — we have become relatively less human, and more machine. The trick is to let our tools help us with the work we do, and not to let them do this work largely by themselves — or, rather, with only minimal input from us.
If you've ever noticed someone who seems hopelessly lost without their cell phone — or, perhaps if you've ever felt lost yourself under this circumstance — then you know just what I mean…
Most of what AI seems to be good at, at least with LLMs, is the dumb grunt work that few people wanted to do and that made even less sense economically. Which means maybe that work wasn't worth much to begin with. For example, my wife has used it to compose work emails that she hates writing. I've used it to compose other work correspondence that's required but that nobody reads.
I’ll go even further and say that maybe it’s revealed how shallow and unimportant some of this work has become. For example, if I can feed a hundred Atlantic columns into a machine and then it spits me out a new one that I can’t tell the difference from, maybe those essays weren’t really all that important or thought provoking.
I don’t think AI is going to replace humans in the important ways. But I do think it’s going to reveal that we haven’t really been working or thinking in important ways in some jobs.
"For example, if I can feed a hundred Atlantic columns into a machine and then it spits me out a new one that I can’t tell the difference from, maybe those essays weren’t really all that important or thought provoking."
I'm pretty skeptical this would actually work. The average Atlantic article is much better written than most of what AI produces. Also, AI struggles to have a unique perspective on anything, which is really important for good writing.
I think you're right, though. It's telling that we haven't really seen any economy-wide increases in productivity, or even many anecdotal examples of higher-quality or more quickly produced products, attributable to AI. We hear about how it increases personal productivity, but if it's mostly in low-value tasks, then that's not going to be a revolution.
Still, though, I think there are a lot of jobs, especially certain kinds of entry-level white collar jobs, that mostly do this kind of low-value work. So I still think there might be some pain in the coming years.
“I’m pretty skeptical this would actually work.”
I’m fairly confident that if you fed ChatGPT every Thomas Chatterton Williams article and book and asked it to write a new one on cancel culture, you would be hard-pressed to discern a difference. The only thing that might give it away is that ChatGPT is easier to read than Chatterton.
So, a lot of writers play a game where they'll ask ChatGPT to write a column in their own voice, just to see what comes out of it. I think it definitely does a pretty good job picking up on your specific stylistic tics, but it's not gonna churn out a full column.
“In a world increasingly governed by algorithms and outrage, perhaps the last place one expects to encounter intellectual fragility is the university campus—the supposed crucible of fearless inquiry. And yet, from Oberlin to Harvard, a quiet unmaking is taking place. Professors are no longer being challenged in seminars but excommunicated in group chats. Students, once expected to cultivate resilience and tolerance for dissent, now seem more invested in curating safety from ideas than confronting them.
I do not use the term "cancel culture" lightly. Like most phrases born in the fever swamps of social media, it is imprecise, easily caricatured, and often misapplied. But to pretend it doesn't exist—or worse, to suggest it is simply accountability by another name—is to abdicate intellectual honesty. What we are witnessing is not a necessary reckoning with power, but rather a dangerous deformation of the liberal values that make higher education worth defending.
The university, at its best, exists to challenge—not coddle—the mind. It is a space where error is instructive, where ideas are subjected to scrutiny rather than litmus tests, and where the distinction between disagreement and harm is vigilantly maintained. But increasingly, this space is being narrowed by a censorious mood that punishes deviation not just from the progressive consensus, but from the rapidly shifting codes of acceptability that govern identity and expression.
I have seen this up close. A friend—an accomplished scholar of Renaissance literature—was recently hauled before a diversity committee for quoting James Baldwin in class. The line in question? A meditation on the limits of racial essentialism, drawn from The Fire Next Time. That it came from Baldwin himself, a writer who defied all tribal identifications, was irrelevant. The quote made a student "uncomfortable," and the administrative machinery lurched into action. No context. No discussion. Just a chilling sense that the mere appearance of transgression is enough to invite suspicion.
This is not an isolated incident. In recent years, professors have been disinvited for publishing controversial op-eds, students have been ostracized for asking clumsy—but sincere—questions, and reading lists have been purged of authors who fail to mirror contemporary political values. The line between person and position has collapsed, and what remains is a kind of moral tribalism masquerading as justice.
What makes all this so disorienting is that it is not being imposed by authoritarian governments or religious zealots, but by students and faculty who see themselves as progressive. They are, in many cases, well-intentioned—driven by a genuine desire to redress historical injustices and protect the vulnerable. But good intentions, untethered from a commitment to open inquiry, can easily curdle into dogma. And once that happens, the university ceases to be a forum for growth and becomes a theater for conformity.
This is not to say that all ideas are equally valid, or that speech should be free from consequences. But the appropriate response to offensive or misguided expression is not excommunication—it is argument. It is, in fact, the very process by which bad ideas are exposed and better ones refined. What we risk losing in the current climate is not just a handful of speakers or syllabi, but the very habits of mind that allow pluralistic societies to function.
The irony is that many of the figures now deemed beyond the pale—Baldwin, Orwell, Nabokov, Mailer—were themselves critics of orthodoxy. They did not seek comfort, but confrontation. They wrote into complexity, not out of it. To engage with them is not to endorse all they said or did, but to recognize that human knowledge is a cumulative, often contradictory endeavor. The impulse to sanitize our intellectual inheritance—to treat the past as a crime scene—only impoverishes our understanding of both history and ourselves.
There is, of course, a deeper current running beneath all this: a generational loss of faith in liberalism itself. For many young people today, freedom of speech is not a self-evident good but a smokescreen for bigotry. Universalism is recast as erasure. Dialogue becomes complicity. I understand this impulse—it is born, in part, from a very real sense of disillusionment with the failures of the old order. But to abandon these principles is to mistake their betrayal for their essence.
The challenge before us is not to retreat into reaction or to deny the need for change. It is to rediscover a way of speaking across difference without demanding ideological purity. It is to build institutions where the pursuit of truth is not subordinated to the performance of virtue. And it is to remember that education is not about protecting students from discomfort, but about equipping them to face the world as it is—with all its ambiguities, contradictions, and unresolvable tensions.
In other words, the antidote to cancel culture is not censorship in reverse, but courage—courage to listen, to speak, and, perhaps most of all, to be wrong.”
If I asked it to add in a few random citations for good measure, it probably would do it.
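For anyone curious, here is a rough sketch of how that kind of experiment might look with the OpenAI Python SDK. The model name, file paths, and prompt wording are all placeholder assumptions on my part, not anything specific to what's described above:

```python
# Hypothetical sketch only: prompting a model to imitate a writer's voice
# from sample columns. Paths, model name, and prompt text are assumptions.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load previously saved columns to serve as style references.
samples = [p.read_text() for p in sorted(Path("columns").glob("*.txt"))]

messages = [
    {
        "role": "system",
        "content": "You are a columnist. Imitate the voice, structure, and "
                   "stylistic tics of the writing samples the user provides.",
    },
    {
        "role": "user",
        "content": "Writing samples:\n\n" + "\n\n---\n\n".join(samples)
                   + "\n\nNow write a new ~1,000-word column on cancel "
                     "culture in the same voice.",
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```

Whether the output passes for the real thing is, of course, exactly the question being debated in this thread.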
For now
It’s been less than three years since ChatGPT was introduced. The progress since then has been nothing short of astonishing if you are using the latest models. People working at AI companies are using their own AI to write somewhere around 25% of their software code right now, and we should expect that number to keep rising as the models get better and data centers get larger.
The people predicting huge job losses are not basing this on where LLMs are today, but rather on where the technology is predicted to be in another 2-5 years.
Great piece. I would also argue in favor of care careers: not just healthcare but childcare, early education, social work, ministry, psychotherapy. Many of these are poorly paid, in part because of the devaluing of traditionally 'feminine' vocations, but we can change that through better policy.
I also wonder how much the push for remote work makes it easier for employers to shift to bots instead of workers. If you don't get to know your employees personally and you don't have an in-person culture at the workplace that includes collaboration, brainstorming, etc., you can be way more transactional about the folks you hire.