I think it's also relevant that, for professional writers who are trying to minimize their use of AI, their interactions with it are mostly going to be unavoidable encounters integrated into legacy applications (Google search, MS Office suite, etc.), which tend to be annoyingly obtrusive and relatively low-quality as models go. That, and AI slop in advertisements. They make for a lousy impression.
"There are, loosely speaking, two mainstream concerns with AI:
1. It’s going to take your job.
2. It’s going to kill education."
I'm not so sure! There's a Pew poll from last September where only 9% of US adults are concerned about job loss and education isn't even on the list. The top two concerns were "erodes human abilities and connections" at 27% and "negative impact on accuracy of information" at 18%. Things could have changed since then, but probably not too much. I find all these vastly different conceptions from person to person to be quite interesting.
The timing of the evidence presented in this study feels important, and not aligned with the narrative. At least for me, in the summer 2024–summer 2025 period, I was probably using AI for personal topics 10x more than for work.
Since January of this year (and I’m not a coder!) the ratio is now 10:1 in the other direction. That’s a 100x change! I expect the evidence will come to reflect that shift in the coming months.
I don’t understand why you’d expect the messenger class not to notice the use of AI for personal advice. The messenger class definitely also uses AI for personal advice—probably more so, since overall usage and awareness is higher. My household asks Claude questions about taxes or plumbing or product research or whatever like 10x a day, mixed in with the work-related questions.
I'm skeptical about the idea that having on-demand advice imperils the jobs of professionals. I think the reality is, most of the time people either just didn't do things when they couldn't get advice/answers or they already were using google.
Plenty of people are just avoidant about going to the doctor, making a will, filing their taxes, or fixing their running toilet. Maybe AI just ends up supercharging google/youtube in ways that are broadly helpful for this kind of thing.
I agree with Jerusalem’s general thesis that the messenger class is trapped by their own experiences (which applies beyond AI as she’s written). But in the case of AI, there’s another element to their irrationality on this topic.
In Musa al-Gharbi’s “We Have Never Been Woke” he offers a fundamental insight: the “symbolic capitalists” (professionals) see the regular capitalists (business _owners_) as rivals. It is our jobs to “hold them in check,” to keep the Zuckerbergs and Musks from running amok over the world. James Burnham points out this rivalry in The Managerial Revolution as well (that authentic capitalists are being usurped by managerial professionals).
Seen this way, the threat of AI to our (professional) livelihoods is not really the issue. That’s what we talk about because we can name it and imagine it. But the threat is actually deeper in two ways.
1. It’s just a huge symbolic win for capitalists. Some _*thing*_ developed on the dollar of, and owned by, a very few is actually just very useful. Lots of people get a benefit because other people poured money into making this thing, for which they collect a huge financial reward. That’s how it’s supposed to work in a world where capitalists get very rich and rule everything. So we, its critics, have to find flaws or we “lose” the argument.
2. We, as their mortal enemies, are weakened. Whatever non-professionals thought of us before, they can only lose respect for us now, because they probably need us even less. They may even enjoy the spectacle of us fearing for our jobs, as we blithely let them do over the past several decades.
Research in risk perception shows that these kinds of perceptions, rooted in catastrophic fear and a sense of injustice, lead to overestimation of risk (the concept is “dread”).
We, professionals, need to forget about the past and focus on how we _*actually*_ fit into a healthy society, instead of lamenting what could have or should have been.
I don't think I understand what point about AI is being made here. I guess it's that the median worker is not currently directly economically threatened by it, and interacts with it primarily in its capacity as a useful personal tool? But I don't understand what features of the current discourse would be invalidated if that were the case.
Separately, if I've correctly understood the two linked studies, they both specifically exclude ChatGPT usage by people who access it through a subscription provided by their workplace. Furthermore, according to various recent reporting, ChatGPT is currently winning in market share among consumers but trailing Claude among business customers. Both of these factors mean that the share of LLM usage for work will be understated. (It's likely that most of the work usage in the studies is by people who technically aren't supposed to be using it.)
I don't doubt that more people use LLMs for non-work than for work, and use them more often that way. But I think it's somewhat more doubtful that the majority of all LLM queries aren't for work, and even more doubtful that the majority of economic value created by LLMs is in non-work contexts.
I feel like this is sort of a strawman or blueskyman argument. Most normies are neither worried for their jobs nor concerned about AI ruining the world.
I thought this too but that's changed. Recent surveys indicate that most people are actually worried about AI.
https://poll.qu.edu/poll-release?releaseid=3955