"AI — not future, speculative AI, but AI as it exists today — has nonspeculative economic value. I pay $100 per month for Claude because Claude provides me with comfortably more than $100 per month of economic value."
Golly, how the goal posts have moved! I thought AI was going to cure cancer. I thought AI was going to result in 20% a year GDP growth, at a minimum. I thought AI was going to end death. Where's all that stuff? This is the point that AI hypemen can't respond to: the outsized claims about what AI is going to do have not been met by demonstrations of what AI can do. It's still all speculative. And we're nowhere close to justifying the economic investment involved. Who cares if you're get $100 of value out of Claude, if as a society we're paying far more than we get back?
Freddie you're doing the thing again where you lump a bunch of people with different opinions together and call it goalpost moving, when in fact it's... a bunch of people with different opinions. Kelsey never claimed that AI would cure cancer, in fact she has noted in one of her last articles that AI has not yet really begun to speed up scienctific research.
I don't recall being asked "as a society" to pay anything. Investors are investing as they usually do. If it doesn't pay off, they made a bad investment. They didn't ask society if they should invest their money, but then I don't expect them to.
There were two sets of goalkeepers, one saying “AI will someday create 20% GDP growth” and the other saying “AI will never create any economic value at all”. AI has already grown to earn tens of billions. This is obviously compatible with one of those goals and not the other!
Unfortunately, there’s a very large audience of people looking to confirm their priors that AI is useless and/or fraudulent and/or unethical and/or ruining the youth and/or hallucinates and/or, and/or, and/or ….
I am not an AI skeptic and use it for work and pleasure every day. That said, the bullet point list of what you pay for AI to do makes little sense to me. Finding out about a soccer club or missing books could be done much more easily by “asking someone” or “looking at the books.” I can’t figure out what kind of analysis you were even doing there.
I can't speak for Kelsey, but unless I have a specific website in mind whose name temporarily escapes me I don't bother with Google anymore. Claude is just better and faster than manual search. Upgraded Google is not worth $100/month, of course.
But if you’re looking at your homeschool library you can see what books are missing. If you’re watching a soccer team play you can ask someone “hey what team is this?”
I think the persistent skepticism of AI’s value is downstream of skepticism of the value of the “knowledge work” economy overall.
If you believe that most white collar employment is a kind of theater (the David Graeber “Bullshit Jobs” thesis), then automating white collar work will feel valuable to white collar workers and yet also not create much in terms of economic benefits to people at large.
I’m not as skeptical as Zitron, but it does seem like in seeing how this progresses we may learn if all those emails and PowerPoints were ever creating economic value after all. I expect some were and some weren’t.
As someone who generally likes Graeber's other work, he has a weird hate for delivering pizza in a way that makes me assume he just didn't like pizza very much. This, amazingly, calls him out as committing your Thesis #2 over the course of hundreds of pages.
Kinda surprised all this newfound attention on Ed hasn’t causes more folks to stumble upon his, umm, dubious business practices as a “fake it til you make it” PR person….
Fwiw the Internet bubble of the late 90s was driven substantially by the phenomenon of weak/poorly conceived and/or poorly run businesses attaching ". com" to their name and receiving a windfall of investment. I was CFO of a small company in that time we were seeking capital to expand and both our investment banker and the investors we talked to literally said "we'll give you a 5 times bigger multiple if you can make the case that this is actually an online business" [which it wasn't].
This is the same logic of "no doc loans" as shown in The Big Short.
In both cases, the wasted money is going to the "wrong" growing business, not to a whole business category that doesn't make sense.
There's other ways for bubbles to form, but that's not what's happening with AI, so I think the.com analogy is poor. People aren't throwing investment at catchy startups because they think they'll dominate a new category by "AI-ing" it.
There are plenty of reasons to be skeptical that we are on the verge of AGI, which is conventional wisdom in Silicon Valley at this point. So it’s deeply frustrating that many of the most prominent skeptics are so extreme and “head in the sand” about current AI capabilities. The discourse doesn’t elevate nuanced positions.
My company is working through an enterprise agreement with Anthropic and I (head of tax) am one of a group of people testing out Claude enterprise. It’s almost magical. Claude Code built out the full tax return workpapers for one of our jurisdictions. Claude Cowork is solving major issues for us on the tax provision. Their excel plugin (sans Cowork/Code) has been rolled out across finance and is a game changer in and of itself. I’ve been using Claude Pro for personal projects for a while now, and it’s amazing to see how much better these models are getting in real time.
I appreciate Kelsey Piper’s focus in this piece. I think there are a range of questions about AI that get conflated into one big up or down question. Is there an investment bubble in AI? Is AI a “normal” (general purpose and potentially transformative) technology that is subject to the same implementation frictions at bottlenecks as past technologies? Will AI inevitably be more transformative than past transformative technologies (e.g., the printing press, electrification, the green revolution, the internet)? Does AI’s capacity for recursive self-improvement make exponential improvement in AI models inevitable? These are different questions that deserve different answers, even if the answers to some questions have bearing on the answers to others.
I know you're being sarcastic, but when it comes to AI, it may as well be decades ago. If you're using a study on GPT-3.5 to claim that AIs in 2026 are dumb and useless, you're not a serious person and have nothing to say about the topic that's worth considering.
ChatGPT 3.5 was superseded by GPT-4 in March 2023, so I’m not sure why that’s relevant to the question of studies from 2024 or the distant past of (gasp) five months ago.
"AI — not future, speculative AI, but AI as it exists today — has nonspeculative economic value. I pay $100 per month for Claude because Claude provides me with comfortably more than $100 per month of economic value."
Golly, how the goalposts have moved! I thought AI was going to cure cancer. I thought AI was going to result in 20% a year GDP growth, at a minimum. I thought AI was going to end death. Where's all that stuff? This is the point that AI hypemen can't respond to: the outsized claims about what AI is going to do have not been met by demonstrations of what AI can do. It's still all speculative. And we're nowhere close to justifying the economic investment involved. Who cares if you're getting $100 of value out of Claude, if as a society we're paying far more than we get back?
Freddie, you're doing the thing again where you lump a bunch of people with different opinions together and call it goalpost moving, when in fact it's... a bunch of people with different opinions. Kelsey never claimed that AI would cure cancer; in fact, she noted in one of her recent articles that AI has not yet really begun to speed up scientific research.
I don't recall being asked "as a society" to pay anything. Investors are investing as they usually do. If it doesn't pay off, they made a bad investment. They didn't ask society if they should invest their money, but then I don't expect them to.
There were two sets of goalposts: one crowd saying "AI will someday create 20% GDP growth" and the other saying "AI will never create any economic value at all." AI has already grown to earn tens of billions. This is obviously compatible with one of those claims and not the other!
The frontier labs aren't finished yet.
Unfortunately, there’s a very large audience of people looking to confirm their priors that AI is useless and/or fraudulent and/or unethical and/or ruining the youth and/or hallucinates and/or, and/or, and/or ….
I am not an AI skeptic and use it for work and pleasure every day. That said, the bullet point list of what you pay for AI to do makes little sense to me. Finding out about a soccer club or missing books could be done much more easily by “asking someone” or “looking at the books.” I can’t figure out what kind of analysis you were even doing there.
I can't speak for Kelsey, but unless I have a specific website in mind whose name temporarily escapes me I don't bother with Google anymore. Claude is just better and faster than manual search. Upgraded Google is not worth $100/month, of course.
But if you’re looking at your homeschool library you can see what books are missing. If you’re watching a soccer team play you can ask someone “hey what team is this?”
I think the persistent skepticism of AI’s value is downstream of skepticism of the value of the “knowledge work” economy overall.
If you believe that most white collar employment is a kind of theater (the David Graeber “Bullshit Jobs” thesis), then automating white collar work will feel valuable to white collar workers and yet also not create much in terms of economic benefits to people at large.
I’m not as skeptical as Zitron, but it does seem like in seeing how this progresses we may learn if all those emails and PowerPoints were ever creating economic value after all. I expect some were and some weren’t.
There are two parts to the bullshit jobs thesis:
1) "an increasing number of people feel like their jobs are bullshit." That seems plausible, but is very complicated.
2) "a large share of jobs are actually bullshit." This is just juvenile ODD, a form of Gell-Mann amnesia that flatters academics and creative types.
As someone who generally likes Graeber's other work: he has a weird hatred for pizza delivery, in a way that makes me assume he just didn't like pizza very much. And "Bullshit Jobs," amazingly, calls him out as committing your Thesis #2 over the course of hundreds of pages.
Kinda surprised all this newfound attention on Ed hasn't caused more folks to stumble upon his, umm, dubious business practices as a "fake it til you make it" PR person....
https://archive.nytimes.com/publiceditor.blogs.nytimes.com/2013/07/03/no-quid-pro-quo-by-youre-the-boss-writer-but-intrigue-anyway/
FWIW, the Internet bubble of the late '90s was driven substantially by weak, poorly conceived, and/or poorly run businesses attaching ".com" to their name and receiving a windfall of investment. I was CFO of a small company at that time, and we were seeking capital to expand; both our investment banker and the investors we talked to literally said, "we'll give you a 5 times bigger multiple if you can make the case that this is actually an online business" [which it wasn't].
This is the same logic of "no doc loans" as shown in The Big Short.
In both cases, the wasted money is going to the "wrong" growing business, not to a whole business category that doesn't make sense.
There are other ways for bubbles to form, but that's not what's happening with AI, so I think the .com analogy is poor. People aren't throwing investment at catchy startups because they think they'll dominate a new category by "AI-ing" it.
There are plenty of reasons to be skeptical that we are on the verge of AGI, which is conventional wisdom in Silicon Valley at this point. So it’s deeply frustrating that many of the most prominent skeptics are so extreme and “head in the sand” about current AI capabilities. The discourse doesn’t elevate nuanced positions.
My company is working through an enterprise agreement with Anthropic, and I (head of tax) am one of a group of people testing out Claude enterprise. It's almost magical. Claude Code built out the full tax return workpapers for one of our jurisdictions. Claude Cowork is solving major issues for us on the tax provision. Their Excel plugin (sans Cowork/Code) has been rolled out across finance and is a game changer in and of itself. I've been using Claude Pro for personal projects for a while now, and it's amazing to see how much better these models are getting in real time.
Jensen Huang says he expects employees to spend 2x their salary on tokens, so 50% psychosis maybe.
I appreciate Kelsey Piper’s focus in this piece. I think there are a range of questions about AI that get conflated into one big up-or-down question. Is there an investment bubble in AI? Is AI a “normal” (general purpose and potentially transformative) technology that is subject to the same implementation frictions and bottlenecks as past technologies? Will AI inevitably be more transformative than past transformative technologies (e.g., the printing press, electrification, the green revolution, the internet)? Does AI’s capacity for recursive self-improvement make exponential improvement in AI models inevitable? These are different questions that deserve different answers, even if the answers to some questions have bearing on the answers to others.
it’s surreal how much of a value add the “cynical British accent” is for Liberal Elites
Studies from 2024 and 2025?! That’s like literally decades ago!
That would never pass muster in any other field. Is he some kind of moron?
I know you're being sarcastic, but when it comes to AI, it may as well be decades ago. If you're using a study on GPT-3.5 to claim that AIs in 2026 are dumb and useless, you're not a serious person and have nothing to say about the topic that's worth considering.
GPT-3.5 was superseded by GPT-4 in March 2023, so I’m not sure why that’s relevant to the question of studies from 2024, or from the distant past of (gasp) five months ago.