AI's biggest critic has lost the plot
Ed Zitron vs. reality

We’re taking The Argument to San Francisco! On May 13, Kelsey Piper and I are debating a question that feels unavoidable right now: Is AI actually changing how science gets done, or are we in the middle of a very expensive illusion? I’m bullish; she’s skeptical.
And you won’t just be watching. You’ll get to join in on the argument, too.
Join us May 13 at The Chapel from 7 to 10 p.m. Come argue with us! RSVP here.
- Jerusalem Demsas, Editor-in-Chief
Ed Zitron thinks that AI is a bubble.
The tech columnist — whose newsletter reportedly has 80,000 subscribers and who has bylines in The Atlantic and The Guardian — first made his case in March 2024, quoting analysts who were saying things like we were “at the peak of the hype cycle around Large Language Models and other generative AI.”
He doubled down that summer: AI was a bubble and it wasn’t clear anyone but Nvidia would make money. Zitron is today one of the most prominent people, and certainly the most prolific person, making this case — but while I’m glad someone’s making it, reading through several hundred thousand words of his recent coverage on the topic left me wishing the case were being made better.
When you read “AI is a bubble,” think of the dot-com boom of the late 1990s: Yes, the internet was going to be a big deal, but valuations soared for specific companies that had small or speculative revenue, often on the assumption that they would capture the value the internet would one day deliver. They didn’t, their stocks crashed, and the invested money was mostly lost. The internet was as big as imagined — bigger, even — but Pets.com didn’t survive to see it.
In 2024, Ed Zitron was hardly alone in wondering if AI would take this route; it seemed plausible to me too. Models like GPT-4 were tantalizing mostly because of what they suggested might be possible in the future, rather than for their direct economic utility. If building bigger models didn’t pan out, it was easy to imagine that we’d see some bankruptcies.
But time passes and situations evolve. Ed Zitron, though, clearly does not.
Over the last two years, he has called the top repeatedly: The AI bubble was definitely about to burst here, and here, and here, and here, and here, and here. His conclusion hasn’t changed, but his arguments have.
The 2024 and 2025 articles make, basically, the business case against AI: that companies aren’t really using it, it isn’t adding value, and AI investors are betting that will change before they run out of cash. In 2026, the focus is much more on alleging widespread, Enron- or FTX-tier outright fraud.
This is basically an admission that he can’t make the case in terms of the economics anymore. And in deciding how seriously to take his case in 2026, I think it’s valuable to read it in parallel with his case from 2024 and 2025.
“Have we reached Peak AI?” he asked on March 18, 2024. “Things are beginning to unwind in the most annoying bubble in history,” he told us on April 21, 2026. Let’s compare the two articles.
In 2024, Zitron’s coverage of the Bubble Question was rich with admissions from businesses that they weren’t really using AI yet and did not expect it to have significant impacts on their revenues. He quoted from earnings calls in which companies said that AI-related business impacts were zero. In order to be profitable in the future, he pointed out, AI would have to get a lot better — was there any reason to think it would?
Zitron repeatedly made a specific prediction that it would not and could not. “Generative AI,” he wrote in the summer of 2024, “is peaking, if it hasn’t already peaked. It cannot do much more than it is currently doing, other than doing more of it faster with some new inputs. It isn’t getting much more efficient.”
He claimed that “[former OpenAI CTO Mira] Murati and [OpenAI CEO Sam] Altman’s futures depend heavily on keeping the world believing that development and improvement of their models’ capabilities will continue a rapacious pace of progress that has unquestionably slowed,” (emphasis added).
Every part of that was correct except the last clause; that bit hasn’t aged well. By almost every metric, AI progress from 2024 to 2026 has been much faster than AI progress from 2022 to 2024. At the same time, the cost of querying a model at a given capability level has plummeted. GPT-4-level intelligence is about 1/1000th as expensive as it was when GPT-4 was released.
The massive increase in efficiency and decrease in cost fundamentally change the economics of AI, of course. Vastly more uses of AI are now economical, the most impressive things you can do with AI have become far more impressive, and the number of companies that are using and paying for AI has exploded. About 30% of Fortune 500 companies have enterprise deals with one of the leading AI startups. More than half of Americans use AI chatbots weekly or more often.
In the face of all that, you might expect that a 2026 “AI is a bubble” article would have more work to do. It can no longer argue that costs aren’t falling; they are. It can no longer argue that businesses don’t use it; they do. There is still a case, but it would need to argue either that:
at existing market share, AI can’t justify its build-out, so even if market share is maintained the companies will go bankrupt; or
existing use is just a fad.
Zitron gestures at both arguments occasionally but never really bothers to make either.
Instead, as he’s grown more prominent, his work has become less concrete. His writing has always been a mix of “Here’s an interview in which an AI executive looks kind of silly,” “Here’s why the economics of the business make no sense,” and accusations of outright fraud.
But as the economic case gets harder to make, the silliness argument and the accusations of fraud make up a larger and larger share of the articles. There is, of course, plenty of silliness: easy questions the models fail at, dumb things executives say in interviews, mismanaged product launches.
But this flatly does not matter except to add color and interest to the economic case against the companies. The bubble case simply must turn on revenue projections.
You could just as easily write an article about the absurdities of Amazon — its missteps and foibles, how amateur some executive sounded in some press appearance, or the periodic findings of unsafe products and recalls. None of that would mean that Amazon is a bubble, though, because it makes a lot of money.
Maybe everything is a lie?
So if the “companies aren’t using it” case no longer stands up, and the “it will not get more efficient” case doesn’t stand up, what remains?
One answer, according to Zitron, is fraud.
At the end of March, OpenAI announced it had raised $122 billion, writing in their funding announcement: “OpenAI was the fastest technology platform to reach 10 million users, the fastest to 100 million users, and soon the fastest to 1 billion weekly active users. Within a year of launching ChatGPT, we reached $1B in revenue. By the end of 2024 we were generating $1B per quarter. We are now generating $2B in revenue per month.”
Zitron’s theory is that OpenAI is straightforwardly lying. He analogized them to FTX and said “While I don’t have proof, I would bet that OpenAI likely includes these free tokens in its revenues and then counts them as part of its billions of dollars of sales and market spend.”
It is a straightforward crime to claim $2 billion in monthly revenue if you mean that you are giving away services that would have a $2 billion market value.
Now, of course, “OpenAI is a massive fraud, like FTX, and its revenue numbers are not real” is a stance you can take. But I would like Zitron to admit what a change in posture that is from 2024!
In 2024, the argument was simple: These tools do not produce very much value, and they’re very expensive to build, so unless the company is right that the tools are going to massively improve soon, they’re going to run out of investor money.
That argument was true, but the company was right! The tools did massively improve soon after, and now OpenAI is approaching 1 billion weekly active users and reports $2 billion in revenue per month.
The bear case used to be “they’re taking a big bet on models continuing to improve, but we don’t know if they’ll win that bet,” and now it’s “they claim to have decisively won that bet, but they could be lying.”
His theory of Anthropic’s revenue numbers is also that they’re lying, apparently just because the numbers are so big that they have to be lying: “How Did It Go From $700 million in monthly revenue in December 2025 to $2.3 to $2.5 billion in April 2026?” He admitted one other possibility, which is “that Silicon Valley is effectively subsidizing Anthropic through an industry-wide token-burning psychosis” (he does not consider, even to disagree with it, the possibility that the industry is paying for Anthropic’s product for non-psychosis reasons, such as finding it useful).
That’s not his whole case that AI is a bubble, but the rest is even thinner. Gone are the business experts who don’t use the product. The business experts now do use the product, so in the place of their commentary, we have Zitron’s own on why he considers all Anthropic’s enterprise deals to be vaporware: “I don’t see much evidence of Anthropic creating custom integrations that actually matter or — and fuck have I looked! — any real examples of businesses ‘doing stuff with Claude’ other than making announcements about vague partnerships.”1
In addition to not being a real business because no companies are using it, it’s not a real business because users use it too much. Zitron wrote, “OpenAI’s ChatGPT Subscriptions are, like every LLM product, deeply unprofitable, which means that OpenAI needs constant funding to keep providing them. I have found users of OpenAI Codex who have been able to burn between $1,000 and $2,000 in the space of a week on a $200-a-month subscription, and OpenAI just reset rate limits for the second time in a month. This isn’t a real business.”
Many, many products have a business model in which the overwhelming majority of users are profitable to serve, but some hardcore users use the product so much that serving them loses money. Your typical all-you-can-eat restaurant is profitable on the typical lunchgoer but probably loses money on the guests with the largest appetites. Yet Zitron seems to consider the existence of an unprofitable user to be proof that the whole business is “deeply unprofitable.”
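The all-you-can-eat logic is easy to make concrete. Here is a minimal sketch of that kind of subscription economics, with cohort sizes and serving costs that are entirely made up for illustration (none of these figures come from OpenAI):

```python
# Sketch of subscription economics in which most users are profitable but a
# few heavy users lose money. Every number here is invented for illustration.

PRICE = 200  # monthly subscription price, in dollars

# (number of users, monthly cost to serve each user) -- hypothetical cohorts
cohorts = [
    (9_500, 40),    # light users: cheap to serve, very profitable
    (450, 180),     # moderate users: still profitable
    (50, 1_500),    # hardcore users: individually deeply unprofitable
]

revenue = sum(n for n, _ in cohorts) * PRICE
cost = sum(n * per_user for n, per_user in cohorts)
profit = revenue - cost

print(f"revenue ${revenue:,}, cost ${cost:,}, profit ${profit:,}")
# revenue $2,000,000, cost $536,000, profit $1,464,000
```

Under these made-up numbers the business as a whole clears a healthy margin even though the heaviest cohort, viewed on its own, loses money on every single user — which is why a few visibly unprofitable Codex power users tell you very little about the whole business.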
There are more serious analyses by other people on the same topic. Epoch AI has an in-depth analysis of the same financial questions from the same public information — and the result was not necessarily rosy or hype-friendly for the AI companies.
In order to fully understand OpenAI’s finances, the analysts looked at the costs and revenues over the whole “life cycle” of the GPT-5 model. How much did it cost to train GPT-5? How much was monthly revenue from serving GPT-5, through subscriptions and the API and enterprise deals and so on?
Their conclusion was that OpenAI made money serving GPT-5 but did not make enough to pay back all of the costs to the company of training the model in the first place before it was replaced by the next generation.
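That framing reduces to a simple lifecycle calculation: serving can be profitable month to month while never repaying the training bill before the model is retired. A minimal sketch, with placeholder figures I invented for illustration (these are not Epoch’s or OpenAI’s actual numbers):

```python
# Lifecycle P&L for a single frontier model. All figures are hypothetical
# placeholders chosen to illustrate the shape of the problem, not real data.

training_cost = 3_000_000_000        # one-time cost to train the model
monthly_revenue = 400_000_000        # subscriptions + API + enterprise deals
monthly_serving_cost = 250_000_000   # inference and operations
months_in_service = 14               # retired when the next generation ships

# Each month in service is profitable on its own...
serving_margin = (monthly_revenue - monthly_serving_cost) * months_in_service

# ...but the cumulative margin never catches up to the training bill.
lifecycle_profit = serving_margin - training_cost

print(f"serving margin ${serving_margin:,}")      # $2,100,000,000
print(f"lifecycle profit ${lifecycle_profit:,}")  # negative: training not repaid
```

The point of the exercise is that “OpenAI made money serving GPT-5” and “GPT-5 never paid for itself” can both be true at once, depending on how long the model stays in service before the next one replaces it.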
You could, if you wanted, use this as an argument that OpenAI is on shaky financial ground: The pressure to come out with the next generation of models and stay in the lead is so intense that models are retired before the company has actually turned a profit on them. Certainly, if I were thinking about investing in OpenAI, I would want to think about when that is expected to turn around.
Is the competitive pressure going to get any less intense? The training runs are going to keep getting pricier, right? Is this a winner-takes-all game where the only thing that matters is who can stay in the game the longest until its competitors go bankrupt? And if so, is there reason to think OpenAI will be the one to stay in the game the longest?
But Zitron is unable to make that argument because he’s too committed to the argument that, actually, none of the activities he surveys are profitable at all, that it’s not just a winner-takes-all game but a game without winners.
He is convinced that the companies are lying about their revenues and that they’re lying about their margins and that they’re lying when they separate training costs out from inference costs at all. He’s found too many grounds for dismissing all the financial information we have as dishonest or irrelevant to seriously engage with what any of it would imply if it were true.
Which means that nothing he has to say is actually useful if you think that these companies are probably not frauds but might still be taking major gambles on their capital build-out.
We’re not headed for March 2000
AI — not future, speculative AI, but AI as it exists today — has nonspeculative economic value. I pay $100 per month for Claude because Claude provides me with comfortably more than $100 per month of economic value.
In 2024, “it’s mostly useful as a curiosity or for cheating on college assignments” was at least a plausible gloss; in 2026, AI is useful for a huge range of tasks, and the overwhelming majority of people who are paying for it are doing so because they are getting a service of economic value in exchange.
Zitron seems to believe that this is not true. It’s hard to tell for sure, because he never directly says that it’s false.
“Nobody wants to talk about the fact that AI isn’t actually doing very much,” he complained, before going on to complain about people saying that agents are able to do tasks independently with oversight. “What tasks, exactly? Who knows!” he wrote.
Ed, thousands of people know and it is your journalistic responsibility to be one of them!
I know, because I test the agents regularly myself, something it’s not clear to me Zitron does despite routinely writing hundreds of thousands of words about what they are and aren’t capable of. Here are the sort of tasks I ask Claude to do:
Reproduce academic papers
Put coding projects online for me so I can share them with friends
Determine which books in a set are missing from the school library and find where they’re cheapest online
Figure out which soccer club the team I see practicing at the local rec center belongs to and how to register my son
Design a bunch of robot-themed handwriting activities for a kindergartner who needs to practice making his uppercase and lowercase letters distinct
Mostly, of course, they’re useful for coding. And coding is big business. Software development employed 1.9 million Americans in 2024, plus many more people overseas. The median American software engineer made $130,000 that year, which works out to around $250 billion per year just in salaries for software engineers, testers, and QA staff.
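For what it’s worth, the arithmetic behind that figure checks out as an order-of-magnitude estimate:

```python
# Back-of-envelope check of the ~$250 billion salary figure. Multiplying the
# median salary by headcount is only a rough proxy for the true total, since
# the mean salary need not equal the median.

engineers = 1_900_000     # US software developers employed in 2024
median_salary = 130_000   # median US software engineer salary, dollars/year

total = engineers * median_salary
print(f"${total / 1e9:.0f} billion per year")  # $247 billion per year
```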
Making them more productive is a big deal, and in 2026, AI makes them more productive.
Zitron can’t really contest this with contemporary data, so he cites 2024 and 2025 studies of much weaker AIs with much weaker productivity impacts.
At some point, pretending that how people use AI is a complete mystery is just lying to your audience. And at some point, Zitron’s “layers of skepticism” attitude — where he is skeptical that AI is a thing at all, that it has any uses, that those uses provide any economic value, that the revenue numbers are real, that adoption is a fad, that training costs are a meaningful R&D expense, that the capital build-out is going to happen at all, that the market could sustain the capital build-out if it happened2 — leaves one buried in too many impossibility assertions to actually sort them by plausibility.
It is radical skepticism, ultimately arriving at “perhaps nothing we see is real,” rather than principled skepticism about the relatively weakest links in the companies’ case for investment.
Zitron’s skepticism would be more useful if he accepted the fact that people are widely using AI agents for coding and paying money for this out of rational economic self-interest, then thought carefully about whether coding, alone, is a large enough market to justify the whole planned capacity build-out. (It’s probably not, if you’re pessimistic about AI continuing to improve.) He should also ponder whether OpenAI, in particular, is stuck hoping Anthropic runs out of money first and vice versa.3
Instead, his arguments are full of circular dependencies, each accusation of fraud reinforced by the other accusations of fraud. Once the revenue numbers have been dismissed as fraudulent, then investment planning based on those revenue numbers can be dismissed as a game of pretend since the company will never have the money to pay for it. Then the fact that someone takes those numbers seriously can constitute proof of their complicity.
I don’t actually think we need less skepticism in AI world. These companies are, indeed, run by people who are not very trustworthy, who often contradict each other or oversell their products.
And the things they say they’re trying to do are outrageous; people have every right to object to it. Skepticism is more than warranted.
But we desperately need better skepticism.
Recommended reading:
AI could destroy the labor market. We already know how to fix it.
Stop overthinking this. In reality, the most boring, well-established social democratic policy approaches will work perfectly fine to address AI-induced job displacement.
Karine Jean-Pierre is not a #GirlBoss
A qualified defense of strivers buried in a withering critique of former President Joe Biden's press secretary.
Amazon’s customer service bot is now powered by Claude, as are thousands of others across the industry; many of the rest are powered by ChatGPT or Gemini. Virtually all transcription and notetaking apps are powered by one of these enterprise partnerships. And, of course, a lot of these companies are just using Claude to write code.
This, in particular, struck me as a huge missed opportunity for more useful skepticism as I read through Zitron’s oeuvre. He consistently argues that the companies made capital investments that only make sense if demand grows insanely fast — faster than it will realistically grow — so they’ll be in a terrible position when the compute comes online and they can’t sell it at prices that justify the investment.
An interesting argument that might well be true!
He also consistently argues that the reported capital investment simply isn’t happening, with many promised data centers delayed and some projects cancelled. This is also true to some degree.
But these arguments plainly cut against each other: The capital build-out will be smaller and slower than the companies wanted, which should save them from the problems Zitron predicted for them or at least mitigate the problems.
From this set of beliefs, you could, in fact, defend a delightful bespoke AI bubble take: that AI would have been a catastrophic investment bubble, but the AI companies were saved from their mistakes by the determined NIMBYs of America killing off the excess data center build-out.
But that’s not Zitron’s stance. He seems to count “the build-out is too aggressive” and “the build-out is not happening as planned” as two independent strikes against AI — both things that show it’s bad, and the more of those he finds, the worse it is.
It’s a superficiality of analysis — an indifference to the interlocking pieces of the complex systems he describes — that he didn’t display a couple of years ago. I miss the older approach.
OpenAI and Anthropic are in a competition that could easily bankrupt the loser. They’re releasing models on a pace that is, according to the Epoch AI analysis, slightly too fast to recoup the costs of training them, and they’re mostly doing this to keep up with one another.
Consumers are the beneficiaries of this race (unless the companies invent an AI that can kill us all), so I don’t lose too much sleep over it, but it is certainly not guaranteed to end with both companies succeeding.
It could easily end with them both failing, especially if Google takes and holds the lead, or if an open-source model as good as Claude and ChatGPT were released. Though in that case, I would think compute providers would still do very well for themselves serving the open-source model — how much of their capacity to provide compute is locked up with specific companies?





Unfortunately, there’s a very large audience of people looking to confirm their priors that AI is useless and/or fraudulent and/or unethical and/or ruining the youth and/or hallucinates and/or, and/or, and/or ….