Mad Libs: Purser v. Wertheimer
No subtweeting allowed...
Welcome to Mad Libs, an irregular debate column where our columnists, contributors, and staff writers (or perhaps even you, dear reader) duke it out over the big ideas we’re discussing in the metaphorical pages of this magazine.
Today’s Mad Libs was inspired by The Argument contributor Joel Wertheimer’s viral column “Treat Big Tech like Big Tobacco.” Wertheimer, a plaintiff’s attorney, wants to force tech companies to internalize the harms of their algorithmic recommendations that can lead to addiction, depression, and even social decay. In a provocative piece, he argued for amending Section 230 of the Communications Decency Act, thereby allowing users to sue Big Tech companies just as they sued tobacco companies of yore.
Nat Purser is a senior policy advocate at Public Knowledge, a left-leaning telecommunications think tank and advocacy organization, and she is clearly sick of hearing about Section 230. Purser strongly objected to Wertheimer’s proposed solution as yet another “one weird trick to fix the internet” (her proposed title for her piece). She argued for a viewpoint that is increasingly rare in policy discourse: There’s a lot of good stuff on the internet.
Read their back-and-forth below!
If you would like to pitch us on a Mad Libs, shoot us an email at pitches@theargumentmag.com.
-Jerusalem
Nat Purser, Public Knowledge senior policy advocate
Tech policy ideas come in waves. There was the privacy and surveillance wave, the antitrust wave, the disinformation wave, and now an AI wave.
Each reinterpreted the internet’s problems through the lens of the moment, and some combination of legal viability, political salience, and public appetite determined what stuck. So it’s striking that in 2025, we’re still seeing sweeping Section 230 reform proposals — as if the last 10 years of jurisprudence and litigation never happened.
Joel Wertheimer’s piece “Treat Big Tech Like Big Tobacco” argued that the harms of social media resemble nicotine addiction and should be treated similarly. He proposed to do this by stripping social media platforms that use reinforcement learning-based recommendation algorithms of their Section 230 immunity, letting tort law force them to “internalize” the costs of attention addiction. The comparison is arresting, but both it and the proposed solution unravel under close scrutiny.
The “active curator” fallacy
Wertheimer claims that social media companies used to be “merely passive hosts of third-party content” but are now “active curators,” and thus no longer deserve Section 230 protection. This is not true — platforms have always been active curators.
That’s part of why Section 230 was written: to encourage editorial functions like curation and moderation, while shielding platforms from being treated as publishers of user content.1 For instance, while publishers of newspapers can be sued if they run an article containing defamatory statements, platforms like Facebook or YouTube cannot be held liable for a user’s post making the same claim.
But even without Section 230 protection, moderating, ranking, and recommending content are expressive, editorial activities protected by the First Amendment, and courts have treated them that way for decades. As the Supreme Court explained in Moody v. NetChoice, “Traditional publishers and editors also select and shape other parties’ expression into their own curated speech products … [and] the principle does not change because the curated compilation has gone from the physical to the virtual world.”2
That makes Wertheimer’s call to “remove protections for platforms that actively promote content using reinforcement learning-based algorithms” a nonstarter. “Actively promoting content” is a protected First Amendment activity! Section 230 simply ensures that platforms can organize user speech without publisher liability.
There has never been a “passivity” requirement in the statute: Section 230 is what usually allows YouTube to recommend videos, Spotify to generate playlists, and Netflix to personalize film recommendations without facing lawsuits for those recommendations. And Netflix’s algorithm often gets it right — I do want to rewatch Paris, Texas.
Addiction isn’t a tort on its own
Wertheimer’s tobacco analogy doesn’t hold up under scrutiny either. The successful cigarette litigation that Wertheimer references involved well-defined torts: fraud (lying about the health harms of cigarettes), failure to warn, and misrepresentation. Meanwhile, the settlements reflected the quantifiable public health harms of cigarettes.
Addictiveness itself wasn’t the tort; it was the companies’ deceit and the demonstrable physical harm that made the litigation possible. Smoking produced physical disease, the industry lied about it, and states sought reimbursement for citizens’ tobacco-related health harms. “Scrolling addiction” is different: Any resulting harms are psychological, variable, and diffuse.
To recover damages for torts like intentional infliction of emotional distress, plaintiffs must show the harm was intentional, a direct result of the defendant’s conduct, and severe. The causal relationship between the algorithm and the harm could be difficult to prove. Unlike the link between smoke inhalation and cancer, the causal chain linking tech products to emotional harm is often unclear — did the platform cause a user to feel lonely, or did the user spend more time on the platform because they felt lonely? Making someone feel alienated who otherwise wouldn’t have is a real harm, but it may be hard to turn into a successful suit.
If your goal is to curb the harms of reinforcement learning-based recommendation systems, the real barrier isn’t Section 230 — it’s the absence of concrete, cognizable injuries that might lead to a successful tort claim.
Courts are unlikely to treat “making content too engaging” as a product defect. To succeed on that kind of claim, litigants would need to show that the algorithmic design itself predictably causes harm — a smoking-gun link akin to the medical evidence that doomed Big Tobacco.
Future studies may establish those links or they may not. If they do, Section 230 won’t block such suits. But even with Big Tobacco, the outcome was regulation of advertising and disclosure — not a ban on nicotine or cigarettes.
A better path forward
The tobacco example offers another, more useful lesson. We did not curb smoking by redesigning cigarettes — it was sustained public skepticism, pressure from lawmakers and regulators, product taxation, and public health campaigns that drove change.3
The same principle — targeted regulation, paired with broader cultural scrutiny — could make for a successful approach to social media. Personally, I think adults should retain the freedom to waste their time as they choose. However, our obligations toward children are different.
My view is that states should experiment with in-school phone bans to protect kids’ social and cognitive development. Likewise, content-neutral, minors-only feed limits — where the default is non-personalized recommendations, or where continuous-scroll apps force breaks after long sessions — could help without crossing constitutional lines.
However, as Jerusalem Demsas has written, our best strategies for managing social media addiction may lie in social incentives, not legal penalties. That doesn’t just mean discouraging screen time; it means encouraging better screen time. Spending hours reading research online or talking to loved ones over FaceTime is qualitatively different from less productive or social screen time.
You can watch high-quality shows instead of short-form slop. You can use Strava or Letterboxd to enjoy the social dimensions of your hobbies instead of watching influencer content. You can game with your friends instead of gaming with strangers.
We have agency over how we use tech — the task is to exercise it.
Joel Wertheimer, plaintiff’s attorney and The Argument contributor
I’m a bit surprised to learn that I suggested there’s one weird trick to fix the internet. I do not believe that there’s one weird trick to fix the internet.
I think that is the lesson of tobacco: Where the harms were diffuse and hard to attribute, there was no single trick. Regulation, taxation, public health campaigns, and litigation were all vital to changing America’s relationship to tobacco.
I want to make three points in response.
First, regarding the law, Purser is right that, historically, the law has given protections to firms that host and publish content regardless of whether they make active recommendations that are clearly acts of speech. This is precisely why, after Moody v. NetChoice, the 3rd U.S. Circuit Court of Appeals held in Anderson v. TikTok that the idea that recommendations were not speech was no longer tenable: “Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms ... it follows that doing so amounts to first-party speech under [Section 230 of the Communications Decency Act (CDA)], too.”
The current law in most of the country, including New York, where I am writing, is that “Merely arranging and displaying others’ content to users” through algorithms is not enough to hold platforms like Meta or TikTok liable for harms social media content might produce (see: Anderson summarizing Force v. Facebook). Most other circuits hold that all algorithmic acts of speech by content hosts are not just protected by the First Amendment (good!) but completely immunized by Section 230 (bad!).
This means that these social media platforms have protections that go beyond the First Amendment. Just as the Protection of Lawful Commerce in Arms Act usually shields gun manufacturers from lawsuits when they sell a murderer a gun, Section 230, as interpreted too broadly in influential circuit courts, grants social media companies extra protections for harmful speech acts on the internet.
Second, yes, this will impose some burdens on current recommendation engines. As Purser wrote, “Section 230 is what usually allows YouTube to recommend videos, Spotify to generate playlists, and Netflix to personalize film recommendations without facing lawsuits for those recommendations.”
My point is that if these companies choose to make recommendations without due care, users should be able to sue. We are all liable for our acts made without due care.
There’s no tort in suggesting you watch Paris, Texas. Netflix will be allowed to recommend cool movies you like and even stupid shows you don’t like. It will even be permitted to make recommendations to you that maximize your screen time.
But if a platform is aware it is causing harm, or is shown to be causing harm, then it should not be immune from suit. I am quite confident that Netflix — which creates its own content library — will be fine. Perhaps TikTok and YouTube’s recommendations will get worse — that simply strikes me as a good thing!
Ultimately, I think where we disagree is that tort lawsuits are very hard to win against big corporations. Plaintiff’s attorneys do not take lawsuits that don’t have much of a chance of winning, because we work on contingency and cannot afford to bring frivolous cases. Lawsuits are filed against pharmaceutical companies when they cause harm, but notably, those companies have not ceased to exist.
Third, I agree with Purser that proving these cases would be hard because any resulting harms would be “psychological, variable, and diffuse.” But it’s worth looking back at just how hard tobacco cases were to win.
For decades, the cigarette companies were incredibly successful in litigation, arguing that the harms were diffuse, that no tobacco company could be held liable for a particular onset of lung cancer, and that the torts were not well defined. It was only through extensive litigation that holding them liable became possible at all.
That “it was the companies’ deceit and the demonstrable physical harm that made the litigation possible,” I think, proves the point. The companies almost certainly do know that they are causing harm. The Facebook Files demonstrated at the very least that Facebook knew it was causing depression among teenagers and was attempting to get young users hooked on its platforms anyway — precisely the sorts of claims made against tobacco companies.
And I agree that to succeed, “litigants would need to show that the algorithmic design itself predictably causes harm — a smoking-gun link akin to the medical evidence that doomed Big Tobacco.” But I think we will be able to show that. Right now, in most of the country, those acts of algorithmic recommendation are protected from suit; it’s blanket immunity.
I do want to quibble with one more thing: Purser wrote that “We did not curb smoking by redesigning cigarettes.” But we have redesigned nicotine delivery systems to make them less harmful. They’re called Juul, Nicorette, and Zyn, and they cause much less harm.
It seems entirely possible to me that social media companies could make highly addictive algorithms that are not actively harming our brains. The only question is whether we will incentivize them to.
My guess is that excess Netflix consumption is not harming its users, no matter the quality of Suits as content. Endless scrolling of user-generated content that feeds on the negativity bias of its consumers, however, seems to cause harm.
Regardless, we agree litigation is not a complete solution. The problem of curbing the broadly deleterious effects that algorithmic media have on us requires a multifaceted approach chipping away at the harms.
1. Communications Decency Act, 47 U.S.C. § 230 (1996).
2. Supreme Court of the United States. Moody v. NetChoice, LLC, No. 22-277 (July 1, 2024).
3. Cummings, K. Michael, and Robert N. Proctor. 2014. “The Changing Public Image of Smoking in the United States: 1964-2014.” Cancer Epidemiology, Biomarkers & Prevention 23 (1): 32-36.