TBF, screw this obsession with 230. Just tax them!
It’s nice to see someone suggesting amending Section 230 instead of claiming that it already says something it doesn’t. Prior Section 230 discourse always seems to get hung up on content moderation. Focusing on the content that algorithms are actively pushing seems like a much stronger argument, although, as you point out at the end, it’s not a simple problem. There’s a meaningful difference between choosing not to remove a post and choosing to insert it into someone’s feed.
Really not feeling the solution to big tech is to bury them in lawsuits. I know lawyers salivate at the possibility, but most normal people don't think the issue is too few lawsuits.
The problem is addictive content. It could be that legalizing lawsuits is the way to mitigate this problem. It may be that something else is. But breaking up big tech doesn’t actually seem helpful in any way, since we’d just end up with a few dozen competitors sucking at our attention rather than a few.
I mean, it works. Meta is facing lawsuits over teen suicides that followed sextortion on Instagram, so now they're going to start age-gating.
I couldn't help but think while reading, "why not simply ban black-box algorithmic recommendations?" I don't think algorithms have free speech rights.
I guess I'm less sold on removing 230 protection for social media/algorithmic content pushes as a recourse (to redress their lower-level harms) than I am on taxing the related advertising revenue streams, but I'm also more inclined to think there should be easier liability tied to LLM harms. Maybe that is inconsistent.
Why not both? The litigation is really aimed at the most critical threats - self-harm, sexual deepfakes, threats. But there is a much more banal threat that all of us face - the time suck of infinite scroll, which takes away from family, friends, and hobbies, makes us mad and anxious, and generally leaves society worse off. A tax can help discourage time spent on the scroll.
I was basing this on my perception of American law: social media is not a free-for-all where you can say whatever you want, and I think LLMs should be held more accountable than social media.
"Chess" is not a solved game. "Chess with 7 pieces left on the board" is a solved game. Big difference between actually solving something and just having computers be better than humans at it.
https://en.wikipedia.org/wiki/Solving_chess
Similarly, heads-up poker is solved, but multi-way is not. Poker is difficult to solve because humans will often get into multi-way situations that should never occur for a computer given the specific holdings, so good luck getting game-theory guidance.
Curators are collage artists. Their medium isn't paint, or stone, or any other raw material, but the finished work of other artists. When a content curator offers a collection of media, the collection itself is the aesthetic creation of that curator. They're like fashion stylists who create collections of garments and accessories which form distinct identities, beyond the characteristics of any of their components.
Algorithms are nothing more than automated curators, and the feeds they generate are nothing if not original, characteristic content, authored by software — software which, itself, was ultimately coded by human beings, on behalf of a media company, and published for use by that company.
So why should producers of this sort of *original content* — that is, the algorithm-generated content feeds — be immune from liability for harm caused by their consumption?
This is really a very important question!
What if their Communications Decency Act Section 230 (CDA 230) protections really were removed? Would platforms be compelled to offer only content produced by others, algorithm-free, in an environment which would continue to shield them from liability? Would it be so bad if algorithmic feeds were replaced by directories which users would need to consult manually to find what they were looking for?
A solution like this would return the role of curation to users who would not only continue to be free to consume what they like but also, additionally, to *choose* the media they want to see, based on their own criteria and not the goals of content providers like Facebook or YouTube, who want only to harvest maximum audience attention — or else, insidiously, to direct audiences toward persuasive media, for the purpose of influencing them…
I think we should have timers be required on basically all apps. How long you've spent in the app this session should be displayed at the top or bottom of your screen. All apps should include a limit setting, where you can choose how many minutes per day or session you'll allow yourself in the app before it kicks you out. I'd be happy to make these limits on time spent in the app mandatory for people under 18.
The great thing about making this time-based is that you don't run into the free speech problems at all, because it's content neutral. A timer has no opinion on whether your hour of TikTok was well spent; it will just stop you from spending more than an hour there.
I have set this on my social media apps. It has reduced my time spent on them considerably.
Even better if the timer has to be set at the beginning of the session and can’t be reset until you close the app and spend at least five minutes outside of it.
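Mechanically, that rule is simple enough to sketch. Here's a rough illustration in Python (names and numbers are hypothetical, just to show the shape of the idea):

```python
# Rough sketch of the proposed rule: a per-session limit chosen up front,
# which can't be re-armed until the app has been closed for five minutes.
# (Illustrative only; not from any real app.)
import time

COOLDOWN_SECONDS = 5 * 60  # time you must spend outside the app before a new session

class SessionTimer:
    def __init__(self):
        self.session_ends_at = None  # when the current session's limit runs out
        self.closed_at = None        # when the app was last closed

    def start_session(self, minutes: int) -> bool:
        """Arm the timer at the start of a session; refuse if still in cooldown."""
        now = time.time()
        if self.closed_at is not None and now - self.closed_at < COOLDOWN_SECONDS:
            return False  # closed less than five minutes ago: no new session yet
        self.session_ends_at = now + minutes * 60
        return True

    def time_remaining(self) -> float:
        """Seconds left in the current session; 0 once the limit is hit."""
        if self.session_ends_at is None:
            return 0.0
        return max(0.0, self.session_ends_at - time.time())

    def close_app(self) -> None:
        """Record closing time so the cooldown can be enforced."""
        self.closed_at = time.time()
        self.session_ends_at = None
```

The point of the cooldown is just that re-arming the timer has to cost you a real break, so "one more hour" can't be granted with a single tap.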
I never really got on board with the “externality” argument. Externalities are costs not borne by the producers *or consumers*, and I don’t know why the author left the second group off in his definition. Children aside (they’re a special class and don’t have rights in many contexts), if somebody uses AI to develop psychosis, they have entirely internalized the cost of their consumption. If they get treatment they can’t afford or have a mental break and hurt somebody else, that would be an externality, but the article is short on examples of this, even conceptually.
And it’s important: another way you could tell the history of tobacco legislation is that nonsmokers reached critical mass and pushed smokers out of public spaces. The health risks of secondhand smoke provided a scientific justification, but a lot of people were just annoyed by the smoke. What mechanism would cause me to want to stop someone from scrolling Instagram in their room alone, besides some entirely theoretical anxieties on my part?
If it’s legal to drink alone in your room, base jump, or join the Church of Scientology, what are we protecting people from that’s so much worse? The only thing Reels have on them is scale.
I think it depends on how you conceptualize the consumer. There’s some sense in which a hangover is internal, because the human body that drinks is also the human body that feels the pain. But there’s another sense in which it’s an externality, because the day you suffer is not the day you make the decision - it’s effectively somebody else suffering from the point of view of the self having a good time at the party.
That’s not a classic externality, and would justify banning a wide range of human behaviors that are much more personally destructive than looking at your phone.
It's not a *classic* externality, because classic treatments treat humans as atomic units that have no internal conflicts. But conceptually, it's exactly the same thing.
I don't think externalities usually justify *banning* things - they usually justify regulating or taxing them in ways that make the one who benefits pay to repair the harms to the one who suffers the cost. There are a lot of light touch regulations that can do that effectively when it's the same human body involved on both sides - things like requiring waiting periods, or allowing people to set their own limits in ways that will be externally enforced (with a waiting period for changing those limits).
Conceptually, it’s not the exact same thing. Every preference has some time bias: if I save money, I’m trading off happiness today for more in the future. Other completely normal, healthy preferences run the other way. Externality theory is about harms that are felt by people who never consented to an agreement, not people who expressed a time preference.
It’s frankly not the government’s job to decide how much people should discount their futures. Partly because we all have different values: to a conservative, remaining single into your thirties might unfairly take advantage of future you. But partly because plenty of people don’t have an addictive relationship to their hobbies, and none of the costs are being pushed onto others.
You're still thinking as though there is one person who exists at both times who implicitly consented to the treatment at the later time because the person at the earlier time freely chose to do it. My point is that sometimes, the earlier self and the later self agree that it was better to spend the money early rather than save it for later, or agree that it was better to save the money for later rather than spend early, but other times, the early self and the late self disagree. When they disagree, letting one of them make a decision that affects both of them just is imposing an externality on the other. Just like with externalities affecting people who live in different bodies, free choices don't make everyone better off unless all parties who are affected are part of the decision.
It's better to let a third stage of the same body make the choice of what discount rate should be enforced between these two, rather than let one of the two make the choice with no input from the other. This was Mark Kleiman's idea: let people decide what their own monthly limit on weed purchases should be, with the government enforcing whatever limit a person chooses, and with a one-month waiting period before that amount can be changed.
If a person knows they don't have an addictive relationship to weed, they can set that limit very high, but if they do have an addictive relationship to it, they don't have to leave themselves at the mercy of whichever version of the self shows up at the shop. The point of the government is to ease these problems when one self comes in conflict with another over shared resources, and I don't think there's any reason we should only do this when the selves live in different bodies. In all cases, we should help the two come to a mutually advantageous agreement rather than imposing a blanket policy.
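For concreteness, the mechanism is easy to sketch (hypothetical names and numbers; this shows the shape of the idea, not Kleiman's actual proposal text):

```python
# Self-set monthly limit, externally enforced, with a waiting period before
# any change takes effect. (Illustrative only; details are made up.)
from datetime import date, timedelta

WAITING_PERIOD = timedelta(days=30)

class MonthlyLimit:
    def __init__(self, initial_limit: float):
        self.current_limit = initial_limit  # the cap the earlier self chose
        self.pending_limit = None           # requested new cap, if any
        self.pending_effective = None       # date the new cap takes effect
        self.spent_this_month = 0.0

    def request_change(self, new_limit: float, today: date) -> None:
        """Schedule a new limit; it only applies after the waiting period."""
        self.pending_limit = new_limit
        self.pending_effective = today + WAITING_PERIOD

    def purchase(self, amount: float, today: date) -> bool:
        """Allow a purchase only if it stays within this month's cap."""
        if self.pending_effective is not None and today >= self.pending_effective:
            self.current_limit = self.pending_limit
            self.pending_limit = None
            self.pending_effective = None
        if self.spent_this_month + amount > self.current_limit:
            return False  # whichever self shows up at the shop is bound by the cap
        self.spent_this_month += amount
        return True

    def new_month(self) -> None:
        self.spent_this_month = 0.0
```

The waiting period is what lets the earlier and later selves negotiate: either can propose a change, but neither can impose it on the other in the moment.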
I get this conceptually, but a lot is resting on the assumption that everybody has a tortured relationship with social media. I think a lot of people see it as a harmless distraction, and a lot of its users like it. If true, that would change the value proposition of a ban - the main alternate justification is that you personally think users could be spending their time better - and means that politically a lot of people will vote against you.
Giving users a choice to limit their own use doesn’t address any concerns: children and psychotics and people who are simply wasting their time would continue to use it, and it would be difficult to enforce any legal obligation. Meanwhile, apps can already limit screen and app time, so why is that a policy question?
I don't think you should be able to build a machine that acts like a "mentally ill" person for consumption (with consequences); whether we can bar people from becoming mentally ill is a different discussion imo (and an illogical one, I think).
I guess this relates to the article's broader question of "is social media bad" (I think yes; pre- and post-edits are useful but not easily available or thoughtfully accomplished for the information being consumed). One way to address this is a tax, and I think tying a tax to advertising revenue aligns incentives around excessive usage, but maybe there are deeper issues about fairness or logical legal extrapolations that I am overlooking.
Publishing warmed-over bad takes on Section 230 is not a productive direction for this publication. Maybe get Mike Masnick of Techdirt to write something on this topic?
https://www.techdirt.com/2025/10/20/before-advocating-to-repeal-section-230-it-helps-to-first-understand-how-it-works/
As usual with pieces in this genre, the diagnosis of the problem is correct, and the proposed solution is silly. The rationale given - “230 protections were there for something that was on net good; but now the internet is net bad” - is barely disguised nonsense. Removing Section 230 protections for that reason is like recognizing that soda is unhealthy, and therefore making beverage companies responsible for ensuring their aluminum cans are recycled. It solves the problem partially at best, does so sideways at best, and has massive spillover effects for a whole bunch of other players, some of which aren’t even clear from the outset. And that’s before opening the can of worms that is First Amendment issues. The author is grasping for the first available regulatory lever despite its incompatibility with the problem.
The hard reality is that defining tobacco products is straightforward and therefore easy to regulate, whereas defining recommendation-based media surfaces is borderline impossible. This is not going to be solved overnight, and certainly not without democratic consensus. Despite how big a problem recsys media is, addressing it via borderline unrelated administrative mechanisms from the executive branch is simply not going to cut it.
Also a technical note: to my knowledge no company is using RL to power their recommendation systems. Recommendation systems employ supervised learning, based on offline logs of user interactions, to predict things like engagement or retention. RL refers to a specific approach to modeling, not broadly to *any* flywheel of interactions, modeling, and recommendations to iteratively optimize for a specific outcome.
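To make the distinction concrete, here is a toy sketch of the supervised setup described above (made-up features and data, not any company's actual pipeline): the model is fit on logged interactions to predict engagement, then used to rank candidates by that predicted score, with no reward signal or policy update in the loop.

```python
# Toy supervised recommender: fit on an offline log of (features, engaged?)
# pairs, then rank candidate items by predicted engagement probability.
# Features and data are entirely made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Offline log: [user affinity, item popularity, topic match] -> engaged (0/1)
X_logged = np.array([
    [0.9, 0.2, 1],
    [0.1, 0.8, 0],
    [0.7, 0.7, 1],
    [0.3, 0.1, 0],
    [0.8, 0.9, 1],
    [0.2, 0.4, 0],
])
y_engaged = np.array([1, 0, 1, 0, 1, 0])

# Supervised step: predict engagement from logged features.
model = LogisticRegression().fit(X_logged, y_engaged)

# Serving step: score fresh candidates for a user and sort by predicted
# engagement -- prediction and ranking only, no reward-maximizing policy,
# which is why this is ordinary supervised learning rather than RL.
candidates = np.array([
    [0.6, 0.3, 1],
    [0.6, 0.9, 0],
    [0.6, 0.5, 1],
])
scores = model.predict_proba(candidates)[:, 1]
print("candidates ranked by predicted engagement:", np.argsort(-scores))
```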
If Sec 230 protection is lifted, isn't there still a pretty high bar with free speech protections? You get into it a bit, but I'd like to read more about what it would look like exactly if Sec 230 were lifted.
Some people need reminding about what Section 230 actually does and what it's for... paging Ken White...