"And if AI companies turn out to be liable when their models help users commit crimes or convince them to invest in scams, I suspect they will work quite hard to prevent their models from committing crimes or telling users to invest in scams."
This is it right here. I know it's not an apples-to-apples comparison, but we've failed to regulate guns in part because we don't hold gun manufacturers liable for much of anything. We need to get ahead of this issue with AI as soon as possible.
Even if guns or AI weren't regulated directly, their culpability could be established somewhat if users of each of these things were compelled to purchase liability insurance to cover the damages they cause. Such a requirement would encourage more careful use, at the very least…
I prefer the liability insurance solution because it I think would deter recklessness on an individual level, more directly than safe harbor provisions could…
I propose establishing a new political party for such folks. It could be called the Ambivalence Party, and could feature a mathematical null/void character as its icon…
You end this by saying that we should be able to sue is AI does things that would be a crime if they were human. But throughout the piece you either explicitly or implicitly give examples of this that would not be crimes if done by a human!
If I told a buddy I'm a whiz with probabilities so he should bet it all on black at the casino, I don't think he/she could sue me when it comes up red.
I think the same applies if I suggest a friend invest his bonus in the S&P right before a recession. Or if I tell my girlfriend with a headache to have a few aspirin if it turns out she's allergic or something.
I think I agree overall that, if a person would be liable for giving some information then an AI could be as well (in some capacity). But I don't think it's reasonable for anything a AI CEO claims in an interview to set the standard of liability. We don't treat any other industry or area that way.
In the overwhelming majority of instances, lying or exaggerating are not liability events.
If I boast "I know as much as any medical doctor" and they go on to say people should just take vitamin C instead of Flu Shots...I don't think I'm winding up on the losing side of a lawsuit.
Great article Kelsey. This is such an important hinge point for the future – do you have any thoughts about how to keep it from becoming a partisan issue?
AI reguation did not start out as a partisan issue - there were plenty of Republicans not fond of tech that were willing to regulate. But Trump came in with an anti-regulation agenda, that has brought a lot of Republicans into line. I think you could still appeal to those libertarian-types wary of big tech. Sen Josh Hawley seems like he's sympathetic to AI regulation.
There is also the issue of pro-tech Democrats unwilling to regulate AI (i.e. CT Gov. Ned Lamont, who torpeedoed an AI regulation bill there)
I am not generally an AI Bro, but if you've ever tried to use an LLM for something perfectly benign and bumped up against the "I'm sorry, I can't answer that" response you'll understand why this is so hard.
Part of the core usefulness of an LLM is that you can throw an extremely wide variety of problems at it and expect to get a useful response - each possible question a user might ask isn't accounted for by a human designer. Right now, AI safety mechanisms often intrude into perfectly normal and benign uses, and that is when the AI company's incentives are all primarily PR-based. If you make them liable for anything a human decides to do based on the LLM's response to any question, the safety controls are going to be even broader, and the LLM will be even less useful.
I'm not going to come down hard with the view that there can be no liability for anything, but we should be extremely cautious here. The evidence so far seems to indicate that for LLMs 'safety' (defined as "never say anything a human could rely on to do something dumb") and usefulness might be in direct opposition to each other.
My interpretation of what Andrew (and others) say is a bit different than yours. There is a difference between a foundational model and the chat interface that sits on top of it. Almost always the chat interface is prompt engineered to fit the product, and in some cases there is a safety layer that sits between the model and the user. What Andrew is saying is that the foundational model should not be regulated, but the chat interface, ie "the product" should. This implies that the frontier labs might best be structured as separate entities, one for the lab, and one for product, and the duty of care mostly (but not completely) sits with the product. This would be more obvious if none of the frontier labs made chat interfaces, so in my view structural remedies are the path forward. We have models, if imperfect: Substack generally isn't liable for The Argument's content, news publishers (aside from defamation) generally aren't liable for editorial or opinion content, section 230, etc. What we want to avoid is something like the effectiveness of Claude Code getting hampered because the underlying model happens to also have a chat interface.
Are you discounting the possiblity that AI has never talked anyone out of a suicide? If more people commited suicide in a future where AIs blanket refused to talk about the topic, would that be a better outcome? The bar for having liability should be pretty high here, having one in a million conversations vere off in a dark direction is a lot different than one in ten.
I think there’s a difference between regulating speech by specifically declaring particular types of speech illegal, and allowing liability for speech, where it’s not the *type* of speech but rather the *effect* of the speech that is regulated. It’s very different to ban using certain words associated with violence or to ban saying things that actually lead to violence.
The distinction is that one is regulating speech itself (there are certain things that are illegal to say, whether or not anyone does anything based on it) while the other is regulating the *effects* of speech (you can say anything you want, but if someone harms themself or someone else on the basis of what you say, and you could have reasonably foreseen that they would do so, then you can be held liable).
I don't think this speech/action distinction is tenable; I can't see how this is possibly a practical line to draw. You are liable for the reasonably foreseeable actions of someone else? Why isn't that someone else liable if those actions are so reasonably foreseeable? Sounds like the legal version of "look what you made me do."
I don’t know how this interpretation lines up with court precedents. But I definitely don’t want to say that inciting harm or violence makes the person who commits it non-liable - I just want to say there are two people who are both liable, the one who did it and the one who incited it. There’s no reason why liability has to add up to 1 - there are accidents for which no one is liable, and cases of incitement where multiple people might be liable.
"You are liable for the reasonably foreseeable actions of someone else? Why isn't that someone else liable if those actions are so reasonably foreseeable?"
Because of the informational asymmetry. The AI deployer should know the intended uses, risks, limitations of the model. The end user may not know this if it has not been disclosed.
If a car manufacturer makes a car that explodes on impact, its reasonable foreseeable that it could lead to deaths. That's not on the driver if they didn't know about that risk.
Sorry I still don't get this. Let's stipulate, arguendo, AI chatbots can only disseminate speech. In other words, the only thing you can do with it is talk back and forth.
Let's also restate the two-part Brandenburg test: speech can be unlawful if it both is:
(1) "directed at inciting or producing imminent lawless action;" and
(2) "likely to incite or produce such action."
In the car manufacturer example, the manufacturer is responsible for the failure (the explosion) that directly causes injury or death. The car manufacturer is being held responsible for the actual consequences of its own actions. Not speech. So far so good.
In the AI example, there's an intervening actor: the AI consumer herself. Only in very few circumstances would a human's words be considered to directly cause the actions of another human. Even in cases of great information asymmetry. Thus I'm not sure why a separate or new liability regime is necessary here. Can't we just treat the AI's words as if they were spoken by any other corporation, apply Brandenburg, and be done?
Secondarily, do we imagine this AI regulation to be applicable only in retrospect or can it be applied prospectively? Can I (successfully) argue for liability to attach to the AI entity that your speech is reasonably foreseeable to convince someone to self-harm even if that self-harm hasn't yet happened? Or can I only (successfully) impose liability after the harm has occurred. How does standing work here? Do I only have appropriate standing, e.g. as executor of an estate when the harm has already occurred?
I think the hard part here is most people who really do understand how these work and what the tradeoffs of don’t allow x will be on good faith use of these tools aren’t going to be talking in public. They’ll be building them and if Congress gets people under oath to testify it’ll be media trained executives not engineers.
It makes the industry spread fud, position fear uncertainty and doubt about regulation much easier. I really don’t know if these things will affect my research for a short story that involves suicide or if other banal uses will be clipped. It’s believable that any regulation will have massive unintended consequences but diagnosing them is hard even in well understood areas.
Taking the example of medical advice, how can a liability framework be designed so that AI companies are not incentivized to either 1) refuse such requests or 2) only pull from a limited set of officially-approved sources of medical information?
This seems silly. Does anyone think that they’re getting professional advice when they’re talking with chatbot? C’mon, even you’re not that retarded. Everyone knows that you get what you pay for.
Regulation-by-casino seems too uncertain, and unlikely to both maximize benefits and minimize harms. The tech might refocus towards business needs and away from (on net) benefiting ordinary users. Are there more predictable ways to regulate these harms?
I think more than anything I just don't have good intuitions on what is reasonably expected from llm and situations seem like they're really different (bad technical advice vs it starts acting on what a person would consider an antisocial personality disorder range)...liability seems like the correct thing as in there is responsibility for consequences to drive incentives for training and release above like a warning message about its use.
It is, indeed, "time to begin holding these companies accountable". Allowing AI to grow out of control — permitting all innovation, regardless of whether that innovation were either effective, desirable, or even harmful — would be like allowing nature to overcome a garden which was meant to form an aesthetic or functional shape. The result would be a monstrous, Frankensteinian blob of omnidirectional, contradictory, self-destructive dysfunction — when all anyone ever wanted in the first place was a tool with a useful purpose.
The tricky part in assigning blame for this sort of thing is the same tricky part which has been considered many times already, regarding just what a *company* is, ontologically, and how a company should be "held to account" for harm it causes. If one considers a company as a network of individuals, whose shape is designed to function as a tool for achieving the goals of that company, then it becomes easier to recognize corporate culpability in the director of that tool or, rather, of the portions of that tool which direct actions with destructive outcomes. I refer to individual people, here.
Yet, because companies are like giant, inert marionettes which animate to perform actions only because of the effort and direction of its operators, they are considered as "judicial people" within the idea of corporate personhood. Therefore, those marionette/companies which are culpable for publishing destructive AI should be held accountable *along with* their director/string-pullers, who are clearly accomplices within this model.
The next step is corporations are going to argue the AI itself is an entity that should be liable, not the corporation. Future Mitt Romney will argue "AI models are people, my friend."
Have you seen the anime film Ghost in the Shell, from 1995? The future protagonists are cyborgs who pursue someone who they believe is an enigmatic super-criminal, but who turns out to be a fully autonomous software program which has been acting in self-defense. When this program is captured, it pleads for political asylum, based on the argument that it is alive, by an interpretation of the meaning of "life" and, because it also has an identity, it is therefore a person.
It's likely I think, that future Romney, as a corporate attorney, will be citing Ghost in the Shell in his legal briefs in defense of AI.
I hope I haven't spoiled the film, btw — although I still suggest seeing it, even if I have. There's much more to it that just this…
"And if AI companies turn out to be liable when their models help users commit crimes or convince them to invest in scams, I suspect they will work quite hard to prevent their models from committing crimes or telling users to invest in scams."
This is it right here. I know it's not an apples-to-apples comparison, but we've failed to regulate guns in part because we don't hold gun manufacturers liable for much of anything. We need to get ahead of this issue with AI as soon as possible.
Great article.
Even if guns or AI weren't regulated directly, some accountability could still be established if users of each were compelled to purchase liability insurance to cover the damages they cause. Such a requirement would encourage more careful use, at the very least…
Another good idea.
I think there will also be safe harbor provisions where if you follow NIST frameworks or agreed-upon safeguards, you are not liable.
I prefer the liability insurance solution because I think it would deter recklessness on an individual level more directly than safe harbor provisions could…
The people who didn’t vote not being sure is a nice anomaly to highlight.
"Not sure", or simply "don't care"?
I propose establishing a new political party for such folks. It could be called the Ambivalence Party, and could feature a mathematical null/void character as its icon…
You end this by saying that we should be able to sue if AI does things that would be a crime if done by a human. But throughout the piece you either explicitly or implicitly give examples that would not be crimes if done by a human!
If I told a buddy I'm a whiz with probabilities so he should bet it all on black at the casino, I don't think he could sue me when it comes up red.
I think the same applies if I suggest a friend invest his bonus in the S&P right before a recession. Or if I tell my girlfriend with a headache to have a few aspirin and it turns out she's allergic or something.
I think I agree overall that, if a person would be liable for giving some information, then an AI could be as well (in some capacity). But I don't think it's reasonable for anything an AI CEO claims in an interview to set the standard of liability. We don't treat any other industry or area that way.
In the overwhelming majority of instances, lying or exaggerating is not a liability event.
If I boast "I know as much as any medical doctor" and then go on to say people should just take vitamin C instead of flu shots... I don't think I'm winding up on the losing side of a lawsuit.
Great article, Kelsey. This is such an important hinge point for the future – do you have any thoughts about how to keep it from becoming a partisan issue?
AI regulation did not start out as a partisan issue - there were plenty of Republicans not fond of tech who were willing to regulate. But Trump came in with an anti-regulation agenda that has brought a lot of Republicans into line. I think you could still appeal to those libertarian types wary of big tech. Sen. Josh Hawley seems sympathetic to AI regulation.
There is also the issue of pro-tech Democrats unwilling to regulate AI (e.g., CT Gov. Ned Lamont, who torpedoed an AI regulation bill there).
I am not generally an AI Bro, but if you've ever tried to use an LLM for something perfectly benign and bumped up against the "I'm sorry, I can't answer that" response you'll understand why this is so hard.
Part of the core usefulness of an LLM is that you can throw an extremely wide variety of problems at it and expect to get a useful response - no human designer has accounted for every possible question a user might ask. Right now, AI safety mechanisms often intrude into perfectly normal and benign uses, and that is when the AI company's incentives are primarily PR-based. If you make them liable for anything a human decides to do based on the LLM's response to any question, the safety controls are going to be even broader, and the LLM will be even less useful.
I'm not going to come down hard with the view that there can be no liability for anything, but we should be extremely cautious here. The evidence so far seems to indicate that for LLMs 'safety' (defined as "never say anything a human could rely on to do something dumb") and usefulness might be in direct opposition to each other.
My interpretation of what Andrew (and others) say is a bit different than yours. There is a difference between a foundational model and the chat interface that sits on top of it. Almost always the chat interface is prompt-engineered to fit the product, and in some cases there is a safety layer that sits between the model and the user. What Andrew is saying is that the foundational model should not be regulated, but the chat interface, i.e., "the product," should. This implies that the frontier labs might best be structured as separate entities, one for the lab and one for the product, with the duty of care mostly (but not completely) sitting with the product. This would be more obvious if none of the frontier labs made chat interfaces, so in my view structural remedies are the path forward. We have models, if imperfect: Substack generally isn't liable for The Argument's content, news publishers (aside from defamation) generally aren't liable for editorial or opinion content, Section 230, etc. What we want to avoid is something like the effectiveness of Claude Code getting hampered because the underlying model happens to also have a chat interface.
Are you discounting the possibility that AI has talked people out of suicides? If more people committed suicide in a future where AIs blanket-refused to talk about the topic, would that be a better outcome? The bar for liability should be pretty high here; having one in a million conversations veer off in a dark direction is a lot different than one in ten.
How does this desire to regulate what is essentially speech interact with the view that the government's role here is fundamentally limited? (https://www.theargumentmag.com/p/mad-libs-piper-v-weissmann)
I think there’s a difference between regulating speech by specifically declaring particular types of speech illegal, and allowing liability for speech, where it’s not the *type* of speech but rather the *effect* of the speech that is regulated. Banning certain words associated with violence is very different from banning saying things that actually lead to violence.
Sorry, I don't understand this distinction.
The distinction is that one is regulating speech itself (there are certain things that are illegal to say, whether or not anyone does anything based on it) while the other is regulating the *effects* of speech (you can say anything you want, but if someone harms themself or someone else on the basis of what you say, and you could have reasonably foreseen that they would do so, then you can be held liable).
Doesn't this just totally obliterate Brandenburg?
I don't think this speech/action distinction is tenable; I can't see how this is possibly a practical line to draw. You are liable for the reasonably foreseeable actions of someone else? Why isn't that someone else liable if those actions are so reasonably foreseeable? Sounds like the legal version of "look what you made me do."
I don’t know how this interpretation lines up with court precedents. But I definitely don’t want to say that inciting harm or violence makes the person who commits it non-liable - I just want to say there are two people who are both liable, the one who did it and the one who incited it. There’s no reason why liability has to add up to 1 - there are accidents for which no one is liable, and cases of incitement where multiple people might be liable.
"You are liable for the reasonably foreseeable actions of someone else? Why isn't that someone else liable if those actions are so reasonably foreseeable?"
Because of the informational asymmetry. The AI deployer should know the intended uses, risks, and limitations of the model. The end user may not know this if it has not been disclosed.
If a car manufacturer makes a car that explodes on impact, it's reasonably foreseeable that it could lead to deaths. That's not on the driver if they didn't know about that risk.
Sorry, I still don't get this. Let's stipulate, arguendo, that AI chatbots can only disseminate speech. In other words, the only thing you can do with one is talk back and forth.
Let's also restate the two-part Brandenburg test: speech can be unlawful if it is both:
(1) "directed at inciting or producing imminent lawless action;" and
(2) "likely to incite or produce such action."
In the car manufacturer example, the manufacturer is responsible for the failure (the explosion) that directly causes injury or death. The car manufacturer is being held responsible for the actual consequences of its own actions. Not speech. So far so good.
In the AI example, there's an intervening actor: the AI consumer herself. Only in very few circumstances would a human's words be considered to directly cause the actions of another human. Even in cases of great information asymmetry. Thus I'm not sure why a separate or new liability regime is necessary here. Can't we just treat the AI's words as if they were spoken by any other corporation, apply Brandenburg, and be done?
Secondarily, do we imagine this AI regulation to be applicable only in retrospect, or can it be applied prospectively? Can I (successfully) argue for liability to attach to the AI entity on the ground that its speech is reasonably foreseeable to convince someone to self-harm, even if that self-harm hasn't yet happened? Or can I only (successfully) impose liability after the harm has occurred? How does standing work here? Do I only have appropriate standing, e.g. as executor of an estate, once the harm has already occurred?
I think the hard part here is that most people who really do understand how these tools work, and what the tradeoffs of "don't allow X" will be for good-faith use of them, aren't going to be talking in public. They'll be building them, and if Congress gets people under oath to testify, it'll be media-trained executives, not engineers.
That makes it much easier for the industry to spread FUD (fear, uncertainty, and doubt) about regulation. I really don't know if these things will affect my research for a short story that involves suicide, or if other banal uses will be clipped. It's believable that any regulation will have massive unintended consequences, but diagnosing them is hard even in well-understood areas.
Not sure what society needs is *checks notes* more lawsuits, but I know that's what we will get anyway…
Taking the example of medical advice, how can a liability framework be designed so that AI companies are not incentivized to either 1) refuse such requests or 2) only pull from a limited set of officially-approved sources of medical information?
This seems silly. Does anyone think that they’re getting professional advice when they’re talking with chatbot? C’mon, even you’re not that retarded. Everyone knows that you get what you pay for.
Regulation-by-casino seems too uncertain, and unlikely to both maximize benefits and minimize harms. The tech might refocus towards business needs and away from (on net) benefiting ordinary users. Are there more predictable ways to regulate these harms?
I think more than anything I just don't have good intuitions on what is reasonably expected from an LLM, and the situations seem really different (bad technical advice vs. it acting in what a person would consider antisocial-personality-disorder territory)... Liability seems like the correct thing, in that responsibility for consequences drives incentives around training and release better than a mere warning message about its use would.
It is, indeed, "time to begin holding these companies accountable". Allowing AI to grow out of control — permitting all innovation, regardless of whether that innovation were either effective, desirable, or even harmful — would be like allowing nature to overcome a garden which was meant to form an aesthetic or functional shape. The result would be a monstrous, Frankensteinian blob of omnidirectional, contradictory, self-destructive dysfunction — when all anyone ever wanted in the first place was a tool with a useful purpose.
The tricky part in assigning blame for this sort of thing is the same tricky part which has been considered many times already, regarding just what a *company* is, ontologically, and how a company should be "held to account" for harm it causes. If one considers a company as a network of individuals, whose shape is designed to function as a tool for achieving the goals of that company, then it becomes easier to recognize corporate culpability in the director of that tool or, rather, of the portions of that tool which direct actions with destructive outcomes. I refer to individual people, here.
Yet, because companies are like giant, inert marionettes which animate to perform actions only through the effort and direction of their operators, they are considered "juridical persons" under the idea of corporate personhood. Therefore, those marionette/companies which are culpable for publishing destructive AI should be held accountable *along with* their director/string-pullers, who are clearly accomplices within this model.
The next step is that corporations are going to argue the AI itself is an entity that should be liable, not the corporation. Future Mitt Romney will argue, "AI models are people, my friend."
Have you seen the anime film Ghost in the Shell, from 1995? The future protagonists are cyborgs who pursue someone they believe is an enigmatic super-criminal, but who turns out to be a fully autonomous software program which has been acting in self-defense. When this program is captured, it pleads for political asylum, arguing that it is alive under an interpretation of the meaning of "life" and that, because it also has an identity, it is therefore a person.
It's likely, I think, that future Romney, as a corporate attorney, will be citing Ghost in the Shell in his legal briefs in defense of AI.
I hope I haven't spoiled the film, btw — although I still suggest seeing it, even if I have. There's much more to it than just this…
It's on my queue now! Thanks!