The people who didn't vote being the ones who are "not sure" is a nice anomaly to highlight.
"Not sure", or simply "don't care"?
I propose establishing a new political party for such folks. It could be called the Ambivalence Party, and could feature a mathematical null/void character as its icon…
"And if AI companies turn out to be liable when their models help users commit crimes or convince them to invest in scams, I suspect they will work quite hard to prevent their models from committing crimes or telling users to invest in scams."
This is it right here. I know it's not an apples-to-apples comparison, but we've failed to regulate guns in part because we don't hold gun manufacturers liable for much of anything. We need to get ahead of this issue with AI as soon as possible.
Great article.
Even if guns or AI weren't regulated directly, their culpability could be established somewhat if users of each of these things were compelled to purchase liability insurance to cover the damages they cause. Such a requirement would encourage more careful use, at the very least…
Another good idea.
Great article, Kelsey. This is such an important hinge point for the future – do you have any thoughts about how to keep it from becoming a partisan issue?
My interpretation of what Andrew (and others) say is a bit different from yours. There is a difference between a foundational model and the chat interface that sits on top of it. Almost always the chat interface is prompt-engineered to fit the product, and in some cases a safety layer sits between the model and the user. What Andrew is saying is that the foundational model should not be regulated, but the chat interface, i.e. "the product," should. This implies that the frontier labs might best be structured as separate entities, one for the lab and one for the product, with the duty of care sitting mostly (but not completely) with the product. This would be more obvious if none of the frontier labs made chat interfaces, so in my view structural remedies are the path forward.

We have models for this, if imperfect ones: Substack generally isn't liable for The Argument's content, news publishers (defamation aside) generally aren't liable for editorial or opinion content, Section 230, etc. What we want to avoid is something like the effectiveness of Claude Code getting hampered because the underlying model happens to also have a chat interface.
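To make the lab/product separation described in the comment above concrete, here is a minimal illustrative sketch. It is not any lab's actual API; the names (FoundationModel, ChatProduct, the toy blocked-topic list) are hypothetical, and it only shows where the prompt engineering and safety layer would live if the "product" were a separate entity wrapping the raw model.

```python
from dataclasses import dataclass


@dataclass
class FoundationModel:
    """Stands in for the raw model: text in, text out, no product logic."""
    name: str

    def complete(self, prompt: str) -> str:
        # Placeholder for a real inference call.
        return f"[{self.name} completion for: {prompt!r}]"


class ChatProduct:
    """The 'product': prompt engineering plus a safety layer around the model.

    In the framing above, the duty of care lives mostly here, not in the model.
    """

    SYSTEM_PROMPT = "You are a helpful assistant for this specific product."
    BLOCKED_TOPICS = ("how to build a weapon",)  # toy stand-in for a real policy

    def __init__(self, model: FoundationModel):
        self.model = model

    def respond(self, user_message: str) -> str:
        # The safety layer sits between the user and the model.
        if any(topic in user_message.lower() for topic in self.BLOCKED_TOPICS):
            return "Sorry, I can't help with that."
        prompt = f"{self.SYSTEM_PROMPT}\n\nUser: {user_message}\nAssistant:"
        return self.model.complete(prompt)


if __name__ == "__main__":
    product = ChatProduct(FoundationModel("example-frontier-model"))
    print(product.respond("Summarize today's weather report."))
```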
I think the hard part here is that most people who really understand how these models work, and what the trade-offs of "don't allow X" rules will be for good-faith use of these tools, aren't going to be talking in public. They'll be building them, and if Congress gets people under oath to testify, it'll be media-trained executives, not engineers.
That makes it much easier for the industry to spread FUD (fear, uncertainty, and doubt) about regulation. I really don't know whether these rules would affect my research for a short story that involves suicide, or whether other banal uses will be clipped. It's believable that any regulation will have massive unintended consequences, but diagnosing them is hard even in well-understood areas.
How does this desire to regulate what is essentially speech interact with the view that the government's role here is fundamentally limited? (https://www.theargumentmag.com/p/mad-libs-piper-v-weissmann)
I think there's a difference between regulating speech by specifically declaring particular types of speech illegal, and allowing liability for speech, where it's not the *type* of speech but rather the *effect* of the speech that is regulated. Banning the use of certain words associated with violence is very different from prohibiting speech that actually leads to violence.
Sorry, I don't understand this distinction.
The distinction is that one is regulating speech itself (there are certain things that are illegal to say, whether or not anyone does anything based on it) while the other is regulating the *effects* of speech (you can say anything you want, but if someone harms themself or someone else on the basis of what you say, and you could have reasonably foreseen that they would do so, then you can be held liable).
Are you discounting the possibility that AI has talked people out of suicide? If more people committed suicide in a future where AIs blanket-refused to talk about the topic, would that be a better outcome? The bar for liability should be pretty high here; having one in a million conversations veer off in a dark direction is a lot different from one in ten.
Taking the example of medical advice, how can a liability framework be designed so that AI companies are not incentivized to either 1) refuse such requests or 2) only pull from a limited set of officially-approved sources of medical information?
You end this by saying that we should be able to sue if AI does things that would be a crime if a human did them. But throughout the piece you give examples, explicitly or implicitly, of things that would not be crimes if done by a human!
If I told a buddy I'm a whiz with probabilities so he should bet it all on black at the casino, I don't think he could sue me when it comes up red.
I think the same applies if I suggest a friend invest his bonus in the S&P right before a recession. Or if I tell my girlfriend with a headache to take a few aspirin and it turns out she's allergic or something.
I think I agree overall that, if a person would be liable for giving some information, then an AI could be as well (in some capacity). But I don't think it's reasonable for anything an AI CEO claims in an interview to set the standard of liability. We don't treat any other industry or area that way.
In the overwhelming majority of instances, lying or exaggerating are not liability events.
If I boast "I know as much as any medical doctor" and go on to say people should just take vitamin C instead of flu shots... I don't think I'm winding up on the losing side of a lawsuit.
Regulation-by-casino seems too uncertain, and unlikely to both maximize benefits and minimize harms. The tech might refocus towards business needs and away from (on net) benefiting ordinary users. Are there more predictable ways to regulate these harms?
I think more than anything I just don't have good intuitions about what can reasonably be expected from an LLM, and the situations seem really different (bad technical advice vs. the model acting in what a person would consider antisocial-personality-disorder territory)... Liability seems like the correct approach, in that responsibility for consequences drives incentives around training and release, rather than just a warning message about use.
It is, indeed, "time to begin holding these companies accountable". Allowing AI to grow out of control — permitting all innovation, regardless of whether that innovation is effective, desirable, or even harmful — would be like allowing nature to overrun a garden that was meant to form an aesthetic or functional shape. The result would be a monstrous, Frankensteinian blob of omnidirectional, contradictory, self-destructive dysfunction — when all anyone ever wanted in the first place was a tool with a useful purpose.
The tricky part in assigning blame for this sort of thing is the same tricky part that has been considered many times already: just what a *company* is, ontologically, and how a company should be "held to account" for harm it causes. If one considers a company as a network of individuals, whose shape is designed to function as a tool for achieving that company's goals, then it becomes easier to recognize corporate culpability in the directors of that tool, or rather of the portions of that tool which direct actions with destructive outcomes. I refer to individual people here.
Yet, because companies are like giant, inert marionettes that animate to perform actions only through the effort and direction of their operators, they are treated as "juridical persons" under the idea of corporate personhood. Therefore, those marionette/companies which are culpable for publishing destructive AI should be held accountable *along with* their director/string-pullers, who are clearly accomplices within this model.
Not sure what society needs is *checks notes* more lawsuits, but I know that's what we will get anyway…