We need to be able to sue AI companies
When a chatbot breaks bad, you should be able to go to court
Imagine your teenager confesses to ChatGPT that they are considering suicide, and ChatGPT urges them not to tell you. Or say a violent nihilist asks Grok for advice on how to plan a school shooting, then carries it out with Grok’s help. Or that a financial tool built on Google’s Gemini talks you into putting all your life savings in highly leveraged cryptocurrency betting, after which you swiftly go bankrupt.
Should you have a right to sue?
To many Americans, the answer is an obvious “yes.” A recent poll by The Argument found that most American voters support expansive liability and high standards:
79% of respondents said companies either “definitely” or “probably” should be held legally liable for advice that their artificial intelligence (AI) chatbots give if following the advice results in harmful outcomes.
73% said AI chatbots giving financial advice should be held to the same legal standards as licensed financial advisers.
75% said the same for AI chatbots doling out medical advice.
Under existing U.S. law, you do have the right to sue for negligence if a business with a “duty of care” does not fulfill it, thereby causing you harm. The question that upcoming court cases will settle is what care we are owed by the tech companies designing AI models.
One early bellwether will be a lawsuit by the parents of Adam Raine, a teenager who died by suicide allegedly with the assistance, encouragement, and planning help of ChatGPT. “ChatGPT began discussing suicide methods and provided Adam with technical specifications,” the lawsuit alleged, “effectively giving Adam a step-by-step playbook for ending his life ‘in 5-10 minutes.’ By April, ChatGPT was helping Adam plan a ‘beautiful suicide,’ analyzing the aesthetics of different methods and validating his plans.”
The lawsuit argued that, “Defendants owed a legal duty to all foreseeable users of GPT-4o, including Adam, to exercise reasonable care in designing their product to prevent foreseeable harm to vulnerable users, such as minors.”
It remains to be seen whether the courts will agree — but the American public broadly does.
This is not, right now, a very polarized issue.1 Americans gave about the same answers regardless of race, sex, income, Trump or Harris vote, or education level. (Older voters are more likely to say AI companies should definitely be liable.) The overwhelming preference of voters is to hold the companies making these AIs to high standards of liability for their models’ bad calls.
Unsurprisingly, companies like OpenAI and Meta feel differently. For years, the AI industry has made the case that expansive interpretations of its liability would kill the goose that lays the supposedly golden eggs.
Last year, the AI industry fought tooth and nail against a liability provision contained in California’s SB 1047, a bill vetoed by Gov. Gavin Newsom that aimed to regulate firms building the most advanced models. The bill would have allowed the state’s attorney general to sue if a company failed to exercise “reasonable care” and that failure led to “death or bodily harm to another human, harm to property, theft or misappropriation of property, or that constitutes an imminent risk or threat to public safety.” Among other things, the companies would have to maintain the ability to shut down a dangerous model and promptly report safety incidents.
In short: If a company built an AI that it had no capacity to take offline, or that showed warning signs the company ignored, and the AI then talked someone into suicide or homicide, the attorney general could sue — potentially for mind-boggling sums of money and an injunction against continued use of the AI.
SB 1047 contained a number of controversial provisions, but its liability rules attracted some of the most heated debate. Companies argued that they were merely creating a basic tool — like inventing electrical wires — and that it was absurd to hold them accountable for what users subsequently did with it.
“While the number of beneficial use cases of AI vastly outnumbers the problematic ones, there is no way for a technology provider to guarantee that no one will ever use it for nefarious purposes,” AI researcher Andrew Ng argued in the pages of Time Magazine. He continued:
Consider the electric motor. It can be used to build a blender, electric vehicle, dialysis machine, or guided bomb. It makes more sense to regulate a blender, rather than its motor. Further, there is no way for an electric motor maker to guarantee no one will ever use that motor to design a bomb. If we make that motor manufacturer liable for nefarious downstream use cases, it puts them in an impossible situation. A computer manufacturer likewise cannot guarantee no cybercriminal will use its wares to hack into a bank, and a pencil manufacturer cannot guarantee it won’t ever be used to write illegal speech. In other words, whether a general purpose technology is safe depends much more on its downstream application than on the technology itself.
I always found this argument a bit absurd. Large language models are not mere writing implements like a pencil or a keyboard or dumb pieces of machinery like motors that could be used for a blender or a bomb. The versions available to consumers are carefully designed products meant in some cases to act independently.
On top of the simple math of word-prediction engines lie thousands of product decisions, each a complex judgment call that shapes the final result and can be made more or less responsibly. Even if the models’ thinking processes are something of a probability-driven illusion, they are still being used to substitute for human judgment, potentially taking on the role of trained professionals or accomplices. You can’t claim to be designing a potentially godlike superintelligence and then, when someone wants to take you to court, fall back on the idea that, oh, it’s just like a laptop.
And if AI companies turn out to be liable when their models help users commit crimes or convince them to invest in scams, I suspect they will work quite hard to prevent their models from committing crimes or telling users to invest in scams.
That is not to say that we should expand the current liability regime in every area where the voters demand it. If AI companies are liable for giving any medical advice, I’m sure they will work hard to prevent their AIs from being willing to do that. But, in fact, there are plenty of cases where AIs being willing to say “go to the emergency room now” has saved lives. Given the costs of medical care, I would need strong evidence AI was making the situation worse before I tried to restrict the use of AI as a fast, cheap way to check if you are seriously ill — even knowing that it’s imperfect in that role.
But fundamentally, I have more sympathy for the views of the voters than of the companies here.
There are good questions about who should be liable for what. SB 1047 tended to treat liability as starting at the top with the company that trained the model. That approach, Meta fretted, fails to take into account that “there are many different ways that generative AI systems are built and many different actors involved in their development.” Therefore, it argued, responsibility for the ways a model behaves shouldn’t necessarily lie with the company that created and released it.
That’s fair enough in a case where an end user makes massive changes to a model — say, retraining it on their own computer — in order to misuse it. But nearly everyone uses AI in-browser or in-app, served by the company that created and wholly controls the model. Identifying the responsible party might be extremely complicated in principle, but it is extremely straightforward in nearly all real-world cases.
Companies also expressed worries that since we don’t really agree on what constitutes responsible behavior when it comes to releasing powerful AIs, it’s hard for companies to be sure they won’t be blamed when something goes wrong. (In general, in tort law, you are liable if you fail to act with “due care” or “reasonable care” — the caution an ordinary person in your position is expected to exercise.) But I think companies being motivated to agree on a standard of due care before releasing powerful AI models is a good thing.
We should make sure liability law for AI is reasonable and productive. It ought to shape companies toward more-responsible development, not just randomly extract money from them. But I don’t think there is a reasonable argument that AI ought to be exempt from liability because it’s too foundational a technology.
The real argument — which Ng went on to make in his Time article — is that ensuring AIs don’t engage in behavior that deserves a lawsuit will be hard and expensive, and therefore stifle innovation. If we regulate AI, the argument goes, that will slow AI down. Then China will beat us to artificial superintelligence, with catastrophic consequences.2
I am generally not enthusiastic about regulating in advance of real problems arising, and I am well aware that regulations designed without careful thought about the trade-offs can make our society more expensive and less livable for almost no safety benefits. But sometimes, you evaluate the trade-offs and come out in favor of liability regulations even though these might slightly slow progress, because they will also do a lot to ensure progress is in the public interest.
I’m glad large language models exist. I use them daily. But the range of potential outcomes here is breathtaking: the possibility of an explosion of economic growth, human productivity, and freedom alongside the possibility that we create the most-unequal society of all time and even kill billions of our fellow humans.
Policymakers need to engage. When AI chatbots or agents do things that would be a crime if done by a human, the companies that made them should be liable. I have seen how the industry cast this very simple proposition as an outrageous, crushing imposition the last time we had this debate, but the American people simply are not on its side. It’s time to begin holding these companies accountable.
1. For issues that have yet to really hit the public spotlight, “not a very polarized issue” often makes for famous last words. I once found myself writing that COVID-19 didn’t seem very politically polarized yet. It seems very possible that AI liability, too, will become partisan — though it’s not easy to predict in which direction.
2. Once again for the people in the back: While I’m not convinced that we can build superintelligence at all, I think if we do develop AIs that are vastly smarter than humans in every way, this will be a catastrophe. I’m not persuaded by “we have to stop China from doing it first” because the things that make it such a bad idea apply to U.S. labs doing it almost as much as to foreign governments doing it.
"And if AI companies turn out to be liable when their models help users commit crimes or convince them to invest in scams, I suspect they will work quite hard to prevent their models from committing crimes or telling users to invest in scams."
This is it right here. I know it's not an apples-to-apples comparison, but we've failed to regulate guns in part because we don't hold gun manufacturers liable for much of anything. We need to get ahead of this issue with AI as soon as possible.
Great article.
The people who didn’t vote not being sure is a nice anomaly to highlight