Tort Law Can Play an Important Role in Mitigating AI Risk
post by Gabriel Weil (gabriel-weil) · 2024-02-12T17:17:59.135Z · LW · GW · 9 comments
Epistemic status: This is a summary of an 80-page law review article that took me ~6 months to research and write. I am highly confident about my characterization of existing U.S. tort law and its limits. This characterization is contested by other legal scholars and analysts only at the margins. My prescriptions are much more controversial, though most of the objections come from people who are skeptical of AI catastrophic risk and/or the political viability of my proposals, rather than from disagreement on the merits (on the assumption that advanced AI systems do generate catastrophic risks).
TLDR: Legal liability could substantially mitigate AI risk, but current law falls short in two key ways: (1) it requires provable negligence, and (2) it greatly limits the availability of punitive damages. Applying strict liability (a form of liability that does not require provable negligence) and expanding the availability and flexibility of punitive damages are both feasible, but will require action by courts or legislatures. Legislatures should also consider acting in advance to create a clear ex ante expectation of liability and to impose liability insurance requirements for the training and deployment of advanced AI systems. The following post is a summary of a law review article. Here is the full draft paper. Dylan Matthews also did an excellent write-up of the core proposal for Vox’s Future Perfect vertical.
AI alignment is primarily a technical problem that will require technical solutions. But it is also a policy problem. Training and deploying advanced AI systems whose properties are difficult to control or predict generates risks of harm to third parties. In economists’ parlance, these risks are negative externalities and constitute a market failure. Absent a policy response, products and services that generate such negative externalities tend to be overproduced. In theory, tort liability should work pretty well to internalize these externalities, by forcing the companies that train and deploy AI systems to pay for the harm they cause. Unlike the sort of diffuse and hard-to-trace climate change externalities associated with greenhouse gas emissions, many AI harms are likely to be traceable to a specific system trained and deployed by specific people or companies.
Unfortunately, there are two significant barriers to using tort liability to internalize AI risk. First, under existing doctrine, plaintiffs harmed by AI systems would have to prove that the companies that trained or deployed the system failed to exercise reasonable care. This is likely to be extremely difficult to prove since it would require the plaintiff to identify some reasonable course of action that would have prevented the injury. Importantly, under current law, simply not building or deploying the AI systems does not qualify as such a reasonable precaution.
Second, under plausible assumptions, most of the expected harm caused by AI systems is likely to come in scenarios where enforcing a damages award is not practically feasible. Obviously, no lawsuit can be brought after human extinction or enslavement by misaligned AI. But even in much less extreme catastrophes where humans remain alive and in control with a functioning legal system, the harm may simply be so large in financial terms that it would bankrupt the companies responsible and no plausible insurance policy could cover the damages. This means that even if AI companies are compelled to pay damages that fully compensate the people injured by their systems in all cases where doing so is feasible, this will fall well short of internalizing the risks generated by their activities. Accordingly, these companies would still have incentives to take on too much risk in their AI training and deployment decisions.
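To make the shortfall concrete, here is a toy back-of-the-envelope calculation. All of the numbers are hypothetical and purely illustrative; they are not drawn from the paper.

```python
# Toy illustration of the judgment-proof problem: even full compensation in every
# case where payment is feasible internalizes only a fraction of the expected harm.
# All figures below are hypothetical assumptions.

firm_assets = 100e9       # assumed total value of the AI company ($100 billion)
insurance_cap = 10e9      # assumed maximum available liability coverage ($10 billion)
max_recoverable = firm_assets + insurance_cap

catastrophic_harm = 5e12  # assumed harm in a survivable-but-severe catastrophe ($5 trillion)
p_catastrophe = 1e-3      # assumed annual probability of that catastrophe

expected_harm = p_catastrophe * catastrophic_harm                              # $5B/year
expected_liability = p_catastrophe * min(catastrophic_harm, max_recoverable)   # $110M/year

print(f"Share of expected harm the company actually internalizes: "
      f"{expected_liability / expected_harm:.1%}")  # ~2.2%
```

Under these assumed numbers, even perfect enforcement in every survivable scenario leaves roughly 98% of the expected harm uninternalized, which is why compensatory damages alone cannot do the work.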
Fortunately, there are legal tools available to overcome these two challenges. The hurdle of proving a breach of the duty of reasonable care can be circumvented by applying strict liability, meaning liability absent provable negligence, to a class of AI harms. There is some precedent for applying strict liability in this context in the form of the abnormally dangerous activities doctrine. Under this doctrine, people who engage in uncommon activities that “create a foreseeable and highly significant risk of physical harm even when reasonable care is exercised by all actors” can be held liable for harms resulting from the inherently dangerous nature of the activity, even if they exercise reasonable care. Courts could, if they are persuaded of the dangers associated with advanced AI systems, treat training and deploying AI systems with unpredictable and uncontrollable properties as an abnormally dangerous activity that falls under this doctrine. Alternatively, state legislatures could pass legislation instructing the courts in their state to apply strict liability in some specified set of AI harm cases.
The second problem of practically non-compensable harms is a bit more difficult to overcome. But tort law does have a tool that can be repurposed to handle it: punitive damages. Punitive damages impose liability on top of the compensatory damages that successful plaintiffs receive for the harm the defendant caused them. And one key rationale for this form of damages is that compensatory damages alone would tend to underdeter the tortious conduct involved. In the framework I propose, punitive damages could be applied in cases of practically compensable harm from AI that are correlated with uninsurable AI risks. In this way, punitive damages would pull forward the expected liability associated with uninsurable risks. Since we cannot hold AI companies responsible when uninsurable catastrophes occur, the only way to internalize uninsurable risks is to hold companies responsible for the expected harm associated with those risks before they are realized. In the next post, I will sketch out the appropriate punitive damages formula and explain how more technically oriented AI safety researchers can help lay the groundwork for implementing it.
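As a rough illustration of how this pull-forward would work, consider the following toy calculation. The numbers are made up, and this is not the formula developed in the paper or the next post; it only shows the structure of the reasoning.

```python
# Toy illustration of "pulling forward" expected liability for uninsurable risks.
# All figures are hypothetical assumptions chosen only to make the logic concrete.

p_catastrophe = 1e-4          # assumed annual probability of an uninsurable catastrophe
harm_catastrophe = 10e12      # assumed harm if it occurs ($10 trillion)
expected_uninsurable = p_catastrophe * harm_catastrophe  # $1B/year of expected harm that
                                                         # compensatory damages can never reach

expected_compensatory = 50e6  # assumed annual expected compensatory damages from correlated
                              # "warning shot" cases that do make it to court

# Punitive top-up needed so total expected liability matches total expected harm
punitive_multiple = expected_uninsurable / expected_compensatory

print(f"Expected uninsurable harm: ${expected_uninsurable:,.0f} per year")
print(f"Punitive damages of roughly {punitive_multiple:.0f}x compensatory damages "
      f"would be needed to internalize it")
```

The point of the toy numbers is just the structure: the larger the uninsurable risk relative to the warning-shot cases that actually reach court, the larger the punitive multiple must be to restore the right incentives.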
Unfortunately, existing punitive damages law is poorly suited to the risks posed by advanced AI systems, in two ways. First, punitive damages require malicious or reckless conduct on the part of the defendant. This means that they are unavailable in cases of ordinary negligence, let alone strict liability. Second, there is no precedent for basing a punitive damages calculation on counterfactual injuries or unrealized risks. Rather, punitive damages are typically used when the defendant has already caused a great deal of harm, but most of those harms are unlikely to result in successful lawsuits. Moreover, unlike with strict liability, there is no pathway for courts to carve out a narrow exception for AI, allowing punitive damages in ordinary negligence or strict liability cases involving AI but not other harms. Eliminating the malice or recklessness requirement and allowing punitive damages calculations to account for unrealized uninsurable risk are both big asks to make of common law courts. To be clear, these are both warranted doctrinal moves that are consistent with the optimal deterrence rationale for punitive damages and within the powers of courts to make. Nonetheless, the sort of punitive damages I call for are more likely to be achieved via legislation.
If state legislatures or Congress do take up AI liability legislation, this could unlock two additional important features. First, legislation could make the availability of strict liability and punitive damages clear well in advance of the first case of legally compensable harm caused by an advanced AI system. Courts, by contrast, have no means of credibly announcing their policies in advance. They can only decide cases as they come and write opinions explaining their rulings. If we are in for a hard takeoff, then there won’t be time for early AI liability cases to work their way through the courts and generate the expectation of liability needed to induce AI companies to move more slowly and cautiously in training and deploying advanced AI systems. Second, state or federal legislation could require companies that train or deploy advanced AI systems to carry liability insurance that scales with potentially dangerous system capabilities, whether anticipated (in the case of training insurance requirements) or measured (in the case of deployment insurance requirements). This would introduce a more cautious decision maker, the insurance company, into the loop. Training or deployment would only be permitted if and when the AI company is able to persuade an insurance company to write it a policy it can afford. This would guard against incautious behavior on the part of AI companies whose key decision-makers either underrate AI risk or do not expect there to be many “warning shot” cases of practically compensable harm that is correlated with uninsurable risk.
If the tort law system can create an expectation that AI companies will be held liable for all of the expected harm generated by their choices, it should induce greater caution. This means both investing more in alignment and moving slower to train and deploy more capable systems. There are important limits to ex post liability, but it can play a crucial role in reducing AI risk, including catastrophic risk, if the technical, legal, and political barriers can be overcome. In a follow-up post, I explain how AI safety researchers can help address the technical barriers to implementing this framework.
9 comments
Comments sorted by top scores.
comment by Michael Roe (michael-roe) · 2024-02-13T15:02:55.625Z · LW(p) · GW(p)
Even a "small" incident, small in the sense that it doesn't kill absolutely everybody, could generate enough liability to wipe out Meta, or Google. For example, something on the scale of flying a jet airplane into the world trade centre.
Even if bankrupting Facebook wasn't enough to cover the actual loss, I think the prospect would be a sufficient deterrent... at least, enough to prevent it happening a second time.
COVID-19 might be influencing our thinking here. Even if COVID wasn't a lab leak, we now know that such lab leaks are at least possible, and lab accidents that kill tens of millions of people are distinctly possible. Which is too big to be covered by tort law.
↑ comment by AnthonyC · 2024-02-13T16:34:32.750Z · LW(p) · GW(p)
Should I be concerned that this kind of reliance on tort law only disincentivizes such "small" incidents, and only when they occur in such a way that the offending entity won't attain control of the future faster than the legal system can resolve a massive and extremely complex case in the midst of the political system and economy trying to resolve the incident's direct aftermath? Because I definitely am.
↑ comment by Gabriel Weil (gabriel-weil) · 2024-02-14T14:18:35.800Z · LW(p) · GW(p)
The version of your concern that I endorse is that this framework wouldn't work very well in worlds where warning shots are rare (or, more to the point, are expected to be rare by the key decision-makers). It can deter large incidents, but only those that are associated with small incidents that are more likely to occur. If the threat models you're most worried about are unlikely to produce such near-misses, then the expectation of liability is unlikely to be a sufficient deterrent. It's not clear to me that there are politically viable policies that would significantly mitigate those kinds of risks, but I plan to address that question more deeply in future work.
↑ comment by AnthonyC · 2024-02-15T12:30:34.172Z · LW(p) · GW(p)
Thanks, that makes sense.
The expected prevalence of warning shots is something I really don't have any sense of. Ideally, of course, I'd like a policy that both increases the likelihood of (rather than disincentivizing) small, early warning shots in the context of paths that, without them, would lead to large incidents, and also disincentivizes all bad outcomes such that companies want to avoid them.
↑ comment by Gabriel Weil (gabriel-weil) · 2024-02-15T18:46:22.578Z · LW(p) · GW(p)
The idea with my framework is that punitive damages would only be available to the extent that the most cost-effective risk mitigation measures the AI system developer/deployer could have taken to further reduce the likelihood and/or severity of the practically compensable harm would also tend to mitigate the uninsurable risk. I agree that there's a potential Goodhart problem here, in which the prospect of liability could give AI companies strong incentives to eliminate warning shots without doing very much to mitigate the catastrophic risk. For this reason, I think it's really important that the punitive damages formula put heavy weight on the elasticity of the particular practically compensable harm at issue with respect to the associated uninsurable risk.
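To make the role of that elasticity weight concrete, here is a minimal sketch with made-up numbers and a made-up functional form; it is not the formula from the paper, just an illustration of the direction of the incentive.

```python
# Minimal sketch of how an elasticity weight could scale pulled-forward liability.
# Here "elasticity" stands in for how much the cheapest measure that would have
# prevented the warning-shot harm would also have reduced the uninsurable risk,
# normalized to [0, 1]. Numbers and functional form are hypothetical.

def punitive_award(expected_uninsurable_harm, expected_compensable_recovery, elasticity):
    """Scale the punitive multiple by how tightly the compensable harm tracks
    the uninsurable risk (elasticity in [0, 1])."""
    base_multiple = expected_uninsurable_harm / expected_compensable_recovery
    return elasticity * base_multiple

# Mitigating the warning shot would also mitigate most of the catastrophic risk:
print(punitive_award(1e9, 50e6, 0.9))    # ~18x compensatory damages
# Mitigating the warning shot would mostly just hide it (the Goodhart case):
print(punitive_award(1e9, 50e6, 0.05))   # ~1x compensatory damages
```

The heavier the weight on high-elasticity harms, the more the liability incentive points at measures that actually reduce the catastrophic risk rather than at measures that merely make warning shots less visible.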
↑ comment by Michael Roe (michael-roe) · 2024-02-13T15:07:02.829Z · LW(p) · GW(p)
"One death is a tragedy, a million deaths is a statistic", attributed to Stalin. There is possibly an analogue in tort law, where if you kill one person by negligence their dependents will sue you, but if you kill a million people by negligence no-one will date mention it.
See also: "if you owe the bank a thousand dollars, you have a problem; if you owe the bank a billion dollars, the bank has a problem."
↑ comment by AnthonyC · 2024-02-15T12:34:05.289Z · LW(p) · GW(p)
The moment any corporation knows it has the ability to kill millions to billions of people, or disrupt the world economy, with AI, it becomes a global geopolitical superpower, which can also really change how much it cares about complying with national laws.
It's a bit like the joke about the asteroid mining business model. 1) Develop the ability to de-orbit big chunks of space rock. 2) Demand money. No mining needed.
comment by PhilosophicalSoul (LiamLaw) · 2024-02-13T06:39:33.163Z · LW(p) · GW(p)
"Unfortunately, there are two significant barriers to using tort liability to internalize AI risk. First, under existing doctrine, plaintiffs harmed by AI systems would have to prove that the companies that trained or deployed the system failed to exercise reasonable care. This is likely to be extremely difficult to prove since it would require the plaintiff to identify some reasonable course of action that would have prevented the injury. Importantly, under current law, simply not building or deploying the AI systems does not qualify as such a reasonable precaution."
Not only this, but it will require extremely expensive discovery procedures which the average citizen cannot afford. This is assuming you can overcome the technical barrier of questions like: but what specifically in our files are you looking for? What about our privacy?
"Second, under plausible assumptions, most of the expected harm caused by AI systems is likely to come in scenarios where enforcing a damages award is not practically feasible. Obviously, no lawsuit can be brought after human extinction or enslavement by misaligned AI. But even in much less extreme catastrophes where humans remain alive and in control with a functioning legal system, the harm may simply be so large in financial terms that it would bankrupt the companies responsible and no plausible insurance policy could cover the damages."
I think joint & several liability regimes will resolve this, in the sense that it's not 100% the company's fault; it'll be shared by the programmers, the operator, and the company.
"Courts could, if they are persuaded of the dangers associated with advanced AI systems, treat training and deploying AI systems with unpredictable and uncontrollable properties as an abnormally dangerous activity that falls under this doctrine."
Unfortunately, in practice, what will really happen is that 'expert AI professionals' will be hired to advise old legal professionals on what's considered 'foreseeable'. This is susceptible to the same corruption, favouritism, and ignorance we see with ordinary crimes. I think ultimately, we'll need lawyers who specialise in both AI and law to really solve this.
"The second problem of practically non-compensable harms is a bit more difficult to overcome. But tort law does have a tool that can be repurposed to handle it: punitive damages. Punitive damages impose liability on top of the compensatory damages that successful plaintiffs receive for the harm the defendant caused them."
Yes. Here I ask: what about legal systems that use delictual law instead of tort law? The names, requirements, and such are different. In other words, you'll get completely different legal treatment for international AIs. This creates a whole new can of worms that defeats legal certainty and the rule of law.
↑ comment by Gabriel Weil (gabriel-weil) · 2024-02-22T21:18:29.622Z · LW(p) · GW(p)
"I think joint & several liability regimes will resolve this. In the sense that, it's not 100% the companies fault; it'll be shared by the programmers, the operator, and the company."
J&S doesn't do much good if the harm is practically non-compensable because no one is alive to sue or be sued, or the legal system is no longer functioning. Even for harms that are merely financially uninsurable, it only enlarges the maximum practically compensable harm by less than an order of magnitude.
"Yes. Here I ask: what about legal systems that use delictual law instead of tort law? The names, requirements and such are different. In other words, you'll get completely different legal treatment for international AI's. This creates a whole new can of worms that defeats legal certainty and the rule of law."
I encourage experts in other legal systems to conduct similar analyses to mine regarding how liability is likely to attach to AI harms and what doctrinal/statutory levers could be pulled to achieve more favorable rules.