Principles For Product Liability (With Application To AI)

post by johnswentworth · 2023-12-10T21:27:41.403Z · LW · GW · 55 comments

Contents

  Principle 1: “User Errors” Are Often Design Problems
  Principle 2: Liability Is Not A Ban
  Principle 3: Failure Modes Of Coase’s Theorem
  Putting It All Together
  Hypothetical Example: Car Misuse
  Real Example: Workers’ Comp
  Negative Examples: Hot Coffee, Malpractice
  Application to AI

There were several responses to What I Would Do If I Were Working On AI Governance [LW(p) · GW(p)] which focused on the liability section, and had similar criticisms. In particular, I’ll focus on this snippet as a good representative:

Making cars (or ladders or knives or printing presses or...) "robust to misuse", as you put it, is not the manufacturer's job.

The commenter calls manufacturer liability for misuse “an absurd overreach which ignores people's agency in using the products they purchase”. Years ago I would have agreed with that; it’s an intuitive and natural view, especially for those of us with libertarian tendencies. But today I disagree, and claim that that’s basically not the right way to think about product liability, in general.

With that motivation in mind: this post lays out some general principles for thinking about product liability, followed by their application to AI.

Principle 1: “User Errors” Are Often Design Problems

There's this story about an airplane (I think the B-17 originally?) where the levers for the flaps and landing gear were identical and right next to each other. Pilots kept coming in to land, and accidentally retracting the landing gear. Then everyone would be pissed at the pilot for wrecking the bottom of the plane, as it dragged along the runway at speed.

The usual Aesop of the story is that this was a design problem with the plane more than a mistake on the pilots' part; the problem was fixed by putting a little rubber wheel on the landing gear lever. If we put two identical levers right next to each other, it's basically inevitable that mistakes will be made; that's bad interface design.

More generally: whenever a product will be used by lots of people under lots of conditions, there is an approximately-100% chance that the product will frequently be used by people who are not paying attention, not at their best, and (in many cases) just not very smart to begin with. The only way to prevent foolish mistakes sometimes causing problems, is to design the product to be robust to those mistakes - e.g. adding a little rubber wheel to the lever which retracts the landing gear, so it’s robust to pilots who aren’t paying attention to that specific thing while landing a plane. Putting the responsibility on users to avoid errors will always, predictably, result in errors.

The same also applies to intentional misuse: if a product is widely available, there is an approximately-100% chance that it will be intentionally misused sometimes. Putting the responsibility on users will always, predictably, result in users sometimes doing Bad Things with the product.

However, that does not mean that it’s always worthwhile to prevent problems. Which brings us to the next principle.

Principle 2: Liability Is Not A Ban

A toy example: a railroad runs past a farmer’s field. Our toy example is in ye olden days of steam trains, so the train tends to belch out smoke and sparks on the way by. That creates a big problem for everyone in the area if and when the farmer’s crops catch fire. Nobody wants a giant fire. (I think I got this example from David Friedman’s book Law’s Order, which I definitely recommend.)

Now, one way a legal system could handle the situation would be to ban the trains. One big problem with that approach is: maybe it’s actually worth the trade-off to have crop fires sometimes. Trains sure do generate a crapton of economic value. If the rate of fires isn’t too high, it may just be worth it to eat the cost, and a ban would prevent that.

Liability sidesteps that failure-mode. If the railroad is held liable for the fires, it may still choose to eat that cost. Probably the railroad will end up passing (at least some of) that cost through to consumers, and consumers will pay it, because the railroad still generates way more value than the fires destroy. Alternatively, maybe the railroad doesn’t generate more value than the fires destroy, and then the railroad is incentivized to just shut down - which is indeed the best-case outcome, if the railroad is destroying more value than it creates.
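To make that concrete, here's a minimal toy sketch (all figures invented for illustration) of the railroad's decision once fire damages are internalized: it keeps running exactly when the value it generates exceeds the harm it must now pay for, which is the comparison a ban never gets to make.

```python
# Toy model: does the railroad keep operating once it must pay for crop fires?
# All numbers are illustrative assumptions, not estimates of anything real.

def railroad_decision(annual_revenue, operating_cost, expected_fire_damages):
    """Return the railroad's choice and its profit once fire damages are internalized."""
    profit_with_liability = annual_revenue - operating_cost - expected_fire_damages
    if profit_with_liability > 0:
        return "keep operating (eat the cost or pass it through)", profit_with_liability
    return "shut down (it destroys more value than it creates)", 0.0

# Case 1: trains generate far more value than the fires destroy.
print(railroad_decision(annual_revenue=1_000_000, operating_cost=600_000,
                        expected_fire_damages=50_000))
# -> keeps operating; liability just prices in the harm.

# Case 2: fires destroy more value than the railroad creates.
print(railroad_decision(annual_revenue=700_000, operating_cost=600_000,
                        expected_fire_damages=250_000))
# -> shuts down, which is the efficient outcome. A ban would force this
#    outcome in Case 1 too, destroying the surplus.
```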

That’s one nice thing about liability, as opposed to outright bans/requirements: liability forces a company to internalize harms, while still allowing the company to do business if the upsides outweigh the downsides.

So that’s the basic logic for relying on liability rather than bans/requirements. Then the next question is: in the typical case where more than one party is plausibly liable to some degree, how should liability be allocated?

Principle 3: Failure Modes Of Coase’s Theorem

Continuing with the example of the steam train causing crop fires: maybe one way to avoid the fires is for the farmer to plant less-flammable crops, like clover. Insofar as that’s the cheapest way to mitigate fires, it might seem sensible to put most of the liability for fires on the farmer, so they’re incentivized to mitigate the fire by planting clover (insofar as it’s not worth it to eat the cost and just keep planting more-flammable crops).

Coase’s theorem argues that, for purposes of economic efficiency, it actually doesn’t matter who the liability is on. If the cheapest way to avoid fires is for the farmer to plant clover, but liability is on the railroad company, then the solution is for the railroad to pay the farmer to plant clover. More generally, assuming that contracts incur no overhead and everyone involved actually forms the optimal contracts, Coase’s theorem says that everyone will end up doing the same thing regardless of who’s liable. Assigning liability to one party or another just changes who’s paying who how much.

… this is not the sort of theorem which applies very robustly to the real world. I actually bring it up mainly to discuss its failure modes.

The key piece is “assuming that contracts incur no overhead and everyone involved actually forms the optimal contracts”. In practice, that gets complicated fast, and the overhead gets large fast. What we want is to allocate liability so that efficient outcomes can happen with overhead and complicated contracts minimized. Usually, that means putting liability on whoever can most cheaply mitigate the harm. If clover is the cheapest way to mitigate fires, then maybe that does mean putting the liability on farmers after all, as seems intuitively reasonable.
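Here's a toy sketch of both halves of that point, with all numbers assumed: when contracting is free, the clover gets planted no matter who holds the liability, but once overhead is added, putting liability on the party who can't directly mitigate either wastes the overhead or prevents the cheap mitigation entirely.

```python
# Toy Coase example: fire damage of 100/year, clover mitigation costs the farmer 30/year.
# All numbers are illustrative assumptions.
FIRE_DAMAGE = 100
CLOVER_COST = 30

def total_cost(liable_party, contracting_overhead):
    """Total social cost of the fire problem, given who holds liability."""
    if liable_party == "farmer":
        # Farmer mitigates directly; no contract needed.
        return CLOVER_COST
    # Railroad is liable. The cheapest fix is still clover, but the railroad
    # must pay the farmer to plant it, which requires a contract.
    pay_farmer_deal = CLOVER_COST + contracting_overhead
    just_pay_damages = FIRE_DAMAGE
    return min(pay_farmer_deal, just_pay_damages)

for overhead in (0, 10, 90):
    print(overhead, total_cost("farmer", overhead), total_cost("railroad", overhead))
# overhead = 0 : both assignments cost 30 (Coase's theorem: liability placement is irrelevant)
# overhead = 10: railroad-liability now costs 40 -- the overhead is pure waste
# overhead = 90: the deal isn't worth striking; the railroad just eats 100 in damages
#                and the cheap mitigation never happens
```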

Putting It All Together

Summarizing those three principles:

  • Principle 1: “User errors” are often design problems. Put a product in front of enough users and mistakes (and misuse) become a statistical certainty; only the design can prevent them in aggregate.
  • Principle 2: Liability is not a ban. It forces a company to internalize harms, while still allowing net-beneficial products to be sold.
  • Principle 3: Coase’s theorem fails under real-world contracting overhead, so liability should usually go to whoever can most cheaply mitigate the harm.

Now let’s put those together.

Insofar as a product is widely available, there is ~zero chance that consumers will actually avoid misusing the product in aggregate (Principle 1), even if “they’re liable” (i.e. failure is quite costly to them). Even if it’s cheap for any given user to Be Careful and Pay Attention at any given time, when multiplied across all the times the product is used by all its users, it ain’t gonna happen. The only way problems from misuse can actually be prevented, in aggregate, is by designing the product to be robust to misuse. So by Principle 3, liability for misuse should usually be allocated to the designer/manufacturer, because they’re the only one who can realistically prevent misuse (again, in aggregate) at all. That way, product designers/manufacturers are incentivized to think about safety proactively, i.e. actively look for ways that people are likely to misuse their products and ways to make the product less-harmful when that misuse inevitably occurs. And companies are incentivized to do all that in proportion to harm and frequency, as is economically efficient.

… and if your knee-jerk reaction is “but if product manufacturers are always liable for any harm having to do with their products, that means nobody can ever sell any products at all!”, then remember Principle 2. Liability is not a ban. Insofar as the product generates way more benefit than harm (as the extremely large majority of products do), the liability will usually get priced in, costs will be eaten by some combination of companies and consumers, and net-beneficial products will continue to be sold.

Now let’s walk through all this in the context of some examples.

Hypothetical Example: Car Misuse

We opened the post with the comment “Making cars (or ladders or knives or printing presses or...) ‘robust to misuse’, as you put it, is not the manufacturer's job.”. So, let’s talk through what the world would look like if car manufacturers were typically liable for “misuse” of cars (both accidental and intentional).

In a typical car accident, the manufacturers of the cars involved would be liable for damages. By Principle 2, this would not mean that nobody can realistically sell cars. Instead, the manufacturer would also be the de-facto insurer, and would probably do all the usual things which car insurance companies do. Insurance would be priced into the car, and people who are at lower accident risk would be able to buy a car at lower cost.
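As a rough sketch of what that pricing might look like (every number below is an assumption for illustration, not an estimate), the sticker price would just be production cost plus the buyer's expected accident liability over the ownership period:

```python
# Toy pricing model for a manufacturer that is also the de-facto insurer.
# All figures are illustrative assumptions.

BASE_PRICE = 25_000          # production cost + margin
AVG_CLAIM = 40_000           # average liability payout per at-fault accident
YEARS_COVERED = 10           # expected ownership period

RISK_PROFILES = {            # assumed annual at-fault accident probabilities
    "low-risk driver": 0.01,
    "average driver": 0.03,
    "high-risk driver": 0.08,
}

for profile, annual_p in RISK_PROFILES.items():
    expected_liability = annual_p * YEARS_COVERED * AVG_CLAIM
    print(f"{profile}: sticker price ~ ${BASE_PRICE + expected_liability:,.0f}")
# low-risk driver: ~ $29,000; average driver: ~ $37,000; high-risk driver: ~ $57,000
```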

The main change compared to our world is that manufacturers would be much more directly incentivized to make cars safer. They’d be strongly incentivized to track what kinds of accidents incur the most costly liability most frequently, and design solutions to mitigate those damages. Things like seatbelts and airbags and anti-lock brakes would probably be invented and adopted faster, with companies directly incentivized to figure out and use such solutions. Likely we’d see a lot of investment in things like e.g. sensors near the driver’s seat to detect alcohol in the air, or tech to actively prevent cell phones near the driver’s seat from receiving texts.

… and from a libertarian angle, this would actually be pretty great! With manufacturers already strongly incentivized to make cars safe, there likely wouldn’t need to be a regulatory body for that purpose. If people wanted to buy cars with no seatbelts or whatever, they could, it would just cost a bunch of extra money (to reimburse the manufacturer for their extra expected liability). And the manufacturers’ incentives would likely be better aligned with actual damages than a regulator’s incentives.

Real Example: Workers’ Comp

Jason Crawford’s excellent post How Factories Were Made Safe [LW · GW] walks this through in detail; I’ll summarize just some highlights here.

A hundred and fifty years ago, workers were mostly responsible for their own injuries on the job, and injuries were common. Lost fingers, arms, legs, etc. Crawford opens with this example:

Angelo Guira was just sixteen years old when he began working in the steel factory. He was a “trough boy,” and his job was to stand at one end of the trough where red-hot steel pipes were dropped. Every time a pipe fell, he pulled a lever that dumped the pipe onto a cooling bed. He was a small lad, and at first they hesitated to take him, but after a year on the job the foreman acknowledged he was the best boy they’d had. Until one day when Angelo was just a little too slow—or perhaps the welder was a little too quick—and a second pipe came out of the furnace before he had dropped the first. The one pipe struck the other, and sent it right through Angelo’s body, killing him. If only he had been standing up, out of the way, instead of sitting down—which the day foreman told him was dangerous, but the night foreman allowed. If only they had installed the guard plate before the accident, instead of after. If only.

Workplace injuries are a perfect case of Principle 1: workers will definitely sometimes not pay attention, or not be maximally careful, or even outright horse around. Workers themselves were, in practice, quite lackadaisical about their own safety, and often outright resisted safety measures! If responsibility for avoiding injuries is on the workers, then in aggregate there will be lots of injuries.

Workers’ comp moved the liability to employers:

Workers’ comp is a “no-fault” system: rather than any attempt at a determination of responsibility, the employer is simply always liable (except in cases of willful misconduct). If an injury occurs on the job, the employer owes the worker a payment based on the injury, according to a fixed schedule. In exchange, the worker no longer has the right to sue for further damages.

The result obviously was not that companies stopped hiring workers. Insofar as a company’s products were worth the cost in injuries, companies generally ate the costs (or passed them through to consumers via higher prices).

And then companies were strongly incentivized to design their workplaces and rules to avoid injury. Safety devices began to appear on heavy machinery. Workplace propaganda and company rules pushed workers to actually use the safety devices. Accident rates dropped to the much lower levels we’re used to today.

As Crawford says:

I was also impressed with how a simple and effective change to the law set in motion an entire apparatus of management and engineering decisions that resulted in the creation of a new safety culture. It’s a case study of a classic attitude from economics: just put a price on the harm—internalize the externality—and let the market do the rest.

Again, I recommend Crawford’s post [LW · GW] for the full story. This is very much a central example of how the sort of liability I’m arguing for is “supposed to” go.

Negative Examples: Hot Coffee, Malpractice

If John Wentworth of 10-15 years ago were reading this post, he’d counterargue: what about that case with the lady who spilled hot coffee in her lap and then sued McDonald’s for like a million dollars? And then McDonald’s switched to serving lukewarm coffee, which everyone complained about for years? According to Principle 2, what should have happened was that McDonald’s ate the cost (or passed it through to consumers), since people clearly wanted hot coffee and the occasional injury should have been an acceptable trade-off. Yet that apparently did not happen. Clearly lawsuits are out-of-control, and this whole “rely on liability” thing makes everyone wayyyyy too risk-averse.

Today, my response would be: that case notably involved punitive damages, i.e. liability far in excess of the damages actually incurred, intended to force the company to behave differently. Under the model in this post, punitive damages are absolutely terrible - they’re basically equivalent to bans/requirements, and completely negate Principle 2. In order for liability to work, there absolutely must not be punitive damages.

(There is one potential exception here: excess damages could maybe make sense in cases where damage is common but relatively few people bring a claim against the company. But the main point here is that, if excess damages are used to force a company to do something, then obviously that breaks Principle 2.)

John Wentworth of 10-15 years ago might reply: ok then, what about medical malpractice suits? Clearly those are completely dysfunctional.

To which I reply: yeah, the problem is that liability isn’t strict enough. That may seem crazy and counterintuitive, but consider a doctor ordering a bunch of not-really-necessary tests to avoid claims of malpractice. Insofar as those tests are in fact completely unnecessary, they don’t actually reduce the chance of harm. Yet (the doctor apparently believes) they reduce the risk of malpractice claims. What’s going wrong in that situation is that there are specific basically-performative things a doctor can do, which don’t actually reduce harm, but do reduce malpractice suits. On the flip side, there’s plenty of basic stuff like e.g. handwashing (or, historically, bedrails) which do dramatically reduce harm, but which doctors/hospitals aren’t very reliable about.

A simple solution is to just make doctors/hospitals liable for harm which occurs under their watch, period. Do not give them an out involving performative tests which don’t actually reduce harm, or the like. If doctors/hospitals are just generally liable for harm, then they’re incentivized to actually reduce it.

Application to AI

Now, finally, we get to AI.

In my previous post [LW · GW], I talked about making AI companies de-facto liable for things like deepfakes or hallucinations or employees using language models to fake reports. Dweomite replied [LW(p) · GW(p)] that these would be “Kinda like if you made Adobe liable for stuff like kids using Photoshop to create fake driver's licenses (with the likely result that all legally-available graphics editing software will suck, forever).”.

Now we have the machinery to properly reply to that comment. In short: it’s a decent analogy (assuming there’s some lawsuit-able harm from fake driver’s licenses). The part I disagree with is the predicted result. What I actually think would happen is that Photoshop would be mildly more expensive, and would contain code which tries to recognize and stop things like editing a photo of a driver’s license. Or they’d just eat the cost without any guardrails at all, if users really hated the guardrails and were willing to pay enough extra to cover liability.

Likewise, if AI companies were generally liable for harms from deepfakes… well, in the short term, I expect that cost would just be passed through to consumers, and consumers would keep using the software, because the fun greatly exceeds the harms. Longer term, AI companies would be incentivized to set up things like e.g. licensing for celebrity faces, detect and shut down particularly egregious users, and make their products robust against jailbreaking.

Similarly for hallucinations, or faked reports. Insofar as the products generate more value than harm, the cost of liability will be passed through to consumers, and people will keep using the products. But AI companies would be properly incentivized to know their customers, to design to mitigate damage, etc. Basically the things people want from a regulatory framework, but without relying on regulators to get everything right (and in practice not get everything right, and be Goodharted against).

That’s the kind of de-facto liability framework I’d ideally like for AI.

55 comments

Comments sorted by top scores.

comment by Dweomite · 2023-12-10T22:11:22.241Z · LW(p) · GW(p)

In a typical car accident, the manufacturers of the cars involved would be liable for damages. By Principle 2, this would not mean that nobody can realistically sell cars. Instead, the manufacturer would also be the de-facto insurer, and would probably do all the usual things which car insurance companies do.

I bet you cannot find any insurance company that will insure you against willful damage that you caused with your vehicle.

Workers’ comp is a “no-fault” system: rather than any attempt at a determination of responsibility, the employer is simply always liable (except in cases of willful misconduct).

Oh, what a remarkable exception!

I think you're making a vital error by mostly-ignoring the difference between user negligence and user malice.

Changing a design to defend against user error is (often) economically efficient because it's a central change that you make one time and it saves all the users the costs of being constantly careful, which are huge in aggregate, because the product is used a large number of times.

Changing a design to defend against user malice is (often) not economically efficient, because for users to defend against their own malice is pretty cheap (malice requires intent; "don't use this maliciously" arguably has negative cost in effort), while making an in-advance change that defeats malicious intent is very hard (because you have an intelligent adversary, and they can react to you, and you can't react to them).

I think Principle 3 is clearly going to put liability for writing fake reports on the person who deliberately told the AI to write a fake report, rather than on the AI maker.

Additionally, the damage that can be caused by malice is practically unbounded.  This is pretty problematic for a liability regime because a single outlier event can plausibly bankrupt the company even if their long-run expected value is positive.

Replies from: johnswentworth, tailcalled
comment by johnswentworth · 2023-12-11T04:06:18.369Z · LW(p) · GW(p)

I bet you cannot find any insurance company that will insure you against willful damage that you caused with your vehicle.

Based on a quick google search, sounds like this is true, though the relevant case law is surprisingly recent.

On reflection, I think the thing I'd ideally like here would be:

  • Product manufacturer/seller is still liable for all damages including from intentional acts, BUT
  • In case of intentional acts, the manufacturer/seller can sue whoever intentionally caused the damage to cover the cost to the manufacturer/seller.

Why this level of indirection, rather than just making the person who intentionally caused damage liable directly? Well, for small damages, the person intentionally causing harm ends up liable. And the person intentionally causing harm always ends up liable for damage to themselves. But the important difference is the case where someone can use a product to cause damage to others far in excess of the damager's own net worth.

A central case here would be guns. If someone goes on a shooting spree, they're quickly going to rack up liability far in excess of their own net worth. And then, under the sort of framework I'm advocating here, the gun manufacturer/seller would be responsible for the bulk of the damages. That sounds to me like a remarkably sensible policy in general: if someone is selling a product with such high potential for harmful misuse, then it's on the seller to know their customer, insure the rest of the world against damage by that customer far in excess of what the customer themselves can cover, etc. Otherwise, this product is too dangerous to sell.
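A minimal sketch of how payments would flow under that two-tier scheme (all numbers illustrative): victims are made whole by the manufacturer up front, which then recovers whatever the intentional wrongdoer can actually pay, leaving the manufacturer holding the tail risk.

```python
# Two-tier liability sketch: manufacturer pays victims up front,
# then sues the intentional wrongdoer for reimbursement.
# All numbers are illustrative assumptions.

def settle(total_damages, wrongdoer_net_worth):
    recovered_from_wrongdoer = min(total_damages, wrongdoer_net_worth)
    manufacturer_bears = total_damages - recovered_from_wrongdoer
    return recovered_from_wrongdoer, manufacturer_bears

# Small harm: the wrongdoer effectively ends up liable.
print(settle(total_damages=50_000, wrongdoer_net_worth=200_000))      # (50000, 0)

# Harm far beyond the wrongdoer's net worth: the manufacturer holds the tail risk,
# which is what prices "know your customer" into products with high misuse potential.
print(settle(total_damages=30_000_000, wrongdoer_net_worth=200_000))  # (200000, 29800000)
```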

Replies from: Dweomite, M. Y. Zuo, aphyer
comment by Dweomite · 2023-12-11T06:16:50.720Z · LW(p) · GW(p)

Who should be second in line for liability (when the actual culprit isn't caught or can't pay) is a more debatable question, I think, but I still do not see any clear reason for a default of assigning it to the product manufacturer.

Your principle 3 says we should assign liability to whoever can most cheaply prevent the problem.  My model says that will sometimes be the manufacturer, but will more often be the victim, because they're much closer to the actual harm.  For instance, it's cheaper to put your valuable heirloom into a vault than it is to manufacture a backpack that is incapable of transporting stolen heirlooms.  Also consider what happens if more than one product was involved; perhaps the thief also wore shoes!

My model also predicts that in many cases both the manufacturer and the victim will have economically-worthwhile mitigations that we'd ideally like them to perform.  I think the standard accepted way of handling situations like that is to attempt to create a list of mitigations that we believe are reasonable for the manufacturer to perform, then presume the manufacturer is blameless if they did those, but give them liability if they failed to do one that appears relevant.  Yes, this is pretty much what you complained about in your malpractice example.  Our "list of reasonable mitigations" will probably not actually be economically optimal, which adds inefficiency, but plausibly less inefficiency than if we applied strict liability to any single party (and thereby removed all incentive for the other parties to perform mitigations).

comment by M. Y. Zuo · 2023-12-11T04:40:35.396Z · LW(p) · GW(p)

Can you lay out step by step, and argument by argument, why that should be the case in a real world legal system like the US? 

It seems very far from currently accepted jurisprudence and legal philosophy.

comment by aphyer · 2023-12-11T16:28:21.182Z · LW(p) · GW(p)

It sounds like this policy straightforwardly implies that 9/11 should have bankrupted Boeing.  Do you endorse that view?

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2023-12-11T18:09:52.731Z · LW(p) · GW(p)

Seems false? The net worth of Boeing is about $150B, 9/11 killed 3,000 people, and a typical life insurance payout is like $150,000, so Boeing would be liable for about $450M. Maybe they’re leveraged such that they can’t eat that cost, but seems unlikely?

Replies from: aphyer, ryan_greenblatt, johnswentworth
comment by aphyer · 2023-12-11T18:26:27.939Z · LW(p) · GW(p)

I am not sure how $150k is a remotely relevant number.  I tried Googling a couple vaguely-similar things:

Googling 'asbestos liability' turned up this page claiming 

The average asbestos settlement amount is typically between $1 million and $2 million, as Mealey's latest findings show. The average mesothelioma verdict amounts are between $5 million and $11.4 million.

and asbestos exposure is to my understanding usually not immediately lethal.  

Googling 'J&J talcum powder' turns up a lot of results like this one:

After 8 hours of deliberations Thursday, a St. Louis jury awarded $4.69 billion to 22 women who sued pharmaceutical giant Johnson & Johnson alleging their ovarian cancer was caused by using its powder as a part of their daily feminine hygiene routine.

The jury award includes $550 million in compensatory damages and $4.14 billion in punitive damages.

which works out to $25M/death even if we entirely ignore the punitive damages portion, and even if we assume that all 22 victims immediately died.

faul_sname, below, links:

Per NHTSA, the statistical value of a human life is $12.5M.

It doesn't seem at all uncommon for liabilities to work out to well upwards of $10M/victim, even in cases with much less collateral damage, much less certain chains of causation, much less press coverage, and victims not actually immediately dying.

$12.5M/victim would be $37.5B, which is still less than Boeing's market cap today (though comparable to what Boeing's market cap was in 2001).  This also ignores all other costs of 9/11: Googling shows 6,000 injuries, plus I think a fairly large amount of property damage (edited to add: quick Googling turns up a claim of $16 billion in property damage to businesses).  
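Pulling the per-victim figures quoted in this thread into one rough Fermi sketch (not a damages estimate; the asbestos midpoint is my interpolation):

```python
# Rough Fermi sketch using figures quoted in this thread; not a real damages estimate.
DEATHS = 3_000
PROPERTY_DAMAGE = 16e9  # claimed property damage to businesses, quoted above

PER_DEATH_FIGURES = {
    "life-insurance payout": 150_000,
    "asbestos settlement (midpoint of $1-2M)": 1_500_000,
    "NHTSA value of a statistical life": 12_500_000,
    "talc-verdict compensatory rate": 25_000_000,
}

for label, per_death in PER_DEATH_FIGURES.items():
    total = DEATHS * per_death + PROPERTY_DAMAGE
    print(f"{label}: roughly ${total / 1e9:.0f}B")
# Totals run from roughly $16B to $91B, before counting the ~6,000 injuries
# or broader economic disruption -- the per-victim figure is doing most of the work,
# which is much of what this thread is arguing about.
```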

And our legal system is not always shy about adding new types of liability - pain and suffering of victims' families?  Disruption of work?  Securities fraud (apparently the stock market dropped $1.4 trillion of value off 9/11)?  

You can argue the exact numbers, I guess, but I nevertheless think that, as a matter of legal realism if nothing else, imposing liability for 9/11 on Boeing would have ended up bankrupting it.  

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2023-12-12T00:16:24.520Z · LW(p) · GW(p)

I just chose a number that seems plausibly related, and determined by market pricing rather than public choice. If we do want to put our faith in the public choice process, looking more closely at wrongful death lawsuits tables from Lawlinq (a lawyer referral website), and various accident attorney websites rather than abnormally large suits seen in the news, they say on average wrongful death suits can range from $250k to $3M+, so about within an order of magnitude of what I said. We can talk about whether that's too high/low, but it is a lot lower than $25M/death. 

It also seems likely a lot of the cost seen in the stock market was from peoples' reactions to the event, not the event itself. Seems strange to hold Boeing responsible for those reactions.

The $35B number seems reasonable, given the article Ryan Greenblatt linked below. In such a case, hopefully Boeing would have prepared well for such a disaster. 

You can argue the exact numbers, I guess, but I nevertheless think that, as a matter of legal realism if nothing else, imposing liability for 9/11 on Boeing would have ended up bankrupting it.  

It does seem reasonable to me to argue the numbers, because the post is saying "if you get the numbers right, then good things will happen, so we should implement this policy". The claim that we will predictably get the numbers wrong, and so if we try to implement the policy bad things will happen seems very different from giving examples of where the economics described in the post seems to break down. In the former scenario we may want to advocate for a modified version of the policy, which consistently gets the numbers right (like Hanson's foom insurance scheme). In the latter, we want to figure out why the economics don't work as advertised, and modify our policy to deal with a more complex world than the standard economic model.

comment by ryan_greenblatt · 2023-12-11T18:29:54.757Z · LW(p) · GW(p)

I think this dramatically understates the total damages of 9/11. This source claims around $35 billion for New York City alone. I think the damages are higher if you include (e.g.) the longer term effects on air travel including the TSA.

Replies from: D0TheMath
comment by Garrett Baker (D0TheMath) · 2023-12-12T00:19:07.826Z · LW(p) · GW(p)

Seems like a cool paper, you're probably right about the $35B number. I do think it's strange to hold Boeing responsible for the actions of the US government after the event though. Seems to violate principle 3.

comment by johnswentworth · 2023-12-11T18:25:23.893Z · LW(p) · GW(p)

+1. My biggest complaint about the various comment threads on this post is that people keep being like "but if companies are liable for X, then the entire industry will go out of business!" and ~every time someone has said that I've thought "Umm, have you actually done a Fermi estimate? That shouldn't be anywhere near enough to shut down the business, especially if they raise prices in response to the liability.".

Replies from: aphyer
comment by aphyer · 2023-12-11T18:58:23.690Z · LW(p) · GW(p)

I have almost the exact inverse frustration, where it seems to me that people are extraordinarily willing to assume 'companies all have gigantic Scrooge McDuck-style vaults that can pay any amount with no effect' to the point where they will accept unexamined a claim that one corporation could easily accept full liability for 9/11.

Replies from: johnswentworth
comment by johnswentworth · 2023-12-12T02:18:01.175Z · LW(p) · GW(p)

I mean, I am often frustrated with that sort of reasoning too, but that's not the thing driving my intuitions here. The thing driving my intuitions here is basically "look, if <product> is very obviously generating vastly more value than all the liability we're talking about, then how on earth are all of its producers going to go out of business?". Applied to the 9/11 example, that logic says something like "look, airplanes obviously generate vastly more value than all the liability of 9/11, so how on earth would that level of liability (and even an order of magnitude or more on top of it) drive airplane manufacturers out of business?".

(Bear in mind here that we need to solve for the equilibrium, not just consider what would happen if that level of liability were suddenly slapped on companies in our current world without warning. The equilibrium world involves higher prices to account for the implicit insurance in nearly all goods and services, and it involves basically every business having liability insurance, and on the other side of the equation it involves the vast majority of regulatory agencies ceasing to exist.)

Replies from: aphyer
comment by aphyer · 2023-12-12T02:51:15.127Z · LW(p) · GW(p)

A potential answer, if you want to consider things through a pure econ lens, of why I would be skeptical of this policy even in a perfect-spherical-cow world without worries about implementation difficulties:

  • I commented this below [LW · GW], but reiterating here: in almost all products, the manufacturer generally does not capture anything remotely close to all of the value of a product they produce.  The value of me having air travel available mostly accrues to, well, me.  Boeing captures only a small portion of it.  It is entirely possible for something to be a large net benefit to the world, and also for the harm it causes to drastically exceed the manufacturer's portion of the benefits.
  • In theory, this can change by having Boeing raise their prices to capture more of the value they create, in order to pay this liability.  (I wish to note in passing that I believe this would have to be quite a large price increase, but I do not consider that central to the argument.  So long as the liability regime is applied evenly, and is impossible to evade by not having deep pockets - perhaps by requiring liability insurance? - all Boeing's competitors will also have to implement the same price rises).
  • At that point, consumers will consume less (I believe often much less) of the product.  Is that efficient?  Sometimes yes, sometimes no.  One central question to consider is whether reducing the amount of a product supplied would linearly reduce the amount of harm done:
    • Say a car sells for $20k, each car has a 1/10k chance of killing someone, you charge the manufacturer $10M if it does, the manufacturer adds $1k to the price of cars to pay this, and people respond by buying fewer cars.  This is a fairly strong case for your liability theory!  The small chance of people being killed by cars was an unpriced externality, which is being correctly internalized and hence reduced.
    • However, suppose that the supply of terrorism is determined by how many religious extremists have gotten mad at the US lately.  In this case, your increase to air prices reduces the amount of air travel but does not reduce the amount of terrorism!  Here, you've simply created a large deadweight loss by reducing the amount of air travel done, with no offsetting benefit.
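A rough numeric sketch of that contrast, with all quantities assumed for illustration: in the first case the liability surcharge reduces harm roughly in proportion to reduced usage, while in the second the harm is unchanged and the forgone usage is pure deadweight loss.

```python
# Toy contrast: does pricing liability into the product actually reduce the harm?
# All numbers are illustrative assumptions.

CARS_BASE = 100_000          # cars sold per year before the liability surcharge
FLIGHTS_BASE = 1_000_000     # flights per year before the surcharge
DEMAND_DROP = 0.05           # assume 5% fewer purchases after prices rise

# Case 1: harm scales with usage (each car has an independent 1/10k chance of a fatality).
deaths_before = CARS_BASE * 1e-4
deaths_after = CARS_BASE * (1 - DEMAND_DROP) * 1e-4
print(f"car fatalities: {deaths_before:.1f} -> {deaths_after:.1f}  (harm falls with usage)")

# Case 2: harm is driven by something else entirely (number of motivated terrorists).
attacks_before = 2
attacks_after = 2            # fewer flights do not change how many extremists are angry
lost_flights = FLIGHTS_BASE * DEMAND_DROP
print(f"attacks: {attacks_before} -> {attacks_after}, "
      f"but {lost_flights:,.0f} flights forgone (pure deadweight loss)")
```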

In our real world, my objections are driven primarily by the ways in which our legal system is not in fact perfectly implemented.  A list that I do not consider complete:

  • I don't think it's uncommon for judgments against deep-pocketed defendants to be imposed vastly in excess of harm done, to be driven primarily by public opinion rather than justice, or to cause huge indirect damages for no real benefit. 
  • I believe that any company that found itself being legally liable for 9/11 would as a factual matter have been utterly destroyed by that liability, even if a perfectly even-handed justice system might have charged it only $100B, and that 'that company having more insurance/higher prices to let it cover those costs' would as a factual matter simply have resulted in it being charged more until it no longer existed, however much that required.
  • I am concerned that this policy would make many large industries be dominated by legal-skills over efficiency to a much greater extent than is already the case (and I kinda think that is already too much the case).  If GM has 1% better lawyers than Toyota, but Toyota has 1% better cars than GM, the more that 'legal liability costs' are a major impact on a company's bottom line the more GM ends up advantaged.
  • You suggest:
  • Product manufacturer/seller is still liable for all damages including from intentional acts, BUT
  • In case of intentional acts, the manufacturer/seller can sue whoever intentionally caused the damage to cover the cost to the manufacturer/seller.
  • I worry that this policy would lead to some interesting incentives around who to sell to.  If Bill Gates goes crazy and uses a chainsaw to murder a bunch of teenagers, the manufacturer can recover from him.  If I do, they cannot.  This means...that...they should charge Bill Gates a lower price than me?  We already have a lot of obnoxious politics around similar issues in car insurance, I'm not enthusiastic about extending that to every other industry.
Replies from: johnswentworth
comment by johnswentworth · 2023-12-12T03:53:13.140Z · LW(p) · GW(p)

It sounds like your most central concerns would mostly be addressed by a system similar to workers' comp. The key piece of workers' comp is that there's a fixed payment for each type of injury. IIUC, there's typically no opportunity to sue for additional damages, no opportunity to "charge more until the company no longer exists", and minimal opportunity for legal skills to make the payments higher/lower.

Would that in fact address your most central concerns?

comment by tailcalled · 2023-12-11T09:23:16.637Z · LW(p) · GW(p)

Couldn't you make both of them liable? Not as a split, but essentially duplicating the liability, so facing $X damage means one can sue the user for $X, and the manufacturer for $X, making it a total of $2X?

Replies from: faul_sname
comment by faul_sname · 2023-12-11T09:46:59.826Z · LW(p) · GW(p)

If you don't put a restriction that you can recover a maximum amount of $X, this creates really bad incentives (specifically, you've just built a "being harmed" bounty)

comment by Andy_McKenzie · 2023-12-11T02:05:12.661Z · LW(p) · GW(p)

A simple solution is to just make doctors/hospitals liable for harm which occurs under their watch, period. Do not give them an out involving performative tests which don’t actually reduce harm, or the like. If doctors/hospitals are just generally liable for harm, then they’re incentivized to actually reduce it.

Can you explain more what you actually mean by this? Do you mean if someone comes into the hospital and dies, the doctors are responsible, regardless of why they died? If you mean that we figure out whether the doctors are responsible for whether the patient died, then we get back to whether they have done everything to prevent it, and one of these things might be ordering lab tests to better figure out the diagnosis, and then it seems we're back to the original problem i.e. the status quo. Just not understanding what you mean. 

Replies from: johnswentworth
comment by johnswentworth · 2023-12-11T03:09:47.511Z · LW(p) · GW(p)

If someone comes into the hospital and dies, the doctors are responsible, regardless of why they died. Same for injuries, sickness, etc. That would be the simplest and purest version, though it would probably be expensive.

One could maybe adjust in some ways, e.g. the doctors' responsibility is lessened if the person had some very legible problem from the start (before they showed up to the doctor/hospital), or the doctor's responsibility is always lessened by some baseline amount corresponding to the (age-adjusted) background rate of death/injury/sickness. But the key part is that a court typically does not ask whether the death/injury/sickness is the doctor's fault. They just ask whether it occurred under the doctor/hospital's watch at all.

Replies from: faul_sname, PeterMcCluskey
comment by faul_sname · 2023-12-11T03:38:55.304Z · LW(p) · GW(p)

Why would you operate a hospital at all under this legal system?

Replies from: johnswentworth
comment by johnswentworth · 2023-12-11T03:54:01.083Z · LW(p) · GW(p)

Because people would pay you to take them under your care.

Replies from: faul_sname
comment by faul_sname · 2023-12-11T04:01:17.415Z · LW(p) · GW(p)

Let's say Jane gets in a serious car crash. Without immediate medical care, she will surely die of her acute injuries and blood loss. With the best available medical care, she has an 80% chance of living and recovering.

Per NHTSA, the statistical value of a human life is $12.5M. As such, by admitting Jane, the hospital faces $2.5M in expected liability.

Are you expecting that Jane will front the $2.5M? Or do you have some other mechanism in mind?

Replies from: None, brglnd
comment by [deleted] · 2023-12-11T04:20:10.355Z · LW(p) · GW(p)

Theoretically the hospital should be liable in this situation for the marginal changes.  So if they manage to save 81% of the car accident victims they face no liability, and if they save 79% they face 1% liability * number of total deaths.

So theoretically Jane's insurance company would have to pay the cost of the premium for Jane's care, plus an extra charge for the overhead of the hospital?  It might be 0.1% -1% of the total tab per death you mentioned?

Obviously there is too much variance to measure this.  So instead what happens is plaintiffs attorneys scour the medical records of some of the patients who died looking for gross negligence in writing, and of course this means the doctors charting everything have an incentive to always say they did everything by the book regardless of what really happened.  Basically the liability system barely works.  

But in this hypothetical the hospital should be liable on the margin.
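As a sketch of that marginal scheme (figures assumed, using the NHTSA value-of-life number from upthread): the hospital owes only for deaths in excess of what the best available care would have produced.

```python
# Sketch of liability "on the margin": the hospital owes only for deaths in excess
# of what the best available care would have produced. All numbers are assumptions.

VSL = 12_500_000            # statistical value of a life, per the NHTSA figure upthread
BEST_SURVIVAL = 0.80        # survival rate achievable with the best available care

def marginal_liability(patients, actual_survival):
    excess_deaths = max(0.0, BEST_SURVIVAL - actual_survival) * patients
    return excess_deaths * VSL

print(marginal_liability(patients=1_000, actual_survival=0.81))  # 0.0 -- beat the benchmark
print(marginal_liability(patients=1_000, actual_survival=0.79))  # 125,000,000 -- 1% shortfall
# The practical problem flagged above: with realistic patient counts, statistical noise
# in survival rates swamps a 1% shortfall, so courts fall back on chart review instead.
```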

Replies from: faul_sname
comment by faul_sname · 2023-12-11T05:00:22.141Z · LW(p) · GW(p)

That does not sound like an improvement over the current US healthcare system in terms of aligning incentives.

"Better than the current US healthcare system in terms of aligning incentives" is not a high bar to clear. Failing to clear it sounds, to me, like a strong indication that the policy proposal needs work.

Replies from: None
comment by [deleted] · 2023-12-11T05:08:32.630Z · LW(p) · GW(p)

I'm not sure how, in this case, making "AI companies liable for deepfakes, hallucinations, fake reports" would equate to anything but just an AI ban.  Being able to fake an image or generate a plausible report is a core functionality of an LLM, in the same way that cutting wood and similar materials is a core functionality of a chainsaw.  Hallucinations are also somewhat core; they are only preventable with secondary checks which will leak some of the time.

You could not sell a chainsaw where the company was liable for any murders committed with the chainsaw.  The chainsaw engine has no way of knowing when it is being misused.  An AI model in a separate session doesn't have the context to know when it's being asked for a deepfake/fake report.  And having it refuse when the model suspects this is just irritating and reduces the value of the model.  

So the real-life liability equivalent is that a chainsaw manufacturer only faces liability around things like a guard that breaks or the lack of a kill switch, etc.  And the actual chainsaw users are companies who buy insurance for chainsaw accidents when they send workers equipped with chainsaws to cut trees, etc.  Liability law kinda works here; there is armor a user can wear that can protect them.  The insurance company may demand that workers always wear their kevlar armor when sawing.

Replies from: faul_sname
comment by faul_sname · 2023-12-11T05:09:44.961Z · LW(p) · GW(p)

Was this a reply to the correct comment?

(Asking because it's a pretty coherent reply to my Deepfakes-R-Us scenario from yesterday's thread)

Replies from: None
comment by [deleted] · 2023-12-11T05:20:11.778Z · LW(p) · GW(p)

It was but now I do want to reply to the main post again.  

Replies from: faul_sname
comment by faul_sname · 2023-12-11T05:55:10.761Z · LW(p) · GW(p)

In that case, I agree with you that "make anyone who develops anything that the government considers to be AI liable for all harms related to that thing, including harms from malicious use of the product that were not intended and could not have reasonably been foreseen" is a de-facto ban on developing anything AI-like while having enough money to be worth suing.

I think the hope was that, by saying "this problem (difficulty of guaranteeing that misuse won't happen) exists, you can't have nice things (AI) until it's solved", that will push people to solve the problem. Attempts like this haven't worked well in the past (see the Jones Act, which was "this problem (not enough American-built ships) exists, you can't have nice things (water-based shipping between one US port and another) until it's fixed"), and I don't expect they'd work well here either.

Replies from: None
comment by [deleted] · 2023-12-11T06:09:44.116Z · LW(p) · GW(p)

So I've thought about it and I think @johnswentworth [LW · GW] is implicitly thinking about context aware AI products.  These would be machines that have an ongoing record of all interactions with a specific human user, like in this [LW · GW]story.  It may also have other sources of information, like the AI may demand you enable the webcam.  

So each request would be more like asking a human professional to do them, and that professional carries liability insurance.  Strict liability doesn't apply to humans, either, though.  Humans can say they were lied to or tricked and get let off the hook, but often are found civilly or criminally liable if "they should have known" or they talk in writing in a way that indicates they know they are committing a crime.  For example, the FTX group chat named "wire fraud".

A human professional would know if they were being asked to make a deepfake, they would conscientiously try to not hallucinate and check every fact for a report, and they would know if they are making a fake document.  I think @johnswentworth [LW · GW] is assuming that an AI system will be able to do this soon.

I think context aware AI products, which ironically would be the only way to even try to comply with a law like this, are probably a very bad idea.  They are extremely complex and nearly impossible to debug, because the context of a user's records, which is a file that will be completely unique for every living human that has ever used an AI, determines the output of the system.  

Tool AI that works on small, short duration, myopic tasks - which means it will be easy to misuse, just like any tool today can be misused - is probably safer and is definitely easier to debug, because the tool's behavior is either correct or not correct within the narrow context the tool was given.  "I instructed the AI to paint this car red using standard auto painting techniques, was it painted or not".  If a human was locked in the trunk and died as a consequence, that's not the painting machine's fault.


Just for more details on myopia: "paint a car" is not a simple task, but you can subdivide it into many tiny subtasks.  Such as "plan where to apply masking" and "apply the masking".  Or "plan spray strokes" and "execute spray stroke n".  You can drill down into an isolated subtask, where "the world" is just what the sensors can perceive inside the car painting bay, and check for task correctness/prediction error and other quantifiable metrics.

comment by lberglund (brglnd) · 2023-12-11T17:33:41.617Z · LW(p) · GW(p)

Maybe if the $12.5M is paid to Jane when she dies, she could e.g. sign a contract saying that she waives her right to any such payments the hospital becomes liable for. 

Replies from: Dweomite
comment by Dweomite · 2023-12-11T19:34:27.917Z · LW(p) · GW(p)

If you allow the provider of a product or service to contract away their liability, I predict in most cases they will create a standard form contract that they require all customers to sign that transfers 100% of the liability to the customer in ~all circumstances, which presumably defeats the purpose of assigning it to the provider in the first place.

Yes, customers could refuse to sign the contract.  But if they were prepared to do that, why haven't they already demanded a contract in which the provider accepts liability (or provides insurance), and refused to do business without one?  Based on my observations, in most cases, ~all customers sign the EULA, and the company won't even negotiate with anyone who objects because it's not worth the transaction costs.

Now, even if you allow negotiating liability away, it would still be meaningful to assign the provider liability for harm to third parties, since the provider can't force third parties to sign a form contract (they will still transfer that liability to the customer, but this leaves the provider as second-in-line to pay, if the customer isn't caught or can't pay).  So this would matter if you're selling the train that the customer is going to drive past a flammable field like in the OP's example.  But if you're going to allow this in the hospital example, I think the hospital doesn't end up keeping any of the liability John was trying to assign them, and maybe even gets rid of all of their current malpractice liability too.

comment by PeterMcCluskey · 2023-12-11T03:46:48.999Z · LW(p) · GW(p)

If someone comes into the hospital

That's a bad criterion to use.

See Robin Hanson's Buy Health proposal for a better option.

comment by 1a3orn · 2023-12-10T23:06:32.934Z · LW(p) · GW(p)

This post consistently considers AI to be a "product." It discusses insuring products (like cars), compares to insuring the product photoshop, and so on.

But AI isn't like that! Llama-2 isn't a product -- by itself, it's relatively useless, particularly the base model. It's a component of a product, like steel or plastic or React or Typescript. It can be used in a chatbot, in a summarization application, in a robot service-representative app, in a tutoring tool, a flashcards app, and so on and so forth.

Non-LLM things -- like segmentation models -- are even further from being a product than LLMs.

If it makes sense to get liability insurance for the open-source framework React, then it would make sense for AI. But it doesn't at all! The only insurance that I know of is for things that are high-level final results in the value chain, rather than low-level items like steel or plastic.

I think it pretty obvious that requiring steel companies to get insurance for the misuse of their steel is a bad idea, one that this post... just sidesteps?

Now we have the machinery to properly reply to that comment. In short: it’s a decent analogy (assuming there’s some lawsuit-able harm from fake driver’s licenses). The part I disagree with is the predicted result. What I actually think would happen is that Photoshop would be mildly more expensive, and would contain code which tries to recognize and stop things like editing a photo of a driver’s license. Or they’d just eat the cost without any guardrails at all, if users really hated the guardrails and were willing to pay enough extra to cover liability.

What's weird about this post is that, until modern DL-based computer vision was invented, this would have actually been an enormous pain -- honestly, one that I think would be quite possibly impossible to implement effectively. Prior to DL it would be even more unlikely that you could, for instance, make it impossible to use photoshop to make porn of someone without also disabling legitimate use -- yet the original post wants to sue ML companies on the basis of their technology being used for that. I dunno man.

Replies from: johnswentworth
comment by johnswentworth · 2023-12-11T00:04:51.702Z · LW(p) · GW(p)

I assume various businesses insure intermediate goods pretty often? Also, whenever two businesses sign a contract for a big order of e.g. steel, the lawyers spend a bunch of time hashing out who will be liable for a gazillion different kinds of problems, and often part of the answer will be "company X is generally responsible for the bulk of problems, in exchange for a price somewhat favoring them".

Not sure why this seems so crazy to you, it seems fairly normal to me.

What's weird about this post is that, until modern DL-based computer vision was invented, this would have actually been an enormous pain -- honestly, one that I think would be quite possibly impossible to implement effectively.

Yeah, so that's a case where the company/consumers would presumably eat the cost, as long as the product is delivering value in excess of the harms.

comment by jessicata (jessica.liu.taylor) · 2023-12-10T23:20:53.235Z · LW(p) · GW(p)

What I actually think would happen is that Photoshop would be mildly more expensive, and would contain code which tries to recognize and stop things like editing a photo of a driver’s license.

So free software would be effectively banned? Both free-as-in-beer (because that can't pay for liability) and free-as-in-speech (because that doesn't allow controlling distribution).

Replies from: johnswentworth
comment by johnswentworth · 2023-12-10T23:33:59.162Z · LW(p) · GW(p)

Limited liability is still a thing, so e.g. open source projects would probably be fine, as long as they're not making any money.

Replies from: aphyer, jessica.liu.taylor, Dweomite
comment by aphyer · 2023-12-11T16:16:15.018Z · LW(p) · GW(p)

Many software products are free even if supplied by a corporation.  For example, the Visual Studio programming environment is free to you and me, but charges for enterprise licenses.

If Microsoft is liable to the full depth of its very deep pockets for the harm of computer viruses I write in Visual Studio, and needs to pay for that out of their profits, they are unlikely to continue offering the free community license they currently do.

comment by jessicata (jessica.liu.taylor) · 2023-12-10T23:41:05.939Z · LW(p) · GW(p)

Wouldn't the individual developers on the project be personally liable if they didn't do it through a LLC?

Replies from: johnswentworth
comment by johnswentworth · 2023-12-10T23:57:44.416Z · LW(p) · GW(p)

Presumably the project itself would be housed in an LLC, so individual developers would be shielded that way.

If the developers didn't go through an LLC at all at any step, then yeah, they'd be liable. But if we're in a world consistently using this sort of liability law, then presumably developers would know to use LLCs, in the same way that developers today know to attach a standard open-source license. (And today's open-source developers can, IIUC, be liable for problems if they don't attach the right license to their software.)

comment by Dweomite · 2023-12-11T19:42:37.886Z · LW(p) · GW(p)

Naive question:  What stops a company from conducting all transactions through LLCs and using them as liability shields?

I'm imagining something like:  Instead of me selling a car to Joe, I create a LLC, loan the LLC the money to buy a car, sell the car to the LLC for the loaned money, the LLC sells the car to Joe for the same price, uses Joe's money to repay the loan, leaving the LLC with zero assets, and no direct business relationship between me and Joe.

I imagine we must already have something that stops this from working, but I don't know what it is.

Replies from: None
comment by [deleted] · 2023-12-12T04:05:08.044Z · LW(p) · GW(p)

So I googled this real quick and found a list of exceptions here.  

  • Signs a personal guarantee of the loan or other business debt and the LLC defaults on its payments
  • Personally and directly harms or injures someone
  • Fails to deposit taxes withheld from the LLC’s employees’ wages
  • Intentionally takes action that is fraudulent, illegal, or reckless that results in damage to the company or harm to somebody else
  • Fails to treat the LLC as a separate legal entity

I think the last reason (failing to treat the LLC as a separate legal entity) is why your proposed scheme won't work.  This would probably also be why a company couldn't shield themselves from liability for harm from their AI product by making a bunch of subsidiaries they wholly own - those aren't really separate entities.

Replies from: Dweomite
comment by Dweomite · 2023-12-12T10:05:54.481Z · LW(p) · GW(p)

Thanks for doing research!

Your link goes on to say:

To ensure that you are treating the LLC as a separate legal entity, the owners must:

  • Avoid co-mingling assets . The LLC must have its own federal employer identification number and business-only checking account. An owner’s personal finances should never be included in the LLC’s accounting books. All business debts should be paid out of the LLC’s dedicated bank account.
  • Act fairly. The LLC should make honest representations regarding the LLC’s finances to vendors, creditors or other interested parties.
  • Operating Agreement. Have all the members executed a formal written operating agreement that sets forth the terms and conditions of the LLC’s existence.

I imagine the bite is in the "act fairly" part?  That sounds distressingly like the judge just squints at your LLC and decides whether they think you're being reasonable.

comment by [deleted] · 2023-12-11T05:38:00.575Z · LW(p) · GW(p)

Ok some clarifying discussion.

You mention in the replies you want strict liability, or de-facto liability, where if harm occurs, even if the AI company took a countermeasure, the AI company is liable for the full damages.

As I understand it, current liability law does not function this way.  You gave an example of "problem was fixed by putting a little rubber wheel on the landing gear lever." in the case of "flaps and landing gear were identical and right next to each other."

All real solutions to engineering problems are probabilistic. There is no such thing as an absolute fix. The little rubber wheel is not enough; under some circumstances a pilot will still grab the wrong lever and pull it. The mitigation just reduces the chances of that happening.

What would happen in court is that the company may win the liability case when an aircraft crash happens, because the pilot evidently didn't feel for the wheel before pulling the lever. This allows the blame to be shifted to the pilot, because the rubber wheel was due diligence.

You gave some examples of harms you wanted AI not to commit:   deep fakes, hallucinations, fake reports.

In each case there are mitigations: additional filters that reduce how often these things happen. But it is not possible to prevent them entirely, especially because users are permitted to open a fresh context window and try as many times as they want. Eventually they will find a way past the filters.

This would show up in the logs in any court case over such harms, and would be used to shift the blame to the user.

A strict liability scheme needs the manufacturer to be able to control the users. For instance, in such a world, all pilots would have to be approved by an aircraft manufacturer in order to fly. All drivers would need automaker approval, and the car would automatically detect bad drivers and ban them from ever using the company's products. And AI companies probably couldn't have public users at all; everyone would have to be a company employee.

It's not a world we are familiar with. Do you agree with this user problem, @johnswentworth [LW · GW]? I think you may have neglected two crucial factors in your analysis:

1. Context. No current machine, including AI, can reliably determine when it is being misused. For example, a terrain-avoidance warning system in an aircraft cannot know when it is faulty, so the pilot can override it.

2. Pathological users. In a strict-liability world, pilots who deliberately crash their planes and reckless human drivers both create liability for the manufacturer. Because of (1), it is not possible for manufacturers to fix this with current technology.

comment by David Hornbein · 2023-12-11T10:50:22.916Z · LW(p) · GW(p)

We can certainly debate whether liability ought to work this way. Personally I disagree, for reasons others have laid out here, but it's fun to think through.

Still, it's worth saying explicitly that as regards the motivating problem of AI governance, this is not currently how liability works. Any liability-based strategy for AI regulation must either work within the existing liability framework, or (much less practically) overhaul the liability framework as its first step.

comment by aphyer · 2023-12-11T16:58:59.898Z · LW(p) · GW(p)

Most sellers of products do not capture 100% of the social benefit of their product.  Even if a seller is a pure monopoly, there is a large amount of consumer surplus.

Even if you manage to avoid punitive damages, and impose liability only equal to actual damage done (something that I do not in fact believe you can get our current legal system to do when a photogenic plaintiff sues a deep-pocketed corporation), this will still inefficiently shut down large volumes of valuable activity whenever:

[total benefit of the product] > [total harm of the product] > [portion of benefit captured by the seller]
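To make the shape of that inequality concrete, here's a minimal sketch with made-up numbers; all of the figures below are illustrative assumptions, not anything from the post:

```python
# Illustrative only: made-up numbers for a single product.
total_benefit = 100_000  # total social benefit the product creates
total_harm = 60_000      # total harm it causes (what liability would charge the seller)
seller_share = 30_000    # portion of the benefit the seller actually captures

# Society is better off with the product on the market:
assert total_benefit > total_harm

# But under full liability the seller pays all of the harm while capturing
# only part of the benefit, so selling becomes unprofitable and the seller exits:
print(seller_share - total_harm)   # -30000

# Net social value lost when the product disappears:
print(total_benefit - total_harm)  # 40000
```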

comment by RogerDearnaley (roger-d-1) · 2023-12-11T00:31:11.159Z · LW(p) · GW(p)

This argument applies to AI safety issues up to the level where a small number of people die. It doesn't apply to x-risks like a sharp left turn or an escaped self-propagating AI takeover, since if we're all dead or enslaved, there is nobody left to sue. It doesn't even apply to catastrophe-level damages like 10 million people dead, where the proportionate damages are much more than the value of the manufacturing company.

I am primarily worried about x-risks. I don't see that as a problem that has any viable libertarian solutions. So I want there to be AI regulation and a competent regulator, worldwide, stat — backed up if needed by the threat of the US military. I regard putting such a regulatory system in place as vital for the survival of the human race. If the quickest and easiest way to get that done is to start with a regulator intended to deal with penny-ante harms like those discussed above, and then ramp up their capabilities fast as the potential harms ramp up, then, even if doing that is not the most efficient economic model for AI safety, I'm still very happy to do something economically inefficient and market-distorting as part of a shortcut to trying to avoid the extinction of the human race.

Replies from: faul_sname, johnswentworth
comment by faul_sname · 2023-12-11T03:32:28.032Z · LW(p) · GW(p)

I appreciate your willingness to state your view clearly and directly.

That said, I don't think that "implement a policy that doesn't work and has massive downsides on a small scale, and no expectation of working better on a large scale, and then scale it up as fast as you can" is likely to help. In fact, I think it's likely to make things worse on the x-risk front as well as mundanely (because the x-risk-focused policy people become "those people who proposed that disastrous policy last time").

comment by johnswentworth · 2023-12-11T03:22:27.754Z · LW(p) · GW(p)

Yup, that's basically right. In the previous post, I mentioned that the goal of this sort of thing for X-risk purposes would be to incentivize AI companies to proactively look for safety problems ahead of time, and actually hold back deployment and/or actually fix the problems in a generalizable way when they do come up. They wouldn't be incentivized to anywhere near the efficient level to avoid X-risk, but they'd at least be incentivized to put in place safety processes with actual teeth at all.

comment by [deleted] · 2023-12-11T04:02:21.739Z · LW(p) · GW(p)

Summary: It looks like you are trying to extend liability law to apply in situations it doesn't currently cover. Currently, a foundation model company could develop a model that is extremely capable, but not directly offer it as an end product in a liability-generating situation. Other companies would license the base model and deploy it as a psychologist or radiologist assistant, or to control robots, etc. These companies would be responsible for testing, licensing, and liability insurance, and if they create more liability than their insurance can handle, these companies would fail, blowing like a fuse. This structure protects the foundation model company. I believe you wish to extend liability law to apply to the foundation model itself.

The argument has been made that OSHA and its European equivalents have made many factories in the USA and Europe uneconomical, so that countries with weaker worker protections (China, Mexico, the Philippines, etc.) have a competitive advantage (and their labor is cheaper regardless).

YouTube videos of actual factories in China provide direct evidence that China's equivalent of OSHA is clearly laxer. I can link some if this point is disputed.

So this seems to be a subset of the general argument against any kind of restriction on AI that will, on net, decelerate its development and adoption. Which slams into the problem that these desires for laws and restrictions are well-meaning... but what exactly happens if foreign companies, exempt from liability and receiving direct government support, start offering the best and clearly most capable models?

Do Western companies:

   1.  License the model? How? Who pays for the liability insurance? It's hosted in foreign data centers. This doesn't seem like it would happen.

   2.  Just pay for whatever amazing new tech foreign companies develop with their capable AIs, the same way the rest of the world today pays Western companies that hold the IP for some of the most valuable products.

   3.  Militarily, this is not a good situation to be in.

 

So it seems to be the same coordination problem that applies to any other AI restrictions.  All major powers need to go along with it or it's actually a bad idea to restrict anything.

comment by Algon · 2023-12-10T22:41:58.914Z · LW(p) · GW(p)

Funny that you say that punitive damages would be a terrible idea. Hanson's proposal for foom liability suggests we do just that, adding punitive damages according to the formula D = (M + H) * F^N, where M > 0 and F > 1 are free parameters of the policy and N is how many of the following 8 conditions contributed to causing harm in this case: self-improving, agentic, wide scope of tasks, intentional deception, negligent owner monitoring, values changing greatly, fighting its owners for self-control, and stealing non-owner property.

This seems pretty reasonable to me, because you can't ascribe damages after a foom. And if you think a foom is unlikely, just set F ≈ 1 and M to some small number.

Quoting the post in full for those too lazy to click on a link:

Compared to some, I am less worried about extreme near-term AI doom scenarios. But I also don’t like policy being sensitive to my or anyone else’s risk estimates. I instead prefer robust policies, ones we can expect to promote total welfare for a wide range of parameter values. We want policies that will give big benefits if foom risk is high, but impose low costs if foom risk is low. In that spirit, let me suggest as a compromise a particular apparently-robust policy for dealing with AI foom risk.

If you recall, the foom scenario requires an AI system that a) is tasked with improving itself. It finds a quite unusually lumpy innovation for that task that is a) secret b) huge c) improves well across a very wide scope of tasks, and d) continues to create rapid gains over many orders of magnitude of ability. By assumption, this AI then improves fast. It somehow e) become an agent with a f) wide scope of values and actions, g) its values (what best explains its choices) in effect change radically over this growth period, yet h) its owners/builders do not notice anything of concern, or act on such concerns, until this AI becomes able to either hide its plans and actions well or to wrest control of itself from its owners and resist opposition. After which it just keeps growing, and then acts on its radically-changed values to kill us all.

Given how specific is this description, it seems plausible that for every extreme scenario like this there are many more “near miss” scenarios which are similar, but which don’t reach such extreme ends. For example, where the AI tries but fails to hide its plans or actions, where it tries but fails to wrest control or prevent opposition, or where it does these things yet its abilities are not broad enough for it to cause existential damage. So if we gave sufficient liability incentives to AI owners to avoid near-miss scenarios, with the liability higher for a closer miss, those incentives would also induce substantial efforts to avoid the worst-case scenarios. 

In liability law today, we hold familiar actions like car accidents to a negligence standard; you are liable if damage happens and you were not sufficiently careful. But for unfamiliar actions, where it seems harder to judge proper care levels, such as for using dynamite or having pet tigers, we hold people to a strict liability standard. As it is hard to judge proper foom care levels, it makes sense to use strict liability there.

Also, if there is a big chance that a harm might happen yet not result in a liability penalty, it makes sense to add extra “punitive” damages, for extra discouragement. Finally, when harms might be larger than the wealth of responsible parties, it makes sense to require liability insurance. That is, to make people at substantial risk of liability prove that they could pay damages if they were found liable.

Thus I suggest that we consider imposing extra liability for certain AI-mediated harms, make that liability strict, and add punitive damages according to the formula D = (M + H) * F^N. Here D is the damages owed, H is the harm suffered by victims, M > 0 and F > 1 are free parameters of this policy, and N is how many of the following eight conditions contributed to causing harm in this case: self-improving, agentic, wide scope of tasks, intentional deception, negligent owner monitoring, values changing greatly, fighting its owners for self-control, and stealing non-owner property.

If we could agree that some sort of cautious policy like this seems prudent, then we could just argue over the particular values of M,F.

Added 7a: Yudkowsky, top foom doomer, says

If this liability regime were enforced worldwide, I could see it actually helping.
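For concreteness, here is a minimal sketch of how a formula of the form D = (M + H) * F^N scales. The parameter values below are purely illustrative assumptions, not numbers from Hanson's proposal:

```python
def foom_damages(harm, n_conditions, m=1_000_000, f=10):
    """Hanson-style punitive damages D = (M + H) * F**N.

    harm: H, the harm suffered by victims (dollars)
    n_conditions: N, how many of the eight foom-risk conditions applied
    m, f: the free policy parameters (M > 0, F > 1); the defaults here are made up
    """
    return (m + harm) * f ** n_conditions

# Ordinary harm, none of the eight conditions present: roughly actual damages plus M.
print(foom_damages(harm=500_000, n_conditions=0))  # 1_500_000

# Same harm with four conditions present: multiplied by F**4 = 10_000.
print(foom_damages(harm=500_000, n_conditions=4))  # 15_000_000_000

# If you think foom is unlikely, set F near 1 and M small, and D stays close to H.
print(foom_damages(harm=500_000, n_conditions=4, m=1, f=1.01))  # ~520_303
```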

comment by Viliam · 2023-12-11T13:38:00.442Z · LW(p) · GW(p)

what about that case with the lady who spilled hot coffee in her lap and then sued McDonald’s for like a million dollars? (...) that case notably involved punitive damages, i.e. liability far in excess of the damages actually incurred, intended to force the company to behave differently. Under the model in this post, punitive damages are absolutely terrible - they’re basically equivalent to bans/requirements, and completely negate Principle 2. In order for liability to work, there absolutely must not be punitive damages.

Reading the Wikipedia page: the lady originally wanted $20,000, roughly 2× her medical expenses. McDonald's offered $800. So the lady had to involve lawyers, and the next offer was to settle for $90,000. McDonald's refused again. Before the trial, a mediator recommended settling for $225,000. McDonald's refused again.

From my perspective, McDonald's brought this on themselves by refusing to pay the original reasonable cost.

Your suggestion is that there should be no punitive damages, only liability for damages actually incurred. That sounds nice in theory. But it ignores the fact that in real life, many companies will do whatever they can to avoid paying, betting that most of their victims cannot spend the time and money to sue them properly. So the actual amount of money paid by the companies would only be 10% or maybe 1% of what it should be.

EDIT:

Do the disagreement votes mean that you disagree with the claim that companies will try to weasel out of paying by various means, so that in reality they will end up paying much less than they should in theory? I would like to hear an argument to the contrary, because I have difficulty imagining it.

comment by faul_sname · 2023-12-11T05:05:08.415Z · LW(p) · GW(p)

I definitely should have asked this before, but better late than never.

Before we go stomping all over your post any more than we already have... was this meant to be a concrete proposal, or is it more of a butterfly idea [LW · GW]?

Your writing is pretty polished so it's easy to forget that the underlying ideas might not be intended to be taken as-is.

Replies from: johnswentworth
comment by johnswentworth · 2023-12-11T05:14:28.207Z · LW(p) · GW(p)

This definitely was not intended to be a polished idea, but I don't mind folks stomping all over it. The criticism has been higher-quality than what I get on, like, 97 or 98% of posts, and it's pushing toward good adjustments to the core idea.

comment by Jiro · 2023-12-12T17:16:58.737Z · LW(p) · GW(p)

Please stop using the term "Aesop" for a moral. Yes, everyone knows the reference and can figure out what it means. It's still stupid. It comes from TV Tropes as an example of bad trope naming and has spread too much.

comment by ajc586 (Adrian Cable) · 2023-12-11T18:16:18.578Z · LW(p) · GW(p)

I think one of the reasons punitive damages sometimes make sense is that the total 'damage' (in the colloquial, not the legal, sense) can include not just an economic component, but a societal component beyond that.

Here's an example: suppose a company is concerned about wrongful-death suits. There are basically two levers available: (1) spend $X on making the work environment safer, or (2) put $Y aside to cover the cost of such suits; of course it's not strictly either/or. Although this didn't turn out to be the case in the 'factories' example (so things turned out well for society), in a given situation, depending on the numbers, the best economic option may be a lot of (2) and not much of (1). In that scenario, people will continue to die even if X - Y is very small (in the world where Y does not include punitive damages), i.e. even when lives could be saved at a very low additional cost to the company. Punitive damages are a tool to make this less likely, by imposing a large cost in (for example) a wrongful-death scenario even when X - Y (without punitive damages) is small.
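A minimal sketch of the tradeoff being described, with made-up numbers (all figures below are illustrative assumptions):

```python
# Illustrative only: a company weighing safety spending against expected payouts.
safety_upgrade_cost = 10_000_000  # X: cost of making the work environment safer
expected_payouts = 9_500_000      # Y: expected wrongful-death payouts if it doesn't

# Without punitive damages, skipping the upgrade is (barely) the cheaper option,
# even though X - Y is small and lives are at stake:
print(safety_upgrade_cost - expected_payouts)  # 500_000

# A punitive multiplier inflates Y, flipping the comparison in favor of the upgrade:
punitive_multiplier = 3
print(safety_upgrade_cost < expected_payouts * punitive_multiplier)  # True
```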

If you argue for no punitive damages in civil cases, the same argument could be made for criminal cases, i.e. no jail time and all damages awarded are purely economic. In that case, you could murder someone if you had the resources to pay the 'fare' of the economic damage this would cause. Which, at a societal level, isn't much different from companies being able to let fatal accidents continue to occur when the workers' comp payouts are more affordable than implementing improved safety protocols, if there's no mechanism available to 'artificially' inflate the costs to the company in a wrongful-death scenario, like punitive damages.