Liability regimes for AI
post by Ege Erdil (ege-erdil) · 2024-08-19T01:25:01.006Z · LW · GW · 34 comments
For many products, we face a choice of who to hold liable for harms that would not have occurred if not for the existence of the product. For instance, if a person uses a gun in a school shooting that kills a dozen people, there are many legal persons who in principle could be held liable for the harm:
- The shooter themselves, for obvious reasons.
- The shop that sold the shooter the weapon.
- The company that designs and manufactures the weapon.
Which of these is best? In this post, I'll offer a brief and elementary economic analysis of how this decision should be made.
The important concepts from economic theory to understand here are Coasean bargaining and the problem of the judgment-proof defendant.
Coasean bargaining
Let's start with Coasean bargaining: in short, this idea says that regardless of who the legal system decides to hold liable for a harm, the involved parties can, under certain conditions, slice the liability arbitrarily among themselves by contracting and reach an economically efficient outcome. Under these conditions and assuming no transaction costs, it doesn't matter who the government decides to hold liable for a harm; it's the market that will ultimately decide how the liability burden is divided up.
For instance, if we decide to hold shops liable for selling guns to people who go on to use them in acts of violence, the shops could demand that prospective buyers purchase insurance against the risk of their committing a criminal act. The insurance companies could then analyze who is more or less likely to engage in such an act of violence and adjust premiums accordingly, or refuse coverage altogether to e.g. people with previous criminal records. This would make guns less accessible overall (because there's a background risk of anyone committing a violent act using a gun) and also differentially less accessible to those seen as more likely to become violent criminals. In other words, we don't lose the ability to deter individuals by deciding to impose the liability on other actors in the chain, because those actors can simply find ways of passing on the cost.
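To make the pass-through mechanism concrete, here is a minimal sketch in Python with entirely invented numbers: the shop (or its insurer) charges each buyer a premium equal to the expected liability from selling to them, and refuses sales where that premium swamps the price.

```python
# Toy illustration (all numbers invented) of how a shop held liable for
# downstream harms could pass the expected cost on to buyers as a
# risk-adjusted premium, or refuse risky sales outright.

def risk_premium(p_misuse: float, damages_if_misused: float) -> float:
    """Expected liability cost the shop (or its insurer) must recover from this buyer."""
    return p_misuse * damages_if_misused

BASE_PRICE = 500.0                 # price of the gun absent any liability
DAMAGES_IF_MISUSED = 5_000_000.0   # settlement owed if this gun is used in a shooting

buyers = {
    "clean_record": 1e-6,          # assumed probability of misuse (illustrative)
    "prior_violent_record": 2e-3,
}

for buyer, p in buyers.items():
    premium = risk_premium(p, DAMAGES_IF_MISUSED)
    if premium > 10 * BASE_PRICE:  # arbitrary cutoff beyond which the sale isn't worth making
        print(f"{buyer}: sale refused (premium {premium:,.0f} swamps the price)")
    else:
        print(f"{buyer}: price {BASE_PRICE + premium:,.2f} (includes premium {premium:,.2f})")
```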
The judgment-proof defendant
However, what if we instead imagine imposing the liability on individuals? We might naively think there's nothing wrong with this: anyone who used a gun in a violent act would be required to pay compensation to the victims, which in principle could be set high enough to deter offenses even by wealthy people. The problem we run into is that most school shooters have little in the way of assets, and certainly not enough to compensate the victims and the rest of the world for all the harm they have caused. In other words, they are judgment-proof: the best we can do when we catch them is put them in jail or execute them. In these cases, Coasean bargaining breaks down.
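As a minimal numeric sketch of why this breaks down (numbers invented): whatever judgment a court awards, the amount that can actually be collected is capped by the defendant's assets, so for a defendant with almost nothing the size of the judgment stops mattering.

```python
# Toy sketch (invented numbers) of the judgment-proof defendant: the
# compensation actually collectable is capped by the defendant's assets,
# so raising the nominal judgment adds no further deterrence or redress.

def collectable(judgment: float, assets: float) -> float:
    return min(judgment, assets)

HARM = 50_000_000.0  # total harm caused, which the judgment is meant to cover

defendants = {
    "individual shooter": 2_000.0,
    "large manufacturer": 10_000_000_000.0,
}

for name, assets in defendants.items():
    paid = collectable(HARM, assets)
    print(f"{name}: owes {HARM:,.0f}, can actually pay {paid:,.0f} ({paid / HARM:.4%} of the harm)")
```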
We can try to recover something like the previous solution by legally mandating that such people buy insurance against civil or criminal liability, so that they are no longer judgment-proof: the insurance company has deep coffers to pay out large settlements if necessary, and also an incentive to turn away people who seem like risky customers. However, law is not magic, and someone who refuses to follow this law would still, in the end, be judgment-proof.
We can see this in the following example: suppose the shooter doesn't legally purchase the gun from the shop but steals it instead. Given that the shop will not be held liable for anything, it's only in their interest to invest in security for ordinary business reasons; they have no incentive to take precautions beyond what makes sense for, say, a laptop store. Because the shooter obtains the gun illegally, they can then go and carry out a shooting without any regard for something such as "a requirement to buy insurance". In other words, even a law requiring people to buy insurance before being able to legally purchase a gun doesn't solve the problem of the judgment-proof defendant in this case.
The way to solve the problem of the judgment-proof defendant is obvious: we should impose the liability on whoever is least likely to be judgment-proof, which in practice will be the largest company involved in the process, with a big pile of cash and a lot of credibility to lose if it is hit with a large settlement. That company can then use Coasean bargaining where appropriate to divide up this cost as far as it can under the constraints it faces. If companies are able to escape liability by splitting up into smaller companies, that is also a problem, so we would have to structure the regime in a way that makes this impossible, for example by holding all producers liable in an industry where production already has big economies of scale.
Transaction costs and economies of scale
The problem with this solution is that it gives an advantage to bigger companies. This is by design: a bigger company is less likely to be judgment-proof simply because it gets to average the risk of selling guns over a larger customer base, so any single bad event is less likely to be one for which the company can't afford a settlement. However, it means we should expect a trend towards increased market concentration under such a liability regime, which might be undesirable for other reasons.
A smaller company can try to compete by buying insurance against the risk of being sued, which is itself another example of a Coasean solution, but this still doesn't remove the economies of scale introduced by our solution, because in the real world such bargaining has transaction costs. Because transaction costs are in general concave in the amount being transacted, large companies will still have an advantage over smaller companies, and this is ignoring the possibility that certain forms of insurance may be illegal to offer in some jurisdictions.
Summary and implications for AI
So, we end up with the following simple analysis:
1. In industries where the problem of the judgment-proof defendant is serious, for example with technologies that can do enormous amounts of harm if used by the wrong actors, we want the liability to be legally imposed on as big a base as possible. A simple form of this is to hold the biggest company involved in production liable, though there are more complex solutions.
2. In industries where the problem of the judgment-proof defendant is not serious, we want to impose the liability on whoever can locally do the most to reduce the risk of the product being used to do harm, as this gives the best local incentives and minimizes the Coasean transaction costs that must be incurred. In most cases this will be the end users of a product, though not always.
For AI, disagreements about liability regimes seem to mostly arise out of whether people think we're in world (1) or world (2). Probably most people agree the solution recommended in (1) creates "artificial" economies of scale favoring larger companies, but people who want to hold big technology companies or AI labs liable instead of end users think the potential downside of AI technology is very large, so the end users will be judgment-proof given the scale of the harms the technology could do. It's plausible even the big companies are judgment-proof (e.g. if billions of people die or the human species goes extinct) and this might need to be addressed by other forms of regulation, but if we focus only on the liability regime, we still want as big a base as we can get.
In contrast, if you think the relevant risks from AI look like people using their systems to do some small amounts of harm which are not particularly serious, you'll want to hold the individuals responsible for these harms liable and spare the companies. This gives the individuals the best incentives to stop engaging in misuse, and it reduces transaction costs that would be bad in themselves and would also exacerbate the trend towards industry concentration.
Unless people can agree on what the risks of AI systems are, it's unlikely that they will be able to agree on what the correct liability regime for the industry should look like. Discussion should therefore switch to making the case for large or small risks from AI, depending on what advocates might believe, and away from details of specific proposals which obscure more fundamental disagreements about the facts.
34 comments
Comments sorted by top scores.
comment by Dagon · 2024-08-19T02:46:21.374Z · LW(p) · GW(p)
> many legal persons who in principle could be held liable for the harm:
> - The shooter themselves, for obvious reasons.
> - The shop that sold the shooter the weapon.
> - The company that designs and manufactures the weapon.
I think there's a lot more options than that!
4. the individual clerk who physically handed the weapon to the shooter.
5. the shooter's biological father, who failed to role-model a non-shooting lifestyle.
6. the school district or administrator who allowed the unstable student to continue attending. And who failed to stop the actual act.
7. (obPython) Society is to blame. Right! We'll arrest them instead.
8. The manufacturer or seller of the ammunition.
9. the miner of the lead and copper for the bullet (the actual harmful object).
10. The victims, for failing to protect themselves.
11. Media and consumers, for covering and promoting the shooting.
12. The families of the victims, for sending them to that school.
13. The estate of John Moses Browning, for making modern firearms so effective.
Really, there's enough liability to go around - almost ANY change in the causal chain of the world COULD have prevented that specific tragedy.
Replies from: shankar-sivarajan, tailcalled
↑ comment by Shankar Sivarajan (shankar-sivarajan) · 2024-08-20T03:42:05.017Z · LW(p) · GW(p)
14. The gun itself.
I thought that's where you were going with this!
Relevant smbc.
↑ comment by tailcalled · 2024-08-19T07:25:14.196Z · LW(p) · GW(p)
Seems like for the Coasean bargain to work, you have to assign liability to someone along the chain of trades leading to the harm. This complicates 5-7 and 10-12.
Though one could say that the gun trade was only one of the chains, and society-parents-school is another chain, for the placement of the shooter rather than the gun. But it seems like they already receive a good deal of punishment, so it's unclear how meaningfully it can be changed.
Replies from: Dagon
↑ comment by Dagon · 2024-08-19T19:13:53.153Z · LW(p) · GW(p)
Honestly, I expected to get downvoted for my somewhat tongue-in-cheek response, though it does work as a reductio argument.
I think my main point would be that Coase's theorem is great for profitable actions with externalities, but doesn't really work for punishment/elimination of non-monetary-incented actions where the cost is very hard to calculate. The question of blame/responsibility after the fact is almost unrelated to the tax/fee/control of decision before the action is taken.
There's no bargain involved in a shooting - there's no profit that can be shared with those hurt by it.
Replies from: jimmy, jmh
↑ comment by jimmy · 2024-08-23T23:39:31.463Z · LW(p) · GW(p)
> I think my main point would be that Coase's theorem is great for profitable actions with externalities, but doesn't really work for punishment/elimination of non-monetary-incented actions where the cost is very hard to calculate.
This brings up another important point which is that a lot of externalities are impossible to calculate, and therefore such approaches end up fixating on the part that seems calculable without even accounting for (or even noticing) the incalculable part. If the calculable externalities happen to be opposed to larger incalculable externalities, then you can end up worse off than if you had never tried.
As applied to the gun externality question, you could theoretically offer a huge payday to the gun shop that sold the firearm used to stop a spree shooting in progress, but you still need a body to count before paying out. It's really hard to measure the number of murders which didn't happen because the guns you sold deterred the attacks. And if we accept the pro 2A arguments that the real advantage of an armed populace is that it prevents tyranny, that's even harder to put a real number on.
I think this applies well to AI, because absent a scenario where gray goo rearranges everyone into paperclips (in which case everyone pays with their life anyway), a lot of the benefits and harms are likely to be illegible. If AI chatbots end up swaying the next election, what is the dollar value we need to stick on someone? How do we know if it's even positive or negative, or if it even happened? If we latch onto the one measurable thing, that might not help.
Replies from: ege-erdil
↑ comment by Ege Erdil (ege-erdil) · 2024-08-24T00:21:07.825Z · LW(p) · GW(p)
> This brings up another important point which is that a lot of externalities are impossible to calculate, and therefore such approaches end up fixating on the part that seems calculable without even accounting for (or even noticing) the incalculable part. If the calculable externalities happen to be opposed to larger incalculable externalities, then you can end up worse off than if you had never tried.
I think this is correct as a conditional statement, but I don't think one can deduce the unconditional implication that attempting to price some externalities in domains where many externalities are difficult to price is generally bad.
> As applied to the gun externality question, you could theoretically offer a huge payday to the gun shop that sold the firearm used to stop a spree shooting in progress, but you still need a body to count before paying out.
The nice feature of positive payments by the government (instead of fines, i.e. negative payments by the government) is that the judgment-proof defendant problem goes away, so there's no reason to actually make these payments to the gun shop at all: you can just directly pay the person who stops the shooting, which probably provides much better incentives to be a Good Samaritan without the shop trying to pass along this incentive to gun buyers.
> I think this applies well to AI, because absent a scenario where gray goo rearranges everyone into paperclips (in which case everyone pays with their life anyway), a lot of the benefits and harms are likely to be illegible. If AI chatbots end up swaying the next election, what is the dollar value we need to stick on someone? How do we know if it's even positive or negative, or if it even happened? If we latch onto the one measurable thing, that might not help.
I don't agree that most of the benefits of AI are likely to be illegible. I expect plenty of them to take the form of new consumer products that were not available before, for example. "A lot of the benefits" is a weaker phrasing and I don't quite know how to interpret it, but I thought it's worth flagging my disagreement with the adjacent phrasing I used.
Replies from: jimmy
↑ comment by jimmy · 2024-09-13T20:26:55.560Z · LW(p) · GW(p)
> I think this is correct as a conditional statement, but I don't think one can deduce the unconditional implication that attempting to price some externalities in domains where many externalities are difficult to price is generally bad.
It's not "attempting to price some externalities where many are difficult to price is generally bad", it's "attempting to price some externalities where the difficult to price externalities on the other side is bad". Sometimes the difficulty of pricing them means it's hard to know which side they primarily lie on, but not necessarily.
The direction of legible/illegible externalities might be uncorrelated on average, but that doesn't mean that ignoring the bigger piece of the pie isn't costly. If I offer "I'll pay you twenty dollars, and then make up some rumors about you which may or may not be true and may greatly help or greatly harm your social standing", you don't think "Well, the difficult part to price is a wash, but twenty dollars is twenty dollars"
> you can just directly pay the person who stops the shooting,
You still need a body.
Sure, you can give people like Elisjsha Dicken a bunch of money, but that's because he actually blasted someone. If we want to pay him $1M per life he saved, though, how much do we pay him? We can't simply go to the morgue and count how many people aren't there. We have to start making assumptions, modeling the system, and paying out based on our best guesses of what might have happened in what we think to be the relevant hypothetical. Which could totally work here, to be clear, but it's still a potentially imperfect attempt to price the illegible, and it's not a coincidence that this was left out of the initial analysis I'm responding to.
But what about the guy who stopped a shooting before it began, simply by walking around looking like the kind of guy who would stop a spree killer before he accomplished much? What about the good role models in the potential shooter's life that led him onto the right track and stopped a shooting before it was ever planned? This could be ten times as important and you wouldn't even know without a lot of very careful analysis. And even then you could be mistaken, and good luck creating enough of a consensus on your program to pay out what you believe to be the appropriate amount to the right people who have no concrete evidence to stand on. It's just not gonna work.
> I don't agree that most of the benefits of AI are likely to be illegible. I expect plenty of them to take the form of new consumer products that were not available before, for example.
Sure, there'll be a lot of new consumer products and other legible stuff, but how are you estimating the amount of illegible stuff and determining it to be smaller? That's the stuff that by definition is going to be harder to recognize, so you can't just say "all of the stuff I recognize is legible, therefore legible>>illegible".
For example, what's the probability that AI changes the outcome of future elections and political trajectory, is it a good or bad change, and what is the dollar value of that compared to the dollar value of ChatGPT?
↑ comment by jmh · 2024-08-21T21:12:07.035Z · LW(p) · GW(p)
I do agree with your point, but I think you are creating a bit of a strawman here. I think the OP's goal was to present situations in which we need to consider AI liability, and two of those situations would be where Coasean bargaining is possible and where it fails due to the (relatively) judgment-proof actor. I'd also note that legal trends have tended to be to always look for the entity with the deepest pockets that you have some chance of blaming.
So while the example of the gun is a really poor case to apply Coase to, I'm not sure that really detracts from the underlying point/use of Coasean bargaining with respect to approaches to AI liability or understanding how to look at various cases. I don't think the claim is that AI liability will be all one type or the other. But I think the ramification here is that trying to define a good, robust AI liability structure is going to be complex and difficult. Perhaps to the point that we shouldn't really attempt to do so in a legislative setting, but maybe in a combination of market risk management (insurance) and courts via tort complaints.
But that also seems to be an approach that will result in a lot of actual harms done as we all figure out where the good equilibrium might be (assuming it even exists).
comment by Raemon · 2024-08-23T18:07:56.677Z · LW(p) · GW(p)
Curated.
Liability law seems like one of the more promising tools for regulating AI, but I previously had had a fairly vague understanding of what principles should inform how to argue about it. This post basically gives me one important-seeming gear for thinking about it in a principled way.
I also like that the post is short and to the point.
comment by Jsevillamol · 2024-08-19T05:10:45.109Z · LW(p) · GW(p)
The ability to pay liability is important to factor in, and this illustrates it well. For the largest prosaic catastrophes this might well be the dominant consideration.
For smaller risks, I suspect in practice mitigation, transaction and prosecution costs are what dominates the calculus of who should bear the liability, both in AI and more generally.
comment by Michael Roe (michael-roe) · 2024-08-20T10:46:26.495Z · LW(p) · GW(p)
Also note that Open Source precludes doing this ...
The basic Open Source deal is that absolutely anyone can take the product and do whatever they like with it, without paying the supplier anything.
So
- The vendor cannot prevent the customer from doing something bad with the product (if there is a line of code that says "don't do this bad thing", then the customer can just delete it).
- The vendor also cannot charge the customer an insurance premium based on how likely the customer is to do something bad with the product.
... which would suggest that Open Source is only viable in areas where there isn't much third party liability.
Replies from: ege-erdil, mruwnik, rhollerith_dot_com, kaustubh-kislay
↑ comment by Ege Erdil (ege-erdil) · 2024-08-20T13:24:17.644Z · LW(p) · GW(p)
Open source might be viable if it's possible for the producers to add safeguards into the model that cannot be trivially undone by cheap fine-tuning, but yeah, I would agree with that given the current lack of techniques for doing this successfully.
↑ comment by mruwnik · 2024-08-26T13:59:55.542Z · LW(p) · GW(p)
A bit of nitpicking: the basic Open Source deal is not that you can do what you want with the product. It's that the source code should be available. The whole point of introducing open source as an idea was to allow corporations etc. to give access to their source code without worrying so much about people doing what you're describing. Deleting a "don't do this bad thing" can be prosecuted as copyright infringement (if the whole license gets removed). This is what copyleft was invented for - to subvert copyright laws by using them to force companies to publish their code.
There are licenses like MIT which do what you're describing. Others are less permissive, and e.g. only allow you to use the code in non-commercial projects, or stipulate that you have to send any fixes back to the original developer if you're planning on distributing it. The GPL is a fun one, which requires any code that is derivative of it to also be open sourced.
Also, Open Source can very much be a source of liability, e.g. the SCO v. IBM case, which was trying to get people to pay for Linux (patent trolls being what they are), or Oracle v. Google, where Oracle (arguably also patent trolls) wanted Google to pay billions for use of the Java API (this ended up in the Supreme Court).
Replies from: shankar-sivarajan, rhollerith_dot_com, michael-roe
↑ comment by Shankar Sivarajan (shankar-sivarajan) · 2024-08-27T02:37:39.166Z · LW(p) · GW(p)
> not that you can do what you want with the product. It's that the source code should be available.
Since the inception of the term, "Open Source" has meant more than that. You're describing "source-available software" instead.
↑ comment by RHollerith (rhollerith_dot_com) · 2024-08-28T22:46:27.636Z · LW(p) · GW(p)
> There are licenses that only allow you to use the code in non-commercial projects
But they are emphatically not considered open-source licenses by the Open Source Initiative and are not considered Free-Software licenses by the Free Software Foundation, positions that have persisted uninterrupted since the 1990s.
↑ comment by Michael Roe (michael-roe) · 2024-08-28T21:20:47.528Z · LW(p) · GW(p)
This is pretty much why many people thought that the term "Open Source" was a betrayal of the objectives of the Free Software movement.
"Free as in free speech, not free beer" has implications that "well, you can read the source" lacks.
Replies from: rhollerith_dot_com
↑ comment by RHollerith (rhollerith_dot_com) · 2024-08-28T23:08:49.212Z · LW(p) · GW(p)
There's a lot more to the Open Source Definition than, "well, you can read the source". Most of the licenses approved by the Open Source Initiative have also been approved by the Free Software Foundation.
↑ comment by RHollerith (rhollerith_dot_com) · 2024-08-28T23:30:39.621Z · LW(p) · GW(p)
So far, lawmakers at least in the US have refrained from passing laws that impose liability on software owners and software distributors: they have left the question up to the contract (e.g., the license) between the software owner and the software user. But there is nothing preventing them from passing laws on software (or AI models) that trump contract provisions -- something they routinely do in other parts of the economy: in California, for example, the terms of the contract between tenant and landlord, i.e., the lease, hardly matter at all because there are so many state laws that override whatever is in the lease.
Licenses are considered contracts at least in the English-speaking countries: the act of downloading the software or using the software is considered acceptance of the contract. But, like I said, there are tons of laws that override contracts.
So, the fact that the labs have the option of releasing models under open-source-like licenses has very little bearing on the feasibility, effectiveness or desirability of a future liability regime for AI as discussed in the OP -- as long as lawmakers cooperate in the creation of the regime by passing new laws.
↑ comment by Kaustubh Kislay (kaustubh-kislay) · 2024-08-26T10:37:05.378Z · LW(p) · GW(p)
This may be a reason why Meta/Llama is making their models open source. In a future where Coasean bargaining comes into play for larger companies, which it most likely will, Meta may have a cop-out by making their models open source. Obviously, like you said, there will then have to be some restrictions on open source in regards to AI models, but "open source-esque" models may be the solution for companies such as OpenAI and Anthropic to avoid liability in the future.
comment by Jakub Supeł (jakub-supel) · 2024-08-23T21:07:41.789Z · LW(p) · GW(p)
Quite apart from the application of this argument to AI, the example of a gun shop/manufacturer is quite bad. One reason is that passing on the negative externalities of selling a gun without passing on the positive externalities* (which is never done in practice and would be very difficult to do) creates an asymmetry that biases the cost of firearms to be higher than it would have been in rational circumstances.
(*) Positive externalities of manufacturing and selling a gun include a deterrent effect on crime ("I would rather not try to rob that store, the clerk might be armed"), direct prevention of crime by armed citizens (https://en.wikipedia.org/wiki/Defensive_gun_use) or very strong positive effects of population being armed in extreme (rare) scenarios such as foreign invasion or the government turning tyrannical. I would suspect you wouldn't want to reward firearms manufacturers for all these positive outcomes (or at least it would be difficult, since these effects are very hard to quantify).
Replies from: ege-erdil
↑ comment by Ege Erdil (ege-erdil) · 2024-08-24T00:14:21.408Z · LW(p) · GW(p)
In general, I don't agree with arguments of the form "it's difficult to quantify the externalities so we shouldn't quantify anything and ignore all external effects" modulo concerns about public choice ("what if the policy pursued is not what you would recommend but some worse alternative?"), which are real and serious, though out of the scope of my argument. There's no reason a priori to suppose that any positive or negative effects not currently priced will be of the same order of magnitude.
If you think there are benefits to having a population where most people own guns that are not going to be captured by the incentives of individuals who purchase guns for their own purposes, it's better to try to estimate what that effect size is and then provide appropriate incentives to people who want to purchase guns. The US government pursues such policies in other domains: for example, one of the motivations that led to the Jones Act was the belief that the market would not assign sufficient value to the US maintaining a large domestic shipbuilding industry at peacetime.
In addition, I would dispute that some of these are in fact external effects by necessity. You can imagine some of them being internalized, e.g. by governments offering rewards to citizens who prevent crime (which gives an extra incentive to such people to purchase guns as it would make their interventions more effective). Even the crime prevention benefit could be internalized to a great extent by guns being sold together with a kind of proof-of-ownership that is hard to counterfeit, similar to the effect that open carry policies have in states which have them.
There's a more general public choice argument against this kind of policy, which is that governments lack the incentives to actually discover the correct magnitude of the externalities and then intervene in the appropriate way to maximize efficiency or welfare. I think that's true in general, and in the specific case of guns it might be a reason to not want the government to do anything at all, but in my opinion that argument becomes less compelling when the potential harms of a technology are large enough.
Replies from: jakub-supel
↑ comment by Jakub Supeł (jakub-supel) · 2024-08-28T17:00:23.174Z · LW(p) · GW(p)
> There's no reason a priori to suppose that any positive or negative effects not currently priced will be of the same order of magnitude.
There are some a posteriori reasons though - there are numerous studies that reject a causal link between the number of firearms and homicides, for example. This indicates that firearm manufacturers do not cause additional deaths, and therefore it would be wrong to only internalize the negative costs.
> If you think there are benefits to having a population where most people own guns that are not going to be captured by the incentives of individuals who purchase guns for their own purposes, it's better to try to estimate what that effect size is and then provide appropriate incentives to people who want to purchase guns.
That's not true. It is not better, because providing appropriate incentives is very likely impossible in this case, e.g.:
- due to irrational political reasons (people have irrational fear of guns and will oppose any efforts to incentivize their purchase, while supporting efforts to disincentivize it);
- due to the fact that a reward system for preventing crime can be easily gamed (cobra effect), not to mention the fact that it will probably be very costly to follow up on all cases when crime was prevented;
- due to the fact that positive outcomes of gun ownership are inherently hard to quantify, hence in reality they will not be quantified and will not be taken into account (McNamara fallacy).
comment by apotloge · 2024-08-23T20:15:48.869Z · LW(p) · GW(p)
Fun post. Presumably this points in the direction of looking at finance as a model of liability sharing. It looks highly likely that the most promising systems will be based on complex liability sharing. Perhaps something of the flavor:
- user / direct perpetrator of harm liable up to seizable net worth
- some industry-level insurance scheme, similar to the FDIC in finance, liable for any extra harm
- very high safety standards in law for vendors in the sector
- each episode of harm triggers a detailed investigation of the vendor involved. If compliance failures with the vendor safety regime are discovered, very large fines are imposed. This should preserve incentives for safety in the face of liability costs being socialized at the industry level.
comment by Logan Zoellner (logan-zoellner) · 2024-08-20T06:05:19.780Z · LW(p) · GW(p)
> Because the shooter obtains the gun illegally, they can then go and carry out a shooting without any regard for something such as "a requirement to buy insurance".
It doesn't seem like passing liability onto the shop fixes this. The shop, if anything, has less ability to force gun thieves to buy insurance than the law does.
Replies from: ege-erdil
↑ comment by Ege Erdil (ege-erdil) · 2024-08-20T13:21:22.872Z · LW(p) · GW(p)
The shop has the ability to invest more in security if they will be held liable for subsequent harm. They can also buy insurance themselves and pass on the cost to people who do purchase guns legally as an additional operating expense.
Replies from: logan-zoellner
↑ comment by Logan Zoellner (logan-zoellner) · 2024-08-20T18:11:15.197Z · LW(p) · GW(p)
How is the shop going to stop a gun from being stolen from a gun owner (who happened to buy their gun at the shop)? This seems much more the domain of the law. The police can arrest people who steal guns, shop owners cannot.
Replies from: ege-erdil
↑ comment by Ege Erdil (ege-erdil) · 2024-08-21T21:19:14.282Z · LW(p) · GW(p)
If the risk is sufficiently high, then the shops would simply not sell guns to anyone who seemed like they might let their guns be stolen, for example. Note that the shops would still be held liable for any harm that occurs as a result of any gun they have sold, irrespective of whether the buyer was also the perpetrator of the harm.
In practice, the risk of a gun sold to a person with a safe background being used in such an act is probably not that large, so such a measure doesn't need to be taken: the shop can just sell the guns at a somewhat inflated price to compensate for the risk of the gun being misused in some way, and this is efficient. If you were selling e.g. nuclear bombs instead of guns, then you would demand any prospective buyer meet a very high standard of safety before selling them anything, as the expected value of the damages in this case would be much higher.
The police arresting people who steal guns does nothing to fix the problem of shootings if the gun is used shortly after it is stolen, and police are not very good at tracking down stolen items to begin with, so I don't understand the point of your example.
Replies from: logan-zoellner
↑ comment by Logan Zoellner (logan-zoellner) · 2024-08-21T21:56:39.602Z · LW(p) · GW(p)
> If the risk is sufficiently high, then the shops would simply not sell guns to anyone who seemed like they might let their guns be stolen,
You do realize it is illegal to discriminate against customers on the basis of things like race, income, where they live, etc, right?
So, step 1 in this plan has to begin with "dismantle the last 60 years of civil rights legislation".
↑ comment by habryka (habryka4) · 2024-08-21T22:05:53.344Z · LW(p) · GW(p)
We have reasonably effective insurance regimes which successfully use non-protected information to make decisions like this. As a first proxy, you can totally use credit score to discriminate and that alone would probably catch a huge chunk of the variance.
comment by Oliver Sourbut · 2024-09-01T21:08:43.865Z · LW(p) · GW(p)
Good old Coase! Thanks for this excellent explainer.
> In contrast, if you think the relevant risks from AI look like people using their systems to do some small amounts of harm which are not particularly serious, you'll want to hold the individuals responsible for these harms liable and spare the companies.
Or (thanks to Coase), we could have two classes of harm, with big arbitrarily defined as, I don't know, say $500m which is a number I definitely just made up, and put liability for big harms on the big companies, while letting the classic societal apparatus for small harms tick over as usual? Surely only a power-crazed bureaucrat would suggest such a thing! (Of course this is prone to litigation over whether particular harms are one big harm or n smaller harms, or whether damages really were half a billion or actually $499m or whatever, but it's a good start.)
comment by Anthony Bailey (anthony-bailey) · 2024-08-23T21:08:56.037Z · LW(p) · GW(p)
> It's plausible even the big companies are judgment-proof (e.g. if billions of people die or the human species goes extinct) and this might need to be addressed by other forms of regulation
...or by a further twist on liability.
Gabriel Weil explored such an idea in https://axrp.net/episode/2024/04/17/episode-28-tort-law-for-ai-risk-gabriel-weil.html
The core is punitive damages for expected harms rather than those that manifested. When a non-fatal warning shot causes harm, then as well as suing for the damages that occurred, one assesses how much worse an outcome was plausible and foreseeable given the circumstances, and awards damages in terms of the risk taken. We escaped what looks like a 10% chance that thousands died? Pay 10% of those costs.
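A rough numeric sketch of how such an award might be computed (all figures illustrative, including the value placed on a statistical life):

```python
# Rough sketch (illustrative numbers only) of punitive damages for risk imposed:
# compensate the harm that actually occurred, plus the expected value of the
# worse outcomes that were foreseeably risked.

realized_harm = 2_000_000.0  # damages from the warning-shot incident that did happen

# (foreseeable probability of the worse outcome, cost had it occurred)
risked_outcomes = [
    (0.10, 3_000 * 10_000_000.0),  # ~10% chance thousands died, at an assumed $10M per life
]

punitive_component = sum(p * cost for p, cost in risked_outcomes)
total_award = realized_harm + punitive_component

print(f"compensatory damages:        {realized_harm:,.0f}")
print(f"risk-based punitive damages: {punitive_component:,.0f}")
print(f"total award:                 {total_award:,.0f}")
```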
Replies from: Archimedes
↑ comment by Archimedes · 2024-08-25T19:57:42.129Z · LW(p) · GW(p)
This is a cool idea in theory, but imagine how it would play out in reality when billions of dollars are at stake. Who decides the damage amount and the probabilities involved and how? Even if these were objectively computable and independent of metaethical uncertainty, the incentives for distorting them would be immense. This only seems feasible when damages and risks are well understood and there is consensus around an agreed-upon causal model.
Replies from: jmh
comment by Review Bot · 2024-08-22T02:20:20.794Z · LW(p) · GW(p)
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?