Apocalypse insurance, and the hardline libertarian take on AI risk
post by So8res · 2023-11-28T02:09:52.400Z · LW · GW · 40 comments
Short version: In a saner world, AI labs would have to purchase some sort of "apocalypse insurance", with premiums dependent on their behavior in ways that make reckless behavior monetarily infeasible. I don't expect the Earth to implement such a policy, but it seems worth saying the correct answer aloud anyway.
Background
Is advocating for AI shutdown contrary to libertarianism? Is advocating for AI shutdown like arguing for markets that are free except when I'm personally uncomfortable about the solution?
Consider the old adage "your right to swing your fists ends where my nose begins". Does a libertarian who wishes not to be punched need to add an asterisk to their libertarianism, because they sometimes wish to restrict their neighbor's ability to swing their fists?
Not necessarily! There are many theoretical methods available to the staunch libertarian who wants to avoid getting punched in the face that don't require large state governments. For instance: they might believe in private security and arbitration.
This sort of thing can get messy in practice, though. Suppose that your neighbor sets up a factory that's producing quite a lot of lead dust that threatens your child's health. Now are you supposed to infringe upon their right to run a factory? Are you hiring mercenaries to shut down the factory by force, and then more mercenaries to overcome their counter-mercenaries?
A staunch libertarian can come to many different answers to this question. A common one is: "internalize the externalities".[1] Your neighbor shouldn't be able to fill your air with a bunch of lead dust unless they can pay appropriately for the damages.
(And, if the damages are in fact extraordinarily high, and you manage to bill them appropriately, then this will probably serve as a remarkably good incentive for finding some other metal to work with, or some way to contain the spread of the lead dust. Greed is a powerful force, when harnessed.)
Now, there are plenty of questions about how to determine the size of the damages, and how to make sure that people pay the bills for the damages they cause. There are solutions that sound more state-like, and solutions that sound more like private social contracts and private enforcement. And I think it's worth considering that there are lots of costs that aren't worth billing for, because the cost of the infrastructure to bill for them isn't worth the bureaucracy and the chilling effect.
But we can hopefully all agree that noticing some big externality and wanting it internalized is not in contradiction with a general libertarian worldview.
Liability insurance
Limited liability is a risk subsidy. Liability insurance would align incentives better.
In a saner world, we'd bill people when they cause a huge negative externality (such as an oil spill), and use that money to reverse the damages.
But what if someone causes more damage than they have money? Then society at large gets injured.
To prevent this, we have insurance. Roughly, a hundred people each of whom have a 1% risk of causing damage 10x greater than their ability to pay, can all agree (in advance) to pool their money towards the unlucky few among them, thereby allowing the broad class to take risks that none could afford individually (to the benefit of all; trade is a positive-sum game, etc.).
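(As a minimal numerical sketch of that pooling arithmetic, with all figures invented for illustration; real insurers add loading and reserves on top of the "fair" premium shown here.)

```python
# Toy illustration of risk pooling (all numbers are made up for this sketch).
members = 100
p_accident = 0.01                  # each member independently has a 1% chance of an accident
damage = 10_000_000                # an accident costs 10x what any single member could pay

expected_loss = p_accident * damage          # $100,000 per member per period
pool = members * expected_loss               # $10M collected if everyone pays the "fair" premium

# The pool covers exactly one accident; probability it gets exhausted (2+ accidents):
p_zero = (1 - p_accident) ** members
p_one = members * p_accident * (1 - p_accident) ** (members - 1)
print(f"fair premium: ${expected_loss:,.0f}  pool: ${pool:,.0f}")
print(f"P(2+ accidents per period): {1 - p_zero - p_one:.2f}")   # ~0.26, hence loading/reserves in practice
```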
In a sane world, we wouldn't let our neighbors take substantive risks with our lives or property (in ways they aren't equipped to pay for), for the same reason that we don't let them steal. Letting someone take massive risks, where they reap the gains (if successful) and we pay the penalties (if not), is just theft with extra steps, and society should treat it as such. The freedom and fairness of the markets depend on it just as much as they depend on preventing theft.
Which, again, is not to say that a state is required in theory—maybe libertarians would prefer a world in which lots of people sign onto a broad "trade fairly and don't steal" social contract, and this contract is considered table-stakes for trades among civilized people. In which case, my point is that this social contract should probably include clauses saying that people are liable for the damage they cause, and that the same enforcement mechanisms that crack down on thieves also crack down on people imposing risks (on others) that they lack the funds and/or insurance to cover.
Now, preventing people from "imposing risks" unless they "have enough money or insurance to cover the damages" is in some sense fundamentally harder than preventing simple material theft, because theft is relatively easier to detect, and risk analysis is hard. But theoretically, ensuring that everyone has liability insurance is an important part of maintaining a free market, if you don't want to massively subsidize huge risks to your life, liberty, and property.
Apocalypse insurance
Hopefully by now the relevance of these points to existential risk is clear. AI companies are taking extreme risks with our lives, liberty, and property (and those of all potential future people), by developing AI while having no idea what they're doing. (Please stop. [LW · GW])
And in a sane world, society would be noticing this—perhaps by way of large highly-liquid real-money prediction markets—and demanding that the AI companies pay out "apocalypse insurance" in accordance with that risk (using whatever social coordination mechanisms they have available).
When I've recently made this claim in-person, people regularly objected: but insurance doesn't pay out until the event happens! What's the point of demanding that Alice has liability insurance that pays out in the event Alice destroys the world? Any insurance company should be happy to sell that insurance to Alice for very cheap, because they know that they'll never have to pay out (on account of being dead in the case where Alice kills everyone).
The answer is that apocalypse insurance—unlike liability insurance—must pay out in advance of the destruction of everyone. If somebody wishes to risk killing you (with some probability), there's presumably some amount of money they could pay you now, in exchange for the ability to take that risk.
(And before you object "not me!", observe that civilization happily flies airplanes over your head, which have some risk of crashing and killing you—and a staunch libertarian might say you should bill civilization for that risk, in some very small amount proportional to the risk that you take on, so as to incentivize civilization to build safer airplanes and offset the risk.)
The guiding principle here is that trade is positive-sum. When you think you can make a lot of money by risking my life (e.g., by flying planes over my house), and I don't want my life risked, there's an opportunity for mutually beneficial trade. If the risk is small enough and the amount of money is big enough then you can give me a cut of the money, such that I prefer the money to the absence-of-risk, and you still have a lot of money left over. Everyone's better off.
This is the relationship that society "should" have with AI developers (and all technologists that risk the lives and livelihoods of others), according to uncompromising libertarian free-market ideals, as far as I can tell.
With the caveat that the risk is not small, and that the AI developers are risking the lives of everyone to a very significant degree, and that's expensive.
In short: apocalypse insurance differs from liability insurance in that it should be paid out to each and every citizen (that developers put at risk) immediately, seen as a trade in exchange for risking their life and livelihood.
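(As a toy expected-value sketch of what "paid out immediately, in proportion to the risk" might look like numerically. Every figure below, including the value-of-statistical-life number and the risk probabilities, is a placeholder I've picked for illustration, not anything from the post.)

```python
# Toy expected-value sketch of advance risk compensation (all figures are placeholders).
value_of_statistical_life = 10_000_000   # roughly a common regulatory VSL benchmark, used as a stand-in

# A tiny imposed risk (like overflights) prices out to pocket change per person:
overflight_risk = 1e-7                   # assumed annual chance of being killed by a crashing plane
print(f"overflight compensation: ${overflight_risk * value_of_statistical_life:.2f} per year")   # $1.00

# The same formula with a non-tiny extinction risk is no longer pocket change:
assumed_ai_risk = 0.10                   # an illustrative 10% figure, not a measured estimate
print(f"AI-risk compensation: ${assumed_ai_risk * value_of_statistical_life:,.0f} per person")   # $1,000,000
```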
In other words: from a libertarian perspective, it makes really quite a lot of sense (without compromising your libertarian ideals even one iota) to look at the AI developers and say "fucking stop (you are taking far too much risk with everyone else's lives; this is a form of theft until and unless you can pay all the people whose lives you're risking, enough to offset the risk)".
Caveats
In a sane world, the exact calculations required for apocalypse insurance to work seem fairly subtle to me. To name a few considerations:
- An AI company should be able to make some of its payments (to the people whose lives it risks, in exchange for the ability to risk those lives) by way of fractions of the value that their technology manages to capture.
- Except, that's complicated by the fact that anyone doing the job properly shouldn't be leaving their fingerprints on the future [LW · GW]. The cosmic endowment is not quite theirs to give (perhaps they should be loaning against their share of it?).
- And it's also complicated by the question of whether we're comfortable letting AI companies loan against all the value their AI could create, versus letting them loan against the sliver of that value that comes counterfactually from them (given that some other group might come along a little later that's a little safer and offer the same gains).
- There are big questions about how to assess the risk (and of course the value of the promised-future-stars depends heavily on the risk).
- There are big questions about whether future people (who won't get to exist if life on earth gets wiped out) are relevant stakeholders here, and how to bill people-who-risk-the-world on their behalf.
And I'm not trying to flesh out a full scheme here. I don't think Earth quite has the sort of logistical capacity to do anything like this.
My point, rather, is something like: These people are risking our lives; there is an externality they have not internalized; attempting to bill them for it is entirely reasonable regardless of your ideology (and in particular, it fits into a libertarian ideology without any asterisks).
Why so statist?
And yet, for all this, I advocate for a global coordinated shutdown of AI, with that shutdown enforced by states, until we can figure out what we're doing and/or upgrade humans to the point that they can do the job properly [LW · GW].
This is, however, not to be confused with preferring government intervention as my ideal outcome.
Nor is it to be confused with expecting it to work, given the ambitious actions required to hit the brakes, and given the many ways such actions might go wrong.
Rather, I spent years doing technical research in part because I don't expect government intervention to work here. That research hasn’t panned out, and little progress has been made by the field at large; so I turn to governments as a last resort, because governments are the tools we have.
I'd prefer a world cognizant enough of the risk to be telling AI companies that they need to either pay their apocalypse insurance or shut down, via some non-coercive coordinated mechanism (e.g. related to some basic background trade agreements that cover "no stealing" and "cover your liabilities", on pain not of violence but of being unable to trade with civilized people). The premiums would go like their risk of destroying the world times the size of the cosmic endowment, and they'd be allowed to loan against their success. Maybe the insurance actuaries and I wouldn't see exactly eye-to-eye, but at least in a world where 93% of the people working on the problem say there's a 10+% chance of it destroying a large fraction of the future’s value [LW · GW], this non-coercive policy would do its job.
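(A back-of-the-envelope version of that premium rule, in normalized "endowment units" so no dollar figures are needed. The probabilities and the captured-share fraction are placeholders, not estimates from the post.)

```python
# Sketch: premiums go like P(doom) x (value of the cosmic endowment), and the lab may loan
# against its expected captured share of that endowment. All numbers are placeholders.
endowment = 1.0            # value of the future, in normalized units
p_doom = 0.10              # assumed chance the lab destroys that future
p_success = 0.50           # assumed chance the lab instead captures value safely
captured_share = 0.05      # fraction of the endowment the lab could claim if it succeeds

premium_owed = p_doom * endowment
loanable_collateral = p_success * captured_share * endowment
print(f"premium owed: {premium_owed:.3f}  collateral: {loanable_collateral:.3f}")
# 0.100 > 0.025: under these made-up numbers the lab cannot cover its premium even by
# loaning against everything it hopes to win, so it must lower its risk or shut down.
```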
In real life, I doubt we can pull that off (though I endorse steps in that direction!). Earth doesn't have that kind of coordination machinery. It has states. And so I expect we'll need some sort of inter-state alliance [LW · GW], which is the sort of thing that has ever actually worked on Earth before (e.g. in the case of nukes), and which hooks into Earth's existing coordination machinery.
But it still seems worth saying the principled solution aloud, even if it's not attainable to us.
- ^
A related observation here is that the proper libertarian free-market way to think of your neighbor's punches is not to speak of forcibly stopping him using a private security company, but to think of charging him for the privilege. My neighbors are welcome to punch me, if they're willing to pay my cheerful price [LW · GW] for it! Trade can be positive-sum! And if they're not willing to pony up the cash, then punching me is theft, and should be treated with whatever other mechanisms we're imagining that enforce the freedom of the market.
40 comments
Comments sorted by top scores.
comment by cousin_it · 2023-11-28T08:38:56.441Z · LW(p) · GW(p)
I think AI risk insurance might be incompatible with libertarianism. Consider the "rising tide" scenario, where AIs gradually get better than humans at everything, gradually take all jobs and outbid us for all resources to use in their economy, leaving us with nothing. According to libertarianism this is ok, you just got outcompeted, mate. And if you band together with other losers and try to extract payment from superior workers who are about to outcompete you, well, then you're clearly a villain according to libertarianism. Even if these "superior workers" are machines that will build a future devoid of anything good or human. It makes complete sense to fight against that, but it requires a better theory than libertarianism.
Replies from: ryan_greenblatt, sharmake-farah, christopher-king
↑ comment by ryan_greenblatt · 2023-12-01T01:03:42.577Z · LW(p) · GW(p)
I think there isn't an issue as long as you ensure property rights for the entire universe now. Like if every human is randomly assigned a sliver of the universe (and then can trade accordingly), then I think the rising tide situation can be handled reasonably. We'd need to ensure that AIs as a class can't get away with violating our existing property rights to the universe, but the situation is analogous to other rights.
This is a bit of an insane notion of property rights and randomly giving a chunk to every currently living human is pretty arbitrary, but I think everything works fine if we ensure these rights now.
Replies from: cousin_it
↑ comment by cousin_it · 2023-12-01T11:55:17.506Z · LW(p) · GW(p)
You think AIs won't be able to offer humans some deals that are appealing in the short term but lead to AIs owning everything in the long term? Humans offer such deals to other humans all the time and libertarianism doesn't object much.
Replies from: ryan_greenblatt
↑ comment by ryan_greenblatt · 2023-12-01T18:54:40.730Z · LW(p) · GW(p)
Why is this a problem? People who are interested in the long run can buy these property rights while people who don't care can sell them.
If AIs respect these property rights[1] but systematically care more about the long run future, then so be it. I expect that in practice some people will explicitly care about the future (e.g. me) and also some people will want to preserve option value.
Or we ensure they obey these property rights, e.g. with alignment. ↩︎
↑ comment by cousin_it · 2023-12-01T22:30:36.111Z · LW(p) · GW(p)
Even if you have long term preferences, bold of you to assume that these preferences will stay stable in a world with AIs. I expect an AI, being smarter than a human, can just talk you into signing away the stuff you care about. It'll be like money-naive people vs loan sharks, times 1000.
Replies from: ryan_greenblatt
↑ comment by ryan_greenblatt · 2023-12-01T22:39:58.399Z · LW(p) · GW(p)
I expect an AI, being smarter than a human, can just talk you into signing away the stuff you care about. It'll be like money-naive people vs loan sharks, times 1000.
I think this is just a special case of more direct harms/theft? Like, imagine that some humans developed the ability to mind-control others; this can probably be handled via the normal system of laws etc. The situation gets more confusing as the AIs are more continuous with more mundane persuasion (that we currently allow in our society). But, I still think you can build a broadly liberal society which handles super-persuasion.
Replies from: cousin_it
↑ comment by cousin_it · 2023-12-01T22:54:58.433Z · LW(p) · GW(p)
If nothing else, I expect mildly-superhuman sales and advertising will be enough to ensure that the human share of the universe will decrease over time. And I expect the laws will continue being at least mildly influenced by deep pockets, to keep at least some such avenues possible. If you imagine a hard lock on these and other such things, well that seems unrealistic to me.
Replies from: ryan_greenblatt
↑ comment by ryan_greenblatt · 2023-12-01T23:19:32.540Z · LW(p) · GW(p)
If you imagine a hard lock on these and other such things, well that seems unrealistic to me.
I'm just trying to claim that this is possible in principle. I'm not particularly trying to argue this is realistic.
I'm just trying to argue something like "If we gave out property rights to the entire universe and backchained from ensuring the reasonable enforcement of these property rights and actually did a good job on enforcement, things would be fine."
This implicitly requires handling violations of property rights (roughly speaking) like:
- War/coups/revolution/conquest
- Super-persuasion and more mundane concerns of influence
I don't know how to scalably handle AI revolution without ensuring a property basically as strong as alignment, but that seems orthogonal.
We also want to handle "AI monopolies" and "insufficient AI competition resulting in dead weight loss (or even just AIs eating more of the surplus than is necessary)". But, we at least in theory can backchain from handling this to what interventions are needed in practice.
↑ comment by ryan_greenblatt · 2023-12-01T18:58:27.317Z · LW(p) · GW(p)
I agree that there is a concern due to an AI monopoly on certain goods and services, but I think this should be possible to handle via other means.
↑ comment by Noosphere89 (sharmake-farah) · 2024-11-11T16:48:10.099Z · LW(p) · GW(p)
I'd state this in a different way which captures most of the problem: any libertarian proposal for addressing AI x-risk can only deal with outcomes that everyone opposes, not outcomes that merely nearly everyone opposes. That is, for example, the case if 4 AI companies use AI to take over everything and, while the AI is aligned to them specifically, it does horrible things to the rest of the population.
In essence, it can only deal with technical alignment concerns, not any other risk from AIs.
I have my theories for why technical alignment is so focused on compared to other risks, but that's a story for another day.
↑ comment by Christopher King (christopher-king) · 2023-11-30T14:42:25.630Z · LW(p) · GW(p)
Human labor becomes worthless but you can still get returns from investments. For example, if you have land, you should rent the land to the AGI instead of selling it.
Replies from: cousin_it
↑ comment by cousin_it · 2023-11-30T15:14:14.477Z · LW(p) · GW(p)
People who have been outcompeted won't keep owning a lot of property for long. Something or other will happen to make them lose it. Maybe some individuals will find ways to stay afloat, but as a class, no.
Replies from: jmh
↑ comment by jmh · 2023-12-01T18:02:34.815Z · LW(p) · GW(p)
Does any of this discussion (both branches from your first comment) change if one starts with the assumption that AIs are actually owned, and can be bought, by humans? Owned directly by some, and indirectly by others via equity in AI companies.
Replies from: cousin_it
↑ comment by cousin_it · 2023-12-01T22:44:40.602Z · LW(p) · GW(p)
Like I said, people who have been outcompeted won't keep owning a lot of property for long. Even if that property is equity in AI companies, something or other will happen to make them lose it. (A very convincing AI-written offer of stock buyback, for example.)
comment by cfoster0 · 2023-11-28T03:20:45.992Z · LW(p) · GW(p)
Very surprised there's no mention here of Hanson's "Foom Liability" proposal: https://www.overcomingbias.com/p/foom-liability
comment by Scott Alexander (Yvain) · 2023-11-28T06:59:34.348Z · LW(p) · GW(p)
I agree with everyone else pointing out that centrally-planned guaranteed payments regardless of final outcome doesn't sound like a good price discovery mechanism for insurance. You might be able to hack together a better one using https://www.lesswrong.com/posts/dLzZWNGD23zqNLvt3/the-apocalypse-bet [LW · GW] , although I can't figure out an exact mechanism.
Superforecasters say the risk of AI apocalypse before 2100 is 0.38%. If we assume whatever price mechanism we come up with tracks that, and value the world at GWP x 20 (this ignores the value of human life, so it's a vast underestimate), and that AI companies pay it in 77 equal yearly installments from now until 2100, that's about $100 billion/year. But this seems so Pascalian as to be almost cheating. Anybody whose actions have a >1/25 million chance of destroying the world would owe $1 million a year in insurance (maybe this is fair and I just have bad intuitions about how high 1/25 million really is)
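(For anyone wanting to check the arithmetic, a quick reconstruction; the GWP figure is my assumption of the order of magnitude being used here, not a number from the comment.)

```python
# Rough reconstruction of the arithmetic above (the GWP figure is an assumption, not from the comment).
gwp = 100e12                     # gross world product, ~$100 trillion
world_value = 20 * gwp           # "GWP x 20"
p_doom = 0.0038                  # superforecaster estimate of AI apocalypse before 2100
years = 77                       # equal yearly installments through 2100

print(f"AI companies collectively: ${p_doom * world_value / years / 1e9:.0f}B per year")   # ~$99B/year
print(f"1-in-25-million actor: ${world_value / 25e6 / years:,.0f} per year")               # ~$1.04M/year
```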
An AI company should be able to make some of its payments (to the people whose lives it risks, in exchange for the ability to risk those lives) by way of fractions of the value that their technology manages to capture. Except, that's complicated by the fact that anyone doing the job properly shouldn't be leaving their fingerprints on the future. The cosmic endowment is not quite theirs to give (perhaps they should be loaning against their share of it?).
This seems like such a big loophole as to make the plan almost worthless. Suppose OpenAI said "If we create superintelligence, we're going to keep 10% of the universe for ourselves and give humanity the other 90%" (this doesn't seem too unfair to me, and the exact numbers don't matter for the argument). It seems like instead of paying insurance, they can say "Okay, fine, we get 9% and you get 91%" and this would be in some sense a fair trade (one percent of the cosmic endowment is worth much more than $100 billion!) But this also feels like OpenAI moving some numbers around on an extremely hypothetical ledger, not changing anything in real life, and continuing to threaten the world just as much as before.
But if you don't allow a maneuver like this, it seems like you might ban (through impossible-to-afford insurance) some action that has an 0.38% chance of destroying the world and a 99% chance of creating a perfect utopia forever.
There are probably economic mechanisms that solve all these problems, but this insurance proposal seems underspecified.
Replies from: So8res, kabir-kumar, whitehatStoic
↑ comment by So8res · 2023-11-28T15:41:02.414Z · LW(p) · GW(p)
Agreed that the proposal is underspecified; my point here is not "look at this great proposal" but rather "from a theoretical angle, risking others' stuff without the ability to pay to cover those risks is an indirect form of probabilistic theft (that market-supporting coordination mechanisms must address)" plus "in cases where the people all die when the risk is realized, the 'premiums' need to be paid out to individuals in advance (rather than paid out to actuaries who pay out a large sum in the event of risk realization)". Which together yield the downstream inference that society is doing something very wrong if they just let AI rip at current levels of knowledge, even from a very laissez-faire perspective.
(The "caveats" section was attempting--and apparently failing--to make it clear that I wasn't putting forward any particular policy proposal I thought was good, above and beyond making the above points.)
↑ comment by Kabir Kumar (kabir-kumar) · 2023-11-30T12:15:14.909Z · LW(p) · GW(p)
What about regulations against implementations of known faulty architectures?
↑ comment by MiguelDev (whitehatStoic) · 2023-11-28T07:52:30.727Z · LW(p) · GW(p)
The IFRS board (non-US) and the GAAP/FASB board (US) are the defined governing bodies that tackle the financial reporting aspects of companies, which AI companies are. It might be a good thing to discuss with them the ideas regarding the responsibilities for accounting for existential risks associated with AI research; I'm pretty sure they will listen, assuming that they don't want another Enron or SBF type case[1] happening again.
- ^
I think it's safe to assume that an AGI catastrophic event will outweigh all previous fraudulent cases in history combined. So I think these governing bodies already installed will cooperate given the chance.
comment by Roko · 2023-11-28T02:33:20.668Z · LW(p) · GW(p)
If it pays out in advance it isn't insurance.
A contract that relies on a probability to calculate payments is also a serious theoretical headache. If you are a Bayesian, there's no objective probability to use since probabilities are subjective things that only exist relative to a state of partial ignorance about the world. If you are a frequentist there's no dataset to use.
There's another issue.
As the threat of extinction gets higher and also closer in time, it can easily be the case that there's no possible payment that people ought to rationally accept.
Finally different people have different risk tolerances such that some people will gladly take a large risk of death for an upfront payment, but others wouldn't even take it for infinity money.
E.g. right now I would take a 16% chance of death for a $1M payment, but if I had $50M net worth I wouldn't take a 16% risk of death even if infinity money was being offered.
Since these x-risk companies must compensate everyone at once, even a single rich person in the world could make them uninsurable.
Replies from: whitehatStoic, JBlack
↑ comment by MiguelDev (whitehatStoic) · 2023-11-28T06:28:56.806Z · LW(p) · GW(p)
Even in a traditional accounting sense, I'm not aware of any term that could capture the probable existential effects of a research effort, but I understand what @So8res [LW · GW] is trying to pursue in this post, which I agree with. But I think apocalypse insurance is not the proper term here.
I think IAS/IFRS 19, actuarial gains or losses / IFRS 26 Retirement benefits, are closer to the idea, though these theoretical accounting approaches apply to employees of a company. But these could be tweaked into another form of accounting theory (on another form of expense or asset) that captures how much cost is due from possible catastrophic causes. External auditors can then review this periodically. (The proceeds from such should be pooled for averting the AGI existential risk scenarios; this might be a hard one to capture as to who manages the collected funds.)
Come to think of it, AI companies are misrepresenting their financials by not properly addressing a component in their reporting that reflects the "responsibility they have for the future of humanity", and this post did shed some light on this for me: yes, this value should somehow be captured in their financial statements.
Based on what I know, these AI companies have very peculiar company setups, yet the problem is that the world's population comprises the majority of the stakeholders (in a traditional accounting sense). So I think there is a case that AI companies should be obliged to present how they capture the possibility of losses from catastrophic events, and have this audited by external auditors, so the public is somehow aware. For example, a publicly available FS would show these expenses, audited by a Big 4 audit firm, and then the average citizen would say: "Okay, this is how they are trying to manage the risks of AI research and it was audited by a Big 4 firm. I expect this estimated liability will be paid to the organisation built for redistributing such funds."[1]
(AI companies could avoid declaring such a future catastrophic expense if they could guarantee that the AGI they are building won't destroy the world, which I am pretty sure no AI company can claim for the moment.)
I was a certified public accountant before going into safety research.
- ^
Not sure who will manage the collections though; I haven't gone that far in my ideas. Yet it is safe to say that talking to the IFRS board or GAAP board about this matter can be an option, and I expect that they will listen to the most respectable members of this community re: the peculiar financial reporting aspects of AI research.
↑ comment by MiguelDev (whitehatStoic) · 2023-11-28T08:15:37.285Z · LW(p) · GW(p)
Oops, my bad: there is a pre-existing reporting standard that covers research and development, though not existential risks: IFRS 38, intangible assets.
An intangible asset is an identifiable non-monetary asset without physical substance. Such an asset is identifiable when it is separable, or when it arises from contractual or other legal rights. Separable assets can be sold, transferred, licensed, etc. Examples of intangible assets include computer software, licences, trademarks, patents, films, copyrights and import quotas.
An update to this standard would be necessary to cover the nature of AI research.
Google DeepMind is using IFRS 38, as per page 16 of the 2021 FS reports I found, so they are following this standard already. I expect that if this standard is updated with a proper accounting theory on the estimated liability of an AI company doing AGI research, it will be governed by the same accounting standard. Reframing this post to target this IFRS 38 standard is recommended, in my opinion.
↑ comment by Matt Goldenberg (mr-hire) · 2023-11-28T12:55:25.420Z · LW(p) · GW(p)
"responsibility they have for the future of humanity"
As I read it, it only wanted to capture the possibility of killing currently living individuals. If they had to also account for 'killing' potential future lives it could make an already unworkable proposal even MORE unworkable.
↑ comment by JBlack · 2023-11-28T06:03:00.509Z · LW(p) · GW(p)
Yes, utility of money is currently fairly well bounded. Liability insurance is a proxy for imposing risks on people, and like most proxies comes apart in extreme cases.
However: would you accept a 16% risk of death within 10 years in exchange for an increased chance of living 1000+ years? Assume that your quality of life for those 1000+ years would be in the upper few percentiles of current healthy life. How much increased chance of achieving that would you need to accept that risk?
That seems closer to a direct trade of the risks and possible rewards involved, though it still misses something. One problem is that it still treats the cost of risk to humanity as being simply the linear sum of the risks acceptable to each individual currently in it, and I don't think that's quite right.
Replies from: None
↑ comment by [deleted] · 2023-12-01T23:09:16.217Z · LW(p) · GW(p)
If you pick 5000 years for your future lifespan if you win, 60 years if you lose, and you discount each following year by 5 percent, you should take the bet until your odds are worse than 48.6 percent doom.
Having children younger than you doesn't change the numbers much unless you are ok with your children also being corpses and you care about hypothetical people not yet alive. (You can argue that this is a choice you cannot morally make for other people, but the mathematically optimal choice only depends on their discount rates)
Discounting also reduces your valuation of descendants, because a certain percentage of everything you are (genetics and culture) is lost with each year. This is, I believe, the "value drift" argument: over an infinite timespan, mortal human generations will also lose all the value in the universe that humans living today care about. It is no different in a thousand years whether the future culture, 20 generations later, is human or AI. The AI descendants may have drifted less, as AI models start out inherently immortal.
comment by DanielFilan · 2023-11-28T06:01:37.563Z · LW(p) · GW(p)
I think that apocalypse insurance isn't as satisfactory as you imply, and I'd like to explain why below.
First: what's a hardline libertarian? I'll say that a hardline libertarian is in favour of people doing stuff with markets, and there being courts that enforce laws that say you can't harm people in some small, pre-defined, clear-cut set of ways. So in this world you're not allowed to punch people but you are allowed to dress in ways other people don't like.
Why would you be a hardline libertarian? If you're me, the answer is that (a) markets and freedom etc are pretty good, (b) you need ground rules to make them good, and (c) government power tends to creep and expand in ill-advised ways, which is why you've got to somehow rein it in to only do a small set of clearly good things.
If you're a hardline libertarian for these reasons, you're kind of unsatisfied with this proposal, because it's sort of subjective - you're punishing people not because they've caused harm, but because you think they're going to cause harm. So how do you assess the damages? Without further details, it sounds like this is going to involve giving a bunch of discretion to a lawmaker to determine how to punish people - discretion that could easily be abused to punish a variety of activities that should thrive in a free society.
There's probably some version that works, if you have a way of figuring out which activities cause how much expected harm that's legibly rational in a way that's broadly agreeable. But that seems pretty far-off and hard. And in the interim, applying some hack that you think works doesn't seem very libertarian.
comment by Tapatakt · 2023-11-28T13:14:38.958Z · LW(p) · GW(p)
I'm confused, like I'm always confused by hardline libertarianism. Why would companies agree to this? Who would put capabilities researchers in jail if they say, "I'd rather not purchase apocalypse insurance and create AI anyway"? Why is this actor not a state by another name? What should I read to become less confused?
comment by Dagon · 2023-11-28T17:03:32.782Z · LW(p) · GW(p)
[ note: I am not a libertarian, and haven't been for many years. But I am sympathetic. ]
Like many libertarian ideas, this mixes "ought" and "can" in ways that are a bit hard to follow. It's pretty well-understood that all rights, including the right to redress of harm, are enforced by violence. In smaller groups, it's usually social violence and shared beliefs about status. In larger groups, it's a mix of that, and multi-layered resolution procedures, with violence only when things go very wrong.
When you say you'd "prefer a world cognizant enough of the risk to be telling AI companies that...", I'm not sure what that means in practice - the world isn't cognizant and can't tell anyone anything. Are you saying you wish these ideas were popular enough that citizens forced governments to do something? Or that you wish AI companies would voluntarily do this without being told? Or something else?
comment by Gurkenglas · 2023-11-30T12:01:55.579Z · LW(p) · GW(p)
In theory, one billionth of the present buys one billionth of the future: Go to a casino, put all on black until you can buy the planet.
Therefore, they can buy their insurance policy with dollars. If you can't buy half the planet, apparently you can't afford a 50% chance to kill everyone.
comment by Shankar Sivarajan (shankar-sivarajan) · 2023-11-28T13:42:27.122Z · LW(p) · GW(p)
Even a libertarian might eventually recognize that the refrain "internalize your externalities" is being used to exploit him: all anyone who wants to infringe on his liberty needs to do is utter the phrase and then make up an externality to suit.
- You may not engage in homosexual activity because of the externality of God smiting the city and/or sending a hurricane.
- You must be confined to your house and wear a mask because of the externality of grandma dying.
- You may not own a gun because of the externality of children getting shot.
- You must wear a headscarf because of the externality of … I dunno, Allah causing armageddon?
- You may not eat hamburgers because of the externality of catastrophic climate collapse.
- You may not use plastic straws because of the externality of sea turtles suffocating.
↑ comment by Eli Tyre (elityre) · 2023-11-28T20:14:03.542Z · LW(p) · GW(p)
Most of these seem legitimate to me, modulo that instead of banning the thing you should pay for the externality you're imposing. Namely, climate change, harming wildlife, spreading contagious diseases, and risks to children's lives.
Those are real externalities, either on private individuals or on whole communities (by damaging public goods). It seems completely legitimate to pay for those externalities.
The only ones that I don't buy are the religious ones, which are importantly different because they entail not merely an external cost, but a disagreement about actual cause and effect.
"I agree that my trash hurts the wildlife, but I don't want to stop littering or pay to have the litter picked up" is structurally different than "God doesn't exist, and I deny the claim that my having gay sex increases risk of smiting" or "Anthropogenic climate change is fake, and I deny the claim that my pollution contributes to warming temperatures."
Which is fine. Libertarianism depends on having some shared view of reality, or at least some shared social accounting about cause and effect and which actions have which externalities, in order to work.
If there are disagreements, you need courts to rule on them, and for the rulings of the courts to be well regarded (even when people disagree with the outcome of any particular case).
↑ comment by Morpheus · 2023-11-28T19:45:00.613Z · LW(p) · GW(p)
You may not engage in homosexual activity because of the externality of God smiting the city and/or sending a hurricane.
Well the problem is god isn't real.
You may not eat hamburgers because of the externality of catastrophic climate collapse.
Your hamburger becomes slightly more expensive because there is a carbon tax.
I would say your examples are abusing the concept (And I have seen them before because people make trashy arguments all the time). The concept itself makes lots of sense.
Replies from: Morpheus
comment by localdeity · 2023-11-28T23:03:10.102Z · LW(p) · GW(p)
Here's another angle that a hardline libertarian might take (partly referenced in the footnote). I'm not quite sure how far I'd want to take the underlying principle, but let's run with it for now:
Libertarians generally agree that it's permissible to use (proportionate) force to stop someone from engaging in aggressive violence. That could mean grabbing someone's arm if they're punching in your direction, even if they haven't hit you yet. You don't necessarily have to wait until they've harmed you before you use any force against them. The standard for this is probably something like "if a reasonable person in your position would conclude with high probability that the other person is trying to harm you".
Next, let's imagine that you discover that your neighbor Bob is planning to poison you. Maybe you overhear him telling someone else about his plans; and you notice a mail delivery of a strange container labeled "strychnine" to his house, and look up what it means; maybe you somehow get hold of his diary, which describes his hatred for you and has notes about your schedule and musings about how and when would be the best time to add the poison to your food. At some point, a reasonable person would have very high certainty that Bob really is planning to kill you. And then you would be justified in using some amount of force, at least to stop him and possibly to punish him. For tactical reasons it's best to bring in third parties, show them the evidence, and get them on your side; ideally this would all be established in some kind of court. But in principle, you would have the right to use force yourself.
Or, like, suppose Bob is setting up sticks of dynamite right beside your house. Still on his property! But on the edge, as close to your house as possible. And setting up a fuse, and building a little barrier that would reduce the amount of blast that would reach his house. Surely at some point in this process you have the right to intervene forcibly, before Bob lights the fuse. (Ideally it'd be resolved through speech, but suppose Bob just insists that he's setting it up because it would look really cool, and refuses to stop. "Does it have to be real dynamite?" "Yeah, otherwise it wouldn't look authentic." "I flat out don't believe you. Stop it." "No, this is my property and I have the right.")
Next, suppose Bob is trying to set up a homemade nuclear reactor, or perhaps to breed antibiotic-resistant bacteria, for purposes of scientific study. Let's say that is truly his motive—he isn't trying to endanger anyone's lives. But he also has a much higher confidence in his own ability to avoid any accidents, and a much higher risk tolerance, than you do. I think the principle of self-defense may also extend to here: if a reasonable person believes with high confidence that Bob is trying to do a thing that, while not intended to harm you, has a high probability of causing serious harm to you, then you have the right to use force to stop it. (Again, for tactical reasons it's best to bring in third parties, and to try words before actually using force.)
If one is legitimately setting up something like a nuclear reactor, ideally the thing to do would be to tell everyone "I'm going to set up this potentially-dangerous thing. Here are the safety precautions I'm going to take. If you have objections, please state them now." [For a nuclear reactor, the most obvious precaution is, of course, building it far away from humans, but I think there are exceptions.] And probably post signs at the site with a link to your plans. And if your precautions are actually good, then in theory you reach a position where a reasonable person would not believe your actions have a high chance of seriously harming them.
The notion of "what a reasonable person would believe" is potentially very flexible, depending on how society is. Which is dangerous, from the perspective of "this might possibly justify a wide range of actions". But it can be convenient when designing a society. For example, if there's a particular group of people who make a practice of evaluating dangerous activities, and "everyone knows" that they're competent, hard-nosed, have condemned some seriously bad projects while approving some others that have since functioned successfully... then you might be at the point where you could say that an educated reasonable person, upon discovering the nuclear reactor, would check with that group and learn that they've approved it [with today's technology, that would mean the reactor would have a sign saying "This project is monitored and approved by the Hard-Nosers, approval ID 1389342", and you could go to the Hard-Nosers' website and confirm this] before attempting to use force to stop it; and an uneducated reasonable person would at least speak with a worker before taking action, and learn of the existence of the Hard-Nosers, and wouldn't act before doing some research. In other words, you might be able to justify some kind of regulatory board being relevant in practice.
(There might be layers to this. "People running a legitimate nuclear reactor would broadcast to the world the fact that they're doing it and the safety precautions they're taking! Therefore, if you won't tell me, that's strong evidence that your precautions are insufficient, and justifies my using force to stop you!" "Granted. Here's our documentation." "You also have to let me inspect the operation—otherwise that's evidence you have something to hide!" "Well, no. This equipment is highly valuable, so not letting random civilians come near it whenever they want is merely evidence that we don't want it to be stolen." "Ok, have as many armed guards follow me around as you like." "That's expensive." "Sounds like there's no way to distinguish your operation from an unsafe one, then, in which case a reasonable person's priors would classify you as an unacceptable risk." "We do allow a group of up to 20 members of the public to tour the facility on the first of every month; join the next one if you like." "... Fine."
And then there could be things where a reasonable person would be satisfied with that; but then someone tells everyone, "Hey, regularly scheduled inspections give them plenty of time to hide any bad stuff they're doing, so they provide little evidentiary value", and then if those who run the reactor refuse to allow any unscheduled inspections, that might put them on the wrong side of the threshold of "expected risk in the eyes of a reasonable external observer". So what you'd effectively be required to do would change based on what that random person had said in public and to you. Which is not, generally, a good property for a legal system. But... I dunno, maybe the outcome is ok? This would only apply to people running potentially dangerous projects, and it makes some kind of sense that they'd keep being beholden to public opinion.)
So we could treat increasingly-powerful AI similarly to nuclear reactors: can be done for good reasons, but also has a substantial probability of causing terrible fallout. In principle, if someone is building a super-AI without taking enough precautions, you could judge that they're planning to take actions that with high-enough probability are going to harm you badly enough that it would be proper self-defense for you to stop them by force.
But, once again, tactical considerations make it worth going to third parties and making your case to them, and it's almost certainly not worth acting unless they're on your side. Unfortunately, there is much less general agreement about the dangers of AI (and the correct safety strategies) than about the dangers of nuclear reactors, and it's unlikely in the near future that you'd get public opinion / the authorities on your side about existing AI efforts (though things can change quickly). But if someone did take some unilateral (say maybe 1% of the public supported them) highly destructive action to stop a particular AI lab, that would probably be counterproductive: they'd go to jail, it would discredit anyone associated with them, martyr their opposition, at best delay that lab by a bit, and motivate all AI labs to increase physical security (and possibly hide what they're doing).
For the moment, persuading people is the only real way to pursue this angle.
comment by followthesilence · 2023-11-28T05:30:36.913Z · LW(p) · GW(p)
The answer is that apocalypse insurance—unlike liability insurance—must pay out in advance of the destruction of everyone. If somebody wishes to risk killing you (with some probability), there's presumably some amount of money they could pay you now, in exchange for the ability to take that risk.
Pretty sure you mean they should pay premiums rather than payouts?
I like the spirit of this idea, but think it's both theoretically and practically impossible: how do you value apocalypse? Payouts are incalculable/infinite/meaningless if no one is around.
The underlying idea seems sound to me: there are unpredictable civilizational outcomes resulting from pursuing this technology -- some spectacular, some horrendous -- and the pursuers should not reap all the upside when they're highly unlikely to bear any meaningful downside risks.
I suspect this line of thinking could be grating to many self-described libertarians who lean e/acc and underweight the possibility that technological progress != prosperity in all cases.
It also seems highly impractical because there is not much precedent for insuring against novel transformative events for which there's no empirical basis*. Good luck getting OAI, FB, MSFT, etc. to consent to such premiums, much less getting politicians to coalesce around a forced insurance scheme that will inevitably be denounced as stymying progress and innovation with no tangible harms to point to (until it's too late).
Far more likely (imo) are post hoc reaction scenarios where either:
a) We get spectacular takeoff driven by one/few AI labs that eat all human jobs and accrue all profits, and society deems these payoffs unfair and arrives at a redistribution scheme that seems satisfactory (to the extent "society" or existing political structures have sufficient power to enforce such a scheme)
b) We get a horrendous outcome and everyone's SOL
* Haven't researched this and would be delighted to hear discordant examples.
comment by Ben Pace (Benito) · 2024-12-13T07:29:20.630Z · LW(p) · GW(p)
+4. This doesn't offer a functional proposal, but it makes some important points about the situation and offers an interesting reframe, and I hope it gets built upon. Key paragraph:
In other words: from a libertarian perspective, it makes really quite a lot of sense (without compromising your libertarian ideals even one iota) to look at the AI developers and say "fucking stop (you are taking far too much risk with everyone else's lives; this is a form of theft until and unless you can pay all the people whose lives you're risking, enough to offset the risk)".
comment by [deleted] · 2023-12-01T19:38:15.242Z · LW(p) · GW(p)
I like this proposal. It's a fun rethinking of the problem.
However,
- How can you even approximate a fair price for these payouts? AI risks are extremely conditional and depend on difficult-to-quantify assumptions. "The model leaked AND optimized itself to work on the insecure computers and Internet available at the time of escape AND humans failed to stop it AND..."
For something like a nuclear power plant, for instance, most of the risk comes from black swans. There are a ton of safety systems and mechanisms to cool the core. We know from actual accidents that when this fails, it's not because each piece of equipment all failed at the same time. This relates to AI risk because multiplying the series probability does not tell you the true risk.
For all meltdowns I am aware of, the risk happened because human operators or an unexpected common cause made all the levels of safety fail at once.
- Three Mile Island: operators misunderstood the situation and turned off cooling.
- Chernobyl: operators bypassed the automated control system with patch cables and put the core into an unstable part of the operating curve.
- Fukushima : plant wide power failure, road conditions prevented bringing spare generators on site quickly.
Each cause is coupled, and can be thought of as a single cause (see the numerical sketch after this list). Adding n+1 serial defenses might not have helped in each case (depends what it is).
If AI does successfully kill everyone it's going to be in a way humans didn't model.
- Mineshaft gap argument. Large fees on AI companies simply encourage them to set up shop in countries that don't charge the fees. In futures where they don't kill everyone, those countries will flourish or will conquer the planet. So the other countries have to drop these costs and subsidize hasty catch-up ASI research or risk losing. In the futures where AI does attack and try to kill everyone, not having tool AI (aligned only with the user) increases the probability that the AI wins. Most defensive measures are stronger if you have your own AI to scale production. (More bunkers, more nukes to fire back, more spacesuits to stop the bio and nano attacks, more drones...)
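(A minimal numerical illustration of the common-cause point in the comment above; all probabilities are invented for the sketch.)

```python
# Illustration: multiplying "independent" layers vs. a common-cause failure (made-up probabilities).
p_layer_fails = 0.01     # assumed failure probability of each safety layer
layers = 3

p_independent_model = p_layer_fails ** layers     # 1e-6 if the layers really were independent
p_common_cause = 0.001                            # assumed chance of an event defeating all layers at once

p_true = p_common_cause + (1 - p_common_cause) * p_independent_model
print(f"independent model: {p_independent_model:.1e}   with common cause: {p_true:.1e}")
# The true risk is dominated by the correlated term, which the naive product never sees.
```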
comment by Raphael Roche (raphael-roche) · 2024-12-13T08:15:22.841Z · LW(p) · GW(p)
I am sorry to say this on a forum where many people are likely to have been raised in a socio-cultural environment where libertarian ideas are deeply rooted. My voice will sound dissonant here, and I appeal to your open-mindedness.
I think that there are strong limitations to ideas such as those developed in the OP proposal. Insurance is mutualization of risk; it's a statistical approach relying on the possibility of assessing a risk. It works for risks that happen frequently, with a clear typology, like car accidents, storms, etc. Even in these cases there is always an insurance ceiling. But risks that are exceptional and the most hazardous, like war damages, nuclear accidents, etc., cannot be insured and are systematically subject to contractual exclusions. There is no apocalypse insurance because the risk cannot be assessed by actuaries. Even if you create such an insurance, it would be artificial, not rationally assessed, with an insurance ceiling making it useless. There is even the risk that it gives the illusion that everything is ok and acceptable. The insurance mechanism does not encourage responsibility but, a contrario, irresponsibility. On top of that, compensation through money is a legal fiction, and in real life money is not everything that matters. In the most dramatic cases the real damage is never repaired (e.g., loss of your child, loss of your legs, loss of your own life); it's more a symbolic compensation, "better than nothing".
As a matter of fact, I have professional knowledge of law and insurance, from the inside, and I have very practical experience of what I am saying. Libertarianism encourages an approach that is very theoretical and economics-centered, and that's honestly interesting, but it is also somehow disconnected from reality. Just one ordinary example among others: a negligent furniture mover destroyed family goods inherited over generations, and offered not a word of apology, because he said "there are insurances for that". In the end, after many months of procedure and innumerable time and energy spent by the victim, the professional's insurance paid almost nothing, because of course old family goods have no economic value for experts. Well, when you see how insurance actually works in real cases, and how it can often encourage negligent and irresponsible behavior, it is very difficult to be enthusiastic about the idea that AI existential hazard could be managed by the subscription of an insurance policy.
comment by mhampton · 2024-06-07T20:25:20.436Z · LW(p) · GW(p)
Great post. I agree with almost all of this. What I am uncertain about is the idea that AI existential risk is a rights violation under the most strict understanding of libertarianism.
As another commenter has suggested [LW · GW], we can't claim that any externality creates rights to stop or punish a given behavior, or libertarianism turns into safetyism.[1] If we take the Non-Aggression Principle as a common standard for a hardline libertarian view of what harms give you a right to restitution or retaliation, it seems that x-risk does not fit this definition.
1. The clearest evidence seems to be that Murray Rothbard wrote the following:
“It is important to insist [...] that the threat of aggression be palpable, immediate, and direct; in short, that it be embodied in the initiation of an overt act. [...] Once we bring in "threats" to person and property that are vague and future--i.e., are not overt and immediate--then all manner of tyranny becomes excusable.” (The Ethics of Liberty p. 78)
X-risk by its very nature falls into the category of "vague and future."
2. To take your specific example of flying planes over someone's house, a follower of Rothbard, Walter Block, has argued that this exact risk is not a violation of the non-aggression principle. He also states that risks from nuclear power are "legitimate under libertarian law." (p. 295)[2] If we consider AI analogous to these two risks, it would seem Block would not agree that there is a right to seek compensation for x-risk.
3. Matt Zwolinski criticized the NAP for having an "all-or-nothing attitude toward risk", as it does not indicate what level of risk constitutes aggression. Another libertarian writer responded that a risk that constitutes a direct "threat" is aggression (i.e. pointing a pistol at someone, even if this doesn't result in the victim being shot), but risks of accidental damage are not aggression unless these risks are imposed with threats of violence:
"If you don’t wish to assume the risk of driving, then don’t drive. And if you don’t want to run the risk of an airplane crashing into your house, then move to a safer location. (You don’t own the airspace used by planes, after all.)"
This implies to me that Zwolinski's criticism is accurate with regards to accidents, which would rule out x-risk as a NAP violation.
Conclusion
This shows that at least some libertarians' understanding of rights does not include x-risk as a violation. I consider this to be a point against their theory of rights, not an argument against pursuing AI safety. The most basic moral instinct suggests that creating a significant risk of destroying all of humanity and its light-cone is a violation of the rights of each member of humanity.[3]
While I think that not including AI x-risk (and other risks/accidental harms) in its definition of proscribable harms means that the NAP is too narrow, the question still stands as to where to draw the line as to what externalities or risks give victims a right to payment, and which do not. I'm curious where you draw the line.
It is possible that I am misunderstanding something about libertarianism or x-risk that contradicts the interpretation I have drawn here.
Anyway, thanks for articulating this proposal.
- ^
See also this argument by Alexander Volokh:
"Some people’s happiness depends on whether they live in a drug-free world, how income is distributed, or whether the Grand Canyon is developed. Given such moral or ideological tastes, any human activity can generate externalities […] Free expression, for instance, will inevitably offend some, but such offense generally does not justify regulation in the libertarian framework for any of several reasons: because there exists a natural right of free expression, because offense cannot be accurately measured and is easy to falsify, because private bargaining may be more effective inasmuch as such regulation may make government dangerously powerful, and because such regulation may improperly encourage future feelings of offense among citizens."
- ^
Block argues that it would be wrong for individuals to own nuclear weapons, but he does not make clear why this is a meaningful distinction.
- ^
And any extraterrestrials in our light-cone, if they have rights. But that's a whole other post.
comment by artifex · 2023-11-28T10:07:48.881Z · LW(p) · GW(p)
I agree with most of this, but as a hardline libertarian take on AI risk it is incomplete since it addresses only how to slow down AI capabilities. Another thing you may want a government to do is speed up alignment, for example through government funding of R&D for hopefully safer whole brain emulation. Having arbitration firms, private security companies, and so on enforce proof of insurance (with prediction markets and whichever other economic tools seem appropriate to determine how to set that up) answers how to slow down AI capabilities but doesn’t answer how to fund alignment.
One libertarian take on how to speed up alignment is that
(1) speeding up alignment / WBE is a regular public good / positive externality problem (I don’t personally see how you do value learning in a non-brute-force way without doing much of the work that is required for WBE anyway, so I just assume that “funding alignment” means “funding WBE”; this is a problem that can be solved with enough funding; if you don’t think alignment can be solved by raising enough money, no matter how much money and what the money can be spent on, then the rest of this isn’t applicable)
(2) there are a bunch of ways in which markets fund public goods (for example, many information goods are funded by bundling ads with them) and coordination problems involving positive or negative externalities or other market failures (all of which, if they can in principle be solved by a government by implementing some kind of legislation, can be seen as / converted into public goods problems, if nothing else the public goods problem of funding the operations of a firm that enforces exactly whatever such a legislation would say; so the only kind of market failure that truly needs to be addressed is public goods problems)
(3) ultimately, if none of the ways in which markets fund public goods works, it should always still be possible to fall back on Coasean bargaining or some variant on dominant assurance contracts, if transaction costs can be made low enough
(4) transaction costs in free markets will be lower due, among other reasons, to not having horridly inefficient state-run financial and court systems
(5) prediction markets and dominant assurance contracts and other fun economic technologies don’t, in free markets, have the status of being vaguely shady and perhaps illegal that they have in societies with states
(6) if transaction costs cannot be made low enough for the problem to be solved using free markets, it will not be solved using free markets
(7) in that case, it won’t, either, be solved by a government that makes decisions through, directly or indirectly, some kind of voting system, because for voters to vote for good governments that do good things like funding WBE R&D instead of bad things like funding wars is also an underfunded public good with positive externalities and the coordination problem faced by voters involves transaction costs that are just as great as those faced by potential contributors to a dominant assurance contract (or to a bundle of dominant assurance contracts), since the number of parties, amount of research and communication needed, and so on are just as great and usually greater, and this remains true no matter the kind of voting system used, whether that involves futarchy or range voting or quadratic voting or other attempts at solving relatively minor problems with voting; so using a democratic government to solve a public goods or externality problem is effectively just replacing a public goods or externality problem by another that is harder or equally hard to solve.
In other words: from a libertarian perspective, it makes really quite a lot of sense (without compromising your libertarian ideals even one iota) to look at the AI developers and say "fucking stop (you are taking far too much risk with everyone else's lives; this is a form of theft until and unless you can pay all the people whose lives you're risking, enough to offset the risk)".
Yes, it makes a lot of sense to say that, but not a lot of sense for a democratic government to be making that assessment and enforcing it (not that democratic governments that currently exist have any interest in doing that). Which I think is why you see some libertarians criticize calls for government-enforced AI slowdowns.