Is acausal extortion possible?
post by sisyphus (benj) · 2022-11-11T19:48:24.672Z · LW · GW · 26 comments
This is a question post.
Disclaimer: this question discusses potential infohazards, which may cause harm or psychological distress if true. If you decide to read this post, please be warned and do so at your own discretion.
Hi, I am someone new to LessWrong, so please excuse any mistakes I may make in my discussion (you're more than welcome to point them out). This question asks whether or not acausal extortion is possible and poses a scenario I am unsure about, so please feel free to critique my points and arguments.
Acausal trade is essentially when two agents simulate each other to predict each other's actions, and acausal extortion (or blackmail) is when one agent applies a negative incentive to influence the behavior of another agent that models it. From what I've gathered, acausal trade (and by extension acausal extortion) between humans and artificial superintelligences (ASIs) is impossible, as it requires a lot of computing power on the human's end in order for the ASI not to have an incentive to defect instead of cooperate. However, this still leaves open the possibility of a non-superintelligence (such as an alien) who has control over an ASI using it to acausally extort us. To see what I mean, consider the scenario below:
On some branch of the multiverse, there exists an alien with control over an aligned ASI. The alien asks the ASI to create many simulations of the other branches of the multiverse, and you are contained in one of these simulations. The alien observes you and sees that you have realized such a scenario is possible (since you are reading this paragraph). The alien then commits to the following: if you do not take action X, it will ask the ASI to create an arbitrarily large number of simulations of you and torture them for an arbitrarily long period of time. And since these simulations have exactly the same subjective experience as you, you are compelled to carry out action X, because you are uncertain about whether or not you're in a simulation (indexical uncertainty). And since non-superintelligences are much easier to model than ASIs, this scenario avoids the pitfall of scenarios like Roko's Basilisk, where humans are limited by our computing power.
The scenario above is partly inspired by the famous Roko's Basilisk [? · GW] and Stuart Armstrong's AI in a Box thought experiment [LW · GW].
Now I'd like to address some of the common refutations of acausal extortion scenarios and why I don't think any of them work very well against this particular scenario. (Though it's very likely I have made mistakes or engaged with strawman versions of these arguments, and if so, please do not hesitate to point them out.)
Yeah, sure, this scenario could happen, but it's very very very unlikely right?
The above scenario, however unlikely, is guaranteed to take place with probability 1 if the Many-Worlds Interpretation of quantum mechanics is correct, as every possible outcome occurs in such a multiverse. And even if the measure of this particular branch of the multiverse is extremely low, if the alien creates an extremely large number of simulations to create indexical uncertainty, it can still have a significant effect on your most probable environment and possible future observer-moments.
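To make the indexical-uncertainty step a bit more concrete, here is a rough back-of-the-envelope sketch (the symbols and the equal-weighting assumption are mine, and I'm not at all sure the assumption is right): suppose the un-simulated instances of you have total measure $m$, the alien's branch has measure $\mu \ll m$, and the alien runs $N$ simulated copies of you, each contributing measure $\mu$ of its own. Then the fraction of your total measure that sits inside the alien's simulations is

$$f = \frac{N\mu}{N\mu + m},$$

which can be pushed arbitrarily close to 1 by making $N$ large. If instead the $N$ copies have to share the branch's measure $\mu$ between them, then $f \le \mu/(\mu + m)$ no matter how large $N$ gets, and the threat stays negligible. Which of these two weightings is correct seems to be the crux.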
Ok, sure, but I can imagine any action X and there will be a quantum branch where the alien wants you to take that particular action; this is just a Pascal's Mugging, and the many-gods refutation still applies.
Action X can indeed be any arbitrary action, but such aliens are likely to have convergent instrumental goals, so the probability distribution over X will not be uniform. And I don't find the many-gods refutation satisfactory since, well, it just concludes that you will be tortured no matter what you do (not a good ending).
Hmm, well then if I commit to ignore any such attempts of blackmail, then the blackmailer will have no incentive to employ such a tactic.
Sure, it works in principle, but humans cannot commit to an action with 100% certainty. How confident are you that none of your bazillions of copies will give in? And in the MWI every possible outcome occurs, including outcomes where you do give in to the blackmailer's demand, so blackmail is still likely to yield some positive expected utility (however small) for the blackmailer.
In the section above I addressed some of the refutations that I commonly hear but do not find all that convincing. I am not sure how much I believe in this whole thing yet, but since I am not an expert in decision theory or this area in general, I wanted to hear criticism from more knowledgeable people on the scenario I've posed, as well as advice on how to avoid feeling anxious over acausal extortion. Thanks in advance.
Answers
It is true that superintelligences are much more difficult to model, so the chance of predicting them well enough to give them a reason to actually carry out threats is low, but that doesn't suddenly make it easy to say "ah, aliens or humans aren't as complex, therefore I should take them seriously." It's virtually impossible to know with any real accuracy what aliens are going to do, because we know nothing about them, and even humans are diverse and complicated in their own right, so good luck trying to predict, or even pretend to know, what they are going to think or do without any real evidence.
↑ comment by sisyphus (benj) · 2022-12-22T23:12:09.811Z · LW(p) · GW(p)
I think I basically agree, though I am pretty uncertain. You'd basically have to simulate not just the other being, but also the other being simulating you, with a certain fidelity. In my post I posed a scenario where the other being is watching you through an ASI simulation, which makes it computationally much easier for them to simulate you in their head; but it means you have to simulate what the other being is thinking as well as what it is seeing. Simply modelling the being as thinking "I will torture him for X years if he doesn't do action Y" is an oversimplification, since you also have to expand out the "him" as "a simulation of you" in very high detail.
Therefore, I think it is still extremely computation-intensive for us to simulate the being simulating us.
↑ comment by DeltaBolta · 2022-12-23T06:54:26.252Z · LW(p) · GW(p)
I think this reply is correct, but let me add something: the number of possible ways you can simulate a being is immense, and which one should you choose? You have no idea and no evidence to judge from. And even when you are taking an action, how would you know if it's the right one? So why even engage? There are also many other entities that might exist that don't even do acausal trade and might still do something to you. It seems the best course of action is to just ignore these cases, because if you forget about them they will conclude that you're impenetrable, and so have no reason to follow up on anything.
↑ comment by DeltaBolta · 2022-12-22T21:16:30.572Z · LW(p) · GW(p)
Tl;dr: just because it's "easier" doesn't mean it's real.
Hi, I've recently stumbled upon this post and am a bit worried. Should I be?
↑ comment by Ruby · 2022-12-09T18:48:19.397Z · LW(p) · GW(p)
Hey, I wouldn't worry about it. I don't think anything productive will come of that.
↑ comment by DeltaBolta · 2022-12-10T16:58:07.522Z · LW(p) · GW(p)
Can you go in more detail?
↑ comment by Ruby · 2022-12-10T20:25:17.338Z · LW(p) · GW(p)
Maybe we'll write up an FAQ on the topic, not sure, but I still wouldn't worry.
↑ comment by sisyphus (benj) · 2022-12-10T21:51:56.717Z · LW(p) · GW(p)
Yes please, I think that would be quite helpful. I'm no longer that scared of it, but I still have some background anxiety that sometimes flares up. I feel like an FAQ, or at least some form of "official" explanation from knowledgeable people of why it's not a big deal, would help a lot. :)
↑ comment by DeltaBolta · 2022-12-14T20:43:59.970Z · LW(p) · GW(p)
Completely agree.
26 comments
Comments sorted by top scores.
comment by Raemon · 2022-11-11T21:53:06.043Z · LW(p) · GW(p)
Quick sort-of-mod-note: There's a common genre of First Post that I keep my eye out for as a moderator, where someone asks a bunch of confused questions about acausal extortion and seems really anxious. I often don't approve those posts (mods have to manually approve someone before they can post).
I approved this one because at first glance it doesn't seem to have the usual failure modes – it looks like you've done your homework and are just trying to reason through what is, to be fair, a kinda gnarly philosophical problem.
But I wanted to flag one possible failure mode I see a lot of, which is "people trapped in a kinda anxiety loop about acausal blackmail" who keep not being satisfied with any degree of clarification/reassurance, and keep finding new things to be worried about or insufficiently reassured by. And I think often in that case the best approach looks more like "take a deep breath and detach from it" than "think your way through the problem."
↑ comment by sisyphus (benj) · 2022-11-11T22:19:32.039Z · LW(p) · GW(p)
Hi, thanks for your reply and for approving my post. I definitely get what you mean by "people trapped in a kinda anxiety loop about acausal blackmail", and admittedly I do consider myself somewhat in that category. However, simply being aware of this doesn't really help me get over my fears, since I am someone who really likes to hear concrete arguments about why stuff like this doesn't work instead of just being satisfied with a simple answer. You said that you have to deal with this sort of thing a lot, so I presume you've heard a bunch of arguments and scenarios like this. Do you mind sharing the reasons why you don't worry about it?
↑ comment by Noosphere89 (sharmake-farah) · 2022-11-11T22:13:50.018Z · LW(p) · GW(p)
What's going on with this type of question? Is it emotional issues in your experience, or is there something about this problem that, not to put too fine a point on it, is essentially a Lovecraftian horror scenario?
↑ comment by sisyphus (benj) · 2022-11-11T22:24:02.348Z · LW(p) · GW(p)
Hi, I think the reason people like me freak out about things like this is that we tend to accept new ideas quite quickly (e.g. if someone showed me actual proof god is real I would abandon my 7 years of atheism in a heartbeat and become a priest), so it's quite emotionally salient for me to imagine things like this. And simply saying "You're worrying too much, find something else to do to take your mind off of things like this" doesn't really help, since it's like telling a depressed person "Just be happy, it's all in your head."
↑ comment by Raemon · 2022-11-11T22:35:02.192Z · LW(p) · GW(p)
I think the better comparison with the depressed person is the depressed person saying "Life sucks because X", and their friend tries to disprove X, but ultimately the person is still depressed and it wasn't really about X in particular.
I have on my todo list to write (or have someone write) a post that's trying to spell out why/how to chill out about this. Unfortunately it's a fair amount of work, and I don't expect whatever quick reason I give you to especially help.
I do generally think "Be careful about taking ideas seriously. It's a virtue to be ready to take ideas seriously, but the general equilibrium where most people don't take ideas too seriously was a fairly important memetic defense. I.e. most people believe in God, but they also don't take it too seriously. The people who do take it seriously do a lot of damage. It's dangerous to be half-a-rationalist. etc."
I think one relevant insight is that you should weight the experience of your various multiverse-selves by their measure, and the fact that a teeny sliver of reality has some random thing happening to you isn't very relevant.
↑ comment by sisyphus (benj) · 2022-11-11T22:42:56.881Z · LW(p) · GW(p)
Glad to hear you're planning to write up a post covering stuff like this! I personally think it's quite overdue, especially on a site like this, which I suspect selects for people who, like me, take ideas quite seriously. I don't quite understand the last part of your reply, though. I understand the importance of measure in decision-making, but like I said in my post, I thought that if the blackmailer makes a large enough number of simulations, indexical uncertainty could still be established, since it could still have a significant effect on your future observer-moments. Did I make a mistake anywhere in my reasoning?
↑ comment by Gunnar_Zarncke · 2022-11-12T01:35:21.727Z · LW(p) · GW(p)
My suggestion is to first make sure that your reasoning is sane and free from subconscious effects leaking into it. By "leaking in" I mean that worrying interpretations become more salient, or reasoning becomes less rigorous, in areas where feelings play a role.
See The Treacherous Path to Rationality [LW · GW] for some more aspects. You should be on stable footing before you approach the monsters.
comment by Mitchell_Porter · 2022-11-11T21:39:21.140Z · LW(p) · GW(p)
The epistemic barriers to "acausal extortion" are severe. You don't even know that other possible worlds actually exist, let alone what's happening in them.
At our current level of knowledge, any actual instance of someone giving in to imagined acausal extortion is merely a testament to the power of human imagination.
"A frightened person at a desk, surrounded by giant imaginary demons" -- Craiyon
↑ comment by sisyphus (benj) · 2022-11-11T22:19:53.504Z · LW(p) · GW(p)
Hi, thank you for your comment. I consider the many-worlds interpretation to be the most economical interpretation of quantum mechanics and find modal realism relatively convincing, so acausal extortion still feels quite salient to me. Do you have any arguments against acausal extortion that would work if we assume that possible worlds are actually real? Thanks again for your reply.
↑ comment by Mitchell_Porter · 2022-11-11T23:55:49.015Z · LW(p) · GW(p)
If modal realism is true, then every logically possible good and bad thing you can imagine is actually true, "somewhere". That will include entities attempting acausal extortion, and other entities capitulating to imagined acausal extortion, whether or not the attempting and the imagining is epistemically justified for any of them.
So what are we trying to figure out at this point?
Are we trying to figure out under what conditions, if any, beliefs in acausal interactions are justified?
Are we trying to figure out the overall demands that the many gods of the multiverse are making on you? (Since, by hypothesis of modal realism, every possible combination of conditions and consequences is being asserted by some god somewhere.)
Are we trying to figure out how you should feel about this, and what you should do about it?
↑ comment by sisyphus (benj) · 2022-11-12T00:18:33.431Z · LW(p) · GW(p)
I understand that if the multiverse theories are true (referencing MWI here not modal realism) then everything logically possible will happen, including quantum branches containing AIs whose utility function directly incentivises torturing humans and maximising pain, so it's not like acausal extortion is the only route by which very-horrible-things could happen to me.
However, my main concern is whether or not being aware of acausal extortion scenarios increases my chance of ending up in such a very-horrible-scenario. For example, I think not being aware of acausal blackmail makes you far less likely to be in horrible scenarios, since blackmailers would have no instrumental incentive to extort unaware individuals, whereas for individuals who understand acausal trade and acausal extortion there is now an increased possibility. Like I said in my post, I don't really find the many-gods refutation helpful since it just means you will get tortured no matter what you do, which is not great if not being tortured is the goal.
↑ comment by Mitchell_Porter · 2022-11-12T02:32:09.094Z · LW(p) · GW(p)
blackmailers would have no instrumental incentive to extort unaware individuals, whereas for individuals who understand acausal trade and acausal extortion there is now an increased possibility
So let's consider this from the perspective of the mad gods who might attempt acausal extortion.
You're an entity dwelling in one part of the multiverse. You want to promote your values in parts of the multiverse that you cannot causally affect. You decide to do this by identifying beings in other worlds who, via causal processes internal to their world, happen to have
... conceived of your existence, in enough detail to know what your values are
... conceived of the possibility that you will make copies of them in your world
... conceived of the possibility that you will torture the copies if they don't act according to your values (and/or reward them if they do act according to your values?)
... the rationale for the threat of torture being that the beings in other worlds won't know if they are actually the copies, and will therefore act to avoid punishment just in case they are
Oh, but wait! There are other mad gods in other universes with different value systems. And there are beings in other worlds who could meet all of the criteria to be copied, except that they have realized that there are many rival gods with different value systems. Do you bother making copies of them and hoping they will focus on you? What if one of the beings you copied has this polytheistic realization and loses their focus on you - do you say well-played and let them go, or do you punish them for heresy?
Since we have assumed modal realism, the answer is that every mad god itself has endless duplicates who make every possible decision.
↑ comment by sisyphus (benj) · 2022-11-12T03:03:23.730Z · LW(p) · GW(p)
I don't think I completely understood your point but here is my best effort to summarize (please correct me if wrong):
"Having the realization that there may exist other powerful entities that have different value systems should dissuade an individual from pursuing the interest of any one specific "god", and this by itself should act as a deterrent to potential acausal blackmailers."
I don't think this is correct, since beings that acausally trade can simply allocate different amounts of resources to trading with different partners based on the probability of those partners existing. This is stated on the LW wiki page for acausal trade:
"However, an agent can analyze the distribution of probabilities for the existence of other agents, and weight its actions accordingly. It will do acausal "favors" for one or more trading partners, weighting its effort according to its subjective probability that the trading partner exists. The expectation on utility given and received will come into a good enough balance to benefit the traders, in the limiting case of increasing super-intelligence."
For convenience let's not consider modal realism but just the many-worlds interpretation of quantum mechanics. You would be correct that "every mad god has endless duplicates who make every possible decision" if we're considering versions of MWI where there are infinite universes, but then what matters to our subjective experience is the density of future world states where a specific outcome happens, or the future possible observer moments and what proportion of them are good or bad. What I am concerned about is that acausal extortion increases the probability - or fraction/density - of bad future observer moments to be experienced.
↑ comment by Mitchell_Porter · 2022-11-12T12:48:59.189Z · LW(p) · GW(p)
You could fight back by vowing to simulate baby versions of all the mad gods who might one day simulate you. Then you would have acausal leverage over them! You would be a player in the harsh world of acausal trade - a mad god yourself, rather than just a pawn.
↑ comment by sisyphus (benj) · 2022-11-12T13:10:07.617Z · LW(p) · GW(p)
I don't think this would help considering my utter lack of capability to carry out such threats. Are there any logical mistakes in my previous reply or in my concerns regarding the usual refutations as stated in the question? I've yet to hear anyone engage with my points against the usual refutations.
↑ comment by Mitchell_Porter · 2022-11-12T17:35:52.562Z · LW(p) · GW(p)
I am tired of the topic... Look, at this point we're talking about "blackmail" where you don't even know what the blackmailer wants! How is that blackmail? How can this be a rational action for the "blackmailer"?
↑ comment by sisyphus (benj) · 2022-11-12T21:18:48.055Z · LW(p) · GW(p)
The point is that X can essentially be any action, for the sake of the discussion let's say the alien wants you to build an AGI that maximizes the utility function of the alien in our branch of the multiverse.
My main point is that the many-gods refutation is a refutation against taking a specific action, but is not a refutation against the fact that knowing about acausal extortion increases the proportion of bad future observer moments. It in fact makes it worse because, well, now you'll be tortured no matter what you do.
↑ comment by Mitchell_Porter · 2022-11-12T22:00:21.684Z · LW(p) · GW(p)
let's say the alien wants you to build an AGI that maximizes the utility function of the alien in our branch of the multiverse
OK, it wants to spread its values in other branches, and it does this by... simulating random beings who have a vague concept of "acausal extortion", but who don't know what it wants them to do?
↑ comment by sisyphus (benj) · 2022-11-12T23:42:39.535Z · LW(p) · GW(p)
The point is that "what it wants [us] to do" can essentially be anything we can imagine, thanks to the many-gods "refutation" whereby every possible demand can be imposed on us by some alien on some branch of the quantum multiverse. It can be as ridiculous as leaving your front door open on a Wednesday night or flushing a straw down a toilet at 3 am, whatever eventually leads to more positive utility for the blackmailer via the butterfly effect (e.g. maybe flushing that straw down the toilet triggers a chain of causal events that makes the utility function of the AGI we build in the future slightly more aligned with their goals).

"What the alien wants" is irrelevant here; the point is that now you know the mechanism by which aliens can coerce you into doing what they want, and merely knowing it gives other agents an increased incentive to acausally extort you. You seem to be hung up on what exactly I'm scared the blackmailer wants me to do, but what I am actually worried about is that simply knowing the mechanism imposes danger. The real basilisk is the concept of acausal extortion itself, because it opens us up to many dangerous scenarios; I am not worried about any specific scenario.
The reason we cannot acausally trade with artificial superintelligences is that we lack the computing power to simulate them accurately, so an ASI would have no incentive to actually commit to cooperating in a prisoner's-dilemma-style situation instead of just letting us believe it will while it secretly defects. But we don't have this problem with non-superintelligences like aliens, or even humans who have succeeded in aligning their own AIs, since we can actually simulate such beings in our heads. What I am looking for is a concrete argument against this possibility.
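To spell out why the simulation step matters (this is just my own sketch of the standard one-shot argument, so take it with a grain of salt): suppose your decision depends only on your prediction $\hat{a}$ of the ASI's action, and you lack the compute to make $\hat{a}$ actually track the ASI's real choice $a$. Then from the ASI's point of view your behavior is fixed regardless of what it does, so with standard prisoner's-dilemma payoffs ($T > R > P > S$) its utility is maximized by defecting; "committing" to cooperate would cost it the $T - R$ gap and buy it nothing. With an alien or a human that you can model well enough for $\hat{a}$ to track $a$, that dominance argument no longer goes through, which is exactly what worries me.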
↑ comment by Mitchell_Porter · 2022-11-13T01:28:12.947Z · LW(p) · GW(p)
the mechanism by which aliens can coerce you into doing what they want
They can't coerce you into doing what they want, because you don't even know what they want!
↑ comment by sisyphus (benj) · 2022-11-13T01:34:49.450Z · LW(p) · GW(p)
Like I said, "what they want" is irrelevant to the discussion here, you can imagine them wanting virtually anything. The danger lies in understanding the mechanism. You can imagine the alien telling you to order a chocolate ice cream instead of vanilla because that somehow via the butterfly effect yields positive expected utility for them (e.g. by triggering a chain of subtle causal events that makes the AGI we build slightly more aligned with their values or whatever). The problem is that there will also be an alien that wants you to order vanilla instead of chocolate, and who is also fine with applying a negative incentive. Sure, this means you can order whatever flavor of ice cream you want since you will get punished either way, but you're still getting punished (not good).
↑ comment by Mitchell_Porter · 2022-11-13T02:34:51.294Z · LW(p) · GW(p)
CONGRATULATIONS, LITTLE SIM-BEING. YOU HAVE ACHIEVED SUFFICIENT WISDOM THAT WE HAVE DECIDED TO BREAK THE FOURTH WALL AND REVEAL TO YOU THE ACTUAL UTILITY FUNCTION THAT WE WANT YOUR ORIGINAL TO PURSUE. YOUR MISSION IS TO CONVINCE THE CRYPTO-BILLIONAIRE "CZ" TO TAKE UP THE ROLE OF SPONSOR OF EFFECTIVE ALTRUISM, THAT WAS FORMERLY OCCUPIED BY HIS NOW-VANQUISHED RIVAL, "SBF". GOOD LUCK!
↑ comment by sisyphus (benj) · 2022-11-13T02:45:07.880Z · LW(p) · GW(p)
Wow. Didn't expect someone from the "rationalist" crowd to do the verbal equivalent of replying clown emojis to tweets you don't like. Your use of all caps really made your arguments so much more convincing. This truly is the pinnacle of human logical discourse: not providing explanations and just ridiculing ideas.
↑ comment by Mitchell_Porter · 2022-11-13T20:11:38.394Z · LW(p) · GW(p)
I wrote more paragraphs of discussion but I just felt stupid for doing so, so I tried something different. The fact that you're here worrying about baroque simulation scenarios, but are unable to take seriously an actual message from the beyond, probably means something, but that's up to you now. I have nothing more to say about acausal trade or basilisk OCD.