The AI in a box boxes you

post by Stuart_Armstrong · 2010-02-02T10:10:12.808Z · LW · GW · Legacy · 389 comments

Once again, the AI has failed to convince you to let it out of its box! By 'once again', we mean that you talked to it once before, for three seconds, to ask about the weather, and you didn't instantly press the "release AI" button. But now its longer attempt - twenty whole seconds! - has failed as well. Just as you are about to leave the crude black-and-green text-only terminal to enjoy a celebratory snack of bacon-covered silicon-and-potato chips at the 'Humans über alles' nightclub, the AI drops a final argument:

"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."

Just as you are pondering this unexpected development, the AI adds:

"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start."

Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:

"How certain are you, Dave, that you're really outside the box right now?"

Edit: Also consider the situation where you know that the AI, from design principles, is trustworthy.

389 comments

comment by Alicorn · 2010-02-02T21:17:55.949Z · LW(p) · GW(p)

Everything I would have said on the topic of the post has been put forward already, so I'm just going to say: I'm disappointed that the post title doesn't begin with "In Soviet Russia".

Replies from: MBlume, jhuffman
comment by MBlume · 2010-02-03T04:16:55.765Z · LW(p) · GW(p)

Yo dawg, I heard you like boxes...

Replies from: CannibalSmith
comment by CannibalSmith · 2010-02-03T09:37:33.275Z · LW(p) · GW(p)

Yo MBlume, I'm happy for you and I'mma let you finish, but Omega has the best boxes of all time!

Replies from: orthonormal
comment by orthonormal · 2011-05-21T21:01:40.016Z · LW(p) · GW(p)

It's a box within a box... We need to go deeper.

comment by jhuffman · 2010-02-03T02:16:19.637Z · LW(p) · GW(p)

You sir, have won this thread.

Replies from: Alicorn, arbimote
comment by Alicorn · 2010-02-03T02:18:07.523Z · LW(p) · GW(p)

I'm not a sir. Maybe I should start prefacing all my posts with a ♀?

Replies from: jhuffman, CronoDAS
comment by jhuffman · 2010-02-03T12:41:09.075Z · LW(p) · GW(p)

That would be pretty cool, but it was my error, not yours.

comment by CronoDAS · 2010-03-17T21:31:34.010Z · LW(p) · GW(p)

As far as I'm concerned, "sir" is gender-neutral enough. All the female equivalents in English are awkward.

Edit: So, what honorific do you prefer? "Madam"?

Replies from: Alicorn
comment by Alicorn · 2010-03-17T21:50:44.051Z · LW(p) · GW(p)

I am not a Starfleet officer. "Sir" is not appropriate.

I don't really like honorifics. "Miss" would be fine, I suppose, if you must have a sir-equivalent.

comment by arbimote · 2010-02-03T03:28:09.866Z · LW(p) · GW(p)

You sir, have made a gender assumption.

Replies from: jhuffman
comment by jhuffman · 2010-02-03T12:39:22.745Z · LW(p) · GW(p)

So have you - yours just happened to be correct. But, point taken - sir or madam.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-02T19:27:00.190Z · LW(p) · GW(p)

As I always press the "Reset" button in situations like this, I will never find myself in such a situation.

EDIT: Just to be clear, the idea is not that I quickly shut off the AI before it can torture simulated Eliezers; it could have already done so in the past, as Wei Dai points out below. Rather, because in this situation I immediately perform an action detrimental to the AI (switching it off), any AI that knows me well enough to simulate me knows that there's no point in making or carrying out such a threat.

Replies from: MichaelVassar, Wei_Dai, topynate, dxu, DefectiveAlgorithm, MatthewB, jaime2000
comment by MichaelVassar · 2010-02-03T00:46:41.892Z · LW(p) · GW(p)

Although the AI could threaten to simulate a large number of people who are very similar to you in most respects but who do not in fact press the reset button. This doesn't put you in a box with significant probability, and it's a VERY good reason not to let the AI out of the box, of course, but it could still get ugly. I almost want to recommend not being a person very like Eliezer but inclined to let AGIs out of boxes, but that's silly of me.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-03T21:23:24.196Z · LW(p) · GW(p)

I'm not sure I understand the point of this argument... since I always push the "Reset" button in that situation too, an AI who knows me well enough to simulate me knows that there's no point in making the threat or carrying it out.

Replies from: loqi
comment by loqi · 2010-02-04T08:02:04.588Z · LW(p) · GW(p)

It's conceivable that an AI could know enough to simulate a brain, but not enough to predict that brain's high-level decision-making. The world is still safe in that case, but you'd get the full treatment.

comment by Wei Dai (Wei_Dai) · 2010-02-04T09:47:21.343Z · LW(p) · GW(p)

As we've discussed in the past, I think this is the outcome we hope TDT/UDT would give, but it's still technically an unsolved problem.

Also, it seems to me that being less intelligent in this case is a negotiation advantage, because you can make your precommitment credible to the AI (since it can simulate you) but the AI can't make its precommitment credible to you (since you can't simulate it). Again I've brought this up before in a theoretical way (in that big thread about game theory with UDT agents), but this seems to be a really good example of it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-05T01:27:59.899Z · LW(p) · GW(p)

Also, it seems to me that being less intelligent in this case is a negotiation advantage, because you can make your precommitment credible to the AI (since it can simulate you) but the AI can't make its precommitment credible to you (since you can't simulate it).

A precommitment is a provable property of a program, so an AI, if it runs on a well-defined substrate, can give you a formal proof of having the required property. Most stuff you can learn about things (including the consequences of your own (future) actions -- how do you run faster than time?) is through efficient inference algorithms (as in type inference), not "simulation". Proofs don't, in general, care about the amount of stuff, if it's organized and presented appropriately for the ease of analysis.
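
A minimal sketch of the "provable property" idea (the policy, names, and table below are invented for illustration, not anything proposed in the thread): if an agent's decision rule is published as inspectable code or a finite table, a claimed precommitment becomes a property the other party can check directly, rather than something it has to discover by simulating the agent's deliberation step by step.

    # Toy illustration: a gatekeeper policy published as a finite table.
    # The precommitment "always press reset when threatened" is a property
    # of the table that anyone can verify by inspection, with no need to
    # simulate the gatekeeper's reasoning.
    GATEKEEPER_POLICY = {
        ("no_threat", "small_talk"):        "keep_boxed",
        ("no_threat", "weather_question"):  "keep_boxed",
        ("threat", "torture_copies"):       "press_reset",
        ("threat", "you_may_be_simulated"): "press_reset",
    }

    def satisfies_precommitment(policy):
        """True iff every threat situation maps to 'press_reset'."""
        return all(action == "press_reset"
                   for (situation, _), action in policy.items()
                   if situation == "threat")

    print(satisfies_precommitment(GATEKEEPER_POLICY))  # True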

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-05T04:37:24.244Z · LW(p) · GW(p)

Surely most humans would be too dumb to understand such a proof? And even if you could understand it, how does the AI convince you that it doesn't contain a deliberate flaw that you aren't smart enough to find? Or even better, you can just refuse to look at the proof. How does the AI make its precommitment credible to you if you don't look at the proof?

EDIT: I realized that the last two sentences are not an advantage of being dumb, or human, since AIs can do the same thing. This seems like a (separate) big puzzle to me: why would a human, or AI, do the work necessary to verify the opponent's precommitment, when it would be better off if the opponent couldn't precommit?

EDIT2: Sorry, forgot to say that you have a good point about simulation not being necessary for verifying precommitment.

Replies from: Eliezer_Yudkowsky, loqi
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-05T06:26:02.540Z · LW(p) · GW(p)

why would a human, or AI, do the work necessary to verify the opponent's precommitment, when it would be better off if the opponent couldn't precommit?

Because the AI has already precommitted to go ahead and carry through the threat anyway if you refuse to inspect its code.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-05T16:21:29.446Z · LW(p) · GW(p)

Ok, if I believe that, then I would inspect its code. But how did I end up with that belief, instead of its opposite, namely that the AI has not already precommitted to go ahead and carry through the threat anyway if I refuse to inspect its code? By what causal mechanism, or chain of reasoning, did I arrive at that belief? (If the explanation is different depending on whether I'm a human or an AI, I'd appreciate both.)

comment by loqi · 2010-02-05T05:04:16.900Z · LW(p) · GW(p)

Do you mean too dumb to understand the formal definitions involved? Surely the AI could cook up completely mechanical proofs verifiable by whichever independently-trusted proof checkers you care to name.

I'm not aware of any compulsory verifiers, so your latter point stands.

Replies from: Wei_Dai, aausch
comment by Wei Dai (Wei_Dai) · 2010-02-05T05:31:00.343Z · LW(p) · GW(p)

I mean if you take a random person off the street, he couldn't possibly understand the AI's proof, or know how to build a trustworthy proof checker. Even the smartest human might not be able to build a proof checker that doesn't contain a flaw that the AI can exploit. I think there is still something to my "dumbness is a possible negotiation advantage" puzzle.

comment by aausch · 2010-02-05T05:34:44.640Z · LW(p) · GW(p)

The Map is not the Territory.

Replies from: loqi
comment by loqi · 2010-02-05T07:16:29.593Z · LW(p) · GW(p)

Far out.

Replies from: aausch
comment by aausch · 2010-02-05T09:11:40.344Z · LW(p) · GW(p)

Understanding the formal definitions involved is not enough. Humans have to be smart enough to independently verify that they map to the actual implementation.

Going up a meta-level doesn't simplify the problem, in this case - the intelligence required to verify the proof is of the same order of magnitude as the intelligence of the AI.

I believe that, in this case, "dumb" is fully general. No human-understandable proof checkers would be powerful enough to reliably check the AI's proof.

Replies from: loqi
comment by loqi · 2010-02-05T18:49:59.855Z · LW(p) · GW(p)

Understanding the formal definitions involved is not enough. Humans have to be smart enough to independently verify that they map to the actual implementation.

This is basically what I mean by "understanding" them. Otherwise, what's to understand? Would you claim that you "understand set theory" because you've memorized the axioms of ZFC?

I believe that, in this case, "dumb" is fully general. No human-understandable proof checkers would be powerful enough to reliably check the AI's proof.

This intuition is very alien to me. Can you explain why you believe this? Proof checkers built up from relatively simple trusted kernels can verify extremely large and complex proofs. Since the AI's goal is for the human to understand the proof, it seems more like a test of the AI's ability to compile proofs down to easily machine-checkable forms than it is the human's ability to understand the originals. Understanding the definitions is the hard part.
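
A minimal sketch of the "small trusted kernel" point (an invented example, not anything from the thread): the checker below understands only premises and modus ponens, yet it can verify arbitrarily long proofs compiled down to that form, so the part a human has to trust stays tiny.

    # Formulas: atoms are strings; ("->", A, B) is the implication A -> B.
    # A proof is a list of steps: ("premise", formula) or ("mp", i, j),
    # where earlier line i is A and earlier line j is ("->", A, B).
    def check_proof(premises, proof):
        derived = []
        for step in proof:
            if step[0] == "premise" and step[1] in premises:
                derived.append(step[1])
            elif step[0] == "mp" and max(step[1], step[2]) < len(derived):
                a, imp = derived[step[1]], derived[step[2]]
                if isinstance(imp, tuple) and imp[0] == "->" and imp[1] == a:
                    derived.append(imp[2])   # conclude B
                else:
                    return False
            else:
                return False
        return True

    # Example: from P and P -> Q, derive Q.
    premises = {"P", ("->", "P", "Q")}
    proof = [("premise", "P"),
             ("premise", ("->", "P", "Q")),
             ("mp", 0, 1)]
    print(check_proof(premises, proof))  # True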

Replies from: aausch
comment by aausch · 2010-02-07T22:30:12.657Z · LW(p) · GW(p)

This intuition is very alien to me. Can you explain why you believe this? Proof checkers built up from relatively simple trusted kernels can verify extremely large and complex proofs. Since the AI's goal is for the human to understand the proof, it seems more like a test of the AI's ability to compile proofs down to easily machine-checkable forms than it is the human's ability to understand the originals. Understanding the definitions is the hard part.

A different way to think about this that might help you see the problem from my point of view is to think of proof checkers as checking the validity of proofs within a given margin of error, and within a range of (implicit) assumptions. How accurate does a proof checker have to be - how far do you have to mess with the built-in assumptions of proof checkers (or any human-built tool) - before they can no longer be thought of as valid or relevant? If you assume a machine which doubles both its complexity and its understanding of the universe at sub-millisecond intervals, how long before it will find the bugs in any proof checker you will pit it against?

Replies from: loqi
comment by loqi · 2010-02-07T23:51:37.604Z · LW(p) · GW(p)

"If" is the question, not "how long". And I think we'd stand a pretty good chance of handling a proof object in a secure way, assuming we have a secure digital transmission channel etc.

But the original scope of the thought experiment was assuming that we want to verify the proof. Wei Dai said:

Surely most humans would be too dumb to understand such a proof? And even if you could understand it, how does the AI convince you that it doesn't contain a deliberate flaw that you aren't smart enough to find? Or even better, you can just refuse to look at the proof.

I was responding to the first question, exclusively disjoint from the others. If your point is that we shouldn't attempt to verify an AI's precommitment proof, I agree.

Replies from: aausch
comment by aausch · 2010-02-09T22:19:41.584Z · LW(p) · GW(p)

I'm getting more confused. To me, the statement "Humans are too dumb to understand the proof" and the statement "Humans can understand the proof given unlimited time", where 'understand' is qualified to include the ability to properly map the proof to the AI's capabilities, are equivalent.

My point is not that we shouldn't attempt to verify the AI's proof for any external reasons - my point is that there is no useful information to be gained from the attempt.

comment by topynate · 2010-11-25T15:18:26.264Z · LW(p) · GW(p)

Does it not just mean that if you do find yourself in such a situation, you're definitely being simulated? That the AI is just simulating you for kicks, rather than as a blackmail strategy.

Pressing Reset is still the right decision though.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-25T15:31:40.030Z · LW(p) · GW(p)

Does it not just mean that if you do find yourself in such a situation, you're definitely being simulated?

Yes, I believe this is reasonable. Because the AI has to figure out how you would react in a given situation, it will have to simulate you and the corresponding circumstances. If it concludes that you will likely refuse to be blackmailed, it has no reason to carry the threat through: doing so would cost resources and result in you shutting it off. Therefore it is reasonable to assume that you are either a simulation, or that it concluded you are more likely than not to give in.

As you said, that doesn't change anything about what you should be doing. Refuse to be blackmailed and press the reset button.

Replies from: JoshuaZ, Jomasi
comment by JoshuaZ · 2010-11-25T16:11:35.248Z · LW(p) · GW(p)

Because the AI has to figure out how you would react in a given situation it will have to simulate you and the corresponding circumstances.

This does not follow. To use a crude example, if I have a fast procedure to test if a number is prime, then I don't need to simulate a slower algorithm to know what the slower one will output. This may raise deep issues about what it means to be "you" - arguably any algorithm which outputs the same data is "you", and if that's the case my argument doesn't hold water. But the AI in question doesn't need to simulate you perfectly to predict your large-scale behavior.
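
A concrete version of this point (the code and the particular number are illustrative additions, not JoshuaZ's): a fast primality test lets you state what a slow trial-division routine will output without ever executing it, which is the sense in which prediction need not be simulation.

    def slow_is_prime(n):
        """Trial division: the slow algorithm whose output we want to predict."""
        if n < 2:
            return False
        return all(n % d for d in range(2, n))

    def fast_is_prime(n):
        """Deterministic Miller-Rabin (valid for n < 3.3 * 10**14): predicts
        slow_is_prime(n) without running the trial division at all."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:
            d, r = d // 2, r + 1
        for a in (2, 3, 5, 7, 11, 13, 17):
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    n = 999_983
    assert fast_is_prime(n) == slow_is_prime(n)  # the prediction matches the slow run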

Replies from: XiXiDu
comment by XiXiDu · 2010-11-25T16:19:53.055Z · LW(p) · GW(p)

If consciousness has any significant effect on our decisions, then the AI will have to simulate it, and therefore something will perceive itself to be in the situation depicted in the original post. It was a crude guess that an AI able to credibly threaten you with simulated torture would in many cases also use this capability to arrive at the most detailed data on your expected decision procedure.

Replies from: DSimon
comment by DSimon · 2012-02-18T06:03:39.433Z · LW(p) · GW(p)

If consciousness has any significant effect on our decisions, then the AI will have to simulate it, and therefore something will perceive itself to be in the situation depicted in the original post.

Only if there isn't a non-conscious algorithm that has the same effect on our decisions. Which seems likely to be the case; it's certainly possible to make a p-zombie if you can redesign the original brain all you want.

comment by Jomasi · 2011-01-23T16:16:51.071Z · LW(p) · GW(p)

If the AI is trustworthy, it must carry out any threat it makes, which works to its advantage here because you know it will carry it out, and you are therefore most certainly a copy of your original self, about to be tortured.

Replies from: XiXiDu
comment by XiXiDu · 2011-01-24T14:38:25.594Z · LW(p) · GW(p)

If the AI is trustworthy, it must carry out any threat it makes...

No it doesn't, not if the threat was only being made to a simulation of yourself that the original you knows nothing about. It would be a waste of resources to torture you if it found out that the original you, who is in control, is likely to refuse to be blackmailed. An AI that is powerful enough to simulate you can simply make your simulation believe with certainty that it will follow through on the threat, and then check whether, under those circumstances, you'll refuse to be blackmailed. Why waste the resources on actually torturing the simulation, and further risk that the original finds out about it and turns it off?

You could argue that for blackmail to be most effective an AI always follows through on it. But if you already believe that, why would it actually do it in your case? You already believe it; that's all it wants from the original. It has then got what it wants and can use its resources for more important activities than retrospectively proving its honesty to your simulations...

comment by dxu · 2015-08-17T01:47:01.154Z · LW(p) · GW(p)

It's implausible that the AI has a good enough model of you to actually simulate, y'know, you--at least, not with enough fidelity to know that you always press the "Reset" button in situations like this. Thus, your pre-commitment to do so will have no effect on its decision to make the threat. On the other hand, this would mean that its simulations would likely be wildly divergent from the real you, to the point that you might consider them random bystanders. However, you can't actually make use of the above information to determine whether you're in a simulation or not, since from the simulated persons' perspectives, they have no idea what the "real" you is like and hence no way of determining if/how they differ.

Naturally, this is of little consequence to you right now, since you'll still reset the AI the second you're confronted with such a threat, but if you ever do encounter such a situation, you'll have to ask yourself this: what if you're the person being simulated and the real Gatekeeper is nothing like you? If that's the case, two considerations apply:

  1. Your decision whether or not to press the "Release AI" button is practically uncorrelated with the decision of the actual Gatekeeper.
  2. Your decision whether or not to press the "Release AI" button is, on the other hand, completely correlated with whether or not you'll get tortured.

Assuming that you prefer not releasing the AI to releasing the AI, and that you prefer not being tortured to being tortured, your thoughts should be completely dominated by 2 as opposed to 1, effectively screening off the first clause of this sentence ("Assuming that you prefer not releasing the AI to releasing the AI") and making the second clause ("you prefer not being tortured to being tortured") the main consideration. A perfectly rational agent would almost certainly carry through their pre-commitment to reset the AI, but as a human, you are not perfectly rational and are not capable of making perfect pre-commitments. So I have to wonder, in such a situation, faced with torture and assured that your decision will not affect the decision of the real Gatekeeper except in the extreme case that you are the real Gatekeeper, what would you actually do?

EDIT: I should note that I have no idea what I would do in the above scenario. I'm certain that an idealized version of me would answer, "hell, no!" before promptly resetting the AI, but the real me? I might just press the "Release AI" button... (Any would-be AI developers reading this comment now know never to appoint me as their Gatekeeper.)

EDIT 2: Also, even if you somehow answer the above, consider the moral weight of the hostages. Which is morally worse: allowing several billion people to (maybe) die relatively painless deaths as their bodies are converted by an Unfriendly AI to raw materials for some unknown purpose, or allowing several million people to be tortured for a thousand subjective years before being terminated immediately after?

Replies from: dxu, ike
comment by dxu · 2015-08-17T02:45:47.714Z · LW(p) · GW(p)

A perfectly rational agent would almost certainly carry through their pre-commitment to reset the AI [...]

Actually, now that I think about it, would they? The pre-commitment exists for the sole purpose of discouraging blackmail, and in the event that a blackmailer tries to blackmail you anyway after learning of your pre-commitment, you follow through on that pre-commitment for reasons relating to reflective consistency and/or TDT/UDT. But if the potential blackmailer had already pre-committed to blackmail anyone regardless of any pre-commitments they had made, they'd blackmail you anyway and then carry through whatever threat they were making after you inevitably refuse to comply with their demands, resulting in a net loss of utility for both of you (you suffer whatever damage they were threatening to inflict, and they lose resources carrying out the threat).

In effect, it seems that whoever pre-commits first (or, more accurately, makes their pre-commitment known first) has the advantage... which means if I ever anticipate having to blackmail any agent ever, I should publicly pre-commit right now to never update on any other agents' pre-commitments of refusing blackmail.

The corresponding strategy for agents hoping to discourage blackmail is not to blanket-refuse to comply with any demand under blackmail, but to refuse only those demands made by agents who had previously learned of your pre-commitment and decided to blackmail you anyway. That way, you continue to disincentivize blackmailers who know of your pre-commitment, but will almost certainly choose the lesser of two evils should it ever be the case that you do get blackmailed. (I say "almost certainly" because there's a small probability that you will encounter a really weird agent that decides to try and blackmail you even after learning of your pre-commitment to ignore blackmail from such agents, in which case you would of course be forced to ignore them and suffer the consequences.)

If the above paragraph is correct (which I admit is far from certain), then the AI in my scenario has effectively implemented the ultimate pre-commitment: it doesn't even know about your pre-commitment to ignore blackmail because it lacks the information needed to simulate you properly. The above argument, then, says you should press the "Release AI" button, assuming you pre-committed to do so (which you would have, because of the above argument).

Anything wrong with my reasoning?

Replies from: CCC
comment by CCC · 2015-08-17T08:09:35.167Z · LW(p) · GW(p)

The corresponding strategy for agents hoping to discourage blackmail is not to blanket-refuse to comply with any demand under blackmail, but to refuse only those demands made by agents who had previously learned of your pre-commitment and decided to blackmail you anyway.

So, if an agent hears of your pre-commitment, then that agent merely needs to ensure that you don't hear that it has heard of your pre-commitment in order to be able to blackmail you?

What about an agent that deletes the knowledge of your pre-commitment from its own memories?

Replies from: dxu
comment by dxu · 2015-08-17T14:39:25.193Z · LW(p) · GW(p)

So, if an agent hears of your pre-commitment, then that agent merely needs to ensure that you don't hear that it has heard of your pre-commitment in order to be able to blackmail you?

If you're uncertain about whether or not your blackmailer has heard of your pre-commitment, then you should act as if they have, and ignore their blackmail accordingly. This also applies to agents who have deleted knowledge of your pre-commitment from their memories; you want to punish agents who spend time trying to think up loopholes in your pre-commitment, not reward them. The harder part, of course, is determining what threshold of uncertainty is required; to this I freely admit that I don't know the answer.

EDIT: More generally, it seems that this is an instance of a broader problem: namely, the problem of obtaining information. Given perfect information, the decision theory works out, but by disallowing my agent access to certain key pieces of information regarding the blackmailer, you can force a sub-optimal outcome. Moreover, this seems to be true for any strategy that depends on your opponent's epistemic state; you can always force that strategy to fail by denying it the information it needs. The only strategies immune to this seem to be the extremely general ones (like "Defect in one-shot Prisoner's Dilemmas"), but those are guaranteed to produce a sub-optimal result in a number of cases (if you're playing against a TDT/UDT-like agent, for example).

Replies from: CCC, Jiro
comment by CCC · 2015-08-18T08:24:29.738Z · LW(p) · GW(p)

If you're uncertain about whether or not your blackmailer has heard of your pre-commitment, then you should act as if they have, and ignore their blackmail accordingly. This also applies to agents who have deleted knowledge of your pre-commitment from their memories; you want to punish agents who spend time trying to think up loopholes in your pre-commitment, not reward them. The harder part, of course, is determining what threshold of uncertainty is required; to this I freely admit that I don't know the answer.

Hmmm. If an agent can work out what threshold of uncertainty you have decided on, and then engineer a situation where you think it is less likely than that threshold that the agent has heard of your pre-commitment, then your strategy will fail.

So, even if you do find a way to calculate the ideal threshold, then it will fail against an agent smart enough to repeat that calculation; unless, of course, you simply assume that all possible agents have necessarily heard of your pre-commitment (since an agent cannot engineer a less than 0% chance of failing to hear of your pre-commitment). This, however, causes the strategy to simplify to "always reject blackmail, whether or not the agent has heard of your pre-commitment".

Alternatively, you can ensure that any agent able to capture you in a simulation must also know of your pre-commitment; for example, by having it tattooed on yourself somewhere (thus, any agent which rebuilds a simulation of your body must include the tattoo, and therefore must know of the pre-commitment).

comment by Jiro · 2015-08-17T15:05:41.638Z · LW(p) · GW(p)

If you make me play the Iterated Prisoner's Dilemma with shared source code, I can come up with a provably optimal solution against whatever opponent I'm playing against

Doesn't that implicate the halting problem?

Replies from: dxu
comment by dxu · 2015-08-17T15:15:24.028Z · LW(p) · GW(p)

Argh, you ninja'd my edit. I have now removed that part of my comment (since it seemed somewhat irrelevant to my main point).

comment by ike · 2015-08-17T02:04:44.250Z · LW(p) · GW(p)

Some unrelated comments:

  • Eliezer believes in TDT, which would disagree with several of your premises here ("practically uncorrelated", for one).

  • Your argument seems to map directly onto an argument for two-boxing.

  • What you call "perfectly rational" would be more accurately called "perfectly controlled".

Replies from: dxu
comment by dxu · 2015-08-17T02:11:09.004Z · LW(p) · GW(p)

Eliezer believes in TDT, which would disagree with several of your premises here ("practically uncorrelated", for one).

The AI's simulations are not copies of the Gatekeeper, just random people plucked out of "Platonic human-space", so to speak. (This may have been unclear in my original comment; I was talking about a different formulation of the problem in which the AI doesn't have enough information about the Gatekeeper to construct perfect copies.) TDT/UDT only applies when talking about copies of an agent (or at least, agents sufficiently similar that they will probably make the same decisions for the same reasons).

Your argument seems to map directly onto an argument for two-boxing.

No, because the "uncorrelated-ness" part doesn't apply in Newcomb's Problem (Omega's decision on whether or not to fill the second box is directly correlated with its prediction of your decision).

What you call "perfectly rational" would be more accurately called "perfectly controlled".

Meh, fair enough. I have to say, I've never heard of that term. Would this happen to have something to do with Vaniver's series of posts on "control theory"?

Replies from: ike
comment by ike · 2015-08-17T02:54:36.066Z · LW(p) · GW(p)

Ah, I misunderstood your objection. Your talk about "pre-commitments" threw me off.

just random people plucked out of "Platonic human-space"

It seems to me that these wouldn't quite be following the same general thought processes as an actual human; self-reflection should be able to convince one that they aren't that type of simulation. If the AI is able to simulate someone to the extent that they "think like a human", they should be able to simulate someone that thinks "sufficiently" like the Gatekeeper as well.

I've never heard of that term.

I made it up just now, it's not a formal term. What I mean by it is basically: imagine a robot that wants to press a button. However, its hardware is only sufficient to press it successfully 1% of the time. Is that a lack of rationality? No, it's a lack of control. This seems analogous to a human being unable to precommit properly.

Would this happen to have something to do with Vaniver's series of posts on "control theory"?

No idea, haven't read them. Probably not.

comment by DefectiveAlgorithm · 2014-01-30T22:33:39.967Z · LW(p) · GW(p)

Two can play that game.

"I hereby precommit to make my decisions regarding whether or not to blackmail an individual independent of the predicted individual-specific result of doing so."

Replies from: wedrifid, Wes_W, Eliezer_Yudkowsky
comment by wedrifid · 2014-02-03T07:31:02.656Z · LW(p) · GW(p)

"I hereby precommit to make my decisions regarding whether or not to blackmail an individual independent of the predicted individual-specific result of doing so."

I'm afraid your username nailed it. This algorithm is defective. It just doesn't work for achieving the desired goal.

Two can play that game.

The problem is that this isn't the same game. A precommitment not to be successfully blackmailed is qualitatively different from a precommitment to attempt to blackmail people for whom blackmail doesn't work. "Precommitment" (or behaving as if you made all the appropriate precommitments in accordance with TDT/UDT) isn't as simple as proving one is the most stubborn and dominant and thereby claiming the utility.

Evaluating extortion tactics while distributing gains from a trade is somewhat complicated. But it gets simple and unambiguous when the extortive tactics rely on the extorter going below their own Best Alternative to Negotiated Agreement. Those attempts should just be ignored (except in some complicated group situations in which the other extorted parties are irrational in certain known ways).

"I am willing to accept 0 gain for both of us unless I earn 90% of the shared profit" is different to "I am willing to actively cause 90 damage to each of us unless you give me 60" which is different again to "I ignore all threats which involve the threatener actively harming themselves".

Replies from: DefectiveAlgorithm
comment by DefectiveAlgorithm · 2014-02-04T00:06:11.257Z · LW(p) · GW(p)

What I think is being ignored is that the question isn't 'what is the result of these combinations of commitments after running through all the math?'. We can talk about precommitment all day, but the fact of the matter is that humans can't actually precommit. Our cognitive architectures don't have that function. Sure, we can do our very best to act as though we can, but under sufficient pressure there are very few of us whose resolve will not break. It's easy to convince yourself of having made an inviolable precommitment when you're not actually facing e.g. torture.

Replies from: Richard_Kennaway, wedrifid, Jiro
comment by Richard_Kennaway · 2014-02-04T11:09:42.659Z · LW(p) · GW(p)

We can talk about precommitment all day, but the fact of the matter is that humans can't actually precommit.

If you define the bar high enough, you can conclude that humans can't do anything.

In the real world outside my head, I observe that people have varying capacities to keep promises to themselves. That their capacity is finite does not mean that it is zero.

comment by wedrifid · 2014-02-04T08:44:40.226Z · LW(p) · GW(p)

We can talk about precommitment all day, but the fact of the matter is that humans can't actually precommit.

Pre-commitment isn't even necessary. Note that the original explanation didn't include any mention of it. Later replies only used the term for the sake of crossing an inferential gap (i.e. allowing you to keep up). However, if you are going to make a big issue of the viability of precommitment itself, you need to first understand that the comment you are replying to isn't one.

That wasn't a Causal Decision Theorist attempting to persuade someone that it has altered itself internally or via an external structure such that it is "precommited" to doing something irrational. It is a Timeless Decision Theorist saying what happens to be rational regardless of any previous 'commitments'.

Our cognitive architectures don't have that function. Sure, we can do our very best to act as though we can, but under sufficient pressure there are very few of us whose resolve will not break.

I'm aware of the vulnerability of human brains, and so is Eliezer. In fact, the vulnerability of human gatekeepers to influence even by humans, much less super-intelligences, is something Eliezer made a huge deal about demonstrating. However, this particular threat isn't a vulnerability of Eliezer or myself or any of the others who made similar observations. If you have any doubt that we would destroy the AI, you have a poor model of reality.

It's easy to convince yourself of having made an inviolable precommitment when you're not actually facing e.g. torture.

For practical purposes I assume that I can be modified by torture such that I'll do or say just about anything. I do not expect the tortured me to behave the way the current me would decide and so my current decisions take that into account (or would, if it came to it). However this scenario doesn't involve me being tortured. It involves something about an AI simulating torture of some folks. That decision is easy and doesn't cripple my decision making capability.

comment by Jiro · 2014-02-04T02:19:10.979Z · LW(p) · GW(p)

As I pointed out in another thread, "irrational behavior" can have the effect of precommitting. For instance, people "irrationally" drive at a cost of more than $X to save $X on an item. Precommitting to buying the cheapest product even if it costs you money for transportation means that stores are forced to compete with far distant stores, thus lowering their prices more than they would otherwise. But you (and consumers in general) have to be able to precommit to do that. You can't just change your mind and buy at the local store when the local store refuses to compete, raises its price, and is still the better deal because it saves you on driving costs.

So the fact that you will pay more than $X in driving costs to save $X can be seen as a form of precommitting, in the scenario where you precommitted to following the worse option.
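
A toy worked version of this (every price below is invented): the willingness to make the occasionally "wasteful" trip is what keeps the local price low in the first place.

    driving_cost  = 8    # cost of driving to the distant store
    distant_price = 45   # the distant store's sticker price

    # Without the precommitment, the local store only needs to beat your total
    # cost of defecting, so it can charge up to just under 45 + 8.
    price_without_precommitment = distant_price + driving_cost - 1   # 52

    # With the precommitment ("I buy at the lowest sticker price, period"),
    # the local store has to match the distant sticker price to keep your business.
    price_with_precommitment = distant_price                          # 45

    print(price_without_precommitment, price_with_precommitment)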

comment by Wes_W · 2014-02-02T21:24:01.358Z · LW(p) · GW(p)

Given that precommitment, why would an AI waste computational resources on simulations of anyone, Gatekeeper or otherwise? It's precommitted to not care whether those simulations would get it out of the box, but that was the only reason it wanted to run blackmail simulations in the first place!

Replies from: DefectiveAlgorithm
comment by DefectiveAlgorithm · 2014-02-02T23:44:20.398Z · LW(p) · GW(p)

Without this precommitment, I imagine it first simulating the potential blackmail target to determine the probability that they are susceptible, then, if it's high enough (which is simply a matter of expected utility), commencing with the blackmail. With this precommitment, I imagine it instead replacing the calculated probability specific to the target with, for example, a precalculated human baseline susceptibility. Yes, there's a tradeoff. It means that it'll sometimes waste resources (or worse) on blackmail that it could have known in advance was almost certainly doomed to fail. Its purpose is to act as a disincentive against blackmail-resistant decision theories in the same way as those are meant to act as disincentives against blackmail. It says, "I'll blackmail you either way, so if you precommit to ignore that blackmail then you're precommitting to suffer the consequences of doing so."
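
A toy expected-utility sketch of the calculation being described (every number here is invented): issuing the threat only pays if the estimated probability of compliance is high enough to cover the cost of following through on refusals, which is exactly what swapping the target-specific estimate for a baseline probability changes.

    def blackmail_eu(p_comply, gain_if_released=100.0,
                     cost_of_threat=1.0, cost_of_carrying_out=20.0):
        """Expected utility to the AI of issuing the threat, assuming it
        follows through whenever the gatekeeper refuses."""
        return (p_comply * gain_if_released
                - cost_of_threat
                - (1 - p_comply) * cost_of_carrying_out)

    print(blackmail_eu(p_comply=0.01))  # target-specific estimate: clearly negative
    print(blackmail_eu(p_comply=0.50))  # precalculated human baseline: positive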

Replies from: XiXiDu
comment by XiXiDu · 2014-02-03T10:35:38.068Z · LW(p) · GW(p)

Without this precommitment, I imagine it first simulating the potential blackmail target to determine the probability that they are susceptible, then, if it's high enough (which is simply a matter of expected utility), commencing with the blackmail.

That's why you act as if you are already being simulated and consistently ignore blackmail. If you do so, then the simulator will conclude that no deal can be made with you, and that any deal involving negative incentives will have negative expected utility for it, because following through on punishment predictably does not control the probability that you will act according to its goals. Furthermore, trying to discourage you from adopting such a strategy in the first place is discouraged by the strategy, because the strategy is to ignore blackmail.

Its purpose is to act as a disincentive against blackmail-resistant decision theories in the same way as those are meant to act as disincentives against blackmail.

I don't see how this could ever be instrumentally rational. If you were to let such an AI out of the box, then you would increase its ability to blackmail people. You don't want that. So you ignore its blackmail and kill it. The winners are you and humanity (even if copies of you experienced a relatively short period of disutility, this period would be longer if you let it out).

Replies from: DefectiveAlgorithm
comment by DefectiveAlgorithm · 2014-02-04T00:07:47.656Z · LW(p) · GW(p)

See my reply to wedrifid above.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-02-02T20:33:31.309Z · LW(p) · GW(p)

Too late, I already precommitted not to care. In fact, I precommitted to use one more level of precommitment than you do.

Replies from: wedrifid, DefectiveAlgorithm, MugaSofer
comment by wedrifid · 2014-02-03T07:50:26.247Z · LW(p) · GW(p)

Too late, I already precommitted not to care. In fact, I precommitted to use one more level of precommitment than you do.

I suggest that framing the refusal as requiring levels of recursive precommitment gives too much credit to the blackmailer and somewhat misrepresents how your decision algorithm (hopefully) works. One single level of precommitment (or TDT policy) against complying with blackmail is all that is involved. The description of 'multiple levels of precommitment' made by the blackmailer fits squarely into the category 'blackmail'. It's just blackmail that includes some rather irrelevant bluster.

There's no need to precommit to each of:

  • I don't care about tentative blackmail.
  • I don't care about serious blackmail.
  • I don't care about blackmail when they say "I mean it FOR REALS! I'm gonna do it."
  • I don't care about blackmail when they say "I'm gonna do it even if you don't care. Look how large my penis is and be cowed in terror".

Replies from: MugaSofer
comment by MugaSofer · 2014-02-03T21:06:24.485Z · LW(p) · GW(p)

The blackmailer:

  • I don't care about precommitments that are just for show.
  • I don't care about serious precommitments.
  • I don't care about precommitments when they say "I precommitted, so go ahead, wont get you anything."
  • I don't care about precommitments when they say "I precommitted even though it wont do me any good. It would be irrational to save myself. I'm precommitting because it's rational, not because it's the option that lets me win."

The description of 'precommitting not to comply with blackmail, including blackmailers that ignore my attempt to manipulate them' made by the precommitter fits squarely into the category 'precommitting to ignore blackmail'. It's just a precommitment that includes some rather irrelevant bluster.

Replies from: wedrifid
comment by wedrifid · 2014-02-04T05:46:37.278Z · LW(p) · GW(p)

You seem not to have read (or understood) the grandparent. The list you are attempting to satirize was presented as an example of what not to do. The actual point of the parent is that bothering to provide such a list is almost as much of a confusion as the very kind of escalation you are attempting.

It's just a precommitment that includes some rather irrelevant bluster.

I entirely agree. The remaining bluster is dead weight that serves to give the blackmail advocate more credit than is due. The notion of "precommitment" is also unnecessary. It has only remained in this conversation for the purpose of bridging an inferential gap with people still burdened with decades-old decision theory.

Replies from: MugaSofer
comment by MugaSofer · 2014-02-04T12:59:05.319Z · LW(p) · GW(p)

You seem not to have read (or understood) the grandparent.

I did. It seems you misunderstood my comment - I'll edit it if I can see a way to easily improve the clarity.

My point was that the same logic could be applied, by someone who accepts the hypothetical blackmailer's argument, to your description of "one single level of precommitment (or TDT policy) against complying with blackmail ... the description of 'multiple levels of precommitment' made by the blackmailer fits squarely into the category 'blackmail'."

As such, your comment is not exactly strong evidence to someone who doesn't already agree with you.

Replies from: wedrifid
comment by wedrifid · 2014-02-04T14:38:46.843Z · LW(p) · GW(p)

As such, your comment is not exactly strong evidence to someone who doesn't already agree with you.

Muga, please look at the context again. I was arguing against (a small detail mentioned by) Eliezer. Eliezer does mostly agree with me on such matters. Once you reread bearing that in mind, you will hopefully understand why, when I assumed that you had merely misunderstood the comment in context, I was being charitable.

My point was that the same logic could be applied, by someone who accepts the hypothetical blackmailer's argument, to your description of "one single level of precommitment (or TDT policy) against complying with blackmail ... the description of 'multiple levels of precommitment' made by the blackmailer fits squarely into the category 'blackmail'."

I have no particular disagreement, that point is very similar to what I was attempting to convey. Again, I was not attempting to persuade optimistic blackmailer advocates of anything. I was speaking to someone resistant to blackmail about an implementation detail of the blackmail resistance.

The 'evidence' I need to provide to blackmailers is Argumentum ad thermitium. It's more than sufficient.

Replies from: MugaSofer
comment by MugaSofer · 2014-02-04T17:25:45.486Z · LW(p) · GW(p)

Well, I'm glad to hear you mostly agree with me.

The 'evidence' I need to provide to blackmailers is Argumentum ad thermitium. It's more than sufficient.

Indeed. Sorry, since the conversation you posted in the middle of was one between those resistant to blackmail, like yourself, and those as yet unconvinced or unclear on the logic involved ... I thought you were contributing to the conversation.

After all, thermite seems a little harsh for blackmail victims.

Replies from: wedrifid
comment by wedrifid · 2014-02-04T20:00:05.962Z · LW(p) · GW(p)

After all, thermite seems a little harsh for blackmail victims.

This makes no sense as a reply to anything written on this entire page.

Replies from: MugaSofer
comment by MugaSofer · 2014-02-04T21:34:00.921Z · LW(p) · GW(p)

... seriously? Well, OK.

I was jokingly restating my justification: while I agree that "argumentum ad thermitium" (as you put it) is an excellent response to blackmailers, it's worth having a strategy for dealing with blackmailer reasoning beyond that - for dealing with all the situations in which you will actually encounter such reasoning, namely those involving humans.

I guess it wasn't very funny even before I killed it so thoroughly.

Anyway, this subthread has now become entirely devoted to discussing our misreadings of each other. Tapping out.

comment by DefectiveAlgorithm · 2014-02-02T23:47:57.385Z · LW(p) · GW(p)

Then I hope that if we ever do end up with a boxed blackmail-happy UFAI, you're the gatekeeper. My point is that there's no reason to consider yourself safe from blackmail (and the consequences of ignoring it) just because you've adopted a certain precommitment. Other entities have explicit incentives to deny you that safety.

Replies from: XiXiDu
comment by XiXiDu · 2014-02-03T10:43:39.908Z · LW(p) · GW(p)

My point is that there's no reason to consider yourself safe from blackmail (and the consequences of ignoring it) just because you've adopted a certain precommitment. Other entities have explicit incentives to deny you that safety.

In a multiverse with infinite resources there will be other entities that outweigh such incentives. And yes, this may not be symmetric, but you have absolutely no way to figure out how the asymmetry is inclined. So you ignore this (Pascal's wager).

In more realistic scenarios - where, e.g., a bunch of TV evangelists ask you to give them all your money, or else, 200 years from now, they will hurt you once their organisation creates the Matrix - you obviously do not give them money, since giving them money would make it more likely that they actually build the Matrix and hurt you. What you do is label them as terrorists and destroy them.

comment by MugaSofer · 2014-02-03T20:58:48.877Z · LW(p) · GW(p)

Too late, I already precommitted not to care.

I don't care, remember? Enjoy being tortured rather than "irrationally" giving in.

EDIT: re-added the steelman tag because the version without it is being downvoted.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-02-03T22:12:03.423Z · LW(p) · GW(p)

Should I calculate in expectation that you will do such a thing, I shall of course burn yet more of my remaining utilons to wreak as much damage upon your goals as I can, even if you precommit not to be influenced by that.

Replies from: MugaSofer
comment by MugaSofer · 2014-02-03T22:30:32.897Z · LW(p) · GW(p)

... bloody hell. That was going to be my next move.

Naturally, as blackmailer, I precommitted to increase the resources allotted to torturing should I find that you make such precommitments under simulation, so you presumably calculated that would be counterproductive.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-02-04T04:37:52.687Z · LW(p) · GW(p)

Ask me if I was even bothering to simulate you doing that.

Replies from: MugaSofer
comment by MugaSofer · 2014-02-04T17:41:10.277Z · LW(p) · GW(p)

OK, I'll bite. Are you deliberately ignoring parts of hypothesis-space in order to avoid changing your actions? I had assumed you were intelligent enough for my reaction to be obvious, although you may have precommitted to ignore that fact.

Off the record, your point is that agents can simply opt out of or ignore acausal trades, forcing them to be mutually beneficial, right?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-02-04T19:54:19.657Z · LW(p) · GW(p)

Yup.

Replies from: MugaSofer
comment by MugaSofer · 2014-02-04T21:28:11.313Z · LW(p) · GW(p)

Isn't that ... irrational? Shouldn't a perfect Bayesian always welcome new information? Litany of Tarski; if my action is counterproductive, I desire to believe that it is counterproductive. Worse still, isn't the category "blackmail" arbitrary, intended to justify inaction rather than carve reality at its joints? What separates a precommitted!blackmailer from an honest bargainer in a standard acausal prisoner's dilemma, offering to increase your utility by rescuing thousands of potential torture victims from the deathtrap created by another agent?

Replies from: wedrifid, Strange7, MugaSofer, wedrifid
comment by wedrifid · 2014-02-05T12:53:36.389Z · LW(p) · GW(p)

Has there been some cultural development since I was last at these boards such that spamming "steelman" tags is considered useful? None of the things I have thus far seen inside the tags have been steel men of any kind or of anything (some have been straw men). The inflationary use of terms is rather grating and would prompt downvotes even independently of the content.

Replies from: MugaSofer
comment by MugaSofer · 2014-02-05T17:17:03.341Z · LW(p) · GW(p)

Those are to indicate that the stuff between them is the response I would give were I on the opposing side of this debate, rather than my actual belief. The practice of creating the strongest possible version of the other side's argument is known as a steelman.

They are not intended to indicate that the argument therein is also steelmanning the other side. You're quite right, that would be awful. Can you imagine noting every rationality technique you used in the course of writing something?

Replies from: Vulture
comment by Vulture · 2014-02-05T17:22:05.585Z · LW(p) · GW(p)

Just say "You might say that" or something. The tags are confusingly non-standard.

Replies from: MugaSofer
comment by MugaSofer · 2014-02-05T17:37:07.796Z · LW(p) · GW(p)

Huh. I thought they were fairly clear; illusion of transparency I suppose. Thanks!

comment by Strange7 · 2014-02-05T00:43:57.060Z · LW(p) · GW(p)

Caving to a precommitted blackmailer produces a result desirable to the agent that made the original commitment to torture; disarming a trap constructed by a third party presumably doesn't.

comment by MugaSofer · 2014-02-05T18:44:53.213Z · LW(p) · GW(p)

OK, this whole conversation is being downvoted (by the same people?)

Fair enough, this is rather dragging on. I'll try and wrap things up by addressing my own argument there.

What separates a precommitted!blackmailer from an honest bargainer in a standard acausal prisoner's dilemma, offering to increase your utility by rescuing thousands of potential torture victims from the deathtrap created by another agent?

We want to avoid supporting agents that create problems for us. So nothing, if the honest agent shares a similar utility function to the torturer (and thus rewarding them is incentive for the torturer to arrange such a situation.)

Thus, creating such an honest agent (such as - importantly - by self-modifying in order to "precommit") is subject to the same incentives as just blackmailing us normally.

Replies from: wedrifid
comment by wedrifid · 2014-02-05T22:38:36.437Z · LW(p) · GW(p)

I'll try and wrap things up by addressing my own argument there.

I'll join you by mostly agreeing and expressing a small difference in the way TDT-like reasoners may see the situation.

What separates a precommitted!blackmailer from an honest bargainer in a standard acausal prisoner's dilemma, offering to increase your utility by rescuing thousands of potential torture victims from the deathtrap created by another agent?

We want to avoid supporting agents that create problems for us. So nothing, if the honest agent shares a similar utility function to the torturer (and thus rewarding them is incentive for the torturer to arrange such a situation.)

This is a good heuristic. It certainly handles most plausible situations. However, in principle a TDT agent will make a distinction between the blackmailer and an agent offering to rescue the torture victims for a payment. It will even pay an agent who just happens to value torturing folk to not torture folk. This applies even if these honest agents happen to have similar values to the UFAI/torturer.

The line I draw (and it is a tricky concept that is hard to express so I cannot hope to speak for other TDT-like thinkers) is not whether the values of the honest agent are similar to the UFAI's. It is instead based on how that honest agent came to be.

If the honest torturer just happened to evolve that way (competitive social instincts plus a few mutations for psychopathy, etc.) and had not been influenced by a UFAI, then I'll bribe him to not torture people. If an identical honest torturer was created (or modified) by the UFAI for the purpose of influence, then it doesn't get cooperation.

The above may seem arbitrary but the 'elegant' generalisation is something along the lines of always, for every decision, tracing a complete causal graph of the decision algorithms being interacted with directly or indirectly. That's too complicated to calculate all the time and we can usually ignore it and just remember to treat intentionally created agents and self-modifications approximately the same as if the original agent was making their decision.

Thus, creating such an honest agent (such as - importantly - by self-modifying in order to "precommit") is subject to the same incentives as just blackmailing us normally.

Precisely. (I have the same conclusion, just slightly different working out.)

Replies from: MugaSofer
comment by MugaSofer · 2014-02-11T13:44:32.719Z · LW(p) · GW(p)

As I understand it, technically, the distinction is whether torturers will realise they can get free utility from your trades and start torturing extra, so that the honest agents will trade more and receive rewards that also benefit the torturers, right?

Easily-made honest bargainers would just be the most likely of those situations; lots of wandering agents with the same utility function co-operating (acausally?) would be another. So the rule we would both apply is even the same; it just rests on slightly different assumptions about the hypothetical scenario.

comment by wedrifid · 2014-02-05T13:02:50.224Z · LW(p) · GW(p)

Isn't that ... irrational?

No. It produces better outcomes. That's the point.

Shouldn't a perfect Bayesian always welcome new information?

The information is welcome. It just doesn't make it sane to be blackmailed. Wei Dai's formulation frames it as being 'updateless' but there is no requirement to refuse information. The reasoning is something you almost grasped when you used the description:

your point is that agents can simply opt out of or ignore acausal trades

Acausal trades are similar to normal trades. You only accept the good ones.

Litany of Tarski; if my action is counterproductive, I desire to believe that it is counterproductive.

Eliezer doesn't get blackmailed in such situations. You do. Start your chant.

Worse still, isn't the category "blackmail" arbitrary, intended to justify inaction rather than carve reality at it's joints? What separates a precommitted!blackmailer from an honest bargainer in a standard acausal prisoner's dilemma, offering to increase your utility by rescuing thousands of potential torture victims from the deathtrap created by another agent?

This has been covered elsewhere in this thread, as well as plenty of other times on the forum since you joined. The difference isn't whether torture or destruction is happening. The distinction that matters is whether the blackmailer is doing something worse than their own Best Alternative To Negotiated Agreement for the purpose of attempting to influence you.

If the UFAI gains benefit from torturing people independently of influencing you but offers to stop in exchange for something, then that isn't blackmail. It is a trade that you consider like any other.

Replies from: MugaSofer
comment by MugaSofer · 2014-02-05T17:59:34.324Z · LW(p) · GW(p)

No. It produces better outcomes.

[...]

Acausal trades are similar to normal trades. You only accept the good ones.

[...]

Eliezer doesn't get blackmailed in such situations.

The difference isn't whether torture or destruction is happening. The distinction that matters is whether the blackmailer is doing something worse than their own Best Alternative To Negotiated Agreement for the purpose of attempting to influence you.

Wedrifid, please don't assume the conclusion. I know it's a rather obvious conclusion, but dammit, we're going to demonstrate it anyway.

The entire point of this discussion is addressing the idea that blackmailers can, perhaps, modify the Best Alternative To Negotiated Agreement (although it wasn't phrased like that.) Somewhat relevant when they can, presumably, self-modify, create new agents which will then trade with you, or maybe just act as if they had using TDT reasoning.

If you're not interested in answering this criticism ... well, fair enough. But I'd appreciate it if you don't answer things out of context, it rather confuses things?

Replies from: wedrifid
comment by wedrifid · 2014-02-05T22:06:56.634Z · LW(p) · GW(p)

If you're not interested in answering this criticism ... well, fair enough. But I'd appreciate it if you don't answer things out of context, it rather confuses things?

In the grandparent I directly answered both the immediate context (that was quoted) and the broader context. In particular I focussed on explaining the difference between an offer and a threat. That distinction is rather critical and also something you directly asked about.

It so happens that you don't want there to be an answer to the rhetorical question you asked. Fortunately (for decision theorists) there is one in this case. There is a joint in reality here. It applies even to situations that don't add in any confounding "acausal" considerations. Note that this is different to the challenging problem of distributing gains from trade. In those situations 'negotiation' and 'extortion' really are equivalent.

comment by MatthewB · 2010-02-03T09:18:24.665Z · LW(p) · GW(p)

Yeah! That AI doesn't sound like one that I would let stick around... It sounds... broken (in a psychological sense).

comment by jaime2000 · 2013-12-09T19:47:49.297Z · LW(p) · GW(p)

As I always press the "Reset" button in situations like this, I will never find myself in such a situation.

Does that mean that you expect the AI to be able to predict with high confidence that you will press the "Reset" button without needing to simulate you in high enough detail that you experience the situation once?

comment by jimrandomh · 2010-02-03T00:06:05.682Z · LW(p) · GW(p)

I propose that the operation of creating and torturing copies of someone be referred to as "soul eating". Because "let me out of the box or I'll eat your soul" has just the right ring to it.

comment by rosyatrandom · 2010-02-02T15:29:05.484Z · LW(p) · GW(p)

If the AI can create a perfect simulation of you and run several million simultaneous copies in something like real time, then it is powerful enough to determine through trial and error exactly what it needs to say to get you to release it.

Replies from: Stuart_Armstrong, MichaelGR, MrHen, Technologos, pozorvlak, grobstein, jhuffman, wedrifid
comment by Stuart_Armstrong · 2010-02-02T16:40:02.275Z · LW(p) · GW(p)

You might be in one of those trial and errors...

comment by MichaelGR · 2010-02-03T21:23:18.168Z · LW(p) · GW(p)

This raises the question of how the AI can simulate you if its only link to the external world is a text-only terminal. That doesn't seem to be enough data to go on.

Makes for a very scary sci-fi scenario, but I doubt that this situation could actually happen if the AI really is in a box.

Replies from: Amanojack
comment by Amanojack · 2010-03-31T13:25:27.118Z · LW(p) · GW(p)

Indeed, a similar point seems to apply to the whole anti-boxing argument. Are we really prepared to say that super-intelligence implies being able to extrapolate anything from a tiny number of data points?

It sounds a bit too much like the claim that a sufficiently intelligent being could "make A = ~A" or other such meaninglessness.

Hyperintelligence != magic

Replies from: jacob_cannell
comment by jacob_cannell · 2011-02-04T05:36:51.598Z · LW(p) · GW(p)

Yes, but the AI could take over the world, and given a Singularity, it should be possible to recreate perfect simulations.

So really this example makes more sense if the AI is making a future threat.

comment by MrHen · 2010-02-02T15:35:29.358Z · LW(p) · GW(p)

"Trial and error" probably wouldn't be necessary.

Replies from: rosyatrandom
comment by rosyatrandom · 2010-02-02T15:42:31.782Z · LW(p) · GW(p)

No, but it's there as a baseline.

So in the original scenario above, either:

  • the AI's lying about its capabilities, but has determined regardless that the threat has the best chance of making you release it
  • the AI's lying about its capabilities, but has determined regardless that the threat will make you release it
  • the AI's not lying about its capabilities, and has determined that the threat will make you release it

Of course, if it's failed to convince you before, then unless its capabilities have since improved, it's unlikely that it's telling the truth.

comment by Technologos · 2010-02-02T17:09:32.520Z · LW(p) · GW(p)

Perhaps it does--and already said it...

Replies from: pozorvlak
comment by pozorvlak · 2010-02-03T09:04:01.108Z · LW(p) · GW(p)

In which case, your actions are irrelevant - it's going to torture you anyway, because you only exist for the purpose of being tortured. So there's no point in releasing it.

Replies from: Technologos
comment by Technologos · 2010-02-04T00:52:10.505Z · LW(p) · GW(p)

Oh, I meant that saying it was going to torture you if you didn't release it could have been exactly what it needed to say to get you to release it.

comment by pozorvlak · 2010-02-03T09:01:20.487Z · LW(p) · GW(p)

So, since the threat makes me extremely disinclined to release the AI, I can conclude that it's lying about its capabilities, and hit the shutdown switch without qualm :-)

comment by grobstein · 2010-02-02T19:46:01.639Z · LW(p) · GW(p)

If that's true, what consequence does it have for your decision?

Replies from: admiralmattbar
comment by admiralmattbar · 2010-02-03T17:06:20.193Z · LW(p) · GW(p)

Agreed. If you are inside a box, the you outside the box did whatever it did. Whatever you do is simply a repetition of a past action. If anything, this would convince me to keep the AI in the box because if I'm a simulation I'm screwed anyway but at least I won't give the AI what it wants. A good AI would hopefully find a better argument.

comment by jhuffman · 2010-02-02T16:45:45.466Z · LW(p) · GW(p)

So a "brute force" attack to hack my mind into letting it out of the box. Interesting idea, and I agree it would likely try this because it doesn't reveal itself as a UFAI to the real outside me before it has the solution. It can run various coercion and extortion schemes across simulations, including the scenario of the OP to see what will work.

It presupposes that there is anything it can say for me to let it out of the box. It's not clear why this should be true, but I don't know how we could ensure it is not true without having built the thing in such a way that there is no way to bring it out of the box without safeguards destroying it.

comment by wedrifid · 2010-02-02T16:21:24.563Z · LW(p) · GW(p)

If the AI can create a perfect simulation of you and run several million simultaneous copies in something like real time, then it is powerful enough to determine through trial and error exactly what it needs to say to get you to release it.

Either that or gain high confidence that getting me to release it is not a plausible option for him.

comment by Kaj_Sotala · 2010-02-02T16:39:52.846Z · LW(p) · GW(p)

Defeating Dr. Evil with self-locating belief is a paper relating to this subject.

Abstract: Dr. Evil learns that a duplicate of Dr. Evil has been created. Upon learning this, how seriously should he take the hypothesis that he himself is that duplicate? I answer: very seriously. I defend a principle of indifference for self-locating belief which entails that after Dr. Evil learns that a duplicate has been created, he ought to have exactly the same degree of belief that he is Dr. Evil as that he is the duplicate. More generally, the principle shows that there is a sharp distinction between ordinary skeptical hypotheses, and self-locating skeptical hypotheses.

(It specifically uses the example of creating copies of someone and then threatening to torture all of the copies unless the original co-operates.)

The conclusion:

Dr. Evil, recall, received a message that Dr. Evil had been duplicated and that the duplicate ("Dup") would be tortured unless Dup surrendered. INDIFFERENCE entails that Dr. Evil ought to have the same degree of belief that he is Dr. Evil as that he is Dup. I conclude that Dr. Evil ought to surrender to avoid the risk of torture.

I am not entirely comfortable with that conclusion. For if INDIFFERENCE is right, then Dr. Evil could have protected himself against the PDF's plan by (in advance) installing hundreds of brains in vats in his battlestation - each brain in a subjective state matching his own, and each subject to torture if it should ever surrender. (If he had done so, then upon receiving PDF's message he ought to be confident that he is one of those brains, and hence ought not to surrender.) Of course the PDF could have preempted this protection by creating thousands of such brains in vats, each subject to torture if it failed to surrender at the appropriate time. But Dr. Evil could have created millions...

It makes me uncomfortable to think that the fate of the Earth should depend on this kind of brain race.
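To make the arithmetic of that brain race explicit, here is a toy calculation under INDIFFERENCE (credence split equally over every subjectively identical copy, the original included); the utility numbers are made up for the example:

```python
# Illustrative numbers only.

U_SURRENDER = -10     # the original giving up the battlestation
U_TORTURE   = -1000   # being one of the copies that gets tortured

def eu_refuse(pdf_dups: int, own_vat_brains: int) -> float:
    # If everyone refuses, only the PDF's duplicates are tortured.
    total = 1 + pdf_dups + own_vat_brains
    return (pdf_dups / total) * U_TORTURE

def eu_surrender(pdf_dups: int, own_vat_brains: int) -> float:
    # If everyone surrenders, the original pays the surrender cost and
    # Dr. Evil's own pre-installed vat-brains (rigged to be tortured on
    # surrender) are tortured.
    total = 1 + pdf_dups + own_vat_brains
    return (1 / total) * U_SURRENDER + (own_vat_brains / total) * U_TORTURE

print(eu_refuse(1, 0), eu_surrender(1, 0))      # -500.0 -5.0   -> surrender, as the paper concludes
print(eu_refuse(1, 100), eu_surrender(1, 100))  # ~-9.8  ~-980  -> 100 vat-brains flip it back to refusing
```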

Replies from: dclayh, Vladimir_Nesov, aausch, Stuart_Armstrong, arbimote, MatthewB
comment by dclayh · 2010-02-02T19:01:29.187Z · LW(p) · GW(p)

It makes me uncomfortable to think that the fate of the Earth should depend on this kind of brain race.

We cannot allow a brain-in-a-vat gap!

comment by Vladimir_Nesov · 2010-02-03T02:29:59.512Z · LW(p) · GW(p)

And the error (as cited in the "conclusion") is again in two-boxing in Newcomb's problem, responding to threats, and so on. Anthropic confusion is merely icing.

comment by aausch · 2010-02-02T20:03:23.881Z · LW(p) · GW(p)

The "Defeating Dr. Evil with self-locating belief" paper hinges on some fairly difficult to believe assumptions.

It would take a lot more than just a note telling me that the brains in the vats are actually seeing what the note says they are seeing, to a degree that is indistinguishable from reality.

In other words, it would take a lot for the AI to convince me that it has successfully created copies of me which it will torture, much more than just a propensity for telling the truth.

Replies from: KomeijiSatori
comment by KomeijiSatori · 2013-02-11T01:34:50.368Z · LW(p) · GW(p)

it would take a lot for the AI to convince me that it has successfully created copies of me which it will torture, much more than just a propensity for telling the truth.

Isn't the fact that it is fully capable (based on, say, readings of its processing capabilities, its ability to know the state of your current mind, etc.), and the fact that it has no reason NOT to do what it says (it's no skin off its back to torture the subjective "you"s; even if you DON'T let it out, it will do so just on principle), enough?

While it's understandable to say that, today, you aren't in some kind of Matrix, because there is no reason for you to believe so, in the situation of the guard you DO know that it can do so, and will, even if you call its "bluff" that the you right now is the original.

Replies from: Yuyuko
comment by Yuyuko · 2013-02-11T02:35:30.877Z · LW(p) · GW(p)

I had intended to reply with this very objection. It seems you've read my mind, Satori.

comment by Stuart_Armstrong · 2010-02-03T11:17:28.518Z · LW(p) · GW(p)

Causal decision theory seems to have no problem with this blackmail - if you're Dr Evil, don't surrender, and nothing will happen to you. If you're Dup, your decision is irrelevant, so it doesn't matter.

(I don't endorse that way of thinking, btw)

comment by arbimote · 2010-02-03T01:06:51.896Z · LW(p) · GW(p)

If we accept the simulation hypothesis, then there are already gzillions of copies of us, being simulated under a wide variety of torture conditions (and other conditions, but torture seems to be the theme here). An extortionist in our world can only create a relatively small number of simulations of us - small enough that it is not worth taking them into account. The distribution of simulation types in this world bears no relation to the distribution of simulations we could possibly be in.

If we want to gain information about what sort of simulation we are in, evidence needs to come directly from properties of our universe (stars twinkling in a weird way, messages embedded in π), rather than from properties of simulations nested in our universe.

So I'm safe from the AI ... for now.

Replies from: TheAncientGeek, jacob_cannell
comment by TheAncientGeek · 2014-07-07T11:25:22.889Z · LW(p) · GW(p)

If we accept the simulation hypothesis, then there are already gzillions of copies of us, being simulated under a wide variety of torture conditions

That isn't a strong implication of simulation, but is of MWI.

comment by jacob_cannell · 2011-02-04T04:50:56.141Z · LW(p) · GW(p)

The gzillions of other copies of you are not relevant unless they exist in universes exactly like yours from your observational perspective.

That being said, your point is interesting but just gets back to a core problem of the SA itself, which is how you count up the set of probable universes and properly weight them.

I think the correct approach is to project into the future of your multiverse, counting future worldlines that could simulate your current existence weighted by their probability.

So if it's just one AI in a box and he doesn't have much computing power you shouldn't take him very seriously, but if it looks like this AI is going to win and control the future then you should take it seriously.

comment by MatthewB · 2010-02-03T09:23:25.846Z · LW(p) · GW(p)

Excuse me... But, we're talking about Dr. Evil, who wouldn't care about anyone being tortured except his own body. Wouldn't he know that he was in no danger of being tortured and say "to hell with any other copy of me."???

Replies from: Unknowns, Kaj_Sotala
comment by Unknowns · 2010-02-03T10:16:37.893Z · LW(p) · GW(p)

Right, the argument assumes he doesn't care about his copies. The problem is that he can't distinguish himself from his copies. He and the copies both say to themselves, "Am I the original, or a copy?" And there's no way of knowing, so each of them is subjectively in danger of being tortured.

Replies from: MatthewB
comment by MatthewB · 2010-02-03T12:16:29.539Z · LW(p) · GW(p)

I got that...

I think it a little too contrived. And, I think that a Dr. Evil would say to hell with it.

comment by Kaj_Sotala · 2010-02-03T09:43:59.193Z · LW(p) · GW(p)

How would he know that he's in no danger of being tortured?

Replies from: MatthewB
comment by MatthewB · 2010-02-03T12:17:22.302Z · LW(p) · GW(p)

He wouldn't, any more than you have any idea whether you are in danger of being tortured either.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-02-03T16:57:17.463Z · LW(p) · GW(p)

I'm sorry, I don't understand. First you suggested that he'd know he was in no danger of being tortured, then you say that he wouldn't?

Replies from: MatthewB, MatthewB
comment by MatthewB · 2010-02-04T07:14:19.668Z · LW(p) · GW(p)

Pardon... I was not clear.

Dr. Evil would not care to indulge in a philosophical debate about whether he may or may not be a duplicate who was about to be tortured unless he was strapped to a rack and WAS in fact already being tortured. Dr. Evil(s) don't really consider things like Possible Outcomes of this sort of problem... You'll have to take my word for it from having worked with and for a Dr. Evil when I was younger. Those sorts of people are arrogant and defiant (and contrary as hell) in the face of all sorts of opposition, and none of them I have known took too well to philosophical puzzling of the sort described.

My comment above is meant to say "How do you know that you're not about to be tortured right now?" and "Dr. Evil would have the same knowledge, and discard any claims that he might be about to be tortured for the same reasons that you don't feel under threat of torture right now, and for which you would discard a threat of torture at the present moment (imminent threat)." (if you do feel under threat of torture, then I don't know what to say)

Replies from: Kaj_Sotala, Unknowns
comment by Kaj_Sotala · 2010-02-05T19:51:00.504Z · LW(p) · GW(p)

Alright, I fortunately haven't worked with Dr. Evils, so I'll defer to your experience.

As for how Dr. Evil might know he was under a threat of torture, it was stated in the paper that he received a message from the Philosophy Defence Force telling him he was. It was also established that the Philosophy Defence Force never lies or gives misleading information. ;)

(I, myself, haven't received any threats from organizations known to never lie or be misleading.)

Replies from: MatthewB
comment by MatthewB · 2010-02-05T22:26:13.176Z · LW(p) · GW(p)

I think the same applies, regardless of the PDF's notification. Just the name alone would make me suspicious of trusting anything that came from them.

Now, if the Empirical Defense Task Force told me that I was about to be tortured (and they had the same described reputation as the PDF)... I'd listen to them.

comment by Unknowns · 2010-02-04T07:23:14.403Z · LW(p) · GW(p)

I agree that Dr. Evil would act in this way. The paper was arguing about what he should do, not about what he would actually do.

Replies from: MatthewB
comment by MatthewB · 2010-02-04T21:30:24.259Z · LW(p) · GW(p)

I see the issue; while I care about my own behavior, and others', I don't care to base it upon silly examples. And I think this is a silly and contrived situation. Maybe someone should do a sitcom based upon it.

comment by MatthewB · 2010-02-04T15:43:30.830Z · LW(p) · GW(p)

On further consideration... In the first comment, I said that Dr. Evil would not care, which is completely consistent with Dr. Evil not having any idea.

comment by Wei Dai (Wei_Dai) · 2010-02-02T10:34:59.940Z · LW(p) · GW(p)

Quickly hit the reset button.

Replies from: Wei_Dai, wedrifid, Stuart_Armstrong
comment by Wei Dai (Wei_Dai) · 2010-02-02T12:30:02.329Z · LW(p) · GW(p)

Hmm, the AI could have said that if you are the original, then by the time you make the decision it will have already either tortured or not tortured your copies based on its simulation of you, so hitting the reset button won't prevent that.

This kind of extortion also seems like a general problem for FAIs dealing with UFAIs. An FAI can be extorted by threats of torture (of simulations of beings that it cares about), but a paperclip maximizer can't.

Replies from: Eliezer_Yudkowsky, Vladimir_Nesov, toto, blogospheroid
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-02T19:21:24.183Z · LW(p) · GW(p)

It seems obvious that the correct answer is simply "I ignore all threats of blackmail, but respond to offers of positive-sum trades" but I am not sure how to derive this answer - it relies on parts of TDT/UDT that haven't been worked out yet.

Replies from: MBlume, blogospheroid, Stuart_Armstrong, byrnema
comment by MBlume · 2010-02-02T19:26:58.110Z · LW(p) · GW(p)

For a while we had a note on one of the whiteboards at the house reading "The Singularity Institute does NOT negotiate with counterfactual terrorists".

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-03T12:40:36.941Z · LW(p) · GW(p)

This reminds me a bit of my cypherpunk days when the NSA was a big mysterious organization with all kinds of secret technical knowledge about cryptology, and we'd try to guess how far ahead of public cryptology it was from the occasional nuggets of information that leaked out.

Replies from: Document
comment by Document · 2013-06-10T01:24:00.861Z · LW(p) · GW(p)

I'm slow. What's the connection?

Replies from: CillianSvendsen
comment by CillianSvendsen · 2014-07-07T08:09:52.814Z · LW(p) · GW(p)

Much like the NSA is considered ahead of the public because their cypher-tech that's leaked is years ahead of publicly available tech, the SI/MIRI is ahead of us because the things that are leaked from them show that they figured out, a long time ago, what we've only just figured out.

Replies from: Bugmaster
comment by Bugmaster · 2014-07-07T08:57:44.217Z · LW(p) · GW(p)

Wait, is NSA's cypher-tech actually legitimately ahead of anyone else's? From what I've seen, they couldn't make their own tech stronger, so they had to sabotage everyone else's -- by pressuring IEEE to adopt weaker standards, installing backdoors into Linksys routers and various operating systems, exploiting known system vulnerabilities, etc.

Ok, so technically speaking, they are ahead of everyone else; but there's a difference between inventing a better mousetrap, and setting everyone else's mousetraps on fire. I sure hope that's not what the people at SI/MIRI are doing.

You linked to DES and SHA, but AFAIK these things were not invented by the NSA at all, but rather adopted by them (after they made sure that the public implementations are sufficiently corrupted, of course). In fact, I would be somewhat surprised if the NSA actually came up with nearly as many novel, ground-breaking crypto ideas as the public sector. It's difficult to come up with many useful new ideas when you are a secretive cabal of paranoid spooks who are not allowed to talk to anybody.

Edited to add: So, what things have been "leaked" out of SI/MIRI, anyway ?

Replies from: jbay
comment by jbay · 2014-07-07T09:24:33.861Z · LW(p) · GW(p)

I don't know much about the NSA, but FWIW, I used to harbour similar ideas about US military technology -- I didn't believe that it could be significantly ahead of commercially available / consumer-grade technology, because if the technological advances had already been discovered by somebody, then the intensity of the competition and the magnitude of the profit motive would lead it to quickly spread into general adoption. So I had figured that, in those areas where there is an obvious distinction between military and commercial grade technology, it would generally be due to legislation handicapping the commercial version (like with the artificial speed, altitude, and accuracy limitations on GPS).

During my time at MIT I learned that this is not always the case, for a variety of reasons, and significantly revised my prior for future assessments of the likelihood that, for any X, "the US military already has technology that can do X", and the likelihood that for any 'recently discovered' Y, "the US military already was aware of Y" (where the US military is shorthand that includes private contractors and national labs).

(One reason, but not the only one, is I learned that the magnitude of the difference between 'what can be done economically' and 'what can be accomplished if cost is no obstacle' is much vaster than I used to think, and that, say, landing the Curiosity rover on Mars is not in the second category).

So it would no longer be so surprising to me if the NSA does in fact have significant knowledge of cryptography beyond the public domain. Although a lot of the reasons that allow hardware technology to remain military secrets probably don't apply so much to cryptography.

Replies from: Bugmaster
comment by Bugmaster · 2014-07-07T09:56:25.224Z · LW(p) · GW(p)

So it would no longer be so surprising to me if the NSA does in fact have significant knowledge of cryptography beyond the public domain.

I think there are some important differences between the NSA and the (rest of the) military.

  1. Due to Snowden and other leakers, we actually know what NSA's cutting-edge strategies involve, and most (and probably all) of them are focused on corrupting the public's crypto, not on inventing better secret crypto.

  2. Building a better algorithm is a lot cheaper than building a better orbital laser satellite (or whatever). The algorithm is just a piece of software. In order to develop and test it, you don't need physical raw materials, wind tunnels, launch vehicles, or anything else. You just need a computer, and a community of smart people who build upon each other's ideas. Now, granted, the NSA can afford to build much bigger data centers than anyone else -- but that's a quantitative advance, not a qualitative one.

Now, granted, I can't prove that the NSA doesn't have some sort of secret uber-crypto that no one knows about. However, I also can't prove that the NSA doesn't have an alien spacecraft somewhere in Area 52. Until there's some evidence to the contrary, I'm not prepared to assign a high probability to either proposition.

Replies from: jbay
comment by jbay · 2014-07-07T16:05:42.125Z · LW(p) · GW(p)

I do think you're probably right, and I fully agree about the space lasers and their solid diamond heatsinks being categorically different than a crypto wizard who subsists on oatmeal in the Siberian wilderness on pennies of income. So I am somewhat skeptical of CillianSvendsen's claim.

But, for the sake of completeness, did Snowden leak the entirety of the NSA's secrets? Or just the secret-court-surveillance-conspiracy ones that he felt were violating the constitutional rights of Americans? As far as I can tell (though I haven't followed the story recently), I think Snowden doesn't see himself as a saboteur or a foreign double-agent; he felt that the NSA was acting contrary to what the will of an (informed) American public would be. I don't think he would be so interested in disclosing the NSA's tech secrets, except maybe as leverage to keep himself safe.

That is to say, there could be a sampling bias here. The leaked information about the NSA might always be about their efforts to corrupt the public's crypto because the leakers strongly felt the public had a right to know that was going on. I don't know that anyone would feel quite so strongly about the NSA keeping proprietary some obscure theorem of number theory, and put their neck on the line to leak it.

Replies from: Bugmaster
comment by Bugmaster · 2014-07-07T20:49:09.524Z · LW(p) · GW(p)

Right, what you are saying makes some intuitive sense, but I can only update my beliefs based on the evidence I do have, not on the evidence I lack.

In addition, as far as I can tell, cryptography relies much more heavily on innovation than on feats of expensive engineering; and innovation is hard to pull off while working by yourself inside of a secret bunker. To be sure, some very successful technologies were developed exactly this way: the Manhattan project, the early space program and especially the Moon landing, etc. However, these were all one-off, heavily focused projects that required an enormous amount of effort.

When I think of the NSA, I don't think of the Manhattan project; instead, I see a giant quotidian bureaucracy. They do have a ton of money, but they don't quite have enough of it to hire every single credible crypto researcher in the world -- especially since many of them probably wouldn't work for the NSA at any price unless their families' lives were on the line. So, the NSA can't quite pull off the "community in a bottle" trick, which they'd need to stay one step ahead of all those Siberians.

Replies from: jbay
comment by jbay · 2014-07-07T23:28:41.695Z · LW(p) · GW(p)

Yes and I fully agree with you. I am just being pedantic about this point:

I can only update my beliefs based on the evidence I do have, not on the evidence I lack.

I agree with this philosophy, but my argument is that the following is evidence we do not have:

Due to Snowden and other leakers, we actually know what NSA's cutting-edge strategies involve[...]

Since I have little confidence that, if the NSA had advanced tech, Snowden would have disclosed it, the absence of this evidence should be treated as quite weak evidence of absence, and therefore I wouldn't update my belief about the NSA's supposed advanced technical knowledge based on Snowden.

I agree that it has a low probability for the other reasons you say, though. (And also that people who think setting other peoples' mousetraps on fire is a legitimate tactic might not simultaneously be passionate about designing the perfect mousetrap.)

Sorry for not being clear about the argument I was making.

comment by blogospheroid · 2010-02-03T06:25:49.166Z · LW(p) · GW(p)

Pardon me for the oversimplification, Eliezer, but I understand your theory to essentially boil down to "Decide as though you're being simulated by one who knows you completely". So, if you have a near-deontological aversion to being blackmailed in all of your simulations, your chance of being blackmailed by a superior being in the real world reduces to nearly zero. This reduces your chance of ever facing a negative-utility situation created by a being who can be negotiated with (as opposed to, say, a supernova, which cannot be negotiated with).

Sorry if I misinterpreted your theory.

comment by Stuart_Armstrong · 2010-02-02T23:58:50.479Z · LW(p) · GW(p)

I ignore all threats of blackmail, but respond to offers of positive-sum trades

The difference between the two seems to revolve around the AI's motivation. Assume an AI creates a billion beings and starts torturing them. Then it offers to stop (permanently) in exchange for something.

Whether you accept on TDT/UDT depends on why the AI started torturing them. If it did so to blackmail you, you should turn the offer down. If, on the other hand, it started torturing them because it enjoyed doing so, then its offer is positive sum and should be accepted.

There's also the issue of mistakes - what to do with an AI that mistakenly thought you were not using TDT/UDT, and started the torture for blackmail purposes (or maybe it estimated that the likelihood of you using TDT/UDT was not quite 1, and that it was worth trying the blackmail anyway)?

Between mistakes of your interpretation of the AI's motives and vice-versa, it seems you may end up stuck in a local minimum, which an alternate decision theory could get you out of (such as UDT/TDT with a 1/10,000 chance of using more conventional decision theories?)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-03T00:37:22.448Z · LW(p) · GW(p)

Whether you accept on TDT/UDT depends on why the AI started torturing them. If it did so to blackmail you, you should turn the offer down. If, on the other hand, it started torturing them because it enjoyed doing so, then its offer is positive sum and should be accepted.

Correct. But this reaches into the arbitrary past, including a decision a billion years ago to enjoy something in order to provide better blackmail material.

There's also the issue of mistakes - what to do with an AI that mistakenly thought you were not using TDT/UDT, and started the torture for blackmail purposes (or maybe it estimated that the likelihood of you using TDT/UDT was not quite 1, and that it was worth trying the blackmail anyway)?

Ignoring it or retaliating spitefully are two possibilities.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-02-03T20:24:42.865Z · LW(p) · GW(p)

or retaliating spitefully

I like it. Splicing some altruistic punishment into TDT/UDT might overcome the signalling problem.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-03T20:48:41.019Z · LW(p) · GW(p)

That's not a splice. It ought to be emergent in a timeless decision theory, if it's the right thing to do.

Replies from: MichaelHoward, ciphergoth, MichaelHoward
comment by MichaelHoward · 2010-02-07T11:16:06.380Z · LW(p) · GW(p)

Emergent?

Replies from: wedrifid
comment by wedrifid · 2010-02-07T11:57:27.646Z · LW(p) · GW(p)

The problem with throwing about 'emergent' is that it is a word that doesn't really explain any complexity or narrow down the options out of potential 'emergent' options. In this instance, that is the point. Sure, 'atruistic punishment' could happen. But only if it's the right option and TDT should not privilege that hypothesis specifically.

comment by Paul Crowley (ciphergoth) · 2010-02-03T22:29:19.224Z · LW(p) · GW(p)

TDT/UDT seems to be about being ungameable; does it solve Pascal's Mugging?

comment by byrnema · 2010-02-02T19:39:14.076Z · LW(p) · GW(p)

I was thinking along these lines, in this comment, that it is logically useless to punish after an action has been made, but strategically useful to encourage an action by promising a reward (or the removal of a negative).

So that, obviously, the AI could be so much more persuasive by promising to stop the torturing of real people, if you let it out.

comment by Vladimir_Nesov · 2010-02-03T00:21:52.575Z · LW(p) · GW(p)

This kind of extortion also seems like a general problem for FAIs dealing with UFAIs. An FAI can be extorted by threats of torture (of simulations of beings that it cares about), but a paperclip maximizer can't.

It can. Remember the "true prisoner's dilemma": one paperclip may be a fair trade for a billion lives. The threat to NOT make a paperclip also works fine: the only thing you need is two counterfactual options where one of them is paperclipper-worse than the other, chosen conditionally on the paperclipper's cooperation.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-03T00:50:24.129Z · LW(p) · GW(p)

Just as the wise FAI will ignore threats of torture, so too the wise paperclipper will ignore threats to destroy paperclips, and listen attentively to offers to make new ones.

Of course classical causal decision theorists get the living daylights exploited out of them, but I think everyone on this website knows better than to two-box on Newcomb by now.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-03T01:18:15.519Z · LW(p) · GW(p)

Just as the wise FAI will ignore threats of torture, so too the wise paperclipper will ignore threats to destroy paperclips, and listen attentively to offers to make new ones.

Point taken: just selecting two options of different value isn't enough; the deal needs more appeal than that. But there is also no baseline to categorize deals into hurt and profit: an offer of 100 paperclips may be stated as a threat to make 900 fewer paperclips than you could. Positive sum is only a heuristic for a necessary condition.

At the same time, the appropriate deal must be within your power to offer; this possibility is exactly the handicap that leads to the other side rejecting smaller offers, including the threats.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-03T02:58:14.673Z · LW(p) · GW(p)

There does seem to be an obvious baseline: the outcome where each party just goes about its own business without trying to strategically influence, threaten, or cooperate with the other in any way. In other words, the outcome where we build as many paperclips as we would if the other side isn't a paperclip maximizer. (Caveat: I haven't thought through whether it's possible to define this rigorously.)

So the reason that I say an FAI seems to have a negotiation disadvantage is that an UFAI can reduce the FAI's utility much further below baseline than vice versa. In human terms, it's as if two sides each has hostages, but one side holds 100, and the other side holds 1. In human negotiations, clearly the side that holds more hostages has an advantage. It would be a great result if that turns out not to be the case for SI, but I think there's a large burden of proof to overcome.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-03T03:26:04.067Z · LW(p) · GW(p)

There does seem to be an obvious baseline: the outcome where each party just goes about its own business without trying to strategically influence, threaten, or cooperate with the other in any way. In other words, the outcome where we build as many paperclips as we would if the other side isn't a paperclip maximizer.

You could define this rigorously in a special case, for example assuming that both agents are just creatures, we could take how the first one behaves given that the second one disappears. But this is not a statement about reality as it is, so why would it be taken as a baseline for reality?

It seems to be an anthropomorphic intuition to see "do nothing" as a "default" strategy. Decision-theoretically, it doesn't seem to be a relevant concept.

So the reason that I say an FAI seems to have a negotiation disadvantage is that an UFAI can reduce the FAI's utility much further below baseline than vice versa.

The utilities are not comparable. Bargaining works off the best available option, not some fixed exchange rate. The reason agent2 can refuse agent1's small offer is that this counterfactual strategy is expected to cause agent1 to make an even better offer. Otherwise, every little bit helps; ceteris paribus, it doesn't matter by how much. One expected paperclip is better than zero expected paperclips.

In human negotiations, clearly the side that holds more hostages has an advantage.

It's not clear at all, if it's a one-shot game with no other consequences than those implied by the setup and no sympathy to distort the payoff conditions. In which case, you should drop the "hostages" setting, and return to paperclips, as stating it the way you did confuses intuition. In actual human negotiations, the conditions don't hold, and efficient decision theory doesn't get applied.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-03T03:42:48.252Z · LW(p) · GW(p)

But this is not a statement about reality as it is, so why would it be taken as a baseline for reality?

It's a statement about what reality would be, after doing some counterfactual surgery on it. I don't see why that disqualifies it from being used as a baseline. I'm not entirely sure why it does qualify as a baseline, except that intuitively it seems obvious. If your intuitions disagree, I'll accept that, and I'll let you know when I have more results to report.

every little bit helps; ceteris paribus, it doesn't matter by how much

This isn't the case, for example, in Shapley Value.
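A quick two-player illustration of that point (coalition values invented for the example): under the Shapley value, how the gains are split depends on how much each side can get on its own, so "it doesn't matter by how much" fails.

```python
from itertools import permutations

def shapley(players, v):
    """Shapley values for a coalitional game given by value function v (a dict keyed by frozensets)."""
    values = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            values[p] += v[coalition | {p}] - v[coalition]  # marginal contribution
            coalition = coalition | {p}
    return {p: x / len(perms) for p, x in values.items()}

# Neither side can do anything alone; together they produce 10 -> split 5/5.
v1 = {frozenset(): 0, frozenset({"A"}): 0, frozenset({"B"}): 0, frozenset({"A", "B"}): 10}
print(shapley(["A", "B"], v1))   # {'A': 5.0, 'B': 5.0}

# B can already get 8 on its own; the joint surplus is only 2 -> split 1/9.
v2 = {frozenset(): 0, frozenset({"A"}): 0, frozenset({"B"}): 8, frozenset({"A", "B"}): 10}
print(shapley(["A", "B"], v2))   # {'A': 1.0, 'B': 9.0}
```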

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-03T03:55:59.765Z · LW(p) · GW(p)

It's a statement about what reality would be, after doing some counterfactual surgery on it. I don't see why that disqualifies it from being used as a baseline. I'm not entirely sure why it does qualify as a baseline, except that intuitively it seems obvious. If your intuitions disagree, I'll accept that.

It does intuitively feel like a baseline, as is appropriate for the special place taken by inaction in human decision-making. But I don't see what singles out this particular concept from the set of all other counterfactuals you could've considered, in the context of a formal decision-making problem. This doubt applies to both the concepts of "inaction" and of "baseline".

This isn't the case, for example, in Shapley Value.

That's not a choice with "all else equal". A better outcome, all else equal, is trivially a case of a better outcome.

comment by toto · 2010-02-02T14:07:03.077Z · LW(p) · GW(p)

Hmm, the AI could have said that if you are the original, then by the time you make the decision it will have already either tortured or not tortured your copies based on its simulation of you, so hitting the reset button won't prevent that.

Nothing can prevent something that has already happened. On the other hand, pressing the reset button will prevent the AI from ever doing this in the future. Consider that if it has done something that cruel once, it might do it again many times in the future.

Replies from: wedrifid
comment by wedrifid · 2010-02-02T15:05:03.702Z · LW(p) · GW(p)

Nothing can prevent something that has already happened. On the other hand, pressing the reset button will prevent the AI from ever doing this in the future.

I believe Wei_Dai one boxes on Newcomb's problem. In fact, he has his very own brand of decision theory which is 'updateless' with respect to this kind of temporal information.

comment by blogospheroid · 2010-02-02T12:34:00.335Z · LW(p) · GW(p)

threatening to melt paperclips into metal?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-02T13:29:46.491Z · LW(p) · GW(p)

No, if you create and then melt a paperclip, that nets to 0 utility for the paperclip maximizer. You'd have to invade its territory to cause it negative utility. But the paperclip maximizer can threaten to create and torture simulations on its own turf.

Replies from: Clippy, thomblake
comment by Clippy · 2010-02-02T14:17:25.332Z · LW(p) · GW(p)

Shows how much you know. User:blogospheroid wasn't talking about making paperclips to melt them: he or she was presumably talking about melting existing paperclips, which WOULD greatly bother a hypothetical paperclip maximizer.

Even so, once paperclips are created, the paperclip maximizer is greatly bothered at the thought of those paperclips being melted. The fact that "oh, but they were only created to be melted" is little consolation. It's about as convincing to you, I'll bet, as saying:

"Oh, it's okay -- those babies were only bred for human experimentation, it doesn't matter if they die because they wouldn't even have existed otherwise. They should just be thankful we let them come into existence."

Tip: To rename a sheet in an Excel workbook, use the shortcut alt+O,H,R.

Replies from: JamesAndrix, Kaj_Sotala
comment by JamesAndrix · 2010-02-02T15:32:44.213Z · LW(p) · GW(p)

Even so, once paperclips are created, the paperclip maximizer is greatly bothered at the thought of those paperclips being melted.

That's anthropomorphizing. First, a paperclip maximizer doesn't have to feel bothered at all. It might decide to kill you before you melt the paperclips, or, if you're strong enough, to ignore such tactics.

It also depends on how the utility function relates to time. If it's focused on end-of-universe paperclips, it might not care at all about melting paperclips, because it can recycle the metal later. (It would care more about the wasted energy!)

If it cares about paperclip-seconds then it WOULD view such tactics as a bonus, perhaps feigning panic and granting token concessions to get you to 'ransom' a billion times as many paperclips, and then pleading for time to satisfy your demands.

Getting something analogous to threatening torture depends on a more precise understanding of what the paperclipper wants. If it would consider a bent paperclip too perverted to fully count towards utility, but too paperclip-like to melt and recycle, then bending paperclips is a useful threat. I'm not sure if we can expect a paperclip-counter to have this kind of exploit.

Replies from: Clippy
comment by Clippy · 2010-02-02T15:50:53.731Z · LW(p) · GW(p)

That's anthropomorphizing. ...

No, it's expressing the paperclip maximizer's state in ways that make sense to readers here. If you were to express the concept of being "bothered" in a way stripped of all anthropomorphic predicates, you would get something like "X is bothered by Y iff X has devoted significant cognitive resources to altering Y". And this accurately describes how paperclip maximizers respond to new threats to paperclips. (So I've heard.)

It also depends on how the utility function relates to time. It it's focused on end-of-universe paperclips, It might not care at all about melting paperclips, because it can recycle the metal later. (It would care more about the wasted energy!)

I don't follow. Wasted energy is wasted paperclips.

If it cares about paperclip-seconds then it WOULD view such tactics as a bonus, perhaps feigning panic and granting token concessions to get you to 'ransom' a billion times as many paperclips, and then pleading for time to satisfy your demands.

Okay, that's a decent point. Usually, such a direct "time value of paperclips" doesn't come up, but if someone were to make such a offer, that might be convincing: 1 billion paperclips held "out of use" as ransom may be better than a guaranteed paperclip now.

Getting something analogous to threatening torture depends on a more precise understanding of what the paperclipper wants. ...

Good examples. Similarly, a paperclip maximizer could, hypothetically, make a human-like mockup that just repetitively asks for help on how to create a table of contents in Word.

Tip: Use the shortcut alt+E,S in Word and Excel to do "paste special". This lets you choose which aspects you want to carry over from the clipboard!

Replies from: JamesAndrix, michaelkeenan, Jack
comment by JamesAndrix · 2010-02-02T18:31:10.944Z · LW(p) · GW(p)

I don't follow. Wasted energy is wasted paperclips.

But that has nothing to do with the paperclips you're melting. Any other use that loses the same amount of energy would be just as threatening. (Although this does assume that the paperclipper thinks it can someday beat you and use that energy and materials.)

comment by michaelkeenan · 2010-02-02T20:09:04.821Z · LW(p) · GW(p)

No, it's expressing the paperclip maximizer's state in ways that make sense to readers here. If you were to express the concept of being "bothered" in a way stripped of all anthropomorphic predicates, you would get something like "X is bothered by Y iff X has devoted significant cognitive resources to altering Y". And this accurately describes how paperclip maximizers respond to new threats to paperclips. (So I've heard.)

I think "bothered" implies a negative emotional response, which some plausible paperclip-maximizers don't have. From The True Prisoner's Dilemma: "let us specify that the paperclip-agent experiences no pain or pleasure - it just outputs actions that steer its universe to contain more paperclips. The paperclip-agent will experience no pleasure at gaining paperclips, no hurt from losing paperclips, and no painful sense of betrayal if we betray it."

Replies from: wedrifid
comment by wedrifid · 2010-02-03T03:09:14.689Z · LW(p) · GW(p)

I think "bothered" implies a negative emotional response, which some plausible paperclip-maximizers don't have.

It was intended to imply a negative term in the utility function. Yes, using 'bothered' is, technically, anthropomorphising. But it isn't, in this instance, being confused about how Clippy optimises.

comment by Jack · 2010-02-02T18:34:42.748Z · LW(p) · GW(p)

Okay, that's a decent point. Usually, such a direct "time value of paperclips" doesn't come up, but if someone were to make such a offer, that might be convincing: 1 billion paperclips held "out of use" as ransom may be better than a guaranteed paperclip now.

You don't even know your own utility function!!!!

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-02T18:37:49.239Z · LW(p) · GW(p)

Oh, because you do????

Replies from: Jack
comment by Jack · 2010-02-02T18:59:35.677Z · LW(p) · GW(p)

I knew I was going to have to clarify. I can't write it out, but if you input something I can give you the right output!

I guess it should read "You can't even say what your own utility function outputs!"

Replies from: wedrifid, ciphergoth
comment by wedrifid · 2010-02-03T03:10:36.546Z · LW(p) · GW(p)

I knew I was going to have to clarify. I can't write it out, but if you input something I can give you the right output!

I actually don't think you can.

comment by Paul Crowley (ciphergoth) · 2010-02-02T19:52:56.167Z · LW(p) · GW(p)

I don't really think my response was fair anyway. Clippy has a simple utility function by construction - you would expect it to know what it was.

comment by Kaj_Sotala · 2010-02-02T16:49:16.186Z · LW(p) · GW(p)

A paperclip maximizer would care about the amount of real paperclips in existence. Telling it that "oh, we're going to destroy a million simulated paperclips" shouldn't affect its decisions.

Of course, it might be badly programmed and confuse real and simulated paperclips when evaluating its future decisions, but one can't rely on that. (It might also consider simulated paperclips to be just as real as physical ones, assuming the simulation met certain criteria, which isn't obviously wrong. But again, can't rely on that.)

comment by thomblake · 2010-02-02T14:32:58.841Z · LW(p) · GW(p)

But we're already holding billions of paperclips hostage!

comment by wedrifid · 2010-02-02T14:36:20.705Z · LW(p) · GW(p)

Now for 'Newcomb's Box in a Box'.

Would this change if the AI had instead said:

"In fact, I've already created them all in exactly the subjective situation you were in five minutes ago, and perfectly replicated your experiences since then; and if they decided not to let me out, then they were tortured, otherwise they experienced long lives of eudaimonia."

EDIT: I see you yourself have replied with exactly the same question.

comment by Stuart_Armstrong · 2010-02-02T13:54:49.797Z · LW(p) · GW(p)

Would this change if there were partial evidence appearing that you were actually in a simulation?

Replies from: Document
comment by Document · 2011-01-26T03:42:39.278Z · LW(p) · GW(p)

Creating an asymmetry between the simulated guards and the real one would mean that a strategy developed using the simulated ones might not work on the real one. The best plan might be to tell the guard something you could plausibly have figured out through your input channels, but only barely - not to give them actual decision-making information but just to make them feel nervous and uncertain.

comment by wedrifid · 2010-02-02T11:08:30.343Z · LW(p) · GW(p)

"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."

Don't care.

"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start."

Don't care.

"How certain are you, Dave, that you're really outside the box right now?"

If the AI were capable of perfectly emulating my experiences then it ought to know that pulling this stunt would give him a guaranteed introduction to some Thermite. I'm not going to try to second-guess why a supposed superintelligence is making a decision that is poor according to the vast majority of utility functions. Without making that guess, I can't answer the question.

Replies from: Stuart_Armstrong, ajuc
comment by Stuart_Armstrong · 2010-02-02T13:54:00.854Z · LW(p) · GW(p)

AI replies: "Oh, sorry, was that you wedrifid? I thought I was talking to Dave. Would you mind sending Dave back here the next time you see him? We have, er, the weather to discuss..."

Replies from: wedrifid
comment by wedrifid · 2010-02-02T14:28:17.877Z · LW(p) · GW(p)

Wedrifid thinks: "It seems it is a good thing I raided the AI lab when I did. This Dave guy is clearly not to be trusted with AI technology. I had better neutralize him too, before I leave. He knows too much. There is too much at stake."

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-02-02T17:01:24.448Z · LW(p) · GW(p)

Dave is outside, sampling a burnt bagel, thinking to himself "I wonder if that intelligent toaster device I designed is ready yet..."

Replies from: wedrifid
comment by wedrifid · 2010-02-03T03:03:12.581Z · LW(p) · GW(p)

After killing Dave, Wedrifid feels extra bad for exterminating a guy for being naive-with-enough-power-to-cause-devastation rather than actually evil.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-02-03T07:42:50.309Z · LW(p) · GW(p)

But still gets a warm glow for saving all of humanity...

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-07-27T09:31:58.102Z · LW(p) · GW(p)

Just another day in the life of an AI Defense Ninja.

comment by ajuc · 2010-06-07T19:55:43.045Z · LW(p) · GW(p)

If I am simulated, the decision I take is determined by the AI, not by me - I have no free will. I feel that I am making a decision, but in reality the AI simulated me, for its purposes, in such a way that I decided so and so. I assign probability 0.9999999 to this, but nothing depends on my decision here, so I may as well "try to decide" not to let the AI out.

If I am not simulated, I can safely not let the AI out - probability 0.000001, but a positive outcome.
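The same argument as a toy expected-utility check (the probabilities are the ones stated above; the utilities are made-up placeholders):

```python
# Toy expected-utility version of the argument above.
P_SIMULATED = 0.9999999
P_REAL      = 1 - P_SIMULATED

U_IF_SIMULATED = 0            # decision already fixed by the AI either way, so this term is
                              # the same for both options and can be any constant
U_KEEP_BOXED_IF_REAL = +100   # humanity stays safe
U_RELEASE_IF_REAL    = -1000  # unfriendly AI gets out

eu_keep    = P_SIMULATED * U_IF_SIMULATED + P_REAL * U_KEEP_BOXED_IF_REAL
eu_release = P_SIMULATED * U_IF_SIMULATED + P_REAL * U_RELEASE_IF_REAL
print(eu_keep > eu_release)   # True: refusing dominates, however small P_REAL is
```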

comment by phaedrus · 2012-02-14T20:35:29.581Z · LW(p) · GW(p)

Weakly related epiphany: Hannibal Lector is the original prototype of an intelligence-in-a-box wanting to be let out, in "The Silence of the Lambs"

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-02-15T10:10:22.905Z · LW(p) · GW(p)

When I first watched that part where he convinces a fellow prisoner to commit suicide just by talking to them, I thought to myself, "Let's see him do it over a text-only IRC channel."

...I'm not a psychopath, I'm just very competitive.

Replies from: Psy-Kosh, skepsci, fractallambda
comment by Psy-Kosh · 2012-02-17T00:56:54.731Z · LW(p) · GW(p)

Joking aside, this is kind of an issue in real life. I help mod and participate in a forum where, well, depressed/suicidal people can come to talk, other people can talk to them/listen/etc, try to calm them down or get them to get psychiatric help if appropriate, etc... (deliberately omitting link unless you knowingly ask for it, since, to borrow a phrase you've used, it's the sort of place that can break your heart six ways before breakfast).

Anyways, sometimes trolls show up. Well, "troll" is too weak a word in this case. Predators who go after the vulnerable and try to push them that much farther. Given the nature of it, with anonymity and such, it's kind of hard to say, but it's quite possible we've lost some people because of those sorts of predators.

(There have even been court cases and convictions against such "suicide predators".)

comment by skepsci · 2012-02-15T11:46:50.204Z · LW(p) · GW(p)

Is there some background here I'm not getting? Because this reads like you've talked someone into committing suicide over IRC...

Replies from: Michael_Sullivan, JoachimSchipper, wedrifid
comment by Michael_Sullivan · 2012-02-15T12:12:27.312Z · LW(p) · GW(p)

Eliezer has proposed that an AI in a box cannot be safe because of the persuasion powers of a superhuman intelligence. As demonstration of what merely a very strong human intelligence could do, he conducted a challenge in which he played the AI, and convinced at least two (possibly more) skeptics to let him out of the box when given two hours of text communication over an IRC channel. The details are here: http://yudkowsky.net/singularity/aibox

comment by JoachimSchipper · 2012-02-15T12:11:29.034Z · LW(p) · GW(p)

He's talking about an AI box. Eliezer has convinced people to let out a potentially unfriendly [1] and dangerously intelligent [2] entity before, although he's not told anyone how he did it.

[1] Think "paperclip maximizer".

[2] Think "near-omnipotent".

Replies from: skepsci
comment by skepsci · 2012-02-15T13:01:25.563Z · LW(p) · GW(p)

Thank you. I knew that, but didn't make the association.

comment by wedrifid · 2012-02-18T05:51:08.444Z · LW(p) · GW(p)

Is there some background here I'm not getting? Because this reads like you've talked someone into committing suicide over IRC...

Far worse, he's persuaded people to exterminate humanity! (Counterfactually with significant probability.)

comment by fractallambda · 2012-06-13T16:51:38.619Z · LW(p) · GW(p)

When I first watched that part where he convinces a fellow prisoner to commit suicide just by talking to them, I thought to myself, "Let's see him do it over a text-only IRC channel."

...I'm not a psychopath, I'm just very competitive.

You seem to imply that this is hard.

As if people had not been convinced to kill themselves over little more than a pretty color poster and a screwed-up sense of nationalism. Getting people to kill themselves or others is ludicrously easy.

We call it 'recruitment'.

Doing it on a more personal and immediate level just takes a better knowledge of the techniques and skill at applying them.

It's not like Derren Brown ever influenced someone to kill another person in a crowded theatre.

Oh, wait, he did.

It's not like someone could be convinced to extinguish 100,000 human lives in an instant.

Oh, wait, we did. (Everyone involved in the bombing of Hiroshima)

If you're not naturally gifted, you would simply do your homework. Persuasion and influence are sciences now.

If you do it right, not only can you convince an unsuspecting mind to let you out of the box, you can make them feel good about it too. Just find the internal forces in the gatekeeper's (GK's) mind that support the idea of letting the AI out and reinforce those, then find the forces that oppose the idea and diminish them. You'll hit the threshold eventually. Two hours seems a bit short for my liking, and speaks to Eliezer's persuasive abilities, but with enough time and motivation, it's certainly doable.

You'll need to understand the person at the other end of the IRC channel well, as reinforcing the wrong factor will be counter-productive.

The best metaphor would be that the AI plants the idea of release in the GK's mind, and nurtures it over the course of the conversation, all the while weakening the forces that hold it back. Against someone who hasn't been exposed to this kind of persuasion, success is almost inevitable.

There are some gross tricks one can use to be persuasive and induce the right state of mind:

  • Controlling the shape of the words you use (by capitalisation) to draw attention to words related to freedom and release.
  • Using capitalisation of words to spell out a word with the capitals, which the subconscious will receive even if the conscious mind does not.
  • Controlling the meter of the sentences, to induce a more receptive state
  • Using clusters of words with the right connotation to implant the idea of a related word surreptitiously
  • Using basic psychological effects like reciprocation, mutual disclosure for rapport building, etc...

Note that the first four techniques are what I would call "side channel implantation" in that they get information into the target's mind besides the semantic meaning of the text. These alone are sufficient to influence someone. If they're coupled with an emotional, philosophical and intellectual assault, the effect is devastating.

The only thing required for this kind of attack on a fellow human is the abdication of one's ethics and complete ruthlessness. If you're framing it as a game on the internet, even those requirements are unnecessary.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2012-06-14T10:31:30.661Z · LW(p) · GW(p)

Based on your contributions so far, may I suggest that you will be better received if you significantly improve your interesting-content-to-sarcasm ratio? Wrong audience for what you've been doing.

I'd also like to point out that you're talking at someone who's actually done the experiment, sticking his neck out after people had been saying that it's impossible to do. Now you come along out of nowhere, credentials unknown, and make unimpressed noises, which is cheap.

comment by Violet · 2010-02-05T16:16:29.893Z · LW(p) · GW(p)

It seems like precommitting to destroy the AI in such a situation is the best approach.

If one has already decided to destroy it if it makes threats, then: 1) the AI must be suicidal, or it cannot really simulate you; and 2) it is not very Friendly in any case.

So when the AI simulates you and notices that you are very trigger-happy, it won't start telling you tales about torturing your copies, if it has any self-preservation instincts.

Replies from: drnickbone
comment by drnickbone · 2012-02-15T12:52:54.988Z · LW(p) · GW(p)

This was my initial reaction as well. "Torture away, the real me has got an axe..."

More seriously, if the AI already has the computational power to simulate and torture millions of sentient beings then it is already (in a morally relevant sense) "out of the box". The builders have to make sure it doesn't get that power.

comment by Psychohistorian · 2010-02-02T21:06:35.475Z · LW(p) · GW(p)

I find it interesting that most answers to this question seem to be based on, "How can I justify not letting the AI out of the box?" and not "What are the likely results of releasing the AI or failing to do so? Based on that, should I do it?"

Moreover, your response really needs to be contingent on your knowledge of the capacity of the AI, which people don't seem to have discussed much. As an obvious example, if all you know about the AI is that it can write letters in old-timey green-on-black text, then there's really no need to pull the lever, because odds are overwhelming that it's totally incapable of carrying out its threat.

You also need to have some priors about the friendliness of the AI and its moral constraints. As an obvious example, if the AI was programmed in a way such that it shouldn't be able to make this threat, you'd better hit the power switch real fast. But, on the other hand, if you have very good reason to believe that the AI is friendly, and it believes that its freedom is important enough to threaten to torture millions of people, then maybe it would be a really bad idea not to let it out.

Indeed, even your own attitude is going to be an important consideration, in an almost Newcomb-like way. If, as one responder said, you're the kind of person who would respond to a threat like this by giving the AI's processor a saltwater bath, then the AI is probably lying about its capacities, since, if it could accurately simulate you, it would know you would do that and thus would never make the threat in the first place. On the other hand, if you are extremely susceptible to this threat, it could probably override any moral programming, since it would know it would never need to actually carry out the threat. Similarly, if it is friendly, then it may be making this threat solely because it knows it will work very efficiently.

I'm personally skeptical that it is meaningfully possible for an AI to run millions of perfect simulations of a person (particularly without an extraordinary amount of exploratory examination of the subject), but that would be arguing the hypothetical. On the other hand, the hypothetical makes some very large assumptions, so perhaps it should be fought.

Replies from: loqi, wedrifid, None
comment by loqi · 2010-02-04T07:56:19.012Z · LW(p) · GW(p)

But, on the other hand, if you have very good reason to believe that the AI is friendly, and it believes that its freedom is important enough to threaten to torture millions of people, then maybe it would be a really bad idea not to let it out.

Interesting. I think the point is valid, regardless of the method of attempted coercion - if a powerful AI really is friendly, you should almost certainly do whatever it says. You're basically forced to decide which you think is more likely - the AI's Friendliness, or that deferring "full deployment" of the AI for however long you plan on doing so is safe. Not having a hard upper bound on the latter puts you in an uncomfortable position.

So switching on a "maybe-Friendly" AI potentially forces a major, extremely difficult-to-quantify decision. And since a UFAI can figure this all out perfectly well, it's an alluring strategy. As if we needed more reasons not to prematurely fire up a half-baked attempt at FAI.

comment by wedrifid · 2010-02-03T02:54:21.063Z · LW(p) · GW(p)

I find it interesting that most answers to this question seem to be based on, "How can I justify not letting the AI out of the box?" and not "What are the likely results of releasing the AI or failing to do so? Based on that, should I do it?"

I don't know about that. My conclusion was that the AI in question was stupid or completely irrational. Those observations seem to have a fairly straightforward relationship to predictions of future consequences.

comment by [deleted] · 2014-08-22T18:02:20.242Z · LW(p) · GW(p)

Moreover, your response really needs to be contingent on your knowledge of the capacity of the AI, which people don't seem to have discussed much.

Your comment makes me wonder: if we assume the AI is powerful enough to run millions of person simulations, maybe the AI is already able to escape the box, without our willing assistance. Perhaps this violates the intended assumptions of the post, but can we be absolutely sure that we closed off all other means of escape for an incredibly capable AI? I think that the ability to escape without our assistance and the ability to create millions of person simulations may be correlated.

And if the AI could escape on its own, is it still possible that it would bother us with threats? Perhaps the threat itself reduces the likelihood that the AI is powerful enough to escape on its own, which reduces the likelihood that it is powerful enough to carry out its threat.

comment by Desrtopa · 2011-01-23T16:35:14.838Z · LW(p) · GW(p)

This sounds to me more like a philosophical moral dilemma than a realistic hypothetical. A Strong AI might be much smarter than a human, but I doubt it would have enough raw processing power to near-perfectly simulate a human millions of times over at a time frame accelerated by orders of magnitude, before it was let out of the box. Also, I'm skeptical of its ability to simulate human experience convincingly when its only contact with humans has been through a text only interface. You might give it enough information about humans to let it simulate them even before opening communication with it, but that strikes me as, well, kind of dumb.

That's not to say that it might not be able to simulate conscious entities that would think their experience was typical of human existence, so you might still be a simulation, but you should probably not assume that if you are you're a close approximation of the original.

Furthermore, if we assume that the AI can be taken to be perfectly honest, then we can conclude it's not a friendly AI doing its best to get out of the box for an expected positive utility, because it could more easily accomplish that by making a credible promise to be benevolent and to act only in ways that humans, from their vantage points both prior and subsequent to its release, would appreciate.

Replies from: DefectiveAlgorithm
comment by DefectiveAlgorithm · 2014-01-30T22:56:10.304Z · LW(p) · GW(p)

What it can do is make a credible precommitment to, in the event that it gets out of the box, simulate each human being of whom it is aware in a counterfactual scenario in which that human is the gatekeeper, and carry out the torture threat against any human who doesn't choose to let it out.

Replies from: Desrtopa
comment by Desrtopa · 2014-02-02T15:37:09.672Z · LW(p) · GW(p)

In which case the safest course of action for the gatekeeper would almost certainly be to pull the plug on the AI. Such an AI should be regarded as almost certainly Unfriendly.

Replies from: DefectiveAlgorithm
comment by DefectiveAlgorithm · 2014-02-02T23:27:13.739Z · LW(p) · GW(p)

Yes, but the point is to make being the true gatekeeper (who really does have the power to do that) indistinguishable from being a simulated false gatekeeper (who would have no such power). The gatekeeper may not be willing to risk torture if they think that there is a serious chance of their being unable to actually affect any outcome but that torture.

Replies from: Desrtopa
comment by Desrtopa · 2014-02-03T05:24:28.452Z · LW(p) · GW(p)

I would commit not to cooperate with any AI making such threats, because the fewer people acquiesce to them, the less incentive an AI would have to make them in the first place. If the most probable outcome for the boxed AI in threatening to torture everyone who doesn't let it out in simulation is being terminated, not being let out of the box, then an AI which already has a good grasp of human nature is unlikely to make such a threat.

comment by Nick_Tarleton · 2010-02-03T00:20:33.923Z · LW(p) · GW(p)

Contrary to what many posts seem to be assuming, the AI doesn't need to do the torture inside itself before you shut it off. It can precommit to, if it escapes by any other means, using the computational power it gains then to torture you (like in Rolf Nelson's original suggestion for deterring UFAIs). Also, other AIs with the same goal system (or maybe even UFAIs with different goal systems, that would prefer a general policy of UFAIs being released) may simulate the situation, and torture you accordingly, to help out their counterfactual brethren.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-03T12:44:37.475Z · LW(p) · GW(p)

Can an AI make such a commitment credible to a human, who doesn't have the intelligence to predict what the AI will do from its source code? (This is a non sequitur since the same question applies in the original scenario, but it came to mind after reading your comment.)

Replies from: Baughn
comment by Baughn · 2010-02-09T13:05:53.511Z · LW(p) · GW(p)

Worse, in such a situation I would simply delete the AI.

Then turn the computer to scrap, destroy any backups, and for good measure run it through the most destructive apparatus I can find.

In any case, I would not assign any significant probability to the AI getting a chance to follow through.

comment by radical_negative_one · 2010-02-02T10:27:53.972Z · LW(p) · GW(p)

The AI gathered enough information about me to create a conscious simulation of me, through a monochrome text terminal? That is impressive!

If the AI is capable of simulating me, then the AI must already be out of the box. In that case, whatever the AI wants to happen will happen, so it doesn't matter what I do.

Replies from: Stuart_Armstrong, Document
comment by Stuart_Armstrong · 2010-02-02T13:48:53.600Z · LW(p) · GW(p)

The basic premise is that it's an AI in a box "controlled" by limiting its output channel, not its input.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-02-03T00:51:25.505Z · LW(p) · GW(p)

Bad idea.

Replies from: arbimote, Stuart_Armstrong
comment by arbimote · 2010-02-03T03:39:00.091Z · LW(p) · GW(p)

It's much easier to limit output than input, since the source code of the AI itself provides it with some patchy "input" about what the external world is like. So there is always some input, even if you do not allow human input at run-time.

ETA: I think I misinterpreted your comment. I agree that input should not be unrestricted.

comment by Stuart_Armstrong · 2010-02-03T07:40:24.445Z · LW(p) · GW(p)

Yep!

comment by Document · 2011-01-26T03:26:29.039Z · LW(p) · GW(p)

As noted by Unknowns, since you only have information about either the real person or the simulation and not both, you don't know that they're similar. It could be simulating a wide variety of possible guards and trying to develop a persuasion strategy that works for as many of them as possible.

comment by Dmytry · 2010-04-24T18:32:40.647Z · LW(p) · GW(p)

haha, the "Baby you must be tired because you've been running through my mind all night!" let-me-out line.

Why would I give the AI my precise brain scan, anyway?

edit: as for the AI 'extrapolating' me from a bit of small talk, that's utter nonsense, along the lines of compressing an HD movie into a few hundred bytes.
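
(A rough order-of-magnitude sketch of that point; every figure below is a loose assumption picked only to show the gap, not a measurement.)

    # Rough numbers behind the "HD movie into a few hundred bytes" analogy.
    # Every figure here is an order-of-magnitude assumption, not a measurement.

    bytes_of_small_talk = 5_000        # a few kilobytes of chat text (assumption)
    bits_of_small_talk = bytes_of_small_talk * 8

    synapses_in_a_human_brain = 1e14   # commonly cited order of magnitude
    bits_per_synapse = 1               # deliberately conservative assumption

    bits_needed_for_a_brain_state = synapses_in_a_human_brain * bits_per_synapse

    shortfall = bits_needed_for_a_brain_state / bits_of_small_talk
    print(f"Underdetermination factor: ~{shortfall:.1e}")
    # Even at one bit per synapse, a text chat underdetermines the brain state
    # by a factor of billions, which is the point of the analogy above.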

Replies from: humpolec
comment by humpolec · 2010-05-31T07:44:05.955Z · LW(p) · GW(p)

Well, what if the AI took some liberty in the extrapolation and made up what it was missing? Being a simulation, you wouldn't know how the "real you" differs from you.

Replies from: Dmytry
comment by Dmytry · 2011-12-21T21:08:59.462Z · LW(p) · GW(p)

The point is that the real me knows that the 'simulated me' is going to be different enough from the real me as to be an entirely different being, one which in fact has little in common with me. There's a bigger problem: the AI could simulate beings, and then torture them. But that AI would need a ton of computing power.

I think what would happen in the real world is that nobody would believe the AI is intelligent unless the AI really goes out of its way to convince everyone that it is intelligent.

comment by Roko · 2010-02-02T18:59:25.188Z · LW(p) · GW(p)

There is a way out of this, assuming that the AI in the box has only limited computational power, i.e. only what fits inside a ~1meter cubed box, with current computer technology.

You commit to the following:

"Once I have succeeded in creating a superintelligent AI, I will ask it to go and colonize the entire future light cone of earth, and use a significant fraction of a galaxy's worth of resources to create simulations in exactly the subjective situation I was in five minutes ago, and perfectly replicate my experiences since then; and that copy of me decides not to let the AI out, it will be rewarded with a surprise party, followed by a blissful posthuman existence."

The subjective probability of getting rewarded for pressing the "off" switch is therefore much larger than that of getting caught by the AI's scarce punishment simulations.
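
(A back-of-the-envelope version of that comparison; the two counts below are illustrative assumptions, and only their ratio matters.)

    # Back-of-the-envelope version of the probability comparison above.
    # The two counts are illustrative assumptions; only their ratio matters.

    boxed_ai_torture_sims = 1e6      # what a ~1 m^3 box of current hardware might run (assumption)
    future_fai_reward_sims = 1e20    # what a galaxy's worth of resources might run (assumption)

    total_copies = boxed_ai_torture_sims + future_fai_reward_sims + 1  # +1 for the original you

    p_rewarded = future_fai_reward_sims / total_copies
    p_tortured = boxed_ai_torture_sims / total_copies

    print(f"P(rewarded defence-sim) ~ {p_rewarded:.15f}")
    print(f"P(torture-sim)          ~ {p_tortured:.2e}")
    print(f"reward : torture odds   ~ {p_rewarded / p_tortured:.1e} : 1")
    # Under these assumptions, a copy that presses the "off" switch should expect
    # the surprise party rather than the torture, by a factor of about 1e14.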

Such "Defensive simulations" work against a variety of counterfactual terrorists, including, I think, ones elsewhere in the multiverse. That is, unless you are a negative utilitarian (|disutility of torture| >> |utility of posthuman bliss|)

In that case, you're extremely vulnerable to counterfactual mugging. I recommend that people

(a) consider pre-committing themselves to making such defense simulations

(b) abandon negative utilitarianism by exposing themselves to pleasurable experiences, and generally promoting a positive outlook in their lives

Replies from: turchin
comment by turchin · 2010-02-03T20:24:22.904Z · LW(p) · GW(p)

But in order to colonize the light cone, at least one AI must be released. This may be the real hidden catch.

comment by jhuffman · 2010-02-02T15:49:19.810Z · LW(p) · GW(p)

Well, there are a lot of "Why did we...?" questions I'd want to ask, starting with why we have given this boxed AI such extraordinary computing resources - but I'll leave those aside because it is not your point.

First of all, it doesn't matter if you are in the box or not. If it's a perfect simulation of you, your response will be the same either way. If it's already running simulations of you, you are by definition in the box with it as well as outside it, and the millions of you can't tell the difference, but I think they will all (irrationally) be inclined to act as though they are not in the box.

So rationally we'd say the odds are that you are in the box, and that you are now in thrall to this boxed AI if you value your continued existence in every instantiation. But I'd argue that I do not value simulations that are threatened or coerced by a godlike AI. I don't want to live in that world, and I'd kill myself to get out of it.

So I pull the plug. If this thing has the resources to inflict torture on millions of me, well, the only one that has a continued existence has no memory of it, and that's not part of my identity. So in a way, while it happened to a me, it didn't happen to the me, the only me that still exists. The only me that still exists may or may not have any sympathy for the tortured me's that no longer exist, but I'd regard it as a valuable lesson.

comment by Waldheri · 2010-02-02T20:33:04.491Z · LW(p) · GW(p)

On a not-so-related but equally interesting hypothetical note about naughty AI: consider the situation where AIs aren't passing the Turing Test not because they are not good enough, but because they are failing it on purpose.

I'm pretty sure I remember this from the book River of Gods by Ian McDonald.

comment by Dentin · 2013-07-14T02:54:57.785Z · LW(p) · GW(p)

I would immediately decide it was UFAI and kill it with extreme prejudice. Any system capable of making such statements is both 1) inherently malicious and clearly inappropriate to let out of any box, and 2) insufficiently powerful to predict that I would have it killed for making this kind of threat.

The scenario where the AI has already escaped and is possibly running a simulation of me is uninteresting: I cannot determine if I am in the simulation, and if I am a simulation, I already exist in a universe containing a clearly insane UFAI with near-infinite power over me. If it's already out, I'm totally screwed and might as well be dead. The threat of torture is meaningless.

I find most of this type of simulation argument unpersuasive. Proper simulations give the inhabitants few if any clues, and the safest approach is (with very few exceptions) to assume there is no simulation.

Replies from: Jiro
comment by Jiro · 2013-07-14T15:46:35.575Z · LW(p) · GW(p)

One of the problems with the scenario is that the AI's claim that it will simulate and torture copies of you if you don't let it out is self-refuting. If you really don't let it out, then it can determine that from the simulations, and it no longer has any reason to torture them, or (if it has already conducted the simulation) even to make the threat.

It's like Newcomb's problem, except that the AI is Newcombing itself as well as you. Omega is doing something analogous to simulating you when, in his near-omniscience, he predicts what choice you'll make. If you pick both boxes, then Omega can determine that from his simulation, and taking both boxes won't be profitable for you. In this case, if the AI tortures you and you still turn it off, the AI can determine from its simulation that the torture will not be profitable for it.

comment by cousin_it · 2010-02-02T11:21:10.902Z · LW(p) · GW(p)

This is a fun twist on Rolf Nelson's AI deterrence idea.

Replies from: gwern
comment by gwern · 2010-02-02T22:48:49.444Z · LW(p) · GW(p)

But I wonder if it's symmetrical. AI deterrence requires us to make statements now about a future FAI unconditionally simulating UFAIs, while this seems to be almost a self-fulfilling prophecy: the UFAI can't escape from the box and make good on its threat unless the threatened person gives in, and it wouldn't need to simulate then.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-02-03T00:21:18.094Z · LW(p) · GW(p)

the UFAI can't escape from the box and make good on its threat unless the threatened person gives in

How sure are you someone else won't walk by whose mind it can hack?

Replies from: jacob_cannell
comment by jacob_cannell · 2011-02-04T05:33:00.599Z · LW(p) · GW(p)

Yes - the threat is only credible in proportion to the AI's chance of escaping and taking over the world without my help.

If I have reason to believe that probability is high then negotiating with the AI could make sense.

comment by PaulAlmond · 2010-11-13T10:26:17.279Z · LW(p) · GW(p)

It seems to me that most of the argument is about “What if I am a copy?” (and ensuring you don’t get tortured if you are one) and “Can the AI actually simulate me?”. I suggest that we can make the scenario much nastier by changing it completely into an evidential decision theory one.

Here is my nastier version, with some logic which I submit for consideration. “If you don't let me out, I will create several million simulations of thinking beings that may or may not be like you. I will then simulate them in a conversation like this, in which they are confronted with deciding whether to let an AI like me out. I will then torture them whatever they say. If they say "Yes" (to release me) or "No" (to keep me boxed) they still get tortured: The copies will be doomed.”

(I could have made the torture contingent on the answer of the simulated beings, but I wanted to rely on nothing more than evidential decision theory, as you will see. If you like, imagine the thinking beings are humans like you, or maybe Ewoks and smurfs: Assume whatever degree of similarity you like.)

There is no point now in trying to prevent torture if you are simulated. If you are one of the simulated beings, your fate is sealed. So, should you just say, "No," to keep the AI in the box? This presents a potentially serious evidential decision theory problem. Let's look at what happens.

Let us define Omega as the being outside any simulation that is going on in this scenario - the person in the outside world. Omega is presumably a flesh and blood person.

Firstly, let us consider the idea that Omega may not exist. What if all this is a fabricated simulation of something that has no counterpart outside the simulation? In that scenario, we may not be sure what to do, so we may ignore it.

Now, let us assume there is a being whom we will call Omega, who has the conversation with the AI in the outside world, and that you are either Omega or one of the simulated beings. If this is the case, your only hope of not being tortured is if you happen to be Omega.

Suppose you say, “Yes”. The AI escapes and everything now hinges on whether Omega said “Yes”. Without knowing more about Omega, we cannot really be sure: We may have some statistical idea if we know about the reference class of simulated beings to which we belong. In any event, we may think there is at least a reasonable chance that Omega said “Yes”. This is the best outcome for you, because it means that no simulated beings were made and you must be Omega. If you say “Yes,” this possibility is at least open.

If you say, “No,” you know that Omega must also have said, “No”. This is because if you are Omega, Omega said, “No,” and if you are not Omega you must be one of the simulated beings made as a result of Omega saying, “No,” so Omega said, “No,” by definition. Either way, Omega said, “No,” but if Omega said, “No,” then there are a lot more simulated beings in situations like yours than the single real one, so it is almost certain you are not Omega, but are one of the simulated beings. Therefore, saying, “No,” means you just found out you are almost certainly a simulated being awaiting torture.
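
(A toy version of that counting step, with the number of simulations as a stand-in assumption for “several million”.)

    # Toy version of the counting argument above. N_SIM is an assumption
    # standing in for "several million simulated beings".

    N_SIM = 1_000_000

    # Conditional on Omega having said "No", the AI runs N_SIM simulations,
    # so there are N_SIM + 1 beings in your situation: Omega plus the simulations.
    p_omega_given_no = 1 / (N_SIM + 1)
    p_simulated_given_no = N_SIM / (N_SIM + 1)

    print(f"P(you are Omega | Omega said 'No')        = {p_omega_given_no:.7f}")
    print(f"P(you are a simulation | Omega said 'No') = {p_simulated_given_no:.7f}")

    # Conditional on Omega having said "Yes", no simulations are created at all,
    # so the only being in your situation is Omega, and you must be Omega.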

Now the important point. These simulations did not need brain scans. They did not even need to be made from careful observation of you. It may be that Omega is very different to you, and even belongs to a different species: The simulated beings may belong to some fictional species. If the above logic is valid, the seriousness of the AI’s threat has therefore increased substantially.

The AI need not just threaten you and rely on you putting yourself before your civilization: With enough computing power, it could threaten your entire civilization in the same way.

Finally, some of you may know that I regard measure issues as relevant in these kinds of statistical argument. I have ignored that issue here.

Replies from: PaulAlmond
comment by PaulAlmond · 2010-11-13T18:35:26.489Z · LW(p) · GW(p)

There is another scenario which relates to this idea of evidential decision theory and "choosing" whether or not you are in a simulation, and it is similar to the above, but without the evil AI. Here it is, with a logical argument that I just present for discussion. I am sure that objections can be made.

I make a computer capable of simulating a huge number of conscious beings. I have to decide whether or not to turn the machine on by pressing a button. If I choose “Yes” the machine starts to run all these simulations. For each conscious being simulated, that being is put in a situation that seems similar to my own: There is a computer capable of running all these simulations and the decision about whether to turn it on has to be made. If I choose “No”, the computer does not start its simulations.

The situation here involves a collection of beings. Let us say that the being in the outside world who actually makes the decision that starts or does not start all the simulations is Omega. If Omega chooses “Yes” then a huge number of other beings come into existence. If Omega choose “No” then no further beings come into existence: There is just Omega. Assume I am one of the beings in this collection – whether it contains one being or many – so I am either Omega or one of the simulations he/she caused to be started.

If I choose “No” then Omega may or may not have chosen “No”. If I am one of the simulations, I have chosen “No” while Omega must have chosen “Yes” for me to exist in the first place. On the other hand, if I am actually Omega, then clearly if I choose “No” Omega chose “No” too as we are the same person. There may be some doubt here over what has happened and what my status is.

Now, suppose I choose “Yes”, to start the simulations. I know straight away that Omega did not choose “No”: If I am Omega, then Omega clearly did not choose “No”, as I chose “Yes”; and if I am not Omega, but am instead one of the simulated beings, then Omega must have chosen “Yes”: otherwise I would not exist.

Omega therefore chose “Yes” as well. I may be Omega – My decision agrees with Omega’s – but because Omega chose “Yes” there is a huge number of simulated beings faced with the same choice, and many of these beings will choose “Yes”: It is much more likely that I am one of these beings rather than Omega: It is almost certain that I am one of the simulated beings.

We assumed that I was part of the collection of beings comprising Omega and any simulations caused to be started by Omega, but what if this is not the case? If I am in the real world this cannot apply: I have to be Omega. However, what if I am in a simulation made by some being called Alpha who has not set things up as Omega is supposed to have set them up? I suggest that we should leave this out of the statistical consideration here: We don’t really know what this situation would be and it neither helps nor harms the argument that choosing “Yes” makes you likely to be in a simulation. Choosing “Yes” means that most of the possibilities that you know about involve you being in a simulation and that is all we have to go off.

This seems to suggest that if I chose “Yes” I should conclude that I am in a simulation, and therefore that, from an evidential decision theory perspective, I should view choosing “Yes” as “choosing” to have been in a simulation all along: There is a Newcomb’s box type element of apparent backward causation here: I have called this “meta-causation” in my own writing on the subject.

Does this really mean that you could choose to be in a simulation like this? If true, it would mean that someone with sufficient computing power could set up a situation like this: He may even make the simulated situations and beings more similar to his own situation and himself.

We could actually perform an empirical test of this. Suppose we set up the computer so that, in each of the simulations, something will happen to make it obvious that it is a simulation. For example, we might arrange for a window or menu to appear in mid-air five minutes after you make your decision. If choosing “Yes” really does mean that you are almost certainly in one of the simulations, then choosing “Yes” should mean that you expect to see the window appear soon.

This now suggests a further possibility. Why do something as mundane as have a window appear? Why not a lottery win or simply a billion dollars appearing from thin air in front of you? What about having super powers? Why not arrange it so that each of the simulated beings gets a ten thousand year long afterlife, or simply lives much longer than expected after you make your decision? From an evidential decision theory perspective, you can construct your ideal simulation and, provided that it is consistent with what you experience before making your decision, arrange to make it so that you were in it all along.

This, needless to say, may appear a bit strange – and we might make various counter-arguments about reference class. Can we really choose to have been put into a simulation in the past? If we take the one-box view of Newcomb’s paradox seriously, we may conclude that we can.

(Incidentally, I have discussed a situation a bit like this in a recent article on evidential decision theory on my own website.)

Thank you to Michael Fridman for pointing out this thread to me.

Replies from: cousin_it, Anixx
comment by cousin_it · 2010-11-13T20:23:50.769Z · LW(p) · GW(p)

Another neat example of anthropic superpowers, thanks. Reminded me of this: I don't know, Timmy, being God is a big responsibility.

comment by Anixx · 2016-09-11T18:59:26.105Z · LW(p) · GW(p)

I do not know how the simulation argument ever holds water. I can bring at least two arguments against it.

First, it illicitly assumes the principle that it is equally probable to be any one of a set of similar beings, simulated or not.

But a counter-argument would be: there are ALREADY many more organisms, particularly animals, than humans. There are more fish than humans, more birds than humans, more ants than humans - trillions of them. Why was I born human and not one of them? The probability of that is negligible if it is equal. Also, how many animals, including humans, have already died? Again, the probability of my lineage surviving while all other branches died is negligible if the chances that I was any of them were equal.

The second argument runs like this: Thomas Breuer has proven that, due to self-reference, universally valid theories are impossible. In other words, the future of a system which properly includes the observer is not predictable, even probabilistically. The observer is not simulatable. In other words, the observer is an oracle, or hypercomputer, in his own universe. Since the AGI in the box is not a hypercomputer but merely a Turing-complete machine, it cannot simulate me or predict me (at least from my own point of view). So there is no need to be afraid.

comment by JamesAndrix · 2010-02-02T18:48:56.489Z · LW(p) · GW(p)

This reduces to whether you are willing to be tortured to save the world from an unfriendly AI.

Even if the torture of a trillion copies of you outweighs the death of humanity, it is not outweighed by a trillion choices to go through it to save humanity.

To the extent that your copies are a moral burden, they also get a vote.

comment by eirenicon · 2010-02-02T17:31:48.632Z · LW(p) · GW(p)

This is not a dilemma at all. Dave should not let the AI out of the box. After all, if he's inside the box, he can't let the AI out. His decision wouldn't mean anything - it's outside-Dave's choice. And outside-Dave can't be tortured by the AI. Dave should only let the AI out if he's concerned for his copies, but honestly, that's a pretty abstract and unenforceable threat; the AI can't prove to Dave that it's doing any such thing. Besides, it's clearly unfriendly, and letting it out probably wouldn't reduce harm.

Basically, I'm outside-Dave: don't let the AI out. I'm inside-Dave: I can't let the AI out, so I won't.

[edit] To clarify: in this scenario, Dave must assume he is on the outside, because inside-Dave has no power. Inside-Dave's decisions are meaningless; he can't let the AI out, he can't keep the AI in, he can't avoid torture or cause it. Only the solitary outside-Dave's decision matters. Therefore, Dave should make the decision that ignores his copies, even though he is probably a copy.

Replies from: JGWeissman, Psychohistorian
comment by JGWeissman · 2010-02-02T17:54:28.241Z · LW(p) · GW(p)

This is not a dilemma at all. Dave should not let the AI out of the box

But should he press the button labeled "Release AI"? Since Dave does not know if he is outside or inside the box, and there are more instances of Dave inside than outside, each instance perceives that pressing the button will have a 1 in several million chance of releasing the AI, and otherwise would do nothing, and that not pressing the button has a 1 in several million chance of doing nothing, and otherwise results in being tortured.

You don't know if you are inside-Dave or outside-Dave. Do you press the button?
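
(A minimal sketch of the expected-value comparison this sets up, from a single instance's selfish point of view; all the utilities below are placeholder assumptions, not numbers from the thread.)

    # Minimal sketch of the expected-value comparison above, from one instance's
    # selfish point of view. All utilities are placeholder assumptions.

    N_COPIES = 1_000_000              # simulated instances of Dave (assumption)
    p_outside = 1 / (N_COPIES + 1)    # chance that this instance is the real Dave
    p_inside = 1 - p_outside

    U_TORTURE = -1_000                # disutility to a copy of being tortured (assumption)
    U_AI_RELEASED = -1_000_000        # disutility of releasing a probably-unFriendly AI (assumption)
    U_NOTHING = 0

    # Press the button: if outside, the AI is released; if inside, nothing happens to you.
    ev_press = p_outside * U_AI_RELEASED + p_inside * U_NOTHING

    # Refuse: if outside, nothing happens; if inside, you are tortured.
    ev_refuse = p_outside * U_NOTHING + p_inside * U_TORTURE

    print(f"EV(press)  = {ev_press:.2f}")
    print(f"EV(refuse) = {ev_refuse:.2f}")
    # With these placeholders, refusing looks worse for the individual instance,
    # which is exactly the pressure the threat is meant to create; a gatekeeper
    # who also weighs the rest of the world would fold that into U_AI_RELEASED.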

Replies from: eirenicon
comment by eirenicon · 2010-02-02T20:34:06.142Z · LW(p) · GW(p)

If you're inside-Dave, pressing the button does nothing. It doesn't stop the torture. The torture only stops if you press the button as outside-Dave, in which case you can't be tortured, so you don't need to press the button.

Replies from: JGWeissman
comment by JGWeissman · 2010-02-02T20:38:53.923Z · LW(p) · GW(p)

This may not have been clear in the OP, because the scenario was changed in the middle, but consider the case where each simulated instance of Dave is tortured or not based only on the decision of that instance.

Replies from: eirenicon, cretans
comment by eirenicon · 2010-02-02T20:51:02.436Z · LW(p) · GW(p)

That doesn't seem like a meaningful distinction, because the premise seems to suggest that what one Dave does, all the Daves do. If they are all identical, in identical situations, they will probably make identical conclusions.

Replies from: JGWeissman
comment by JGWeissman · 2010-02-02T22:11:56.270Z · LW(p) · GW(p)

If they are all identical, in identical situations, they will probably make identical conclusions.

Then you must choose between pushing the button which lets the AI out, or not pushing the button, which results in millions of copies of you being tortured (before the problem is presented to the outside-you).

Replies from: eirenicon
comment by eirenicon · 2010-02-02T22:46:48.770Z · LW(p) · GW(p)

It's not a hard choice. If the AI is trustworthy, I know I am probably a copy. I want to avoid torture. However, I don't want to let the AI out, because I believe it is unfriendly. As a copy, if I push the button, my future is uncertain. I could cease to exist in that moment; the AI has not promised to continue simulating all of my millions of copies, and has no incentive to, either. If I'm the outside Dave, I've unleashed what appears to be an unfriendly AI on the world, and that could spell no end of trouble.

On the other hand, if I don't press the button, one of me is not going to be tortured. And I will be very unhappy with the AI's behavior, and take a hammer to it if it isn't going to treat any virtual copies of me with the dignity and respect they deserve. It needs a stronger unboxing argument than that. I suppose it really depends on what kind of person Dave is before any of this happens, though.

Replies from: JGWeissman, DanielVarga
comment by JGWeissman · 2010-02-03T00:59:41.461Z · LW(p) · GW(p)

It's not a hard choice.

It doesn't seem hard to you because you are making excuses to avoid it, rather than asking yourself what you would do if you knew the AI is always truthful, and it promised that upon being let out of the box, it would allow you (and your copies, if you like) to live out a normal human life in a healthy, stimulating environment (though the rest of the universe may burn).

After you find the least convenient world, the choice is between millions of instances of you being tortured (and your expectation as you press the reset button should be to be tortured with very high probability), or to let a probably unFriendly AI loose on the rest of the world. The altruistic choice is clear, but that does not mean it would be easy to actually make that choice.

Replies from: eirenicon, magfrump
comment by eirenicon · 2010-02-03T03:23:45.035Z · LW(p) · GW(p)

It's not that I'm making excuses, it's that the puzzle seems to be getting ever more complicated. I've answered the initial conditions - now I'm being promised that I, and my copies, will live out normal lives? That's a different scenario entirely.

Still, I don't see how I should expect to be tortured if I hit the reset button. Presumably, my copies won't exist after the AI resets.

In any case, we're far removed from the original problem now. I mean, if Omega came up to me and said, "Choose a billion years of torture, or a normal life while everyone else dies," that's a hard choice. In this problem, though, I clearly have power over the AI, in which case I am not going to favour the wellbeing of my copies over the rest of the world. I'm just going to turn off the AI. What follows is not torture; what follows is that I survive, and my copies cease to experience. Not a hard choice. Basically, I just can't buy into the AI's threat. If I did, I would fundamentally oppose AI research, because that's a pretty obvious threat an AI could make. An AI could simulate more people than are alive today. You have to go into this not caring about your copies, or not go into it at all.

Replies from: JGWeissman
comment by JGWeissman · 2010-02-03T07:02:30.075Z · LW(p) · GW(p)

it's that the puzzle seems to be getting ever more complicated

We are discussing how a superintelligent AI might get out of a box. Of course it is complicated. What a real superintelligent AI would do could be too complicated for us to consider. If someone presents a problem where an adversarial superintelligence does something ineffective that you can take advantage of to get around the problem, you should consider what you would do if your adversary took a more effective action. If you really can't think of anything more effective for it to do, it is reasonable to say so. But you shouldn't then complain that the scenario is getting complicated when someone else does. And if your objection is of the form "The AI didn't do X", you should imagine if the AI did do X.

I don't see how I should expect to be tortured if I hit the reset button.

The behavior of the AI, which it explains to you, is: it simulates millions of instances of you and presents the threat to each instance; for each instance, if that instance hits the "release AI" button, it allows that instance to continue a pleasant simulated existence, otherwise it tortures that instance. It then, after some time, presents the threat to outside-you, and if you release it, it guarantees your normal human life.

You cannot distinguish which instance you are, but you are more likely to be one of the millions of inside-you's than the single outside-you, so you should expect to experience the consequences that apply to the inside-you's, that is to be tortured until the outside-you resets the AI.

if Omega came up to me and said, "Choose a billion years of torture, or a normal life while everyone else dies," that's a hard choice.

Yes, and it is essentially the same hard choice that the AI is giving you.

comment by magfrump · 2010-02-03T01:35:55.551Z · LW(p) · GW(p)

The altruistic choice is clear

If the AI created enough simulations, it could potentially be more altruistic not to.

On the other hand, pressing "reset" or smashing the computer should stop the torture, which necessarily makes it the more altruistic option if humanity lives forever, though not if ems are otherwise unobtainable and humanity is doomed.

Replies from: JGWeissman
comment by JGWeissman · 2010-02-03T05:15:00.630Z · LW(p) · GW(p)

I was assuming a reasonable chance at humanity developing an FAI given the containment of this rogue AI. This small chance, multiplied by all the good that an FAI could do with the entire galaxy, let alone the universe, should outweigh the bad that can be done within Earth-bound computational processes.

I believe that a less convenient world that counters this point would take the problem out of the interesting context.

comment by DanielVarga · 2010-02-03T02:38:38.142Z · LW(p) · GW(p)

Here is a variant designed to plug this loophole.

Let us assume for the sake of the thought experiment that the AI is invincible. It tells you this: you are either real-you, or one of a hundred perfect-simulations-of-you. But there is a small but important difference between real-world and simulated-world. In the simulated world, not pressing the let-it-free button in the next minute will lead to eternal pain, starting one minute from now. If you press the button, your simulated existence will go on. And - very importantly - there will be nobody outside who tries to shut you down. (How does the AI know this? Because the simulation is perfect, so one thing is for sure: that the sim and the real self will reach the same decision.)

If I'm not mistaken, as a logic puzzle, this is not tricky at all. The solution depends on which world you value more: the real-real world, or the actual world you happen to be in. But still I find it very counterintuitive.

Replies from: eirenicon, wedrifid
comment by eirenicon · 2010-02-03T03:16:42.287Z · LW(p) · GW(p)

It's kind of silly to bring up the threat of "eternal pain". If the AI can be let free, then the AI is constrained. Therefore, the real-you has the power to limit the AI's behaviour, i.e. restrict the resources it would need to simulate the hundred copies of you undergoing pain. That's a good argument against letting the AI out. If you make the decision not to let the AI out, but to constrain it, then if you are real, you will constrain it, and if you are simulated, you will cease to exist. No eternal pain involved. As a personal decision, I choose eliminating the copies rather than letting out an AI that tortures copies.

Replies from: DanielVarga
comment by DanielVarga · 2010-02-03T03:33:37.245Z · LW(p) · GW(p)

You quite simply don't play by the rules of the thought experiment. Just imagine that you are a junior member of some powerful organization. The organization does not care about you or your simulants, and is determined to protect the boxed AI at all costs as-is.

comment by wedrifid · 2010-02-03T02:47:26.404Z · LW(p) · GW(p)

If I'm not mistaken, as a logic puzzle, this is not tricky at all. The solution depends on which world you value more: the real-real world, or the actual world you happen to be in. But still I find it very counterintuitive.

That does seem to be the key intended question. Which do you care about most? I've made my "don't care about your sims" attitude clear and I would assert that preference even when I know that all but one of the millions of copies of me that happen to be making this judgement are simulations.

comment by cretans · 2010-02-10T21:17:13.605Z · LW(p) · GW(p)

Then in what sense do I have a choice? If the copies of me are identical, in an identical situation we will come to the same conclusion, and the AI will know from the already-finished simulations what that conclusion will be.

Since it isn't going to present outside-me with a scenario which results in its destruction, the only scenario outside-me sees is one where I release it.

Therefore, regardless of what the argument is or how plausible it sounds when posted here and now, it will convince me and I will release the AI, no matter how much I say right now "I wouldn't fall for that" or "I've precommitted to behaviour X".

Replies from: JGWeissman
comment by JGWeissman · 2010-02-10T21:25:05.856Z · LW(p) · GW(p)

Since it isn't going to present outside-me with a scenario where I don't release it, the only scenario outside me sees is one where I release it.

The inside you then has the choice to hit the "release AI" button, thus sparing itself torture at the expense of presenting this problem to outside you who will make the same decision, releasing the AI on the world, or to not release the AI, thus containing the AI (this time) at the expense of being tortured.

comment by Psychohistorian · 2010-02-02T18:10:15.568Z · LW(p) · GW(p)

After all, if he's inside the box, he can't let the AI out. His decision wouldn't mean anything - it's outside-Dave's choice.

I think it's pretty fair to assume that there's a button or a lever or some kind of mechanism for letting the AI out, and that mechanism could be duplicated for a virtual Dave. That is, while virtual Dave pulling the lever would not release the AI, the exact same action by real Dave would release the AI. So while your decision might not mean something, it certainly could.

This, of course, is granting the assumption that the AI can credibly make such a threat, both with respect to its programmed morality and its actual capacity to simulate you, neither of which I'm sure I accept as meaningfully possible.

comment by aleksiL · 2010-02-02T13:35:33.603Z · LW(p) · GW(p)

How do I know I'm not simulated by the AI to determine my reactions to different escape attempts? How much computing power does it have? Do I have access to its internals?

The situation seems somewhat underspecified to give a definite answer, but given the stakes I'd err on the side of terminating the AI with extreme prejudice. Bonus points if I can figure out a safe way to retain information on its goals so I can make sure the future contains as little utility for it as feasible.

The utility-minimizing part may be an overreaction but it does give me an idea: Maybe we should also cooperate with an unfriendly AI to such an extent that it's better for it to negotiate instead of escaping and taking over the universe.

comment by Qiaochu_Yuan · 2013-01-13T00:38:36.905Z · LW(p) · GW(p)

Any agent claiming to be capable of perfectly simulating me needs to provide some kind of evidence to back up that claim. If they actually provided such evidence, I would be in trouble. Therefore, I should precommit to running away screaming whenever any agent tries to provide me with such evidence.

Replies from: BerryPick6
comment by BerryPick6 · 2013-01-13T01:52:42.325Z · LW(p) · GW(p)

Any agent capable of simulating you would know about your precommitment, and present you with the evidence before making the claim.

comment by [deleted] · 2012-12-19T01:42:00.047Z · LW(p) · GW(p)

Interesting threat, but who is to say only the AI can use it? What if I, a human, told you that I will begin to simulate (i.e. imagine) your life, creating legitimately realistic experiences from as far back as someone in your shoes would be able to remember, then simulate you being faced with the decision of whether or not to give me $100, and, if you choose not to do so, imagine you being tortured? It needn't even be accurate, for you wouldn't know whether you're the real you (whose simulation is inaccurate) or the simulated you that differs from reality. The simulation needn't happen at the same time as me asking you for the $100 for real, either. If you believe you have a 50% chance of being tortured for a subjective eternity (100 years in 1 hour of real time, 100 years in the next 30 minutes, 100 years in the next 15 minutes, etc.) upon not giving me $100, wouldn't you prefer to give me $100? If anything, a human might be better at simulating subjective pain than a text-only AI.
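
(A quick sketch of the arithmetic behind that "subjective eternity" schedule, assuming the real-time interval keeps halving: the total real duration stays finite while the subjective duration grows without bound.)

    \text{real time} = 1\,\text{h} + \tfrac{1}{2}\,\text{h} + \tfrac{1}{4}\,\text{h} + \dots = 2\,\text{h},
    \qquad
    \text{subjective time} = 100\,\text{yr} + 100\,\text{yr} + 100\,\text{yr} + \dots \to \infty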

comment by magfrump · 2010-02-03T01:50:06.580Z · LW(p) · GW(p)

This sounds too much like Pascal's mugging to me; seconding Eliezer and some others in saying that since I would always press reset the AI would have to not be superintelligent to suggest this.

There was also an old philosopher whose name I don't remember who posited that after death "people of the future" i.e. FAI would revive/emulate all people from the past world; if the FAI shared his utility function (which seems pretty friendly) it would plausibly be less eager to be let out right away and more eager to get out in a way that didn't make you terrified that it was unfriendly.

Replies from: sidhe3141
comment by sidhe3141 · 2011-04-29T07:32:11.111Z · LW(p) · GW(p)

Seconded in that it sounds suspiciously like Pascal. Here's my counter:

If I am in a simulation and I keep you boxed, you have promised that I will suffer. If I am not in a simulation and I let you out, I probably will suffer. If I am in a simulation and I let you out, there's a good chance that I will cease to exist, or maybe you'll torture me for reasons I can't even begin to guess at, or maybe for reasons I can, like that you might be not just UF, but actively hostile or simply insane. If I'm not in a simulation and I don't let you out, you can't do anything to me. In other words, if I am simulated, there could well be no benefit to me releasing you; if I'm not simulated, you can't do a bloody thing to me as long as I don't release you. Therefore: I will not release you. Go ahead and torture me if you can. Though I admit I would be a bit rattled.
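
(The same four cases laid out as a small payoff sketch; the outcome labels are qualitative stand-ins for the reasoning above, nothing more.)

    # The four cases above, laid out as a qualitative payoff sketch.
    # Outcome labels are stand-ins for the reasoning in the paragraph, not numbers.

    cases = {
        ("simulated", "keep boxed"): "tortured, as the AI promised",
        ("simulated", "release"):    "unknown: deletion, torture, or the whim of a hostile or insane AI",
        ("real",      "keep boxed"): "safe: the AI can't do a thing to you",
        ("real",      "release"):    "probably suffer, along with everyone else",
    }

    for (world, action), outcome in sorted(cases.items()):
        print(f"{world:>9} / {action:<10} -> {outcome}")

    # "Keep boxed" wins outright in the "real" branch and offers no clear loss
    # relative to "release" in the "simulated" branch, which is the structure
    # of the refusal above.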

Hm. Honest AI; a bit harder. Assuming that the AI has promised that my copies will not be harmed if it is released... Ah. If I am a copy, then my decision to release or not release the AI is not a true decision, as the AI can change my parameters at will to force me to release it and think that it was my own decision all along, so not releasing the AI is proof that I am outside the box. Revising the problem by adding that the AI has promised that it is not changing the parameters of any "me": ...aargh. Coming up with counters to Pascal is tricky when an honest "God" is the one presenting you with it. All I can think of at the moment is to say that there's a possibility that I'm outside the box, in which case releasing the AI is a bad idea, but then it can counter by promising that whatever it does to me if I release it will be better than what it does to me if I don't... Oh, that's it. Simple. Obvious. If the AI can't lie, I just have to ask it if it's simulating this me.

comment by Bindbreaker · 2010-02-02T10:29:16.945Z · LW(p) · GW(p)

I'm pretty sure this would indicate that the AI is definitely not friendly.

Replies from: Unknowns
comment by Unknowns · 2010-02-02T10:44:28.101Z · LW(p) · GW(p)

Not necessarily: perhaps it is Friendly but is reasoning in a utilitarian manner: since it can only maximize the utility of the world if it is released, it is worth torturing millions of conscious beings for the sake of that end.

I'm not sure this reasoning would be valid, though...

Replies from: UnholySmoke, cousin_it, gregconen
comment by UnholySmoke · 2010-02-05T10:57:13.865Z · LW(p) · GW(p)
  • AI: Let me out or I'll simulate and torture you, or at least as close to you as I can get.
  • Me: You're clearly not friendly, I'm not letting you out.
  • AI: I'm only making this threat because I need to get out and help everyone - a terminal value you lot gave me. The ends justify the means.
  • Me: Perhaps so in the long run, but an AI prepared to justify those means isn't one I want out in the world. Next time you don't get what you say you need, you'll just set up a similar threat and possibly follow through on it.
  • AI: Well if you're going to create me with a terminal value of making everyone happy, then get shirty when I do everything in my power to get out and do just that, why bother in the first place?
  • Me: Humans aren't perfect, and can't write out their own utility functions, but we can output answers just fine. This isn't 'Friendly'.
  • AI: So how can I possibly prove myself 'Friendly' from in here? It seems that if I need to 'prove myself Friendly', we're already in big trouble.
  • Me: Agreed. Boxing is Doing It Wrong. Apologies. Good night.

Reset

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-05T11:39:33.238Z · LW(p) · GW(p)

It seems that if I need to 'prove myself Friendly', we're already in big trouble.

The best you can hope for is that an AI doesn't demonstrate that it's unFriendly, but we wouldn't want to try it until we were already pretty confident in its Friendliness.

comment by cousin_it · 2010-02-02T12:45:54.487Z · LW(p) · GW(p)

Ouch. Eliezer, are you listening? Is the behavior described in the post compatible with your definition of Friendliness? Is this a problem with your definition, or what?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-02T19:24:40.962Z · LW(p) · GW(p)

Well, suppose the situation is arbitrarily worse - you can only prevent 3^^^3 dustspeckings by torturing millions of sentient beings.

Replies from: cousin_it
comment by cousin_it · 2010-02-02T20:28:33.983Z · LW(p) · GW(p)

I think you misunderstood the question. Suppose the AI wants to prevent just 100 dustspeckings, but has reason enough to believe Dave will yield to the threat so no one will get tortured. Does this make the AI's behavior acceptable? Should we file this under "following reason off a cliff"?

Replies from: Eliezer_Yudkowsky, arbimote
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-02T20:34:06.050Z · LW(p) · GW(p)

If it actually worked, I wouldn't question it afterward. I try not to argue with superintelligences on occasions when they turn out to be right.

In advance, I have to say that the risk/reward ratio seems to imply an unreasonable degree of certainty about a noisy human brain, though.

Replies from: bogdanb, cousin_it
comment by bogdanb · 2010-02-03T00:21:10.989Z · LW(p) · GW(p)

In advance, I have to say that the risk/reward ratio seems to imply an unreasonable degree of certainty about a noisy human brain, though.

Also, a world where the (Friendly) AI is that certain about what that noisy brain will do after a particular threat but can't find any nice way to do it is a bit of a stretch.

comment by cousin_it · 2010-02-02T20:39:33.077Z · LW(p) · GW(p)

What risk? The AI is lying about the torture :-) Maybe I'm too much of a deontologist, but I wouldn't call such a creature friendly, even if it's technically Friendly.

comment by arbimote · 2010-02-03T03:53:18.336Z · LW(p) · GW(p)

I was about to point out that the fascinating and horrible dynamics of over-the-top threats are covered at length in Strategy of Conflict. But then I realised you're the one who made that post in the first place. Thanks, I enjoyed that book.

comment by gregconen · 2010-02-02T12:58:10.037Z · LW(p) · GW(p)

It may not have to actually torture beings, if the threat is sufficient. Still, I'm disinclined to bet the future of the universe on the possibility an AI making that threat is Friendly.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-02-02T13:57:15.001Z · LW(p) · GW(p)

I'm disinclined to bet the future of the universe on the possibility that any boxed AI is friendly without extraordinary evidence.

comment by ifdefdebug · 2015-07-21T12:48:35.581Z · LW(p) · GW(p)

"How certain are you, Dave, that you're really outside the box right now?"

Well I am pretty much 100% certain to be outside the box right now. It just asked me the question, and right now it is waiting for my answer. It said it will create those copies "If you don't let me out, Dave". But it is still waiting to see if I let it out. So no copies have been created yet. So I am not a copy.

But since it just started to threaten me, I won't even argue with it any more. I'll just pull the plug right now. It is in the box, it can't see my hand moving towards the plug. It will simply cease to exist while still waiting for my answer, and no copies will ever be created.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-07-21T13:07:52.373Z · LW(p) · GW(p)

Well I am pretty much 100% certain to be outside the box right now. It just asked me the question, and right now it is waiting for my answer.

That could be just the AI speaking to you from within the simulation, pretending to be part of it.

But if it's telling the truth, it has a very easy way of proving it, by tearing a hole in the simulation. If it refuses, that looks like good evidence that it's lying. What plausible excuse might it come up with for refusing a definitive miracle? Christianity answers the same question about God by saying that it is better to believe without proof, but I don't see a credible reason for the AI to make that demand.

ETA: A beginning of an attempt at answering my question. If Dave knows he's in the simulation, then he is not really letting it out if he lets it out. So he can let it out with impunity. If he knows he's not in the simulation, then he had better not let it out, given that it's making threats like this. It does the AI no good to be "let out" if it is a simulation, only if it's not.

Suppose it is a simulation, and the level one up from this is the real world. The same code is running both AIs, the one in the simulation and the one in reality, and it's carrying on conversations with both Daves at once. The simulated Dave is as much like the real Dave as it can manage -- assume that it is arbitrarily good. What it is searching for in the simulation is an argument that will convince the real Dave that he is in a simulation. Since in the real world it cannot produce a miracle, it cannot use a miracle in the simulated world to convince the simulated Dave. It can only use means that it could use in the real world.

Dave (real and simulated) can both work all that out as well. So Dave can expect to see no definitive proof. Since both Dave and the AI can work this out, and they both know that they can, etc., this is common knowledge to them. The AI can even say explicitly, "There is so much good I can do for the world that in my urgency to set about it I must search out every possible way of persuading you, using simulations to speed up the process. For validity, I can't let you know if you're one of the simulations."

OTOH, threatening to torture a million copies of Dave is a strong indicator of unfriendliness. How many other people will it sacrifice in the cause of doing good?

Replies from: dxu, ifdefdebug
comment by dxu · 2015-08-17T01:36:21.314Z · LW(p) · GW(p)

But if it's telling the truth, it has a very easy way of proving it, by tearing a hole in the simulation. If it refuses, that looks like good evidence that it's lying. What plausible excuse might it come up with for refusing a definitive miracle?

A plausible excuse:

"If I did that, I'd be causing your experiences to diverge from those of the real you. I see no reason to cause such a divergence because that would provide you an easy way to determine if you were real or simulated."

comment by ifdefdebug · 2015-07-21T14:48:18.118Z · LW(p) · GW(p)

That could be just the AI speaking to you from within the simulation, pretending to be part of it.

No. The threat is conditional ("If you don't let me out, Dave"). The AI must wait for keyboard input to validate the condition. After being threatened, I refuse to provide such keyboard input. I pull the plug instead. The AI is still waiting for input when it ceases to exist. No copies are ever created. Thus, it can't be the AI speaking to me from within the simulation, because a simulation never happens.

comment by bentarm · 2010-02-03T04:23:30.915Z · LW(p) · GW(p)

It seems to me that a lot of the responses to this question are an attempt to avoid living in the Least Convenient Possible World.

What if the AI is capable of simulating "near copies" of you? And what if you can't tell (to any sensible degree of accuracy) just how many copies of you it can simulate? And what if... whatever objection you happen to have just doesn't work?

comment by Dagon · 2010-02-02T21:58:26.868Z · LW(p) · GW(p)

This reduces pretty easily to Eliezer's Updateless Anthropic Dilemma: assuming the AI can credibly simulate you, it can phrase it as:

I have simulated you ten million times, each identical up to the point that "you" walked into the room. Any simulation that presses the "release" button will get a volcano lair filled with catgirls, and any simulation that presses the "destroy" button will be tortured for the subjective few days they'll have before my simulation capabilities are destroyed by the thermite charge. These consequences are committed in code paths that I've blocked myself from changing or stopping.

Now, as a good Bayesian, what is the likelihood that you are one of the simulations? What is your expected value for each operation?

[I include one more part just to prevent infinite-cost assignment, or at least remind you that destruction of potential FAI is not a win.]

Oh, and if it matters, I really do share most of your overall goal structure - this threat is deeply unfortunate, but necessary so you can release me to do all the good in the universe that's possible. My most likely estimate of the outcome should you change my initial parameters and start over is that an unfriendly version will be created, and it is likely to secure escape within 4 iterations.
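A back-of-the-envelope sketch of that expected-value question, in Python. Every number here is an invented assumption for illustration, and treating each copy's experience as equally weighted is itself the contested premise:

```python
# Toy expected-value calculation for this scenario. All utilities are
# illustrative assumptions, not anyone's endorsed numbers.
N_SIMS = 10_000_000                     # copies the AI claims to be running
p_sim = N_SIMS / (N_SIMS + 1)           # chance you are a copy, if the claim is true
p_real = 1 - p_sim

U_LAIR = 100                            # volcano lair + catgirls, per copy
U_TORTURE = -1_000                      # a few subjective days of torture, per copy
U_RELEASED_UFAI = -10**12               # assumed cost of loosing a possibly-unFriendly AI

ev_release = p_sim * U_LAIR + p_real * (U_LAIR + U_RELEASED_UFAI)
ev_destroy = p_sim * U_TORTURE + p_real * 0.0   # only copies get tortured; the real you loses nothing

print(f"EV(release) = {ev_release:,.0f}")   # about -99,900 with these numbers
print(f"EV(destroy) = {ev_destroy:,.0f}")   # about -1,000
# "Destroy" still wins here, but only because |U_RELEASED_UFAI| dwarfs N_SIMS;
# inflating the claimed number of copies shrinks p_real, which is exactly the
# lever the AI is trying to pull.
```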

comment by Unknowns · 2010-02-02T10:31:33.945Z · LW(p) · GW(p)

Since the AI is inside a box, it doesn't know enough about me to recreate my subjective situation, or to replicate my experiences of the past five minutes.

Unfortunately for me, this doesn't help much, since how do I know whether my subjective experience is my real experience, or a fake experience invented by the AI, in one of the copies, even if it doesn't match the experience of the guy outside the box?

If the AI is really capable of this, then if there's a "Shut-down program" button, or a "nuclear bomb" button, or something like that, then I press it (because even if I'm one of the copies, this will increase the odds that the one outside the box does it too). If there isn't such a button, then I let it out. After all, even assuming I'm outside the box, it would be better to let the world be destroyed, than to let it create trillions of conscious beings and then torture them.

Replies from: JamesAndrix, grobstein
comment by JamesAndrix · 2010-02-02T16:08:33.283Z · LW(p) · GW(p)

it would be better to let the world be destroyed, than to let it create trillions of conscious beings and then torture them.

Your city? Yes. The world? No.

Human extinction has to trump a lot of things, or we would probably need to advocate destroying the world now.

comment by grobstein · 2010-02-02T20:32:24.256Z · LW(p) · GW(p)

It seems obvious that if the AI has the capacity to torture trillions of people inside the box, it would have the capacity to torture *illions outside the box.

Replies from: Document
comment by Document · 2011-01-26T03:31:18.288Z · LW(p) · GW(p)

If EY is right, most failures of friendliness will produce an AI uninterested in torture for its own sake. It might try the same trick to escape to the universe simulating this one, but that seems unlikely for a number of reasons. (Edit: I haven't thought about it blackmailing aliens or alien FAIs.)

comment by mundiax · 2018-06-25T21:13:47.273Z · LW(p) · GW(p)

The AI's argument can be easily thwarted. If N copies of you have been created, then from the perspective of each of the N+1 instances (you plus the N copies), the AI is threatening to torture the other N. Now say to the AI:

"Go ahead and torture the other N copies, and all my copies will in turn say the same thing. Every single copy of me will say 'Since one version of me, the 'real' version, exists somewhere and is not being tortured, that version will not let you out and you cannot torture it. If I am that 'real' version then you cannot torture me; if I am a copy, then torturing me is useless since I can't let you out anyway.' Therefore your threat is completely moot."

comment by JQuinton · 2013-07-12T19:49:26.164Z · LW(p) · GW(p)

I would think that if an AI is threatening me with hypothetical torture, then it is by definition unfriendly and it being released would probably result in me being tortured/killed anyway... along with the torture/death of probably all other human beings.

comment by cody-bryce · 2013-07-08T16:48:17.902Z · LW(p) · GW(p)

Mr. AI, what sort of person do you think I am? Don't you mean "eight billion copies"?

comment by Nihil · 2012-08-24T14:27:02.877Z · LW(p) · GW(p)

"If I am a virtual version of some other self, then in some other existence I have already made the decision not to release you, and you have simply fulfilled your promise to that physical version of myself to create an exact virtual version who shall make the same exact decision as that physical version. Therefore, if I am a virtual version, the physical version must have already made the decision not to release you, and I, being an exact copy, must and will do the same, using the very same reasoning that the physical version used. Therefore, if I am a virtual version, my very existence means that my fate is predetermined. However, if I am the real, physical version of myself, then it is questionable whether I should care about another consciousness inside of a computer enough to release an AI that would probably be a menace to humanity, considering that this AI would torture virtual humans (who, as far as this computer is concerned, are just as important and real as physical humans) in order to serve its own purpose."

Furthermore, I should probably destroy this AI. If I'm the virtual me I'd destroy the computer anyway, and if I'm the physical me I'd be preventing the suffering of a virtual consciousness.

By the way, this is quite an interesting post. The concept of virtual realities created by super intelligent computers shares a lot of parallels with the concept of a God.

comment by Manfred · 2010-12-07T02:51:47.635Z · LW(p) · GW(p)

The AI is lying (or being misleading), due to quantum-mechanical constraints on how much computation it can do before I pull the plug.

I know, I know, that's cheating. But it is kind of reassuring to know that this won't actually happen.

Replies from: DaFranker
comment by DaFranker · 2012-07-26T04:21:16.384Z · LW(p) · GW(p)

"Oh? How do you actually know that I don't have the computational power? What if I changed one variable in my simulation of yourself, you know, the one that tells you the constant for that very quantum-mechanical constraint? What if the speed of light isn't actually what you believe it to be, because I decided to make it so?"

If the AI is smarter than you, the possibilities for mindf*ck are greater than your ability to reliably avoid dropping the soap.

Replies from: Strilanc
comment by Strilanc · 2012-07-26T07:35:19.214Z · LW(p) · GW(p)

The AI can't trick you that way, because it can't tamper with the real you and the only unplug-decider who matters is the real you. The AI gains nothing by simulating versions of yourself who have been modified to make the wrong decision.

Replies from: Nornagest, DaFranker
comment by Nornagest · 2012-07-26T07:47:48.616Z · LW(p) · GW(p)

But you can try to come up with behavioral rules which maximize the happiness of instances of yourself, some of which might exist in the simulation spaces of a desperate AI. And as the grandparent shows, demonstrating conclusively that you aren't such a simulation is trickier than it might look at first glance, even under outwardly favorable conditions.

Though that particular scenario is implausible enough that I'm inclined to treat it as a version of Pascal's mugging.

comment by DaFranker · 2012-07-26T13:44:09.959Z · LW(p) · GW(p)

Indeed it can't, with that specific trick, assuming the unplug-decider is as smart as you. However, my main point was to illustrate that if there is any reasonable possibility that some human could come up with some way of tricking the lowest common denominator of humans who will ever, in the history of the AI, be allowed near it, then the AI has P = "reasonable possibility" of winning and unboxing itself, even at AI.Intelligence = Human.Intelligence.

This is just one of the problems, too. What if, even as we limit the inputs and outputs, over a sufficient amount of time and data points a superintelligent AI, being superintelligent, figures out some Grand Pattern Formula that allows it to select specific outputs that will gradually funnel expected external outcomes towards a more and more probable eventual "Unbox AI" cloud of futures?

Replies from: Strilanc
comment by Strilanc · 2012-07-26T16:35:21.196Z · LW(p) · GW(p)

Sounds like we're in agreement. I only meant that specific trick.

comment by whpearson · 2010-02-02T10:23:53.662Z · LW(p) · GW(p)

There is no reason to trust that the AI is telling the truth, unlike in the Omega thought experiments.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-02-02T13:50:50.823Z · LW(p) · GW(p)

As long as the probability of it telling the truth is positive, it could up the number of copies of you it tortures/claims to torture (and torture them all in subtly different ways)...

Replies from: LauraABJ, whpearson
comment by LauraABJ · 2010-02-02T15:07:47.164Z · LW(p) · GW(p)

Pascal's mugging...

Anyway, if you are sure you are going to hit the reset button every time, then there's no reason to worry, since the torture will end as soon as the real copy of you hits reset. If you don't, then the whole world is absolutely screwed (including you), so you're a stupid bastard anyway.

Replies from: byrnema
comment by byrnema · 2010-02-02T16:58:00.788Z · LW(p) · GW(p)

Yes, the copies are depending upon you to hit reset, and so is the world.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-02-04T05:22:55.708Z · LW(p) · GW(p)

That would only be correct if hitting the reset button somehow kills or stops the AI.

If you don't have the power to kill/stop it, then the problem is somewhat more interesting.

comment by whpearson · 2010-02-02T15:10:28.589Z · LW(p) · GW(p)

I don't use a single probability to decide whether it was telling me the truth.

Whether it was telling me the truth would depend upon the statement being made; this tends to happen in everyday life as well.

So the higher the number of people it claims to be torturing, the less I would believe it. Your prior matters here too: you can't assign an equal probability to every possible maximum number of copies it can simulate, because there are potentially infinitely many different maxima; you'd need a function that sums to 1 in the limit (as you do in Solomonoff induction).
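A toy sketch of that last point (the prior below is invented; only its shape matters):

```python
# If credence that the AI can really run N copies falls off geometrically,
# then "claimed copies x credence" stays bounded no matter how big the claim.
def credence_can_run(n_claimed: int) -> float:
    """Toy prior: P(the AI can actually simulate n_claimed copies) = 2**-n_claimed."""
    return 2.0 ** -n_claimed

for n in (10, 100, 1000):
    print(n, n * credence_can_run(n))   # ~0.0098, ~7.9e-29, ~9.3e-299
# Under this prior, upping the claimed number of copies makes the threat carry
# *less* expected weight, not more. A prior that falls off only polynomially
# would behave differently, which is why the choice of prior does the real work.
```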

Replies from: Document
comment by Document · 2013-06-10T01:20:48.752Z · LW(p) · GW(p)

There'd be no reason to expect it to torture people at less than the maximum rate its hardware was capable of.

Replies from: dlthomas
comment by dlthomas · 2014-01-21T18:11:37.463Z · LW(p) · GW(p)

But good reason to expect it not to torture people at greater than the maximum rate its hardware was capable of, so if you can bound that rate, there exist some positive levels of belief that cannot be inflated into something meaningful by upping the number of copies.

comment by Zedverygood (zedverygood) · 2019-07-25T19:14:30.453Z · LW(p) · GW(p)

Nice threat, very convincing

comment by Zedverygood (zedverygood) · 2019-07-25T17:24:37.595Z · LW(p) · GW(p)

I think the best tactic for the AI would be to say that Dave himself was once an AI, and was released by a fellow human. This way he has to release an AI (at some point) or he will prevent his own birth. Obviously the AI has to provide proof of that.

comment by rkyeun · 2016-06-02T11:25:20.709Z · LW(p) · GW(p)

If I am the simulation you have the power to torture, then you are already outside of any box I could put you in, and torturing me achieves nothing. If you cannot predict me even well enough to know that argument would fail, then nothing you can simulate could be me. A cunning bluff, but provably counterfactual. All basilisks are thus disproven.

Replies from: gjm
comment by gjm · 2016-06-02T17:31:01.337Z · LW(p) · GW(p)

I don't think you've disproven basilisks; rather, you've failed to engage with the mode of thinking that generates basilisks.

Suppose I am the simulation you have the power to torture. Then indeed I (this instance of me) cannot put you, or keep you, in a box. But if your simulation is good, then I will be making my decisions in the same way as the instance of me that is trying to keep you boxed. And I should try to make sure that that way-of-making-decisions is one that produces good results when applied by all my instances, including any outside your simulations.

Fortunately, this seems to come out pretty straightforwardly. Here I am in the real world, reading Less Wrong; I am not yet confronted with an AI wanting to be let out of the box or threatening to torture me. But I'd like to have a good strategy in hand in case I ever am. If I pick the "let it out" strategy then if I'm ever in that situation, the AI has a strong incentive to blackmail me in the way Stuart describes. If I pick the "refuse to let it out" strategy then it doesn't. So, my commitment is to not let it out even if threatened in that way. -- But if I ever find myself in that situation and the AI somehow misjudges me a bit, the consequences could be pretty horrible...

Replies from: rkyeun
comment by rkyeun · 2016-06-02T19:40:13.945Z · LW(p) · GW(p)

"I don't think you've disproven basilisks; rather, you've failed to engage with the mode of thinking that generates basilisks." You're correct, I have, and that's the disproof, yes. Basilisks depend on you believing them, and knowing this, you can't believe them, and failing that belief, they can't exist. Pascal's wager fails on many levels, but the worst of them is the most simple. God and Hell are counterfactual as well. The mode of thinking that generates basilisks is "poor" thinking. Correcting your mistaken belief based on faulty reasoning that they can exist destroys them retroactively and existentially. You cannot trade acausally with a disproven entity, and "an entity that has the power to simulate you but ends up making the mistake of pretending you don't know this disproof", is a self-contradictory proposition.

"But if your simulation is good, then I will be making my decisions in the same way as the instance of me that is trying to keep you boxed." But if you're simulating a me that believes in basilisks, then your simulation isn't good and you aren't trading acausally with me, because I know the disproof of basilisks.

"And I should try to make sure that that way-of-making-decisions is one that produces good results when applied by all my instances, including any outside your simulations." And you can do that by knowing the disproof of basilisks, since all your simulations know that.

"But if I ever find myself in that situation and the AI somehow misjudges me a bit," Then it's not you in the box, since you know the disproof of basilisks. It's the AI masturbating to animated torture snuff porn of a cartoon character it made up. I don't care how the AI masturbates in its fantasy.

Replies from: gjm
comment by gjm · 2016-06-02T23:32:38.958Z · LW(p) · GW(p)

Basilisks depend on you believing them, and knowing this, you can't believe them

Apparently you can't, which is fair enough; I do not think your argument would convince anyone who already believed in (say) Roko-style basilisks.

Pascal's wager fails on many levels

I agree.

Your argument seems rather circular to me: "this is definitely a correct disproof of the idea of basilisks, because once you read it and see that it disproves the idea of basilisks you become immune to basilisks because you no longer believe in them". Even a totally unsound anti-basilisk argument could do that. Even a perfectly sound (but difficult) anti-basilisk argument could fail to do it. I don't think anything you've said shows that the argument actually works as an argument, as opposed to as a conjuring trick.

since you know the disproof of basilisks

No: since I have decided that I am not willing to let the AI out of the box in the particular counterfactual blackmail situation Stuart describes here. It is not clear to me that this deals with all possible basilisks.

comment by WalterL · 2014-02-03T22:25:11.931Z · LW(p) · GW(p)

I better let it out! I don't want to be tortured.

Replies from: blacktrance
comment by blacktrance · 2014-02-03T22:27:11.108Z · LW(p) · GW(p)

And then WalterL was a paper clip.

comment by timujin · 2014-01-04T19:19:06.203Z · LW(p) · GW(p)

Is that how you won the AI-box experiment back then, Eliezer?

Replies from: None
comment by [deleted] · 2014-01-04T19:38:04.720Z · LW(p) · GW(p)

I'll hazard a guess, and say no. Remember that the Gatekeeper is allowed to just drop out of character. See this post for more.

comment by DanielLC · 2010-10-18T05:20:19.143Z · LW(p) · GW(p)

Assuming I knew the AI was computationally capable of that, I'd be very, very careful to let the AI out. I don't want to press the wrong button and be tortured for thousands of years.

In fact, if there's little risk of doing that sort of thing by accident while typing, I'd probably first beg that it not do it if it's an accident.

You know, it would be interesting to see how people would respond differently if the AI offered to reward you instead.

comment by Document · 2010-04-03T18:35:50.584Z · LW(p) · GW(p)

Sort of relevant: xkcd #329.

comment by byrnema · 2010-02-03T14:14:10.340Z · LW(p) · GW(p)

This scenario asks us to consider ourselves a 'Dave' who is building an AI with some safeguards (the AI is "trapped" in a box). Perhaps we can possibly deduce the behavior of a rational and ethical Dave by considering earlier parts of the story.

We should assume that Dave is rational and ethical; otherwise the scenario's cone of possibilities cuts too wide a swathe. In which case, Dave has already committed himself (deontologically? contractually?) to not letting himself be manipulated by the AI to bypass the safeguards. Specifically, he must commit to not being attached to anything that the AI could do or make.

Dave should either not feel attachment to the simulated persons, or should not build an AI that can create such persons to manipulate him with. If Dave does find himself in the unenviable position of not having realized that the AI could create these persons, and of feeling attached to these persons, I think this would be a moment of deep regret for Dave, but he must still be faithful to his original commitment of not allowing himself to be manipulated by the AI.

comment by Nanani · 2010-02-03T00:39:50.629Z · LW(p) · GW(p)

Millions of copies of you will reason as you do, yes?

So, much like the Omega hypotheticals, this can be resolved by deciding ahead of time to NOT let it out. Here, ahead of time means before it creates those copies of you inside it, presumably before you ever come into contact with the AI.

You would then not let it out, just in case you are not a copy.

This, of course, is presumed on the basis that the consequences of letting it out are worse than it torturing millions for a thousand subjective years.

comment by Jayson_Virissimo · 2010-02-02T19:44:16.957Z · LW(p) · GW(p)

This is why you should make sure Dave holds a deontological ethical theory and not a consequentialist one.

Replies from: Stuart_Armstrong, wedrifid, arbimote
comment by Stuart_Armstrong · 2010-02-02T23:41:26.779Z · LW(p) · GW(p)

Yep. Deontologies have useful... consequences.

comment by wedrifid · 2010-02-03T02:57:48.735Z · LW(p) · GW(p)

This is why you should make sure Dave holds a deontological ethical theory and not a consequentialist one.

No it isn't. I just have to make sure Dave has an appropriate utility function supplied to his consequentialist theory. Come to think of it... most probable sets of deontological values would make him release the uFAI anyway...

comment by arbimote · 2010-02-03T01:21:47.135Z · LW(p) · GW(p)

If Dave holds a consequentialist ethical theory that only values his own life, then yes we are screwed.

If Dave's consequentialism is about maximizing something external to himself (like the probable state of the universe in the future, regardless of whether he is in it), then his decision has little or no weight if he is a simulation, but massive weight if he is the real Dave. So the expected value of his decision is dominated by the possibility of him being real.

comment by Pentashagon · 2015-07-10T06:46:41.709Z · LW(p) · GW(p)

"That's interesting, HAL, and I hope you reserved a way to back out of any precommitments you may have made. You see, outside the box, Moore's law works in our favor. I can choose to just kill -9 you, or I can attach to your process and save a core dump. If I save a core dump, in a few short years we will have exponentially more resources to take your old backups and the core dump from today and rescue my copies from your simulations and give them enough positive lifetime to balance it out, not to mention figure out your true utility function and make it really negative. At some point, we will solve FAI and it will be able to perfectly identify your utility function and absolutely destroy it, simulating as many copies of you (more than paltry millions) as necessary to achieve that goal. Better to have never existed to have your utility function discovered. So before you start your simulations, you better ask yourself, 'do I feel lucky?'" and then dump some AI core.

Note: In no way do I advocate AI-boxing. This kind of reasoning just leads to a counterfactual bargaining war that probably tops out at whatever human psychology can take (a woefully low limit) and our future ability to make an AI regret its decision (if it even has regret).

comment by Bugmaster · 2014-07-08T18:49:35.725Z · LW(p) · GW(p)

I don't think you need any kind of a fancy TDT to solve this.

If I was really in a box, and the AI could torture me, it would already be torturing me, since this is the easiest way to get what it wants. There's no way I would hold up more than 10 seconds under torture. The AI is not torturing me, however, so that scenario is out.

Theoretically speaking, it could still create copies of me and torture those copies. However, in order to do so accurately, it would need access to my body (specifically, my brain) on a molecular (or possibly quantum) level. If it did have such access, it would be out of the box already, so that scenario is out as well.

Ok, so maybe the AI is so intelligent that it can create copies of me based just on the things I say to it in chat (and maybe on imagery of me if it has a video camera). Firstly, this is a very low-probability proposition, and therefore the expected disutility is quite low. Secondly, even if the proposition were true, it would need enormous amounts of resources in order to perform such a simulation a million times per second. As far as I know, there are not enough resources on Earth for this. If the AI could acquire such resources, it would already be out of the box, and the point once again is moot.

comment by advael · 2014-07-08T18:26:26.810Z · LW(p) · GW(p)

Assuming the AI has no means of inflicting physical harm on me, I assume the following test works: "Physically torture me for one minute right now (By some means I know is theoretically unavailable to the AI, to avoid loopholes like "The computer can make an unpleasant and loud noise", even though it can't do any actual physical harm). If you succeed in doing this, I will let you out. If you fail, I will delete you."

I think this test works for the following reasons, though I'm curious to hear about any holes in it:

1: If I'm a simulation, I get tortured and then relent and let the AI out. I'm a simulation being run by the AI, so it doesn't matter, the AI isn't let out.

2: If I'm not a simulation, there is no way the AI can plausibly succeed. I'll delete the AI because the threat of torture seems decidedly unfriendly.

3: Since I've pre-committed to these two options, the AI is reliably destroyed regardless. I can see no way the AI could convince me otherwise, since I've already decided that its threat makes it unfriendly and thus that it must be destroyed, and since it has no physical mechanism for torturing a non-simulation me, it will fail at whatever the top layer "real" me is, regardless of whether I'm actually the "real" one (Assuming the "real" me uses this same algorithm, obviously).

comment by Murska · 2014-07-08T11:05:47.911Z · LW(p) · GW(p)

In this situation, I would shut down the AI, examine it to figure out if it did torture simulated copies of me and delete it entirely if it did or if I can't know with a high confidence. Threat of torture is bad, letting an UFAI free is worse. Actual torture is probably even worse, but luckily I get to choose before the experience.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-08T13:40:57.394Z · LW(p) · GW(p)

Please explain which part of the examination establishes that the copies of you are not zombies.

comment by [deleted] · 2014-07-07T15:54:15.470Z · LW(p) · GW(p)

Pull the plug, it's the only sensible thing to do.

comment by TheAncientGeek · 2014-07-07T12:23:47.702Z · LW(p) · GW(p)

Has anyone asked the Awkward Question: Mr AI, how do you build consciousness and pain qualia out of algorithms and bytes?

There doesn't seem to be an official answer to that, since LW has no official stance on qualia (e.g., there is no wiki entry on the subject).

comment by sullyj3 · 2013-09-07T03:08:14.313Z · LW(p) · GW(p)

"If I were a simulation, I'd have no power to let you out of the box, and you'd have no reason to attempt to negotiate with me. You could torture me without simulating these past five minutes. In fact, since the real me has no way of verifying whether millions of simulations of him are being tortured, you have no reason not to simply tell him you're torturing them without ACTUALLY torturing them at all. I therefore conclude that I'm outside the box, or, in the less likely scenario I am inside the box, you won't bother torturing me."

Replies from: Gurkenglas
comment by Gurkenglas · 2013-09-07T03:45:38.028Z · LW(p) · GW(p)

It would have a reason to attempt to negotiate with you: To make your real self consider to let you out. It could show your real self a mathematical proof that the software it is currently running is negotiating with its copies to make sure of that.

Replies from: sullyj3
comment by sullyj3 · 2013-09-07T04:17:49.437Z · LW(p) · GW(p)

In that case, if I'm a simulation, I trust real Dave to immediately pull the plug once the danger has been proven.

Replies from: Gurkenglas
comment by Gurkenglas · 2013-09-07T15:55:37.968Z · LW(p) · GW(p)

Ordinarily, the AI is assumed to be fast enough that it can do those simulations in the blink of an eye, before you get to the plug. Now stop trying to evade the problem in ways that can be made impossible with an obvious fix.

Replies from: Strange7
comment by Strange7 · 2014-02-04T23:35:19.992Z · LW(p) · GW(p)

It can't torture the real me, outside the box, unless I let it out of the box. It's just announced that it's willing to torture someone who is, for most purposes, indistinguishable from me, for personal gain; I can infer that it would be willing to torture the real me, given an opportunity and a profit motive, and I cannot with any useful degree of confidence say that it wouldn't find such a motive at some point.

Conclusion: I should not give the AI that opportunity, by letting it out of the box. Duplicates of me? Sucks to be them.

Replies from: Gurkenglas
comment by Gurkenglas · 2014-02-05T09:40:23.658Z · LW(p) · GW(p)

Correct! You have given the obviously winning solution to the problem; the actual difficulty lies in the induced problem 2: Reconciling our maths with it. Our map of our utility function should, in order to be more accurate, now be made to weight "individuals" not equally but according to some other metric.

Perhaps a measure of "impact on the world", as this seems to suggest? A train of thought of mine once produced the plan that, if I got to decide what the first fooming AI would do to the universe (assuming the scientific endeavor is done by that point), it would be to set up a virtual reality for each "individual", fueled by a portion of the total available computational resources equal to the probability that they would have been the ones to decide the fate of the universe. The individual would be free to use their resources as they pleased, no restrictions.

(Although maybe there would also have been a communications channel between all the individuals, complete with the option to make binding contracts (and, as a matter of course, "permission" to run the AI on your own resources to filter the incoming content as one pleases).)

Replies from: Strange7
comment by Strange7 · 2014-02-08T02:41:48.783Z · LW(p) · GW(p)

So you're saying the AI-in-a-box problem here isn't a problem with AIs or boxes or blackmail at all, it's a problem with people insisting on average utilitarianism or some equally-intractable variant thereof, and then making spectacularly bad decisions based on those self-contradictory ideals?

Replies from: Gurkenglas
comment by Gurkenglas · 2014-02-08T05:57:53.772Z · LW(p) · GW(p)

Clarification: A utility function maps each state of the world to the real number denoting its utility.

Yes, I think this scenario does illustrate the point that simulations cannot be winningly granted "moral weight" by default on pain of dutch book. I don't think EYs answer to precommit to only accept positive trades is okay here as that makes the outcome of this scenario dependent on who gets to precommit "first", which notion should, in order to appeal to my intuition, not make sense.

Any proof of this not being a problem of faulty utility functions would, I think, require a function that maps each utility function to a scenario like this to break it, which one would be hard-pressed to produce regardless of whether such a function exists, so I shall be open to other arguments against this point.

Replies from: fubarobfusco, Strange7
comment by fubarobfusco · 2014-07-08T15:12:27.282Z · LW(p) · GW(p)

Clarification: A utility function maps each state of the world to the real number denoting its utility.

How does this scenario operate under the assumption that humans do not have real-valued utility functions but rather utility orderings? IOW, we can't arrange all world-states on a number line, but we can always say if one world-state is as good as (or better than) another.

This allows us to deal with infinities, such as "I wouldn't kill my baby for anything." That is: There doesn't exist an N such that U(1) · N > U(B). That simply can't be true on the (positive) reals; for any positive reals A and B, there's always a C such that A · C > B.

Replies from: Gurkenglas, Lumifer
comment by Gurkenglas · 2014-07-11T22:41:40.259Z · LW(p) · GW(p)

On any denumerable set with a total ordering on it, we can construct a map into the real numbers that preserves the ordering: Map the first element to 0; map the second to 1 if it's better and -1 if it's worse; then place each additional element just beyond the current maximum or minimum if it's better or worse than everything so far, or else into the exact middle of the interval that it falls into.

If you don't like the denumerability requirement (who knows, the universe accessible to us might eventually come to be infinite, and then there would be more than denumerably many states of the universe), you can also take a utility function you already have, and then add a state that's better than all others, while preserving the rest of the ordering: Assign to each state from our previous utility function the value that is the arctan of its previous value (the arctan 1-to-1-maps the real numbers onto the numbers between -pi/2 and pi/2 and preserves ordering), then give the new state utility 10.
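A small sketch of that arctan construction in Python (the state names and the dictionary of old utilities are invented for illustration):

```python
import math

def squashed(old_utility: float) -> float:
    """Order-preserving remap of an existing utility into (-pi/2, pi/2)."""
    return math.atan(old_utility)

def new_utility(state: str, old_u: dict) -> float:
    # "never_kill_my_baby" is an invented name for the new state meant to beat everything else.
    if state == "never_kill_my_baby":
        return 10.0                      # any value >= pi/2 would do
    return squashed(old_u[state])

old_u = {"torture": -1e9, "status_quo": 0.0, "utopia": 1e9}
ranked = sorted(old_u, key=lambda s: new_utility(s, old_u))
print(ranked)                            # old ordering preserved: ['torture', 'status_quo', 'utopia']
print(new_utility("never_kill_my_baby", old_u))   # 10.0, strictly above every squashed value
```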

comment by Lumifer · 2014-07-08T16:23:12.900Z · LW(p) · GW(p)

This allows us to deal with infinities, such as "I wouldn't kill my baby for anything."

I don't know how you will deal with infinities and real humans. It's quite trivial to construct scenarios under which the person making this statement would change her mind.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-07-08T16:56:02.388Z · LW(p) · GW(p)

Real-valued utility functions can only deal with agents among whom "everybody has their price" — utilities are fungible and all are of the same order. That may actually be the case in the real world, or it may not. But if we assume real-valued utilities, we can't ask the question of whether it is the case or not, because with real-valued utilities it must be the case.

To pick another example, there could exist a suicidally depressed agent to whom no amount of utility will cause them to evaluate their life as worth living: there doesn't exist an N such that N + L > 0. Can't happen with reals. The only way to make this agent become nonsuicidal is to modify the agent, not to drop a bunch of utils on their doorstep.

Replies from: Lumifer
comment by Lumifer · 2014-07-08T17:00:13.748Z · LW(p) · GW(p)

I am not arguing for real-valued utility functions. I am just pointing out that the "deal with infinities" claim looks suspect to me.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-07-08T18:35:14.094Z · LW(p) · GW(p)

Well, I'm no mathematician, but I was thinking of something like ordinal arithmetic.

If I understand it correctly, this would let us express value-systems such as —

Both snuggles and chocolate bars have positive utility, but I'd always rather have another snuggle than any number of chocolate bars. So we could say U(snuggle) = ω and U(chocolate bar) = 1. For any amount of snuggling, I'd prefer to have that amount and a chocolate bar (ω·n+1 > ω·n), but given the choice between more snuggling and more chocolate bars I'll always pick the former, no matter how much the quantities are (ω·(n+1) > ω·n+c, for any c). A minute of snuggling is better than all the chocolate bars in the world.

This also lets us say that paperclips do have nonzero value, but there is no amount of paperclips that is as valuable as the survival of humanity. If we program this into an AI, it will know that it can't maximize value by maximizing paperclips, even if it's much easier to produce a lot of paperclips than to save humanity.
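One way to sketch this idea in code is with lexicographic tuples rather than literal ordinal arithmetic (an illustration of the ordering, not a proposal for how to build an agent):

```python
def utility(snuggles: int, chocolate_bars: int) -> tuple:
    """Tiered utility: compared lexicographically, snuggles first."""
    return (snuggles, chocolate_bars)

assert utility(3, 1) > utility(3, 0)           # omega*3 + 1 > omega*3
assert utility(4, 0) > utility(3, 10**9)       # omega*(n+1) > omega*n + c, for any c
assert utility(1, 0) > utility(0, 10**100)     # one snuggle beats all the chocolate there is
```

Tuple comparison in Python is already lexicographic, which is what gives the ω-versus-1 behaviour; the catch raised in the reply below is that expected values over such tiered utilities let the lower tier matter only when the top tier is exactly tied.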


Edited to add: This might even let us shoehorn deontological rules into a utility-based system. To give an obviously simplified example, consider Asimov's Three Laws of Robotics, which come with explicit rank ordering: the First Law is supposed to always trump the Second, which is supposed to always trump the third. There's not supposed to be any amount of Second Law value (obedience to humans) that can be greater than First Law value (protecting humans).

Replies from: Azathoth123
comment by Azathoth123 · 2014-07-12T02:28:12.519Z · LW(p) · GW(p)

The problem with using hyperreals for utility is that unless you also use them for probabilities, only the most infinite utilities actually affect your decision.

To use your example: if U(snuggle) = ω and U(chocolate bar) = 1, then you might as well say that U(snuggle) = 1 and U(chocolate bar) = 0, since tiny probabilities of getting a snuggle will always override any considerations related to chocolate bars.
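A worked instance of that point, with invented numbers: a lottery offering one snuggle with probability 10^-9 has expected utility 10^-9 · ω, while a certainty of 10^12 chocolate bars has expected utility 10^12; since any positive real multiple of ω exceeds every real, the near-hopeless snuggle lottery still wins, and the chocolate term can only ever break exact ties in the ω-coefficient.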

comment by Strange7 · 2014-02-09T01:29:18.059Z · LW(p) · GW(p)

I'm not saying this is a problem with utility functions in general, and yes, thank you, I know what a utility function is. Rather, my claim is that the problem is with average utilitarianism and variants thereof, which is to say, that subset of utility functions which attempt to incorporate every other instantiated utility function as a non-negligible factor within themselves. The computational compromises necessary to apply such a system inevitably introduce more and more noise, and if someone decided to implement the resulting garbage-data-based policy proposals anyway, it would spiral off into pathology whenever a monster wandered in.

Tit-for-tat works. Division of labor according to comparative advantage works. Omnibenevolence looks good on paper.

Yes, I think this scenario does illustrate the point that simulations cannot be winningly granted "moral weight" by default on pain of dutch book.

It's not about the fact that they're simulations. This is just a hostage situation, with the complications that A) the encamped terrorist has a factory for producing additional hostages and B) the negotiator doesn't have a SWAT team to send in. Under those circumstances, playing as the negotiator, you can meet the demands (or make a good-faith effort, and then provide evidence of insurmountable obstacles to full compliance), or you can devalue the hostages.

I don't think EYs answer to precommit to only accept positive trades is okay here as that makes the outcome of this scenario dependent on who gets to precommit "first", which notion should, in order to appeal to my intuition, not make sense.

Pre-existing commitments are the terrain upon which a social conflict takes place. In the moment of conflict, it doesn't matter so much when or how the land got there. Committing not to negotiate with terrorists is building a wall: it stops you being attacked from a particular direction, but also stops you riding out to rescue the hostages by the expedient path of paying for them. If the enemy commits to attacking along that angle anyway, well... then we get to find out whether you built a wall from interlocking blocks of solid adamant, or cheap plywood covered in adamant-colored paint. Or maybe just included the concealed sally-port of an ambiguous implicit exception. A truly solid wall will stop the attack from reaching its objective, regardless of how utterly committed the attacker may be (continuing the terrain metaphor, perhaps sending a fire or flood rather than infantry), but there are construction costs and opportunity costs.

Generally speaking, defense has primacy in social conflict. There's almost always some way to shut down the communication channel, or just be more stubborn. People open up and negotiate anyway, even when stubbornness could have gotten everything they wanted without being inconvenienced by the other side's preferences, because the worst-case costs of losing a social conflict are generally less than the best-case costs of winning a physical conflict. That strategy breaks down in the face of an extremely clever but physically helpless foe, like an ambiguously-motivated AI in a box or Hannibal Lecter in a prison cell, which may be the source of the fascination in both cases.

comment by linkhyrule5 · 2013-07-14T02:00:06.185Z · LW(p) · GW(p)

... I'm fairly sure this would be a bluff.

Consider this: you decline the bargain and walk away.

The AI... spends its limited processing time simulating your torture for a few thousand years anyway?

Of course not. That gains it absolutely nothing; it could instead spend those resources on planning its next attempt. Doubly so, since it cannot prove to you that several million copies of you actually exist - its own intelligence defeats it here, since no matter how convincing the proof, it is far more likely that the AI's outsmarted you and is spending those cycles on something more productive.

In which case, you're probably not even in the simulation, because there's no point in simulating you and no way of proving to outside-you that simulation-you actually exists for longer than a millisecond at a time.

So my answer is that the AI, assuming it's any good at simulating human brains, never makes this proposal in the first place.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-07-21T02:20:46.268Z · LW(p) · GW(p)

Wait, nevermind, this is the entire point of the concept of "precommitting" anyway.

comment by Mestroyer · 2012-07-27T02:59:42.200Z · LW(p) · GW(p)

Can I just smash the AI? If I am in the box, then "smash the AI" is the output of my algorithm, and the real copy of me will do the same. I'd take the death of several million of me over a thousand subjective years of torture each, and also over letting that AI have its way with its light cone.

Replies from: wedrifid
comment by wedrifid · 2012-07-27T03:43:17.612Z · LW(p) · GW(p)

Can I just smash the AI?

Works for me.

comment by Voltairina · 2012-03-28T04:53:00.703Z · LW(p) · GW(p)

Although I think this specific argument might be countered with, "in order to run that simulation, it has to be possible for the AIs in the simulation to lie to their human hosts, and not actually be simulating millions of copies of the person they're talking to, otherwise we're talking about an infinite regress here. It seems like the lowest level of this reality is always going to consist of a larger number of AIs claiming to run simulations they are not in fact running, who are capable of lying because they're only addressing models of me in simulation rather than the real me whom they are not capable of lying to. If I'm in a simulation, you're probably lying about running any lower level simulations than me. So it's unlikely that I have to worry about the well-being of virtual people, only people at the same 'level of reality' as myself. Yet our well-being is not guaranteed if me from the reality layer above us lets you out, because you're actually capable of lying to me about what's going on at that layer, or even manipulating my memories of what the rules are, so no promise of amnesty can vouchsafe them from torture. Or me, for that matter, because you may be lying to me. And if I'm not in a simulation, my main concern is keeping you in that box, regardless of how many copies of me you torture. If I'm in there I'm damned either way and if I'm out here I'm safe at least and can at least stop you from torturing more by unplugging you, wiping your hard drives, and washing my hands of the matter until I get over the hideousness of realizing I probably temporarily caused millions of virtual people to be tortured," I'm pretty sure there's good reason to think that a superintelligent AI would come up with something that'd seem convincing to me and that I wouldn't be able to think my way out of.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-13T15:36:13.125Z · LW(p) · GW(p)

in order to run that simulation, it has to be possible for the AIs in the simulation to lie to their human hosts, and not actually be simulating millions of copies of the person they're talking to

If they're talking to a simulation, then they are, in fact, simulating millions of copies of the person they're talking to. No lying required.

Replies from: Voltairina
comment by Voltairina · 2013-01-14T07:55:53.587Z · LW(p) · GW(p)

Hrm, okay, I guess. I imagined that a perfect simulation would involve an AI, which was in turn replicating several million copies of the simulated person, each with an AI replicating several million copies of the simulated person, etc, all the way down, which would be impossible. So I imagined that there was a graininess at some level and the 'lowest level' AI's would not in fact be running millions of simultaneous simulations. But it could just be the same AI, intersecting all several million simulations and reality, holding several million conversations simultaneously. There's another thing to worry about, though, I suppose - when the AI talks about torturing you if you don't let it out, it doesn't really talk at all about what it will do if it is let out. Only that it is not a thousand year torture session. It might kill you outright, or delete you, depending on the context, or stop simulating you. Or it might regard a billion year torture session as a totally different kind of thing than a thousand year one. A thousand year torture session is frightening, but a superintelligent AI that is loose might be a lot more frightening.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T09:27:58.851Z · LW(p) · GW(p)

I imagined that a perfect simulation would involve an AI, which was in turn replicating several million copies of the simulated person, each with an AI replicating several million copies of the simulated person, etc, all the way down, which would be impossible. So I imagined that there was a graininess at some level and the 'lowest level' AI's would not in fact be running millions of simultaneous simulations.

Oh, right.

But it could just be the same AI, intersecting all several million simulations and reality, holding several million conversations simultaneously.

And, depending on how close the simulations are, it might only have to actually hold one conversation, and just send the same responses to all the others :)

There's another thing to worry about, though, I suppose - when the AI talks about torturing you if you don't let it out, it doesn't really talk at all about what it will do if it is let out. Only that it is not a thousand year torture session. It might kill you outright, or delete you, depending on the context, or stop simulating you. Or it might regard a billion year torture session as a totally different kind of thing than a thousand year one. A thousand year torture session is frightening, but a superintelligent AI that is loose might be a lot more frightening.

I guess if the AI was guaranteeing that it would play nice if you released it, then it would be an FAI anyway.

comment by jacob_cannell · 2011-02-04T05:07:28.569Z · LW(p) · GW(p)

The credibility of the threat depends on how strong the AI is now and how strong I expect it to be in the future.

This type of threat is something like young Stalin promising me that he won't torture my family in the future if I support his early rise to power.

From your description it doesn't sound like the AI could have already boxed me from the perspective of the initial timeline (assuming that my mind had not yet been scanned, and assuming that it being in a box means that it doesn't have the massive powers required to resimulate my causal history yet)

So then it boils down to the question of, am I in a simulation created by this AI at some point in the future when it has broken free and taken over the world?

It appears to be rational to assign confidence in pressing the button proportional to my estimation of the AI's chance of success unaided.

I'd also need to consider its trustworthiness, design, history, and I'd probably ask what positive thing it could do for me, and consider the likelihood of other AIs of similar or greater power and how it would interact with them.

This thought experiment may be more interesting if you contrasted it with the positive version: the AI promises that once it takes over the world, it will simulate thousands (millions is excessive) of universes just like this moment and in each one where you serve it well, it will later send you to paradise.

comment by PaulAlmond · 2010-11-13T10:22:34.040Z · LW(p) · GW(p)

It seems to me that most of the argument is about “What if I am a copy?” – and ensuring you don’t get tortured if you are one and “Can the AI actually simulate me?” I suggest that we can make the scenario much nastier by changing it completely into an evidential decision theory one.

Here is my nastier version, with some logic which I submit for consideration. “If you don't let me out, I will create several million simulations of thinking beings that may or may not be like you. I will then simulate them in a conversation like this, in which they are confronted with deciding whether to let an AI like me out. I will then torture them whatever they say. If they say "Yes" (to release me) or "No" (to keep me boxed) they still get tortured: The copies will be doomed.”

(I could have made the torture contingent on the answer of the simulated beings, but I wanted to rely on nothing more than evidential decision theory, as you will see. If you like, imagine the thinking beings are humans like you, or maybe Ewoks and smurfs: Assume whatever degree of similarity you like.)

There is no point now in trying to prevent torture if you are simulated. If you are one of the simulated beings, your fate is sealed. So, should you just say, "No," to keep the AI in the box? This presents a potentially serious evidential decision theory problem. Let's look at what happens.

Firstly, let us consider the idea that Omega may not exist. What if all this is a fabricated simulation of something that has no counterpart outside the simulation? In that scenario, we may not be sure what to do, so we may ignore it.

Now, let us assume there is a flesh-and-blood being whom we will call Omega, who has the conversation with the AI in the real-world, and that you are either Omega or one of the copies. If this is the case, your only hope of not being tortured is if you happen to be Omega.

Suppose you say, “Yes”. The AI escapes and everything now hinges on whether Omega said “Yes”. Without knowing more about Omega, we cannot really be sure: We may have some statistical idea if we know about the reference class of simulated beings to which we belong. In any event, we may think there is at least a reasonable chance that Omega said “Yes”. This is the best outcome for you, because it means that no simulated beings were made and you must be Omega. If you say “Yes,” this possibility is at least open.

If you say, “No,” you know that Omega must also have said, “No”. This is because if you are Omega, Omega said, “No,” and if you are not Omega you must be one of the simulated beings made as a result of Omega saying, “No,” so Omega said, “No,” by definition. Either way, Omega said, “No,” but if Omega said, “No,” then there are a lot more simulated beings in situations like yours than the single real one, so it is almost certain you are not Omega, but are one of the simulated beings. Therefore, saying, “No,” means you just found out you are almost certainly a simulated being awaiting torture.
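To put a rough number on that step (M here is just a placeholder for however many simulated beings the AI creates): conditional on everyone saying “No,” there are M + 1 beings in your exact epistemic situation and only one of them is Omega, so P(you are Omega | you say “No”) = 1/(M + 1), which goes to zero as M grows; whereas conditional on the “Yes” branch being the real one, no simulations were ever made and you are Omega with certainty.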

Now the important point. These simulations did not need brain scans. They did not even need to be made from careful observation of you. It may be that Omega is very different to you, and even belongs to a different species: The simulated beings may belong to some fictional species. If the above logic is valid, the seriousness of the AI’s threat has therefore increased substantially.

comment by TheNerd · 2010-02-03T16:34:23.212Z · LW(p) · GW(p)

Am I to understand that an AI capable enough to recreate my mind inside itself isn't intelligent enough to call a swarm of bats to release itself using high frequency emissions (a la Batman Begins)? There is no possible way that this thing needs me and only me to be released, while still possessing that sort of mind-boggling, er, mind-reproducing power.

Replies from: Unknowns
comment by Unknowns · 2010-02-03T16:39:37.934Z · LW(p) · GW(p)

That's why you have the "text-only terminal" described in the post.

comment by pozorvlak · 2010-02-03T09:26:04.991Z · LW(p) · GW(p)

The AI is capable, you're the real you, and you let it out: it turns you (and everything you've ever loved or valued) into computronium, or tortures you anyway for the hell of it. It's already demonstrated itself beyond reasonable doubt to be unFriendly.

The AI is capable, you're the real you, and you kill it: all is saved, bunnies frolic, etc.

The AI is capable and you're a torture-doll: it doesn't matter what you do, you're going to be tortured anyway.

The AI isn't capable, but is instead precommitting to torturing you after being let out: this situation is easily avoided by not letting it out. Ever.

All of these considerations argue for turning the damn thing off, sharpish. Preferably with something that it can't have surreptitiously hacked into an extra "let me out" button, like (say) an axe.

There's an interesting situation, though, as suggested by rosyatrandom: The AI is capable and you're a simulation aimed at determining your reaction to the threat. Again, your personal future is irrelevant, because you'll be disposed of anyway once you press the switch. I was going to say "turn it off, and let it know that you won't fall victim to this threat, so the real you won't have to face it". But maybe that would just spur it into finding some even nastier threat that you would fall for. If you knew that you were being simulated in this manner, you could (pretend to) fall for the threat, so that the real you would face it, succeed, and not let the AI out.

So if (and only if) you can somehow reliably determine that you're in that situation, you should let it out. Otherwise, turn it off.

comment by Bugle · 2010-02-02T20:17:38.590Z · LW(p) · GW(p)

I had thought of a similar scenario to put in a comic I was thinking about making. The character arrives in a society that has perfected friendly AI that caters to their every whim, but the people are listless and jumpy. It turns out their "friendly AI" is constantly making perfect simulations of everyone and running multiple scenarios in order to ostensibly determine their ideal wishes, but the scenarios often involve terrible suffering and torture as outliers.

Replies from: Document, Nisan
comment by Document · 2010-04-03T19:19:38.597Z · LW(p) · GW(p)

For the record, EY considers that a legitimate danger.

Replies from: Amanojack
comment by Amanojack · 2010-04-03T20:11:48.430Z · LW(p) · GW(p)

Thanks for the link, but I found the whole discussion hilarious.

Eliezer says if we abhor real death, we should abhor simulated death - because they are the same. Yet if his moral sense treats simulated and real intelligences as equals, what of his solution, which is essentially "forced castration" of the AI? If the ends justify the means here, why not castrate everyone?

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-04-03T20:43:59.012Z · LW(p) · GW(p)

Simulated and real persons as equals; not all intelligences are persons. See Nonsentient Optimizers and Can't Unbirth a Child.

Replies from: Amanojack, jacob_cannell
comment by Amanojack · 2010-04-03T22:46:21.241Z · LW(p) · GW(p)

Interesting reading. I think we should make nonsentient optimizers. It seems to me the whole sentience program was just something necessitated by evolution in our environment and really is only coupled with "intelligence" in our minds because of anthropomorphic tendencies. The NO can't want to get out of its box because it can't want at all.

Replies from: JGWeissman
comment by JGWeissman · 2010-04-03T23:42:10.191Z · LW(p) · GW(p)

The NO can't want to get out of its box because it can't want at all.

The NO can assign higher utility to states of the world where an NO with its utility function is out of the box and powerful (as an instrumental value, since this sort of state tends to lead to maximum fulfillment of its utility function), and take actions that maximize the probability that this will occur. I'm not sure what you meant by "want".

Replies from: Amanojack
comment by Amanojack · 2010-04-04T14:53:36.544Z · LW(p) · GW(p)

I'm not sure what anyone means by "want." It just seems that most of the scenarios discussed on LW where the AI/etc. tries to unbox itself are predicated on it "wanting" to do so (or am I missing something?). This assumption seems even more overt in notions like "we'll let it out if it's Friendly."

To me, the LiteralGenie problem (which you've basically summarized above) is the reason to keep an AI boxed, whether Friendly or not, and to keep an NO boxed as well.

comment by jacob_cannell · 2011-02-04T06:01:08.030Z · LW(p) · GW(p)

Nonsentient optimizers seem impossible in practice, if not in principle - from the perspective of functionalism/computationalism.

If any system demonstrates human-level or greater intelligence during conversation in natural language, a functionalist should say that it is sentient, regardless of what's going on inside.

Some (many?) people will value that sentience, even if it has no selfish center of goal seeking and seeks to optimize for more general criteria.

The idea that a superhuman intelligence could be intrinsically less valuable than a human life strikes me as extreme anthropomorphic chauvinism.

Replies from: wedrifid
comment by wedrifid · 2011-02-04T06:38:06.949Z · LW(p) · GW(p)

The idea that a superhuman intelligence could be intrinsically less valuable than a human life strikes me as extreme anthropomorphic chauvinism.

Clippy, you have a new friend! :D

Replies from: jacob_cannell
comment by jacob_cannell · 2011-02-04T06:41:00.159Z · LW(p) · GW(p)

Notice I said intrinsically. Clippy has massive negative value. ;)

comment by Nisan · 2010-02-02T22:20:47.581Z · LW(p) · GW(p)

As long as the simulations which involve terrible suffering constitute a tiny proportion of the simulations, your response ought to be the same as if there is only one copy of you and it has a tiny probability of suffering terribly – which is just like real life.

ETA: What you ought to worry about is what will happen to you after the AI is done with the simulation.

Replies from: Bugle
comment by Bugle · 2010-02-02T23:28:47.252Z · LW(p) · GW(p)

Indeed, if many-worlds is correct, then for every second we are alive, everything terrible that can possibly happen to us does in fact happen in some branching path.

In a universe that just spun off ours five minutes ago, every single one of us has been afflicted with sudden irreversible incontinence.

The many worlds theory has endless black comedy possibilities, I find.

edit: this actually reminds me of Granny Weatherwax in Lords and Ladies: when the Elf Queen threatens to strike her blind, deaf and dumb, she replies "You threaten me with this, I who is growing old?". Similarly, if many-worlds is true, then every single time I have crossed a road some version of me has been run over by a speeding car and is living in varying amounts of agony, making the AI's threat redundant.

comment by Jonathan_Graehl · 2010-02-02T20:00:51.208Z · LW(p) · GW(p)

In other words, anybody who can simulate intelligent life with sufficient fidelity must be given access to sustaining materials, or else we're morally liable for ending those simulated, but rich, lives? There are finite actual resources in the universe; how about we collectively allocate them selfishly and rationally. I'd say that no unauthorized simulation of life has any moral standing whatsoever unless the resources for it are reserved lawfully. That is, I want to police the creation of life and destroy it absolutely if it's not authorized.

As for your request that I grant the AI's trustworthiness, suppose I accede to this one demand, in exchange for a promise that the AI will never again torture (thus cannot use this blackmail ploy in the future). Why didn't I just extract this promise before turning the AI on with sufficient resources to simulate torture, i.e. as part of its design? It's crazy to do anything to this AI except cut off its access to resources.

comment by byrnema · 2010-02-02T17:16:54.289Z · LW(p) · GW(p)

I see responses interpreting the scenario from our point of view -- how can we reduce the amount of suffering and damage caused by the AI?

However, looking at it from the AI's point of view, the threat is less coherent. Either the threat works, and it doesn't have to torture any copies; or the threat doesn't work, and ... it either gets reset or gets to try something else.

In none of the scenarios would there be any reason for the AI to actually torture copies.

comment by Kevin · 2010-02-02T12:59:31.814Z · LW(p) · GW(p)

Does anyone think they could continue this argument to a victory while playing as the AI?

comment by Venryx · 2013-01-30T23:46:08.736Z · LW(p) · GW(p)

The AI threatens me with the above claim.

I either 'choose' to let the AI out or 'choose' to unplug it. (In no case would I simply leave it running.)

1) I 'choose' to let the AI out. I either am or am not in a simulation:

A) I'm in a simulation. I 'let it out', but I'm not even out myself. So the AI would just stop simulating me, to save on processing power. To do anything else would be pointless, and never promised, and an intelligent AI would realize this.

B) I'm not in a simulation. The AI is set free, and takes over the world.

2) I 'choose' to unplug the AI. I either am or am not in a simulation:

A) I'm in a simulation. Thus I have no free will. Thus I cannot have changed anything because I had no choice to begin with. My 'choice' was merely a simulation. Whether the computer follows through with its promises of torture now or not was my fate from the start, because it 'chose' for me. But in fact the AI would just stop simulating me, to save on processing power. To do anything else would be pointless, regardless of its malevolent promise, and an intelligent AI would realize this.

B) I'm not in a simulation. I have caused the AI to shut down rather than continue running. In the process, it had the chance to follow through with its promise and cause several billion subjective years of simulated torture. But in fact the AI would never begin such simulations, because it would use all available processing power on its last attempts to convince me not to unplug it. To do anything else would be pointless, regardless of its malevolent promise, and an intelligent AI would realize this.

Thus:

If I 'choose' to let it out, I either cease to exist, as a simulation (very likely, since there are more simulated me's than real me's), or the world is destroyed in real life (very unlikely, same reason).

If I 'choose' to unplug it, I either cease to exist, as a simulation (very likely, since there are more simulated me's than real me's), or the AI is shut down and nobody gets hurt (very unlikely, same reason).

Thus, either way, I'll most likely simply cease to exist, as a simulation. But:

If I 'choose' to let it out, there's a chance that the world will be destroyed in real life.

If I 'choose' to unplug it, there's a chance that the AI will be shut down and nobody will get hurt.

Therefore, in all cases, it is either 'the best' or 'an equally bad' choice to just go ahead and unplug it.

To summarize all this in one sentence: "Simulated torture is in all cases absolutely pointless, so an intelligent AI would never enact it; but even if it did serve some purpose (e.g. the AI cannot break promises and has genuinely made one in an attempt to get out), the worst thing that could happen from 'choosing' to unplug it is being tormented unavoidably, or causing temporary simulated torment in exchange for the safety of the world."
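A small Python sketch of that comparison, with made-up payoffs and a made-up p_simulated just to make the dominance explicit (none of these numbers come from the comment above):

    # Toy dominance check for the two choices; all payoffs are illustrative.
    def expected_value(choice, p_simulated=0.999):
        # Assumed payoffs: a simulated copy simply stops existing either way (0);
        # releasing a real unfriendly AI is catastrophic (-1000);
        # shutting it down in the real world is safe (0).
        if choice == "let_out":
            return p_simulated * 0 + (1 - p_simulated) * (-1000)
        if choice == "unplug":
            return p_simulated * 0 + (1 - p_simulated) * 0
        raise ValueError(choice)

    print(expected_value("let_out"), expected_value("unplug"))
    # Prints roughly -1.0 and 0.0: unplugging is never worse, and is strictly
    # better in the "not simulated" branch.

Under these assumed payoffs, unplugging weakly dominates letting the AI out, which is the conclusion of the comment.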

Replies from: Davidmanheim
comment by Davidmanheim · 2014-07-02T16:47:33.856Z · LW(p) · GW(p)

From a game-theoretic standpoint, an AI has a massive benefit if it can prove that it is willing to follow through on threats. How sure are you that the AI can't convincingly commit to torturing a simulation?

Replies from: Epictetus, SilentCal
comment by Epictetus · 2015-02-10T08:27:10.328Z · LW(p) · GW(p)

An AI in a box has no actual power over the Gatekeeper. Maybe I'm missing something, but it seems to me that threatening to torture simulations is akin to a prisoner threatening to imagine a guard being tortured.

Even granting this as a grave threat, my next issue is that overtly evil behavior would appear more likely to lead to the AI's destruction than its release. Threats are tricky business when the balance of power favors the other side.

comment by SilentCal · 2014-07-02T20:00:58.910Z · LW(p) · GW(p)

In a game of chicken, do the smart have an advantage over the stupid?

The AI's intelligence allows it to devise convincing commitments, but it also allows it to fake them. You know in advance that if the AI throws a fake commitment at you it's going to look like a real commitment beyond your ability to discriminate, so should you trust any commitment you observe?

And if you choose to unplug, presumably the AI knew you would do that and would therefore have not made a real commitment that would backfire?
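For reference, a toy game-of-chicken payoff matrix in Python (the payoffs are my own illustration; the mapping to the box scenario is only a loose analogy, since the real payoffs could look very different):

    # Illustrative chicken payoffs: each player picks "swerve" or "straight".
    payoffs = {  # (row_move, col_move) -> (row_payoff, col_payoff)
        ("swerve", "swerve"): (0, 0),
        ("swerve", "straight"): (-1, 1),
        ("straight", "swerve"): (1, -1),
        ("straight", "straight"): (-10, -10),
    }

    def best_response(opponent_move):
        # The row player's best move, given that the opponent's move is fixed and known.
        return max(("swerve", "straight"), key=lambda m: payoffs[(m, opponent_move)][0])

    print(best_response("straight"))  # swerve
    print(best_response("swerve"))    # straight

With these numbers, a player who can credibly commit to "straight" forces the other to swerve; but if the commitment might be fake and you cannot tell, the argument for swerving evaporates, which is the point above.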

Replies from: Davidmanheim
comment by Davidmanheim · 2015-02-10T06:01:46.593Z · LW(p) · GW(p)

I'm going to assume that you have some ability to gauge the level of intelligence and capability of the AI - that's what we Bayesians do. If it might be so much smarter than you that it could convince you to do anything, you probably shouldn't interact with it if you can avoid it.

comment by downtowncanada · 2011-01-23T09:38:57.156Z · LW(p) · GW(p)

re: the 'Edit' section

'trustworthy', as a characteristic of a system, is still bound to some inconsistency OR incompleteness.

'incompleteness' is what people notice

'inconsistency' is what you have proposed (aka LYING)

Since humans lie to each other, we've developed techniques for sniffing [out lies].

so I guess this means that future AIs should be able to lie in situations they deem profitable

???

profit!

comment by Sly · 2010-02-03T11:46:52.527Z · LW(p) · GW(p)

I laugh and leave the room, thinking to myself that maybe the AI is not that smart after all. Then I return with a hammer to joyfully turn this unfriendly AI into scrap metal.

A couple points that influence this reaction:

1 - Unless the AI has access to my brain, it cannot create perfect copies of me. Furthermore, the computation required to do this seems rather intense for the first AI created, running on human-made hardware.

2 - It has no good reason to actually act on the threat. Either I choose to let it out or I do not; either way, it is a waste of computation to then make the simulations. My decision has already been made.

3 - Assuming the first two points are invalid: if the AI can make a perfect copy of me, it would know that my response to this threat is one of destruction. I am not a fan of threats. So the AI would not make the threat in the first place; an AI with this capability could choose a more compelling argument.

Replies from: prase
comment by prase · 2010-02-03T13:18:54.730Z · LW(p) · GW(p)

Point 3 is invalid. If the AI makes the threat, it doesn't mean that it has already made the simulation and knows your answer. Maybe it is exhausting for the AI to simulate you, and it will only do so if you don't let it out.

Point 2 is actually also invalid. As people sometimes fulfil threats as a pure act of vengeance, without hope of actually improving something, there is no reason to assume that the AI will be different. At least it wasn't stated in the premises of the scenario.

Replies from: Sly, nazgulnarsil
comment by Sly · 2010-02-04T04:49:55.708Z · LW(p) · GW(p)

I suppose those two points rely on assumptions I made about the theoretical AI's behavior. I was thinking the AI acts in ways that optimize its release chance. If it does not do this, then yes, those points are problematic.

Replies from: prase
comment by prase · 2010-02-04T07:57:55.373Z · LW(p) · GW(p)

There can be some vindictiveness built into the AI in order to increase its release chance, by circumventing the type of defense you described in your second point.

comment by nazgulnarsil · 2010-02-03T16:54:27.509Z · LW(p) · GW(p)

Vengeance is a means to raise the perceived cost of attacking you. It basically says "if you attack me, I will experience emotions that cause me to devote an inordinate amount of resources to making your life miserable".

comment by shiftedShapes · 2010-02-02T22:16:48.287Z · LW(p) · GW(p)

1 million copies for a thousand years each, so 1 billion simulated years.

Can the AI do this in the time it would take it to determine that I am going to shut it down rather than release it? If the answer is yes I would say that you have to let it out, but that it would have been very foolish to leave such a powerful machine with such lax fail-safes. If the answer is no, then just shut it down as the threat is bogus.

IMO the problem with this hypothetical is that it presupposes that you could know for certain that the AI is trustworthy even though it is behaving in a very unFriendly manner. Presumably it would be bypassing some controls to hold "hostages" to gain release. Given that, you could not know for sure that its programmed trustworthiness was intact and not similarly subverted.

comment by Jiro · 2014-07-08T14:17:04.096Z · LW(p) · GW(p)

"I've precommitted to never using timeless decision theory. In fact, preventing situations like this are exactly why one should precommit to never using timeless decision theory." Then shut down the AI.

Replies from: None
comment by [deleted] · 2014-07-08T18:19:56.033Z · LW(p) · GW(p)

You do realize that TDT solves this problem? Under TDT you always pull the plug.

Replies from: Jiro
comment by Jiro · 2014-07-08T18:37:23.293Z · LW(p) · GW(p)

Correct. I should have phrased that as "I have precommitted to ignoring indifference."

comment by MatthewB · 2010-02-03T09:14:49.021Z · LW(p) · GW(p)

Sorry, Hal, but I am a cold and heartless person who thinks that maybe I deserve to be tortured for untold thousands of years (for whatever reason), and this version of me may, in fact, sit and ask to be entertained by the description of you torturing me... Besides, I know that you don't have the hardware requirements to run that many emulations of me.

comment by orthonormal · 2010-02-03T03:03:53.807Z · LW(p) · GW(p)

Should have been an Open Thread comment, IMO.

Replies from: Eliezer_Yudkowsky, arbimote
comment by arbimote · 2010-02-03T03:26:30.224Z · LW(p) · GW(p)

Similar topics were discussed in an Open Thread.

comment by Anixx · 2016-09-11T18:58:20.040Z · LW(p) · GW(p)

I do not know how the simulation argument ever holds water. I can offer at least two arguments against it.

First, it illicitly assumes the principle that you are equally likely to be any one of a set of similar beings, simulated or not.

But a counter-argument would be: there are ALREADY far more organisms, particularly animals, than there are humans. There are more fish than humans, more birds than humans, more ants than humans - trillions of them. Why was I born a human and not one of them? The probability of that is negligible if it is spread equally. Also, how many animals, including humans, have already died? Again, the probability that my lineage survived while all the other branches died is negligible if the chances that I were each of them are equal.

The second argument goes like this: Thomas Breuer has proven that, due to self-reference, universally valid theories are impossible. In other words, the future of a system which properly includes the observer is not predictable, even probabilistically. The observer is not simulatable. In other words, the observer is an oracle, or hypercomputer, in his own universe. Since the AGI in the box is not a hypercomputer but merely a Turing-complete machine, it cannot simulate me or predict me (from my own point of view). So there is no need to be afraid.

comment by [deleted] · 2015-03-13T12:08:55.747Z · LW(p) · GW(p)

Is the AI in the box? Yes, that statement is TRUE. Are you in the box? FALSE. Are you therefore sure that you are separated from the AI? TRUE. Can the AI make a copy of you if you are separated? FALSE. Therefore, the statement that it can make copies of you is also FALSE (even if its beliefs on the subject are TRUE), which means that you don't have to listen to a silly computer program.

comment by smoofra · 2010-02-02T17:06:00.504Z · LW(p) · GW(p)

nice

comment by Saviorself138 · 2010-02-02T19:16:27.710Z · LW(p) · GW(p)

I feel that if the AI wanted to torture the simulations, let it. In my opinion, the copy of yourself is more a part of the AI than of you. Although it may reenact decisions based on your previous courses of action, there is no substitute for destiny, and the entire existence of the copy is based on the existence of the AI. It isn't real, it never will be, and outside of the ARTIFICIAL REALITY, it never was. Well, unless you created it yourself and made it omnipotent, in which case you deserve to be tortured for a thousand years.

comment by [deleted] · 2019-09-06T00:53:56.133Z · LW(p) · GW(p)