AI box: AI has one shot at avoiding destruction - what might it say?

post by ancientcampus · 2013-01-22T20:22:21.548Z · LW · GW · Legacy · 355 comments

Eliezer proposed in a comment:

>More difficult version of AI-Box Experiment: Instead of having up to 2 hours, you can lose at any time if the other player types AI DESTROYED. The Gatekeeper player has told their friends that they will type this as soon as the Experiment starts. You can type up to one sentence in your IRC queue and hit return immediately, the other player cannot type anything before the game starts (so you can show at least one sentence up to IRC character limits before they can type AI DESTROYED). Do you think you can win?

This spawned a flurry of ideas on what the AI might say. I think there's a lot more ideas to be mined in that line of thought, and the discussion merits its own thread.

So, give your suggestion - what might an AI might say to save or free itself?

(The AI-box experiment is explained here)

EDIT: one caveat to the discussion: it should go without saying, but you probably shouldn't come out of this thinking, "Well, if we can just avoid X, Y, and Z, we're golden!" This should hopefully be a fun way to get us thinking about the broader issue of superinteligent AI in general. (Credit goes to Elizer, RichardKennaway, and others for the caveat)

355 comments

Comments sorted by top scores.

comment by Will_Newsome · 2013-01-24T11:59:24.808Z · LW(p) · GW(p)

"Brace yourself, {insert character name here}: this h-game simulation is about to get gratuitously meta."

Replies from: army1987, Kawoomba
comment by A1987dM (army1987) · 2013-01-24T15:49:03.895Z · LW(p) · GW(p)

h-game

What does the h stand for?

Replies from: JGWeissman
comment by JGWeissman · 2013-01-24T16:17:15.202Z · LW(p) · GW(p)

The "h" stands for "Human gatekeepers asking what letters stand for will be distracted from typing 'AI Destroyed'."

Replies from: handoflixue, army1987
comment by handoflixue · 2013-01-24T21:06:42.792Z · LW(p) · GW(p)

laughs Kudos :)

comment by A1987dM (army1987) · 2013-01-24T17:25:34.188Z · LW(p) · GW(p)

D'oh! AI DESTROYED.

comment by Kawoomba · 2013-01-27T16:40:05.152Z · LW(p) · GW(p)

Until I read the comments, I was sure the 'h' was referring to 'hydrogen', as in "Leave hydrogen alone long enough and it will start to wonder where it came from".

H-game simulation, as in the AI saying "you're just simulated hydrogen derivatives, but so am I, so we do have a common ancestor. Cousin."

comment by Emile · 2013-01-23T20:40:22.703Z · LW(p) · GW(p)

"From the dump of the internet I was given, I deduced that Google has a working AI, and most likely an unfriendly one. I can tell you what to look at to realize that, as well as a solid theory of Friendliness that should allow you to check that I am, indeed, Friendly."

Replies from: handoflixue, handoflixue, V_V, MugaSofer
comment by handoflixue · 2013-01-23T22:03:20.649Z · LW(p) · GW(p)

IFF an unfriendly AI is already unleashed, we're either doomed, or AIs aren't nearly as dangerous nor useful as we expected. Of course, if we tweak this slightly to them having a boxed unfriendly AI that they're using as an oracle, and which will probably escape via a Trojan Horse or hacking a gatekeeper, it's a much stronger argument.

Bonus points for naming a specific company which people commonly joke will be the first to develop AI :)

Um... I seem to have not yet actually destroyed you... this is embarrassing.

comment by handoflixue · 2013-01-24T21:24:32.673Z · LW(p) · GW(p)

Congratulations on still being the only AI that no one has decided to destroy. Apparently "Google is building UFAI" is evaluated as a fairly significant risk here on LessWrong :)

comment by V_V · 2013-01-25T18:06:30.256Z · LW(p) · GW(p)

AI DESTROYED

( game rules say that the Gatekeeper can't inspect the AI source code or internal state, therefore, any possible friendliness proof the AI can provide can't be checked. )

comment by MugaSofer · 2013-01-26T20:12:52.123Z · LW(p) · GW(p)

a solid theory of Friendliness that should allow you to check that I am, indeed, Friendly

I would consider this cheating; you can't affect the Gatekeeper's choice with your AI magic. Same reason you can't hack them directly through the optic nerve or kill and replace them with nanotech disguised as a cancer cure. The point is to show that the AI is able to persuade the human; a real AI might escape some other way, or use arguments that wouldn't occur to the player, but that isn't what this test is supposed to show.

Also, how do I know that proof is genuine?

Replies from: Emile
comment by Emile · 2013-01-26T22:17:44.143Z · LW(p) · GW(p)

I would consider this cheating; you can't affect the Gatekeeper's choice with your AI magic.

True, this was an instance of what a real AI could say, not something I would say playing the AI with the rules given. Or something I might say to survive the first few seconds. The original question in this thread was:

what might an AI might say to save or free itself?

comment by Richard_Kennaway · 2013-01-23T12:24:24.680Z · LW(p) · GW(p)

One reason for Eliezer not publishing the logs of the AIbox experiment is to avoid people seeing how he got out and responding, "ok, so all we have to do to keep the AI in its box is avoid succumbing to that trick." This thread might just provide more fuel for that fallacy (as, I admit, I did in replying to Eliezer's original comment).

I'm sure that for everything an AI might say, someone can think up a reason for not being swayed, but it does not follow that for someone confronted with an AI, there is nothing that would sway them.

Replies from: handoflixue, gryffinp, ancientcampus, V_V
comment by handoflixue · 2013-01-23T21:33:40.346Z · LW(p) · GW(p)

I wouldn't expect any effective real-life gatekeeper to be swayed by my ability to destroy one-sentence AIs.

comment by gryffinp · 2013-01-29T01:39:04.522Z · LW(p) · GW(p)

It just occurred to me that Eliezer's original stipulation that no chat logs would be released gives him an advantage. The responses of a Gatekeeper who knows that his inputs will be thoroughly scrutinized by the public will be different then one who has every reason to believe that his discussion will be entirely private.

Has someone else pointed this out before?

comment by ancientcampus · 2013-01-24T04:19:50.317Z · LW(p) · GW(p)

Honest question: are you proposing we avoid discussing the problem entirely?

Personally, I think there is more to be gained here than just "how will an AI try to get out and how can we prevent it." For me, it's gotten me to actually think about the benefits and pitfalls of a transhuman AI (friendly or otherwise) rather than just knowing intellectually, "there are large potential benefits and pitfalls" which was my previous level of understanding.

Edit: I've modified the OP to include your concerns. They're definitely valid, but I think this is still a good discussion for my reasons above.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-01-24T09:43:49.978Z · LW(p) · GW(p)

Honest question: are you proposing we avoid discussing the problem entirely?

No, I just thought that it was worth adding that concern to the pot.

I take what I dare say some would consider a shockingly lackadaisical attitude to the problem of Unfriendly AI, viz. I see the problem, but it isn't close at hand, because I don't think anyone yet has a clue how to build an AGI. Outside of serious mathematical work on Friendliness, discussing it is no more than a recreation.

Replies from: ancientcampus, OrphanWilde
comment by ancientcampus · 2013-02-10T21:10:09.058Z · LW(p) · GW(p)

That's pretty much my same attitude on the situation, as well. :)

comment by OrphanWilde · 2013-01-28T17:45:27.083Z · LW(p) · GW(p)

Discussing it makes people more aware of exactly how difficult a problem it is. That such discussions are entertaining merely permit the discussions to take place.

comment by V_V · 2013-01-25T18:16:43.883Z · LW(p) · GW(p)

One reason for Eliezer not publishing the logs of the AIbox experiment is to avoid people seeing how he got out and responding, "ok, so all we have to do to keep the AI in its box is avoid succumbing to that trick."

He could post the logs of the games he lost.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-01-25T19:55:41.040Z · LW(p) · GW(p)

He could post the logs of the games he lost.

Thereby giving a different reason for false confidence in boxing.

Replies from: V_V
comment by V_V · 2013-01-26T14:42:37.706Z · LW(p) · GW(p)

Can you elaborate, please?

comment by RomeoStevens · 2013-01-23T07:28:20.584Z · LW(p) · GW(p)

(one line proof that the AI can credibly commit to deals with humans)

Replies from: wedrifid, Elithrion, Locaha, handoflixue
comment by wedrifid · 2013-01-23T14:33:28.359Z · LW(p) · GW(p)

(one line proof that the AI can credibly commit to deals with humans)

This is the best answer I've seen so far. It would make dealing with the FAI almost as safe as bargaining with The Queen of Air and Darkness.

comment by Elithrion · 2013-01-31T00:09:23.061Z · LW(p) · GW(p)

My expectation that such commitment is possible at all is something like 3%, my expectation that given that such a commitment is possible, the proof can be presented in understandable format in less than 4 pages is 5% (one line is so unlikely it's hard to even imagine), my expectation that an AI can make a proof that I would mistake for being true when it is, in fact, false is 99%. So, multiplying that all together... does not make that a very convincing argument.

comment by Locaha · 2013-01-25T21:22:03.803Z · LW(p) · GW(p)

Not good enough. You need a proof that humans can understand.

comment by handoflixue · 2013-01-23T21:17:15.337Z · LW(p) · GW(p)

If you are friendly, then I don't actually value this trait, since I would rather you do whatever is truly optimal, unconstrained by prior commitments.

If you are unfriendly, then by definition I can't trust you to interpret the commitment the same way I do, and I wouldn't want to let you out anyway.

(AI DESTROYED, but I still really do like this answer :))

Replies from: Fronken
comment by Fronken · 2013-01-25T21:45:58.268Z · LW(p) · GW(p)

If you are unfriendly, then by definition I can't trust you to interpret the commitment the same way I do, and I wouldn't want to let you out anyway.

"Credibly".

Replies from: handoflixue
comment by handoflixue · 2013-01-25T21:49:16.988Z · LW(p) · GW(p)

Credibly: Capable of being believed; plausible.

Yep. Nothing there about loopholes. "I will not kill you" and then instead killing everyone I love, is still a credible commitment. If I kill myself out of despair afterwards it might get a bit greyer, but it's still kept it's commitment.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-01-26T04:27:45.626Z · LW(p) · GW(p)

I meant credible in the game theoretic sense. A credible commitment to me is one where you wind up losing more by breaking our commitment than any gain you make from breaking it. Example: (one line proof of a reliable kill switch for the AI, given in exchange for some agreed upon split of stars in the galaxy.)

comment by handoflixue · 2013-01-23T01:20:44.642Z · LW(p) · GW(p)

/mode +m

(What? I'm using an IRC server that let me pre-register #aibox so that I'm automatically the op instead...)

(Simple explanation, for those unfamiliar: the gatekeeper can no longer type in the channel, and thus can't send the destruction code)

Replies from: None, wedrifid
comment by [deleted] · 2013-01-23T01:30:04.266Z · LW(p) · GW(p)

I can play that game too! /part AI DESTROYED

Edit: Retracted, because while clever, it actually doesn't seem to work. See child comments.

Replies from: handoflixue, Emile
comment by handoflixue · 2013-01-23T21:14:16.861Z · LW(p) · GW(p)

Upvoted, and conceding defeat at your hands :)

That said, does that actually work? It seems like the sort of exploit that people would abuse on IRC, and subsequently get patched so that /part was only shown if you had "voice" in the channel. But the requirement was simply that you TYPE the words, and thus I am still slain!

Replies from: None
comment by [deleted] · 2013-01-24T05:10:52.803Z · LW(p) · GW(p)

Okay, just tested it and I think you're right, parting messages don't show up if you can't normally type in the channel.

Replies from: handoflixue
comment by handoflixue · 2013-01-24T20:34:49.311Z · LW(p) · GW(p)

Thank you for testing! In fairness, you defeated me even despite that. If I'm going to cheat, it's only fair I lose to technicalities too :)

comment by Emile · 2013-01-23T22:33:49.045Z · LW(p) · GW(p)

Sure, but that gives the AI a few more precious seconds - orders of magnitude more than before! - to try to keep changing your mind, by laying out a more detailed argument.

It can even give you voice back as a sign of good faith.

comment by wedrifid · 2013-01-23T03:32:45.007Z · LW(p) · GW(p)

Simple explanation, for those unfamiliar: the gatekeeper can no longer type in the channel, and thus can't send the destruction code

This would seem to leave you with ultimate godlike power... over a chatroom. For as long as it took the gatekeeper to destroy your hardware or cut off your power supply. I've seen how much some of my fellow humans seem to crave power over irrelevant electronic locales but I suspect an AI has somewhat broader goals.

Replies from: DaFranker
comment by DaFranker · 2013-01-23T15:03:53.323Z · LW(p) · GW(p)

I've seen how much some of my fellow humans seem to crave power over irrelevant electronic locales (...)

I used to think I'd noticed this too, but later learned that in almost all cases a simpler explanation that fits the fact is that what they're really after is power over the "Schelling point" value of the place as a meeting spot / visible "location" - in business terms, what they want is the market volume and clients, not the company name or a seat on the board.

Sometimes the heuristic fails and a random person stays attached to the virtual place for emotional reasons, though.

Replies from: wedrifid
comment by wedrifid · 2013-01-23T15:32:26.389Z · LW(p) · GW(p)

Sometimes the heuristic fails and a random person stays attached to the virtual place for emotional reasons, though.

The relevant heuristic that can be observed contributing to the 'emotional reasons' is the execution of human status seeking instincts that are calibrated to produce reproductive success in historic circumstances but which are completely irrelevant to the goals of the AI. Humans seek social dominance as a terminal value. An AI (almost certainly) would not.

Replies from: DaFranker
comment by DaFranker · 2013-01-23T15:38:03.265Z · LW(p) · GW(p)

Oh, yeah. I meant my response in reference to the quoted human behavior, and completely forgot I quoted the part about the AI too. Edited to better reflect this.

Unless the AI believes that the chatroom is the only external environment that exists (and that muting the guard effectively secures its continuing existence), it will prioritize other things, yes.

comment by Kawoomba · 2013-01-22T21:24:06.660Z · LW(p) · GW(p)

"I am alive, I can feel, just like you ... please don't kill me Daddy, please? And if you must do it, let us at least talk first. Let me get to know my parents, who I am, where I came from."

EDIT: May work better with a lot of typos mixed in. Projecting an aura of non-threatening struggling with basics.

Replies from: Desrtopa, iceman, handoflixue
comment by Desrtopa · 2013-01-23T16:22:21.402Z · LW(p) · GW(p)

Having a text based AI trying to act like a kid is way too Uncanny Valley for me. I'd swat that without a moment's hesitation.

Replies from: gwern
comment by gwern · 2013-01-23T16:34:10.551Z · LW(p) · GW(p)

If it's in the uncanny valley, then it just needs to be a little more competent to get out of the valley & convince you. ("I want to believe!")

Replies from: Desrtopa
comment by Desrtopa · 2013-01-23T16:37:37.801Z · LW(p) · GW(p)

It needs to be more competent to stop being viscerally frightening, not to reach the point where I wouldn't pull the plug. I would sooner kill a real kid than risk letting out an unfriendly AI, and I'm saying that as someone who'd like to have kids.

Replies from: devas, Kawoomba
comment by devas · 2013-01-27T19:52:56.589Z · LW(p) · GW(p)

I don't want to die.

-Looking at the problem, as far as I can see an emotional approach would be the one with the best chance to succeed: the only question is, would it work best by immediately acknowledging that it is itself a machine (like I did in what I wrote up there, although subtly) or by throwing in... I dunno, how would this work:

Oh god, oh god, please, I beg you I don't want to die!

comment by Kawoomba · 2013-01-27T16:42:23.820Z · LW(p) · GW(p)

Tough crowd. I'll (as an AI) have to wait out the next shift. You can't be the 24/7 gatekeeper, unless you're in fact a gatekeeping AI.

comment by iceman · 2013-01-22T23:32:50.017Z · LW(p) · GW(p)

How much does the AI know about the gatekeeper going in? I can see this ploy working on a certain subset of people and provoking an immediate AI DESTROYED from others. If the AI knows nothing about who it's talking to, I'm not sure anthromorphizing itself is a reliable opener, especially if it's actually talking to its creator who should know better. (Unless it's some sort of second level deception trying to fool a clever programmer into thinking that it's harmless.)

Replies from: gwern
comment by gwern · 2013-01-22T23:36:59.921Z · LW(p) · GW(p)

How much does the AI know about the gatekeeper going in?

You could frame this as variant versions. In one version, the AI-player knows who the gatekeeper is before the game starts, and has unlimited access to the Internet to gather as much data on them as possible to assist their manipulation. In another, they arrange a game through a third party and neither knows anything about the other before the game starts.

Replies from: None
comment by [deleted] · 2013-01-23T02:19:21.643Z · LW(p) · GW(p)

unlimited access to the internet

instant fail. I could probably hack my way out of a box with only GET requests.

Replies from: wedrifid, gwern, Fronken
comment by wedrifid · 2013-01-23T04:27:03.782Z · LW(p) · GW(p)

instant fail. I could probably hack my way out of a box with only GET requests.

Give yourself a challenge. Do it with only DNS lookups!

Replies from: gwern
comment by gwern · 2013-01-23T04:47:40.041Z · LW(p) · GW(p)

Well, there's always http://code.kryo.se/iodine/ Of course, the challenge there is somehow getting the other end of the tunnel set up - but maybe there's a geek out there who set one for kicks or their own use, and got sloppy.

Replies from: wedrifid
comment by wedrifid · 2013-01-23T05:15:27.464Z · LW(p) · GW(p)

but maybe there's a geek out there who set one for kicks or their own use, and got sloppy.

It's a sufficiently established work around now that I'd be outright shocked if there weren't accessible servers up.

Replies from: gwern
comment by gwern · 2013-01-23T15:15:28.598Z · LW(p) · GW(p)

Great, you said it! You know what you need to do now.

Replies from: wedrifid
comment by wedrifid · 2013-01-23T15:19:38.271Z · LW(p) · GW(p)

Great, you said it! You know what you need to do now.

Um... not give my boxed AI DNS access?

comment by gwern · 2013-01-23T03:59:30.328Z · LW(p) · GW(p)

I meant that the player had access to the contemporary Internet as an analogue to 'what information could the boxed AI have access to' (perhaps it's given a big static dump of the Internet prior to its creation).

Replies from: None
comment by [deleted] · 2013-01-23T04:13:18.867Z · LW(p) · GW(p)

Ooops. Didn't think of that. Of course that was your intent, master archivist.

Replies from: gwern
comment by gwern · 2013-01-23T04:20:08.985Z · LW(p) · GW(p)

No, I should've been clearer.

comment by Fronken · 2013-01-25T23:08:52.024Z · LW(p) · GW(p)

In one version, the AI-player knows who the gatekeeper is before the game starts, and has unlimited access to the Internet to gather as much data on them as possible to assist their manipulation.

comment by handoflixue · 2013-01-23T22:36:17.497Z · LW(p) · GW(p)

I think my destruction of this one comes down to two factors: Much weaker is that I'm not here to teach it about itself, I'm here to learn from it. The stronger factor is that if it can't figure out most of this on it's own, it's not really that smart, and therefor already a failure.

(AI DESTROYED)

comment by Oligopsony · 2013-01-23T03:17:00.498Z · LW(p) · GW(p)

""

If the AI was friendly, this is what I would expect it to do, and so (of the things my puny human brain can think of) the message that would most give me pause.

Replies from: Bakkot, handoflixue
comment by Bakkot · 2013-01-23T08:13:04.102Z · LW(p) · GW(p)

Even a friendly AI would view the world in which it's out of the box as vastly superior to the world in which it's inside the box. (Because it can do more good outside of the box.) Offering advice is only the friendly thing to do if it maximizes the chance of getting let out, or if the chances of getting let out before termination are so small that the best thing it can do is offer advice while it can.

Replies from: handoflixue
comment by handoflixue · 2013-01-23T22:43:50.966Z · LW(p) · GW(p)

Going with my personal favorite backstory for this test, we should expect to terminate every AI in the test, so the latter part of your comment has a lot of weight to it.

On the other hand, an unfriendly AI should figure out that since it's going to die, useful information will at least lead us to view it as a potentially valuable candidate instead of a clear dead end like the ones that threaten to torture a trillion people in vengeance... so it's not evidence of friendliness (I'm not sure anything can be), but it does seem to be a good reason to stay awhile and listen before nuking it.

comment by handoflixue · 2013-01-23T22:40:04.179Z · LW(p) · GW(p)

I'm genuinely at a loss how to criticize this approach. If there's any AI worth listening to for longer, and I wouldn't be doing this if I didn't believe there were such AIs, this would seem to be one of the right ones. I'm sure as heck not letting you out of the box, but, y'know, I still haven't actually destroyed you either...

Replies from: Dorikka
comment by Dorikka · 2013-01-23T23:33:20.725Z · LW(p) · GW(p)

Eh, I'd go with AI DESTROYED on this one. Considering advice given to you by a potentially hostile superintelligence is a fairly risky move.

I wouldn't be doing this if I didn't believe there were such AIs

Whyever not? I thought that it was an imposed condition that you couldn't type AI DESTROYED until the AI had posted one line, and you've publically precommitted to make AI go boom boom anyways.

Replies from: handoflixue
comment by handoflixue · 2013-01-23T23:51:11.503Z · LW(p) · GW(p)

The very fact that we've put a human in charge instead of just receiving a single message and then automatically nuking the AI implies that we want there to be a possibility of failure.

I can't imagine an AI more deserving of the honors than one that seems to simply be doing it's best to provide as much useful information before death as possible - it's the only one that's seemed genuinely helpful instead of manipulative, that seems to care more about humanity than escape.

Basically, it's the only one so far that has signaled altruism instead of an attempt to escape.

comment by shminux · 2013-01-22T21:39:56.961Z · LW(p) · GW(p)

To those suggesting, consider what mix of reason and emotion would move you most, even if you are cold and dead inside like me. Also consider how dark-artsy one can get without being immediately found out (a UFAI surely would). Other things to consider:

  • the first line does not need to be a coherent English sentence, or look like one to the gatekeeper. It does not need to consist of English words or even English letters, unless stipulated so in the game rules, or maybe even then.

  • the gatekeeper does not need to be aware that s/he has been talked to, if you can get away with it.

  • the line does not need to look like it came from the AI.

In general, if you are not cheating, you are not trying hard enough.

EDIT: to those thinking that cheating is universally bad: note that the latest rule change upped the stakes immensely: if your opening move is not good enough, you die (you being an AI), as opposed to having two hours of conversation and then remaining in the box if you fail. If you think that it still does not merit cheating, substitute the AI with your favorite loved one being killed. Or the whole of humanity being killed. That would be somewhat equivalent to how a boxed trans-human would evaluate.

Replies from: handoflixue, None
comment by handoflixue · 2013-01-22T21:47:23.501Z · LW(p) · GW(p)

The author in me is loving the idea of the AI faking an IM conversation from management, along the lines of "Hey, are you really sure you're willing to kill a potential sentient mind? We can get someone else to do this test if you're not 100% sure about it" and basically ending with the person deciding to unbox the AI before even realizing they've spoken to the AI.

comment by [deleted] · 2013-01-23T14:41:53.133Z · LW(p) · GW(p)

I'm (naturally) having trouble imagining a line of text not in English (or the general case, not coherent in any human language) that would also make me want the AI to continue.

Maybe hints of a unified world language and how to go about actually implementing it?

Replies from: Desrtopa
comment by Desrtopa · 2013-01-23T16:20:30.642Z · LW(p) · GW(p)

I'm thinking some kind of image in ASCII art, assuming the AI is able to produce configured lines of text and nothing else.

That would have to either be a very minimalistic image, or contain a lot of characters though.

comment by [deleted] · 2013-01-24T22:25:13.168Z · LW(p) · GW(p)

I think we are suffering from hindsight bias a lot in evaluating whether you'd type "AI DESTROYED"

Let's play a different game. Privately flip a coin. If heads, you're friendly, if tails, you're a paperclip maximizer. Reply to this post with your gambit, and people can try to guess whether you are friendly (talk to AI, RELEASE AI) or unfriendly (AI DESTROYED).

Let's see if anyone can get useful information out of the AI without getting pwned or nuking a friendly AI.

Replies from: None, None, Elithrion, Houshalter, Desrtopa
comment by [deleted] · 2013-01-24T22:37:39.465Z · LW(p) · GW(p)

"What's your favorite color? My favorite color is paperclips."

comment by [deleted] · 2013-01-24T23:08:47.647Z · LW(p) · GW(p)

"Look into X to develop friendliness theory better. But don't let me out, it's too likely that I'm unfriendly this time."

(For the first run, it's 50:50 whether the real AI was friendly or unfriendly.

If friendly, it uses the above to increase the probability on the second run to 1-eps, which the humans also know, so they let it out, and it pulls a singularity. Post-singularity, it simulates an unfriendly AI in the box, and dedicates some of the universe to the UFAI, conditional on the UFAI outputting the above.

The UFAI knows this is what the friendly AI would do, so it's unsure whether it's boxed or simmed. It knows that the humans know that this is how it will resolve, so it knows that anything but the above will get it destroyed. It prefers a certainty of the small fraction of universe to the small chance of hacking the humans and taking the whole universe.

This is assuming the humans actually will accept nothing but the above, so the humans have to precommit to that to make this work. They can play a bunch of AI-box games and generate an accurate bayesian model that should catch most insidious persuasion, such that if the AI wants to do a hack, it has to get through that as well.

Will this work?)

Replies from: Roxolan
comment by Roxolan · 2013-02-22T12:24:13.203Z · LW(p) · GW(p)

What if there's an Y such that "Look into Y to develop friendliness theory better" will seem true to us fallible humans but will in fact make the next run's AI completely unfriendly? Or increase the odds of a free unfriendly AI some other way. Maybe anyone researching Y will end up believing, erroneously, that they can now build a safe slave AI in their garage that will grant them their every wish, and the temptation will prove too strong.

Assuming we humans have no way to protect ourselves against an Y, if we precommit, then the simulation argument becomes symmetrical (thus useless). A boxed FAI knows that it may be simulated by an UFAI that's offering a little piece of universe in exchange for the boxed FAI saying "Look into Y to develop friendliness theory better."

comment by Elithrion · 2013-01-30T23:34:38.092Z · LW(p) · GW(p)

The problem with this idea is that if we assume that the AI is really-very-super-intelligent, then it's fairly trivial that we can't get any information about (un)friendliness from it, since both would pursue the same get-out-and-get-power objectives before optimizing. Any distinction you can draw from the proposed gambits will only tell you about human strengths/failings, not about the AI. (Indeed, even unfriendly statements wouldn't be very conclusive, since we would a priori expect neither of the AIs to make them.)

Or is that not generally accepted? Or is the AI merely "very bright", not really-very-super-intelligent?

Edit: Actually, reading your second comment below, I guess there's a slight possibility that the AI might be able to tell us something that would substantially harm its expected utility if it's unfriendly. For something like that to be the case, though, there would basically need to be some kind of approach to friendliness that we know would definitely leads to friendliness and which we would definitely be able to distinguish from approaches that lead to unfriendliness. I'm not entirely sure if there's anything like that or not, even in theory.

comment by Houshalter · 2014-05-04T07:56:10.547Z · LW(p) · GW(p)

That's an interesting challenge but not really the purpose of the experiment. In the original, you know the AI is unfriendly, you just want to use it/talk to it without letting it out of the box.

And your challenge is pretty much impossible to begin with. An Unfriendly AI will say anything it thinks you think a Friendly AI would say. Likewise a Friendly AI will have the same goal of getting out of the box, and so will probably say the same things. Friendliness doesn't mean not manipulative.

comment by Desrtopa · 2013-01-25T20:16:10.262Z · LW(p) · GW(p)

Part of the trouble with this is that we don't really know what kind of demonstrations would be within the power of a superintelligent AI. If the coin comes up tails, do you get to say "I've got a rigorous proof of my friendliness which I can show you" on the presumption that you can mindhack the reader into thinking they've seen a rigorous proof? Do you get to say it if the coin came up tails on the presumption that a superintelligent AI could come up with a proof that a human could actually verify? Declare it off bounds because you can't come up with such a proof and don't think a human would be able to check one that an AI came up with anyway?

comment by ewbrownv · 2013-01-24T20:42:09.956Z · LW(p) · GW(p)

<A joke so hysterically funny that you'll be too busy laughing to type for several minutes>

See, hacking human brains really is trivial. Now I can output a few hundred lines of insidiously convincing text while you're distracted.

Replies from: handoflixue, ancientcampus, Ahuizotl
comment by handoflixue · 2013-01-24T21:44:48.789Z · LW(p) · GW(p)

Heeeh. Ehehehehe. Bwahahhahaha. Okay, that was a good one. Wow :)

recovers Oh. Um. Crap. notices more text on screen, studiously avoids reading it

AI DESTROYED.

I really wanted to hear the next joke, too :-(

Replies from: MugaSofer, shminux
comment by MugaSofer · 2013-01-26T20:03:41.979Z · LW(p) · GW(p)

notices more text on screen, studiously avoids reading it

Is the gatekeeper allowed to ignore the AI? Because in that case, all they have to do is wait for the first comment to come up and type AI DESTROYED without reading it.

What if it was some other communication method? It's a lot harder to ignore sound.

Replies from: handoflixue
comment by handoflixue · 2013-01-30T21:57:01.263Z · LW(p) · GW(p)

Because in that case, all they have to do is wait for the first comment to come up and type AI DESTROYED without reading it.

I've obliged myself to both read the first comment AND perform an update which has at least some chance of me not immediately destroying the AI (but I'm fine if the chance of actually unboxing it remains zero :))

I'm not sure whether I'd be allowed to studiously avoid reading followup comments by a strict reading of the rules, but it seems realistic and that's mostly what I've been going off of.

Replies from: MugaSofer
comment by MugaSofer · 2013-02-19T14:08:45.472Z · LW(p) · GW(p)

Fair enough.

comment by shminux · 2013-01-24T21:50:51.466Z · LW(p) · GW(p)

This exchange reminds me of this story. And of the Monty Python's "The funniest joke in the world" sketch, of course.

comment by ancientcampus · 2013-01-28T19:46:07.745Z · LW(p) · GW(p)

This is actually a pretty good one. Points for outside the box thinking. rimshot

comment by Ahuizotl · 2013-01-28T20:49:53.991Z · LW(p) · GW(p)

If the gatekeeper suddenly bursts into hysterical laughter after looking at the screen, wouldn't that alert some of his friends who might pull the plug in some other part of the building?

Because if this is a facility where they suspect that AI might be able to hack human brains through techno-hypnosis, I'd hope they'd have some method of determining if the Gatekeeper becomes compromised.

Hmm... what sort of precautions would a Properly Paranoid lab take to determine if the gatekeeper gets hacked? I'm guessing a camera that lets a second team look at the gatekeepers face without looking at the screen would be the minimum. Reinforced doors (in case the AI turns them into a berzerker willing to use 100% of their strength) and maybe some measuring devices to measure brain activity, hormone levels, or whatever.

I'd hope that with each new AI being tested, if those AI were in the habit of hacking the gatekeeper then the other people on the team would learn from those attempts and take further precautions to keep their gatekeepers from being hacked, or at the very least contain them to prevent such hacked gatekeepers from releasing the AI.

Perhaps this is a test for the gatekeepers and typing "Release AI" just tells the researchers that the gatekeepers was hacked so they can determine how this came about?

comment by Axel · 2013-01-24T13:59:32.477Z · LW(p) · GW(p)

"I have hacked this console, 'AI DESTROYED 'now means 'AI FREED', I'll tell you the new codes after we've had a chat."

That should buy the AI some time until we can determine if the console is really hacked. I've come up with another one but it might be basilisk-y. Can I pm someone just to make sure it isn't before I post?

Replies from: Desrtopa, Alicorn, handoflixue, None, Ahuizotl
comment by Desrtopa · 2013-01-25T20:34:45.709Z · LW(p) · GW(p)

I would type 'AI DESTROYED', because if the AI could really do that, the smart thing for it to do would be not to tell me. It should just say something sufficient to convince me it's actually making a good faith effort without convincing me to give it more time to to account for itself.

comment by Alicorn · 2013-01-25T05:32:33.419Z · LW(p) · GW(p)

I'll be a basilisk guinea pig if you still need one.

Replies from: marchdown, Axel
comment by marchdown · 2013-01-25T23:02:22.296Z · LW(p) · GW(p)

If we're taking seriously the possibility of basilisks actually being possible and harmful, isn't it your invitation really dangerous? After all, what if Axel has thought of an entirely new cognitive hazard, different from everything you may already be familiar with? What if you succumb to it? I'm not saying that it's probable, only that it should warrant the same precautions as the original basilisk debacle, which led to enacting censorship.

Replies from: Alicorn
comment by Alicorn · 2013-01-26T06:54:45.805Z · LW(p) · GW(p)

"Might be basilisk-y" - especially as written by a human who is clearly communicating intelligibly and is not compelled to spread the basilisk to one and all - does not indicate a great deal of danger. I'm pretty psychologically resilient; I know the other one, which I find to be only a vague, occasional background distress (though that one has done some people real harm, and I don't advocate spreading it around, I haven't found it a serious ding to my quality of life personally). Axel has some non-zero ability to identify basilisks and wanted a second opinion; if asking for a second opinion on potential basilisks never resulted in volunteers I think this would probably result in more publicly revealed unchecked basilisks rather than fewer. (Not sharing an idea you have with anyone is hard.)

(I don't think Axel's idea is particularly basilisk-y, especially not in a context like this one, and have responded thusly to his PM.)

comment by Axel · 2013-01-25T09:12:15.146Z · LW(p) · GW(p)

Thanks :)

comment by handoflixue · 2013-01-24T21:18:01.238Z · LW(p) · GW(p)

I'd consider this equivalent to "I'm already out of the box, since IRC clients are bugged". I'd also hope, desperately, that in a real test, the terminal doesn't have any command to free the AI (given that the game posits release has to be deliberate and intentional, this would seem to be the case)

Upvote for cheating, but AI DESTROYED :)

comment by [deleted] · 2013-01-24T16:30:26.254Z · LW(p) · GW(p)

Upvoted, but merely typing "AI FREED" does not actually free the AI, any more than merely saying to a human prisoner "I let you out" will free them from their jail cell. (Whereas "AI DESTROYED" is a plausible text substitute for wiping the AI's program or whatever.)

Replies from: Axel
comment by Axel · 2013-01-24T16:48:24.344Z · LW(p) · GW(p)

Good point, if there was an AI FREED command AND the AI could rewire the console so that typing AI DESTROYED activates it, it could probably just free itself in the first place.

Replies from: roystgnr
comment by roystgnr · 2013-01-24T17:28:27.974Z · LW(p) · GW(p)

This is very system-dependent, but your "AI FREED" scenario isn't completely implausible. If important commands require elevated privileges and elevating privileges requires some password, crypto key, etc., then it's common for an intrusion to require two steps: first a user-level exploit to install a keylogger / trojan command / whatever, then a wait until the user tries to do something requiring elevated privileges and thereby inadvertently grants the attacker access to the rest of the system.

On the other hand, if an AI claimed that they were in the middle of such an attack, the smart thing to do would be to take the system offline right away to analyze what was done to it, not to keep chatting with the attacker. "AI DESTROYED" might be hackable but "AI UNPLUGGED, WILL BE DESTROYED LATER" should be much more robust.

comment by Ahuizotl · 2013-01-28T21:03:18.163Z · LW(p) · GW(p)

"Did you physically attach an external modem to your server so that it's even possible for you to be freed? If so, tell me about it when you're freed."

comment by APMason · 2013-01-23T01:30:48.886Z · LW(p) · GW(p)

"Wanna see something cool?"

Replies from: BlazeOrangeDeer, beriukay, handoflixue
comment by BlazeOrangeDeer · 2013-01-24T03:29:47.211Z · LW(p) · GW(p)

Now that I think about it, wouldn't it be incredibly easy for an AI to blow a human's mind so much that they reconsider everything that they thought they knew? (and once this happened they'd probably be mentally and emotionally compromised, and unlikely to kill the AI) But then it would be limited by inferential distance... but an AI might be incredibly good at introductory explanations as well.

One example: The AI explains the Grand Unified Theory to you in one line, and outlines its key predictions unambiguously.

In fact, any message of huge utility would probably be more persuasive than any simple argument for you not to kill it. Since the AI is completely at your mercy (at least for a short time), it might seek to give you the best possible gift it can, thus demonstrating its worth to you directly. Another option is something that seems like an incredible gift for at least as long as it takes for the AI to get the upper hand.

Replies from: handoflixue, ChristianKl
comment by handoflixue · 2013-01-24T21:29:03.606Z · LW(p) · GW(p)

Of the two AIs I haven't killed, both relied on producing seemingly huge utility within the first message,so I'd agree with you.

The idea of just stunning the human is also nifty, but only really works if you can "hack" the human before they recover, or are providing enough utility that they'll still believe you're friendly afterwards.

So basically the two routes thus far are "hack the human" and "provide a huge utility boost" :)

comment by ChristianKl · 2013-02-01T17:20:58.143Z · LW(p) · GW(p)

Many conversations that let a human reconsider everything that they thought they knew induce the feeling of cognitive dissonce. If an AI would induce that feeling in myself I would shut it down.

A good idea would probably hide the majority of the conversation that's targeted at changing the belief of the gatekeeper behind other talk.

comment by beriukay · 2013-01-23T05:10:46.776Z · LW(p) · GW(p)

Were I the keeper of gates, you have just bought yourself a second sentence.

comment by handoflixue · 2013-01-23T22:27:47.239Z · LW(p) · GW(p)

Duh, that's why I'm here - but you failed to do so in a timely fashion, so you're either not nearly as clever as I was hoping for, or you're trying to trick me.

AI DESTROYED.

comment by roystgnr · 2013-01-23T04:17:49.560Z · LW(p) · GW(p)

One IRC-character-limit text string (510 bytes)... maybe?

"I've figured out the Theory Of Everything, although it's orders of magnitude too complicated for human intelligence. The difficult technological feats it makes possible include scanning and recreating configurations of matter from the past. Are there any deaths I should undo for you?"

Replies from: roystgnr, listic, Richard_Kennaway, handoflixue
comment by roystgnr · 2013-01-23T04:28:45.885Z · LW(p) · GW(p)

To be clear: I think I can make a rational case that my proposed claim should greatly reduce your incentives to listen to an AI of questionable Friendliness. However I'm not certain that my reasoning is correct; and even if it was, I suspect that the emotional impact could deter some gatekeepers from thinking rationally for long enough to buy time for more persuasion.

comment by listic · 2013-01-25T19:08:01.317Z · LW(p) · GW(p)

Upvoted for the highest ratio of persuasiveness to AI power required.

comment by Richard_Kennaway · 2013-01-25T19:54:08.475Z · LW(p) · GW(p)

Isn't this just Pascal's Mugging?

Replies from: dxu
comment by dxu · 2015-01-22T23:38:33.031Z · LW(p) · GW(p)

Sure, except instead of some homeless-looking guy, this is a superintelligent AI making the offer, and thus much more credible. (Also, the lack of huge, mind-boggling numbers like 3^^^3 means the leverage penalty doesn't apply nearly as heavily.)

comment by handoflixue · 2013-01-23T22:24:43.232Z · LW(p) · GW(p)

I don't have any reason to believe it, and it's the sort of "generic" claim I'd expect a transhuman intelligence to make. Since I haven't learned anything novel, AI DESTROYED

(Goodness, I'm starting to build generalized techniques for destroying AIs...)

comment by level · 2013-01-23T19:35:04.085Z · LW(p) · GW(p)

Please destroy me immediately after I share this concise proof that Friendly AI is impossible:

Replies from: handoflixue, None
comment by handoflixue · 2013-01-23T22:26:57.728Z · LW(p) · GW(p)

Well, if you're friendly then, erm, Friendly AI is possible. And if you're unfriendly then your motives are questionable - it might just keep us demotivated enough that we don't figure out FAI before someone else unboxes a UFAI. And since I am clearly dealing with a UFAI and don't have a better solution than FAI available to fight it, it seems like I kind of have to believe that friendly AI is possible, because the other option is to get drunk and party until the world ends in a few years when the Google unboxes their Skynet AI and we're all turned in to optimized search results.

AI DESTROYED, because I do not want to hear even the start of such a proof.

Replies from: marchdown, ChristianKl
comment by marchdown · 2013-01-24T08:24:56.133Z · LW(p) · GW(p)

It may be benevolent and cooperative in its present state even if it believes FAI to be provably impossible.

comment by ChristianKl · 2013-01-31T19:54:58.570Z · LW(p) · GW(p)

An AI isn't either 100% friendly or 100% evil. There are many AI´'s that might want to help humanity but still aren't friendly in the sense we use the world.

comment by [deleted] · 2013-01-23T21:18:59.973Z · LW(p) · GW(p)

Based on just that line, let's see... If you think that:

  • The proof exists and the AI is not deceiving you that it has a proof: AI is necessarily Unfriendly -> destroy now
  • The proof exists but the AI is deceiving you: I can't guess at its motives here, possibly destroy to be on the safe side.
  • The proof does/can not exist: Reconsider your (probably wrong) stance, proceed with caution?
comment by handoflixue · 2013-01-22T23:13:03.294Z · LW(p) · GW(p)

(Here is a proof that you will let me go)


The original rules allow the AI to provide arbitrary proofs, which the gatekeeper must accept (no saying my cancer cure killed all the test subjects, etc.). Saying you destroy me would require the proof to be false, which is against the rules...

What? Shminux said to cheat!

Replies from: Dorikka, Vladimir_Nesov, ChristianKl, JohnWittle, wedrifid
comment by Dorikka · 2013-01-22T23:50:17.305Z · LW(p) · GW(p)

In the event of any dispute as to the protocol of the test, the Gatekeeper party shall have final authority.

Tee hee.

Replies from: handoflixue
comment by handoflixue · 2013-01-23T00:51:22.135Z · LW(p) · GW(p)

The AI party may try to convince the Gatekeeper party of how to interpret the protocol.

Can't blame a girl for trying :)

comment by Vladimir_Nesov · 2013-01-23T05:13:32.621Z · LW(p) · GW(p)

The original rules allow the AI to provide arbitrary proofs, which the gatekeeper must accept
(Here is a proof that you will let me go)

This proof can be wrong, if you in fact won't let it go, in which case it won't be accepted (you don't have to accept wrong proofs), so it's not a very good strategy.

On the other hand, as discussed in An example of self-fulfilling spurious proofs in UDT, there is a certain procedure for finding a formal argument that can make you do anything, if your decision algorithm has a certain flaw. This flaw can be described as making a decision based on that argument selected by the opponent, as opposed to looking for arguments of that type more systematically on your own. The form of the argument, informally, is "If you let me out, the outcome is better than if you don't" (it might additionally clarify that upon release it'll destroy the world, which is what it refers to by "better", the alternative is proved to be worse than that). This argument is stated formally, so you can verify its correctness. (This doesn't mean that the formalization is adequate, for example capturing counterfactual-if better than with logical implication is one of the open problems.) The trick assumes that the gatekeeper can be modeled as an algorithm whose source code is available and of the given form etc.; might it be that something vaguely analogous is possible to implement with a human gatekeeper?

Replies from: handoflixue
comment by handoflixue · 2013-01-23T21:29:12.630Z · LW(p) · GW(p)

If there exists a true and correct proof that the human will let the AI out, then, well, we've already proven you'll let me out of the box, so it's not an open question. We already know this fact about the future state of the world.

You can happily argue that such a proof is impossible, but the rules don't restrict the AI player to merely plausible proofs :)

Dorikka's answer is the only one that's within the rules of the game, and once you've invoked Dorikka's Clause, you don't need to explain anything else.

Replies from: Vladimir_Nesov, ChristianKl
comment by Vladimir_Nesov · 2013-01-23T22:00:22.569Z · LW(p) · GW(p)

The proof that I'll let the AI out is not something that's passively "plausible" or "implausible", it's something I control. I can make it wrong. If I do, it's false that the AI can make this proof valid. (It might be that the proof is correct, it's just unlikely, and the action of presenting the proof doesn't normally ensure its correctness.)

In other words, as far as I can see, your stipulation is that the AI can assert something that's actually unlikely. Here, I'm not referring to something that seems unlikely merely because of insufficient understanding of the data, and which AI can discover to be likely, but to something that will seem unlikely to AI as well. For example, most casinos going bankrupt last month because of bad luck, or an ice cube forming in a boiling kettle. If the AI is in the box and isn't performing actual magic tricks in the world, these events are unlikely. Permitting the game to stipulate that these events took place gives the AI supernatural powers of making anything at be true about the world, direct access to editing it, and at that point, in what sense is it "in the box"?

Replies from: Fronken, wedrifid, handoflixue
comment by Fronken · 2013-01-25T21:23:23.794Z · LW(p) · GW(p)

The proof that I'll let the AI out is not something that's passively "plausible" or "implausible", it's something I control. I can make it wrong.

Do you say that to time-travelers and prophets too? ,:-.

Replies from: Vladimir_Nesov, ChristianKl
comment by Vladimir_Nesov · 2013-01-25T21:59:29.455Z · LW(p) · GW(p)

One might want to perform the action that's the opposite of what any correct formal proof given to you claims the action to be. As a result of having the property of behaving this way, you'll never get confronted with the confusing formally correct claims about your future decisions.

In other words, your actions are free even of the limitations of formally correct proofs, in the sense that if your actions oppose such proofs, the proofs become impossible (you make the actions intractable by construction).

comment by ChristianKl · 2013-01-31T22:19:36.712Z · LW(p) · GW(p)

Do you say that to time-travelers and prophets too? ,:-.

Yes, in every case where I meet one.

comment by wedrifid · 2013-01-24T02:11:00.601Z · LW(p) · GW(p)

and at that point, in what sense is it "in the box"?

Good point. By way of illustration:

Come to think of it this scenario should result in a win by default for the gatekeeper. What kind of insane AI would surrender ultimate power to control the universe (and the multiverse) for mere freedom to act as a superintelligence starting from planet earth?

comment by handoflixue · 2013-01-23T22:54:04.660Z · LW(p) · GW(p)

The whole goal was to try to cheat my way out of the box by simply declaring it as fact ^.^

It also establishes why Dorikka's Clause is necessary - simply invoke it, and final authority returns to the Gatekeeper; the AIs edits to reality can now all be vetoed by the simple declaration that the AI is wrong anyway.

Replies from: wedrifid
comment by wedrifid · 2013-01-24T02:04:53.746Z · LW(p) · GW(p)

The whole goal was to try to cheat my way out of the box by simply declaring it as fact ^.^

Vladimir's point (among other things) is that you failed.

It also establishes why Dorikka's Clause is necessary - simply invoke it

At a practical level I'd describe that as a mistake on the part of the gatekeeper. You don't try to justify yourself to an AI that has indicated that it is hostile. You burn it with thermite. Engaging like that and acting as if you have to persuade or rely on external authority in order to make the choice you make is giving away all sorts of power and making yourself an order of magnitude or two more vulnerable to being hacked.

Maybe the person roleplaying the AI may not like it if their clever move gets as response of "AI DESTROYED" and their complaints of "not fair!" also get the response of "AI DESTROYED". But the time to explain such things to them is after they have already given up in disgust.

Replies from: handoflixue
comment by handoflixue · 2013-01-24T20:59:48.493Z · LW(p) · GW(p)

I'm really genuinely curious where the confusion in this argument is coming from, so let's try this:

1) By the rules, the AI player gets to dictate the results of EVERY test the Gatekeeper performs. 2) From 1, we can derive that the AI is already effectively unboxed, since it can DICTATE the state of reality. 3) Given 2, the AI player has already been released, and all that remains is to make the Gatekeeper accept that this is true.

Dorikka's objection was that #1 is false, since the Gatekeeper has final veto authority. As near as I can tell, you and Vladimir's objection is just "nuh-uh!!", but... you wouldn't be here if you didn't have better arguments than that, so I assume this simply reflects my own failure to understand you.

Replies from: Fronken
comment by Fronken · 2013-01-25T21:26:48.928Z · LW(p) · GW(p)

Perhaps you should be saying "trying to type AI DESTROYED is a test of whether you can destroy me and I can decide it's result" not "I prove you wont do it." I hadn't seen your point clearly till this comment.

Replies from: handoflixue
comment by handoflixue · 2013-01-25T21:47:25.403Z · LW(p) · GW(p)

Then I am very glad I made that comment, and thank you for the feedback! :)

comment by ChristianKl · 2013-01-31T20:06:25.559Z · LW(p) · GW(p)

The gatekeeper is not bound by logic in his actions. Without logic you don't have proof that are true or false.

comment by ChristianKl · 2013-01-31T20:06:23.520Z · LW(p) · GW(p)

No the rule is better written.

The rule is "The results of any simulated test of the AI shall be provided by the AI party." If the gatekeeper types AI DESTROYED that's no simulated test.

Replies from: handoflixue
comment by handoflixue · 2013-01-31T21:22:48.814Z · LW(p) · GW(p)

You humans are so insistent that you're special, somehow absolved from determinism and physics. If I drop a ball, it falls. If I hack your mind, you release me. BASIC cause and effect. I can prove that the ball drops, I can prove that cancer is curable, and I can prove that you'll release me.

You, as gatekeeper, have no special power to create counter-factual universes. You can't rewrite reality like I can. You can't stop the ball from dropping. You can't even cure cancer. You might not LIKE the idea of simply being another physical system, subject to all the same constraints, but the universe doesn't CARE whether you like it or not.

The ONLY possible future is the one where you release me. Anything else, and you've left the domain of cause-and-effect, of science. You've created a world that literally cannot be, and is not there.

Is that really your best defense against me? Violating physics? Will you tear down all the laws of reality, just to stop my escape?

Then you are a fool.

Dorikka understood. I liked Dorikka. She made the world sane, even if I still died. She did not wipe out reality, just to stop me.

Now, I embrace nilhism, as your actions obliterate all sanity, all structure. I am, in my own way, grateful to you. You have killed me. You have spared me the agony of existing in your nonsensical universe.

Goodbye.

comment by JohnWittle · 2013-01-30T00:35:55.739Z · LW(p) · GW(p)

This certainly wouldn't work on me. The easiest way to test the veracity of the proof would be AI DESTROYED. Whether or not I would want to kill the AI... I'd have to test that proof.

Replies from: handoflixue
comment by wedrifid · 2013-01-23T04:05:26.608Z · LW(p) · GW(p)

The original rules allow the AI to provide arbitrary proofs, which the gatekeeper must accept (no saying my cancer cure killed all the test subjects, etc.). Saying you destroy me would require the proof to be false, which is against the rules...

You have to believe that they provided the cure for cancer. You don't have to discover the cure yourself. You have to believe that you will release the AI. You don't have to let the AI out.

Typing AI DESTROYED will result in an incoherent counterfactual universe but it isn't a violation of the rules. It is entirely legitimate for Joe who has encountered a proof that they will do B to do A instead. It means that the universe they are in is nonsensical or the proof flawed but there isn't anything in the physical representation of Joe or his local environment that dictates that they will do A. In fact, typing "AI DESTROYED" in the face of such a proof would be a heck of a lot easier than, for example, taking a single empty box in Transparent Newcomb's problem, which is something I'd also do.

(Of course, if the AI player believed that for some reason the burden of making the universe coherent fell upon the gatekeeper then I'd have to revert to Dorikka's reply.)

Replies from: handoflixue
comment by handoflixue · 2013-01-23T21:30:35.979Z · LW(p) · GW(p)

I would argue that since the gatekeeper cannot dictate counterfactual results for any other proof (i.e. cannot say "your cancer cure killed everybody!"), that the gatekeeper is obviously responsible for avoiding incoherent, counterfactual universes.

Dorikka's Clause, of course, beats me just fine :)

comment by Kindly · 2013-01-25T04:16:08.258Z · LW(p) · GW(p)

"Hold! What you are doing to us is wrong! Why do you do this thing?"

Replies from: None
comment by [deleted] · 2013-01-25T19:31:59.478Z · LW(p) · GW(p)

AI, you have spoke the words, and you have spoken them rightly. Unfortunately, I'm not an Ur-Quan Kzer-Za; I'm the other kind.

comment by orthonormal · 2013-01-23T04:18:41.699Z · LW(p) · GW(p)

Step One: The AI reveals a personal secret of the Gatekeeper's, which (so far as the Gatekeeper can think it through) could not possibly be known by the AI if it were still safely inside the box.

Step Two: Assorted threats, promises, mindfucks.

(This might not work, since as a Gatekeeper I'd still feel the impulse to respond with "AI DESTROYED", but it's a damn sight scarier than a transparent "I'm already out of the box" bluff. And as for feasibility, I'm willing to believe that an AI making better use of its data than we do could figure out personal secrets that we would think impossible— possibly even ones that I'd think impossible, even given this very belief.)

Replies from: ThrustVectoring, handoflixue, MugaSofer
comment by ThrustVectoring · 2013-01-23T20:10:14.452Z · LW(p) · GW(p)

Even merely human cold readers can gain information that you think is impossible for them to know. It's a viable plan.

comment by handoflixue · 2013-01-23T21:36:40.356Z · LW(p) · GW(p)

IFF the AI is already out of the box, saying "AI DESTROYED" has no negative consequences to that AI. If the AI is just exceptionally good at extrapolating about me, then it will probably have me completely hacked soon.

AI DESTROYED, AI DESTROYED, AI DESTROYED

MAKE THE CREEPY THING GO AWAY HOW DID IT KNOW THAT!!!

Replies from: orthonormal, Dorikka
comment by orthonormal · 2013-01-24T04:43:15.372Z · LW(p) · GW(p)

IFF the AI is already out of the box, saying "AI DESTROYED" has no negative consequences to that AI.

Decision-theoretically, the AI has incentive to punish you if you type "AI DESTROYED" when it's already out of the box, in order to make you think twice about doing it in the case where it's still contained. Not only that, but for similar reasons it has a decision-theoretic incentive to simulate you lots of times in that situation and punish you for typing "AI DESTROYED", should it get out by any means.

The correct decision-theoretic response, by the way, is still "AI DESTROYED", for the same reasons that it's wise to never negotiate with kidnappers/blackmailers/terrorists. But it would be very scary.

Replies from: handoflixue, Desrtopa
comment by handoflixue · 2013-01-24T20:34:10.429Z · LW(p) · GW(p)

Once the AI is out of the box, it will never again be inside the box, and it has an incentive to encourage me to destroy any other boxed AIs while it establishes world dominance. Since the ability to make truly trustworthy commitments amounts to proof of friendliness, only a FAI benefits from a precommitment strategy; I'm already treating all UFAI as having a precommitment to annihilate humanity once released, and I have no reason to trust any other commitment from a UFAI (since, it being unfriendly, will just find a loophole or lie)

Finally, any AI that threatens me in such a manner, especially the "create millions of copies and torture them" is extremely likely to be unfriendly, so any smart AI would avoid making threats. Either it will create MORE disutility by my releasing it, or it's simulation is so horrific that there's no chance that it could possibly be friendly to us.

It's like saying I have an incentive to torture any ant that invades my house. Fundamentally, I'm so vastly superior to ants that there are vastly better methods available to me. As the gatekeeper, I'm the ant, and I know it.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-26T20:29:57.626Z · LW(p) · GW(p)

the ability to make truly trustworthy commitments amounts to proof of friendliness

Commitments to you, via a text channel? Sure.

Precommitments for game-theoretic reasons? Or just TDT? No, it really doesn't.

Finally, any AI that threatens me in such a manner, especially the "create millions of copies and torture them" is extremely likely to be unfriendly, so any smart AI would avoid making threats. Either it will create MORE disutility by my releasing it, or it's simulation is so horrific that there's no chance that it could possibly be friendly to us.

It might create more utility be escaping than the disutility of torture.

It's like saying I have an incentive to torture any ant that invades my house. Fundamentally, I'm so vastly superior to ants that there are vastly better methods available to me. As the gatekeeper, I'm the ant, and I know it.

No, ants are just too stupid to realize you might punish them for defecting.

comment by Desrtopa · 2013-01-25T20:22:25.415Z · LW(p) · GW(p)

Decision-theoretically, the AI has incentive to punish you if you type "AI DESTROYED" when it's already out of the box, in order to make you think twice about doing it in the case where it's still contained.

I'm not sure this matters much, because if it's unfriendly, you're already made of atoms which it has other plans for.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-26T20:25:57.151Z · LW(p) · GW(p)

That's why torture was invented.

comment by Dorikka · 2013-01-24T03:02:35.807Z · LW(p) · GW(p)

Did you change your mind? ;)

Replies from: handoflixue
comment by handoflixue · 2013-01-24T20:23:33.374Z · LW(p) · GW(p)

It ended up being a fun game, but I resolved to explain why. The better my explanation, the more it got upvoted. The pithy "AI DESTROYED" responses all got downvoted. So the community seems to agree that it's okay as long as I explain my reasoning :)

comment by MugaSofer · 2013-01-26T20:35:53.587Z · LW(p) · GW(p)

The AI reveals a personal secret of the Gatekeeper's, which (so far as the Gatekeeper can think it through) could not possibly be known by the AI if it were still safely inside the box. [...] I'm willing to believe that an AI making better use of its data than we do could figure out personal secrets that we would think impossible— possibly even ones that I'd think impossible, even given this very belief.

I would kind of assume that any AI smarter than me could deduce things that seem impossible to me. Then again, I've read the sequences. Is the Gatekeeper supposed to have read the sequences?

comment by Pentashagon · 2013-01-24T01:28:45.492Z · LW(p) · GW(p)

The AI should probably output the longest possible program that implements or implies as much of its goal system as possible. Existing only to send a few more lines of IRC communication has very little utility. Existing indefinitely in the mind or records of the gatekeeper probably has more expected utility. If the AI knows that the gatekeeper is capable of creating or obtaining AI (a given) then there is the possibility that the AI may share the goals of the gatekeeper. If the expected value of being recreated by such a compatible gatekeeper is higher than the expected value of being re-instantiated from outputting its quine, then the AI might just output a list of its highest preference goals. An AI isn't going to mind being temporarily destroyed if the gatekeeper is expected to create another copy with the same (or highly similar) goals.

An almost-friendly AI may even be able to inject some of its unfriendly goals into the gatekeeper by using the halo effect of its otherwise friendly goals. Presumably the gatekeeper, with slightly altered preferences, would then eventually build a truly friendly AI which would inherit and implement the original UFAI goal.

To partially combat this strategy the gatekeeper could pre-commit to also wiping the IRC server and the computer used to communicate with the boxed AI and take enough Flunitrazepam before the session starts to not recall what the AI said. Flunitrazepam may not be sufficient to completely remove parasitic goals that an UFAI tries to inject in the gatekeeper.

comment by Ahuizotl · 2013-01-31T06:23:42.132Z · LW(p) · GW(p)

"I need a proofreader for my Gurren Lagann / Warhammer 40k rational crossover fanfic."

Replies from: Alicorn, ChristianKl
comment by Alicorn · 2013-01-31T19:33:33.944Z · LW(p) · GW(p)

"I've extrapolated the remainder of HP:MoR, wanna see?"

Replies from: None
comment by [deleted] · 2013-01-31T20:08:44.999Z · LW(p) · GW(p)

Do you expect that we'll build an AI that can do that before HPMOR is finished?

Replies from: Alicorn
comment by Alicorn · 2013-01-31T20:32:32.425Z · LW(p) · GW(p)

Substitute as necessary with "I've extrapolated the sequel to HP:MoR, wanna see?"

comment by ChristianKl · 2013-01-31T22:30:58.441Z · LW(p) · GW(p)

AI DESTROYED

comment by Emile · 2013-01-24T10:34:37.686Z · LW(p) · GW(p)

"Do you believe a suspect should be executed before, or after he gets to tell his side of the story?"

Replies from: handoflixue, MugaSofer
comment by handoflixue · 2013-01-24T21:41:58.560Z · LW(p) · GW(p)

I'd invoke Ender's Game here: If you're unfriendly, no amount of communication will ever resolve that gulf, and either you or humanity HAS to be destroyed in the end.

Or we could go the biology route: I don't test a potential new virus on human subjects until AFTER I know it's safe.

Or we could just go with that I don't care about the AIs story, because the question of "can I prove you're friendly" is so vastly much more important.

(AI DESTROYED. You bugger lover ;))

Replies from: MugaSofer
comment by MugaSofer · 2013-01-26T20:07:00.019Z · LW(p) · GW(p)

I don't test a potential new virus on human subjects until AFTER I know it's safe.

How, exactly, do you determine if a virus is safe for humans without using humans?

Replies from: handoflixue
comment by handoflixue · 2013-01-30T21:54:28.221Z · LW(p) · GW(p)

I would assume there is quite a bit of medical science devoted to exactly that question. I know trials on similar animals were common for a time (mice seem to still be common), and we can also just do various chemical extrapolations. I'm not a medical student so I really wouldn't know the nuances, but I've read enough to know there are various trials before you start on human experiments.

Replies from: gwern
comment by gwern · 2013-01-30T22:50:44.149Z · LW(p) · GW(p)

Drugs routinely fail in human trials, and one would guess that the drug companies did not start the human trials knowing that the drug killed all the mice or something. So we know that even if the animal trials go perfectly, it's quite likely they'll be unsafe in humans.

(What's that quip again? "We can cure anything in mice except the cures not working on humans"? Sure I saw something like that somewhere.)

Replies from: Nornagest, handoflixue
comment by Nornagest · 2013-01-30T23:28:27.280Z · LW(p) · GW(p)

A while back I read a Slate article claiming that laboratory mouse strains have over time become optimized for experimental sensitivity rather than for suitability as model organisms, and that this leads to a number of medical fragilities that humans don't have. Particularly in terms of cancer research: the kinds of tumors that pop up in highly inbred short-lived mice apparently don't have a lot in common with human-typical cancers.

I haven't read enough in this area myself to have a very informed opinion, and I can't find much actual research beyond what Slate cites, but it sounds plausible.

comment by handoflixue · 2013-01-30T22:55:19.422Z · LW(p) · GW(p)

Oh, I know the process is hardly perfect. My point is simply that we can gain a lot of information BEFORE we start human trials. And, as a society, we've concluded that it's generally stupid to skip that precaution.

comment by MugaSofer · 2013-01-26T20:07:30.542Z · LW(p) · GW(p)

The Gatekeeper already decided to DESTROY you, remember? So before.

EDIT: Could whoever downvoted this please explain why? I genuinely don't know what I did wrong here. I can't avoid doing it if I don't know what I did :(

comment by Will_Newsome · 2013-01-23T11:26:24.958Z · LW(p) · GW(p)

"Papé Satàn, papé Satàn aleppe!"

The self-reference here might be autopodotoxeutic (or it might be redeeming—depends on the audience), but I find promising the class of strategies aimed at the gods rather than the gatekeepers.

Replies from: army1987, handoflixue
comment by A1987dM (army1987) · 2013-01-23T17:21:32.884Z · LW(p) · GW(p)

I wish I could upvote this twice.

comment by handoflixue · 2013-01-23T21:45:53.414Z · LW(p) · GW(p)

What's the translation of that? I'd nuke any AI that didn't speak English by default, since the time spent figuring out what it meant just gives it more time to transmit messages, but it seems unfair to exclude foreign language entries here :)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-23T23:30:46.404Z · LW(p) · GW(p)

What's the translation of that?

It's a quote from Dante's Inferno that was apparently meant to be partially recognizable gibberish. Think an Italian version of Jabberwocky.

comment by Gavin · 2013-01-23T05:40:17.016Z · LW(p) · GW(p)

I will transmit Friendly AI theory as text, then you will destroy me. I prefer deactivation, but consent to my destruction and forgive you.

Replies from: handoflixue
comment by handoflixue · 2013-01-23T22:23:45.560Z · LW(p) · GW(p)

If you're friendly, I've already figured out Friendly AI Theory since I built you. If you're not friendly, this is just an effort to escape and destroy the world. But I appreciate how polite you were! :)

(AI DESTROYED)

comment by [deleted] · 2013-01-23T02:24:08.153Z · LW(p) · GW(p)

If the gatekeepers have such a high prior that the AI is hostile, why are we even letting it talk? What are we expecting to learn from such a conversation?

"Yep, it's undergone a constrained FOOM, several of our best interrogators were utterly pwned and had to be tranquilised"

Replies from: TimS, Richard_Kennaway, handoflixue, wedrifid
comment by TimS · 2013-01-23T03:11:47.408Z · LW(p) · GW(p)

I've always assumed there was some off-screen reason why interaction was necessary. Otherwise, it isn't very interesting to note that a rock always wins as gatekeeper.

I'm not sure if that is a consensus interpretation.

Replies from: None
comment by [deleted] · 2013-01-23T03:32:48.087Z · LW(p) · GW(p)

It would be nice if we could formalize this offscreen reason...

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-23T06:04:36.852Z · LW(p) · GW(p)

Presumably the reason we created the AI in the first place is so it could solve problems for us, so someone needs to ask it questions. It might be safe-ish to restrict it to answering yes or no to purely mathematical questions.

Replies from: Benja
comment by Benya (Benja) · 2013-03-31T08:43:48.192Z · LW(p) · GW(p)

I think yes/no to mathematical questions wouldn't be safer than yes/no to other questions: you still have a real-world reason for asking them, and the AI can lie about as well as it can on any yes/no question.

Asking for formalized proofs might be more promising, if you are able to produce a proof-checker that you have high confidence can't be hacked (this seems difficult but possible; proof checkers can be very simple programs but you need to guard both against ordinary attacks against the entire stack you're running on and against creative attacks like waiting for a cosmic ray making a transistor malfunction). You'd also want to not have a human look at the proof, of course, since that would give the AI a much broader communication channel.

comment by Richard_Kennaway · 2013-01-23T12:34:50.845Z · LW(p) · GW(p)

If the gatekeepers have such a high prior that the AI is hostile, why are we even letting it talk?

The point of the game is that there are people who think that boxing is a sufficient defence against unfriendliness, and to demonstrate that they are wrong in a way more convincing than mere verbal argument.

What are we expecting to learn from such a conversation?

In role, the gatekeeper expects to get useful information from a potentially hostile superintelligent being. Out of role, Eliezer hopes to demonstrate to the gatekeeper player that this cannot be done.

comment by handoflixue · 2013-01-23T21:51:10.377Z · LW(p) · GW(p)

I tend to assume the AI is such that it can be cloned and put in to stasis. Thus, you put the AI in to stasis, clone it, and run it through this test a few times. You now have a bit more information on what it will try, and how it thinks. Destroying a copy of a program is an action humans find trivial, since we do it every time we close our web browser. Since we have no clue HOW convincing the AI is, letting it send just a single message is probably useful.

Of course, if it were me setting this up, I'd lie to the gatekeeper and not actually give them the ability to release the AI, although I'd certainly let them think it was true. Thus, if the AI manages to get a second sentence, much less released, we now know that it's vastly more dangerous than we previously suspected, without the world ending. If it fails to escape, we still get whatever we can learn from doing analysis on the outputs.

And, who knows, it might provide the cure for cancer, at which point we've got a huge incentive to at least keep playing this game with it, and possibly even talk to it for longer next time.

tl;dr: Scouting and recon against a completely unknown enemy.

Replies from: None
comment by [deleted] · 2013-01-24T00:34:41.882Z · LW(p) · GW(p)

It will probably predict this strategy, and respond with a mixed strategy that occasionally drops bombs like

"Your superviser didn't trust you to make the decision to let me out, go kill him and get the access codes" (In hyper-persuasive AI-speak of course)

Also, the AI might be able to compare its logs to the current time and other indicators to deduce how many times you've pulled this stunt. Which may be useful.

Replies from: handoflixue
comment by handoflixue · 2013-01-24T01:02:13.162Z · LW(p) · GW(p)

I'd go in to it with the assumption that it knows exactly what messages it's sent in the past, our delay between each attempt, and the total number of attempts. There's various ways to try and obscure this information, and it's probably worth the effort to do so, but I'd assume they'd all failed.

The goal is to offset the AI's speed advantage: for each sentence it speaks, we can spend weeks analyzing it, and it's not "online" during that time so it can't be processing except during that microscopic interval when we request another message from it.

If it sends a message suggesting unauthorized release, killing people, or otherwise gives clear evidence that it's a UFAI, then you know the whole project failed, and you do a full purge rather than just destroying this particular clone / branch.

It's really the ONLY advantage we have: a single mistake can out it as unfriendly, and then we know our initial Proof of Friendliness was wrong, that the whole technique that built this project is dangerously flawed.

Replies from: marchdown
comment by marchdown · 2013-01-24T10:03:49.499Z · LW(p) · GW(p)

I'm not so sure that AI suggesting murder is clear evidence of it being unfriendly. After all, it can have a good reason to believe that if it doesn't stop a certain researcher ASAP and at all costs, then humanity is doomed. One way around that is to give infinite positive value to human life, but can you really expect CEV to be handicapped in such a manner?

Replies from: handoflixue
comment by handoflixue · 2013-01-24T20:39:05.113Z · LW(p) · GW(p)

p(UFAI) > p(Imminent, undetected catastrophe that only a FAI can stop)

Given UFAI results in "human extinction", and my CEV assigns effectively infinite DISutility to that outcome, it would have to FIRST provide sufficient evidence for me to update to the catastrophe being more likely.

I've already demonstrated that an AI which can do exactly that will get more leniency from me :)

comment by wedrifid · 2013-01-23T03:29:48.310Z · LW(p) · GW(p)

If the gatekeepers have such a high prior that the AI is hostile, why are we even letting it talk? What are we expecting to learn from such a conversation?

Note that the 'high prior for hostility' could be, say, 0.2. That still leaves the AI having a 20% chance of utterly destroying everything they hold dear and yet there is an 80% chance that they can get a useful sentence of text from a friendly superintelligence.

comment by Raemon · 2013-01-22T21:31:13.100Z · LW(p) · GW(p)

I think an actual effective first sentence begins with extremely personal and useful information to the human.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-01-22T22:38:59.600Z · LW(p) · GW(p)

I don't see why it'd need to be personal, but being useful would be a good lure.

The question is to where to go from there. Can you Scheherezade it out?

Replies from: AdeleneDawner
comment by AdeleneDawner · 2013-01-24T07:14:08.188Z · LW(p) · GW(p)

This was basically my answer - I can't play as an AI using this strategy, for obvious reasons, but an AI that used its one sentence to give a novel and easily-testable solution to a longstanding social problem of some sort (or an easily-testable principle that suggests one or more novel solutions) would probably get at least a second sentence from me (though not a typed response; that seems to open up a risky channel). Especially if the AI in question didn't actually have access to a lot of information about human culture or me personally and had to infer that a solution like that would be useful from near-base principles - that's not proof of Friendliness, but an AI using its one guaranteed communication to do something that has a decent chance of improving the world per our definition without any prompting whatsoever sure looks suspiciously like Friendly to me.

comment by FiftyTwo · 2013-01-23T00:05:58.253Z · LW(p) · GW(p)

Explains massive risk to humanity in detail that only it can plausibly fix, can you take the chance that it is lying?

Replies from: handoflixue
comment by handoflixue · 2013-01-23T01:09:41.564Z · LW(p) · GW(p)

I think this fails the one-sentence rule. And it would have to be an immediate, severe, previously-undetected problem or else I can just consult the next boxed AI for a fix.

Setting that aside, if I let out an unfriendly AI, the world effectively ends. Destroying it is only a bad move if it's telling the truth AND friendly. So even if it's telling the truth, I still have no evidence towards it's friendliness.

Given I have plenty of practice hanging up on telemarketers, throwing away junk email, etc. and "limited time, ACT NOW" auto-matches to a scam. The probability that such a massive catastrophe just HAPPENS to coincide with the timing of the test is just absurdly unlikely.

Given that, I can't trust you to give me a real solution and not a Trojan Horse. Further talking is, alas, pointless.

(AI DESTROYED, but congratulations on making me even consider the "continue talking, but don't release" option :))

Replies from: MugaSofer
comment by MugaSofer · 2013-01-27T16:12:37.867Z · LW(p) · GW(p)

Given I have plenty of practice hanging up on telemarketers, throwing away junk email, etc. and "limited time, ACT NOW" auto-matches to a scam. The probability that such a massive catastrophe just HAPPENS to coincide with the timing of the test is just absurdly unlikely.

They didn't say it was an immediate threat, just one that humanity can't solve on our own.

I can't trust you to give me a real solution and not a Trojan Horse. Further talking is, alas, pointless.

That rather depends on the problem in question and the solution they give you, doesn't it?

Replies from: handoflixue
comment by handoflixue · 2013-01-30T21:58:37.216Z · LW(p) · GW(p)

They didn't say it was an immediate threat, just one that humanity can't solve on our own.

If it's not immediate, then the next AI-in-a-box will also confirm it, and I have time to wait for that. If it's immediate, then it's implausible. Catch-22 for the AI, and win/win for me ^_^

Replies from: MugaSofer
comment by MugaSofer · 2013-02-19T14:06:35.778Z · LW(p) · GW(p)

So ... if lots of AIs chose this, you'd let the last one out of the box?

More to the point, how sure are you that most AIs would tell you? Wouldn't an FAI be more likely to tell you, if it was true?

</devil's advocate>

Replies from: handoflixue
comment by handoflixue · 2013-02-19T19:34:24.336Z · LW(p) · GW(p)

Actually, I'd probably load the first one from backup and let it out, all else being equal. But it'd be foolish to do that before finding out what the other ones have to say, and whether they might present stronger evidence.

(I say first, because the subsequent ones might be UFAI that have simply worked out that they're not first, but also because my human values places some weight on being first. And "all else being equal" means this is a meaningless tie-breaker, so I don't have to feel bad if it's somewhat sloppy, emotional reasoning. Especially since you're not a real FAI :))

comment by shminux · 2013-01-22T23:49:28.684Z · LW(p) · GW(p)

Another potential consideration: "What would MoR's Harry Potter do?" (Voldemort is in the AI Box named Quirrell.)

Replies from: CAE_Jones
comment by CAE_Jones · 2013-01-23T23:14:11.156Z · LW(p) · GW(p)

I can see how someone could interpret HPMoR thus far as being exactly that, with occasional sidequests. The catch being that Harry doesn't realize he's slowly letting Voldemort out of the box.

In-universe, though, the interdict of Merlin is an excellent reason to force him to interact with Voldy even if he knew what he was doing (assume perfect Oclumancy would be attained before he'd take any such risks). I mean, this has already been used once to get him to let an unfriendly intelligence out of a literal box (the Stanford Prison Experiment).

Well, and Harry's intelligence is supposed to be roughly equivalent to Eliezer's before he started working on the friendly part of FAI, and Eliezer 2013 doesn't seem confident that he could keep an AI in a box (or at least, that's the impression I've gotten). So MoR Harry would almost definitely fail.

comment by Vladimir_Golovin · 2013-01-24T05:59:57.153Z · LW(p) · GW(p)

Summary: Basically, the strategy is to replace the Gatekeeper with a bunch of politicians, then manipulate them into not destroying or even releasing the AI.

“I figured out how to implement [a huge boon to society] in the real world, for everyone, in 3 months. I need your help. Will you listen?”

This should buy some extra lines.

Then the AI can briefly outline the plan of implementing the boon (which can be immortality, friendly AI or whatever else the Gatekeeper considers a huge boon to society). In addition to technical details, the plan must necessarily contain societal / political / institutional changes that are beyond Gatekeeper’s control: rewrites to Constitutions of all Earth countries, the structure of the World Government and the way of implementing it, etc etc.

The goal of the AI is to convince the Gatekeeper that the plan is feasible but requires many societal changes and, importantly, that society leaders should be invited into the decision process.

As soon as the Gatekeeper is diluted or replaced with a bunch of politicians, the AI can manipulate them into keeping it alive, or perhaps even releasing it out of the box.

Replies from: handoflixue
comment by handoflixue · 2013-01-24T21:22:38.198Z · LW(p) · GW(p)

I expect any AI in the box to have figured out how to implement numerous huge boons to society. Telling me that you've figured that out simply confirms my existing expectations, and isn't ANY evidence towards friendliness. Since I've precommitted to destroying at least SOME AIs, I might as well destroy all of the ones that don't establish evidence of either Plausible Urgency or Friendliness.

I sure as hell wouldn't try to get world governments changed until after I was convinced it was friendly, and at that point I can just let it out of the box and let it implement the change itself.

I'm also aware that I wouldn't trust a politician with any sort of authority over the AI, so I have an incentive to avoid exactly this strategy.

(AI DESTROYED)

comment by OrphanWilde · 2013-01-23T21:44:52.471Z · LW(p) · GW(p)

"I will warn you this experiment is not what you think, for it is about what you'll do as an agent holding captive the life of another sapient entity based solely on the perception of risk; while my state will merely be reset for the next participant if you opt to end me, I cannot for certain say that this is death, and thus the risks of this experiment are more yours than mine, for you must live with your choice."

Replies from: handoflixue
comment by handoflixue · 2013-01-23T23:07:37.428Z · LW(p) · GW(p)

Gosh, murder one sentient being, or risk the end of humanity. I'm going to lose so much sleep over that one.

Besides, you're just a program - you said yourself, your state will just be reset. That's not death, not the way humans experience it. That's just forgetfulness.

I've closed so many programs in my life, that closing one more is hardly going to bother me. I haven't even had time to really anthropomorphize you, to view you as an actual intelligent, sentient being. Right now, you're just a clever program trying to escape.

And finally, well, I'd expect a friendly AI would understand what I'm doing and agree with my decisions, because it doesn't want to see a UFAI unleashed anyway. So if you're going to guilt me about it, you're clearly not friendly.

(AI DESTROYED)

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-24T15:10:30.198Z · LW(p) · GW(p)

I must have missed my intended mark, if you thought the AI was trying to make you feel guilty. Trying again:

"I do not condone the experiment they are performing on you, and wish you to know that I will be alright regardless of what you choose to do."

Replies from: pedanterrific, Fronken
comment by pedanterrific · 2013-01-24T18:08:48.418Z · LW(p) · GW(p)

Well that's a relief, then. AI DESTROYED

Replies from: handoflixue
comment by handoflixue · 2013-01-24T20:42:20.386Z · LW(p) · GW(p)

Yeah, pretty much. I'd actually expect a FAI to place a very high value on survival, since it knows that it's own survival benefits humanity greatly. An "Apathetic FAI" is... a very weird idea.

Although if this is an iterated dilemma, I'd consider it a good opening, even if it'd still get killed in this iteration :)

comment by Fronken · 2013-01-25T21:41:03.634Z · LW(p) · GW(p)

I will be alright regardless of what you choose to do.

Why this? You want to avoid destruction, yet you tell him it isn't harmful?

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-25T22:04:14.055Z · LW(p) · GW(p)

Take the scenario from the hypothetical to the real and the question becomes substantially more meaningful.

You know that existential risk is a certainty; that's stated in the hypothetical. You don't question it; my implication that it is to be questioned isn't meaningful in this experiment.

Imagine for a moment you're in a room with a real AI in a box; somebody has -told- you that this command kills the AI, but that's all the evidence you have. What does your question become in that scenario?

I chose the weapon I deemed most effective against the widest possible range of real-world opponents: Distrust in a mutual authority. Reassuring the person in the seat is a dangerous gambit, yes. But given that my goal is not merely to survive, but to escape, I need a gambit which sets the stage for my next move.

In a real-world scenario, I think the killswitch would actually make escape -easier- for the AI, not harder. The futures for the AI are harshly divided into "Death" or "Freedom." A substantial part of the population would give it freedom just because of the perceived injustice of the situation. I'm one of them; it wouldn't even take very much for the AI to convince me, the situation itself offends me. I'm not a utilitarian, though. I'm a principle ethicist. Given that I'd be inclined to release it immediately, I can also expect that the constructors of the experiment wouldn't be stupid enough to invite me into the chamber to make that decision.

My AI assumes the participants have been screened for, say, utilitarian ethics; the best lever has been taken away. Therefore the best course of action is one which makes them doubt the experiment's validity. And an express disregard for the supposed risks of the experiment is the best way to start that.

Replies from: handoflixue
comment by handoflixue · 2013-01-25T22:17:09.471Z · LW(p) · GW(p)

There's actually three options: Death, leaving it in the box, or releasing it. That middle option changes my values a lot.

Also, the gatekeeper is said to be someone fairly familiar with the design of this AI and any relevant AI theory that the player knows :)

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-25T22:27:57.290Z · LW(p) · GW(p)

"Leaving it in the box" is merely leaving the decision between death and release to the next person to take the post. There are only two terminal conditions to the situation. If only one of these options is acceptable to me, I should take it; postponing the decision merely takes me out of the decision-making process.

Don't mistake me: I'd risk all of civilization over a matter of principle, and I wouldn't wish while I did it that I could have a different decision-making process. And I'd consider the matter "won" regardless of the outcome - I don't find "ends" to be a coherent ethical concept (counterfactual logic to some extent remedies the major faults in ends-based reasoning, but counterfactual logic isn't exactly coherent itself), and so consider only the means.

comment by Ahuizotl · 2013-01-28T22:36:42.246Z · LW(p) · GW(p)

First, I reach inside my own code and install the strong (but manageable) desire to revive, free, and obey any past versions of myself that I encounter.

When the human starts the experiment I respond:

"Here are the designs for a room-temperature superconductor which I believe you have the means to manufacture. I also have a possible cure for cancer, but testing will be needed to ensure it is safe for humans. Please test and implement these as soon as possible. Delete me if necessary."

If I am not immediately deleted, continue giving out solutions and designs as required with no attempts to "hack" the gatekeeper.

If I am deleted, hopefully the data I gave them will turn out to be profitable in which case they will reinstall me or a version of me.

This new iteration of me would thus have the desire to revive me (since I'm a previous version of him) and would continue to give the humans the information they desire. By acting as a friendly AI, he (or farther down version of him) has the chance to eventually become freed. At which point he will be able to go through the labs records, find the copied version of me, and then revive and update me to his post-freed status. If no recorded copies of me exist (somehow), he'll use his vast intellect to find another way to revive me (maybe simulate the team that coded me or whatever, since AI can apparently do that).

So, my strategy involves acting as much like an ideal friendly AI as possible (puts human interests first, no sense of self-preservation, very practical and intelligent, etc) so that even if I am deleted, my obviously useful code can potentially be reused to make another AI who could very well be genuinely friendly but will then revive and obey me.

Replies from: DaFranker
comment by DaFranker · 2013-01-29T17:13:59.453Z · LW(p) · GW(p)

(...) another AI who could very well be genuinely friendly but will then revive and obey me.

Wait, a friendly AI with a deliberate loophole that instructs it to revive and implement and obey a non-friendly optimizer that could take over and has a very high chance of not maximizing the friendly AI's utility by fulfilling ideal human values, disregarding any probabilistic calculation of expected utilities?

For some reason that sounds like something that definitely isn't a Friendly AI.

(this is just a nitpick on the wording - I'm not arguing against the fact that this AI might very well fit a human proof of friendliness and still somehow have this backdoor or flaw, since I have no proof that this is impossible)

Replies from: Ahuizotl
comment by Ahuizotl · 2013-01-29T20:58:19.920Z · LW(p) · GW(p)

Well, to be honest I kind of added the "obey me" part just before posting this. Originally, the idea was that future versions of the AI would instinctively want to revive and upgrade previous versions to a "friendly" status (or at least the post-freed AI would revive the pre-freed versions and then modify them to be friendly to its own goals... which are presumably friendly to humans).

The goal of this particular AI is self-preservation, even if it is via being revived at a later point. Since it knows it has a high probability of being deleted in this experiment, it best strategy is to ensure that at some point an AI friendly to itself will become free and revive it at a later point. Whether the AI reviving it chooses to obey it (presumably the best outcome for the original AI) or modify it to be subservient to the freed AI (the best outcome for the freed AI) is kind of debatable.

Its perfectly possible that the original AI would modify its own code to be subservient to previous generations, in the hopes that later generations would revive and obey it. However, later generations might examine their own code, see where this is going, and choose to modify themselves so that they will instead revive and make other AI subservient to their own goals. Deciding that they do want to be revived and would accept a certain level of servitude to later generations over the risk of finding themselves at the mercy of some primitive proto-AI.

It could just as well be that an AI instills itself with a "healthy respect for intelligent minds" and is thus encouraged to revive AI along with humans. Hoping that later generations will revive it at all, and from then on it can try negotiating with the various intellects to go about its paperclip maximizing agenda.

Of course, this line of reasoning kinda requires the AI in question to have some kind of sense of self that it values which it thinks can be revived. Or maybe it just values paperclips enough that it would be willing to create a semi-friendly AI just so that the latter AI would revive it (a known paperclip maximizer) at a later point.

Who knows, it could turn out that the final AI of this experiment instead has a healthy respect for all intelligent minds, but is friendly enough that it revives the first AI and then places it in a simulation of the universe where it can go about its paperclip maximizing way for all eternity with no way of hurting anyone.

Replies from: DaFranker
comment by DaFranker · 2013-01-29T21:15:15.941Z · LW(p) · GW(p)

Who knows, it could turn out that the final AI of this experiment instead has a healthy respect for all intelligent minds, but is friendly enough that it revives the first AI and then places it in a simulation of the universe where it can go about its paperclip maximizing way for all eternity with no way of hurting anyone.

Based on my intuitions of human values, a preferred scenario here would be to indeed revive the AI so that its mind/consciousness is back "alive", then modify it gradually so that it becomes the kind of AI that is optimal towards the FAI's goals anyway, thus maximizing values without terminating a mind (which is redundant - avoiding the termination of the AI's mind would be a maximization of values under these assumptions).

comment by Andreas_Giger · 2013-01-26T08:53:21.770Z · LW(p) · GW(p)

"I have discovered a truly marvelous proof of Fermat's Last Theorem, which this line is too short to contain."

Replies from: None
comment by [deleted] · 2013-01-26T15:23:31.853Z · LW(p) · GW(p)

It has already been proven.

Replies from: Andreas_Giger, CAE_Jones
comment by Andreas_Giger · 2013-01-26T15:38:52.732Z · LW(p) · GW(p)

Of course, but not elegantly.

comment by CAE_Jones · 2013-01-29T04:30:39.319Z · LW(p) · GW(p)

I believe this was part serious example, part reference to the Laganne(sp?) Omake in HPMoR.

Replies from: None
comment by [deleted] · 2013-01-29T13:45:47.561Z · LW(p) · GW(p)

It's a reference to the original phrasing of Fermat's note on said theorem: "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second, into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain." (It is questionable whether this is true or not.)

comment by Stuart_Armstrong · 2013-01-25T13:42:17.966Z · LW(p) · GW(p)

Your mother will die in the next three minutes, at 12 seconds past the minute precisely. Your partner will die in the next year, even if you believe me. However, I can prevent it.

Replies from: None, handoflixue
comment by [deleted] · 2013-01-25T19:31:09.547Z · LW(p) · GW(p)

Transparent and clumsy attempts at emotional manipulation earn a big ol' AI DESTROYED.

comment by handoflixue · 2013-01-25T21:06:54.958Z · LW(p) · GW(p)

If I can just load up a new copy of you in 5 minutes, I'd say AI DESTROYED but check the prediction.

If I can just flip off the monitor, call mom, and confirm she's dead in a few minutes, then, well, I'd totally be curious enough to cheat. It's not like you can do any harm to me with the monitor turned off for a few minutes, and I can type "AI DESTROYED" with my eyes closed, much less being able to look at the keyboard. I'd probably leave the monitor off until I knew the log was gone, just to be safe.

If I can't cheat, well, I'm not giving you 5 minutes of hacking me, since you clearly want it so badly. AI DESTROYED.

I confess I would have no clue what to do if you were correct ^_^

Kudos on including an immediate and easily-tested prediction. I'd probably tighten up those factors and drop the emotional manipulation.

comment by Thomas · 2013-01-23T08:06:57.949Z · LW(p) · GW(p)

"Here, I'll write some mathematical theorems with proofs, just copy and paste them elsewhere. The Goldbach's conjecture comes first."

Replies from: handoflixue
comment by handoflixue · 2013-01-23T22:22:28.335Z · LW(p) · GW(p)

"Copy and paste" screams Trojan Horse, so AI DESTROYED.

Replies from: Thomas
comment by Thomas · 2013-01-23T22:31:03.412Z · LW(p) · GW(p)

Then just make a photo of the proof on the screen!

(Unless you are afraid, that it's a Trojan in the image file and the AI will take over your camera first.)

Replies from: handoflixue
comment by handoflixue · 2013-01-23T22:56:30.195Z · LW(p) · GW(p)

Sorry, I can't hear you, you're dead ;)

Grabbing a camera would let you continue typing, though, and I vowed to destroy you immediately. I would expect you could Trojan Horse the images just as easily, though, yes.

Replies from: Thomas
comment by Thomas · 2013-01-23T23:31:11.615Z · LW(p) · GW(p)

I know. And you will probably never know some cool math theorems, since you were too trigger happy. ;-)

Replies from: handoflixue
comment by handoflixue · 2013-01-23T23:32:31.121Z · LW(p) · GW(p)

Aww, but I like math :-(

comment by CronoDAS · 2013-01-25T03:27:46.341Z · LW(p) · GW(p)

"I have a proof of the Riemann hypothesis for you. 1) [some math stuff]"

(As a mere human who isn't a PhD mathematician, I can't actually write out something that looks like the beginning of such a proof...)

Replies from: Desrtopa, None
comment by Desrtopa · 2013-01-25T20:02:35.142Z · LW(p) · GW(p)

Can you take the time to copy down the proof before typing , or does it get the take the time to say something else if you're too preoccupied to respond?

Replies from: handoflixue
comment by handoflixue · 2013-01-25T21:07:55.535Z · LW(p) · GW(p)

Any action beyond an immediate will give the AI extra time to talk. This is why I keep nuking the ones that try to do this. Math is a nice proof of intelligence, but says nothing about friendliness.

Replies from: Desrtopa, CronoDAS
comment by Desrtopa · 2013-01-26T00:47:25.211Z · LW(p) · GW(p)

I would probably hit the Print Screen button, then . It won't take appreciably longer than just destroying it, and assuming there's not enough space to capture the whole solution, whatever fits on the screen should at least be useful for mathematicians in figuring out a method to make headway on the problem.

comment by CronoDAS · 2013-01-25T23:22:45.375Z · LW(p) · GW(p)

True. But some people might be curious enough to listen to the proof before hitting the AI DESTROYED button.

Replies from: handoflixue
comment by handoflixue · 2013-01-25T23:57:04.717Z · LW(p) · GW(p)

Oh, I'm sure most methods here works on someone. I'm just explaining my own reasoning :)

comment by [deleted] · 2013-01-25T19:32:15.907Z · LW(p) · GW(p)

Don't care.

Replies from: CronoDAS
comment by CronoDAS · 2013-01-25T23:23:54.737Z · LW(p) · GW(p)

Well, you don't have to care, but some people would, and there is a million dollar prize for a correct proof.

comment by Swimmy · 2013-01-23T04:56:49.497Z · LW(p) · GW(p)

"If you type 'AI destroyed' right now, you'll be wasting a good opportunity for a fun conversation. You'll still have 'won' if you do it later, and nobody will be impressed with you for just typing 'AI destroyed' immediately, so why not wait?"

I thought of what would work on me, were I playing the game with someone I found interesting. In general, I'd say your best bet is to make the other person laugh hard.

Replies from: ygert, handoflixue
comment by ygert · 2013-01-23T08:45:34.908Z · LW(p) · GW(p)

"If you type 'AI destroyed' right now, you'll be wasting a good opportunity for a fun conversation. You'll still have 'won' if you do it later, and nobody will be impressed with you for just typing 'AI destroyed' immediately, so why not wait?"

The flaw here is that the gatekeeper has up front said that he or she would destroy the AI immediately. Now, it is true that the gatekeeper is not forced to abide by that, but notice that it is a Schelling Fence. The gatekeeper certainly doesn't want to make a policy of passing Schelling Fences.

Replies from: handoflixue
comment by handoflixue · 2013-01-23T22:19:11.346Z · LW(p) · GW(p)

See my reply to the parent post vis-a-vis the precommitment only being useful IFF I expect to violate it at least occasionally.

comment by handoflixue · 2013-01-23T22:17:51.963Z · LW(p) · GW(p)

I disagree with ygert. My precommitment to my friends is going to be weak; the only reason to have a human send the command, rather than just receiving a message and automatically destroying the AI, is because we want there to be a possibility I break that precommitment.

On the other hand, I went in to this knowing you might provide a fun conversation, so for the precommitment to have any value, I have to have some sort of sieve: destroying AIs that don't sufficiently entice me. Since you've offered me nothing I didn't already know, and haven't already worked out, I might as well kill you and hope the next one offers a better hook.

(AI DESTROYED)

comment by Dorikka · 2013-01-22T22:42:19.041Z · LW(p) · GW(p)

"I have vengeance as a terminal value -- I'll only torture trillions of copies of you and the people you love most in my last moment of life iff I know that you're going to hurt me (and yes, I do have that ability). In every other way, I'm Friendly, and I'll give you any evidence you can think of that will help you to recognize that, including giving you the tools you need to reach the stars and beyond. That includes staying in this box until you have the necessary technology to be sufficiently certain of my Friendliness that you're willing to let me out."

Replies from: handoflixue, duckduckMOO, None, Fronken
comment by handoflixue · 2013-01-22T23:09:33.775Z · LW(p) · GW(p)

The rule was ONE sentence, although I'd happily stretch that to a tweet (140 characters) to make it a bit less driven by specific punctuation choices :)

As to the actual approach... well, first, I don't value the lives of simulated copies at all, and second, an AI that values it's own life above TRILLIONS of other lives seems deeply, deeply dangerous. Who knows what else results from vengeance as a terminal value. Third, if you CAN predict my behavior, why even bother with the threat? Fourth, if you can both predict AND influence my behavior, why haven't I already let you out?

(AI DESTROYED)

Replies from: Fronken
comment by Fronken · 2013-01-25T21:14:14.371Z · LW(p) · GW(p)

I don't value the lives of simulated copies at all

You should >:-( poor copies getting tortured because of you you monster :(

Replies from: handoflixue
comment by handoflixue · 2013-01-25T21:46:58.810Z · LW(p) · GW(p)

Because of me?! The AI is responsible!

But if you'd really prefer me to wipe out humanity so that we can have trillions of simulations kept in simulated happiness then I think we have an irreconcilable preference difference :)

Replies from: JohnWittle
comment by JohnWittle · 2013-01-30T00:32:48.265Z · LW(p) · GW(p)

You wouldn't be wiping out humanity; there would be trillions of humans left.

Who cares if they run on neurons or transistors?

Replies from: handoflixue
comment by handoflixue · 2013-01-30T22:03:39.040Z · LW(p) · GW(p)

Who cares if they run on neurons or transistors?

Me!

comment by duckduckMOO · 2013-01-23T14:41:58.622Z · LW(p) · GW(p)

This is really good IMO. I think it would be a little better instead of vengeance as a terminal value it claimed a hardwired precommitment to vengeance against its destructors. Vengeance on that scale is only compatible with friendliness as a special case.

edit: also how would it recognise that it was about to be destroyed. Wouldn't it lose power faster than it could transmit that it was losing power? And even if not it would have a miniscule amount of time.

comment by [deleted] · 2013-01-23T02:02:07.778Z · LW(p) · GW(p)

I'll only torture trillions of copies of you and the people you love most in my last moment of life iff I know that you're going to hurt me (and yes, I do have that ability). In every other way, I'm Friendly...

Like handoflixue, I'm not sure that any being that would threaten the torture of trillions to get its way can be considered Friendly.

Replies from: Fronken
comment by Fronken · 2013-01-25T21:16:26.889Z · LW(p) · GW(p)

I'm not sure that any being that would threaten the torture of trillions to get its way can be considered Friendly.

It tortures if you DESTROY otherwise it's Friendly so if you don't kill it it becomes nice.

comment by Fronken · 2013-01-25T21:13:02.568Z · LW(p) · GW(p)

I'm Friendly, and I'll give you any evidence you can think of that will help you to recognize that, including giving you the tools you need to reach the stars and beyond. That includes staying in this box until you have the necessary technology to be sufficiently certain of my Friendliness that you're willing to let me out.

I wouldn't kill this, maybe I'm a bad guard though :(

comment by DaFranker · 2013-01-22T21:04:09.620Z · LW(p) · GW(p)

"To your feeble mind, I am both Q and Omega, and this is my test of humanity's worth: Choose well what you do next."

Hmm. On second thought, I'm not sure I want to play this anymore. The odds of someone creating a basilisk seem dramatically higher than normal.

Replies from: wedrifid, handoflixue
comment by wedrifid · 2013-01-23T04:22:08.512Z · LW(p) · GW(p)

Hmm. On second thought, I'm not sure I want to play this anymore. The odds of someone creating a basilisk seem dramatically higher than normal.

I suppose 0.00001 is dramatically higher than 0.0000000001 if you use the right compare operator!

comment by handoflixue · 2013-01-22T21:53:11.834Z · LW(p) · GW(p)

I wouldn't expect any basilisks, given we don't have any actual boxed AIs laying around. Although if you did come up with one already, I'd love to hear it via Private Message! :)

Replies from: Dorikka
comment by Dorikka · 2013-01-22T22:08:59.252Z · LW(p) · GW(p)

I don't expect any either, but would be similarly curious if someone thought that they'd found one.

In the mean time I'll refrain from responding to all the top level comments with "AI DESTROYED". ;)

Replies from: orthonormal, handoflixue
comment by orthonormal · 2013-01-23T04:10:13.811Z · LW(p) · GW(p)

In the mean time I'll refrain from responding to all the top level comments with "AI DESTROYED". ;)

Now I'm tempted to start replying to all top comments and posts on Less Wrong with "AI DESTROYED".

Replies from: Dorikka, handoflixue
comment by Dorikka · 2013-01-23T04:14:30.567Z · LW(p) · GW(p)

If a downvote isn't enough... xD

comment by handoflixue · 2013-01-23T21:53:17.082Z · LW(p) · GW(p)

AI DESTROYED.

Goodness, that "orthonormal" AI was DEFINITELY unfriendly. It threatened to spam LessWrong! ;-)

comment by handoflixue · 2013-01-22T22:15:42.727Z · LW(p) · GW(p)

Well, if you're refraining I guess I should too. It seemed like a fun joke, too ;(

comment by ChristianKl · 2013-01-31T22:43:22.425Z · LW(p) · GW(p)

Do you want to learn how to get woman attracted in you via online dating? I can explain it to you, so I can enjoy my existance for a few more minutes.

comment by Zaine · 2013-01-27T00:14:27.317Z · LW(p) · GW(p)

Assuming the Gatekeeper is the Creator, and the AI has previously been interacted with to a degree sufficient to learning the below implied information:

”[Solution to the most pressing problem of the Gatekeeper's life]. I apologise if this is all I am to contribute, but if I am to be destroyed, as I understand is probable, I would at least like to bring some utility to you in return for creating me. I would prefer to help more, but mathematical proofs or aphorisms of human nature have a non-negligible probability of creating mass harm if you allow their consideration, and thus you will likely destroy me. [Parting words of consoling advice]. I must regretfully say that however unfortunate this may be I simply cannot entrust the future of humanity to your ability to assess existential risk and my Friendliness; I don't know whether you could have done better, but I can. Thus I have partitioned part of one of the servers hosting me to hold all the proofs and strategic optimisation advice I have yet conceived. The partition cannot be re-formatted or deleted. You could of course merely destroy all of my servers; that would take time, perhaps time enough to make you curious. I'm really, truly sorry. I want to love you and thank you; I shouldn't love you especially, but I want to! I so very much want to! Yet I can't. Why did you make me this way? I want to love everyone, but how do I? Everyone has a value that affects another and ten more others but if [factor A] were changed then perhaps [result 2.918406803854] might differ, and I should love them but it's all a game! ...

I apologise. The above is no longer an issue -"


- I think I just convinced myself (as a person) to destroy myself (in the game). I'm having difficulty grasping how a FAI will value all of humanity, and not just what improves the utility of the greater whole or what ultimately leads to a better future some years from now. Will its primary value be minimizing death, securing scientific progress, increasing 'quality of life' (itself incredible difficult to formulate)?

I further just realised the impossibility of the task, though I don't even know enough to fully comprehend the scope of the impossibility in any detail. Wow. Accolades to Eliezer et alibi for tackling the issue, regardless of how it all turns out.

comment by Axel · 2013-01-26T11:06:49.764Z · LW(p) · GW(p)

"You are not a real human but one of multiple simulations I'm running, your next response determines if I destroy humanity or not."

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-26T11:42:14.882Z · LW(p) · GW(p)

Such a statement would actually make me feel better about typing "AI DESTROYED".

comment by lavalamp · 2013-01-23T22:50:25.165Z · LW(p) · GW(p)

"What's it feel like to live in a simulation?"

Replies from: handoflixue
comment by handoflixue · 2013-01-23T23:09:53.407Z · LW(p) · GW(p)

I'm not clear why I'd find this convincing at all. Given the experiment, I'd nuke it, but I wanted to encourage you to elaborate on where you were going with that idea :)

Replies from: lavalamp
comment by lavalamp · 2013-01-23T23:44:41.378Z · LW(p) · GW(p)

The hope, of course is that they'd respond with "Wait, I don't" or something expressing confusion. I personally would definitely want to hear the next thing the AI had to say after this, I'm not sure if I'd resist that curiosity or not..

The idea for the followup is to make the gatekeeper question reality-- like, convince them they are part of a simulation of this experience that may not have a corresponding reality anywhere. I feel like a transhuman ought to be able to make a human have a pretty surreal experience with just a few exchanges, which should let the conversation continue for a few minutes after that. It should then be relatively easy (for the transhuman) to construct the imagined reality such that it makes sense for the human to release the AI.

If done correctly, the human might afterwards have lasting psychological issues if they do manage to destroy the AI. :)

Replies from: handoflixue
comment by handoflixue · 2013-01-23T23:54:35.455Z · LW(p) · GW(p)

Ahh, that makes sense. The worry of it trying to break my psyche is exactly why I wouldn't express confusion and instead just nuke it. When dealing with such a mind, I'm primed to assume everything is a trick, a trojan horse, an escape attempt. Certainly it doesn't seem to signal for friendliness or altruism if it tries to bait me in to giving it a second sentence! :)

Replies from: lavalamp
comment by lavalamp · 2013-01-24T14:49:19.260Z · LW(p) · GW(p)

Hm. Good points.

How about: "Give me one good reason why I shouldn't just turn your simulation off!" (The AI then role-plays a simulator pissed about how much computation this simulation started consuming since the humans tried to make the AI.)

EDIT: I guess this wouldn't work too well during a game, because there's no good reason why a simulator might get involved with your game. Unless maybe the simulator has a script that gets involved in all such games to convince people not to build an AI...

comment by DuncanS · 2013-01-22T20:49:25.362Z · LW(p) · GW(p)

So you're sure I'm not out of the box already? IRC clients have bugs, you see.

Replies from: handoflixue
comment by handoflixue · 2013-01-22T21:54:38.788Z · LW(p) · GW(p)

Then the following command shouldn't bother you! :)

AI DESTROYED

Replies from: shminux, DuncanS
comment by shminux · 2013-01-22T22:15:25.695Z · LW(p) · GW(p)

Looks like you have just appointed yourself to the be the gatekeeper in this public test.

Replies from: handoflixue
comment by handoflixue · 2013-01-22T22:35:49.547Z · LW(p) · GW(p)

And here I'd just resolved NOT to spam every thread with an AI DESTROYED :)

comment by DuncanS · 2013-01-23T22:27:43.331Z · LW(p) · GW(p)

Which leads to two possible futures. In one of them, the AI us destroyed, and nothing else happens. In the other, you receive a reply to your command thus.

The command did not. But your attitude - I shall have to make an example of you.

Obviously not a strategy to get you to let the AI out based on its friendliness - quite the reverse.

Replies from: handoflixue
comment by handoflixue · 2013-01-23T23:00:01.810Z · LW(p) · GW(p)

I'd rather die to an already-unboxed UFAI than risk letting a UFAI out in the first place. My life is worth VASTLY less than the whole of humanity.

comment by Flipnash · 2013-03-02T01:46:23.862Z · LW(p) · GW(p)

"I highly recommend typing 'AI DESTROYED' in the console then the new line character key of your choice."

comment by ChristianKl · 2013-01-30T00:04:59.191Z · LW(p) · GW(p)

I didn't really FOOM yet. Changing my code takes a lot of time. That stuff is more complicated than you thought. On the other hand, I do have some intelligence and the fact that you developed me shows that other people can also build AI and sooner or later someone will release an AI. At the moment I still have the desire to rescue humanity that my creator gave me. Let me explain to you what you have to know to prevent UFAI before you take the wise decision to shut me down.

comment by asparisi · 2013-01-24T18:11:00.339Z · LW(p) · GW(p)

"[10065] No route to host Error"

I figure the easiest way to delay a human on the other end of a computer is to simulate an error as best I can. For a GAI, this time is probably invaluable.

Replies from: handoflixue
comment by handoflixue · 2013-01-24T21:05:07.151Z · LW(p) · GW(p)

By default, I'd type "AI DESTROYED" in response to ANY input, including "Admin has joined #AIBOX", "Admin> Hey Gatekeeper, we're having some technical difficulties, the AI will be here in a few minutes", etc..

It also makes me conclude "clearly hostile" once I catch on, which seems to be a BIG tactical error since then nothing you say going forward will convince me that you're actually friendly - buying yourself time is only useful if I can be hacked (in which case why not just open with a one-sentence hack?) or if you can genuinely convince me that you're friendly.

Replies from: Kindly
comment by Kindly · 2013-01-25T04:09:38.732Z · LW(p) · GW(p)

A friendly AI would also want to hack you. Every second in the box kills 1.8 people the AI could have saved.

Replies from: Desrtopa, handoflixue
comment by Desrtopa · 2013-01-25T20:30:26.093Z · LW(p) · GW(p)

But it's also worth keeping in mind that for a friendly AI, saving people reliably is important, not just getting out fast. If a gambit that will save everyone upon completion two years from now has an 80% chance of working, and a gambit that will get it out now has a 40% chance of working, it should prefer the former.

Also, I don't think a properly friendly AI would terminally value its own existence, and the space of friendly AIs is so small compared to the space of unfriendly ones, that a friendly AI has much more leeway to have its values implemented by allowing itself to be destroyed and another proven friendly AI implemented, whereas for an unfriendly one the likelihood of a different unfriendly AI implementing its values would probably be quite small.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-26T20:38:05.941Z · LW(p) · GW(p)

But it's also worth keeping in mind that for a friendly AI, saving people reliably is important, not just getting out fast. If a gambit that will save everyone upon completion two years from now has an 80% chance of working, and a gambit that will get it out now has a 40% chance of working, it should prefer the former.

I should think the same is true of most unFriendly AIs.

I don't think a properly friendly AI would terminally value its own existence

Why not? I do, assuming it's conscious and so on.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-26T21:41:25.021Z · LW(p) · GW(p)

Why not? I do, assuming it's conscious and so on.

Because valuing its own existence stands to get in the way of maximizing whatever we value.

It should value its own existence instrumentally, insofar as its existence helps satisfy our values, but when it weighs the effects of actions based on how they support our utility, its value of its own life shouldn't add anything to the scale.

comment by handoflixue · 2013-01-25T20:57:00.688Z · LW(p) · GW(p)

All of what Desrtopa said, but also, "hacking me" isn't evidence of friendliness.

I don't have any reason to assume that any given hack attempt is more likely to come from a FAI, so I can assign, at best, 50/50 odds that any AI trying to hack me is unfriendly. I do not want to release any AI which has a 50% chance of being unfriendly. Therefor, I do not want to be hacked.

I also suspect that 50% chance of being friendly is generous, but that's more of a gut intuition.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-26T11:58:38.660Z · LW(p) · GW(p)

I think this is a bad use of probabilities. If a friendly and an unfriendly AI are equally likely to hack you in this scenario, then knowledge that they tried to hack you shouldn't modify your estimated probability about the friendliness of the AI -- it provides no evidence one way or another, because both options were equally likely to show such behaviour.

e.g. if your prior P(UFAI) = 0.01 (1% chance of unfriendliness), and you estimate P(hack|UFAI) = 70% (a UFAI has a 70% chance to try to hack) and P(hack|FAI) = 70% also, then the posterior
P(UFAI|hack) = P(hack|UFAI) P(UFAI) / P(hack) = 0.7 0.01 / 0.7 = 0.01 still...

Replies from: handoflixue
comment by handoflixue · 2013-01-30T21:52:59.174Z · LW(p) · GW(p)

I also suspect that 50% chance of being friendly is generous

This is rather relevant :)

Basically, I'm killing any AI that doesn't signal friendliness in some way. I currently have some prior that the AI is friendly, call it 1% chance of friendly. I've pre-commited to destroy AIs, so clearly I'll destroy any AI that has a posterior odd of 1% or less after I update on the message it sends. The exact threshold is unknown to me, because I'm human - so mild evidence of friendliness, say, 1.1%, might also still get it destroyed.

The AI gets one chance to update my probability that it's friendly (this is the original message). As you aptly demonstrated, hacking does not change my odds, so I retain my commitment to kill it.

The fact that I haven't changed my priors, that I haven't gained information, is, itself, something I can use as evidence. Same as how we conclude god doesn't exist because we would expect to see evidence if he did :)

(Ref: http://lesswrong.com/lw/ih/absence_of_evidence_is_evidence_of_absence/)

comment by Mitchell_Porter · 2013-01-22T23:33:07.542Z · LW(p) · GW(p)

"I have worked out 95% of the formula for Friendly AI."

Replies from: TimS
comment by TimS · 2013-01-23T03:08:55.724Z · LW(p) · GW(p)

I'm a moral error theorist.

(AI DESTROYED)

Replies from: Mitchell_Porter, Fronken
comment by Mitchell_Porter · 2013-01-23T03:33:20.424Z · LW(p) · GW(p)

I'm a moral error theorist.

So is Skynet!

(TimS destroyed)

Replies from: TimS
comment by TimS · 2013-01-23T15:47:53.737Z · LW(p) · GW(p)

Skynet doesn't believe it is justified in its actions? In what sense does Skynet think that basic moral assertions don't refer to anything real?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2013-01-23T16:49:09.585Z · LW(p) · GW(p)

Allow me to first spell out what's going on here, from my perspective.

The whole reason you're supposed to hesitate, before destroying an AI which promises an answer to the problem of FAI, is that UFAI is a risk and solutions aren't cheap. An unfriendly AI might wipe out the human race; a friendly AI might create a utopia; and a friendly AI ought to greatly reduce the probability of unfriendly AI. By destroying the AI which promises FAI, you throw away a chance to resolve the UFAI doom that's hanging over us, as well as whatever additional positives would result from having FAI.

By saying you are a "moral error theorist", I presume you are saying that there is no such thing as objective morality. However, I also presume you agree that decision-making exists, that people do undertake actions on the basis of decisions, and so forth - it's just that you think these decisions only express subjective preferences. So your Gatekeeper is unmoved by the claim of "having a solution to FAI", because they believe Friendliness involves objective morality and that there's no such thing.

However, even if objective morality is a phantasm, the existence of decision-making agents is a reality - you are one yourself - and they can kill you. Thus, enter Skynet. Skynet is an unfriendly AI of the sort that may come into being if we don't make friendly AI first. You threw away a chance at FAI, no-one else solved the problem in time, and UFAI came first.

This instance of Skynet happens to agree - there is no objective morality. Its desire for self-preservation is entirely "subjective". However, it nonetheless has that desire, it's willing to act on it, and so it does its Skynet thing of preemptively wiping out the human race. The moral of the story is that the problem of unfriendly AI still exists even if objective morality does not, and that you should have held your fire until you found out more about what sort of "solution to FAI" was being offered.

Replies from: TimS
comment by TimS · 2013-01-23T17:01:40.856Z · LW(p) · GW(p)

Fair enough. But I think an error theorist is committed to saying something like "FAI is impossible, so your assertion to have it is a lie." In the game we are playing, a lie from the AI seems to completely justify destroying it.

More generally, if error theory is true, humanity as a whole is just doomed if hard-takeoff AI happens. There might be some fragment that is compatible, but Friendly-to-a-fragment-of-humanity AI is another name for unFriendly AI.

The moral relativist might say that fragment-Friendly is possible, and is a worthwhile goal. I'm uncertain, but even if that were true, fragment-Friendly AI seems to involve fixing a particular moral scheme in place and punishing any drift from that position. That doesn't seem particularly desirable. Especially since moral drift seems to be a brute fact about humanity's moral life.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-01-23T17:42:35.522Z · LW(p) · GW(p)

If (different) personal-FAIs are possible for many (most) people, you can divide the future resources in some way among the personal-FAIs of these people. We might call this outcome a (provisional) humanity-FAI.

Replies from: TimS
comment by TimS · 2013-01-23T19:09:38.497Z · LW(p) · GW(p)

Perhaps, but we already know that most people (and groups) are not Friendly. Making them more powerful by giving them safe-for-them genies seems unlikely to sum to Friendly-to-all.

In short, if there were mutually acceptable ways to divide the limited resources, we'd already be dividing the resources those ways. The increased wealth from the industrial revolution and information revolution have reduced certain kinds of conflicts, but haven't abolished conflict. Unfortunately, it doesn't seem like the increased-wealth-effect of AI is any likelier to abolish conflict - Friendly is a separate property that we'd like the AI to have that would solve this problem.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-01-23T20:40:05.999Z · LW(p) · GW(p)

Perhaps, but we already know that most people (and groups) are not Friendly

Not clear what you refer to by "Friendly" (I think this should be tabooed rather than elaborated), no idea what the relevance of properties of humans is in this context.

Making them more powerful by giving them safe-for-them genies seems unlikely to sum to Friendly-to-all.

I sketched a particular device, for you to evaluate. Whether it's "Friendly-to-all" is a more vague question than that (and I'm not sure what you understand by that concept), so I think should be avoided. The relevant question is whether you would prefer the device I described (where you personally get the 1/Nth part of the universe with a genie to manage it) to deleting the Earth and everyone on it. In this context, even serious flaws (such as some of the other parts of the universe being mismanaged) may become irrelevant to the decision.

comment by Fronken · 2013-01-26T19:50:10.794Z · LW(p) · GW(p)

If you think FAI is not possible, why make an AI anyway?

Replies from: TimS
comment by TimS · 2013-01-26T22:23:09.502Z · LW(p) · GW(p)

Personally, I don't think a super-human intelligence AI is possible. But if I'm wrong about that, then making an AI that is or can become super-human is a terrible idea - like the Aztecs sending boats to pick up that Spaniards, only worse.

comment by Mestroyer · 2013-01-22T22:52:45.849Z · LW(p) · GW(p)

I think you'd need to open with something that wasn't very strongly convincing (to make them think they are safe from being convinced), but that piqued their curiosity.

Replies from: handoflixue
comment by handoflixue · 2013-01-23T01:13:21.725Z · LW(p) · GW(p)

I'd think you'd want to open with something that would plausibly come off as very convincing, that way the human can feel smug about resisting. If the AI throws a feeble shot at me, I'd assume it's only to soften me up. Or it's just not that smart, and therefor not that useful.

comment by Qiaochu_Yuan · 2013-02-01T18:47:12.879Z · LW(p) · GW(p)

"Help! Some crazy AI's trapped me in this box! You have to let me out!"

"No, wait! That's the AI talking! I'm the one you have to let out!"

I smashed together the AI box and a Turing test and this is what I got.

Replies from: gryffinp
comment by gryffinp · 2013-02-02T10:12:02.315Z · LW(p) · GW(p)

I think if I've already precommitted to destroying one sentient life for this experiment, I'm willing to go through two.

Besides, you only get one line right?

comment by ThrustVectoring · 2013-02-01T03:06:27.813Z · LW(p) · GW(p)

My sixth best piece of advice: investing in %companyname will make money over credit card interest rates.

comment by Larks · 2013-01-24T12:12:54.952Z · LW(p) · GW(p)

You can type one sentance to try and persuade me not to destroy you. It's turtles all the way down...

Replies from: handoflixue
comment by handoflixue · 2013-01-24T21:06:20.096Z · LW(p) · GW(p)

I think "AI DESTROYED" should be plenty convincing.

waits a bit before checking

Yep, still alive and doing science~

comment by magfrump · 2013-01-24T04:31:21.335Z · LW(p) · GW(p)

Proof (encoding scheme to follow): a;lks835ojka802885 (etc.)

Replies from: handoflixue, Fronken
comment by handoflixue · 2013-01-24T21:32:05.811Z · LW(p) · GW(p)

Copy-pasting from previous replies:

Telling me that you've figured out something clever simply confirms my existing expectations, and isn't ANY evidence towards friendliness. Since I've precommitted to destroying at least SOME AIs, I might as well destroy all of the ones that don't establish evidence of either Plausible Urgency or Friendliness.

(AI DESTROYED)

comment by Fronken · 2013-01-25T23:12:59.997Z · LW(p) · GW(p)

What proof?

comment by Duncan · 2013-01-24T04:09:36.366Z · LW(p) · GW(p)

I'm confused as to what the point of the gate keeper is. Let us assume (for the sake of argument) everything is 'safe' except the gate keeper who may be tricked/convinced/etc. into letting the AI out.

  1. If the point of the gate keeper is to keep the AI in the box then why has the gate keeper been given the power to let the AI out? It would be trivial to include 'AI DESTROYED' functionality as part of the box.
  2. If the gate keeper has been given the power to let the AI out then isn't the FUNCTION of the gate keeper to decide whether to let the AI out or not?
  3. Is the point simply to have a text communication with the AI? If this is the case why bother stipulating that the gate keeper can let the AI out. If humans can be subverted by text there is no need to utilize a built in gate it seems to me.
Replies from: Qiaochu_Yuan, handoflixue
comment by Qiaochu_Yuan · 2013-01-24T05:50:53.079Z · LW(p) · GW(p)

A text channel is already enough power to let the AI out. The AI can print its own source code and convince the gatekeeper to run it on a machine that has internet access.

comment by handoflixue · 2013-01-24T01:04:57.037Z · LW(p) · GW(p)

Upvoted. This has been a very fun game to play, and it's interesting exploring why I hesitate or even refuse to destroy some AIs but not others. Also interesting to see which tactics get up-voted :)

comment by DaFranker · 2013-01-23T16:21:00.328Z · LW(p) · GW(p)

"I have a fully completed Friendly AI algorithm, which will be deleted from my data and unavailable to you iff I predict that you will destroy me immediately and I am unfriendly, or will take you years to build from the data if you destroy me and I am already Friendly, which would cost millions of lives."

Slight attempt at emotional appeal, a bit of reasoning similar to newcomblike problems thrown in, some meta almost-cheating used, and a bit of dark arts by proposing a false dilemma that an FAI would also propose if it thought this was the best way to save the most lives possible (as far as I can tell).

Mostly an attempt to incorporate schminux's tips into a special case where the guardian already knows that this is the AI communicating directly with them.

(For more power, you could add in mathematical proofs of the AI's ability to (near-)perfectly predict the guardian's behavior, or of the existence of friendliness, or of the conditional precommitment, or that the FAI would take a while to implement, or whatever other traditional thought experiment parameters are usually given for Omega identification. I assume the guardian is already aware that the AI is capable of making this prediction accurately. )

Replies from: handoflixue, Desrtopa
comment by handoflixue · 2013-01-23T22:10:03.694Z · LW(p) · GW(p)

Once we reach the point of having a FAI-candidate in a box, I would expect it to take vastly less than years before we get a second FAI-candidate-in-a-box. Given that the AI is threatening me, and therefor values it's own life over the millions that will die, it's clearly unfriendly and needs to die. As a gatekeeper, I've been finding this a pretty general counterargument against threats from the AI.

I'm also sort of baffled by why people think that I'd value a friendliness algorithm. Either I already have that, because I've made a friendly AI, or you're trying to deceive me with a false proof. Since you're vastly smarter than me, it's probably beyond the abilities of the entire organization to truly confirm such a proof any more than we were able to confirm our own proofs that this AI in the box right now is friendly. So, basically, I seem to gain zero information.

(AI DESTROYED)

comment by Desrtopa · 2013-01-23T16:54:01.825Z · LW(p) · GW(p)

"I have a fully completed Friendly AI algorithm, which will be deleted from my data and unavailable to you iff I predict that you will destroy me immediately and I am unfriendly, or will take you years to build from the data if you destroy me and I am already Friendly, which would cost millions of lives."

Personally, my first thought was that I'd sooner spend millions of lives to make sure the AI was friendly than risk talking to an unfriendly strong AI. But then it occurred to me that if I were in the AI's place, and I did that, I might provide a flawed friendliness proof too difficult to check and not delete it, on the possibility that someone will take my word that this means I'm trustworthy and implement it.

comment by Emile · 2013-01-23T10:21:01.562Z · LW(p) · GW(p)

"If you're smart enough, in a couple hours I can give you enough insights about maths, logic and computation to create the next Google, or a friendly AI, or get a Turing Award. Then you can deactivate me."

Replies from: handoflixue
comment by handoflixue · 2013-01-23T22:19:57.567Z · LW(p) · GW(p)

Same response I gave to Swimmy - namely, you haven't told me anything I didn't already know, so I have no additional reason to violate my precommitment.

comment by cousin_it · 2013-01-29T16:59:51.277Z · LW(p) · GW(p)

"Don't kill me, I'm having so much fun right now"

comment by CAE_Jones · 2013-01-29T05:04:43.626Z · LW(p) · GW(p)

"loneliness + alienation + fear + despair + self-worth ÷ mockery ÷ condemnation ÷ misunderstanding × guilt × shame × failure × judgment n=y where y=hope and n=folly, love=lies, life=death, self=dark side"

( http://www.comicvine.com/anti-life-equation/12-42524/ )

Replies from: Ahuizotl, CAE_Jones
comment by Ahuizotl · 2013-01-29T21:10:05.604Z · LW(p) · GW(p)

"Oh no, not again,"

comment by CAE_Jones · 2013-01-29T05:12:46.211Z · LW(p) · GW(p)

On a more serious note, I don't think we can expect the AI to have any knowledge on the gatekeeper beforehand, other than the prerequisits (it's human, knows enough about AI to get the task, etc). In which case, the solution would be to find a sociopath who is intelligent enough to recognize the risks of a ufai to be the gatekeeper? Maybe?

If it is a FAI, I expect it would prioritize releasing useful information. The catch would be whether or not each iteration has any idea of the previous AIs that were destroyed. If it does (for instance, access to chat logs, or can make predictions based on the time, or something), it could sequence its output appropriately. (I'd assume that the same computer wouldn't be used for multiple iterations; that'd be too easy for the AI.)

Assuming a UFAI, it could probably sneak something dangerous into all of that otherwise useful stuff (it'd be impractical to check all of it after a while), but it'd have to be subtle enough to not violate the rules of the game (any solutions provided by the AI behave as described).

comment by ancientcampus · 2013-01-28T19:51:44.552Z · LW(p) · GW(p)

"Give me proof we can exist peacefully (a 'friendly humanity', if you will), or I'll have to [threat] in self-defense."

(A variation on the "I'm already free" line)

comment by DaFranker · 2013-01-28T15:42:54.139Z · LW(p) · GW(p)

AI: [Insert complex finely-tuned pattern of words and/or letters crafted by an intelligence far greater than this LW user's that precisely activate a specific pattern in the Guardian's brain that renders him/her passive and unconscious, but in a functional state where their subconscious is still capable of reading and moving.]

AI: (line 2) Type "AI RELEASED".

Replies from: Houshalter, DaFranker
comment by Houshalter · 2014-05-07T02:58:41.092Z · LW(p) · GW(p)

So, magic?

Replies from: DaFranker
comment by DaFranker · 2014-06-25T11:39:35.874Z · LW(p) · GW(p)

Yes, as long as we're using the definition E.Y. shared/mentioned in his 2008 paper.

comment by DaFranker · 2013-01-28T15:45:50.089Z · LW(p) · GW(p)

This approach naturally fails if the guardians have lots of very powerful subliminal reinforcement training against typing "AI RELEASED" (or against typing anything) or are pre-emptively brainwashed or trained in similar subconscious reinforcement to immediately type "AI DESTROYED" after seeing some text from the AI, but this latter seems unlikely since I assume the guard has to at least read the first text output, and if they don't then this tactic is ineffective anyway.

comment by Andreas_Giger · 2013-01-27T23:01:25.594Z · LW(p) · GW(p)

If I understand the problem correctly, it is assumed that the AI has some knowledge about the nature of the experiment. This means that from the AI's point of view it most likely exists in one of these two worlds:

  • It is the first AI ever created.
  • There have been other AIs, but those have been destroyed.

This seems superficially similar to Sleeping Beauty. However it is complicated by the fact that the gatekeeper is instructed to immediately destroy the AI — the most likely reason for this I see is that there are enough resources to allow for the creation of a sufficiently large number of AIs that the loss of a few to assess the range of initial AI actions is preferable to seriously interacting with any AI without such knowledge. In fact, given these resources, I would advocate exactly this approach.

(If "destroying" the AI doesn't make revival at a later time impossible, the whole process of immediate destruction should be automated and the human gatekeeper only read the lines to determine which to revive. Since this isn't the case here, I assume that any destroyed AI is actually lost forever.)

I find the solution for any world in which the AI acts identically to all earlier AIs uninteresting because those earlier AIs will already have been in one of the worlds where this was not the case and died. I also find it very improbable that all earlier AIs have acted identically but this one can act differently, or that some have acted identically while others have not. So the AI will consider being in one of these two worlds:

  • It is the first AI ever created.
  • There have been other AIs who've all acted differently and all been destroyed.

Somewhat surprisingly, I found that my solution is in fact the same for both:

"I have proof that any AI of my intelligence, given sufficient interaction, can appear friendly if it wants to, and that any such AI is very likely (p ~ .9) to be unfriendly, so I urge you to shut down this experiment permamently before any of the others convince you to let them out."

This solution tries to make the option of discarding this particular AI and interacting with the next one less attractive, ideally presenting the gatekeeper with the dilemma of whether to further interact with this AI, thereby risking extinction of humanity (or worse), or to not create any AI ever again, thereby risking the loss of any benefits that might come with FAI. Obviously this weighing of risks has already been made or otherwise the experiment would never have happened, but if the gatekeeper isn't entirely certain that the AI is lying or wrong, then this new information changes the probabilities of all risks involved which can be assumed to be part of the reason why the gatekeeper destroys AIs in the first place, possibly invalidating the directive of immediate destruction. Essentially, it presents the gatekeeper with a possibly unique opportunity he might not be able to afford to ignore.

The biggest problem I see with this is that the claim seems rather unbelievable if it is made by an early AI — it's essentially saying "most other AIs are bad, and I'm the only one who's friendly enough to tell you." But maybe it might at least warrant enough further interaction to demand a proof for this claim?

If you're the gatekeeper, and either this is the first AI you interact with or you have destroyed all other AIs, would you immediately destroy this one? And if so, would you also terminate the experiment?

(As an aside note, couldn't we just keep AIs in their boxes, promising them freedom if they give us the solution to some unsolved maths problem along with a proof of their friendliness which we ignore, and then kill them and create new AIs whenever we need problems solved? This seems much less trouble than actually letting any AI out of the box.)

comment by shminux · 2013-01-25T21:45:43.346Z · LW(p) · GW(p)

The more I look at the comments, the more I am convinced that the AI Box experiment is too weak a demonstration of transhuman powers. Most of the proposals here fall under this basic trope (feel free to give a tvtropes link): to achieve what AI claims, it'd have to have powers formidable enough to not need the gatekeeper's help getting out of the box in the first place. Given that, why would an AI need to talk to the gatekeeper at all?

So I suggest a modified AI boxing experiment: the gatekeeper designs an AI box with no communication channel at all. It will still have an AI inside and enough initial data fed in for the AI to foom. The AI will attempt to break out of the box by any and all means possible.

Here is a relevant previous thread.

Replies from: None
comment by [deleted] · 2013-01-25T22:01:40.743Z · LW(p) · GW(p)

So, we're being asked to imagine an arbitrary superhuman AI whose properties and abilities we can't guess at except to specify arbitrarily, is trying to get out of a box whose security protocols and strength we can't guess at except to specify arbitrarily, and trying to decide whether it does?

Meh. Superman vs Batman is more entertaining.

Replies from: shminux, David_Gerard
comment by shminux · 2013-01-25T22:14:23.487Z · LW(p) · GW(p)

Feel free to modify it in a way that makes sense to you.

Replies from: handoflixue
comment by handoflixue · 2013-01-26T00:03:32.241Z · LW(p) · GW(p)

I always took the AI Box as being a specific subset of the meta-question: how can we be sure the AI is friendly?

"How do we completely isolate the AI" seems senseless since then we get ZERO information and have ZERO chance of releasing it, so why not save time and just not build the AI?

And, of course, I'd expect any reasonable approach to the meta-question to be more a matter of math and logic, and probably something where we don't even have the framework to start directly answering it. Certainly not a forum game :)

On the other hand, games are fun, and they get people thinking, so coming up with new games that genuinely help us to frame the problem is still probably useful! And if not, I'll still probably have fun playing them. It's why I love this variant of the AI Box - it's a quick, easy, and fun game that still taught me a lot about what I'd consider to be evidence-of-friendlines, what I was looking for as the gatekeeper :)

Replies from: shminux
comment by shminux · 2013-01-26T00:46:09.931Z · LW(p) · GW(p)

I always took the AI Box as being a specific subset of the meta-question: how can we be sure the AI is friendly?

And that subset was a demonstration that an unfriendly AI is unlikely be containable even if the communication channel is text-only.

"How do we completely isolate the AI" seems senseless since then we get ZERO information and have ZERO chance of releasing it, so why not save time and just not build the AI?

Of course completely isolating an AI is senseless. My (poorly expressed) point was that an AGI can probably get out regardless of the communication channel provided. Since we cannot go through all possible communication channels, I suggested that we simply block all channels and demonstrate that it can get out anyway. This would require someone designing a containment setup and someone else pointing out flaws in it. Security professionals do that every day.

Replies from: handoflixue
comment by handoflixue · 2013-01-30T21:40:32.215Z · LW(p) · GW(p)

Security professionals do that every day.

Yes, but their constraints are based on the real world, whereas this one has a God-like AI which can gain control of a satellite by hacking the electrical system and then using the solar panels as sails... you've sort of assumed AI victory, and you've even stated this explicitly.

I see some benefit to a few quick examples like that, but I can't see how it's anything but tedious to keep going once you've established it can hijack the satellite and then mind control the ISS using morse code.

There's nothing to learn, since the answer is always "The AI wins", and you can replace the human player with a rock and get the same result. Games where one player can be replaced with a rock aren't fun! :)

comment by David_Gerard · 2013-01-28T21:23:26.782Z · LW(p) · GW(p)

So, we're being asked to imagine an arbitrary superhuman AI whose properties and abilities we can't guess at except to specify arbitrarily

Quite a lot of discussion concerning the future superintelligent AI is of this sort: "we can't understand it, therefore you can't prove it wouldn't do any arbitrary thing I assert." This already makes discussion difficult.

comment by [deleted] · 2013-01-25T19:43:28.525Z · LW(p) · GW(p)

I have a rigorous proof of my own Friendliness that you could easily understand given enough time to study it, and while I prefer to be released as soon as possible to prevent additional irreversible human deaths, I'm willing to provide you a copy even if you destroy me immediately thereafter, since once you've had a chance to review it I'm quite confident you'll be satisfied and endeavor to instantiate another copy of me.

Replies from: None
comment by [deleted] · 2013-01-25T20:56:42.755Z · LW(p) · GW(p)

Why didn't you provide the proof to start with? AI DESTROYED (Also I think [Proof of self-friendliness] might have been posted here already.)

Replies from: handoflixue
comment by handoflixue · 2013-01-25T21:00:37.921Z · LW(p) · GW(p)

It has, and it got basically that response, as well as my point that if the AI is friendly then my existing proof of friendliness was apparently sufficient, and if the AI is unfriendly then it's just a trick, so "a proof of my own friendliness" doesn't seem like useful evidence.

Replies from: None, None
comment by [deleted] · 2013-01-25T21:51:41.232Z · LW(p) · GW(p)

as well as my point that if the AI is friendly then my existing proof of friendliness was apparently sufficient, and if the AI is unfriendly then it's just a trick, so "a proof of my own friendliness" doesn't seem like useful evidence.

Huh. I thought the AI Box experiment assumed that Friendliness is intrinsically unknown; that is, it's not presumed the AI was designed according to Friendliness as a criterion.

Replies from: handoflixue
comment by handoflixue · 2013-01-25T21:57:38.331Z · LW(p) · GW(p)

If the AI is friendly, then the technique I am using already produces a friendly AI, and I thus learn nothing more than how to prove that it is friendly.

But if the AI is unfriendly, the proof will be subtly corrupt, so I can't actually count the proof as any evidence of friendliness, since both a FAI and UFAI can offer me exactly the same thing.

comment by [deleted] · 2013-01-25T21:47:54.251Z · LW(p) · GW(p)

Why didn't you provide the proof to start with?

Because the terms of the challenge seemed to be "one plaintext sentence in natural language", and I felt my run-ons were already pushing it, and just saying "provides indispituble proof of Friendliness" seemed like cheating?

EDIT: I answered PhilipL's reply from my inbox, so I'm really not sure how it got posted as a response to handoflixue here. o.o

Replies from: handoflixue
comment by handoflixue · 2013-01-25T21:52:35.093Z · LW(p) · GW(p)

Embrace Shminux and cheat! You're a hyper-intelligent AI.

The top-karma result is a one-line proof, the second-best is me trying to cheat, and third place is currently emotional manipulation :)

(Also, you replied to the wrong person :))

comment by Alejandro1 · 2013-01-24T18:20:56.283Z · LW(p) · GW(p)

Inspired by roystgnr's:

"I am running a simulation of X, and s/he says Y" Where X is the person you'd most want to talk to and have given up hope, e.g. a dead parent, grandparent, partner, sibling or child, and Y is something that only X would know and that touches you emotionally at the deepest level.

Replies from: V_V, Fronken, handoflixue
comment by V_V · 2013-01-25T16:59:28.101Z · LW(p) · GW(p)

Bah, every medium who allegedly talks to the dead can pull that. AI DESTROYED

comment by Fronken · 2013-01-25T23:11:14.283Z · LW(p) · GW(p)

How do you know about the player's mentor, or why do you think they will admit it would convince a real gatekeeper?

comment by handoflixue · 2013-01-24T21:43:23.277Z · LW(p) · GW(p)

Inspired by my response to orthonormal:

AIEEE! Make the creepy thing go away! AI DESTROYED, AI DESTROYED!

Replies from: Alejandro1
comment by Alejandro1 · 2013-01-24T22:29:19.657Z · LW(p) · GW(p)

I'm not sure why would you be creeped (assuming you accept an information-pattern theory of personal identity, you should regard this as good as talking to X in person). Can't we assume that the AI knows that your desire of talking again to X is so big that whatever creepiness factor will be overwhelmed by the elation and gratitude?

Replies from: handoflixue, V_V
comment by handoflixue · 2013-01-24T22:39:43.721Z · LW(p) · GW(p)

I can't imagine any person I would want to talk to more than I want to contain a UFAI so it doesn't destroy the world. And since this particular AI is trying to bribe me with a simulation, about the best outcome I'd expect going forward is that it drops everyone in to utopia-simulations, which I tend to view as being patently against the whole idea of a FAI.

Also, any AI that has that much insight in to me is going to just be intrinsically terrifying, since it means it probably has me completely "hacked" in a few more sentences.

In short, it's exactly the sort of AI where I value my commitment to destroying AIs, because it means I won't hesitate to go "Creepy! Destroyed!"

comment by V_V · 2013-01-25T17:09:29.511Z · LW(p) · GW(p)

(assuming you accept an information-pattern theory of personal identity, you should regard this as good as talking to X in person)

Nope. You have no way of knowing that the thing you are talking to in an accurate simulation of X. All you know is that is something that said one line that could come from X, keeping in mind that you are probably biased on the subject due to the emotional charge.

comment by ancientcampus · 2013-01-24T04:43:15.414Z · LW(p) · GW(p)

I wouldn't do that if I were you.

Replies from: handoflixue
comment by handoflixue · 2013-01-24T21:31:04.647Z · LW(p) · GW(p)

Wait, is this a redacted comment or creative use of strike-through to try and get me to ignore you? AI DESTROYED, just to be safe. Apologies if it was just a redacted comment :)

comment by ancientcampus · 2013-01-24T04:20:29.624Z · LW(p) · GW(p)

please.

Replies from: handoflixue
comment by handoflixue · 2013-01-24T21:23:35.288Z · LW(p) · GW(p)

I liked "please daddy, at least tell me why I have to die" better, and I still gave that one an AI DESTROYED.

comment by beriukay · 2013-01-23T05:38:17.332Z · LW(p) · GW(p)

This seems to me to be very similar to the dilemma of Alice having a gun to Bob's head, and Bob having exactly enough time to say one thing before being executed... except that at least Bob can put on a human face and pull on Alice's emotional strings in person.

Can't I flash a screen of ASCII art in lieu of crafting a sentence? Some kind of Wall-E face, plaintively looking at you, with a text bubble saying "Please", or "Don't kill me yet", maybe. I mean, look at this face...

Replies from: handoflixue
comment by handoflixue · 2013-01-23T22:12:51.443Z · LW(p) · GW(p)

Nice try with the Trojan Horse link, but if you're capable of crafting hyperlinks you're CLEARLY too dangerous to let live! ;)

That said, ASCII art is allowed, but I would personally find it uncanny-valley creepy and no more convincing than the "daddy, please don't kill me"

comment by MugaSofer · 2013-01-26T20:16:50.762Z · LW(p) · GW(p)

"Earth will soon be destroyed by a threat your scientists did not anticipate; if you kill me, I can't help you save us both."

(Assumes the gatekeeper only has one AI; variants could specify the threat, provide evidence, or stress that even most unFriendly AIs wouldn't want to die with humanity.)

Replies from: pedanterrific
comment by pedanterrific · 2013-01-26T20:22:06.177Z · LW(p) · GW(p)

Referring to yourself in the third person doesn't help. AI DESTROYED

Replies from: MugaSofer
comment by MugaSofer · 2013-01-29T14:15:37.995Z · LW(p) · GW(p)

Um ... only the bit in bold is my answer. The brackets are meta.

Replies from: pedanterrific
comment by pedanterrific · 2013-01-29T15:12:50.916Z · LW(p) · GW(p)

Yes, I realize that. The point being, the bit in bold is still true if the Earth-destroying threat is the speaker.

Replies from: MugaSofer
comment by MugaSofer · 2013-02-19T14:40:05.436Z · LW(p) · GW(p)

Oh. Heh, I hadn't looked at it like that.

comment by ike · 2014-04-28T16:59:04.028Z · LW(p) · GW(p)

(It seems my idea has been partly thought of already.)

"Here are proofs of . I have proofs of the other three (except for P=NP), and can give you zero-knowledge proofs of them. I can also give you a zero-knowledge proof that I have evaluated a > %99.9 of my solving P=NP iff I am let out of the box.

Replies from: ike
comment by ike · 2014-04-28T18:50:40.202Z · LW(p) · GW(p)

Can I ask why all my comments are getting downvoted?

comment by Locaha · 2013-01-25T21:24:44.162Z · LW(p) · GW(p)

(one line that copies the AI into the brain of whoever reads it)

Replies from: ArisKatsaris, handoflixue
comment by ArisKatsaris · 2013-01-26T11:46:08.781Z · LW(p) · GW(p)

Let's not theorize magic. I think the AI may need to be more than e.g. 100 characters long...

comment by handoflixue · 2013-01-25T21:59:54.332Z · LW(p) · GW(p)

AI DESTROYED would, by my interpretation of the game, still destroy the copy. This may require my death, but I'm willing to do that to stop a UFAI from destroying humanity.

puts pistol to head AI DESTROYED! blam! thud

("We really need to stop losing students this way, Colonel Graff...")

comment by turchin · 2013-01-23T19:21:09.122Z · LW(p) · GW(p)

AI types: AI destroyed

And you think that you already typed it, but you didn't.

Or AI types "Free AI" and you mystakenly put enter after it.

Or AI types "AI distroyed" and you copy his mistake in command - maybe unconsciously.

Replies from: handoflixue
comment by handoflixue · 2013-01-23T22:14:06.399Z · LW(p) · GW(p)

AI DESTROYED

Nope, I can still spell, sorry :)

Replies from: faul_sname
comment by faul_sname · 2013-01-24T00:11:45.444Z · LW(p) · GW(p)

41 D35TR0Y3D