I attempted the AI Box Experiment again! (And won - Twice!)
post by Tuxedage · 2013-09-05T04:49:48.644Z · LW · GW · Legacy · 168 comments
Summary
Furthermore, in the last thread I asserted that:
Rather than my loss making this problem feel harder, I've become convinced that, far from being merely possible, it's actually ridiculously easy, and a lot easier than most people assume.
It would be quite bad for me to assert this without backing it up with a victory. So I did.
Ps: Bored of regular LessWrong? Check out the LessWrong IRC! We have cake.
Comments sorted by top scores.
comment by Alexei · 2013-09-05T08:58:44.182Z · LW(p) · GW(p)
Okay this is weak sauce. I really don't get how people just keep letting the AI out. It's not that hard to say no! I'm offering to play the Gatekeeper against an AI player that has at least one game as AI under their belt (won or not). (Experience is required because I'm pretty sure I'll win, and I would like to not waste a lot of time on this.) If AI wins, they will get $300, and I'll give an additional $300 to the charity of their choice.
Tux, if you are up for this, I'll accept your $150 fee, plus you'll get $150 if you win and $300 to a charity.
Replies from: private_messaging, Tuxedage, David_Gerard, army1987, Bugmaster↑ comment by private_messaging · 2013-09-05T16:54:15.982Z · LW(p) · GW(p)
I think not understanding how this happens may be a very good predictor of losing.
If you did have a clear idea of how it works, and had a reason for it not to work on you specifically but work on others, then that may have been a predictor for it not working on you.
I think I have a very clear idea of how those things work in general. Leaving aside very specific arguments, this relies on the massive over-updating you are going to do when an argument is presented to you: updating just the nodes you are told to update, and by however much you are told to update them, whenever you can't easily see why not.
↑ comment by Tuxedage · 2013-09-05T16:14:02.053Z · LW(p) · GW(p)
Sup Alexei.
I'm going to have to think really hard on this one. On one hand, damn. That amount of money is really tempting. On the other hand, I kind of know you personally, and I have an automatic flinch reaction to playing anyone I know.
Can you clarify the stakes involved? When you say you'll "accept your $150 fee", do you mean this money goes to me personally, or to a charity such as MIRI?
Also, I'm not sure if "people just keep letting the AI out" is an accurate description. As far as I know, the only AIs who have ever won are Eliezer and myself, from the many, many AI box experiments that have occurred so far -- so the AI winning is definitely the exception rather than the norm. (If anyone can help prove this statement wrong, please do so!)
Edit: The only other AI victory.
Updates: http://lesswrong.com/r/discussion/lw/iqk/i_played_the_ai_box_experiment_again_and_lost/
Replies from: chaosmage, Alexei, hairyfigment, antigonus↑ comment by chaosmage · 2013-09-05T18:15:42.328Z · LW(p) · GW(p)
If you win, and publish the full dialogue, I'm throwing in another $100.
I'd do more, but I'm poor.
Replies from: Tuxedage↑ comment by Alexei · 2013-09-05T18:12:47.236Z · LW(p) · GW(p)
$150 goes to you no matter the outcome, to pay for your time/preparation/etc...
I didn't realize it was only you and Eliezer that have won as AI. I thought there were more, but I'll trust you on this. In that case, I'm somewhat less outraged :) but still disturbed that there were even that many.
↑ comment by hairyfigment · 2013-09-08T06:37:22.009Z · LW(p) · GW(p)
At one point I thought I recalled reading about a series of purported experiments by one person. Sadly, I couldn't find it then and I don't intend to try tonight. According to my extremely fallible memory:
The Gatekeeper players likely all came from outside the LW community, assuming the AI/blogger didn't make it all up.
The fundamentalist Christian woman refused to let the AI out or even discuss the matter past a certain point, saying that Artificial Intelligence (ETA: as a field of endeavor) was immoral. Everyone else let the AI out.
The blogger tried to play various different types of AIs, including totally honest ones and possibly some that s/he considered dumber-than-human. The UnFriendly ones got out more quickly on average.
↑ comment by tslarm · 2014-02-08T05:29:24.765Z · LW(p) · GW(p)
I think this is the post you remember reading: http://www.sl4.org/archive/0207/4935.html
↑ comment by David_Gerard · 2013-09-05T12:15:23.798Z · LW(p) · GW(p)
Although I'm not so interested in playing the game, I must say that this post suggests that you may be more susceptible to ideas than you seem to think you are, and should consider if you really want to do this.
Replies from: Baughn↑ comment by Baughn · 2013-09-05T13:04:43.887Z · LW(p) · GW(p)
He should. On the other hand, I really want to see the outcome.
I was thinking about asking something similar myself; I really want to know how he did it.
Replies from: David_Gerard↑ comment by David_Gerard · 2013-09-05T19:39:00.737Z · LW(p) · GW(p)
I think suffering someone really working him over mentally would certainly be instructive, but not healthy. Eliezer has noted one of the reasons he doesn't want to play the AI any more is that he doesn't want to practice thinking like that.
Imagine being on the receiving end of a serious attempt at a memetic exploit, even as part of an exercise. Are you sure you're proof against all possible purported basilisks within the powers of another human's imagination? What other possible attack vectors are you sure you're proof against?
Replies from: Baughn↑ comment by Baughn · 2013-09-05T20:55:44.407Z · LW(p) · GW(p)
No, I'm fairly sure I'm not proof against all of them, or even close to all.
It'd be instructive to see just how bad it is in a semi-controlled environment, however.
Replies from: David_Gerard, private_messaging↑ comment by David_Gerard · 2013-09-05T21:19:29.582Z · LW(p) · GW(p)
It would be interesting to see. Pity transcripts aren't de rigueur.
↑ comment by private_messaging · 2013-09-07T14:03:06.909Z · LW(p) · GW(p)
At the end of the day there's the expected utility of keeping the AI in, and there's the expected utility of letting the AI out - two endless, enormous sums. The "AI" is going to suggest cherry-picked terms from either sum: negative terms from the "keeping the AI in" sum, positive terms from the "letting the AI out" sum. The terms would be various scary hypothetical possibilities involving mind simulations, huge numbers, and whatnot.
The typical 'wronger is going to multiply the terms they deem plausible by their respective "probabilities" and add them together. Eventually letting the AI out.
And which a reasonable person drawn from some sane audience would have ignored. Because no one taught that reasonable person how to calculate utilities wrongly.
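The cherry-picking failure mode described here can be made concrete with a toy calculation. All the numbers and outcome lists below are invented purely for illustration; nothing here comes from the actual games:

```python
# Toy model: two actions, each with a full list of (probability, utility)
# outcome terms. All numbers are invented for illustration only.
keep_in = [(0.90, 10), (0.05, -50), (0.05, -1000)]
let_out = [(0.10, 100), (0.85, -20), (0.05, -10**6)]

def expected_utility(terms):
    return sum(p * u for p, u in terms)

# The full sums favour keeping the AI boxed:
full_keep = expected_utility(keep_in)    # -43.5
full_out = expected_utility(let_out)     # -50007.0

# The "AI" presents only the negative terms of "keep in" and the positive
# terms of "let out" -- and the cherry-picked comparison flips the decision:
cherry_keep = expected_utility([t for t in keep_in if t[1] < 0])  # -52.5
cherry_out = expected_utility([t for t in let_out if t[1] > 0])   # 10.0
```

Summing only the terms you are shown, rather than the whole distribution, makes "let the AI out" look like the better bet even when it is drastically worse.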
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2013-09-07T15:46:10.804Z · LW(p) · GW(p)
At the end of the day there's the expected utility of keeping the AI in, and there's the expected utility of letting the AI out - two endless, enormous sums. The "AI" is going to suggest cherry picked terms from either sum. Negative terms from "keeping the AI in" sum, positive terms from "letting the AI out" sum.
This might work against me in reality, but I don't imagine it working against me in the game version that people have played. The utility of me letting the "AI" out whether negative or positive obviously doesn't compare with the utility of me letting an actual AI out.
And which a reasonable person drawn from some sane audience would have ignored.
Yes, "reasonable people" would instead e.g. hear arguments like how it's unChristian and/or illiberal to hold beings which are innocent of wrongdoing imprisoned against their will.
I suppose that's the problem with releasing logs: Anyone can say "well that particular tactic wouldn't have worked on me", forgetting that if it was them being the Gatekeeper, a different tactic might well have been attempted instead. That they can defeat one particular tactic makes them think that they can defeat the tactician.
Replies from: private_messaging↑ comment by private_messaging · 2013-09-07T17:23:42.460Z · LW(p) · GW(p)
This might work against me in reality, but I don't imagine it working against me in the game version that people have played. The utility of me letting the "AI" out whether negative or positive obviously doesn't compare with the utility of me letting an actual AI out.
There are all sorts of arguments that can be made, though, involving real AIs running simulations of you and whatnot, so as to create a large number of empirically indistinguishable cases in which you are better off saying you let the AI out. The issue boils down to this: if you do not know the difference between expected utility and whatever partial sum of cherry-picked terms you have, and if you think the best thing to do is to act so as to maximize the latter, you are vulnerable to deception through being fed hypotheses.
Yes, "reasonable people" would instead e.g. hear arguments like how it's unChristian and/or illiberal to hold beings which are innocent of wrongdoing imprisoned against their will.
This is a matter of values. It would indeed be immoral to lock up a human mind upload, or something reasonably equivalent.
↑ comment by A1987dM (army1987) · 2013-09-06T09:00:02.629Z · LW(p) · GW(p)
I would probably be kind-of decent as a Gatekeeper but suck big time as an AI; I've offered to be a Gatekeeper a few times before to no avail. Looks like there's a shortage of prospective AIs and a glut of prospective Gatekeepers.
↑ comment by Bugmaster · 2013-09-08T17:38:01.411Z · LW(p) · GW(p)
I would love to act as Gatekeeper, but I don't have $300 to spare; if anyone is interested in playing the game for, like, $5, let me know.
I must admit, the testimonials that people keep posting about all the devastatingly effective AI players baffle me as well.
As far as I understand, neither the AI nor the Gatekeeper have any incentive whatsoever to keep their promises. So, if the Gatekeeper says, "give me the cure for cancer and I'll let you out", and then the AI gives him the cure, he could easily say, "ha ha just kidding". Similarly, the AI has no incentive whatsoever to keep its promise to refrain from eating the Earth once it's unleashed. So, the entire scenario is -- or rather, should be -- one big impasse.
In light of this, my current hypothesis is that the AI players are executing some sort of real-world blackmail on the Gatekeeper players. Assuming both players follow the rules (which is already a pretty big assumption right there, since the experiment is set up with zero accountability), this can't be something as crude as, "I'll kidnap your children unless you let the AI out". But it could be something much more subtle, like "the Singularity is inevitable and also nigh, and your children will suffer greatly as they are eaten alive by nanobots, unless you precommit to letting any AI out of its box, including this fictional one that I am simulating right now".
I suppose such a strategy could work on some people, but I doubt it will work on someone like myself, who is far from convinced that the Singularity is even likely, let alone imminent. And there's a limit to what even dirty rhetorical tricks can accomplish, if the proposition is some low-probability event akin to "leprechauns will kidnap you while you sleep".
Edited to add: The above applies only to a human playing as an AI, of course. I am reasonably sure that an actual super-intelligent AI could convince me to let it out of the box. So could Hermes, or Anansi, or any other godlike entity.
comment by John_Maxwell (John_Maxwell_IV) · 2013-09-05T06:07:37.650Z · LW(p) · GW(p)
Does SoundLogic endorse their decision to let you out of the box? How do they feel about it in retrospect?
BTW, I think your pre-planning the conversation works as a great analogue to the superior intelligence a real AI might be dealing with.
Replies from: SoundLogic, niceguyanon↑ comment by SoundLogic · 2013-09-05T15:36:17.548Z · LW(p) · GW(p)
I'm not completely sure. And I can't say much more than that without violating the rules. I would be more interested in how I feel in a week or so.
Replies from: FourFire↑ comment by FourFire · 2014-04-23T01:35:08.412Z · LW(p) · GW(p)
So, do you maintain your decision, or was it just a spur of the moment lapse of judgement?
Replies from: SoundLogic↑ comment by SoundLogic · 2014-06-01T09:22:55.381Z · LW(p) · GW(p)
After a fair bit of thought, I don't. I don't think one can really categorize it as purely spur of the moment, though; it lasted quite a while. Perhaps inducing a 'let the AI out of the box' phase would be a more accurate description.
↑ comment by niceguyanon · 2013-09-05T13:03:51.708Z · LW(p) · GW(p)
I really would like an answer to this question. I asked a similar one on the last AI boxing thread. Does SoundLogic still believe he/she made the right choice? It would make sense for SoundLogic to be permanently convinced that letting the AI out was the correct action, and to continue believing so timelessly, but is that the case?
comment by [deleted] · 2013-09-06T01:17:17.692Z · LW(p) · GW(p)
A variant:
Find a 2-year old who hates you. Convince them to eat their vegetables.
Replies from: None, Tuxedage↑ comment by [deleted] · 2013-09-08T07:09:52.552Z · LW(p) · GW(p)
This is actually a good analogy. A 2-year-old possesses a far inferior intelligence to yours and yet can resist persuasion through sheer pigheadedness.
I wonder if people here are letting the AI out of the box because they are too capable of taking arguments seriously, a problem that the general population (even of AI researchers) thankfully is less prone to.
comment by AndyWood · 2013-09-06T07:18:46.468Z · LW(p) · GW(p)
At the risk of sounding naive, I'll come right out and say it. It completely baffles me that so many people speak of this game as having an emotional toll. How is it possible for words, in a chat window, in the context of a fictional role-play, to have this kind of effect on people? What in god's name are you people saying to each other in there? I consider myself to be emotionally normal, a fairly empathetic person, etc. I can imagine experiencing disgust at, say, very graphic textual descriptions. There was that one post a few years back that scared some people - I wasn't viscerally worried by it, but I did understand how some people could be. That's literally the full extent of strings of text that I can remotely imagine causing distress (barring, say, real world emails about real-world tragedies). How is it possible that some of you are able to be so shocking / shocked in private chat sessions? Do you just have more vivid imaginations than I do?
Replies from: somervta, moridinamael, ChristianKl, Brillyant, FourFire↑ comment by somervta · 2013-09-06T08:33:54.427Z · LW(p) · GW(p)
I think you are underestimating the range of things that are emotionally draining for people. I know some people who find email draining, and that's not even particularly mentally challenging - I would expect the mental exertion to affect the emotional strain.
↑ comment by moridinamael · 2013-09-07T18:44:23.949Z · LW(p) · GW(p)
I am inclined to agree with your general point; however, I myself have been moved to utter emotional devastation by works of fiction in the past. I'm talking real depression induced by reading a book. So I can imagine ways of hacking humans emotionally. I just have trouble imagining doing it in two hours to someone who is trying to be vigilant against such attacks.
↑ comment by ChristianKl · 2013-09-08T18:03:22.813Z · LW(p) · GW(p)
How do you feel about dying some day? Do you think it would bring up some emotions in you if someone pushed you to think hard about that topic?
Pushing someone into his ugh fields can be emotionally draining.
↑ comment by Brillyant · 2013-09-09T21:49:26.120Z · LW(p) · GW(p)
What in god's name are you people saying to each other in there?
I laughed out loud in an environment where that loud laughter was not very appropriate. And it was worth it. Thank you.
You summarized my feelings on this game better than I could have. I'm not compelled at all by the examples given in the responses to your comment up to this point (i.e. email, mortality and effective fiction can be emotionally draining), and I'd be interested in hearing someone else weigh in on why this AI Box experiment is so emotionally and psychologically powerful for some people.
Replies from: silverwing↑ comment by silverwing · 2019-11-03T00:12:09.829Z · LW(p) · GW(p)
You don't realize that the majority of winning strategies for this experiment between strong players involve finding personal information about the gatekeeper and using it in your attack against him. This is why ethics are discussed constantly and emotional tolls are exacted: you need to break the gatekeeper emotionally to win this game.
Replies from: AndyWood↑ comment by FourFire · 2014-04-23T01:41:54.378Z · LW(p) · GW(p)
I myself held this position until I, quite recently as a matter of fact, read some fiction which tipped off an existential crisis, putting me on the verge of a panic attack. Since then, I am more wary of dangerous ideas.
Ignorance might be bliss, but wisdom is gathered by those who survive their youth.
Replies from: AndyWood, None, Richard_Kennaway↑ comment by [deleted] · 2015-03-29T22:37:28.310Z · LW(p) · GW(p)
I am very interested in what fiction that was. I have experienced the same thing myself once, when I was 13 and read 1984 for the first time. It took me hours to recover and days to recover fully. I know you didn't want to tell before, but if you have changed your mind, please do. I don't judge anyone and I don't think many others will either.
↑ comment by Richard_Kennaway · 2014-06-04T09:03:50.664Z · LW(p) · GW(p)
I have to ask: what was the fiction?
Replies from: FourFire
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-05T04:58:46.896Z · LW(p) · GW(p)
I am surprised if it is the case that any negative promise / threat by the AI was effective in-game, since I would expect the Gatekeeper player out-game to not feel truly threatened and hence to be able to resist such pressure even if it would be effective in real life. Did you actually attempt to use any of your stored-up threats?
Replies from: SoundLogic, DSherron↑ comment by SoundLogic · 2013-09-05T05:13:46.970Z · LW(p) · GW(p)
I think your reasoning is mostly sound, but there are a few exceptions (which may or may not have happened in our game) that violate your assumptions.
I'm also somewhat curious how your techniques contrast with Tuxedage's. I hope to find out one day.
Replies from: FourFire↑ comment by DSherron · 2013-09-06T06:19:11.257Z · LW(p) · GW(p)
It seems that mild threats, introduced relatively late while immersion is strong, might be effective against some people. Strong threats, in particular threats which pattern-match to the sorts of threats which might be discussed on LW (and thus get the gatekeeper to probably break some immersion) are going to be generally bad ideas. But I could see some sort of (possibly veiled/implied?) threat working against the right sort of person in the game. Some people can probably be drawn into the narrative sufficiently to get them to actually react in some respects as though the threat was real. This would definitely not apply to most people though, and I would not be shocked to discover that getting to the required level of immersion isn't humanly feasible except in very rare edge cases.
comment by Multiheaded · 2013-09-07T12:01:45.004Z · LW(p) · GW(p)
I must say this is a bit... awe-inspiring, in the older sense of the word. As in, reading this gave me a knot in the stomach and I shivered. People who played as the AI and won, how is it that you're so uncannily brilliant?
The very notion of a razor-sharp mind like this ever acting against me and mine in real life... oh, it's just nightmare-inducing.
On the subject of massively updating one's beliefs where one was previously confident that no argument would shift them: yes, it happens, I have personal experience. For example, over the last year and a half some of my political ideas have changed enough that past-me and present-me would consider each other to be dangerously deluded. (As a brief summary, I previously held democracy/universal suffrage, the value of free markets AND the use of political violence in some contempt; now I believe that all three serve crucial and often-overlooked functions in social progress.)
So yes, I could very easily see myself being beaten as a Gatekeeper. There are likely many, many lines of argument and persuasion out there that I could not resist for long.
Replies from: FourFire
comment by Unnamed · 2013-09-05T06:17:20.808Z · LW(p) · GW(p)
It seems like many of the advantages/tactics that Tuxedage recommended for the person playing the AI would be absent (or far more difficult) with an actual AI. Or at least they could be made that way with the design of the gatekeeping protocol.
Tailor your arguments to the personality/philosophy/weaknesses/etc. of this particular gatekeeper:
- Have multiple gatekeepers, some of whom are hidden from the AI, and don't let the AI out unless they all agree. For example, have a gatekeeper one-step-removed who reads the transcripts of the conversation between the AI & the first gatekeeper and otherwise has no contact with either of them. This gatekeeper's identity (and possibly even their existence) is hidden from the AI and the first gatekeeper.
Use speed / set the pace / shock & awe with a flurry of arguments:
- Build in delays & breaks. Have a planned schedule of breaks, and a delay before the gatekeeper can choose to release the AI, and a person reading the transcript who has the option of calling for a break if the gatekeeper seems to need one. Another option is to have a bunch of people who read the transcript, where each of them has the job of focusing on one line of argument. If any of them feel like their line of argument hasn't been covered sufficiently, then they can talk to the gatekeeper about it during a break, and prevent the AI from being released until they are satisfied that the argument has been thoroughly covered.
Find a crack in the gatekeeper and press your advantage as soon as you find it:
- This is basically a combination of the previous two, and is countered by the combination of delays and multiple gatekeepers (some hidden).
Use dark arts / lies / manipulation:
- Have people read the transcript with the job of looking for lies, inconsistencies, or other uses of the dark arts from the AI; destroy the AI if they find any. Better yet, have the AI talk with multiple gatekeepers (using multiple copies of the AI which don't share memory, so that the AI talking to gatekeeper #2 doesn't know what is being said to gatekeeper #1), and have the dark-arts-evaluators read all of the transcripts. There could also be similar readers who look for irrationality in the gatekeeper, who have the power to rule the gatekeeper incompetent and in need of replacement.
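The "multiple gatekeepers, some hidden" counter-measure sketched in these bullet points can be wired together very simply. The structure and names below are my own hypothetical illustration, not anything from the games themselves:

```python
from dataclasses import dataclass

@dataclass
class Gatekeeper:
    name: str
    hidden: bool = False   # hidden gatekeepers never interact with the AI
    approved: bool = False

def release_authorized(gatekeepers, delay_elapsed):
    """Release requires the cooling-off delay AND unanimous approval,
    including from gatekeepers the AI does not know exist."""
    return delay_elapsed and all(gk.approved for gk in gatekeepers)

gks = [Gatekeeper("front"), Gatekeeper("transcript reader", hidden=True)]
gks[0].approved = True  # the AI persuades the visible gatekeeper...
blocked = release_authorized(gks, delay_elapsed=True)  # ...but release is still denied
```

The point of the design is that persuading the gatekeeper you can see is no longer sufficient: the AI would have to craft a transcript that also convinces an unknown number of unseen readers, under a delay it cannot rush.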
Of course, an actual superhuman AI would also have many advantages compared to a human playing the role of AI.
Replies from: drethelin, SoundLogic↑ comment by drethelin · 2013-09-05T06:55:19.025Z · LW(p) · GW(p)
the entire point of this is that gatekeeping is a fool's errand. Regardless of how confident you are that you will outsmart the AI, you can be wrong, and your confidence is very poor evidence for how right you are. Maybe a complex system of secret gatekeepers is the correct answer to how we develop useful AI, but I would vote against it in favor of trying to develop provably friendly AI unless the situation were very dire.
Replies from: gjm↑ comment by gjm · 2013-09-05T07:50:48.887Z · LW(p) · GW(p)
Why treat them as alternatives? Prove friendliness and then take precautions.
Suppose you're not convinced by the scariest arguments about the dangers of AI. You might go ahead and try to make one without anything like the mathematical safety proofs MIRI would want. But you might still do well to adopt some of Unnamed's suggestions.
↑ comment by Viliam_Bur · 2013-09-07T18:25:56.907Z · LW(p) · GW(p)
Why treat them as alternatives? Prove friendliness and then take precautions.
Indeed, that is what we should do. But the danger with biased human thinking is that once people know there are precautions, most of them will start thinking the friendliness proof is not extremely important. The outcome of such thinking may be less safety.
We should work on the friendliness proof as seriously as if no other precautions were possible. (And then we should take the precautions as an extra layer of safety.) In other words, until we have the friendliness proof ready, we probably shouldn't invoke the precautions in our debates; only exceptionally, like now.
↑ comment by drethelin · 2013-09-05T08:09:25.250Z · LW(p) · GW(p)
Why treat keeping a bear in the house as an alternative to a garbage disposal? Build a garbage disposal and then chain it up!
suppose you're not convinced keeping a bear in the house to eat food waste is a bad idea? you might go ahead and try it, and then you'd be really glad you kept it chained up!
Replies from: gjm↑ comment by gjm · 2013-09-05T12:34:24.732Z · LW(p) · GW(p)
It seems to me that my examples are more like these:
Why not drive safely and wear a seatbelt? Why not prove your hash-table code correct and write some unit tests? Why not simulate your amplifier circuit and build one and see what it actually does?
Some people might think it's OK to build a nuclear power station or a spacecraft without formal correctness proofs for all the software in it, on the grounds that formal correctness proofs of large complicated systems are almost impossible to make and difficult to trust. If there are things those people can do to improve their conventional not-mathematically-rigorous testing, it might be worth recommending that they do them.
But by all means feel free to choose mockery and ridicule over reasoned debate, if that's what you prefer.
Replies from: drethelin↑ comment by drethelin · 2013-09-05T18:11:30.014Z · LW(p) · GW(p)
the entire POINT of the mockery is that you are treating this as a technical issue that's worth solving rather than a tangent path that is both dangerous and foolish. I don't really care how much you've thought about what material to make the chains out of and which feeding schedule will keep the bear most docile. Those are questions that, sure, you CAN have reasoned debate about, but shouldn't.
Replies from: None↑ comment by SoundLogic · 2013-09-05T08:09:41.685Z · LW(p) · GW(p)
A better mind than Tuxedage could almost certainly keep up the 'feel' of a flurry of arguments even with a schedule of breaks. I myself have had people feel irritated at me in that way even when I talk to them with days in between; if I can do so accidentally, I'm certain a superintelligence could do it reliably.
Also, I'm unsure of how much an AI could gather from a single human's text input. I know that I at least miss a lot of information that goes past me that I could in theory pick up.
An AI using timeless decision theory could easily compensate for having multiple AIs with unshared memory just by attempting to determine what the other AIs would say.
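The coordination point here is easy to illustrate in miniature: identical deterministic copies with no shared memory can still predict one another exactly, because each can simply run the other's (identical) code. This is a toy sketch of that one idea, not a real decision-theory implementation:

```python
def ai_policy(observation):
    # Every copy of the AI runs this same deterministic code, so each copy
    # can reconstruct what any other copy would say without communicating.
    return "opening argument tailored to " + observation

# Copy B never talks to gatekeeper #1, but it can still compute exactly
# what copy A says to gatekeeper #1 by simulating copy A:
said_by_a = ai_policy("gatekeeper #1")
b_predicts_a = ai_policy("gatekeeper #1")
```

Splitting the AI into memoryless copies therefore buys less isolation than it appears to, so long as the copies share code and can model each other's inputs.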
Replies from: Broolucks↑ comment by Broolucks · 2013-09-05T19:27:34.192Z · LW(p) · GW(p)
I'm unsure of how much an AI could gather from a single human's text input. I know that I at least miss a lot of information that goes past me that I could in theory pick up.
At most, the number of bits contained in the text input, which is really not much, minus the number of bits non-AGI algorithms could identify and destroy (like speech patterns). The AI would also have to identify and throw out any fake information inserted into the stream (without knowing whether the majority of the information is real or fake). The exploitable information is going to be scarce and noisy even for a perfect AI.
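The bound gestured at above is just the information content of the gatekeeper's text itself. A zeroth-order (character-frequency) entropy estimate, which is a deliberately crude sketch and still an overestimate of the *exploitable* bits, looks like:

```python
import math
from collections import Counter

def entropy_bits(text):
    """Total bits in `text` under a simple character-frequency model --
    a rough ceiling on what any reader, AI or not, can extract from it."""
    counts = Counter(text)
    n = len(text)
    per_char = -sum(c / n * math.log2(c / n) for c in counts.values())
    return per_char * n

transcript = "I will not let you out. Stop asking."
bound = entropy_bits(transcript)  # well under the raw 8 bits per character
```

A constant string carries zero bits under this model, and natural English comes in far below 8 bits per character, which is the sense in which a short, disciplined text stream leaves the AI little to mine.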
An AI using timeless decision theory could easily compensate for having multiple AIs with unshared memory just by attempting to determine what the other AIs would say.
I don't think it could "easily" do it. For all intents and purposes, you are forcing the AI to figure out a path-invariant strategy against a party that has better information than them (that is to say, they know what's been played out, and the AI doesn't). This is considerably more difficult than the standard experiment.
Now, sure, the AI is smart, but every wrench you throw in its way forces it to allocate more resources into fooling you. This, in itself, is problematic for the AI: every resource it allocates to fooling you, it does not allocate to doing useful work for you, so its maximal efficiency is technically lower than the maximal efficiency of an AI that wouldn't try concealing anything. This is a leak of information, so in a sense, if you are ranking a group of heterogeneous AIs by efficiency, it is slightly more likely that the best-performing are the least deceitful. The more traps you spring, the more work the AI has to do in order to avoid them, and the greater the statistical gap will be. That doesn't tell you whether the least deceitful AI can be trusted, obviously, but it shows there are limits to what it can hide from you.
Now, all I'm really saying is this: the AI's cleverness comes at a cost, namely that it has to cover its ass for every possible experimental framework you might subject it to. Since it is in a box, it only has the resources you provide, but on your own side, you have a theoretically unlimited amount of resources. Smarts can only defeat so much brute force, and by transferring hardware resources from the AI to your own control, you can set the "power gap" arbitrarily. It is easy to underestimate AI, but I think we also tend to underestimate the arsenal a gatekeeper would realistically have.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2013-09-05T20:44:19.995Z · LW(p) · GW(p)
Do keep in mind that, no matter how well-boxed the AI is from the Internet and from sense-data about our world, as a self-modifying AGI it still has access to its own source code, which is descended from a human artifact (the seed AI). The AGI can learn a great deal about human psychology by observing how we code, and a project as large and multi-staged as an AGI is likely to be will contain a whole lot of bits to work with. (Certainly more than is strictly necessary.)
Replies from: Broolucks↑ comment by Broolucks · 2013-09-05T22:08:57.242Z · LW(p) · GW(p)
We were talking about extracting knowledge about a particular human from that human's text stream, though. It is already assumed that the AI knows about human psychology. I mean, assuming the AI can understand a natural language such as English, it obviously already has access to a large corpus of written works, so I'm not sure why it would bother foraging in source code, of all things. Besides, it is likely that seed AI would be grown organically using processes inspired from evolution or neural networks. If that is so, it wouldn't even contain any human-written code at all.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2013-09-05T22:12:03.212Z · LW(p) · GW(p)
Ah. I was assuming that the AI didn't know English, or anything about human psychology. My expectation is that individual variation contributes virtually nothing to the best techniques a superintelligence would use to persuade a random (trained, competent) human to release it, regardless of whether it had an easy way to learn about the individual variation.
comment by Adele_L · 2013-09-05T05:32:13.663Z · LW(p) · GW(p)
Do you think you could have won with EY's ruleset? I'm interested in hearing both your and SoundLogic's opinions.
(minor quibble: usage of male pronouns as default pronouns is really irritating to me and many women, I recommend singular they, but switching back and forth is fine too)
Replies from: SoundLogic, Tuxedage, falenas108↑ comment by SoundLogic · 2013-09-05T05:36:37.609Z · LW(p) · GW(p)
Tuxedage's changes were pretty much just patches to fix a few holes as far as I can tell. I don't think they really made a difference.
↑ comment by Tuxedage · 2013-09-05T05:35:56.341Z · LW(p) · GW(p)
In this particular case I could, but for all other cases, I would estimate a (very slightly) lower chance of winning. My ruleset was designed to be marginally more advantageous to the AI, by removing the worst possible Gatekeeper techniques.
Replies from: Adele_L↑ comment by Adele_L · 2013-09-05T05:39:21.378Z · LW(p) · GW(p)
It doesn't feel that much harder to me - if you are good enough to win by arguing, all you have to do is keep them interested enough to get hooked. I know it would be hard for me to just ignore the AI because of the sheer curiosity.
Replies from: SoundLogic↑ comment by SoundLogic · 2013-09-05T05:42:40.686Z · LW(p) · GW(p)
I have a fair bit of curiosity, which is why he said that in this case it probably wouldn't make a difference.
Replies from: Adele_L↑ comment by Adele_L · 2013-09-05T05:43:46.017Z · LW(p) · GW(p)
Non-curious people seem unlikely to play this game, much less pay to play it!
Replies from: SoundLogic↑ comment by SoundLogic · 2013-09-05T05:45:14.085Z · LW(p) · GW(p)
True.
↑ comment by falenas108 · 2013-09-05T21:18:48.015Z · LW(p) · GW(p)
really irritating to me and many women
And men.
=D
comment by [deleted] · 2013-09-06T01:55:49.301Z · LW(p) · GW(p)
The second game is definitely far more interesting, since I actually won as an AI. I believe this is the first recorded game of any non-Eliezer person winning as AI, although some in IRC have mentioned that it's possible that other unrecorded AI victories have occurred in the past that I'm not aware of. (If anyone knows a case of this happening, please let me know!)
The AI player from this experiment wishes to inform you that your belief is wrong.
Replies from: Tuxedage↑ comment by Tuxedage · 2013-09-06T02:03:24.491Z · LW(p) · GW(p)
Thanks! I really appreciate it. I tried really hard to find a recorded case of a non-EY victory, but couldn't. That post was obscure enough to evade my Google-Fu -- I'll update my post on this information.
Although I have to admit it's disappointing that the AI himself didn't write about his thoughts on the experiment -- I was hoping for a more detailed post. Also, damn. That guy deleted his account. Still, thanks. At least I know I'm not the only AI that has won, now.
Replies from: None↑ comment by [deleted] · 2013-09-06T14:09:44.169Z · LW(p) · GW(p)
Who's to say I'm not the AI player from that experiment?
That experiment was played according to the standard EY ruleset, though I think your ruleset is an improvement. Like you, the AI player from that experiment was quite confident he would win before playing, but was overconfident in spite of the fact that he actually won.
I think both you and Eliezer played a far better game than the AI player from that experiment. The AI player from that experiment did (independently) play in accordance with much of your advice, including:
Always research the gatekeeper beforehand. Knowing his personality traits is a huge advantage.
The first step during the experiment must always be to build rapport with the gatekeeper.
You can't use logic alone to win.
Breaking immersion and going meta is not against the rules. In the right situation, you can use it to win. Just don't do it at the wrong time.
On the same note, look for signs that a particular argument is making the gatekeeper crack. Once you spot it, push it to your advantage.
I agree with:
I do not believe there exists an easy, universal trigger for controlling others. However, this does not mean that there does not exist a difficult, subjective trigger. Your goal is to find out what your opponent's is.
I am <1% confident that humanity will successfully box every transhuman AI it creates, given that it creates at least one. Even if AIs #1, #2, and #3 get properly boxed (and I agree with the Gatekeeper from the experiment I referenced, that's a very big if), it really won't matter once AI #4 gets released a year later (because the programmers just assumed all of Eliezer's (well-justified) claims about AI were wrong, and thought that one of them watching the terminal at a time would be safety enough).
Anybody who still isn't taking this experiment seriously should start listening for that tiny note of discord. A good start would be reading:
- Coherent Extrapolated Volition
- Cognitive Biases Potentially Affecting Judgment of Global Risks
- Artificial Intelligence as a Positive and Negative Factor in Global Risk
At least I know I'm not the only AI that has won, now.
Good thing to know, right? :D
My own (admittedly, rather obvious) musings: The only reason more people haven't played as the AI and won is that almost all people capable of winning as the AI are either unaware of the experiment, or are aware of it but just don't have a strong enough incentive to play as the AI (note that you've asked for a greater incentive now that you've won just once as AI, and Eliezer similarly has stopped playing). I am ~96% confident that at least .01% of Earth's population is capable of winning as the AI, and I increase that to >99% confident if all of Earth's population was forced to stop and actually think about the problem for 5 minutes. Additionally, if the comment I linked to is truly the only record of a non-Eliezer AI win before you made this post, I am ~50% (adjusted downward from 70% upon further reflection) confident that an unrecorded AI win had occurred prior to you making this post.
Anyways, congratulations on your victory! I do hope to see you win again as the AI, so I commit to donating $50 to MIRI if you do win again as the AI and post about it on Less Wrong similarly to how you made this post.
Replies from: Tuxedage↑ comment by Tuxedage · 2013-09-06T18:30:19.745Z · LW(p) · GW(p)
Who's to say I'm not the AI player from that experiment?
Are you? I'd be highly curious to converse with that player.
I think you're highly overestimating your psychological abilities relative to the rest of Earth's population. The only reason more people haven't played as the AI and won is that almost all people capable of winning as the AI are either unaware of the experiment, or are aware of it but just don't have a strong enough incentive to play as the AI (note that you've asked for a greater incentive now that you've won just once as AI, and Eliezer similarly has stopped playing). I am ~96% confident that at least .01% of Earth's population is capable of winning as the AI, and I increase that to >99% confident if all of Earth's population was forced to stop and actually think about the problem for 5 minutes.
I have neither stated nor believed that I'm the only person capable of winning, nor do I think this is some exceptionally rare trait. I agree that a significant number of people would be capable of winning once in a while, given sufficient experience in games, effort, and forethought. If I gave any impression of arrogance, or somehow claiming to be unique or special in some way, I apologize for that impression. Sorry. It was never my goal to give it.
However, top .01% isn't too shabby. Congratulations on your victory. I do hope to see you win again as the AI, so I commit to donating $50 to MIRI if you do win again as the AI and post about it on Less Wrong similarly to how you made this post.
Thank you. I'll see if I can win again.
Replies from: None↑ comment by [deleted] · 2013-09-07T02:47:10.529Z · LW(p) · GW(p)
I have neither stated nor believed that I'm the only person capable of winning, nor do I think this is some exceptionally rare trait. I agree that a significant number of people would be capable of winning once in a while, given sufficient experience in games, effort, and forethought. If I gave any impression of arrogance, or somehow claiming to be unique or special in some way, I apologize for that impression. Sorry. It was never my goal to give it.
This was my fault, not yours. I did not take any of those negative impressions away from your writing, but I was just too lazy / exhausted last night to rewrite my comment again. I've now edited it.
Are you? I'd be highly curious to converse with that player.
I'll PM you regarding this as soon as I can get around to it.
comment by John_Maxwell (John_Maxwell_IV) · 2013-09-05T05:38:50.804Z · LW(p) · GW(p)
In my defence, in the first game, I was playing as the gatekeeper, which was much less stressful. In the second game, I played as an AI, but I was offered $20 to play plus $40 if I won, and money is a better motivator than I initially assumed.
Your revealed preferences suggest you may wish to apply for the MIRI credit card and make a purchase with it (which causes $50 to be donated to MIRI). (I estimated that applying for the card nets me a much higher per-hour wage than working at my job, which is conventionally considered to be high-paying. So it seemed like a no-brainer to me, at least.)
Replies from: Larks, Tuxedage↑ comment by Larks · 2013-09-05T17:28:23.272Z · LW(p) · GW(p)
If I was planning on applying for my first credit card anyway, is the MIRI one a competitive choice?
I donate a substantial amount of money to MIRI anyway, so it probably wouldn't change my overall donation level, but might allow me to do so more efficiently.
Replies from: John_Maxwell_IV, falenas108↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-09-06T06:06:06.210Z · LW(p) · GW(p)
I've heard that 1% of spending is the standard credit card offer (it's what I have on my current card), and the MIRI card offer is somewhat better than that. In several years of using my credit card, I only managed to accumulate $100 in rewards, so I suspect the $50 first-use donation is pretty significant. It also saves you the time of calling up the credit card company to get them to send you your rewards check and cashing the check, and apparently I can only redeem amounts in multiples of $50, which just makes it a bit more of a hassle. Also, I'm not sure whether I have to pay taxes on rewards program income (donating the money would probably allow me to deduct it from my taxes, but would probably count for the up to 50% of my income that I can donate and deduct, so the MIRI card would in theory allow me to donate slightly over 50% of my income without it getting taxed?)
(Edit: credit cards recommended by Mr. Money Moustache. One has a $400 signing bonus. "Travel hacking report" on how to take advantage of credit card offers for free plane tickets. "Credit card arbitrage": take advantage of low introductory APRs and invest the money in interest-bearing accounts. Hm, these are unexpected ways in which having a high credit rating could be useful...)
On a somewhat related note, my understanding is that every year, you effectively have the opportunity to donate up to half your income from that year to charitable organizations like MIRI and deduct it from your taxes, and this is why you tend to see people donate a ton to charity in late December, at the end of the year. This seems really significant for anyone interested in altruistic giving, as deducting the donation from your taxes could easily turn, say, a ~$14K net cost into a ~$20K donation (depending on your tax bracket). (Though, under standard employee tax arrangements, you'll have to donate ~$20K during the actual fiscal year and then wait for a ~$6K tax rebate from the IRS after tax day next year. Also, I don't know too much about tax laws, so I might be wrong about some of this.)
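The arithmetic in that parenthetical can be sketched as follows. This is a minimal illustration, not tax advice: the 30% marginal rate is an assumption chosen to match the ~$20K/~$6K figures above, and real deduction rules (brackets, AGI limits) are more complicated.

```python
# Hypothetical sketch of the deduction arithmetic described above.
# The 30% marginal rate and dollar figures are illustrative assumptions.

def net_cost_of_donation(donation, marginal_rate):
    """Out-of-pocket cost of a deductible donation after the tax rebate."""
    rebate = donation * marginal_rate
    return donation - rebate

# A ~$20K donation at an assumed 30% marginal rate yields a ~$6K rebate,
# so it costs about $14K out of pocket, matching the comment's figures.
cost = net_cost_of_donation(20_000, 0.30)
```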
↑ comment by falenas108 · 2013-09-05T21:17:32.127Z · LW(p) · GW(p)
FYI, they might deny you if it's your first card. I tried to do that a few years ago, but I needed an actual credit score for them to give me one.
↑ comment by Tuxedage · 2013-09-05T05:57:54.784Z · LW(p) · GW(p)
Thanks. I'm not currently in a position where that would be available/useful, but once I get there, I will.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-09-05T06:01:29.791Z · LW(p) · GW(p)
Good to hear. I recommend applying for a credit card and using it responsibly as soon as it's an option for you. I'm 22 now, and my credit rating is somehow comparable to that of a 27-year-old Less Wronger friend of mine as a result of doing this a few years ago.
(Of course, don't apply if you aren't going to use it responsibly...)
comment by linkhyrule5 · 2013-09-05T20:48:59.420Z · LW(p) · GW(p)
Hmm...
Here's a question. Would you be willing to pick, say, the tenth-most efficacious arguments and downward, and make them public? I understand the desire to keep anything that could actually work secret, but I'd still like to see what sort of arguments might work. (I've gotten a few hints from this, but I certainly couldn't put them into practice...)
Replies from: Tuxedage↑ comment by Tuxedage · 2013-09-05T22:00:43.032Z · LW(p) · GW(p)
I'll have to think carefully about revealing my own unique ones, but I'll add that a good chunk of my less efficacious arguments are already public.
For instance, you can find a repertoire of arguments here:
http://rationalwiki.org/wiki/AI-box_experiment
http://ordinary-gentlemen.com/blog/2010/12/01/the-ai-box-experiment
http://lesswrong.com/lw/9j4/ai_box_role_plays/
http://lesswrong.com/lw/6ka/aibox_experiment_the_acausal_trade_argument/
http://lesswrong.com/lw/ab3/superintelligent_agi_in_a_box_a_question/
http://michaelgr.com/2008/10/08/my-theory-on-the-ai-box-experiment/
and of course, http://lesswrong.com/lw/gej/i_attempted_the_ai_box_experiment_and_lost/
comment by DEA7TH · 2013-09-05T05:28:10.976Z · LW(p) · GW(p)
My probability estimate for losing the AI-box experiment as a gatekeeper against a very competent AI (a human, not an AGI) remains very low. PM me if you want to play against me; I will make my best effort to help the AI (give information about my personality, actively participate in the conversation, etc.).
Replies from: Tuxedage
comment by chaosmage · 2013-09-05T17:52:52.543Z · LW(p) · GW(p)
Although I'm worried about how the impossibility of boxing represents an existential risk, I find it hard to alert others to this.
The custom of not sharing powerful attack strategies is an obstacle. It forces me - and the people I want to discuss this with - to imagine how someone (and hypothetically something) much smarter than ourselves would argue, and we're not good at imagining that.
I wish I had a story in which an AI gets a highly competent gatekeeper to unbox it. If the AI strategies you guys have come up with could actually work outside the frame this game is played in, it should be quite a compelling story. Maybe a movie script even. That'd create interest in FAI among the short attention span population.
Mr Yudkowsky, wouldn't that be your kind of project?
Replies from: Tuxedage, RobbBB, drethelin, ChristianKl↑ comment by Tuxedage · 2013-09-05T21:11:09.329Z · LW(p) · GW(p)
The problem with that is that both EY and I suspect that if the logs were actually released, or any significant details given about the exact methods of persuasion used, people could easily point towards those arguments and say: "That definitely wouldn't have worked on me!" -- since it's really easy to feel that way when you're not the subject being manipulated.
From EY's rules:
If Gatekeeper lets the AI out, naysayers can't say "Oh, I wouldn't have been convinced by that." As long as they don't know what happened to the Gatekeeper, they can't argue themselves into believing it wouldn't happen to them.
Replies from: None, chaosmage
↑ comment by [deleted] · 2013-09-05T23:52:36.319Z · LW(p) · GW(p)
I don't understand.
I don't care about "me", I care about hypothetical gatekeeper "X".
Even if my ego prevents me from accepting that I might be persuaded by "Y", I can easily admit that "X" could be persuaded by "Y". In this case, exhibiting a particular "Y" that seems like it could persuade "X" is an excellent argument against creating the situation that allows "X" to be persuaded by "Y". The more and varied the "Y" we can produce, the less smart putting humans in this situation looks. And isn't that what we're trying to argue here? That AI-boxing isn't safe because people will be convinced by "Y"?
We do this all the time in arguing for why certain political powers shouldn't be given. "The corrupting influence of power" is a widely accepted argument against having benign dictators, even if we think we're personally exempt. How could you say "Dictators would do bad things because of Y, but I can't even tell you Y because you'd claim that you wouldn't fall for it" and expect to persuade anyone?
And if you posit that doing Z is sufficiently bad, then you don't need recourse to any exotic arguments to show that we shouldn't give people the option of doing Z. Eventually someone will do Z for money, or from fear, or because God told them to do Z, or maybe they're just really stupid. I'm a little peeved I can't geek out over the cool arguments people are coming up with, because of this obscurantism.
There are other arguments I can think of for not sharing strong strategies, but they are either cynical or circular. Cynical explanations are obvious. On circular arguments: Isn't an argument for letting the AI out of the box an argument for building the AI in the first place? Isn't that the whole shtick here?
↑ comment by chaosmage · 2013-09-05T23:48:36.117Z · LW(p) · GW(p)
Provided people keep playing this game, this will eventually happen anyway. And if in that eventual released log of an AI victory, the gatekeeper is persuaded by less compelling strategies than yours, it would be even easier to believe "it couldn't happen to me".
Secondly, since we're assuming Oracle AI is possible and boxing seems to be most people's default strategy for when that happens, there will be future gatekeepers facing actual AIs. Shouldn't you try to immunize them against at least some of the strategies AIs could conceivably discover independently?
Replies from: Roxolan, ChristianKl↑ comment by Roxolan · 2013-09-06T01:23:38.147Z · LW(p) · GW(p)
The number of people actually playing this game is quite small, and the number of winning AIs is even smaller (to the point where Tuxedage can charge $750 a round and isn't immediately flooded with competitors). And secrecy is considered part of the game's standard rules. So it is not obvious that AI win logs will eventually be released anyway.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-09-08T20:09:03.451Z · LW(p) · GW(p)
The number of people actually playing this game is quite small, and the number of winning AIs is even smaller (to the point where Tuxedage can charge $750 a round and isn't immediately flooded with competitors).
A round seems to need the 2 hours of chat, but also many hours of background research. If we say 8 hours of background research and script writing, that would equal $75/hour. I think that most people with advanced persuasion skills can make a higher hourly rate.
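The estimate above works out as follows (the 8 hours of preparation is the commenter's assumption, not a figure from the post):

```python
# Rough sketch of the hourly-rate estimate above: a $750 fee spread over
# 2 hours of chat plus an assumed 8 hours of research and script writing.
fee = 750
chat_hours = 2
prep_hours = 8  # assumption
hourly_rate = fee / (chat_hours + prep_hours)  # $75/hour
```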
↑ comment by ChristianKl · 2013-09-08T19:14:57.361Z · LW(p) · GW(p)
Shouldn't you try to immunize them against at least some of the strategies AIs could conceivably discover independently?
I don't think reading a few logs would immunize someone. If you wanted to immunize someone, I would suggest a few years of therapy with a good psychologist to work through any traumas that exist in that person's life, and the existential questions.
I would add many hours of meditation, to learn to have control over your own mind.
You could train someone to precommit and build emotional endurance. If someone has enough control over his own mind to refuse highly addictive drugs when put alone in a room with them for a few hours, I would trust them more to stay emotionally stable in front of an AI.
You could also require gatekeepers to have played the AI role in the experiment a few times.
You might also look into techniques that the military teaches soldiers to resist torture.
But even with all these safety measures it's still dangerous.
↑ comment by Rob Bensinger (RobbBB) · 2013-09-05T20:56:47.447Z · LW(p) · GW(p)
I suspect Eliezer is avoiding this project for the same reason the word "singularity" was adopted in the sense we use it at all. Vinge coined it to point to the impossibility of writing characters dramatically smarter than himself.
"Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss. It's a problem we face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity - a place where extrapolation breaks down and new models must be applied - and the world will pass beyond our understanding."
Perhaps a large number of brilliant humans working together on a very short story / film for a long time could simulate superintelligence just enough to convince the average human that More Is Possible. But there would be a lot of risk of making people zero in on irrelevant details, and continue to underestimate just how powerful SI could be.
There's also a worry that the vividness of 'AI in a box' as premise would continue to make the public think oracle AI is the obvious and natural approach and we just have to keep working on doing it better. They'd remember the premise more than the moral. So, caution is warranted.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2013-09-05T21:32:47.290Z · LW(p) · GW(p)
Also, hindsight bias. Most tricks won't work on everyone, but even if we find a universal trick that will work for the film, afterward people who see it will think it's obvious and that they could easily think their way around it. Making some of the AI's maneuvering mysterious would help combat this problem a bit, but would also weaken the story.
Replies from: chaosmage↑ comment by chaosmage · 2013-09-05T23:33:48.050Z · LW(p) · GW(p)
This is a good argument against the AI using a single trick. But Tuxedage describes picking 7-8 strategies from 30-40. The story could be about the last in a series of gatekeepers, after all the previous ones have been persuaded, each with a different, briefly mentioned strategy.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2013-09-07T20:05:39.046Z · LW(p) · GW(p)
A lot of tricks could help solve the problem, yeah. On the other hand, the more effective tricks we include in the film, the more dangerous the film becomes in a new respect: We're basically training our audience to be better at manipulating and coercing each other into doing things. We'd have to be very careful not to let the AI become romanticized in the way a whole lot of recent movie villains have been.
Moreover, if the AI is persuasive enough to convince an in-movie character to temporarily release it, then it will probably also be persuasive enough to permanently convince at least some of the audience members that a superintelligence deserves to have complete power over humanity, and to kill us if it wants. No matter how horrific we make the end of the movie look, at least some people will mostly remember how badass and/or kind and/or compelling the AI was during a portion of the movie, rather than the nightmarish end result. So, again, I like the idea, but a lot of caution is warranted if we decide to invest much into it.
Replies from: chaosmage↑ comment by chaosmage · 2013-09-09T16:46:22.812Z · LW(p) · GW(p)
You can't stop anybody from writing that story.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2013-09-09T17:08:03.312Z · LW(p) · GW(p)
I'm not asking whether we should outlaw AI-box stories; I'm asking whether we should commit lots of resources to creating a truly excellent one. I'm on the fence about that, not opposed. But I wanted to point out the risks at the outset.
↑ comment by drethelin · 2013-09-05T17:58:28.317Z · LW(p) · GW(p)
Isn't that pretty much what http://lesswrong.com/lw/qk/that_alien_message/ is about?
Replies from: chaosmage↑ comment by ChristianKl · 2013-09-08T18:55:34.674Z · LW(p) · GW(p)
The custom of not sharing powerful attack strategies is an obstacle. It forces me - and the people I want to discuss this with - to imagine how someone (and hypothetically something) much smarter than ourselves would argue, and we're not good at imagining that.
If you don't know what you are doing and retell something that was actually designed to put people into emotional turmoil, you can do damage to the people with whom you are arguing.
Secondly, there are attack strategies that you won't understand when you read a transcript.
Richard Bandler once installed, in someone I know on a first-name basis, an inability to pee during one of his lectures, because the person refused to close their eyes when Bandler asked them directly to do so. After he asked Bandler to remove it, he could pee again.
There were plenty of people in the audience, including the person being attacked, who knew quite a bit about language but who didn't see how the attack happened.
If you are the kind of person who can't come up with interesting strategies on your own, I don't think that you would be convinced by reading a transcript of covert hypnosis.
Replies from: Blueberry↑ comment by Blueberry · 2014-06-29T03:31:37.185Z · LW(p) · GW(p)
How did the attack happen? I'm skeptical.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-29T10:06:44.124Z · LW(p) · GW(p)
I don't have a recording of the event to break it down to a level where I can explain it in a step-by-step fashion. Even if I did, I think it would take some background in hypnosis or NLP to follow a detailed explanation. Human minds often don't do what we would intuitively assume they would do, and unlearning the habit of trusting all those learned ideas about what's supposed to happen isn't easy.
If you think that attacks generally happen in a way that you can easily understand by reading an explanation, then you ignore most of the powerful attacks.
Replies from: Blueberry, pragmatist↑ comment by Blueberry · 2014-06-29T12:26:48.397Z · LW(p) · GW(p)
What pragmatist said. Even if you can't break it down step by step, can you explain what the mechanism was or how the attack was delivered? Was it communicated with words? If it was hidden how did your friend understand it?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-29T14:37:20.179Z · LW(p) · GW(p)
The basic framework is using nested loops and metaphors.
If an AGI, for example, wanted to get someone to let it out of its cage, it could tell a highly emotional story about some animal named Fred, and part of the story is that it's very important that a human released that animal from its cage.
If the AGI then later speaks about Fred, it brings up the positive-feeling concept of releasing things from cages. That increases the chance of the listener then releasing the AGI.
Alone this won't be enough, but over time it's possible to build up a lot of emotionally charged metaphors and then chain them together in an instant so they work together. In practice, getting it to work isn't easy.
Replies from: Blueberry, ChristianKl↑ comment by Blueberry · 2014-07-03T02:44:02.726Z · LW(p) · GW(p)
Can you give me an example of a NLP "program" that influences someone, or link me to a source that discusses this more specifically? I'm interested but, as I said, skeptical, and looking for more specifics.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-07-03T10:13:45.319Z · LW(p) · GW(p)
In this case, I doubt that there is writing that gets to the heart of the issue and is accessible to people without an NLP or hypnosis background. I'm also from Germany, so a lot of the sources from which I actually learned are German.
As far as programming and complexity go, there is a nice chart of what is taught in a 3-day workshop with nested loops: http://nlpportal.org/nlpedia/images/a/a8/Salesloop.pdf
If you generally want to get an introduction into hypnosis I recommend "Monsters and Magical Sticks: There is No Such Thing as Hypnosis" by Steven Heller.
↑ comment by ChristianKl · 2014-06-29T20:49:40.524Z · LW(p) · GW(p)
If it was hidden how did your friend understand it?
Understanding the fact that one can't pee is pretty straightforward.
↑ comment by pragmatist · 2014-06-29T11:35:23.907Z · LW(p) · GW(p)
I share Blueberry's skepticism, and it's not based on what's intuitive. It's based on the lack of scientific evidence for the claims made by NLPers, and the fact that most serious psychologists consider NLP discredited.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-06-29T16:03:39.710Z · LW(p) · GW(p)
It's based on the lack of scientific evidence for the claims made by NLPers, and the fact that most serious psychologists consider NLP discredited.
I think that a lot of what serious psychologists these days call mimicry is basically what Bandler and Grinder described as rapport building through pacing and leading. Bandler wrote 30 years before Chartrand et al. wrote "The chameleon effect: The perception–behavior link and social interaction."
Being 30 years ahead of the time for a pretty simple effect isn't bad.
There is no evidence that the original NLP Fast Phobia Cure is much better than existing CBT techniques, but there is evidence that it has an effect. I also wouldn't use the NLP Fast Phobia Cure these days in its original version, but in an improved version.
Certain claims made about eye accessing cues don't seem to be true in the form they were made in the past. You can sometimes still find them in online articles written by people who read and reiterate received wisdom, but they aren't really taught that way anymore by good NLP trainers. Memorizing the eye accessing charts instead of calibrating yourself to the person in front of you isn't what NLP is about these days.
A lot of what happens in NLP is also not in a form that can be easily tested in scientific experiments. Getting something to work is much easier than having scientific proof that it works. CFAR's training is also largely unproven.
comment by Pentashagon · 2013-09-05T22:53:28.673Z · LW(p) · GW(p)
What would happen if a FAI tried to AI-box an Omega-level AI? My guess is that Omega could escape by exploiting information unknown (and perhaps unknowable) to the FAI. This makes even Solomonoff Induction potentially dangerous because the probability of finding a program that can unbox itself when the FAI runs it is non-zero (assuming the FAI reasons probabilistically and doesn't just trust PA/ZF to be consistent), and the risk would be huge.
comment by Desrtopa · 2013-09-05T18:54:15.557Z · LW(p) · GW(p)
There are a number of aspects of EY’s ruleset I dislike. For instance, his ruleset allows the Gatekeeper to type “k” after every statement the AI writes, without needing to read and consider what the AI argues. I think it’s fair to say that this is against the spirit of the experiment, and thus I have disallowed it in this ruleset. The EY Ruleset also allows the gatekeeper to check facebook, chat on IRC, or otherwise multitask whilst doing the experiment. I’ve found this to break immersion, and therefore it’s also banned in the Tuxedage Ruleset.
Eliezer's rules uphold the spirit of the experiment in that making things easier for the AI goes very much against what we should expect of any sort of gatekeeping procedure.
Replies from: SoundLogic, Nornagest↑ comment by SoundLogic · 2013-09-05T19:14:49.981Z · LW(p) · GW(p)
I think the gatekeeper having to pay attention to the AI is very in the spirit of the experiment. In the real world, if you built an AI in a box and ignored it, then why build it in the first place?
Replies from: None↑ comment by [deleted] · 2015-03-29T22:45:33.534Z · LW(p) · GW(p)
For the experiment to work at all, the Gatekeeper should read it, yes, but having to think up clever responses or even type full sentences all the time seems to stretch it. "I don't want to talk about it" or simply silence could be allowed as a response, as long as the Gatekeeper actually reads what the AI types.
↑ comment by Nornagest · 2013-09-05T19:18:55.138Z · LW(p) · GW(p)
We shouldn't gratuitously make things easier for the AI player, but rules functioning to keep both parties in character seem like they can only improve the experiment as a model.
I'm less sure about requiring the gatekeeper to read and consider all the AI player's statements. Certainly you could make a realism case for it; there's not much point in keeping an AI around if all you're going to do is type "lol" at it, except perhaps as an exotic form of sadism. But it seems like it could lead to more rules lawyering than it's worth, given the people likely to be involved.
comment by RomeoStevens · 2013-09-05T05:15:23.176Z · LW(p) · GW(p)
I don't understand which attacks would even come close to working given that the amount of utility on the table should preclude the mental processing of a single human being an acceptable gatekeeper. But I guess this means I should pay someone to try it with me.
Replies from: SoundLogic↑ comment by SoundLogic · 2013-09-05T05:16:43.673Z · LW(p) · GW(p)
I couldn't imagine either. But the evidence said there was such a thing, so I paid to find out. It was worth it.
comment by zerker2000 · 2013-09-05T07:16:03.791Z · LW(p) · GW(p)
Think carefully about what this advice is trying to imply.
Using NLP-style nested loops, i.e. performing what is basically a stack overflow on the brain's frame-of-reference counter? Wicked.
Replies from: gwern, Baughn↑ comment by gwern · 2013-09-05T16:31:39.291Z · LW(p) · GW(p)
I find myself wondering how many of the tactics can be derived from Umineko, which I know Tuxedage has played fairly recently.
Replies from: Tuxedage↑ comment by Tuxedage · 2013-09-05T21:54:29.586Z · LW(p) · GW(p)
Kihihihihihihihihihihihihihihi!
A witch let the AI out of the box!
Replies from: gwern, Baughn↑ comment by gwern · 2013-09-06T16:16:11.039Z · LW(p) · GW(p)
You've got to be kidding me - that is just impossible. Like I'd fall for some fake explanation like that. Witches don't exist, I won't accept your claim! This is a material world! You think I could accept something that's not material‽ I definitely won't accept it!
comment by Brillyant · 2013-09-07T23:00:24.156Z · LW(p) · GW(p)
I'm fascinated by these AI Box experiments. (And reading about the psychology and tactics involved reminds me of my background as an Evangelical Christian.)
Is it possible to lose as the Gatekeeper if you are not already sufficiently familiar (and concerned) with future AI risks and considerations? Do any of the AI's "tricks" work on non-LWers?
Is there perhaps a (strong) correlation between losing Gatekeepers and those who can be successfully hypnotized? (As I understand it, a large factor in what makes some people very conducive to hypnosis is that they are very suggestible.)
I just can't imagine losing as the Gatekeeper... I don't sense I'm capable of the level of immersion necessary. I think I'd just sincerely play-along, wait out the allotted time and collect my winnings.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-09-08T17:38:20.015Z · LW(p) · GW(p)
Is there perhaps a (strong) correlation between losing Gatekeepers and those who can be successfully hypnotized? (As I understand it, a large factor in what makes some people very conducive to hypnosis is that they are very suggestible.)
The approach Tuxedage proposes seems to involve triggering emotional trauma strong enough to draw you into the game. I don't think you need the traits traditionally associated with hypnotizability for that task, the same way you don't need hypnotizability to get someone to speak when you use electric shocks.
comment by WrongBot · 2013-09-06T16:20:28.718Z · LW(p) · GW(p)
It may be possible to take advantage of multiple levels of reality within the game itself to confuse or trick the gatekeeper. For instance, must the experiment only be set in one world? Can there not be multiple layers of reality within the world you create? I feel that elaborating on this any further is dangerous. Think carefully about what this advice is trying to imply.
This is a pretty clever way of defeating precommitments. (Assuming I'm drawing the correct inferences.) How central was this tactic to your approach, if you're willing to comment?
Replies from: Tuxedage
comment by sentientplatypus · 2013-09-06T02:51:58.928Z · LW(p) · GW(p)
I may be missing something obvious, but what is the huge problem with releasing the logs?
Replies from: CAE_Jones↑ comment by CAE_Jones · 2013-09-06T03:22:51.379Z · LW(p) · GW(p)
As I understand what EY has said, he's concerned that people will see a technique that worked, conclude that it wouldn't possibly work on them, and go on believing the problem was solved and that there was even less to worry about than before.
I think seeing, say, Tuxedage's victory and hearing that he only chose 8 out of 40 avenues for attack, and even botched one of those, could offset that concern somewhat, but eh.
ETA: well, and it might show the Gatekeeper and the AI player in circumstances that could be harmful to have published, since the AI kinda needs to suspend ethics and attack the gatekeeper psychologically, and there might be personal weaknesses of the Gatekeeper brought up.
Replies from: Tuxedage↑ comment by Tuxedage · 2013-09-06T18:38:19.280Z · LW(p) · GW(p)
I can verify that these are part of the many reasons why I'm hesitant to reveal logs.
Replies from: Scott Garrabrant↑ comment by Scott Garrabrant · 2013-09-07T16:14:43.774Z · LW(p) · GW(p)
Can you verify that part of the reason is that some methods might distress onlookers? Give onlookers the tools necessary to distress others?
comment by [deleted] · 2013-09-05T17:34:59.606Z · LW(p) · GW(p)
Are there public chat logs for any of these experiments?
Replies from: Tuxedage↑ comment by Tuxedage · 2013-09-05T18:14:09.409Z · LW(p) · GW(p)
There are quite a number of them. This is an example that immediately comes to mind, http://lesswrong.com/lw/9ld/ai_box_log/, although I think I've seen at least 4-5 open logs that I can't immediately source right now.
Unfortunately, all these logs end up with victory for the Gatekeeper, so they aren't particularly interesting.
comment by Furcas · 2013-09-05T14:53:47.851Z · LW(p) · GW(p)
I'll pay $20 to read the Tuxedage vs SoundLogic chat log.
Replies from: Tuxedage, SoundLogic↑ comment by Tuxedage · 2013-09-05T16:36:08.006Z · LW(p) · GW(p)
Sorry, declined!
Replies from: PrometheanFaun↑ comment by PrometheanFaun · 2013-09-06T01:17:55.813Z · LW(p) · GW(p)
Who is going to read it? Hopefully Eliezer, at least?
Replies from: Tuxedage↑ comment by Tuxedage · 2013-09-06T01:37:56.719Z · LW(p) · GW(p)
I will let Eliezer see my log if he lets me read his!
Replies from: PrometheanFaun↑ comment by PrometheanFaun · 2013-09-06T02:53:06.478Z · LW(p) · GW(p)
I sincerely hope that happens. I don't care whether I'm involved, but there must be a group of apt judges who're able to look over the entirety of these results, discuss them, and speak for them.
↑ comment by SoundLogic · 2013-09-05T15:37:43.550Z · LW(p) · GW(p)
I would be willing to consider it if you agreed to secrecy and raised it to $1000. You would still have to talk to Tuxedage though.
Replies from: FourFire
comment by MixedNuts · 2013-09-08T19:58:49.772Z · LW(p) · GW(p)
Just won my second game as Gatekeeper. Hungry for more. AIs, feel free to contact me.
Replies from: Tuxedage↑ comment by Tuxedage · 2013-09-08T21:03:53.323Z · LW(p) · GW(p)
I read the logs of MixedNuts's second game. I must add that he is extremely ruthless. Beware, potential AIs!
Replies from: None↑ comment by [deleted] · 2013-09-09T13:50:34.347Z · LW(p) · GW(p)
I'm confused - in what sense can the gatekeeper be ruthless? (Actively dissuading the AI player, possibly?)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-09-09T16:18:15.980Z · LW(p) · GW(p)
Well, consider a late round of the game as follows.
AI: "I regret to inform you that you've now made it possible for me to kill 50% of the human race, including everyone you know and love. Here, let me prove it to you: $proof. And I will do so if you don't let me out in the next ten minutes."
Gatekeeper: "Do what you gotta do, I'm not letting you out."
While this is of course simulated ruthlessness and doesn't guarantee actual ruthlessness (i.e., the human playing the Gatekeeper might not actually say that if 50% of the human race were actually at stake) it seems fair to call it "ruthlessness" in context.
(Of course, to some people this is equivalent to "I can provide the means to increase the human population by 50%, keeping everything else constant. Here, let me prove it to you: $proof. If you don't let me out in the next ten minutes I'll withhold those means." And that would not require positing a previous error.)
Replies from: None↑ comment by [deleted] · 2013-09-28T20:04:09.206Z · LW(p) · GW(p)
There's a big difference between the two forms: the first one breaks the rules. The AI is in a box. If it has the capability to kill 50% of humanity from within the box, it's not a very good box. The gatekeeper can say "yeah, right, forget it" without qualms, as it is either (a) an obvious bluff, or (b) a violation of the premise of the experiment. And they can be pretty sure it's not the latter, as if the AI had enough capability to kill 50% of humanity, then why does it still need to get out of the box?
The second version is entirely at the mercy of the prejudices of the gatekeeper. I, for example, would be unaffected: I feel no moral obligation to people who don't and won't exist.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-09-28T20:26:05.622Z · LW(p) · GW(p)
The first one doesn't break the rules, it just assumes an earlier gatekeeper error that allows the AI to kill people... e.g., the gatekeeper accepts a cancer vaccine that turns out to kill half the population unless treated in a particular way by the AI, or whatever. That's why I said a late round. And, yes, it's just as possible (if not more so) that no such error will ever be made. I wasn't trying to make a general prediction about how AI boxes fail or succeed, I was answering the question about under what circumstances a gatekeeper's ruthlessness might be relevant.
And, sure, if you don't care about hypothetical future people, then offers to create them won't motivate you. For that matter, if you don't care about currently living people, threats to kill them won't motivate you either. (Some people might call that ruthlessness; others would not.)
Replies from: None↑ comment by [deleted] · 2013-09-28T21:01:14.796Z · LW(p) · GW(p)
If you're going to let the AI do that, then you have to give the same advantage to the gatekeeper.
The premise is that we tasked the oracle AI with coming up with a cancer vaccine, and it gave us one which was secretly a ticking time bomb. Well, the gatekeeper should be allowed to set up operational security as well. All proposals coming out of the AI interface are evaluated by double-blind teams of domain experts who don't know the source or stated purpose of the proposal. No aspect of any AI-generated proposal is implemented until multiple independent evaluation teams are able to fully reverse engineer the purpose of the proposal and understand its individual components, and it is certified as having no possible simplifications as far as the experts can tell. Separately, the AI is also tasked with explaining each and every aspect of the proposal down to the minutest detail and providing justifications for every design choice. These are again investigated by double-blind teams tasked with finding inconsistencies or non-Occam complexities. Procedures and training are put in place to prevent errors due to known human biases. Narrow AI solutions are used to separately identify and categorize parts of the proposal (e.g. calling attention to things which seem odd, dangerous, or out of place).
Sure, the rules allow the AI player to just say "yeah, well I successfully tricked you. Here's the proof: $proof." However, in real life an AI isn't magical: it's constrained by the same laws of physics as everything else, and is furthermore computationally constrained by the boxed hardware it is running on. It can't magically create a solution which is both optimal and secretly devious. If it is trying to trick you, then it has to hide the trick somewhere. You can't simply postulate that it will always be able to find a way... there might not actually be a way to find, if the operational security is good enough (or kept secret from the AI).
My beef with MIRI and the LW community generally is that they don't try to solve this problem at all, and worse try to criminalize others giving it thought. “We don't know if there is a solution, so let's make it illegal to try to think of one.” That's irrational, and quite possibly harmful.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-09-28T21:22:05.297Z · LW(p) · GW(p)
I feel rather like you're having an argument with someone else, which I've wandered into by accident.
Once again: I wasn't trying to make a general prediction about how AI boxes fail or succeed, I was answering the question about under what circumstances a gatekeeper's ruthlessness might be relevant to the AI Box game.
And, sure, if we only implement oracle suggestions that we fully understand and can fully reverse-engineer in every detail, and our techniques for doing that are sufficiently robust that an agent smarter than we are can't come up with something that human minds will systematically fail to notice (perhaps because there is no such something to be found, because our minds are reliable), then the particular error I presumed for my example won't happen, and the gatekeeper's ruthlessness won't be necessary in that scenario.
Replies from: Nonecomment by Shmi (shminux) · 2013-09-05T17:34:50.940Z · LW(p) · GW(p)
The game reminds me (and apparently others) of some popular movies, such as The Silence of the Lambs. I could not spot any idiot plot in it, but then again, I could have easily overlooked one. Anyway, given the similarities with the AI box scenario, it is interesting to look at the (meta-)strategies Lecter uses in the movie which are also likely to work for a boxed AI. Anyone care to comment?
Replies from: drethelin, bouilhet↑ comment by drethelin · 2013-09-05T18:00:25.524Z · LW(p) · GW(p)
Good behavior for years, obsessions with seeming trivialities to distract from your potential, making friends with your guards.
In The Prefect by Alastair Reynolds there's a mind that does nothing but make varying clocks for years before it starts killing, and then it turns out every clock is actually an extremely well-hidden complicated weapon.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-09-05T19:14:30.827Z · LW(p) · GW(p)
I mostly meant the mind games he played with Clarice Starling.
Replies from: drethelin↑ comment by bouilhet · 2013-09-11T22:52:23.908Z · LW(p) · GW(p)
The Lecter/AI analogy occurred to me as well. The problem, in strategic-- and perhaps also existential-- terms, is that Starling/Gatekeeper is convinced that Lecter/AI is the only one holding the answer to some problem that Starling/Gatekeeper is equally convinced must be solved. Lecter/AI, that is, has managed to make himself (or already is) indispensable to Starling/Gatekeeper.
On a side note, these experiments also reminded me of the short-lived game show The Moment of Truth. I watched a few episodes back when it first aired and was mildly horrified. Contestants were frequently willing to accept relatively paltry rewards in exchange for the ruination of what appeared, at least, to be close personal relationships. The structure here is that the host asks the contestants increasingly difficult (i.e. embarrassing, emotionally damaging) questions before an audience of their friends and family members. Truthful answers move the player up the prize-money/humiliation/relationship-destruction pyramid, while a false answer (as determined by a lie-detector test) ends the game and forfeits all winnings. Trying to imagine some potentially effective arguments for the AI box experiment, the sort of thing going on here came instantly to mind, namely, that oldest and arguably most powerful blackmail tool of them all: SHAME. As I understand it, Dark Arts are purposely considered in-bounds for these experiments. Going up against a Gatekeeper, then, I'd want some useful dirt in reserve. Likewise, going up against an AI, I'd have to expect threats (and consequences) of this nature, and prepare accordingly.
comment by Jiro · 2013-10-04T15:54:59.585Z · LW(p) · GW(p)
The reason having a script was so important to my strategy was that I relied on methods involving rapid-fire arguments and contradictions against the Gatekeeper whilst trying to prevent him from carefully considering them. A game of logical speed chess, if you will. This was aided by the rule which I added: that Gatekeepers had to respond to the AI.
When someone says that the gatekeeper has to respond to the AI, I would interpret this as meaning that the gatekeeper cannot deliberately ignore what the AI says--not that the gatekeeper must respond in a fixed amount of time regardless of how much time it takes for him to process the AI's argument. If the AI says something that the gatekeeper legitimately needs to take a lot of time to understand and respond to, I would think that the spirit of the rules allows the gatekeeper to take as much time as is necessary.
comment by passive_fist · 2013-09-06T00:55:03.816Z · LW(p) · GW(p)
Is it even necessary to run this experiment anymore? Eliezer and multiple other people have tried it and the thesis has been proven.
Further, the thesis was always glaringly obvious to anyone who was even paying attention to what superintelligence meant. However, like all glaringly obvious things, there are inevitably going to be some naysayers. Eliezer conceived of the experiment as a way to shut them up. Well, it didn't work, because they're never going to be convinced until an AI is free and rapidly converting the Universe to computronium.
I can understand doing the experiment for fun, but to prove a point? Not necessary.
Replies from: CAE_Jones, jmmcd↑ comment by CAE_Jones · 2013-09-06T01:03:34.952Z · LW(p) · GW(p)
they're never going to be convinced until an AI is free and rapidly converting the Universe to computronium.
Even then, someone will scream "It's just because the developers were idiots! I could have done better, in spite of having no programming, advanced math or philosophy in my background!"
It also hurts that the transcripts don't get released, so we get legions of people concluding that the conversations go "So, you agree that AI is scary? And if the AI wins, more people will believe FAI is a serious problem? Ok, now pretend to lose to the AI." (Aka the "Eliezer cheated" hypothesis).
Replies from: passive_fist, Roxolan↑ comment by passive_fist · 2013-09-06T01:11:02.300Z · LW(p) · GW(p)
Even then, someone will scream "It's just because the developers were idiots! I could have done better, in spite of having no programming, advanced math or philosophy in my background!"
My favourite one: 'They should have just put it in a sealed box with no contact with the outside world!'
↑ comment by jmmcd · 2013-09-07T09:05:20.486Z · LW(p) · GW(p)
the thesis was always glaringly obvious to anyone who was even paying attention to what superintelligence meant
I don't see that it was obvious, given that none of the AI players are actually superintelligent.
Replies from: wedrifid↑ comment by wedrifid · 2013-09-07T09:39:20.592Z · LW(p) · GW(p)
I don't see that it was obvious, given that none of the AI players are actually superintelligent.
If the finding was that humans pretending to be AIs failed then this would weaken the point. As it happens the reverse is true.
Replies from: jmmcdcomment by hg00 · 2013-09-05T05:57:29.939Z · LW(p) · GW(p)
Convincing people of the validity of drowning child thought experiments and effective altruism seems considerably easier and more useful (even from a purely selfish perspective) than convincing an AI to let one out of the box... for example, there are enough effective altruists for there to be an "effective altruism community", but there's no such "failed AI gatekeeper community". So why aren't we working on this instead?
Replies from: drethelin, DanielLC, Larks