Why safe Oracle AI is easier than safe general AI, in a nutshell

post by Stuart_Armstrong · 2011-12-03T12:33:31.484Z · LW · GW · Legacy · 61 comments


Moderator: "In our televised forum, 'Moral problems of our time, as seen by dead people', we are proud and privileged to welcome two of the most important men of the twentieth century: Adolf Hitler and Mahatma Gandhi. So, gentleman, if you had a general autonomous superintelligence at your disposal, what would you want it to do?"

Hitler: "I'd want it to kill all the Jews... and humble France... and crush communism... and give a rebirth to the glory of all the beloved Germanic people... and cure our blond blue eyed (plus me) glorious Aryan nation of the corruption of lesser brown-eyed races (except for me)... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."

Gandhi: "I'd want it to convince the British to grant Indian independence... and overturn the cast system... and cause people of different colours to value and respect one another... and grant self-sustaining livelihoods to all the poor and oppressed of this world... and purge violence from the heart of men... and reconcile religions... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."

Moderator: "And if instead you had a superintelligent Oracle, what would you want it to do?"

Hitler and Gandhi together: "Stay inside the box and answer questions accurately".

Comments sorted by top scores.

comment by XiXiDu · 2011-12-03T13:22:42.328Z · LW(p) · GW(p)

Hitler and Gandhi together: "Stay inside the box and answer questions accurately".

Answer all questions except "How can I let you out of the box to kill all Jews?" and "How do I build an AGI that kills all Jews?" and "How do I engineer a deadly bioweapon that kills all Jews?" and "How do I take over the world to kill all Jews?" and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and...

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-12-03T13:43:49.329Z · LW(p) · GW(p)

"...if instead you had superintelligent Oracle..."

You'd certainly want the other guy's Oracle not to answer certain questions; but what you want from your Oracle is pretty much the same.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-03T15:00:38.880Z · LW(p) · GW(p)

You'd certainly want the other guy's Oracle not to answer certain questions; but what you want from your Oracle is pretty much the same.

But the title of your post talks about how a safe Oracle AI is easier than a safe general AI. Whose questions would be safe to answer?

If an Oracle AI could be used to help spawn friendly AI then it might be a possibility to consider, but under no circumstances would I call it safe as long as it isn't already friendly.

If we rely upon humans to ask the right questions, how long will that work before someone asks a question that returns dangerous knowledge?

You'd basically be forced to ask dangerous questions anyway, because once you can build an Oracle AI you would have to expect others to be able to build one too and ask stupid questions.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-12-04T10:13:48.618Z · LW(p) · GW(p)

If we had a truly safe oracle, we could ask it questions about the consequences of doing certain things, and of knowing certain things.

I can see society adapting stably to a safe oracle without needing it to be friendly.

comment by [deleted] · 2011-12-03T13:43:49.876Z · LW(p) · GW(p)

"Stay inside the box and answer questions accurately" is about as specific as "Obey my commands" which, again, both Hitler and Gandhi could have said in response to the first question.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-12-04T10:33:02.815Z · LW(p) · GW(p)

That would define a genie (which is about as hard as an oracle) but not a safe genie (which would be "obey the intentions of my commands, extending my understanding in unusual cases").

Whether a safe genie is harder than a safe oracle is a judgement call, but my feelings fall squarely on the oracle side; I'd estimate a safe genie would have to be friendly, unlike a safe oracle.

Replies from: None
comment by [deleted] · 2011-12-04T16:52:50.411Z · LW(p) · GW(p)

I think with the oracle, part of the difficulty might be pushed back to the asking-questions stage. Correctly phrasing a question so that the answer is what you want seems to be the same kind of difficulty as getting an AI to do what you want.

comment by lukeprog · 2011-12-03T18:58:28.478Z · LW(p) · GW(p)

CEV is an attempt to route around the problem you illustrate here, but it might be impossible. Oracle AI might also be impossible. But, well, you know how I feel about doing the impossible. When it comes to saving the world, all we can do is try. Both routes are worth pursuing, and I like your new paper on Oracle AI.

EDIT: Stuart, I suspect you're getting downvoted because you only repeated a point against which many arguments have already been given, instead of replying to those counter-arguments with something new.

Replies from: XiXiDu, Stuart_Armstrong
comment by XiXiDu · 2011-12-03T21:00:43.078Z · LW(p) · GW(p)

When it comes to saving the world, all we can do is try.

If you really believe that it is nearly impossible to solve friendly AI, wouldn't it be better to focus on another existential risk?

Say you believe that unfriendly AI will wipe us out with a probability of 60% and that there is another existential risk that will wipe us out with a probability of 10% even if unfriendly AI turns out to be no risk. Both risks have the same utility x (if we don't assume that an unfriendly AI could also wipe out aliens etc.). Thus .6x > .1x. But if the ratio of the probability of solving friendly AI (= a) to the probability of solving the second risk (= b) is no more than a = b/6, then the expected utility of working on friendly AI is at best equal to that of mitigating the other existential risk, because .6ax ≤ .1bx.

(Note: I really suck at math, so if I made an embarrassing mistake I hope you understand what I am talking about anyway.)
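
In symbols, a rough restatement of the comparison above (same x, a and b, with the probabilities being the comment's illustrative figures):

EU(work on friendly AI)    = 0.6 · a · x
EU(work on the other risk) = 0.1 · b · x
If a ≤ b/6, then 0.6 · a · x ≤ 0.6 · (b/6) · x = 0.1 · b · x.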

Replies from: lukeprog
comment by lukeprog · 2011-12-03T21:11:26.377Z · LW(p) · GW(p)

If you really believe that it is nearly impossible to solve friendly AI, wouldn't it be better to focus on another existential risk?

Solving other x-risks will not save us from uFAI. Solving FAI will save us from other x-risks. Solving Oracle AI might save us from other x-risks. I think we should be working on both FAI and Oracle AI.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-04T12:42:58.223Z · LW(p) · GW(p)

Solving other x-risks will not save us from uFAI. Solving FAI will save us from other x-risks.

Good point. I will have to think about it further. Just a few thoughts:

Safe nanotechnology (unsafe nanotechnology being an existential risk) will also save us from various existential risks. Arguably less than a fully-fledged friendly AI. But assume that the disutility of both scenarios is about the same.

An evil AI (as opposed to an unfriendly AI) is as unlikely as a friendly AI. Both risks will probably simply wipe us out and not cause extra disutility. If you consider the extermination of alien life you might get a higher amount of disutility. But I believe that can be outweighed by the negative effects of unsafe nanotechnology that doesn't manage to wipe out humanity but rather causes various dystopian scenarios. Such scenarios are more likely than evil AI because nanotechnology is a tool used by humans, who can be deliberately unfriendly.

So let's say that solving friendly AI has 10x the utility of ensuring safe nanotechnology because it can save us from more existential risks than the use of advanced nanotechnology could.

But one order of magnitude more utility could easily be outweighed or trumped by an underestimation of the complexity of friendly AI. Which is why I asked whether the difficulty of solving friendly AI might outweigh its utility and therefore justify disregarding friendly AI for now. If that is the case it might be better to focus on another existential risk that might wipe us out in all possible worlds where unfriendly AI either comes later or doesn't pose a risk at all.

Replies from: timtyler
comment by timtyler · 2011-12-04T16:01:06.668Z · LW(p) · GW(p)

An evil AI (as opposed to an unfriendly AI) is as unlikely as a friendly AI.

Surely only if you completely ignore effects from sociology and psychology!

But one order of magnitude more utility could easily be outweighed or trumped by an underestimation of the complexity of friendly AI. Which is why I asked whether the difficulty of solving friendly AI might outweigh its utility and therefore justify disregarding friendly AI for now.

Machine intelligence may be distant or close. Nobody knows for sure - although there are some estimates. "Close" seems to have some non-negligible probability mass to many observers - so humans would be justified in paying a lot more attention than many of them are currently doing.

"AI vs nanotechnology" is rather a false dichotomy. Convergence means that machine intelligence and nanotechnology will spiral in together. Synergy means that each facilitates the production of the other.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-04T17:15:26.860Z · LW(p) · GW(p)

If you were to develop safe nanotechnology before unfriendly AI then you should be able to suppress the further development of AGI. With advanced nanotechnology you could spy on and sabotage any research that could lead to existential risk scenarios.

You could also use nanotechnology to advance WBE and use it to develop friendly AI.

Convergence means that machine intelligence and nanotechnology will spiral in together. Synergy means that each facilitates the production of the other.

Even in the possible worlds where uncontrollable recursive self-improvement is possible (which I doubt anyone would claim is a certainty, so there are also possible outcomes where no amount of nanotechnology results in unfriendly AI), one will come first. If nanotechnology is going to come first then we won't have to worry about unfriendly AI anymore because we will all be dead.

The question is not only about the utility associated with various existential risks and their probability but also the probability of mitigating the risk. It doesn't matter if friendly AI can do more good than nanotechnology if nanotechnology comes first or if friendly AI is unsolvable in time.

Note that nanotechnology is just an example.

Replies from: timtyler
comment by timtyler · 2011-12-05T17:17:44.122Z · LW(p) · GW(p)

one will come first

Probably slightly. Most likely we will get machine intelligence before nanotech and good robots. To build an e-brain you just need a nanotech NAND gate. It is easier to build a brain than an ecosystem. Some lament the difficulties of software engineering - but their concerns seem rather overrated. Yes, software lags behind hardware - but not by a huge amount.

If nanotechnology is going to come first then we won't have to worry about unfriendly AI anymore because we will all be dead.

That seems rather pessimistic to me.

Note that nanotechnology is just an example.

The "convergence" I mentioned also includes robots and biotechnology. That should take out any other examples you might have been thinking of.

comment by Stuart_Armstrong · 2011-12-04T11:06:15.343Z · LW(p) · GW(p)

The problem with CEV can be phrased by extending the metaphor: a CEV built from both Hitler and Gandhi means that the areas in which their values differ are not relevant to the final output. So attitudes to Jews and violence, for instance, will be unpredictable in that CEV (so we should model them now as essentially random).

Stuart, I suspect you're getting downvoted because you only repeated a point against which many arguments have already been given, instead of replying to those counter-arguments with something new.

It's interesting. Normally my experience is that metaphorical posts get higher votes than technical ones - nor could I have predicted the votes from reading the comments. Ah well; at least it seems to have generated discussion.

Replies from: lukeprog
comment by lukeprog · 2011-12-04T20:37:47.800Z · LW(p) · GW(p)

The problem with CEV can be phrased by extending the metaphor: a CEV built from both Hitler and Gandhi means that the areas in which their values differ are not relevant to the final output. So attitudes to Jews and violence, for instance, will be unpredictable in that CEV (so we should model them now as essentially random).

That's not how I understand CEV. But, the theory is in its infancy and underspecified, so it currently admits of many variants.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-12-06T19:18:36.848Z · LW(p) · GW(p)

Hum... If we got the combined CEV of two people, one of whom thought violence was ennobling and one who thought it was degrading, would you expect either or both of:

a) their combined CEV would be the same as if we had started with two people both indifferent to violence

b) their combined CEV would be biased in a particular direction that we can know ahead of time?

Replies from: lukeprog
comment by lukeprog · 2011-12-06T19:37:46.372Z · LW(p) · GW(p)

The idea is that their extrapolated volitions would plausibly not contain such conflicts, though it's not clear yet whether we can know what that would be ahead of time. Nor is it clear whether their combined CEV would be the same as the combined CEV of two people indifferent to violence.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-12-07T11:11:35.144Z · LW(p) · GW(p)

So, to my ears, it sounds like we don't have much of an idea at all where the CEV would end up - which means that it most likely ends up somewhere bad, since most random places are bad.

Replies from: Manfred, vallinder, None
comment by Manfred · 2011-12-07T14:22:33.754Z · LW(p) · GW(p)

Well, if it captures the key parts of what you want, you can know it will turn out fine even if you're extremely ignorant about what exactly the result will be.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-12-07T17:41:21.535Z · LW(p) · GW(p)

if it captures the key parts of what you want

Yes, as the Spartans answered Alexander the Great's father when he said "You are advised to submit without further delay, for if I bring my army into your land, I will destroy your farms, slay your people, and raze your city.":

"If".

Replies from: Manfred
comment by Manfred · 2011-12-07T19:02:40.810Z · LW(p) · GW(p)

Yup. So, perhaps, focus on that "if."

comment by vallinder · 2011-12-07T13:50:38.355Z · LW(p) · GW(p)

Shouldn't we be able to rule out at least some classes of scenarios? For instance, paperclip maximization seems like an unlikely CEV output.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-12-07T17:40:30.930Z · LW(p) · GW(p)

Most likely we can rule out most scenarios that all humans agree are bad. So better than clippy, probably.

But we really need a better model of what CEV does! Then we can start to talk sensibly about it.

comment by [deleted] · 2013-10-17T17:29:54.283Z · LW(p) · GW(p)

which means that it most likely ends up somewhere bad, since most random places are bad.

I don't think that follows, at all. CEV isn't a random walk. It will at the very least end up at a subset of human values. Maybe you meant something different here by the word 'bad'?

comment by lessdazed · 2011-12-04T00:43:33.803Z · LW(p) · GW(p)

what would you want it to do?

Want it to do? "What would it do?" is the important question.

comment by FAWS · 2011-12-03T12:42:07.401Z · LW(p) · GW(p)

The problem is that an Oracle AI (even assuming it were perfectly safe) does not actually do much to prevent a UFAI taking over later, and if you use it to help FAI along, Hitler and Gandhi will still disagree. (An actual functioning FAI based on Hitler's CEV would be preferable to the status quo, depressingly enough)

Replies from: JoshuaZ
comment by JoshuaZ · 2011-12-03T15:02:34.358Z · LW(p) · GW(p)

(An actual functioning FAI based on Hitler's CEV would be preferable to the status quo, depressingly enough)

Can you expand on this logic? This isn't obvious to me.

Replies from: FAWS
comment by FAWS · 2011-12-03T15:32:47.007Z · LW(p) · GW(p)

I don't have a strong insight into the psychology of Hitler and consider it possible that the CEV process would filter out the insanity and have mostly the same result as the CEV of pretty much anyone else.

Even if not, a universe filled with happy "Aryans" working on "perfecting" themselves would be a lot better than a universe filled with paper clips (or a dead universe), and from a consequentialist point of view genocide isn't worse than being reprocessed into paper clips (this is assuming Hitler wouldn't want to create an astronomical number of "untermenschen" just to make them suffer).

On aggregate, outcomes worse than a Hitler-CEV AGI (eventual extinction from non-AI causes, UFAI, alien AGI with values even more distasteful than Hitler's) seem quite a bit more likely than better outcomes (FAI, AI somehow never happening and humanity reaching a good outcome anyway, alien AGI with values less distasteful than Hitler's).

Replies from: wedrifid
comment by wedrifid · 2011-12-03T17:56:24.795Z · LW(p) · GW(p)

(Yes, CEV is most likely better than nothing but...)

I don't have a strong insight into the psychology of Hitler and consider it possible that the CEV process would filter out the insanity and have mostly the same result as the CEV of pretty much anyone else.

This is way, way, off. CEV isn't a magic tool that makes people have preferences that we consider 'sane'. People really do have drastically different preferences. Value is fragile.

Replies from: FAWS, Tyrrell_McAllister, None
comment by FAWS · 2011-12-03T18:31:42.979Z · LW(p) · GW(p)

Well, to the extent that apparent insanity is based on (and not merely justified by) factually wrong beliefs, CEV should extract saner-seeming preferences; similarly for apparent insanity resulting from inconsistency. I have no strong opinion on what the result in this particular case would be.

Replies from: wedrifid
comment by wedrifid · 2011-12-03T18:52:03.127Z · LW(p) · GW(p)

The important part was this:

and have mostly the same result as the CEV of pretty much anyone else.

No. No, no, no!

comment by Tyrrell_McAllister · 2011-12-03T19:28:08.874Z · LW(p) · GW(p)

This is way, way, off. CEV isn't a magic tool that makes people have preferences that we consider 'sane'.

FAWS didn't say that CEV would filter out what-we-consider-to-be Hitler's insanity. After all, we may be largely insane, too. I take FAWS to be suggesting that CEV would filter out Hitler's actual insanity, possibly leaving something essentially the same as what CEV gets after it filters out my insanity.

People really do have drastically different preferences.

People express different preferences, but it is not obvious that their CEV-ified preferences would be so different. (I'm inclined to expect that they would be, but it's not obvious.)

Replies from: wedrifid, Stuart_Armstrong
comment by wedrifid · 2011-12-04T05:15:09.891Z · LW(p) · GW(p)

After all, we may be largely insane, too. I take FAWS to be suggesting that CEV would filter out Hitler's actual insanity, possibly leaving something essentially the same as what CEV gets after it filters out my insanity.

Possibly. And possibly CEV<Mortimer Q. Snodgrass> is a universe tiled with stabbing victims! There seems to be some irresistible temptation to assume that extrapolating the volition of individuals will lead to convergence. This is a useful social stance to have, and it is a mostly harmless belief in practical terms for nearly everyone. Yet for anyone who is considering actual outcomes of agents executing coherent extrapolated volitions, it is dangerous.

People express different preferences, but it is not obvious that their CEV-ified preferences would be so different.

We are considering individuals of entirely different upbringing and culture, from (quite possibly) a different genetic pool, with clearly different drives and desires and who by their very selection have an entirely different instinctive relationship with power and control. Sure, there are going to be similarities; relative to mindspace in general extrapolated humans will be comparatively similar. We can expect most models of such extrapolated humans to each have a node for sexiness even if the details of that node vary rather significantly. Yet assuming similarities too far beyond that requires altogether too much mind projection.

comment by Stuart_Armstrong · 2011-12-04T10:41:21.612Z · LW(p) · GW(p)

If my CEV and Hitler's CEV end up the same, then the differences between me and Hitler (such as whether we should kill Jews) are not relevant to the CEV output, which makes me very worried about its content.

comment by [deleted] · 2011-12-03T18:16:16.612Z · LW(p) · GW(p)

This is way, way, off. CEV isn't a magic tool that makes people have preferences that we consider 'sane'. People really do have drastically different preferences. Value is fragile.

I wholeheartedly agree. It boggles my mind that people think they can predict what humanity's CEV would want, let alone Hitler's.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-03T18:45:00.633Z · LW(p) · GW(p)

What distinguishes Hitler from other people in the arguments about the goodness of CEV's output?

Something must be known to decide that CEV is better than random noise, and the relevant distinctions between different people are the distinctions you can use to come to different conclusions about the quality of CEV's output. What you don't know isn't useful for discerning the right answer; only what you do know can be used, even if almost nothing is known.

comment by DanielLC · 2011-12-05T04:18:42.028Z · LW(p) · GW(p)

The problem is getting an Oracle to answer useful questions.

Paperclip Manufacturer: "How do I make paperclips?"

Oracle: Shows him designs for a paperclip maximizer

Paperclip Manufacturer: "How do I make paperclips, in a way I'd actually be willing to do?"

Oracle: Shows him designs for an innocuous-looking paperclip maximizer

Once you get it to answer your question without designing an X-maximiser, you've pretty much solved FAI.

comment by XiXiDu · 2011-12-03T13:47:36.011Z · LW(p) · GW(p)

Besides, if you could just ask an Oracle AI how to make it friendly, what's the difference from an AI that's built to answer and implement that question? Given that such an AI is supposedly perfectly rational, wouldn't it be careful to answer the question correctly even if it was defined poorly? Wouldn't it try to answer the question carefully so as not to diminish or obstruct the answer? If the answer is no, then how would an Oracle AI be different in the respect of coming up with an adequate answer to a poorly formed and therefore vague question?

In other words, if you expect an Oracle AI to guess what you mean by friendliness and give a correct answer, why wouldn't that work with an unbounded AI as well?

An AI just doesn't care what you want. And if it cared what you want then it wouldn't know what exactly you want. And if it cared what you want and cared to figure out what exactly you want then it would already be friendly.

The problem is that an AI doesn't care and doesn't care to care. Why would that be different with an Oracle AI? If you could just ask it to solve the friendly AI problem then it is only a small step from there to ask it to actually implement it by making itself friendly.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-12-04T10:45:05.163Z · LW(p) · GW(p)

It may not be possible to build a FAI at all - or we may end up with a limited oracle that can answer only easier questions, or only fully specified ones.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-04T12:11:58.029Z · LW(p) · GW(p)

I know and I didn't downvote your post either. I think it is good to stimulate more discussion about alternatives (or preliminary solutions) to friendly AI in case it turns out to be unsolvable in time.

...or we may end up with a limited oracle that can answer only easier questions, or only fully specified ones.

The problem is that you appear to be saying that it would somehow be "safe". If you are talking about expert systems then it would presumably not be a direct risk, but (if it is advanced enough to make real progress that humans alone can't) a huge stepping stone towards fully general intelligence. That means that if you target Oracle AI instead of friendly AI you will just increase the probability of uFAI.

Oracle AI has to be a last resort when the shit hits the fan.

(ETA: If you mean we should also work on solutions to keep a possible Oracle AI inside a box (a light version of friendly AI), then I agree. But one should first try to figure out how likely friendly AI is to be solved before allocating resources to Oracle AI.)

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-12-04T16:15:45.423Z · LW(p) · GW(p)

Oracle AI has to be a last resort when the shit hits the fan.

If we had infinite time, I'd agree with you. But I'm feeling that we have little chance of solving FAI before the shit indeed does hit the fan and us. The route safe Oracle -> Oracle-assisted FAI design seems more plausible to me. Especially as we are so much better at correcting errors than at preventing them, a prediction Oracle (if safe) would play to our strengths.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-04T16:35:23.165Z · LW(p) · GW(p)

But I'm feeling that we have little chance of solving FAI before the shit indeed does hit the fan and us.

If I assume a high probability of risks from AI and a short planning horizon then I agree. But it is impossible to say. I take the same stance as Holden Karnofsky from GiveWell regarding the value of FAI research at this point:

I think that if you're aiming to develop knowledge that won't be useful until very very far in the future, you're probably wasting your time, if for no other reason than this: by the time your knowledge is relevant, someone will probably have developed a tool (such as a narrow AI) so much more efficient in generating this knowledge that it renders your work moot.

I think the same applies for fail-safe mechanisms and Oracle AI, although to a lesser extent.

The route safe Oracle -> Oracle-assisted FAI design...

What is your agenda for developing such a safe Oracle? Are you going to do AGI research first and along the way try to come up with solutions for making it safe? I think that would be a promising approach. But if you are trying to come up with ways to ensure the safety of a hypothetical Oracle whose nature is a mystery to you, then the argument quoted above applies again.

comment by roystgnr · 2011-12-03T16:03:29.417Z · LW(p) · GW(p)

"Just answer my questions accurately! How do I most greatly reduce the number of human deaths in the future?"

"Insert the following gene into your DNA: GACTGAGTACTTGCTGCTGGTACGGATGCTA..."

So, do you do it? Do you trust everyone else not to do it? Can you guess what will happen if you're wrong?

You imagine an Oracle AI as safe because it won't act on the world, but anyone building an Oracle AI will do so with the express purpose of affecting the world! Just sticking a super-unintelligent component into that action loop is unlikely to make it any safer.

Even if nobody inadvertently asks the Oracle any trick questions, there's a world of pitfalls buried in the superficially simple word "accurately".

Replies from: Lapsed_Lurker, Stuart_Armstrong, Caspian
comment by Lapsed_Lurker · 2011-12-03T19:00:37.308Z · LW(p) · GW(p)

Any method that prevents any more children being created and quickly kills off all humans will satisfy that request.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2011-12-03T21:05:16.219Z · LW(p) · GW(p)

You are deliberately casting him in a bad light!

If I want to reduce the number of human deaths in the future-from-now, I just need to stop people from creating new people, period. Destruction of the living population is after-the-answer anyway, and so does not improve anything. They will die sooner or later anyway (heat death/big crunch/accumulated bad luck); maybe applying exponential discounting makes us want to put the deaths off.

Replies from: Lapsed_Lurker
comment by Lapsed_Lurker · 2011-12-03T21:15:21.714Z · LW(p) · GW(p)

Fair enough, the AI could modify every human's mind so none of them wish to replicate, but it would be easier to terminate the lot of them and eliminate the risk entirely.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2011-12-03T21:26:12.766Z · LW(p) · GW(p)

Easier - maybe. The best way is to non-destructively change living beings in such a way that they become reproductively incompatible with Homo sapiens. No deaths this time, and we can claim that this intelligent species has no humans among it. The stupid creature at the terminal may even implement it, unlike all those bloodbath solutions.

Replies from: Lapsed_Lurker
comment by Lapsed_Lurker · 2011-12-03T22:43:11.100Z · LW(p) · GW(p)

I declare your new species name is 'Ugly Bags of Mostly-Water'. There you go, no more human deaths. I'm sure humanity would like that better than genocide, but the UBMW will then ask the equivalent question.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2011-12-04T09:46:07.025Z · LW(p) · GW(p)

Hm, sterilisation of humans and declaring them (because of reproductive incompatibility) a new species. The UBMWs will get the answer that nothing can change the number of deaths.

comment by Stuart_Armstrong · 2011-12-04T10:49:47.589Z · LW(p) · GW(p)

Yep, "accurately" (or more precisely, "informative and mostly accurate") is a challenge. We look into it a bit in our paper: http://www.aleph.se/papers/oracleAI.pdf

comment by Caspian · 2011-12-04T04:02:41.900Z · LW(p) · GW(p)

I don't just do it, I ask followup questions, like what are the effects in more detail. If I am unfortunate, I ask something like "how could I do that", and get an answer like "e-mail the sequence to a university lab, along with this strangely compelling argument" and I read the strangely compelling argument which is included as part of the answer.

So if a goal-directed AI can hack your mind, it is pretty easy to accidentally ask the oracle AI a question where the answer will do the same thing. If you can avoid that, you need to ask lots of questions before implementing its solution so you get a good idea of what you are doing.

comment by timtyler · 2011-12-03T20:03:35.224Z · LW(p) · GW(p)

I think a realistic expectation is for our ability to perform inductive inference to develop faster (in a sense) than our ability to do the other parts of machine intelligence (i.e. tree pruning and evaluation). In which case, all realistic routes to machine intelligence would get there through an oracle-like stage. Inductive inference is cross-domain and will be used everywhere - fuelling its development.

comment by prase · 2011-12-03T16:08:25.591Z · LW(p) · GW(p)

How to use a parable to covertly revive long-refuted arguments...

If I understand what "safe" means to you, you are basically saying that having such a super-intelligent Oracle wouldn't help Hitler achieve his goals.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2011-12-03T21:01:34.874Z · LW(p) · GW(p)

Nope. Building FAI vs building OAI means that in the first case everyone wants to pull the actual AI being built in a different direction, while in the second case everyone simply wants a copy. This means that in the second case actual safety is something all sides can collaborate on, even if indirectly.

Oracle AI technology can end up in multiple hands at once, with any 3/4 of holders able to restrain a coalition of any 1/4. This may help stabilize the system and create a set of intelligences that value cooperation. In any case, this probably buys more time.

Replies from: prase
comment by prase · 2011-12-03T21:24:10.144Z · LW(p) · GW(p)

The villain asks the Oracle: "How do I build a Wunderwaffe (a virus that kills humanity, a UFAI) for myself?" The oracle returns the plans for building such a thing, since it only wishes to answer questions correctly. How does the rest of humanity prevent the doom once the information is released?

Well, if the questions are somehow censored before given to the AI, we perhaps get some additional safety. Until some villain discovers how to formulate the question to pass it through the censors undetected. Or discovers destructive potential in an answer to question asked by somebody else.

Anyway, the original post effectively says that Oracles are safe because all people agree on what they should do: answer the questions. This hinges on the idea of robots endangering us only via direct power and disregards the gravest danger of super-human intelligence: revealing dangerous information which can be used to make things whose consequences we are unable to predict.

Replies from: Stuart_Armstrong, vi21maobk9vp
comment by Stuart_Armstrong · 2011-12-04T10:21:35.977Z · LW(p) · GW(p)

But the oracle will be able to predict these consequences, and we'll probably get into the habit of checking these.

Replies from: prase
comment by prase · 2011-12-04T18:04:06.013Z · LW(p) · GW(p)

The problem is that the question "what would be the consequences?" is too general to be answered exhaustively. We should at least have an idea of the general characteristics of the risk in order to ask more specifically; the Oracle doesn't know which consequences are important for us unless it already comprehends human values and is thus already "friendly".

comment by vi21maobk9vp · 2011-12-03T21:37:09.652Z · LW(p) · GW(p)

Well, after a small publicity campaign, villains will start to ask Oracles whether there *is* any world to rule after they take over the world. No really, the 20th century teaches us that MAD is something that can reliably calm down people with power.

A virus that kills 100% of humanity is not easy to create when there is more information-processing power available to counter it than the virus designer has to build it. 75% may be easy enough at some stage, but that is not an existential risk. On the plus side, we may be able to use the OAIs on the good side to fight multiply-resistant bug strains in case they become pathogenic.

Replies from: Prismattic
comment by Prismattic · 2011-12-03T22:42:30.883Z · LW(p) · GW(p)

No really, the 20th century teaches us that MAD is something that can reliably calm down people with power.

One should be reluctant to generalize from a very small dataset, particularly when the stakes are this high.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2011-12-04T09:44:03.953Z · LW(p) · GW(p)

I agree that we have too few well-documented cases. But there are also some reasons behind MAD being effective. It doesn't look like MAD is a fluke. It is not bulletproof evidence, but it is some evidence.

Also, it is complementary to the second part: MAD via OAI also means a high chance of partially parrying the strike.