What's special about a fantastic outcome? Suggestions wanted.

post by Stuart_Armstrong · 2014-11-11T11:04:30.407Z · LW · GW · Legacy · 19 comments


I've been returning to my "reduced impact AI" approach, and am currently working on some ideas.

What I need is some ideas on features that might distinguish between an excellent FAI outcome and a disaster. The more abstract and general the ideas, the better. Anyone got any suggestions? Don't worry about quality at this point; originality is prized more!

I'm looking for something generic that is easy to measure. At a crude level, if the only options were "paperclipper" vs FAI, then we could distinguish those worlds by counting steel content.

So basically some more or less objective measure that has a higher proportion of good outcomes than the baseline.
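As a minimal sketch of what such a measure could mean formally (purely illustrative: it assumes candidate worlds can be summarised as a feature dictionary plus a good/bad label, and `steel_fraction` stands in for any crude proxy like the steel-counting example above):

```python
def measure_passes(features, threshold=0.01):
    """Crude proxy: flag worlds whose steel fraction stays below some threshold."""
    return features["steel_fraction"] < threshold

def proportion_good(worlds, predicate=lambda features: True):
    """Fraction of good outcomes among the worlds satisfying the predicate."""
    selected = [is_good for features, is_good in worlds if predicate(features)]
    return sum(selected) / len(selected) if selected else 0.0

def is_useful_measure(worlds):
    """The measure earns its keep if conditioning on it raises the share of good outcomes."""
    return proportion_good(worlds, measure_passes) > proportion_good(worlds)
```

The sketch only pins down the "higher proportion than baseline" condition; the hard part is finding a `measure_passes` that survives optimisation pressure, which is what the comments below poke at.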

19 comments


comment by Shmi (shminux) · 2014-11-11T17:01:51.359Z · LW(p) · GW(p)

Based on a certain SSC post, a matrix-like scenario where a non-friendly AI decides to spare a few cubic meters of computronium to faithfully simulate humanity, rather than to simply get rid of it. For sentimental reasons and because human-boxing is ironic.

comment by Armok_GoB · 2014-12-12T10:31:53.565Z · LW(p) · GW(p)

It has near maximal computational capacity, but that capacity isn't being "used" for anything in particular that is easy to determine.

This is actually a very powerful criterion, in terms of the number of false positives and negatives. Sadly, the false positives it DOES have still far outweigh the genuine positives, and include all the WORST outcomes (i.e., virtual hells) as well.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-12-15T13:07:30.274Z · LW(p) · GW(p)

Interesting. Is this kinda like a minimum complexity of outcome requirement?

Replies from: Armok_GoB
comment by Armok_GoB · 2014-12-16T19:57:14.394Z · LW(p) · GW(p)

Didn't think of it like that, but sort of I guess.
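One crude way to cash out a "minimum complexity of outcome" requirement, sketched here under the (strong, illustrative) assumptions that an outcome can be given as a byte-string description and that compressed length is an acceptable stand-in for complexity:

```python
import zlib

def complexity_proxy(outcome_description: bytes) -> int:
    """Stand-in for outcome complexity: length of the zlib-compressed description."""
    return len(zlib.compress(outcome_description, 9))

def passes_minimum_complexity(outcome_description: bytes, threshold: int) -> bool:
    """Flag outcomes whose description is 'rich enough' under this crude proxy.

    As noted above, this keeps false positives, including the worst outcomes
    (virtual hells are also complex), so it could only be one filter among many.
    """
    return complexity_proxy(outcome_description) >= threshold
```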

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-11-12T06:16:00.121Z · LW(p) · GW(p)

I presume the answer you're looking for isn't "fun theory", but I can't tell from OP whether you're looking for distinguishers from our perspective or from an AI's perspective.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-11-13T18:39:10.797Z · LW(p) · GW(p)

I'm looking for something generic that is easy to measure. At a crude level, if the only options were "paperclipper" vs FAI, then we could distinguish those worlds by counting steel content.

So basically some more or less objective measure that has a higher proportion of good outcomes than the baseline.

Replies from: Eliezer_Yudkowsky, Leonhart, Gurkenglas
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-11-15T04:21:10.636Z · LW(p) · GW(p)

Merely higher proportion, and we're not worried about the criterion being reverse-engineered? Give a memory expert a large prime number to memorize and talk about outcomes where it's possible to factor a large composite number that has that prime as a factor. Happy outcomes will have that memory expert still be around in some form.

EDIT: No, I take that back because quantum. Some repaired version of the general idea might still work, though.
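A rough sketch of how the prime-commitment trick could be set up (illustrative only; it uses sympy for prime generation, and the quantum objection in the edit still applies):

```python
from sympy import randprime  # external dependency, used only for prime generation

def setup(bits=512):
    """Give the memory expert a large prime p; publish only N = p * q."""
    p = randprime(2 ** (bits - 1), 2 ** bits)  # memorised by the expert, never written down
    q = randprime(2 ** (bits - 1), 2 ** bits)  # second factor, generated and discarded
    return p, p * q                            # (secret prime, public composite)

def factoring_evidence(claimed_factor, public_n):
    """Later: producing a nontrivial factor of N is evidence the memorised prime survived."""
    return 1 < claimed_factor < public_n and public_n % claimed_factor == 0
```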

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-11-16T08:21:45.997Z · LW(p) · GW(p)

we're not worried about the criterion being reverse-engineered?

I'm trying to think about ways that might potentially prevent reverse engineering...

comment by Leonhart · 2014-11-13T21:52:36.130Z · LW(p) · GW(p)

Smiles, laughter, hugging, the humming or whistling of melodies in a major key, skipping, high-fiving and/or brofisting, loud utterance of "Huzzah" or "Best thing EVER!!!", airborne nanoparticles of cake, streamers, balloons, accordion music? On the assumption that the AI was not explicitly asked to produce these things, of course.

comment by Gurkenglas · 2014-12-12T18:40:26.879Z · LW(p) · GW(p)

If you're planning to simulate an AI in a universe in a box and examine whether the produced universe is good via some process that doesn't allow the AI to talk to you, the AI is just gonna figure out it's being simulated and pretend to be an FAI so that you'll let it loose on the real world. (Note that an AI that pretends to be an FAI maximizes not friendliness but apparent friendliness, so this is no pathway to FAI.)

To a first approximation, having a box that contains an AI anywhere in it output even a few bits of info tends to select the bits that maximize the AI's IRL utility.

(If you have mathematical proofs that no paperclipper can figure out that it's in your box, it's just gonna maximize a mix of its apparent-friendliness score and the number of paperclips, whether it's let loose in the box or in the real world, which doesn't cost it much compared to maximizing either paperclips or apparent friendliness alone, because of the tails thing.)

comment by James_Miller · 2014-11-12T00:32:23.304Z · LW(p) · GW(p)

Have the FAI create James+, who is smarter than me but shares my values. After spending a long time living with James+ in a simulation, I agree that he is an improved me. Let James++ be similarly defined with respect to James+. Continue this process until the FAI isn't capable of making improvements. Next, let this ultimate-James spend a lot of time in the world created by the FAI and have him evaluate it compared with possible alternatives. Finally, do the same with everyone else alive at the creation of the FAI, and if they mostly think that the world created by the FAI is about the best it could do given whatever constraints it faces, then the FAI has achieved an excellent outcome.
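A very loose sketch of the procedure being described, with `improve`, `endorses`, and `evaluate_world` as purely hypothetical placeholders for capabilities the FAI is assumed to have, not an actual implementation:

```python
def idealise(person, improve, endorses, max_rounds=1000):
    """Iterate person -> person+ -> person++ ... until no endorsed improvement remains."""
    current = person
    for _ in range(max_rounds):
        candidate = improve(current)
        if candidate is None or not endorses(current, candidate):
            break  # the previous version no longer agrees this is an improved them
        current = candidate
    return current

def outcome_is_excellent(population, world, alternatives,
                         improve, endorses, evaluate_world):
    """Excellent if most idealised people judge this world about the best achievable."""
    votes = 0
    for person in population:
        ultimate = idealise(person, improve, endorses)
        if all(evaluate_world(ultimate, world) >= evaluate_world(ultimate, alt)
               for alt in alternatives):
            votes += 1
    return votes > len(population) / 2
```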

comment by LizzardWizzard · 2014-11-11T13:02:50.948Z · LW(p) · GW(p)

Unfortunately our brains lack the capacity to think about superior intelligence. As I understood it, you want to describe particular examples of what lies between scenario 0 (human extinction) and scenario 1 (mutual cooperation and a new, better level of everything).

First, there are scenarios where the human race stands on the edge of extinction but somehow manages to fight back and survive; call that the Skynet scenario. Analogously, you can think of a scenario where the emergence of FAI doesn't do any great harm, but also doesn't provide many new insights and never really gets far beyond human-level intelligence.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-11T15:47:02.980Z · LW(p) · GW(p)

First, there are scenarios where the human race stands on the edge of extinction but somehow manages to fight back and survive; call that the Skynet scenario.

Skynet is not a realistic scenario.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-11-11T20:17:56.334Z · LW(p) · GW(p)

After someone goes out and films Clippy: The Movie, will we also be prevented from using Clippy as shorthand for a specific hypothetical AI scenario?

Replies from: ChristianKl
comment by ChristianKl · 2014-11-11T21:30:27.680Z · LW(p) · GW(p)

If you don't mean Skynet as in the Skynet from Terminator, what do you mean by Skynet?

Replies from: LizzardWizzard
comment by LizzardWizzard · 2014-11-12T11:01:56.521Z · LW(p) · GW(p)

Yeah, I was talking about that Terminator scenario, in the sense that the AI gets ultimate control and uses its power against humans but is defeated; it's not obligatory to include the time-travelling cyborgs here.

How do you measure, besides your gut feelings, how realistic these kinds of scenarios are? There is no way to assign probabilities accurately; all we can and should do is imagine as many consequences as possible.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-12T11:16:32.177Z · LW(p) · GW(p)

Ultimate control and getting defeated don't mesh well. In Hollywood there's a chance that an AGI that gets ultimate control is afterwards defeated. In the real world, not so much.

How do you measure, besides your gut feelings, how realistic these kinds of scenarios are?

Analysis in multiple different ways. Keeping up with the discourse.

There is no way to assign probabilities accurately

That's debatable. You can always use Bayesian reasoning, but that's not the main issue of this debate.

Replies from: LizzardWizzard
comment by LizzardWizzard · 2014-11-12T11:58:05.434Z · LW(p) · GW(p)

Oh thanks, now I see it: these almost-there cases look somewhat Hollywoodish, like the villain who must obligatorily deliver a lingering monologue before actually killing his victim, and, thank God, the hero appears at the last moment.

Okay, a Skynet that doesn't instantly get rid of humanity is improbable, if it is really superintelligent and if it has this goal.

We can easily imagine many kinds of possible catastrophes, but we are not equally good at producing heavenlike utopian visions; this is evidence only of our lack of imagination.

comment by IlyaShpitser · 2014-11-11T12:50:48.518Z · LW(p) · GW(p)

I think if there were an FAI, and it successfully built utopia, it would have to solve the problem of convincing us that it did (using something like the Arthur/Merlin protocol?).


edit: I realize that there is nearly limitless room for trickery here; I am just saying that if I had to define FAI success, it would have to be in terms of some sort of convincing game with an asymmetric computing-power setup. How else would we even do it? Importantly, Merlin is allowed to lie.
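A bare-bones schematic of that asymmetric convincing game, not an actual Arthur/Merlin protocol for verifying utopia; `sample_challenge`, `merlin_answer`, and `check_answer` are hypothetical placeholders, and soundness would hinge entirely on making consistent lying hard for Merlin:

```python
import secrets

def arthur_accepts(merlin_answer, sample_challenge, check_answer, rounds=64):
    """A bounded verifier (Arthur) issues random challenges; a powerful prover (Merlin) answers.

    Merlin is allowed to lie, so Arthur only accepts if every independent,
    randomly chosen challenge is answered in a way his cheap local check passes.
    """
    for _ in range(rounds):
        challenge = sample_challenge(secrets.randbits(128))  # Arthur's public coin toss
        response = merlin_answer(challenge)                  # possibly a lie
        if not check_answer(challenge, response):
            return False
    return True
```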