The Kolmogorov complexity of a superintelligence

post by Thomas · 2011-06-26T12:11:38.392Z · LW · GW · Legacy · 30 comments

How complex must the simplest self-improving "seed" of a superintelligence be?

Give your (wild) estimate, or a pure guess, in terms of bits.

Do you reckon 1,000 bits is enough? Is at least 1,000,000 required? More than 10^20 bits?

I wonder what your approximate numbers are.

Thank you!

30 comments

Comments sorted by top scores.

comment by cousin_it · 2011-06-26T14:33:24.274Z · LW(p) · GW(p)

The K-complexity of a superintelligence can't be higher than the K-complexity of a program that searches for a superintelligence, enumerating all programs in order and asking them hard questions to check their abilities. I don't see any reason why the latter should be over a megabyte, probably much less.
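
For concreteness, here is a minimal Python sketch of the search program being described; the enumeration is real, but passes_hard_tests is a hypothetical placeholder for the verification battery, not an actual implementation.

```python
from itertools import count

def enumerate_programs():
    """Yield every bitstring in order of increasing length
    (candidate programs for some fixed universal machine)."""
    for n in count(1):
        for i in range(2 ** n):
            yield format(i, f"0{n}b")

def passes_hard_tests(program_bits):
    """Hypothetical placeholder: run the candidate with some resource budget
    on a battery of hard questions whose answers we can verify, and report
    whether it answers all of them correctly."""
    raise NotImplementedError("stand-in for the verification procedure")

def find_superintelligence():
    """Return the first (hence nearly shortest) candidate that passes.
    The length of this search program upper-bounds the K-complexity
    of whatever it returns."""
    for candidate in enumerate_programs():
        if passes_hard_tests(candidate):
            return candidate
```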

Replies from: MrMind, Zetetic, Miller
comment by MrMind · 2011-06-30T14:58:14.676Z · LW(p) · GW(p)

It must be, because this verification procedure also produces infinitely many false positives - for example, all the hash tables that happen to contain the correct answers by chance. That is, the procedure doesn't produce more information than simply asking "Are you a superintelligence?".
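
A toy illustration of this worry, with a made-up test battery: against any fixed, finite set of questions, a bare lookup table is indistinguishable from a genuine reasoner.

```python
# Hypothetical fixed battery of "hard" questions and their verified answers.
FIXED_TESTS = {"What is 12345 * 6789?": "83810205", "What is the 10th prime?": "29"}

# A candidate that is nothing but a table containing those answers,
# whether by design or by chance.
lookup_table_candidate = dict(FIXED_TESTS)

def passes(candidate, tests=FIXED_TESTS):
    """The verification procedure: accept any candidate that answers every test."""
    return all(candidate.get(q) == a for q, a in tests.items())

print(passes(lookup_table_candidate))  # True - yet nothing intelligent is present
```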

comment by Zetetic · 2011-06-26T20:53:38.429Z · LW(p) · GW(p)

I don't see any reason why the latter should be over a megabyte, probably much less.

In contrast, it seems to me like the verification procedure for a friendly superintelligence could be quite long. Do you agree with this? It would definitely be longer than for just any superintelligence, but I haven't got a clue about how much; just that we could potentially be adding some pretty heavy constraints.

Replies from: cousin_it
comment by cousin_it · 2011-06-26T21:05:02.181Z · LW(p) · GW(p)

Yes, I agree. An upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains, but you could probably compress that a lot.

Replies from: wedrifid, ciphergoth, Zetetic
comment by wedrifid · 2011-06-26T21:17:51.854Z · LW(p) · GW(p)

Yes, I agree. An upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains, but you could probably compress that a lot.

False. The upper bound on the complexity of a Friendly superintelligence is [(total information content of all brains) + (minimum complexity of what it takes to identify a superintelligence with a defined goal of deriving its objective from a set of agents, according to some mechanism that represents what it would mean to be 'friendly' to them)].

i.e. Just having all the information content of all human brains is not enough. You can't avoid defining Friendliness in the search program.

Replies from: Wei_Dai, cousin_it, Vladimir_Nesov
comment by Wei Dai (Wei_Dai) · 2011-06-26T21:57:09.089Z · LW(p) · GW(p)

I agree with wedrifid here. We don't seem to have a valid argument showing that "an upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains". I would like to point out that if the K-complexity of friendly superintelligence is greater than that, then there is no way for us to build a friendly superintelligence except by luck (i.e., most Everett branches are doomed to fail to build a friendly superintelligence) or by somehow exploiting uncomputable physics.

comment by cousin_it · 2011-06-26T22:11:35.907Z · LW(p) · GW(p)

Technically you can cheat by using the information in human brains to create upload-based superintelligences along the lines of Stuart's proposal, make them do the research for you, etc., so it seems likely to me that the upper bound should hold... but I appreciate your point and agree that my comment was wrong as stated.

Replies from: wedrifid
comment by wedrifid · 2011-06-27T00:57:37.897Z · LW(p) · GW(p)

When talking about upper bounds we cannot afford to just cheat by saying humans can probably figure it out. That isn't an upper bound - it is an optimistic prediction about human potential. Moreover, we still need a definition of Friendliness built in so we can tell whether this thing that the human researchers come up with is Friendliness or some other thing with that name. (Even an extremely 'meta' reference to a definition is fine, but it still requires more bits to point to which part of the humans is able to define Friendliness.)

Upper bounds are hard. But yes, I know your understanding of the area is solid and your ancestor comment serves as the definitive answer to the question of the post. I disagree only with the statement as made.

comment by Vladimir_Nesov · 2011-06-26T21:56:04.236Z · LW(p) · GW(p)

False.

You are most likely not disagreeing with the intended meaning, so using words like "false" to motivate the clarification you were making is wrong.

Replies from: wedrifid
comment by wedrifid · 2011-06-27T02:43:02.588Z · LW(p) · GW(p)

You are most likely not disagreeing with the intended meaning, so using words like "false" to motivate the clarification you were making is wrong.

No Vladimir. My disagreement was with the statement and the statement was, indeed false. That doesn't mean I disagree with cousin_it's overall philosophy. It just means I am calling a single false claim false.

You are wrong.

comment by Paul Crowley (ciphergoth) · 2011-06-28T09:15:46.619Z · LW(p) · GW(p)

I'm confused - a Friendly AI doesn't start with information about the insides of brains; it only starts with enough information to correctly identify human beings in order to know whose preferences to infer its values from.

comment by Zetetic · 2011-06-26T21:12:12.336Z · LW(p) · GW(p)

but you could probably compress that a lot.

This definitely fits my intuition. It seems like CEV-style friendliness, for example, could at least be reduced to something on the order of the complexity of a single "averaged" human brain. This is all very vague, of course; I'm mostly just trying to feel out some intuitions on this.

Replies from: wedrifid
comment by wedrifid · 2011-06-26T21:27:53.441Z · LW(p) · GW(p)

It seems like CEV style friendliness, for example, could at least be reduced to something on the order of complexity of a single "averaged" human brain.

That doesn't sound like it passes the adequately-friendly-to-wedrifid test. I'd thermite such an AI rather than press the on button.

Replies from: Zetetic
comment by Zetetic · 2011-06-26T21:37:05.005Z · LW(p) · GW(p)

That seems difficult to determine without unpacking what I mean by "averaged"; how did you come to that conclusion? (I'm wondering if you have a clearer concept of what it would mean to "average" over brains; my intuition is nothing more than a vague feeling that I should study certain topics and ask certain questions.) I don't even know enough to accurately convey the intuition behind my use of "averaged"; I was just hoping it might elicit a response from someone that would give me some useful information and, in turn, help me understand CEV better.

Replies from: wedrifid
comment by wedrifid · 2011-06-26T21:45:50.678Z · LW(p) · GW(p)

Unfortunately nobody has a good answer to your question. At least none that would satisfy me. The aggregation problem is the hardest part of the problem and hasn't been answered adequately yet. Without such an answer CEV remains basically undefined. (This is something that troubles me.)

Replies from: Zetetic
comment by Zetetic · 2011-06-26T22:39:16.321Z · LW(p) · GW(p)

I see. Thanks; that is enlightening. I had gotten the impression that this was the case, but wasn't sure.

comment by Miller · 2011-06-26T18:01:46.465Z · LW(p) · GW(p)

Interesting. How does the program determine hard questions (and their answers) without qualifying as generating them itself? That is, the part about enumerating other programs seems superfluous.

[Edit added seconds later] Ok, I see: perhaps it could ask something like 'predict what's gonna happen 10 seconds from now', and then wait to see if the prediction is correct after the universe runs for that long. In that case, the parent program isn't computing the answer itself.

Replies from: cousin_it
comment by cousin_it · 2011-06-26T18:20:23.307Z · LW(p) · GW(p)

You don't need to be able to generate the answer to your hard problem yourself, only to check that the superintelligence's offered answer is correct. These two abilities are equivalent if computing resources are unlimited, but you could run the superintelligence for a limited time... This line of thought seems to lead into the jungle of complexity theory and you should probably take my comments with a grain of salt :-)
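
Factoring is the stock example of this asymmetry (the numbers below are illustrative and far too small to be hard; they only show the shape of the check): verifying a claimed answer is cheap even when producing it is not.

```python
# Suppose a candidate claims to have factored n.
n = 1009 * 1013            # in a real test, n would be a huge semiprime we could not factor ourselves
claimed_factors = (1009, 1013)

def verify_factorisation(n, factors):
    """Cheap check of an answer we might be unable to generate: multiply and compare."""
    p, q = factors
    return 1 < p < n and 1 < q < n and p * q == n

print(verify_factorisation(n, claimed_factors))  # True
```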

comment by gwern · 2011-06-26T16:55:01.339Z · LW(p) · GW(p)

Legg's 2006 "Is There an Elegant Universal Theory of Prediction?" may be relevant.

(The answer, BTW, is 'no'; it seems to be in the usual Gödelian limit vein of thought: "In this paper, it is shown that although highly powerful algorithms exist, they are necessarily highly complex.")

Replies from: timtyler
comment by timtyler · 2011-06-27T06:36:21.468Z · LW(p) · GW(p)

The paper does not seem very quantitative. It is not obvious from it whether a human needs a thousand bits, a million bits, a trillion bits - or whatever.

Replies from: gwern
comment by gwern · 2011-06-27T13:17:15.623Z · LW(p) · GW(p)

It would be quite impressive if it were able to...

My point was that Legg has shown, as I understand it, that any prediction algorithm powerful enough to predict most or all of the universe (as one would expect a fearsome AGI to be able to do) will be at least as complex as the universe it's predicting.

comment by gjm · 2011-06-27T14:55:27.947Z · LW(p) · GW(p)

It seems likely that the answer depends on how rapidly and how reliably you want your "seed" to be able to turn into an actual working superintelligence. For instance:

Something along the lines of AIXI might have a very short program indeed, but not do anything useful unless you gave it more computing power and time than the entire universe can offer. (It is AIUI an open question whether that's actually finite. Let's suppose it is.)

A lots-of-humans simulator might well produce a superintelligence, but its inhabitants might instead wipe themselves out in a (simulated) nuclear war, or just turn out not to be smart enough to make a working AI, or something.

comment by timtyler · 2011-06-26T12:22:03.933Z · LW(p) · GW(p)

Some entertain the hypothesis that the K-complexity of the entire universe may be under 1000 bits - though nobody knows if that is true or not.

A few bacteria will have created the explosion that leads to any future superintelligence. Much of the resulting complexity came not from them, but from their environment, though. Also, that took a while. It seems as though time limits would be needed if you are looking for a different kind of answer.

Replies from: wedrifid
comment by wedrifid · 2011-06-26T21:42:26.340Z · LW(p) · GW(p)

Some entertain the hypothesis that the K-complexity of the entire universe may be under 1000 bits - though nobody knows if that is true or not.

More can be said about a single apple than about all the apples in the world.

Handing me an entire universe and saying "here you go, there is a superintelligence in there somewhere, find it yourself" does not qualify as giving me a superintelligence (in the information sense). In fact I need the same amount of information to find a superintelligence in the "1,000 bit universe" as the minimum K-complexity of the superintelligence itself.

The K-complexity of the entire universe is basically irrelevant.

Replies from: timtyler, paulfchristiano
comment by timtyler · 2011-06-27T06:32:04.351Z · LW(p) · GW(p)

In fact I need the same amount of information to find a superintelligence in the "1,000 bit universe" as the minimum K-complexity of the superintelligence itself.

Well, perhaps minus 1,000 bits. Those 1,000 bits might be screening off a lot of universes with no superintelligences in them, so they could matter a great deal. If so, that's a smaller space to search - by a factor of 2^1,000.
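
Loosely, this is the chain inequality for Kolmogorov complexity (stated up to additive logarithmic terms):

$$K(\mathrm{SI}) \le K(\mathrm{universe}) + K(\mathrm{SI} \mid \mathrm{universe}) + O(\log)$$

so if $K(\mathrm{universe}) \approx 1000$, the information needed to locate the superintelligence, $K(\mathrm{SI} \mid \mathrm{universe})$, only has to be about $K(\mathrm{SI}) - 1000$ bits rather than the full $K(\mathrm{SI})$.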

comment by paulfchristiano · 2011-06-27T01:33:27.779Z · LW(p) · GW(p)

A description of a superintelligence might be quite easy to find in our universe; for example, if the entire universe were tiled with AIs for most of its lifetime, it may not require much information to point to one of them once you have described the universe. If the Kolmogorov complexity of the universe were really only 1000, this could easily beat cousin_it's suggestion.

Replies from: wedrifid
comment by wedrifid · 2011-06-27T02:45:10.344Z · LW(p) · GW(p)

This is an upper bound, not a lower bound. That roughly means that whenever you don't have proof to the contrary, you assume the worst.

Replies from: paulfchristiano
comment by paulfchristiano · 2011-06-27T03:43:04.780Z · LW(p) · GW(p)

What? Cousin_it gave an upper bound. timtyler and I are pointing out a plausible way the value could be significantly below that upper bound.

Replies from: wedrifid
comment by wedrifid · 2011-06-27T09:37:26.790Z · LW(p) · GW(p)

It is true that the grandparent does not fit this context. Mistake!

comment by Manfred · 2011-06-26T14:32:00.459Z · LW(p) · GW(p)

If what you really want is a program that can run on a real supercomputer and reach superhuman intelligence in less than a month, then you're asking about more than just its complexity. But I'd guess it could fit on a floppy disk.