The greater a technology’s complexity, the more slowly it improves?

post by XiXiDu · 2011-05-18T11:02:31.733Z · LW · GW · Legacy · 7 comments

A new study by researchers at MIT and other institutions shows that it may be possible to predict which technologies are likeliest to advance rapidly, and which may therefore be worth more investment of research effort and resources.

The researchers found that the greater a technology’s complexity, the more slowly it changes and improves over time. They devised a way of mathematically modeling complexity, breaking a system down into its individual components and then mapping all the interconnections between these components.

Link: nextbigfuture.com/2011/05/mit-proves-that-simpler-systems-can.html
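For intuition, here is a minimal toy simulation of that idea, assuming a much cruder setup than the actual study: every component has a cost, a proposed change redraws the cost of one component together with everything it is linked to, and the change is kept only if that whole cluster gets cheaper. The simulate function, the random link structure and the acceptance rule are my own illustrative choices, not the authors' model.

import random

def simulate(n_components, links_per_component, steps, seed=0):
    # Toy trial-and-error improvement of a design with interdependent parts.
    rng = random.Random(seed)
    cost = [rng.random() for _ in range(n_components)]
    # Each component is linked to a fixed number of other components.
    links = {
        i: rng.sample([j for j in range(n_components) if j != i],
                      links_per_component)
        for i in range(n_components)
    }
    for _ in range(steps):
        i = rng.randrange(n_components)
        cluster = [i] + links[i]
        # Propose new costs for the whole cluster; keep them only if the
        # cluster as a whole gets cheaper. Denser linkage means a larger
        # cluster must improve at once, so accepted improvements become rarer.
        proposal = {j: rng.random() for j in cluster}
        if sum(proposal.values()) < sum(cost[j] for j in cluster):
            for j, new_cost in proposal.items():
                cost[j] = new_cost
    return sum(cost)

# Sparser designs should end up cheaper after the same number of attempts.
for k in (1, 5, 15):
    print(k, "links per component ->", round(simulate(20, k, 20000), 3))

With these (arbitrary) numbers the sparsely linked design should end up markedly cheaper than the densely linked one, which mirrors the qualitative finding: more interconnections, slower improvement.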

Might this also be the case for intelligence? To paraphrase the question: can intelligence be effectively applied to itself?

This reminds me of a post by Robin Hanson:

Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city.  Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations.

Link: Is The City-ularity Near? 

Of course, artificial general intelligence might differ in nature from the complexity of cities. But do we have any evidence that hints at such a difference?

Another argument made for an AI project causing a big jump is that intelligence might be the sort of thing for which there is a single principle. Until you discover it you have nothing, and afterwards you can build the smartest thing ever in an afternoon and can just extend it indefinitely. Why would intelligence have such a principle? I haven’t heard any good reason. That we can imagine a simple, all powerful principle of controlling everything in the world isn’t evidence for it existing.

Link: How far can AI jump?

(via Hard Takeoff Sources)

7 comments


comment by timtyler · 2011-05-18T12:21:23.771Z · LW(p) · GW(p)

Another argument made for an AI project causing a big jump is that intelligence might be the sort of thing for which there is a single principle. Until you discover it you have nothing, and afterwards you can build the smartest thing ever in an afternoon and can just extend it indefinitely. Why would intelligence have such a principle? I haven’t heard any good reason.

There's a reason why not: Is there an Elegant Universal Theory of Prediction?

comment by falenas108 · 2011-05-18T12:22:20.106Z · LW(p) · GW(p)

I don't think that applies completely. In those cases, it's human brains at the same level trying more complicated things; with an AI, the brain increases as the complexity increases.

Replies from: XiXiDu
comment by XiXiDu · 2011-05-18T13:36:32.935Z · LW(p) · GW(p)

In those cases, it's human brains at the same level trying more complicated things; with an AI, the brain increases as the complexity increases.

Increasing the "brain" of the AI is in itself a complicated problem. Self-improvement means applying intelligence to itself, spawning the level above its own.
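A toy way to put this, purely as an illustration and not anything from the study: write the system's capability as I(t) and suppose self-improvement obeys

dI/dt = c * I^p

for constants c > 0 and p. For p <= 1 growth is at most exponential; only for p > 1 does the solution blow up in finite time, which is roughly the picture behind "FOOM". The complexity result above is, loosely, a reason to suspect that the drag of inter-dependencies pushes the effective p down as the system gets bigger.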

comment by Pavitra · 2011-05-18T22:42:09.870Z · LW(p) · GW(p)

To what extent can this slowing due to complexity accumulation be reduced by designing for future maintainability (modularity, encapsulation, etc.)?

comment by AlphaOmega · 2011-05-18T18:04:24.110Z · LW(p) · GW(p)

You raise a good point here, which relates to my question: Is Good's "intelligence explosion" a mathematically well-defined idea, or is it just a vague hypothesis that sounds plausible? When we are talking about something as poorly defined as intelligence, it seems a bit ridiculous to jump to these "lather, rinse, repeat, FOOM, the universe will soon end" conclusions as many people seem to like to do. Is there a mathematical description of this recursive process which takes into account its own complexity, or are these just very vague and overly reductionist claims by people who perhaps suffer from an excessive attachment to their own abstract models and a lack of exposure to the (so-called) real world?

Replies from: XiXiDu
comment by XiXiDu · 2011-05-18T18:37:00.694Z · LW(p) · GW(p)

To be clear, I do not doubt that superhuman artificial general intelligence is practically possible. I do not doubt that humans will be able to create it. What I am questioning is the FOOM part.

...people who perhaps suffer from an excessive attachment to their own abstract models and a lack of exposure to the (so-called) real world?

Yeah, take for example this article by Eliezer. As far as I understand it, I agree with everything except the last paragraph:

It might perhaps be more limited than this in mere practice, if it's just running on a laptop computer or something.

I hope he is joking.

comment by timtyler · 2011-05-18T12:17:06.688Z · LW(p) · GW(p)

Here's a link to the MIT press release about "Which technologies get better faster?".