Disjunctive AI scenarios: Individual or collective takeoff?

post by Kaj_Sotala · 2017-01-11T15:43:31.578Z · LW · GW · Legacy · 4 comments

This is a link post for http://kajsotala.fi/2017/01/disjunctive-ai-scenarios-individual-or-collective-takeoff/

4 comments

Comments sorted by top scores.

comment by siIver · 2017-01-11T17:18:32.095Z · LW(p) · GW(p)

I don't find this convincing.

“Human intelligence” is often compared to “chimpanzee intelligence” in a manner that presents the former as being so much more awesome than, and different from, the latter. Yet this is not the case. If we look at individuals in isolation, a human is hardly that much more capable than a chimpanzee.

I think the same argument has been made by Hanson, and it doesn't seem to be true. Humans seem significantly superior, based on the fact that they are capable of learning language; as far as I know, there is no recorded instance of a chimpanzee doing that. The quote accurately points out that there are lots of things which an individual human or a tribe can't do any more than a chimpanzee can, but it ignores the fact that there are also things which a human can in fact do and a chimpanzee can't. Moreover, even if it were true that a human brain isn't that much more awesome than a chimpanzee's, that wouldn't imply that an AI can't be much more awesome than a human brain.

The remainder of the article argues that human capability is really based on a lot of implicit skills that aren't written down anywhere. I don't think this argument holds. If an AI is capable of reading much more quickly than humans, then it should also be capable of watching video footage much more quickly than humans (even if not by the same factor), and if it has access to the Internet, then I don't see why it shouldn't be able to learn how to turn the right knobs and handles on an oil rig, how to read the faces of humans – or literally anything else.

Am I missing something here?

Replies from: scarcegreengrass, moridinamael
comment by scarcegreengrass · 2017-01-13T16:57:57.477Z · LW(p) · GW(p)

It occurred to me too that language is a strong human advantage.

comment by moridinamael · 2017-01-11T20:33:18.301Z · LW(p) · GW(p)

The "turning knobs on an oil rig" analogy is particularly unconvincing. Even a smart human can read the engineering schematics and infer what the knobs do without needing to be shown.

Replies from: whpearson
comment by whpearson · 2017-01-12T20:02:48.660Z · LW(p) · GW(p)

I can potentially see an argument about some mechanism that is more likely to have been jury-rigged off-spec in the field, or one that is currently partially malfunctioning.

The best argument around implicit knowledge would involve things like "pure math research". While it is easy enough to get the axioms of maths, it is harder to see how people could learn to search the space of proofs from videos etc. My best model for how this skill is transferred is that people learn by attempting to do maths and then receiving feedback on their methodology and on what their teachers think needs to change. So the teacher needs to be able to model the student somewhat, so that they can give useful feedback and correct errors. If this is necessarily the case, then computers will need a lot of personal human input to get good at abstract reasoning.

I don't think this is necessarily the strongest argument, though.