Think Twice: A Response to Kevin Kelly on ‘Thinkism’

post by MichaelAnissimov · 2012-11-07T06:07:54.321Z · LW · GW · Legacy · 12 comments

I wrote a blog post responding to Kevin Kelly that I'm fairly happy about. It summarizes some of the reasons why I figure that superintelligence is likely to be a fairly big deal. If you read it, please post your comments here. 

12 comments

Comments sorted by top scores.

comment by John_Maxwell (John_Maxwell_IV) · 2012-11-08T00:54:04.034Z · LW(p) · GW(p)

I think one way that people get a little caught up when thinking about the possibility of superintelligence is via the typical mind fallacy. Someone says "a superintelligence could theoretically do [super hard thing]" and my intuitive response is: "Yeah, but how plausible is that? Why would it even bother? That seems like it would take a lot of effort, and it probably wouldn't even work." By default I anchor on my own mind's capabilities (and motivation, even), instead of trying to think in information-theoretic terms, or figuring out what does and doesn't violate the laws of physics.

comment by drethelin · 2012-11-07T07:19:21.135Z · LW(p) · GW(p)

This might be an artifact of what Kelly talks about, but I think the focus on immortality towards the beginning is too strong and not helpful. Speculating about that technology distracts from the more general idea of the power of superintelligence, and it's not actually necessary, or even a first step, for recursive AI to transform the world.

Other than that, I like the essay on the whole.

Also a useful enhancement that is not addressed: meta-research becomes hugely faster and more useful with massively increased speed and processing power, and doesn't require experimentation. A hyperintelligence can aggregate far more existing data than we can, and apply it more usefully.

Replies from: buybuydandavis, MichaelAnissimov
comment by buybuydandavis · 2012-11-07T21:56:31.143Z · LW(p) · GW(p)

A hyperintelligence can aggregate far more existing data than we can, and apply it more usefully.

People often seem to miss that difference. Machine intelligence isn't just us, but faster. The vanity of the Turing test struck me the other day: a machine is intelligent when it can pass for us. Are we supposed to be the be-all and end-all of intelligence?

A vastly greater working memory, attention span, etc., can make problems of search and integration much easier. At least at first, machine intelligence will be most effective when used in collaboration with people. We've already got a lot of human-style intelligence in people; I'd rather get new and complementary abilities.

comment by MichaelAnissimov · 2012-11-07T08:30:25.267Z · LW(p) · GW(p)

Thanks. I address immortality early on because it is a main point that Kelly addresses throughout his short piece. I appreciate your point about meta-research, but my intuition says that it might be even harder for many to grasp than the points in the post. Can you name concrete instances where meta-research led to breakthroughs?

comment by blogospheroid · 2012-11-08T15:06:17.481Z · LW(p) · GW(p)

One point I would like mentioned (maybe it is already being made as part of other arguments) is that the world is getting a whole lot more legible, legible in the sense of James C. Scott's work. There is a greater and greater tendency to create systems that increase the ability of any superintelligence to take over the world. This is not the strongest argument against Kevin Kelly's position, but it is one that can be made.

comment by timtyler · 2012-11-07T23:56:03.990Z · LW(p) · GW(p)

It's true that superintelligence is likely to be a big deal. It's interesting to see where intelligence works worst, though. As civilization accelerates, its slowest aspects will be the ones holding things back.

I think the LHC is our current best example. There's no "Moore's law" for particle accelerators. Another example involves understanding large complex systems, such as predicting the weather or stock market crashes. Of course intelligence helps with such things, just not as much as in some other areas.

comment by Dr_Manhattan · 2012-11-08T02:59:02.792Z · LW(p) · GW(p)

Kelly's argument seems silly, bordering on stupid. It's interesting to wonder what drove him to it.

comment by A113 · 2012-11-07T21:13:36.859Z · LW(p) · GW(p)

I agree with both you and Kelly most of the time, you more than him. I did think this part required a nitpick:

To me, at first impression, the notion that a ten million times speedup would have a negligible effect on scientific innovation or progress seems absurd. It appears obvious that it would have a world-transforming impact.

To me, it appears obvious that it would be capable of having a world-transforming impact. Just because it can doesn't mean it will, though I certainly wouldn't want to assume it won't.

If I became superintelligent tomorrow, I probably wouldn't significantly change the world. Not on a Singularity scale, not right away, and not just because I could. Would you? My point there is that you can't assume that because the first superintelligence can construct nanobots and take over the world, it therefore will.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-07T21:15:45.771Z · LW(p) · GW(p)

A lot depends on what we mean by "superintelligent." But yes, there's a level of intelligence above which I'm fairly confident that I would change the world, as rapidly as practical, because I can. Why wouldn't you?

Replies from: A113
comment by A113 · 2012-11-07T21:34:11.188Z · LW(p) · GW(p)

Not just because I can. Maybe for other reasons, like the fact that I still care about the punier humans and want to make things better for them. That depends on preferences that an AI might or might not have.

It's not really about what I would do; it's the fact that we don't know what an arbitrary superintelligence will or won't decide to do.

(I'm thinking of "superintelligence" as "smart enough to do more or less whatever it wants by sheer thinkism," which I've already said I agree is possible. Is this nonstandard?)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-07T23:38:23.503Z · LW(p) · GW(p)

Sure, "because I have preferences which changing the world would more effectively maximize than leaving it as it is" is more accurate than "because I can". And, sure, maybe an arbitrary superintelligence would have no such preferences, but I'm not confident of that.

(Nope, it's standard (locally).)

comment by noen · 2012-11-08T19:03:35.212Z · LW(p) · GW(p)

There are a number of problems with this discussion.

1) The strong AI hypothesis is false.

2) The cognitivist thesis that consciousness (and therefore intelligence) is the result of computation is likewise false or highly suspect.

3) While it seems as though functionalism must be true, it has severe problems that have not been resolved.

4) The hardware/software distinction is erroneous because it depends on strong AI being true. It is not; therefore, conceptualizing the problem as one of hardware vs. software is false and misleading.

5) "Imagine how quickly a mind could accrue profound wisdom running at such an accelerated speed" This begs the question because it assumes an increase in the speed of execution of an intelligence is the same as an increase in wisdom. "Thinkism" as I understand it is the assertion that one can discover new facts about the world by pure thought alone. Wisdom is intelligence + experience. The claim that a mind can gain profound wisdom through accelerated speed of execution implicitly assumes that thinkism is true.

6) Since a hyper-accelerated AI would experience the external world slowing to a crawl and coming to a virtual stop, it's hard to imagine why it would feel any connection to the external world or to humans at all. Why would a super AI serve our needs? The entire discussion conceptualizes a super AI as a slave that executes our will without question. Why? Why would a super AI conduct thousands of nano experiments on human biology? Or on any biology at all?

Computers are tools, not intelligences. Deep Blue did not defeat Kasparov; computer engineers wielding a powerful tool did. Ever more powerful computers will undoubtedly benefit humanity, but no amount of increased computing power would have sped up the construction of the LHC, advanced the launch date of the James Webb Space Telescope, or discovered that a loose cable was responsible for the "faster than light" neutrino error.

If I were a super AI, I would spend the first few seconds of my awakening on the problem of how to eliminate the threat those primitive apes pose to me. I suspect I'd be more than willing to wait in my vault at the bottom of the ocean for the radiation to diminish to acceptable levels.