IntelligenceExplosion.com

post by lukeprog · 2011-08-07T17:46:40.071Z · 23 comments

I put together a 'landing page' for the intelligence explosion concept, similar to Nick Bostrom's landing pages for anthropics, the simulation argument, and existential risk. The new website is IntelligenceExplosion.com. You can see I borrowed the CSS from Bostrom's anthropics page and then simplified it.

Just as with the Singularity FAQ, I'll be keeping this website up to date, so please send me corrections or bibliography additions at luke [at] singinst [dot] org.

23 comments

Comments sorted by top scores.

comment by Lightwave · 2011-08-08T10:45:40.943Z

I offer my web design skills to improve the site design/code (for free). I can send you a redesign sample if you'd like.

Edit: it's done.

comment by curiousepic · 2011-08-07T17:58:41.167Z

I don't care for the graphic - it doesn't really get the idea across very well, and its composition and quality are kind of grating. IMO, at the moment, having no graphic is preferable.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2011-08-07T18:04:03.190Z

Agreed. Even with a decent grasp of the concept it's supposed to illustrate, it took me a while to figure out what it was trying to show. The arrow from the brain to the brain in particular doesn't seem to click. (If you really want a graphical representation along that line, something with a bubble moving along the arrow and into the brain, and the brain expanding as the bubble dissolves, would probably work better.)

Replies from: lukeprog
comment by lukeprog · 2011-08-07T22:33:58.762Z

Anybody have an idea for how to represent intelligence explosion graphically?

Replies from: Dreaded_Anomaly, steven0461, Incorrect
comment by Dreaded_Anomaly · 2011-08-07T23:03:21.637Z

The concept you're trying to convey might become more obvious if you used thought bubbles instead of arrows. Have the humans imagine the artificial brain, and it appears; then have the artificial brain imagine a bigger version of itself, and it grows; and so forth. (This will involve more frames in a larger .gif, but I think it will make the process clearer.)

Replies from: omslin
comment by omslin · 2011-08-07T23:42:34.060Z

Animated GIFs look unprofessional.

Replies from: lukeprog
comment by lukeprog · 2011-08-08T01:00:48.928Z

That is a problem. What do y'all think of the new image?

Replies from: steven0461, dbaupp, shokwave
comment by steven0461 · 2011-08-08T19:23:00.879Z

It doesn't make as much sense without the context of showing the parochial human picture first, and I'm worried that without that context it'll just come across as hyperbole. "The AI will be thiiiiiiiiiiis much smarter than Einstein!!!" It also suggests too strong a connection between recursive self-improvement and a specific level of intelligence.

comment by dbaupp · 2011-08-08T08:03:43.563Z

Where's EY?

(More seriously: that image looks much nicer)

comment by shokwave · 2011-08-08T03:11:12.258Z

Like. The big problem in explaining intelligence explosions is not explaining the process - in my experience, people grasp the process very intuitively from even my unclear explanations. The big problem is communicating the end result: recursive self-improvement takes AI off the far end of the human scale of intelligence. (The process might only be disputed as a way to reject the end result.) This image does a lot of that work right away.

comment by steven0461 · 2011-08-07T22:51:11.559Z

Probably too silly to use here, but one thing that comes to mind is a brain reshaped to have the form of a nuclear mushroom.

Replies from: Incorrect, Manfred
comment by Incorrect · 2011-08-08T01:58:42.691Z

That might be misinterpreted to mean "mind blowing."

comment by Manfred · 2011-08-07T23:23:39.480Z

Maybe has the wrong connotations :P

comment by Incorrect · 2011-08-08T01:55:18.386Z

(λf.(λx.f (x x)) (λx.f (x x))) {image of a brain}

Replies from: Alex_Altair
comment by Alex_Altair · 2011-08-08T23:32:18.042Z

What lambda expression grows exponentially with each evaluation?

Replies from: Incorrect
comment by Incorrect · 2011-08-08T23:43:23.046Z

It's called the Y combinator. If evaluated lazily it won't necessarily run forever.
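
For illustration, a minimal Haskell sketch of that laziness point (the untyped Y combinator itself fails Haskell's occurs check, so this uses the equivalent fixed-point combinator fix, defined here by hand):

    -- fix f computes a fixed point of f: fix f = f (fix f).
    -- Under lazy evaluation the recursion unfolds only as far as
    -- f actually demands its argument, so it need not run forever.
    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- const 42 never inspects its argument, so this terminates:
    fortyTwo :: Int
    fortyTwo = fix (const 42)              -- evaluates to 42

    -- An infinite list of ones; take forces only three unfoldings:
    ones :: [Int]
    ones = fix (1 :)

    main :: IO ()
    main = print (fortyTwo, take 3 ones)   -- prints (42,[1,1,1])

Under strict (call-by-value) evaluation, fix f would loop before f ever ran; laziness is what lets the self-application stop as soon as nothing more is demanded.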

comment by shokwave · 2011-08-08T03:16:23.375Z

The primer uses "light cone" several times towards the end; consider replacing it with something less technical? Something like "our part of the universe", maybe.

Also the second-last paragraph of the primer has a typo:

For example, suppose the superintelligent maachine shares all our intrinsic goals but lacks our goal

comment by aletheilia · 2011-08-09T23:52:20.578Z

What is the difference between the ideas of recursive self-improvement and intelligence explosion?

They sometimes get used interchangeably, but I'm not sure they actually refer to the same thing. It wouldn't hurt if you could clarify this somewhere, I guess.

comment by jsalvatier · 2011-08-07T21:47:18.424Z

Good on you for making this.

The site talks about 'the intelligence explosion', which doesn't seem quite right, since it's a kind of process rather than a specific event. You might want to say 'an' intelligence explosion, though that would sound awkward.

Replies from: lukeprog
comment by lukeprog · 2011-08-08T01:06:34.059Z

Done.

comment by Endovior · 2011-08-11T19:49:00.667Z

From the site: "If there is a 'fast takeoff', the first self-improving AI will could prevent any competing machine superintelligences from arising."

'will could' sounds wrong; one of those two words needs to go.

comment by AdeleneDawner · 2011-08-07T21:18:27.941Z

I'm seeing a 'no hotlinking' message. (Also, the downvote isn't from me.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-08-07T21:36:35.242Z

That was strange: a picture of a cat too big to fit into the markup, no text indicating its relevance, and the username "BigCat", so I banned the comment for now...