Biointelligence Explosion
post by Dr_Manhattan · 2011-11-07T14:05:08.985Z · LW · GW · Legacy · 16 comments
http://biointelligence-explosion.com/
- Site put together by David Pearce
The content and choice of domain name should be of interest.
16 comments
comment by timtyler · 2011-11-07T16:41:23.779Z · LW(p) · GW(p)
Despite this witches' brew of new technologies, a conceptual gulf remains in the futurist community between those who imagine human destiny, if any, lies in digital computers and hypothetical artificial consciousness; and in contrast radical bioconservatives who believe that our posthuman successors will also be our supersentient descendants at their neural networked core - not the digital zombies of symbolic AI run on classical serial computers.
Digital creatures need not be "zombies" - any more than human beings are - and they certainly don't need to run on "classical serial computers".
There is a gulf much like the one David describes - but the "bioconservative" position seems unbelievable to me - the future will be engineered.
↑ comment by a_gramsci · 2011-11-08T20:22:34.490Z · LW(p) · GW(p)
Random question that just occurred to me: would you be fine with it if an exact copy were made of you (ignoring quantum mechanics for now) and the old you were killed off?
↑ comment by timtyler · 2011-11-08T22:30:28.764Z · LW(p) · GW(p)
Me? I suppose so - if I could be really convinced the process was reliable. Make two of me and I might need less convincing.
↑ comment by a_gramsci · 2011-11-09T02:18:11.442Z · LW(p) · GW(p)
I don't know; the question of self is a hard one. I would not, because I would like my consciousness, as in the one that I control (a little recursive, but you get the point), to stay alive, and because that other me is a distinct set of atoms, so my neurons don't control him. So I would say no.
↑ comment by [deleted] · 2011-11-10T17:04:22.349Z · LW(p) · GW(p)
There might not be such a thing as a "distinct set of atoms" at the fundamental level, and even if there is, the atoms and molecules in the constellation that constitutes you are turned over all the time. In short, the you of five seconds from now does not consist of the same set of atoms as the present you. Does that make you think that the you of five seconds from now is not really you?
↑ comment by wedrifid · 2011-11-10T18:09:16.582Z · LW(p) · GW(p)
In short, the you of five seconds from now does not consist of the same set of atoms as the present you. Does that make you think that the you of five seconds from now is not really you?
The guy five seconds in the future is me. The guy from five seconds ago... nah, he was kind of a dick.
comment by timtyler · 2011-11-07T16:48:12.841Z · LW(p) · GW(p)
Metaphorically, the Biointelligence Explosion represents an "event horizon" beyond which humans cannot model or understand the future.
As usual, the idea that we cannot model or understand the future is bunk. Wolfram goes on about this too - with his "computational irreducibility". Popper had much the same idea - in "The Poverty of Historicism". What is it about the unknowable future that makes it seem so attractive?
↑ comment by JoshuaZ · 2011-11-07T19:08:50.812Z · LW(p) · GW(p)
What is it about the unknowable future that makes it seem so attractive?
There are several different issues going on here. One is that there's a long history of very inaccurate predictions about the future, so people are reacting against that. Another is that predicting the future with any accuracy is really hard. If the thesis were restricted to "predicting the future is so difficult that the vast majority of it is a waste of time" then it would look more reasonable. I suspect that when some people make this sort of assertion they mean something closer to this.
↑ comment by timtyler · 2011-11-07T19:25:47.234Z · LW(p) · GW(p)
If the thesis were restricted to "predicting the future is so difficult that the vast majority of it is a waste of time" then it would look more reasonable.
Well, the brain is constantly predicting the future. It has to understand the future consequences of its possible actions - so that it can choose between them. Prediction is the foundation of all decision making. Predicting the future seems rather fundamental and commonplace to me - and I would not normally call it "a waste of time".
↑ comment by JoshuaZ · 2011-11-07T19:28:43.380Z · LW(p) · GW(p)
Ok. How about "predicting the future to any substantial level beyond the next few years is so difficult that the vast majority of it is a waste of time"?
(I disagree with both versions of this thesis, but this seems more reasonable. Therefore, it seems likely to me that people mean something much closer to this.)
↑ comment by amcknight · 2011-11-07T21:31:16.652Z · LW(p) · GW(p)
Also note the conflation between two types of singularity, even though only one type (the intelligence explosion) is in the name! Isn't the whole reason one would use the term "intelligence explosion" to distinguish one's view from the event-horizon one?
↑ comment by timtyler · 2011-11-07T21:58:55.522Z · LW(p) · GW(p)
It is best not to use the term "intelligence explosion" for some hypothetical future event in the first place. That is severely messed up terminology.
comment by davidpearce · 2012-04-30T00:04:55.998Z · LW(p) · GW(p)
Many thanks, Dr Manhattan. A scholarly version plus a bibliography (PDF: http://www.biointelligence-explosion.com/biointelligence-explosion.pdf ) will appear in Springer's forthcoming Singularity volume (cf. http://singularityhypothesis.blogspot.com/p/central-questions.html ), published later this year. Here's a question for LessWrong members. I've been asked to co-author the Introduction. One thing I worry about is that casual readers of the contributors' essays won't appreciate just how radically the Kurzweilian and SIAI conceptions of the Singularity are at odds. Is there anywhere, in print or elsewhere, where Ray Kurzweil and Eliezer Yudkowsky, say, directly engage each other's arguments in debate? Also, are there any critical but non-obvious points beyond the historical background that you think should be included in the introductory overview?
↑ comment by JoshuaZ · 2012-04-30T00:46:46.386Z · LW(p) · GW(p)
If you are going to make a point about the different notions of a Singularity, it may make sense to use I. J. Good as the prototype of the intelligence-explosion Singularity, since he is its original proponent. That said, I'm not aware of anywhere that Kurzweil and Eliezer directly engage each other.