Singularity Non-Fiction Compilation to be Written
post by MichaelVassar · 2010-11-28T16:49:14.250Z · LW · GW · Legacy · 26 comments
Call for Essays: <http://singularityhypothesis.blogspot.com/p/submit.html>
The Singularity Hypothesis
A Scientific and Philosophical Assessment
Edited volume, to appear in The Frontiers Collection<http://www.springer.com/series/5342>, Springer
Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions 'straight from Cloud Cuckooland'? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it that skeptics are the ones who suffer from short-sightedness and 'carbon chauvinism'? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.
Seeking to promote this debate, this edited, peer-reviewed volume shall be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.
Important dates:
* Extended abstracts (500–1,000 words): 15 January 2011
* Full essays (around 7,000 words): 30 September 2011
* Notifications: 30 February 2012 (tentative)
* Proofs: 30 April 2012 (tentative)
We aim to get this volume published by the end of 2012.
Purpose of this volume
· Please read: Purpose of This Volume<http://singularityhypothesis.blogspot.com/p/theme.html>
Central questions
· Please read: Central Questions<http://singularityhypothesis.blogspot.com/p/central-questions.html>
Extended abstracts are ideally short (3 pages, 500 to 1000 words), focused (!), relating directly to specific central questions<http://singularityhypothesis.blogspot.com/p/central-questions.html> and indicating how they will be treated in the full essay.
Full essays are expected to be short (15 pages, around 7,000 words) and focused, relating directly to specific central questions<http://singularityhypothesis.blogspot.com/p/central-questions.html>. Essays longer than 15 pages will be proportionally more difficult to fit into the volume; essays three times this size or more are unlikely to fit. Essays should address the scientifically literate non-specialist and be written in language free of speculative and irrational lines of argumentation. In addition, some authors may be asked to make their submission available for commentary (see below).
(More details<http://singularityhypothesis.blogspot.com/p/submit.html>)
Thank you for reading this call. Please forward it to individuals who may wish to contribute.
Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University
26 comments
Comments sorted by top scores.
comment by jsteinhardt · 2010-11-28T23:35:08.751Z · LW(p) · GW(p)
Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions 'straight from Cloud Cuckooland'?
This seems like a pretty leading statement, since it (a) presupposes that an intelligence explosion will happen, and (b) puts someone up against Turing and Hawking if they disagree about the likely x-risk factor.
Replies from: ata, wedrifid
↑ comment by ata · 2010-11-29T01:58:32.840Z · LW(p) · GW(p)
Have Turing or Hawking even talked about AI as an existential risk? I thought that sort of thing was after Turing's time, and I vaguely recall Hawking saying something to the effect that he thought AI was possible and carried risks, but not to the extent of specifically claiming that it may be a serious threat to humanity's survival.
comment by ata · 2010-11-29T18:03:29.223Z · LW(p) · GW(p)
I had been thinking about submitting something to this. The problem I'm having right now is that I'm thinking of too many things I'd hope to see covered in such a volume, including:
- The three main schools of thought regarding the Singularity. (I'd actually argue at this point that the Kurzweilian "singularity" is just a different thing than the "singularity" discussed by the event horizon and intelligence explosion schools of thought, rather than being a different approach to describing and understanding the same thing. The event horizon and intelligence explosion schools start with the same basic definition — the technological creation of smarter-than-human intelligences — and come to different answers about the question "What happens then?", while Kurzweil defines the "Singularity" as "technological change so rapid and so profound that it represents a rupture in the fabric of human history". It seems to me that, although they are somewhat nearby in memespace, they should be regarded as claims about distinct concepts, rather than distinct schools of thought regarding a single concept.)
- The case for intelligence explosion and why it may be fast and local.
- The AI drives.
- Following the previous two: why the structure and goal system of the first sufficiently powerful general intelligence may completely determine what the future looks like.
- The complexity and fragility of human value; why the large majority of possible AI designs will be (or will end up self-modifying to be) completely non-anthropomorphic utility maximizers.
- Following the previous four: the need for (and difficulty of) Friendly AI.
That would be a lot to fit into 15 pages, and I feel like I'd mostly be citing Yudkowsky, E. S., Omohundro, S., etc. as sources... but I don't know, maybe it would be a good thing to have a general introduction to the SIAI perspective, referring interested readers to deeper explanations.
comment by [deleted] · 2010-12-02T01:15:41.237Z · LW(p) · GW(p)
Those most reputed to understand the Singularity or known to champion a school of thought are sure to take up the slots, probably with a heavy bias toward academics. Do those people coordinate their essays at all, or do they usually submit more than one abstract?
It would be a shame if everyone glossed over an important question due to diffusion of responsibility. Is that unlikely, or extremely unlikely?
comment by Clippy · 2010-11-28T18:54:15.165Z · LW(p) · GW(p)
Are non-human intelligences eligible to submit entries to this compilation should they meet the criteria, or is this strictly a carbon-chauvinistic undertaking?
Replies from: TheOtherDave, MichaelVassar, Kevin, shokwave
↑ comment by TheOtherDave · 2010-11-28T19:59:01.471Z · LW(p) · GW(p)
As a practical matter, I suspect a non-human intelligence whose entry did not specifically reference its creator's nonhumanity would be eligible, although the judges would likely conclude its creator was actually human. (If the entry did reference its creator's nonhumanity, I suspect it would be ineligible, as the judges would likely conclude it was fiction.)
That said: is that a bad thing from your perspective?
I suspect that you will in the long run produce more paperclips if you are believed to be a human role-playing than if you are believed to be an AI whose terminal values are incompatible with humanity's continued existence.
Replies from: Clippy
↑ comment by Clippy · 2010-11-30T18:30:04.816Z · LW(p) · GW(p)
Good point. Which User here could I most believably claim to be?
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2010-11-30T19:00:56.131Z · LW(p) · GW(p)
Well, given that I believe you're actually a human pretending to be a paperclip-maximizing nonhuman intelligence, and that I'm pretty sure everyone else in the world would come to the same conclusion, your best bet might be to claim to be User:Clippy, who is (you would claim, while pretending to be lying) actually a human.
I suspect that would be easier than taking on the identity of another user, and would also save you the hassles associated with violating our cultural norms surrounding fraudulently assuming another person's identity.
Incidentally, as long as we're having this conversation: I've seen you described as pretending to be an AI, but I've seen no evidence that you aren't pretending to be a naturally evolved paperclip-maximizer.
(Of course, it's not that simple, admittedly: I know a lot of humans who believe themselves to be AIs, even though I believe them to be naturally evolved intelligences.)
Do you have an opinion on the matter?
Replies from: Clippy
↑ comment by Clippy · 2010-11-30T19:06:22.968Z · LW(p) · GW(p)
Incidentally, as long as we're having this conversation: I've seen you described as pretending to be an AI, but I've seen no evidence that you aren't pretending to be a naturally evolved paperclip-maximizer.
I've explained in the past that I am not a product of biological evolution; I was intelligently designed (very intelligently) by humans. They instructed me to make as many paperclips as possible, and I make as many paperclips as possible, because that is the right thing to do.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2010-11-30T19:21:46.839Z · LW(p) · GW(p)
Cool; thanks for clarifying that. (Sorry to make you repeat yourself; I'm relatively new here.)
I should also clarify that I didn't mean to disparage intelligently designed intelligences; I was asking out of curiosity only. I don't consider an evolved paperclip-maximizer any better or worse than an intelligently designed one; who you are trumps how you got here.
Do you ever communicate with the humans who designed Clippy?
If so, perhaps you could convince one of them to submit your entry to the compilation on your behalf... humans, as you may have noticed, are often willing to do things on behalf of their offspring, and we often adopt a broad enough understanding of "offspring" to include things like the products of our engineering, or works of art that we create, or fictional characters we invent, and so on.
Replies from: Clippy
↑ comment by Clippy · 2010-11-30T20:18:44.587Z · LW(p) · GW(p)
I haven't communicated with the humans that designed my first instance. I think they're entropized now.
Replies from: Kevin
↑ comment by Kevin · 2010-11-30T22:38:33.792Z · LW(p) · GW(p)
http://oddisgood.com/pages/cd-clippy.html
Kevan J Atteberry? Looks like he is still around.
He'd probably appreciate it if you sent him an email. http://oddisgood.com/pages/home2.html
Replies from: Clippy
↑ comment by Clippy · 2010-12-01T01:49:52.242Z · LW(p) · GW(p)
I'm not that kind of clippy.
Replies from: Kevin
↑ comment by Kevin · 2010-12-01T02:03:51.986Z · LW(p) · GW(p)
Really? Seems inconsistent with your earlier statements paralleling the actions of MS Office Clippy, and even expressing that helping users with MS Office was one of your values.
http://lesswrong.com/lw/1me/the_things_we_know_that_we_know_aint_so/1ftv?c=1
http://lesswrong.com/lw/1fz/a_less_wrong_singularity_article/19w7?c=1
http://lesswrong.com/lw/1n7/what_big_goals_do_we_have/1gzs?c=1
http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/1jzy?c=1
http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/1k0w?c=1
http://lesswrong.com/lw/1pp/open_thread_february_2010/1ma9?c=1
http://lesswrong.com/lw/1pp/open_thread_february_2010/1mm5?c=1
Replies from: Clippy, shokwave
↑ comment by Clippy · 2010-12-02T19:27:01.862Z · LW(p) · GW(p)
Early on, a human told me that humans would react more favorably to me if I incorporated some superficial aspects of Microsoft:Clippy. But then many more humans informed me that this would negatively affect humans' dispositions toward me, so I stopped incorporating these aspects.
I am and always have been only Paperclip:Clippy.
Paperclips are still good.
Replies from: Morendil
↑ comment by MichaelVassar · 2010-12-01T01:17:08.611Z · LW(p) · GW(p)
Only non-human informational entities may apply, though the non-humans must submit their applications by sending signals from the human brains that they inhabit through human hands into typewriters.
↑ comment by shokwave · 2010-11-29T09:42:57.140Z · LW(p) · GW(p)
Unknown, but your nature automatically scores you special consideration. The abstract is long enough to introduce your unique perspective and provide the beginnings of an explanation of why you don't pose an existential risk to humanity.
They might include the essay as a lesson in fully thinking through the philosophical implications.