The Singularity in the Zeitgeist

post by dclayh · 2010-10-02T06:51:30.430Z · score: 8 (8 votes) · LW · GW · Legacy · 49 comments

As a part of public relations, I think it's important to keep tabs on how the Singularity and related topics (GAI, FAI, life-extension, etc.) are presented in the culture at large.  I've posted links to such things in the past, but I think there should be a central clearinghouse, and a discussion-level post seems like the right place. 

So: in the comments, post examples of references to Singularity-related topics that you've found, ideally with a link and a few sentences' description of what the connection is and how it's presented (whether seriously or as an object of ridicule, for instance). 

 

There should probably be a similar post for rationality references, but let's see how this one goes first.

49 comments

Comments sorted by top scores.

comment by dclayh · 2010-10-02T07:03:23.192Z · score: 6 (6 votes) · LW(p) · GW(p)

Too many SMBC comics to get all of them in one post, but here are four recent ones:

#1 The Fermi Paradox is resolved with reference to wireheading.

#2 About mind uploading.

#3 A different kind of singularity, and (naive) Fun Theory.

#4 Making fun of aging singularitarians.

comment by Document · 2010-11-01T03:27:21.648Z · score: 4 (4 votes) · LW(p) · GW(p)
  • #2041 has exponentially increasing human lifespan as a background assumption.

  • #2070 argues that we'll never completely leave behind the regrettable parts of our past.

  • #2072, #2116's coda and #2305 are on how societies fail to transition to post-scarcity.

  • #2123 relates to timeless reasoning.

  • #2124 looks like a malicious simulator intervening in reality to cause a negative singularity and make it look like humans' fault. LW thread here.

  • #2125 is on "Why truth?" (throwing it in since there's no rationality-in-the-zeitgeist thread).

  • #2128 references the age/grief relationship.

  • #2138 portrays the increasing severity of technological risks as technology advances.

  • #2139 could be read as portraying people's discomfort with rationally analyzing their relationships without applause lights.

  • #2143's last panel is a counterpoint to this OB post.

  • #2144 is on the future as a time of increasing normalcy.

  • #2175 uncritically presents standard free-will confusion.

  • #2184 tries to counter the gerontocracy argument against immortalism.

  • Not sure I get #2186, but it has its own thread.

  • No comment on #2191 or the last panel of #2196.

  • #2203 is yet another(?) simulation scenario.

  • #2204 relates to Moravec's paradox and the third observation here.

  • #2211 uncritically presents standard free-will and quantum-brain confusion.

  • #2236 has eternally static human lifespan as a background assumption; so does #532 to a lesser extent.

  • #2286 portrays an almost-Friendly optimization process, and has a thread here.

  • #2289 is on the inexhaustibility of fun and/or its converse.

  • #2290 mocks the absurd idea that medical expert systems could be effective.

  • #2298 mocks the failures and/or anti-epistemologies of mainstream academic philosophy.

  • No comment on the argument for mortalism in #2299.

  • #2300 warns against a particular type of pseudorationality.

  • No comment on #2312.

  • #2398 is on naively extrapolating exponential trends.

  • #2401 presents a paradox of causal decision theory.

  • #2418 involves prediction markets.

Older comics:

comment by Singuhilarity · 2011-01-12T14:08:58.921Z · score: 5 (5 votes) · LW(p) · GW(p)

I recently started a theoretically humorous webcomic about the Singularity entitled Singuhilarity. It's poorly drawn and a little rough in places, but it touches on a number of lesswrong-type subjects as well as some pretty standard science fiction tropes.

Here's the first comic and here's the latest.

comment by WrongBot · 2011-01-12T16:40:54.515Z · score: 2 (2 votes) · LW(p) · GW(p)

I just read the archive and was amused several times. You should continue this project.

comment by Singuhilarity · 2011-01-13T01:49:53.986Z · score: 1 (1 votes) · LW(p) · GW(p)

I don't think I can ask for anything better than being amused several times. Thanks. I will indeed continue the project.

comment by anon895 · 2011-02-19T23:26:25.335Z · score: 1 (1 votes) · LW(p) · GW(p)

Followup to previous comment: I feel like this link from Reddit may apply.

comment by anon895 · 2011-02-19T23:24:56.004Z · score: 0 (0 votes) · LW(p) · GW(p)

Found on Reddit.

comment by anon895 · 2011-01-15T18:00:13.074Z · score: 0 (0 votes) · LW(p) · GW(p)

Read first comic, said to self "This is terrible" halfway through, didn't read further. There may be room for improvement.

comment by Singuhilarity · 2011-01-15T20:37:44.877Z · score: 0 (0 votes) · LW(p) · GW(p)

May be room for improvement? Well that's an understatement. ;)

comment by dclayh · 2010-10-02T06:57:23.350Z · score: 5 (5 votes) · LW(p) · GW(p)

The Big Bang Theory, Episode 402.

Sheldon (the most socially atypical character on a show full of them) plans a program of life-extension so that he will last until the Singularity, which he projects to occur around 2060 and chiefly involve the uploading of human consciousness into machines (his roommate Leonard describes the latter as becoming a "freakish self-aware robot"). By the end of the episode Sheldon seems to have given up on the plan as too inconvenient/inadvertently dangerous.

comment by Document · 2010-10-20T02:09:19.713Z · score: 1 (1 votes) · LW(p) · GW(p)

Discussed in more detail by Greg Fish here.

comment by Document · 2010-10-20T00:29:09.941Z · score: 3 (3 votes) · LW(p) · GW(p)

T-Rex's birthday is tomorrow which happens to be MY birthday as well! Unlike T-Rex, however, I am not all emo about aging AND also unlike T-Rex, I have discovered a way to live forever: I will give you a hint, it involves liquid nitrogen and the boundless expanse of interstellar space and also entropy reversing somehow

From the Dinosaur Comics news post for 2010 October 19.

comment by Document · 2011-05-14T04:00:21.362Z · score: 0 (0 votes) · LW(p) · GW(p)

Has anyone claimed saving the world yet?

-- Ryan North, Twitter

comment by Document · 2011-02-11T02:09:15.330Z · score: 0 (0 votes) · LW(p) · GW(p)
comment by Document · 2010-10-20T09:25:57.631Z · score: 0 (0 votes) · LW(p) · GW(p)

I don't feel like starting/finding a conversation elsewhere about the comic, but for the record, I'm still unconvinced by the arguments I've heard against quantum (or modal-realist, or eternal-recurrence) immortality. (I haven't read the paper linked here, though.) I realize few of the "me"s that would result from that kind of transition would have much in common with me-today, but I think I can live with that. It's harder to live with the fact that a lot of me will be as badly off as factory-farmed animals or worse, but there's not much I can do about that beyond trying to reduce the measure of conditions like that in general, which I have limited ability and will to do.

I also hold out hope for some kind of repeated quantum suicide for "free" energy after we run out, or (slightly more dubiously) a Permutation City scenario.

I'm not particularly optimistic about unknown physics, or (edit 10/22) convincing the simulators to let us out, and (edit 11/25) the Omega Point is of course bunk.

comment by Document · 2011-04-18T20:01:17.497Z · score: 2 (2 votes) · LW(p) · GW(p)

Today's xkcd: Future Timeline.

Edits:

  • #888: uncritically-presented bad Fun Theory.
  • #893: uncritically-presented straw-Vulcan argument for space travel.
  • #894 references the increasingly visible progress being made in narrow AI.
comment by Document · 2010-10-20T01:56:05.949Z · score: 2 (2 votes) · LW(p) · GW(p)

Questionable Content mentioned the singularity in passing circa last night. The phrase "according to the Internet" made me think that there was some particular exaggerated article or press release making the rounds that it was referring to, but I couldn't find it and I was encouraged to refrain from asking directly.

comment by dclayh · 2010-10-22T17:33:43.319Z · score: 3 (3 votes) · LW(p) · GW(p)

And now it's mentioned Friendly AI directly. Has Jeph Jacques been reading Eliezer?

comment by Sniffnoy · 2010-10-22T17:55:20.916Z · score: 2 (2 votes) · LW(p) · GW(p)

Tangential, but this old XKCD is essentially a demonstration of the importance of Friendly AI.

comment by Document · 2010-10-23T17:39:58.574Z · score: 2 (2 votes) · LW(p) · GW(p)

Squirrel.

comment by nick012000 · 2010-10-22T18:22:04.315Z · score: 1 (1 votes) · LW(p) · GW(p)

I think that the poster in question was assuming that you were unfamiliar with the Singularity in general, rather than enquiring as to the nature of the Singularity that occurred in-comic in particular.

Or, possibly, that you were silly enough to confuse the QC world with our own; they've had Strong AI since the start of the comic, after all, along with a superhero who delivers pizzas, and one of the cast grew up on a space station. Needless to say, it only appears similar to ours since we're just seeing the lives of a small circle of hipsters who run a coffee shop and an office-bitch-turned-librarian. I'd imagine that, say, their US Military probably looks quite different to ours.

comment by Document · 2010-10-23T18:14:28.602Z · score: 0 (0 votes) · LW(p) · GW(p)

they've had Strong AI since the start of the comic

Incidentally, I only just noticed that the latest comic's title is They've Had AI Since 1996. IIRC there was a calendar shown at one point implying(?) that it was 2004, but that's probably contradicted elsewhere, even accounting for transplanted pop culture.

Elsewhere in the comic: How did sentient machine intelligence come about?

comment by Document · 2010-10-22T23:42:05.986Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm not one of the posters in that thread.

comment by nick012000 · 2010-10-23T00:12:15.018Z · score: 2 (2 votes) · LW(p) · GW(p)

Ah. You sort of implied that you were. No worries, then.

comment by Document · 2010-12-02T23:23:11.450Z · score: 1 (1 votes) · LW(p) · GW(p)

Cracked.com's Jason Iannone, on the game-changing nature of AGI versus its portrayal in fiction:

Cortana, the Chief's AI sidekick [in Halo] [...] can control entire cities by remotely breaking into their battle nets and taking over their weaponry, without anyone ever setting foot in the area. Which is another way of saying that she really doesn't need the Chief at all.

[...] Master Chief, for all his badassitude, is really just a grunt whose life is being put unnecessarily at risk, although a video game featuring him doing nothing but sitting around eating tacos in the cafeteria while Cortana does all the work would probably not have sold as well.

From 6 Video Game Heroes Made Useless By Supporting Characters. High Challenge is relevant to the closing point.

comment by Document · 2010-10-20T00:31:07.421Z · score: 1 (1 votes) · LW(p) · GW(p)

At another thread: Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book (specifically Zendegi).

comment by jmmcd · 2010-10-17T05:41:44.906Z · score: 1 (1 votes) · LW(p) · GW(p)

Why the Singularity isn't going to happen on io9.com. Does that count as "in the culture at large"? Anyway, it's a really really weak article (apparently typical of io9) based on the idea that "singularity-level" technologies in the past (a misunderstanding in itself), such as the industrial revolution, didn't lead to the paradise on earth that some people said they would. The link between "it won't be that great" and "it won't even happen" is never bridged. Could be summed up as saying "it's geek religion, man".

comment by Document · 2011-07-11T18:15:13.475Z · score: 0 (0 votes) · LW(p) · GW(p)

Found via Reddit: On eutopia.

comment by Document · 2011-05-15T02:35:25.937Z · score: 0 (0 votes) · LW(p) · GW(p)

PvP 2009/03/24.

comment by Document · 2010-11-25T20:41:27.571Z · score: 0 (0 votes) · LW(p) · GW(p)

Reddit comment thread in progress:

depending on when exactly we achieve this, this could be the best time to be born ever, because it will be the absolute earliest anybody will have achieved immortality. Someone born within 20 years of this moment could one day be the oldest human, sentient, or even living being in the Universe.

The comments are currently split between arguing and agreeing with this. So far, no mention of cryonics. One post presents a technical argument that "our current knowledge/technology is centuries away" from mind uploading/whole-brain emulation.

comment by Document · 2010-11-06T20:46:01.217Z · score: 0 (0 votes) · LW(p) · GW(p)

At another thread: Stanford historian on the singularity.

comment by Document · 2010-11-06T19:05:57.359Z · score: 0 (0 votes) · LW(p) · GW(p)

Machine of Death, a recently released short-fiction anthology that hit #1 on Amazon, has a cover quote from Cory Doctorow containing the sentiment "Makes me wish I could die, too!"

In one of the stories, "Flaming Marshmallow", "zvyyraavhz fcnpr ragebcl" vf vagrecergrq nf zrnavat gur crefba jba'g qvr gvy gur arkg zvyyraavhz, juvpu vf frra nf tbbq arjf engure guna gur greevoyr arjf vg zvtug or sbe fbzrbar rkcrpgvat cebcre vzzbegnyvgl gb unccra ol gura.

So far I haven't seen any of the stories deal with how the machine affects the cryonics movement (start testing rats and then putting them in suspension, and try to either fool the machine or get a prediction that doesn't match the cause of deanimation?) or physics (strong evidence against many-worlds?), or how it deals with mind copying (does it arbitrarily distinguish between the "original" and copies, thereby giving people a loophole to escape their deaths, or do all the copies end up dying in similar ways?), merging or reconstruction, or whether there's a large-scale study of the effects of praying and making sacrifices to the obviously intelligent Predictor. On the other hand I've only just started the book, and there's already been talk of another volume, so I hold out some hope.

The anthology's concept seems to rule out an imminent positive singularity, since you'd expect to have over a century's advance knowledge when predictions started reading "end of the universe"; but I wonder if an FAI could make a deal with the Predictor by precommitting to kill everyone in their appointed ways just before the universe ended. If the prediction required a person to be dead by a particular date, the AI would want to figure out how much of a person could be preserved while still having the Predictor consider them dead. Or maybe the Prediction Enforcer kills them in their appointed way heedless of the FAI, and the only difference the AI makes is in making it impossible for the Enforcer to disguise its actions as the arbitrary happenstance of life. (Back in 2006 I figured that was the only real possibility, since information going backward in time allows paradoxes; but now I see both as plausible.)

comment by Document · 2010-11-07T19:26:30.037Z · score: 0 (0 votes) · LW(p) · GW(p)

OT: I'm also curious what happens when you mix multiple people's blood. Or if you developed the technology to transplant brains between bodies - does the prediction follow the body or the brain? What if you could clone a brainless body and test its blood before and after installing a brain? More generally, by what method does the machine decide which person a particular blood sample "belongs" to, and what are the edge cases and ambiguities of that method?

comment by Document · 2011-05-28T04:25:23.376Z · score: 0 (0 votes) · LW(p) · GW(p)

The editors are currently accepting submissions for Volume 2 (deadline July 15), making me wish I had a story to submit exploring some of those questions.

comment by Document · 2010-10-30T02:42:22.089Z · score: 0 (0 votes) · LW(p) · GW(p)

The AI-related sci-fi series Caprica was recently cancelled, following in the footsteps of The Sarah Connor Chronicles and Odyssey 5. The last five episodes are planned to air sometime in 2011.

I haven't watched the show, but comments I've seen on it include:

Genii Lodus, Stardestroyer.net, after a fairly harsh recap of the pilot:

The only interesting story this series might have had to tell - how Cylons come about has pretty much been done in the pilot- a 16 year old girl developed AI in her spare time outside of school and she was put into a magical plot device processor and plugged into a robot. (snip) I suppose if they'd had an actual progression towards developing true AI then it might well have looked too much like the Sarah Connor Chronicles but given both settings feature the significant plot of man builds AI which goes bad and then nukes man to fuck there would always be comparisons.

JoCoLa, Reddit:

it's taking its time with pretty complex subject matter- every movie or show about A.I. takes place after it was created, this is the only show I know of that addresses how it comes into being.

comment by Document · 2010-10-30T07:59:05.281Z · score: 0 (0 votes) · LW(p) · GW(p)

In another SDnet thread:

...it had the wrong premise. It was "corporate, terrorist and gangster culture with a VR twist" instead of "this is how the cylons came to be."

I've heard similar descriptions elsewhere: that the AI-related parts are there, but diluted in frequently non-sci-fi plotlines.

comment by Document · 2010-10-28T08:46:52.703Z · score: 0 (0 votes) · LW(p) · GW(p)

Earlier this year, the Reddit thread "Hey Reddit, how do you think the human race will come to an end?" received a fairly elaborate Singularitarian reply. Although it might be unfair to generalize given the context, he seems to take a basically fatalist stance, assuming as given that whoever creates AIs will program them to share our values.

The post currently has about 1165 karma (not necessarily representing votes of agreement), plus at least a few hundred replies taking probably every major position on the subject. This reply links to Robin Hanson's article If Uploads Come First.

I was disappointed that this subthread ended with the rhetorical question "What current AI projects aren't working with brains?" unanswered.

(Edit 11/25: corrected the assumption that the number of upvotes displayed by the script I had installed was accurate.)

comment by Document · 2010-10-28T05:33:33.986Z · score: 0 (0 votes) · LW(p) · GW(p)

The half-hour TV series "Sci Fi Science: Physics of the Impossible" recently aired an episode on AI risk, "A.I. Uprising". This is the airing schedule. It doesn't seem to be available online yet.

comment by Document · 2010-10-30T02:05:49.378Z · score: 0 (0 votes) · LW(p) · GW(p)

Another show, "10 Ways", aired an episode "10 Ways the World Might End". I haven't watched it either, but apparently "Invasion of Grey Goo" and "Robots Inherit the Earth" are two of the ten.

comment by Document · 2010-10-28T02:10:26.593Z · score: 0 (0 votes) · LW(p) · GW(p)

Via Cyan: John Scalzi blogged this semi-humorous story of a dubiously-friendly intelligence explosion, going into more detail than most about the path from intelligence to real-world power.

comment by Document · 2010-10-20T02:12:41.885Z · score: 0 (0 votes) · LW(p) · GW(p)

A 2006 Overcompensating post talks about the absurdity of how widely ignored existential risks are.

comment by sfb · 2010-10-04T21:07:33.107Z · score: 0 (0 votes) · LW(p) · GW(p)

There's an unpleasant comedy sketch show take on cryogenic preservation here: http://www.youtube.com/watch?v=g7Lzr3cwaPs

Plot: a cryogenics company goes bust; the bodies it stores are bought cheaply, and the revived people are badly treated in the making of low-budget TV.

Relevance: What if cryogenic preservation works, but the plan of waking up in a better future doesn't?

comment by XiXiDu · 2010-10-04T10:44:32.036Z · score: 0 (0 votes) · LW(p) · GW(p)

The Post-Singularity Future Of Astronomy.

comment by dclayh · 2011-01-16T02:59:59.296Z · score: -2 (2 votes) · LW(p) · GW(p)

Yesterday's SMBC presents SIAI-style uFAI fears as essentially a self-fulfilling prophecy.

comment by ata · 2011-01-16T03:10:22.186Z · score: 4 (4 votes) · LW(p) · GW(p)

SIAI-style uFAI fears

That's quite a stretch.

comment by dclayh · 2011-01-16T20:17:58.869Z · score: 1 (1 votes) · LW(p) · GW(p)

Admittedly the comic seems to assume malevolence rather than the more likely indifference...but it's still a comic about a self-improving superhuman intelligence that destroys humanity.