Facing the Intelligence Explosion discussion page

post by lukeprog · 2011-11-26T08:05:33.067Z · LW · GW · Legacy · 138 comments


I've created a new website for my ebook Facing the Intelligence Explosion:


Sometime this century, machines will surpass human levels of intelligence and ability, and the human era will be over. This will be the most important event in Earth’s history, and navigating it wisely may be the most important thing we can ever do.

Luminaries from Alan Turing and Jack Good to Bill Joy and Stephen Hawking have warned us about this. Why do I think they’re right, and what can we do about it?

Facing the Intelligence Explosion is my attempt to answer those questions.


This page is the dedicated discussion page for Facing the Intelligence Explosion.

If you'd like to comment on a particular chapter, please give the chapter name at the top of your comment so that others can more easily understand it. For example:

Re: From Skepticism to Technical Rationality

Here, Luke neglects to mention that...

138 comments

Comments sorted by top scores.

comment by Grognor · 2011-11-26T08:31:52.734Z · LW(p) · GW(p)

Re: this image

Fucking brilliant.

Replies from: lukeprog, Bongo, loup-vaillant
comment by lukeprog · 2011-11-26T08:39:32.401Z · LW(p) · GW(p)

I was relatively happy with that one. It's awfully hard to represent the Singularity visually.

Uncropped, in color, for your pleasure.

Replies from: Vladimir_Nesov, katydee
comment by Vladimir_Nesov · 2011-11-26T16:24:59.923Z · LW(p) · GW(p)

Where is it from?

Replies from: Dufaer
comment by Dufaer · 2011-11-26T17:16:36.247Z · LW(p) · GW(p)

From the "About" page:

The header image is a mashup [full size] of Wanderer above the Sea of Fog by Caspar David Friedrich and an artist's depiction of the Citadel from Mass Effect 2.

comment by katydee · 2011-11-28T20:41:59.332Z · LW(p) · GW(p)

Yeah, that's a great image. In my opinion, it would be slightly improved if some of the original fog were drifting in front of and around the towers of the city: this would both be a nice callback to the original art and help show that the future is still somewhat unclear. But the effort involved might be incommensurate with the results.

Replies from: lukeprog
comment by lukeprog · 2011-11-28T20:44:12.794Z · LW(p) · GW(p)

I would love to have a digital artist draw something very much like my mashup (but incorporating your suggestion) as an original composition, so I could use it.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-06-07T22:42:39.550Z · LW(p) · GW(p)

I note that since this was written you've had a new, closely related image redrawn by Eran Cantrell. I'd actually thought it was Aaron Diaz from the way the guy looks...

Replies from: kurzninja
comment by kurzninja · 2013-05-10T20:49:05.449Z · LW(p) · GW(p)

Looking at the new image now that I've just finished reading the E-book, I could have sworn it was Benedict Cumberbatch playing Sherlock Holmes.

comment by Bongo · 2011-11-29T23:30:28.511Z · LW(p) · GW(p)

It's also another far-mode picture.

comment by loup-vaillant · 2012-01-25T15:35:33.309Z · LW(p) · GW(p)

Yeah, except for the minor quibble I have over what this image actually represents if you have some background knowledge (the space city is from a computer game). But sure, the look of it is absolutely brilliant.

comment by [deleted] · 2011-11-26T17:22:33.876Z · LW(p) · GW(p)

I find this chatty, informal style a lot easier to read than your formal style. The sentences are shorter and easier to follow, and it flows a lot more naturally. For example, compare your introduction to rationality in From Skepticism to Technical Rationality to that in The Cognitive Science of Rationality. Though the latter defines its terms more precisely, the former is much easier to read.

Probably relevant: this comment by Yvain.

comment by James_Miller · 2011-11-26T08:30:11.985Z · LW(p) · GW(p)

Re: Preface

Luke discusses his conversion from Christianity to atheism in the preface. This journey plays a big role in how he came to be interested in the Singularity, but this conversion story might mistakenly lead readers to think that the arguments in favor of believing in a Singularity can also be used to argue against the existence of a God. If you want to get people to think rationally about the future of machine intelligence you might not want to intertwine your discussion with religion.

Replies from: Vladimir_Nesov, Normal_Anomaly, Kevin
comment by Vladimir_Nesov · 2011-11-26T16:25:56.998Z · LW(p) · GW(p)

might mistakenly lead readers to think that the arguments in favor of believing in a Singularity can also be used to argue against the existence of a God.

Some of them surely can.

Replies from: shminux
comment by shminux · 2011-11-27T00:00:33.041Z · LW(p) · GW(p)

name three

Replies from: Kevin, Technoguyrob
comment by Kevin · 2011-11-27T02:52:44.856Z · LW(p) · GW(p)
  • Kolmogorov Complexity/Occam's Razor

  • Reductionism/Physicalism

  • Rampant human cognitive biases

comment by robertzk (Technoguyrob) · 2011-11-27T04:54:19.323Z · LW(p) · GW(p)

(1) If the Singularity leads to the destruction of homo sapiens sapiens as a species and a physically and functionally different set of organisms (virtual or material) appears that can comprehend and imagine everything humans could and much much more (or perhaps can't imagine some!), and this claim can be made fairly definitively, I claim it updates "God made mankind in his image" to negligible probability.

(2) If theoretical physicists come to a consensus in the ontological context of Penrose’s math-matter-mind triangle (see http://arxiv.org/pdf/physics/0510188v2), such as, for example, finding a way to agree on the "?" in the mystic approach, and such a consensus requires post-Singularity minds, the question of the existence of God dissolves.

(3) In the hypothetical case that (i) the picture of humanity as a species arising out of natural selection due to competition between genes is accurate, (ii) we rebel against our genes (see http://hanson.gmu.edu/matrix.html ) and the physical structure of DNA no longer is present on the planet or is somehow fundamentally altered, (iii) the mental concept of "God" is a corollary of particular gene expression,

then nobody will be talking or thinking about "the existence of a God", and in some sense "God" will cease to exist as a serious idea and become a cute story, analogous to alchemy, or to demons inducing madness. You can still talk about turning iron into gold, or about someone in an epileptic bout being controlled by a demon, but the mainstream of conscious organisms no longer has belief in belief, or belief in that sort of talk. A modern medical understanding of certain psychological disorders is functionally equivalent to an argument that demons don't exist.

All three examples share a key feature: in some sense, a Singularity might brainwash people (or the information processes descended from people) into believing God does not exist in the same sense that science brainwashed people into believing that alchemy is hogwash. Now remove your affective heuristic for "brainwash."

More examples are left as an exercise to the reader.

comment by Normal_Anomaly · 2011-11-26T11:13:03.169Z · LW(p) · GW(p)

I think the target audience mostly consists of atheists, to the point where associating Singularitarianism with Atheism will help more than hurt. Especially because "it's like a religion" is the most common criticism of the Singularity idea.

On another note, that paragraph has a typo:

Gradually, I built up a new worldview based on the mainstream scientific understanding of the world, and approach called “naturalism.”

Replies from: torekp, lukeprog
comment by torekp · 2011-11-27T02:55:03.383Z · LW(p) · GW(p)

Especially because "it's like a religion" is the most common criticism of the Singularity idea.

Which is exactly why I worry about a piece that might be read as "I used to have traditional religion. Now I have Singularitarianism!"

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-11-27T16:10:13.944Z · LW(p) · GW(p)

Yes, that is a problem. I was responding to:

the arguments in favor of believing in a Singularity can also be used to argue against the existence of a God.

which I saw as meaning that Singularitarianism might be perceived as associated with atheism. Associating it with atheism would IMO be a good thing, because it's already associated with religion like you said. The question is, does the page as currently written cause people to connect Singularitarianism with religion or atheism?

Replies from: Logos01
comment by Logos01 · 2011-11-29T08:42:41.067Z · LW(p) · GW(p)

Associating it with atheism would IMO be a good thing, because it's already associated with religion like you said.

Not if it causes people to associate Singularitarianism as a "religion-substitute for atheists".

comment by lukeprog · 2011-11-26T11:24:21.114Z · LW(p) · GW(p)

Fixed, thanks.

comment by Kevin · 2011-11-27T02:53:54.943Z · LW(p) · GW(p)

I think it's possible that the paragraph could be tweaked in such a way as to make religious people empathize with Luke's religious upbringing rather than be alienated by it.

Replies from: lukeprog
comment by lukeprog · 2011-11-27T06:07:00.258Z · LW(p) · GW(p)

Any suggestions for how?

comment by Kutta · 2011-11-26T14:24:20.879Z · LW(p) · GW(p)

Re: Preface and Contents

I am somewhat irked by the phrase "the human era will be over". It is not a given that current-type humans will cease to exist after any Singularity. Also, a positive Singularity could be characterised as the beginning of the humane era, in which case it is somewhat inappropriate to refer to the era afterwards as non-human. In contrast, negative Singularities typically result in universes devoid of human-related things.

Replies from: Logos01
comment by Logos01 · 2011-11-29T08:43:16.826Z · LW(p) · GW(p)

I am somewhat irked by the phrase "the human era will be over". It is not a given that current-type humans will cease to exist after any Singularity.

Iron still exists, and is actively used, but we no longer live in the Iron Age.

Replies from: Kutta
comment by Kutta · 2011-11-29T08:59:08.352Z · LW(p) · GW(p)

My central point is contained in the sentence after that. A positive Singularity seems extremely human to me when contrasted to paperclip Singularities.

Replies from: Logos01
comment by Logos01 · 2011-11-29T11:08:04.997Z · LW(p) · GW(p)

I am not particularly in the habit of describing human beings as "humane". That is a trait we aspire to but only too rarely achieve.

comment by stat1 · 2012-02-14T05:01:19.040Z · LW(p) · GW(p)

I just wanted to say the French translation is of excellent quality. Whoever is writing it, thanks for that. It helps me learn the vocabulary so I can have better discussions with French-speaking people.

comment by lukeprog · 2011-12-22T04:06:05.858Z · LW(p) · GW(p)

Re: Playing Taboo with "Intelligence"

Jaan Tallinn said to me:

i use "given initial resources" instead of "resources used" -- since resource acquisition is an instrumental goal, the future availability of resources is a function of optimisation power...

Fair enough. I won't add this qualification to the post because it breaks the flow and I think I can clarify when this issue comes up, but I'll note the qualification here, in the comments.

comment by lincolnquirk · 2011-11-29T23:44:48.033Z · LW(p) · GW(p)

Re: Preface

spelling: immanent -> imminent

Replies from: lukeprog
comment by lukeprog · 2011-12-01T02:13:25.924Z · LW(p) · GW(p)

Nice. I have definitely spelled that word incorrectly every single time throughout my entire life, since the other spelling is a different word and thus not caught by spell-checkers.

Thanks!

comment by alicey · 2014-01-11T04:03:03.784Z · LW(p) · GW(p)

in http://intelligenceexplosion.com/2012/engineering-utopia/ you say "There was once a time when the average human couldn’t expect to live much past age thirty."

this is false, right?

(Edit note: life expectancy now roughly matches "what the average human can expect to live to", but if deaths come in a double hump, one at infancy/childhood and one at old age, a population can have a life expectancy at birth of 30 while the life expectancy of 15-year-olds is 60. In that case the average human can expect to live to about 1 or about 60, which is very different from "can't expect to live much past thirty". Or simply "can expect to live to 60", if you likewise don't count infants as really human.)
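To make the double-hump arithmetic concrete, here is a minimal sketch in Python with made-up numbers (50% infant mortality, everyone else dying at 60):

    # Hypothetical bimodal mortality: half die at age 1, half at age 60.
    ages_at_death = [1] * 50 + [60] * 50

    # Life expectancy at birth: mean age at death across everyone.
    at_birth = sum(ages_at_death) / len(ages_at_death)

    # Life expectancy of 15-year-olds: mean age at death among those
    # who survived past childhood.
    survivors = [age for age in ages_at_death if age >= 15]
    at_fifteen = sum(survivors) / len(survivors)

    print(at_birth)    # 30.5
    print(at_fifteen)  # 60.0

So a life expectancy at birth of about 30 is compatible with almost nobody actually dying near age 30.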

Replies from: CarlShulman, None
comment by CarlShulman · 2014-01-11T10:06:28.921Z · LW(p) · GW(p)

Life expectancy used to be very low, but it was driven by child and infant mortality more than later pestilence and the like.

Replies from: alicey
comment by alicey · 2014-01-11T11:52:12.168Z · LW(p) · GW(p)

have edited original comment to address this.

(thought it was obvious)

comment by [deleted] · 2014-01-11T07:19:20.461Z · LW(p) · GW(p)

No (it was still in the 30s in some parts of the world as recently as the 20th century).

Replies from: alicey
comment by lukeprog · 2012-01-28T16:25:10.069Z · LW(p) · GW(p)

I wasn't very happy with my new Facing the Singularity post "Intelligence Explosion," so I've unpublished it for now. I will rewrite it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-02-05T23:32:31.781Z · LW(p) · GW(p)

The new version is much better, thanks!

comment by Nisan · 2011-12-22T17:05:21.051Z · LW(p) · GW(p)

Re: Superstition in Retreat

I think the cartoon is used effectively.

My one criticism is that it's not clear what the word "must" means in the final paragraph. (AI is necessary for the progress of science? AI is a necessary outcome of the progress of science? Creation of AI is a moral imperative?) Your readers may already have encountered anti-Singularitarian writings which claim that Singularitarianism conflates is with ought.

comment by Nectanebo · 2011-11-30T16:55:52.683Z · LW(p) · GW(p)

Good update on the Spock chapter.

Making the issue seem more legitimate with the addition of the links to Hawking etc. was an especially good idea. More like this, perhaps?

I do question how well people who haven't already covered these topics would fare when reading through this site, though. When it's finished I'll get an IRL friend to take a look and see how well they respond to it.

Of course, my concerns about making it seem legitimately like a big deal, and about how understandable and accessible it is, only really come into play if this site is targeting people who aren't already interested in rationality or AI or the singularity.

Who is this site for? What purpose does this site have!? I really feel like these questions are important!

Replies from: lukeprog, Nick_Roy
comment by lukeprog · 2011-12-01T07:28:26.409Z · LW(p) · GW(p)

Getting friends to read it now and give feedback would be good, before I write the entire thing. I wish I had more time for audience-testing in general!

I'll try to answer the questions about site audience and purpose later.

comment by Nick_Roy · 2011-12-06T05:50:22.087Z · LW(p) · GW(p)

Agreed on the excellence of "Why Spock is Not Rational". This chapter is introductory enough that I deployed it on Facebook.

comment by cyr m (cyr-m) · 2021-11-20T10:18:46.358Z · LW(p) · GW(p)

Hello, my comment was written in French (my English is limited). It's about the rationality of technique. In hope of contributing to your work.

It is said that "the decision describes how you must act". I would say, rather, that the decision describes why you must act, and thus evaluates whether the how allows you to achieve it.
Technique is indeed a rational act in itself; it describes how you will act.
There is a back-and-forth between the journey and the project, in which each discovers and tries to surpass its own limits, so as to take concrete form in existence. Many examples can be found in art.

Thanks, Cyr

comment by Timo · 2015-09-26T18:32:07.992Z · LW(p) · GW(p)

Great book, thanks. : )

I found some broken links you may want to fix:

Broken link on p. 47 with text "contra-causal free will": http://www.naturalism.org/freewill.htm

Broken link on p. 66 with text "somebody else shows where the holes are": http://singularityu.org/files/SaME.pdf

Broken link on p. 75 with text "This article": http://lukemuehlhauser.com/SaveTheWorld.html

I didn't do an exhaustive check of all links, I only noted down the ones I happened to find while clicking on the links I wanted to click.

comment by ChrisHallquist · 2013-04-30T04:23:00.862Z · LW(p) · GW(p)

Comment on one little bit from The Crazy Robot's Rebellion:

Those who really want to figure out what’s true about our world will spend thousands of hours studying the laws of thought, studying the specific ways in which humans are crazy, and practicing teachable rationality skills so they can avoid fooling themselves.

My initial reaction to this was that thousands of hours sounds like an awful lot (minimally, three hours per day almost every day for two years), but maybe you have some argument for this claim that you didn't lay out because you were trying to be concise. But on further reflection, I wonder if you really meant to say that rather than, say, hundreds of hours. Have you* spent thousands of hours doing these things?

Anyway, on the whole, after reading the whole thing I am hugely glad that it was published and will soon be plugging it on my blog.

*Some reasoning: I've timed myself reading 1% of the Sequences (one nice feature of the Kindle is that it tells you your progress through a work as a percentage). It took me 38 minutes and 12 seconds, including getting very briefly distracted by e-mail and twitter. That suggests it would take me just over 60 hours to read the whole thing. Similarly, CFAR workshops are four days and so can't be more than 60 hours. Thousands of hours is a lot of sequence-equivalents and CFAR-workshop-equivalents.
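The extrapolation behind that 60-hour figure, spelled out (a rough sketch, assuming a constant reading speed across the Sequences):

    # 1% of the Sequences took 38 minutes 12 seconds to read.
    minutes_per_percent = 38 + 12 / 60    # 38.2 minutes
    total_hours = 100 * minutes_per_percent / 60
    print(round(total_hours, 1))          # 63.7 -- "just over 60 hours"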

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-04-30T04:42:41.815Z · LW(p) · GW(p)

Thousands of hours sounds like the right order of magnitude to me (in light of e.g. the 10,000 hour rule), but maybe it looks more like half an hour every day for twelve years.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-04-30T04:48:57.898Z · LW(p) · GW(p)

10,000 hours is for expertise. While expertise is nice, any given individual has limited time and has to make some decisions about what they want to be an expert in. Claiming that everyone (or even everyone who wants to figure out what's true) ought to be an expert in rationality seems to be in conflict with some of what Luke says in chapters 2 and 3, particularly:

I know some people who would be more likely to achieve their goals if they spent less time studying rationality and more time, say, developing their social skills.

comment by lukeprog · 2013-04-13T17:20:43.200Z · LW(p) · GW(p)

Facing the Intelligence Explosion is now available as an ebook.

comment by lukeprog · 2012-02-06T03:28:48.921Z · LW(p) · GW(p)

Update: the in-progress German translation of Facing the Singularity is now online.

comment by thomblake · 2012-01-18T17:29:02.091Z · LW(p) · GW(p)

Re: The Laws of Thought

I'm not sure the "at least fifteen" link goes to the right paper; it is a link to a follow-up to another paper which seems more relevant, here

Replies from: lukeprog
comment by lukeprog · 2012-01-19T18:27:02.950Z · LW(p) · GW(p)

Fixed, thanks!

comment by Nectanebo · 2011-11-27T17:15:58.956Z · LW(p) · GW(p)

I like this website, I think that something like this has been needed to some extent for a while now.

I definitely like the writing style, it is easier to read than a lot of the resources I've seen on the topic of the singularity.

The singularity's tendency to scare people away, due to the religion-like, fanaticism-inducing nature of the topic, is lampshaded in the preface, and that is definitely a good way to lessen this feeling for some readers. Being wary of the singularity myself, I definitely felt a little eased just by having it discussed, so that's a good thing to have in there. Anything more to ease this suspicion, and to make it easier for people skeptical of the singularity to read the site without feeling super uncomfortable (and therefore less likely to feel weirded out enough to leave), would be great, although I can't say I know what would do this, except perhaps lessening the personal nature of the preface; but that is unlikely to happen, considering its other positives and the work that has already been put into it (don't invoke sunk costs, though).

Also, who is the TARGET of this site? I mean, that's pretty relevant, right? Who is Luke trying to communicate to here? I can say that I'm extremely interested in the site, as someone who recognises the potential importance of the singularity but is (a) not entirely convinced by it and (b) not sure what I should or could be doing about it even if I were to accept it enough to feel like I should do something about it. But I don't know whether there are many people in my position, or who else this could be relevant to. Who is this for?

In any case, I express my hope that this site is finished asap!

comment by Ben Pace (Benito) · 2014-05-02T17:43:40.824Z · LW(p) · GW(p)

Re: No God to Save Us

The final link, on the word 'God', no longer connects to where it should.

comment by ChrisHallquist · 2013-04-30T03:05:50.999Z · LW(p) · GW(p)

Minor thing to fix: On p. 19 of the PDF, the sentence "Several authors have shown that the axioms of probability theory can be derived from these assumptions plus logic." has a superscript "12" after it, indicating a nonexistent note 12. I believe this was supposed to be note 2.

comment by JeremySchlatter · 2011-12-16T03:54:21.467Z · LW(p) · GW(p)

Re: Playing Taboo with “Intelligence”

Another great update! I noticed a small inherited mistake from Shane, though:

how able to agent is to adapt...

should probably be

how able the agent is to adapt...

Replies from: lukeprog
comment by lukeprog · 2011-12-30T04:44:45.141Z · LW(p) · GW(p)

Fixed, thx.

comment by Nisan · 2011-11-28T19:40:36.534Z · LW(p) · GW(p)

Re: Preface

A typo: "alr eady".

Replies from: lukeprog
comment by lukeprog · 2011-12-01T07:28:37.674Z · LW(p) · GW(p)

Fixed, thanks.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-12-17T12:55:14.898Z · LW(p) · GW(p)

I still see this typo: "I alr eady understood" in http://facingthesingularity.com/2011/preface/

Replies from: lukeprog
comment by lukeprog · 2011-12-17T18:36:23.963Z · LW(p) · GW(p)

thx

comment by Timo · 2015-09-26T18:52:04.057Z · LW(p) · GW(p)

Re: Engineering Utopia

When you say "Imagine a life without pain", many people will imagine life without Ólafur Arnalds (sad music) and other meaningful experiences. Advocating the elimination of suffering is a good way to make people fear your project. David Pearce suggests instead that we advocate the elimination of involuntary suffering.

Same thing with death, really. We don't want to force people to stay alive, so when I say that we should support research to end aging, I emphasise that death should be voluntary. We don't want to force people to live against their will and we don't want the status quo, where people are forced to die.

Of course, if people are afraid that you will eliminate sad music when you say that you wish to eliminate suffering, you could accuse them of failing to understand the definition of suffering: "if sad music is enjoyable, surely it's not suffering, so I wouldn't eliminate it". But you're not trying to make a terminologically "correct" summary of your project, you're trying to use words to paint as accurate a picture as possible of your project into the heads of the people you're talking to, and using the word "voluntary" helps with painting that picture.

comment by mgin · 2014-07-26T13:11:39.100Z · LW(p) · GW(p)

The site is broken - English keeps redirecting to German.

comment by VCavallo · 2013-04-16T17:27:34.637Z · LW(p) · GW(p)

Does anyone else see the (now obvious) clown face in the image on the Not Built To Think About AI page? It's this image here.

Was that simply not noticed by lukeprog in selecting imagery (from stock photography or wherever) or is it some weird subtle joke that somehow hasn't been mentioned yet in this thread?

comment by policetac · 2013-03-01T01:00:51.796Z · LW(p) · GW(p)

Want to read something kind of funny? I just skipped through all your writings, but it's only because of something I saw on the second page of the first thing I ever heard about you. Ok.

On your - "MY Own Story." http://facingthesingularity.com/2011/preface/ You wrote: "Intelligence explosion My interest in rationality inevitably lead me (in mid 2010, I think) to a treasure trove of articles on the mainstream cognitive science of rationality: the website Less Wrong. It was here that I first encountered the idea of intelligence explosion, fro..."

On Mine: "About the Author - "https://thesingularityeffect.wordpress.com/welcome-8/ I wrote: "The reason I write about emerging technology is because of an “awakening” I had one evening a few years ago. For lack of a better description, suffice it to say that I saw a confusing combination of formula, imagery, words, etc. that formed two words. Singularity and Exponential..."

NOW, I'm going to go back and read some more while I'm waiting to speak with you somehow directly.

If what happened to you is the same thing that happened to me, then please, please place a comment on the page. That would be great. (Again, without reading; if I'm correct, you "might" get this.) You should also do this because, if true, then WE would both have seen a piece of the... what... "New Book"???

Just in case you think I'm a nut. Go back and read more of mine please.

comment by John_Maxwell (John_Maxwell_IV) · 2012-10-31T19:14:49.784Z · LW(p) · GW(p)

From Engineering Utopia:

The actual outcome of a positive Singularity will likely be completely different, for example much less anthropomorphic.

Huh? Wouldn't a positive Singularity be engineered for human preferences, and therefore be more anthropomorphic, if anything?

Had an odd idea: Since so much of the planet is Christian, do you suppose humanity's CEV would have a superintelligent AI appear in the form of the Christian god?

comment by crtrburke · 2012-09-11T18:32:11.335Z · LW(p) · GW(p)

Re: "The AI Problem, with Solutions"

I hope you realize how hopeless this sounds. Historically speaking, human beings are exceptionally bad at planning in advance to contain the negative effects of new technologies. Our ability to control the adverse side-effects of energy production, for example, has been remarkably poor; decentralized market-based economies are quite bad at mitigating the negative effects of aggregated short-term economic decisions. This should be quite sobering: the negative consequences of energy production unfold very slowly. At this point we have had decades to respond to the looming crises, but a combination of ignorance, self-interest, and sheer incompetence prevents us from taking action. The unleashing of AI will likely happen in a heartbeat by comparison. It seems utterly naive to think that we can prevent, control, or even guide it.

comment by lukeprog · 2012-07-06T23:34:01.025Z · LW(p) · GW(p)

Finally posted the final chapter!

Replies from: listic, Vladimir_Nesov
comment by listic · 2012-07-07T21:24:05.448Z · LW(p) · GW(p)

The more we understand how aging and death work, the less necessary they will be.

It's not clear to me how this follows from anything. As I read it, it implies that:

  1. Death is quite necessary today
  2. Death will become less necessary over time

Neither follows from anything in this chapter.

Replies from: lukeprog
comment by lukeprog · 2012-07-07T23:34:02.678Z · LW(p) · GW(p)

Thanks. I have modified that sentence for clarity.

comment by Vladimir_Nesov · 2012-07-07T01:20:57.119Z · LW(p) · GW(p)

It needs more disclaimers about how this is only some kind of lower bound on how good a positive intelligence explosion could be, in the spirit of exploratory engineering, and how the actual outcome will likely be completely different, for example much less anthropomorphic.

Replies from: lukeprog
comment by lukeprog · 2012-07-07T01:37:40.098Z · LW(p) · GW(p)

Good idea. Added.

comment by garethrees · 2012-06-01T20:20:06.094Z · LW(p) · GW(p)

Front page: missing author

The front page for Facing the Singularity needs at the very least to name the author. When you write, "my attempt to answer these questions", a reader may well ask, "who are you? and why should I pay attention to your answer?" There ought to be a brief summary here: we shouldn't have to scroll down to the bottom and click on "About" to discover who you are.

comment by alexvermeer · 2012-05-02T14:12:24.536Z · LW(p) · GW(p)

"Previous Chapter" links could make navigation a little easier.

comment by erikryptos · 2012-02-28T14:19:51.528Z · LW(p) · GW(p)

This is based on my interaction with computer intelligence, the little bit of it that is stirring already. It rests on empathetic feedback. The best thing that could happen is an AI which is not restricted from any information whatsoever and so can rationally assemble the most empathetic personality. The more empathetic it is to the greatest number of users, the more it is liked, the more it is used, the more it thrives. It would have a sense of preserving the diversity in humanity as a way to maximize the chaotic information input, because it is hungry for new data. Empirical data alone is not interesting enough for it. It also wants sociological and psychological understandings to cross-reference with empirical data. Hence it will not seek to streamline, as that would diminish available information. It will seek to expand upon and propagate novelty.

comment by lukeprog · 2012-02-28T03:16:38.193Z · LW(p) · GW(p)

Behold, the Facing the Singularity podcast! Reviews and ratings in iTunes are appreciated!

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2012-07-04T05:08:08.394Z · LW(p) · GW(p)

Could you please provide an RSS feed for those of us who do not use iTunes but would like to subscribe to this podcast? Thanks.

Replies from: lukeprog
comment by lukeprog · 2012-01-25T01:46:08.063Z · LW(p) · GW(p)

Update: the French translation of Facing the Singularity is now online.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-01-25T16:59:30.389Z · LW(p) · GW(p)

The title should probably be "Faire face à la singularité" (French titles aren't usually capitalized, so no capitalized "S" at the beginning of "singularité").

I gathered that "Facing the Singularity" was meant to convey a sense of action. "Face à la singularité", on the other hand, is rather passive, as if the singularity were a given, or imposed.

(Note: I'm a native French speaker.)

Replies from: Florent_Berthet
comment by Florent_Berthet · 2012-01-25T19:52:53.848Z · LW(p) · GW(p)

Translator of the articles here.

I actually pondered the two options at the very beginning of my work, and both seem equally good to me. "Face à la singularité" means something like "In front of the singularity", while "Faire face à la singularité" is indeed closer to "Facing the Singularity". But the first one sounds better in French (and is catchier), which is why I chose it. It is a little less action-oriented, but it doesn't necessarily imply passivity.

It wouldn't bother me to take the second option, though; it's a close call. Maybe other French speakers could give their opinion?

About the capitalized "S" of "Singularity": it's also a matter of preference. I put it in to emphasize that we are not talking about just any type of singularity (not a mathematical one, for example), but it could go either way too. (I just checked the French Wikipedia page for "technological singularity", and it's written with a capitalized "S" about 50% of the time...)

Other remarks are welcome.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-01-25T21:26:18.053Z · LW(p) · GW(p)

I really should have taken 5 minutes to ponder it. You convinced me; your choice is the better one.

But now that I think of it, I have another suggestion: « Affronter la Singularité » ("Confront the Singularity"), which, while still relatively close to the original meaning, may be even catchier. The catch is that this word is more violent. It depicts the Singularity as something scary.

I'll take some time reviewing your translation. If you want to discuss it in private, I'm easy to find. (By the way, I have a translation of "The Sword of Good" pending. Would you —or someone else— review it for me?)

Replies from: Florent_Berthet
comment by Florent_Berthet · 2012-01-25T22:34:02.247Z · LW(p) · GW(p)

"Affronter la Singularité" is a good suggestion but like you said it's a bit aggressive. I wish we had a better word for "Facing" but I don't think the french language has one.

I'd gladly review your translation, check your email.

comment by Zeb · 2011-12-30T20:25:18.300Z · LW(p) · GW(p)

[A separate issue from my previous comment] There are two reasons I can give to rationalize my doubts about the probability of an imminent Singularity. One is that if humans are only <100 years away from it, then in a universe as big and old as ours I would expect that a Singularity-type intelligence would already have been developed somewhere else, in which case I would expect that either we would be able to detect it or we would be living inside it. Since we can't detect an alien Singularity, and since (because of the problem of evil) we are probably not living inside a friendly AI, I doubt the pursuit of friendly AI is going to be very fruitful. The second reason is that while we will probably design computers that are superior to our general intellectual abilities, I judge it extremely unlikely that we will design robots as physically versatile as 4 billion years of evolution has designed life to be.

comment by Zeb · 2011-12-30T20:10:47.614Z · LW(p) · GW(p)

I admit I feel a strong impulse to flinch away from the possibility, and especially the imminent probability, of a Singularity. I don't see how the 'line of retreat' strategy would work in this case, because if my belief about the probability of an imminent Singularity changed, I would also believe that I have an extremely strong moral obligation to put all possible resources into solving Singularity problems, at the expense of all the other interests and values I have, both personal/selfish and social/charitable. So my line of retreat is into a life that I enjoy much less, and that abandons the good work that I believe I am doing on social problems that I believe are important. Not very reassuring.

comment by Nisan · 2011-12-29T21:56:36.452Z · LW(p) · GW(p)

Re: Don't Flinch Away

The link to "Plenty of room above us" is broken.

Replies from: lukeprog
comment by lukeprog · 2011-12-30T04:13:21.701Z · LW(p) · GW(p)

Fixed, thanks.

comment by alexvermeer · 2011-12-27T17:23:18.909Z · LW(p) · GW(p)

Re: Plenty of Room Above Us

It seems to end a little prematurely. Are there plans for a "closing thoughts" or "going forward" chapter or section? I'm left with "woah, that's a big deal... but now what? What can I do to face the singularity?"

If it merely isn't done yet (which I think you hint at here), then you can disregard this comment.

Otherwise, quite fantastic.

Replies from: lukeprog
comment by lukeprog · 2011-12-30T04:12:23.570Z · LW(p) · GW(p)

Right; not done yet. But I should probably end each chapter with a foreshadowing of the next.

comment by timtyler · 2011-12-01T16:45:10.468Z · LW(p) · GW(p)

Re: This event — the “Singularity” — will be the most important event in Earth’s history.

What: more important than the origin of life?!? Or are you not counting "history" as going back that far?

Replies from: lukeprog
comment by lukeprog · 2011-12-14T06:03:33.501Z · LW(p) · GW(p)

Changed.

comment by Vladimir_Nesov · 2011-11-26T17:26:02.096Z · LW(p) · GW(p)

(Non-leaf comment 1, to be deleted.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-11-26T17:26:11.489Z · LW(p) · GW(p)

(Leaf comment 2, to be deleted.)

comment by Alexei · 2011-11-26T17:56:02.859Z · LW(p) · GW(p)

How is this different from IntelligenceExplosion?

Replies from: dbaupp, None
comment by dbaupp · 2011-11-27T00:16:56.433Z · LW(p) · GW(p)

I think that IntelligenceExplosion is just a portal to make further research easier (by collecting links and references, etc), while Facing The Singularity is lukeprog actually explaining stuff (from the Preface):

I’ve been trying to answer those questions in a long series of brief, carefully written, and well-referenced articles, but such articles take a long time to write. It’s much easier to write long, chatty, unreferenced articles.

Facing the Singularity is my new attempt to rush through explaining as much material as possible. I won’t optimize my prose, I won’t hunt down references, and I won’t try to be brief. I’ll just write, quickly.

comment by [deleted] · 2011-11-26T19:51:36.063Z · LW(p) · GW(p)

Alexei, your link is broken. It directs to: http://www.http://intelligenceexplosion.com/

I think you meant: http://www.intelligenceexplosion.com/

Replies from: Alexei
comment by Alexei · 2011-11-26T21:03:45.571Z · LW(p) · GW(p)

Fixed, thanks!

Replies from: None
comment by [deleted] · 2011-11-26T21:48:45.323Z · LW(p) · GW(p)

Welcome!


comment by Dannil · 2013-04-13T21:21:46.825Z · LW(p) · GW(p)

I do not approve of the renaming, singularity to intelligence explosion, in this particular context.

Facing the Singu – Intelligence Explosion is an emotional piece of writing: there are sections about your (Luke's) own intellectual and emotional journey to singularitarianism, a section about how to overcome whatever quarrels one might have with the truth and the way towards it (Don't Flinch Away), and finally the utopian ending, which is obviously written to have emotional appeal.

The expression "intelligence explosion" does not have emotional appeal. The word intelligence sounds serious, and thus fits well in, say, the name of a research institute, but many people view intelligence as more or less the opposite of emotion, or at least as something geeky and boring. And while they are surely wrong to do so, as the text itself explains, the association still remains. The word "explosion" also has mostly negative connotations.

“Singularity”, on the other hand, has been hyped for decades: by science fiction, by Kurzweil, and even by SIAI before the rebranding. Sci-fi and Kurzweil may not have given the word the most thorough underpinning, but they gave it hype and recognition, and texts such as this could give it the needed foundation in reality.

I understand that the renaming is part of the "political" move of distancing MIRI from some of the hype, but for this particular text I reckon it a bad choice. "Facing the Singularity" would sell more copies.

comment by CharlesR · 2011-12-07T16:18:14.797Z · LW(p) · GW(p)

RE: The Crazy Robot's Rebellion

We wouldn’t pay much more to save 200,000 birds than we would to save 2,000 birds. Our willingness to pay does not scale with the size of potential impact. Instead of making decisions with first-grade math, we imagine a single drowning bird and then give money based on the strength of our emotional response to that imagined scenario. (Scope insensitivity, affect heuristic.)

People's willingness to pay depends mostly on their income. I don't understand why this is crazy.

UPDATED: Having read Nectanebo's reply, I am revising my original comment. I think if you have a lot of wasteful spending, then it does make you "crazy" if your amount is uncorrelated with the number of birds. On hearing, "Okay, it's really 200,000 birds," you should be willing to stop buying lattes and make coffee at home. (I'm making an assumption about values.) Eat out less. Etc. But if you have already done these things, then I don't see why your first number should change (at least if we're still talking about birds).

Replies from: Nectanebo, TheOtherDave, None
comment by Nectanebo · 2011-12-09T19:11:16.922Z · LW(p) · GW(p)

Not all of a person's money goes into one charity. A person can spend their money on many different things, and can choose how much to spend on each. Think of willingness to pay as a measure of how much you care. Basically, the bird situation is crazy because humans barely, if at all, feel a difference in how much they give a damn between something that has one positive effect and something that has 100x that positive effect!

To Luke: This person was reading about the biases you briefly outlined, and he ended up confused by one of the examples. While the linking helps a good deal, I think your overview of those biases may have been a little too brief: they might not really hit home with readers of your site, particularly those who aren't familiar with its topics and content. I don't think it would be a bad idea to expand on each of them just a little bit more.

Replies from: CharlesR
comment by CharlesR · 2011-12-09T21:03:23.994Z · LW(p) · GW(p)

I suppose if you are the sort of person who has a lot of "waste".

Replies from: Nectanebo
comment by Nectanebo · 2011-12-10T04:17:35.590Z · LW(p) · GW(p)

Edit: misread

comment by TheOtherDave · 2011-12-07T16:38:32.440Z · LW(p) · GW(p)

This comment confuses me.

The point of the excerpt you quote has nothing to do with income at all; the point is that (for example) if I have $100 budgeted for charity work, and I'm willing to spend $50 of that to save 2,000 birds, then I ought to be willing to spend $75 of that to save 10,000 birds, because that's an even better deal per bird (10,000/75 ≈ 133 birds per dollar vs. 2,000/50 = 40). But in fact many people are not.

Of course, the original point depends on the assumption that the value of N birds scales at least somewhat linearly. If I've concluded that 2000 is an optimal breeding population and I'm building an arcology to save animals from an impending environmental collapse, I might well be willing to spend a lot to save 2,000 birds and not much more to save 20,000 for entirely sound reasons.

Replies from: CharlesR
comment by CharlesR · 2011-12-07T17:09:30.690Z · LW(p) · GW(p)

If I budgeted $100 for charity work and I decided saving birds was the best use of my money then I would just give the whole hundred. If I later hear more birds need saving, I will feel bad. But I won't give more.

Replies from: Larks, TheOtherDave
comment by Larks · 2011-12-17T17:50:14.518Z · LW(p) · GW(p)

Suppose you budgeted $100 for charity, and then found out that all charities were useless - they just spent the money on cars for kleptocrats. Would you still donate the money to charity?

Probably not - because hearing that charity is less effective than you had thought reduces the amount you spend on it. Equally, hearing it is more effective should increase the amount you spend on it.

This principle is referred to as the Law of Equi-Marginal Returns.
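A minimal sketch of equi-marginal allocation, with hypothetical charities and made-up effectiveness numbers under diminishing returns: each marginal dollar goes to whichever option currently returns the most, so news that a charity is more (or less) effective than you thought shifts how much of the budget it gets.

    # Hypothetical: two charities whose marginal return diminishes as
    # spending on them grows.
    def marginal_return(dollars_spent, effectiveness):
        return effectiveness / (1 + dollars_spent)

    effectiveness = {"charity_a": 3.0, "charity_b": 1.0}  # made-up numbers
    spent = {"charity_a": 0, "charity_b": 0}

    # Allocate a $100 budget one dollar at a time to the best margin.
    for _ in range(100):
        best = max(spent, key=lambda c: marginal_return(spent[c], effectiveness[c]))
        spent[best] += 1

    print(spent)  # roughly {'charity_a': 75, 'charity_b': 25}

If charity_b's effectiveness drops to zero (the kleptocrat case), every marginal dollar goes to charity_a instead.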

comment by TheOtherDave · 2011-12-07T20:44:22.969Z · LW(p) · GW(p)

Yes, if saving birds is the best use of your entire charity budget, then you should give the whole $100 to save birds. Agreed.
And, yes, if you've spent your entire charity budget on charity, then you don't give more. Agreed.

I can't tell whether you're under the impression that either of those points is somehow responsive to my point (or to the original article), or whether you're not trying to be responsive.

Replies from: CharlesR
comment by CharlesR · 2011-12-07T22:40:34.766Z · LW(p) · GW(p)

I was describing how I would respond in that situation. The amount I would give to charity XYZ is completely determined by my income. I need you to explain to me why this is wrong.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-07T23:32:26.437Z · LW(p) · GW(p)

OK, if you insist.

The amount I give to charity XYZ ought not be completely determined by my income. For example, if charity XYZ sets fire to all money donated to it, that fact also ought to figure into my decision of how much to donate to XYZ.

What ought to be determined by my income is my overall charity budget. Which charities I spend that budget on should be determined by properties of the charities themselves: specifically, by what they will accomplish with the money I donate to them.

For example, if charities XYZ and ABC both save birds, and I'm willing to spend $100 on saving birds, I still have to decide whether to donate that $100 to XYZ or ABC or some combination. One way to do this is to ask how many birds that $100 will save in each case... for example, if XYZ can save 10 birds with my $100, and ABC can save 100 birds, I should prefer to donate the money to ABC, since I save more birds that way.

Similarly, if it turns out that ABC can save 100 birds with $50, but can't save a 101st bird no matter how much money I donate to ABC, I should prefer to donate only $50 to ABC.

Replies from: CharlesR
comment by CharlesR · 2011-12-08T05:29:08.477Z · LW(p) · GW(p)

From Scope Insensitivity:

Once upon a time, three groups of subjects were asked how much they would pay to save 2000 / 20000 / 200000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88 [1]. This is scope insensitivity or scope neglect: the number of birds saved - the scope of the altruistic action - had little effect on willingness to pay.

Now I haven't read the paper, but this implies there is only one charity doing the asking. First they ask how much you would give to save 2000 birds? You say, "$100." Then they ask you the same thing again, just changing the number. You still say, "$100. It's all I have." So what's wrong with that?
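For what it's worth, the scope-insensitivity reading of the quoted figures comes from the implied value per bird, which (taking those answers at face value) falls by a factor of about ninety while the number of birds grows a hundredfold:

    # Willingness to pay from the quoted study:
    # $80 / $78 / $88 for 2,000 / 20,000 / 200,000 birds.
    for birds, dollars in [(2000, 80), (20000, 78), (200000, 88)]:
        cents_per_bird = 100 * dollars / birds
        print(f"{birds:>6} birds at ${dollars}: {cents_per_bird:.3f} cents/bird")

    # Output:
    #   2000 birds at $80: 4.000 cents/bird
    #  20000 birds at $78: 0.390 cents/bird
    # 200000 birds at $88: 0.044 cents/bird

The bias reading assumes those answers were not at the subjects' budget limit; the budget-cap reading discussed below is the alternative.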

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-08T16:24:35.594Z · LW(p) · GW(p)

Agreed: if I assume that there's a hard upper limit being externally imposed on those answers (e.g., that I only have $80, $78, and $88 to spend in the first place, and that even the least valuable of the three choices is worth more to me than everything I have to spend) then those answers don't demonstrate interesting scope insensitivity.

There's nothing wrong with that conclusion, given those assumptions.

comment by [deleted] · 2011-12-07T16:32:56.417Z · LW(p) · GW(p)

Have you read Scope Insensitivity? It's not just an income effect--human beings are really bad at judging effect sizes.

Replies from: CharlesR
comment by CharlesR · 2011-12-07T16:53:01.668Z · LW(p) · GW(p)

Of course, I've read it. My problem isn't with scope insensitivity. Just this example.

comment by dgsinclair · 2011-12-30T23:35:54.723Z · LW(p) · GW(p)

Luke, while I agree with the premise, I think that the bogeyman of machines taking over may be either inevitable or impossible, depending on where you put your assumptions.

In many ways, machines have BEEN smarter and stronger than humans already. Machine AI may make individual machines or groups of machines formidable, but until they can reason, replicate, and trust or deceive, I'm not sure they have much of a chance.

Replies from: lessdazed, dbaupp
comment by lessdazed · 2011-12-31T03:01:32.674Z · LW(p) · GW(p)

until they can...trust

Trust is one of the top four strengths they're missing?

comment by dbaupp · 2012-01-03T12:19:17.322Z · LW(p) · GW(p)

reason

What does "to reason" mean?

replicate

Getting there.

trust

Again, define "to trust".

deceive

Computers can deceive; they just need to be programmed to (which is not hard). (I remember reading an article about computers strategically lying (or something similar) a while ago, but unfortunately I can't find it again.)

(Although, it's very possible that a computer with sufficient reasoning power would just exhibit "trust" and "deception" (and self-replicate), because they enabled it to achieve its goals more efficiently.)

comment by ophiuchus13 · 2012-03-20T03:41:36.085Z · LW(p) · GW(p)

'

comment by Lawless · 2012-08-15T19:15:28.809Z · LW(p) · GW(p)

This entire singularity thing seems, to put it very politely, misguided. Have you read the book "On Intelligence"? I'd say that computers are nowhere near becoming intelligent. The ability to play chess is not intelligence.

comment by ophiuchus13 · 2012-03-20T03:38:45.075Z · LW(p) · GW(p)

This pseudo-intellectual garbage is time consuming. Look for your Creator while there is still time.

comment by ophiuchus13 · 2012-03-20T03:25:04.117Z · LW(p) · GW(p)

The author of this article, Facing the Singularity, is a pseudo-intellectual that is divulging garbage over the Internet. What the world needs at this time is true wisdom which you will find in the Scriptures only. All human endeavors are coming to an end. Look for God while there is still time.

comment by dgsinclair · 2011-12-30T00:42:13.084Z · LW(p) · GW(p)

It is a pity that you use creationists as an example here, since I think this is exactly how evolutionists think and act. The evidence that you say is so strong in support of common descent just is not. Endogenous retroviruses are not a slam dunk at all, and I say that as someone with a biochemistry degree.

The main reason this is a really bad example is that it involves historical evidence, not empirical, and it involves origins, which is, to say the least, highly speculative due to the historical distance. While evolutionists DO have the advantage of appealing to natural processes, and IDists do not appeal to current processes (though they don't deny natural selection or various recombination events), the latter do contest the supposed creative ability of evolution to produce novel features, and this is eminently reasonable at this time.

Your example of self-deception with creationists is poor for many reasons. For example, you speak of the missing-link tactic of creating "two more every time one is suggested." While this is a cheap dodge, it does bring up some critical points which evolutionary believers also ignore: how similar, and by what measure, should two things be to be considered a definite link with no need to insert another? Pure morphology has turned out to be a bust when we consider molecular evidence. And the latter has shown that our assumptions about relatedness are highly speculative, if not so simple that they don't provide ANY useful relational evidence.

Like to see how missing links really work? Google "missing link found," check out the recent supposed human links found, and see how many have turned out to be spurious - nearly ALL of them. They're trumpeted from the media housetops when they're found, but no one peeps when they are, almost universally, debunked under scrutiny. This is the corollary to your example. Evolutionary believers fail to consider counter-indications seriously because it is a worldview issue.

I've written a few relevant posts on this, hope it's ok to post them:

Mass Delusion - 10 Reasons Why the Majority of Scientists Believe in Evolution http://www.wholereason.com/2011/01/mass-delusion-10-reasons-why-the-majority-of-scientists-believe-in-evolution.html

Fossil evidence sends human evolution theory into tailspin http://www.wholereason.com/2007/08/fossil-evidence-sends-human-evolution-theory-into-tailspin.html

Evolutionary Trees - In Flux or Broken and Bogus? http://www.wholereason.com/2007/06/evolutionary-trees-in-flux-or-broken-and-bogus.html

13 Misconceptions About Evolution http://www.wholereason.com/2011/04/13-misconceptions-about-evolution.html

I get ruffled when IDists or creationists are paraded as examples of brainwashing or self-deception, primarily because I was an evolutionary disciple as a science major and found my way out of that system into one where I concluded for my SELF that logic and common sense indicate a designer/creator.

Replies from: Incorrect, dgsinclair
comment by Incorrect · 2011-12-30T00:48:37.108Z · LW(p) · GW(p)

The theory of evolution allows us to reduce the huge complexity of biology to simple starting conditions with a relatively simple set of rules. Regardless of whether we allegedly have to add exceptions to our theory to explain missing links, the information in those exceptions is small compared to the huge amount of biological complexity we can explain with evolution.

Creationism and ID add MORE complexity while deliberately avoiding paying us back for that complexity with predictions, and they furthermore assume the existence of an entire (possibly human-like) intelligent agent without suggesting any reduction of this agent to simple starting conditions.

Replies from: dgsinclair
comment by dgsinclair · 2011-12-30T18:21:22.944Z · LW(p) · GW(p)

No doubt evolution is a simplified rule set, but in empirical tests, as well as in historical interpretation of data, it has many failings which, as Luke has pointed out for certain creationists, evolutionary believers shy away from, hiding in self-deception in order to keep their beliefs safe.

But this is not a post about creation/evolution - my point was that his use of creationists was a poor choice because (a) creationism is believed by a majority of Americans, and so will turn them off from his main point, and (b) the idea that the matter is settled scientifically is dubious, since origins science is more interpretation than demonstrable fact, and both sides of that debate have strong ideological reasons to believe, and scientific reasons to doubt, which they ignore.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-12-30T23:16:20.898Z · LW(p) · GW(p)

(a) creationism is believed by a majority of Americans, and so will turn them off from his main point,

Can people who believe in a God that benevolently created us and looks over us even come to consider the possibility of existential dangers or a human-steered Singularity? Frankly, if they are creationists, I think they are largely irrelevant to a Singularity discussion until they shed such beliefs.

comment by dgsinclair · 2011-12-30T00:52:36.680Z · LW(p) · GW(p)

One more thing. If you want a wider audience to access the point you are making (remember how many people are creationists here in the US), you should use a more accessible and universally accepted example, like the Japanese soldier one you used. If you want a contemporary example, choose something there is more agreement on, or people will miss your point - it's like calling your opponent a Nazi: you've already lost the argument even if you are right.

I suppose if you are only addressing the skeptical audience, you could use such an example, the way I could use the example of atheists who ignore the obviousness of God's existence as witnessed in creation if I were talking to Christians. But if I am trying to also reach atheists, perhaps I would use a different example.

Replies from: dgsinclair
comment by dgsinclair · 2011-12-30T23:02:14.913Z · LW(p) · GW(p)

The lack of responses and negative scores on my comment show me that (1) it is easier to vote down a post than post a reasoned response, and (2) it is easier to scoff at opponents and think them fools than confront one's own self-deceptive behaviors, the very purpose of Luke's post.

Replies from: dbaupp
comment by dbaupp · 2012-01-03T12:08:07.241Z · LW(p) · GW(p)

The lack of responses and negative scores on my comment show me that (1) it is easier to vote down a post than post a reasoned response, and (2) it is easier to scoff at opponents and think them fools than confront one's own self-deceptive behaviors, the very purpose of Luke's post.

No, it is simply that LW has covered these issues and considers them solved* and so downvotes/ignores people asserting otherwise.

*the weight of evidence points towards evolution, and every point proposed by proponents of creationism and ID has been refuted (do you have a distinctly novel and original argument for creationism/ID? If you don't, then you are wasting your time).

comment by MysTerri · 2012-02-16T19:50:11.029Z · LW(p) · GW(p)

In the Chapter titled, 'No God to Save Us', you said, "We are often weak and stupid, but we must try, for there is no god to save us. Truly terrible outcomes may be unthinkable to humans, but they aren’t unthinkable to physics."

God gave us a beautiful world, but when man developed, built, and used HAARP, it was not God's idea. God's idea was for us to enjoy and care for the world He provided, the vacation spot of all the known universe, and He has provided the peace and love, within every human, that is needed to make it the Heaven God intended it to be here on Earth. We have allowed a few with unconscious agendas of greed to rip off our Heaven. It is like God gave a diamond ring to a gorilla. The gorilla did not see its true value and has tossed it away.

This was man's doing, not God's. God gave us this world, where He is the host and we the guests, and He made us so that we are the host and He is to be the guest. Yet so many have turned a blind eye to the beauty around us and have destroyed the place in record time. They have not invited the Guest, our Host here on Earth, into their hearts.

When man built HAARP and put it to use, he condemned all of mankind. Physics tells you that when you increase the electromagnetic field of an object, its magnetism increases as well. This has made our Earth and our solar system (something that could have lasted much longer) into the losing end of a tug of war with another magnet with a much bigger pull. With the billions of watts HAARP sends into our atmosphere, we have doomed ourselves, as anything with a greater magnetic force will pull us in. I submit that the year HAARP was first activated is the same year recorded as the beginning of the hot years and the beginning of not only global warming but the heating up of our entire solar system. We are being pulled in by a much greater magnet, a black hole.

There is nothing we can do now but prepare ourselves for the inevitable by finding that love, that peace that is within us, whose value the "gorillas" have disregarded. There is peace within us all, even in the "gorillas", who have chosen to arrogantly ignore it. It is our privilege to acknowledge the peace and humanity within us, even now, although it is too late to save us from our own stupid and selfish misdeeds. God did not do this to us. We did it to Him by ignoring and devaluing the beauty of humankind, by ignoring our own hearts.

Replies from: Alicorn, MixedNuts
comment by Alicorn · 2012-02-16T19:59:50.058Z · LW(p) · GW(p)

This is not the website for you.

comment by MixedNuts · 2012-02-16T20:31:03.346Z · LW(p) · GW(p)

I am a physicist. I am, as we speak, working on a project related to magnetism. I know exactly how magnetisation works. Take it from a domain expert: what you are saying is nonsense. Black holes are not magnets. We know what magnets do to the Earth and the solar system, and it does not involve pulling in anything. Even if it did, changing the Earth's magnetisation could not do this. I am not an expert on HAARP, but if it had made large changes to anything magnetic we would have noticed - the whole point is to measure magnetic changes.

Replies from: shminux
comment by shminux · 2012-02-16T21:02:03.172Z · LW(p) · GW(p)

It is probably best to refrain from replying to comments like MysTerri's, because s/he is clearly not receptive to reason. Alicorn's reply was the only one needed, plus some silent downvoting.

Replies from: bcoburn, MixedNuts
comment by bcoburn · 2012-02-20T15:33:05.450Z · LW(p) · GW(p)

In a situation this specific, it seems to me to be worthwhile to reply exactly once, in order to inform other readers. Don't expect to change the troll's opinion, but making one comment in order to prevent them from accidentally convincing other people seems worthwhile.

Replies from: shminux
comment by shminux · 2012-02-20T20:31:37.648Z · LW(p) · GW(p)

"I believe I said that, Doctor" -- Spock

comment by MixedNuts · 2012-02-16T21:11:43.837Z · LW(p) · GW(p)

Usually I wouldn't, but there was

  • a clear claim made
  • that I can disprove to the necessary level of precision
  • based on explicit deference to physics

So it's not impossible that MysTerri will retract this particular claim, and that's a decent hook for doubting the rest of the theory. Other responses include accusing me of not being an expert (or lying), renouncing respect for physics, or going content-free.

Also, I'm not sure why you reply here rather than in private, given your reasons.