Bloggingheads: Yudkowsky and Horgan

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-07T22:09:00.000Z · LW · GW · Legacy · 37 comments

I appear today on Bloggingheads.tv, in "Science Saturday: Singularity Edition", speaking with John Horgan about the Singularity.  I talked too much.  This episode needed to be around two hours longer.

One question I fumbled at 62:30 was "What's the strongest opposition you've seen to Singularity ideas?"  The basic problem is that nearly everyone who attacks the Singularity is either completely unacquainted with the existing thinking, or they attack Kurzweil, and in any case it's more a collection of disconnected broadsides (often mostly ad hominem) than a coherent criticism.  There's no equivalent in Singularity studies of Richard Jones's critique of nanotechnology - which I don't agree with, but at least Jones has read Drexler.  People who don't buy the Singularity don't put in the time and hard work to criticize it properly.

What I should have done, though, was interpreted the question more charitably as "What's the strongest opposition to strong AI or transhumanism?" in which case there's Sir Roger Penrose, Jaron Lanier, Leon Kass, and many others.  None of these are good arguments - or I would have to accept them! - but at least they are painstakingly crafted arguments, and something like organized opposition.

37 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Roko · 2008-06-08T00:46:06.000Z · LW(p) · GW(p)

That interview was quite funny. I really admire your patience, especially when Horgan made certain errors of reasoning that you'd carefully told him not to make earlier in the interview!

comment by poke · 2008-06-08T01:25:31.000Z · LW(p) · GW(p)

Eliezer, serious question: why don't you re-brand your project as designing a Self-Improving Automated Science Machine rather than a Seed AI, and generalize Friendly AI to Friendly Optimization (or similar)? It seems to me that: (a) this would be more accurate, since it's not obvious (to me at least) that individual humans straightforwardly exhibit the traits you describe as "intelligence"; and (b) you'd avoid 90% of the criticism directed at you. You could, for example, avoid the usual "people have been promising AI for 60 years" line of argument.

comment by Ben6 · 2008-06-08T01:29:46.000Z · LW(p) · GW(p)

I found the discussion really interesting, though I'm still fairly lost as to what this singularity stuff is. Don't worry about talking too much; as a regular bhtv viewer, I'd insist that John Horgan generally talks too much. He kinda likes to make forceful claims about admittedly non-scientific questions, generally of the form "branch of science X won't amount to anything useful." He'll give non-scientific reasons why he supports that position, but then respond to criticisms of his argument by either a) repeating his claim or b) shrugging off the objection as non-scientific.

I was quite surprised by your quoting Robert Pirsig, as I've taken him to be pretty marginalized by a lot of thinkers. Do his ideas play into the singularity?

comment by cole_porter · 2008-06-08T01:30:10.000Z · LW(p) · GW(p)

John Horgan is a sloppy thinker. But if this was a contest to strengthen vs. weaken the credibility of AI research -- a kind of status competition -- then I think he got the better of you.

Is it important to convince nonprofessionals that the singularity is plausible, in advance of it actually happening? If so, then you need to find a way to address the "this is just an apocalyptic religion" charge that Mr. Horgan brings here. It will not be the last time you hear it, and it is particularly devastating in its own somewhat illogical way: 1. All people dismiss most claims that their lives will be radically different in the near future, without giving due consideration. 2. This behavior is rational! At least, it is useful, since nearly all such claims are bogus and "due consideration" is costly. 3. Your own claims can be easily caricatured as resembling millenarian trash (singularity = rapture, etc. One of the bloggingheads commenters makes a crack about your "messianism" as a product of your Jewish upbringing.)

How do you get through the spam filter? I don't know, but "read my policy papers" sounds too much like "read my manifesto." It doesn't distinguish you from crazy people before they read it, so they won't. (Mr. Horgan didn't. Were you really surprised?) You need to find sound bites if you're going to appear on bloggingheads at all.

In the political blogosphere, this is called "concern trolling." Whatever.

comment by bjkeefe · 2008-06-08T01:40:50.000Z · LW(p) · GW(p)

Eli:

You presented so many intriguing ideas in that diavlog that I can't yet say anything meaningful in response, but I did want to drop by to tell you how much I enjoyed the diavlog overall. I hope to see you come back to do another one, whether with John or with somebody else.

I do think John's excessive and somewhat kneejerk skepticism interfered with your presenting your ideas, but perhaps he helped you to make some points in reaction. Anyway, I could tell that you had a lot more to say, and in addition to reading your work, I look forward to hearing you talk about it some more.

comment by JulianMorrison · 2008-06-08T01:40:55.000Z · LW(p) · GW(p)

Goodness me. "How will superintelligence help me make better flint arrows? Thag, who is clever, doesn't seem to be any good at making arrows. Oh and by the way let me interrupt you on some other tangent..."

I don't see how you keep your morale up.

comment by Hopefully_Anonymous · 2008-06-08T01:56:13.000Z · LW(p) · GW(p)

Interesting. Would be even better if you did this with Robin, Nick, etc. and on a weekly basis.

comment by John · 2008-06-08T04:48:23.000Z · LW(p) · GW(p)

@poke:

I imagine Eliezer is more interested in doing what works than avoiding criticism. And the real danger associated with creating a superhuman AI is that things would spiral out of control. That danger is still present if humanity is suddenly introduced to 24th century science.

comment by edbarbar · 2008-06-08T05:04:30.000Z · LW(p) · GW(p)

Eli, I enjoyed your conversation with John today, though I suspect he would have tried to convince the Wright brothers to quit because so many had failed.

I read your essay on Friendly AI, and think it is off the mark. If the singularity happens, there will be so many burdens the AI can throw off (such as anthropomorphism) that it will be orders of magnitude superior very quickly. I think the apt analogy isn't that we would be apes among men, but more like worms among men. Men need not, and should not, be concerned with worms, and worms aren't all that important in a world with men.

Ed

comment by Hopefully_Anonymous · 2008-06-08T05:14:16.000Z · LW(p) · GW(p)

John, I think a good case could be made that things are currently out of control, and that the danger of a superhuman AI gone wrong would be that we'd cease to exist as subjective conscious entities.

edbarbar, I think the apt analogy may be something like that we'd be silicon sand among IT companies.

comment by Unknown · 2008-06-08T05:52:48.000Z · LW(p) · GW(p)

Eliezer, it's possible for there to be a good argument for something without that implying that you should accept it. There might be one good argument for X, but 10 good arguments for not-X, so you shouldn't accept X despite the good argument.

This is an important point because if you think that a single good argument for something implies that you should accept it, and that therefore there can't be any good arguments for the opposite, this would suggest a highly overconfident attitude.
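
Put roughly in Bayesian terms (a sketch only, treating each argument as an independent piece of evidence $e_i$ for or against $X$): the arguments combine additively in log-odds,

$$\log O(X \mid e_1, \dots, e_n) = \log O(X) + \sum_{i=1}^{n} \log \frac{P(e_i \mid X)}{P(e_i \mid \neg X)},$$

so one term with a large positive log-likelihood ratio can still be outweighed by ten terms with moderately negative ones.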

comment by edbarbar · 2008-06-08T06:42:50.000Z · LW(p) · GW(p)

Hopefully anonymous: There are strong warnings against posting too much, but my personal suspicion is that the next generation of AI will not colonize other planets, convert stars, or do any of the things we see as huge and important, but will go in the opposite direction and become smaller and smaller. At least, should the thing decide that survival is ethical and desirable.

But as sand or worms or simply irrelevant, the result is the same. We shouldn't be worried that our children consume us: it's the nature of life, and that will continue even with the next super intelligent beings. To evolve, everything must die or be rendered insignificant, and there is no escape from death even for stagnant species. I think that will hold true for many generations.

comment by [deleted] · 2008-06-08T07:12:03.000Z · LW(p) · GW(p)

deleted

comment by Matthew2 · 2008-06-08T07:19:52.000Z · LW(p) · GW(p)

He wasted 90% of the interview, because Yudkowsky discussed how to be rational rather than addressing the implications of AGI being possible.

How does Yudkowsky's authority change our view of the feasibility of AGI being developed quickly, when most experts clearly disagree? We need to go from "the elders were wrong in their technique" to the actual path to AGI.

And what about the claim that a billion dollar project isn't needed? Singinst thinks they can do it alone, with a modest budget of a few millionaires? Isn't this a political position?

I am glad Yudkowsky is trying so hard but it seems he is doing more politics and philosophy than research. Perhaps in the long term this will be more effective, as the goal is to win, not to be right.

comment by Ian_C. · 2008-06-08T08:11:48.000Z · LW(p) · GW(p)

It was OK until the interviewer started going on about his ridiculous communist utopia, and in almost the same breath he accused Eliezer of being pie in the sky!

By the way Eli, if you put an MP3 clip of how to pronounce your name somewhere on the web, maybe interviewers wouldn't have to ask all the time (don't you get sick of that?).

comment by FrF · 2008-06-08T09:01:37.000Z · LW(p) · GW(p)

I'd like to read/hear an interview with Eliezer where he talks mainly about SF. Sure, we have his bookshelf page, but it is nearly ten years old and far from comprehensive enough to satisfy my curiosity!

Or how about an annotated general list from Eliezer titled "The 10/20/30/... most important books I read since 1999"?

comment by Matt4 · 2008-06-08T09:01:48.000Z · LW(p) · GW(p)

"That's not how I experience my epiphanies, it's just sort of 'Oh, that's obviously correct.'"

I found that comment really resonated with me, but having been exposed to experimental psychology (which by a roundabout route is what led me to this blog in the first place), I've always struggled with how to distinguish that response from confirmation bias. It seems to me that I have in fact radically changed my opinions on certain issues on the basis of convincing evidence (convincingly argued), but that could just as well be revisionist memories.

comment by Tim_Tyler · 2008-06-08T10:38:23.000Z · LW(p) · GW(p)

Re: "Objections to the singularity" - if the singularity is defined as being an "intelligence explosion", then it's happening now - i.e. see my:

http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

comment by Tim_Tyler · 2008-06-08T11:18:28.000Z · LW(p) · GW(p)

"Strong AI" seems like a bit of an oxymoron. Maybe we should be using "Powerful AI" or "Smart AI" - rather than perpetuating an old misunderstanding of Searle's stupid game:

http://en.wikipedia.org/wiki/Strong_AI#Origin_of_the_term:_John_Searle.27s_strong_AI

comment by Shane_Legg · 2008-06-08T13:11:05.000Z · LW(p) · GW(p)

I think Horgan's questions were good in that they were a straightforward expression of how many sceptics think. My own summary of this thinking goes something like this:

The singularity idea sounds kind of crazy, if not flat-out ridiculous. Super intelligent machines and people living forever? I mean... come on! History is full of silly predictions about the future that turned out to be totally wrong. If you want me to take this seriously you're going to have to present some very strong arguments as to why this is going to happen.

Although I agree with most of what Eli said, rhetorically it sounded like he was avoiding this central question with a series of quibbles and tangents. This is not going to win over many sceptics' minds.

I think it's an important question to try to answer as directly and succinctly as possible -- a longish "elevator pitch" that forms a good starting point for discussion with a sceptic. I'll think about this and try to write a blog post.

Replies from: Dojan
comment by Dojan · 2011-12-24T13:42:52.832Z · LW(p) · GW(p)

Did you ever formulate anything good? I'd be interested to read it if so; I'm having trouble keeping the attention of my friends and family for long enough to explain...

comment by Recovering_irrationalist · 2008-06-08T13:57:20.000Z · LW(p) · GW(p)

FrF: Or how about an annotated general list from Eliezer titled "The 10/20/30/... most important books I read since 1999"?

That would be great, but in the meantime see these recommendations.

comment by anonymous34 · 2008-06-08T16:28:12.000Z · LW(p) · GW(p)

No offense to Horgan, but I can't help but feel that he made a bad career choice in becoming a science journalist ... should've picked sports or something.

comment by Latanius2 · 2008-06-08T16:30:18.000Z · LW(p) · GW(p)

Well... I liked the video, especially watching how all the concepts mentioned on OB before work in... real life. But showing how you should think to be effective (which Eliezer is writing about on OB) is a different goal from persuading people that the Singularity is not some other dull pseudo-religion. No, they haven't read OB, and they won't even have a reason to if they are told "you won't understand all this all of a sudden, see inferential distances, which is a concept I also can't explain now". To get through their spam filter, we'll need stories, even details, with a "this is very improbable, but if you're interested, read OB" disclaimer at the end. See the question "but how could we use AI to fight poverty etc."... Why is the Singularity still "that strange and scary prediction some weird people make without any reason"?

comment by Caledonian2 · 2008-06-08T17:01:48.000Z · LW(p) · GW(p)

But all of the beliefs about what the world will do once it hits a Singularity ARE a dull religion, because the whole point of a Singularity is that we can't trust our ability to extrapolate and speculate beyond it.

comment by Ian_C. · 2008-06-08T17:24:41.000Z · LW(p) · GW(p)

The interviewer accused Eliezer of being religious-like. But if the universe is deterministically moving from state to state then it's just like a computer, a machine that moves predictably from state to state. Therefore it's not religious at all to believe anything in the world (including intelligence) could eventually be reproduced in a computer.

But of course the universe is not like a computer. Everything a computer does until the end of time is implied in its initial state, the nature of its CPU, and subsequent inputs. It can never deviate from that course. It can never choose like a human, therefore it can never model a human.

And it's not possible to rationally argue that choice is an illusion, because reason uses choice in its operations. If you use something in the process of arguing against it, you fall into absurdity. E.g. your proof comes out something like: "I presumed P, pondered Q and R, chose R, reasoned thusly about R vs S, finally choosing S. Therefore choice isn't really choosing."

comment by Silas · 2008-06-08T19:09:13.000Z · LW(p) · GW(p)

Eliezer_Yudkowsky: Considering your thrice-daily mention of Aumann (two-month running average), shouldn't you have been a little more prepared for a question like that?

Btw, I learned from that video that your first name has four syllables rather than three.

comment by Michael_G.R. · 2008-06-08T19:26:55.000Z · LW(p) · GW(p)

You need a chess clock next time. John talks way too much.

comment by Joseph_Knecht · 2008-06-09T06:08:33.000Z · LW(p) · GW(p)

AI researchers of previous eras made predictions that were wildly wrong. Therefore, human-level AI (since it is a goal of some current AI researchers) cannot happen in the foreseeable future. They were wrong before, so they must be wrong now. And dawg-garn it, it seems like some kind of strange weirdo religious faith-based thingamajiggy to me, so it must be wrong.

Thanks for a good laugh, Mr. Horgan! Keep up the good work.

comment by Nick_Tarleton · 2008-06-09T23:12:28.000Z · LW(p) · GW(p)

Ian: what makes you think the things humans do aren't implied by their initial state, nature, and inputs? The form of choice reason demands (different outputs given different inputs) is perfectly compatible with determinism; in fact it requires determinism, since nondeterministic factors would imply less entanglement between beliefs and reality. If your conclusion is not totally determined by priors and evidence, you're doing something wrong.

comment by Nate4 · 2008-06-10T05:59:49.000Z · LW(p) · GW(p)

I thought you did an excellent job.

comment by Michael_Sullivan · 2008-06-10T15:53:28.000Z · LW(p) · GW(p)

I would think the key line of attack in trying to describe why a singularity prediction is reasonable is in making clear what you are predicting and what you are not predicting.

Guys like Horgan hear a few sentences about the "singularity" and think humanoid robots, flying cars, phasers and force fields, that we'll be living in the Star Trek universe.

Of course, as anyone with the Bayes-skilz of Eliezer knows, start making detailed predictions like that and you're sure to be wrong about most of it, even if the basic idea of a radically altered social structure and technology beyond our current imagination is highly probable. And that's the key: "beyond our current imagination". The specifics of what will happen aren't very predictable today. If they were, we'd already be in the singularity. The things that happen will seem strange and almost incomprehensible by today's standards, in the way that our world is strange and incomprehensible by the standards of the 19th century.
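
In probability terms (a sketch of the same point; the listed details and the 50% figure are purely illustrative), a conjunction of specific predictions can only be as probable as, or less probable than, the general claim it elaborates:

$$P(\text{radical change} \wedge \text{robots} \wedge \text{flying cars} \wedge \text{phasers}) \le P(\text{radical change}).$$

If each added detail is only 50% likely given the others, four details already cut the probability by a factor of $2^4 = 16$.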

The last 200 years already are much like a singularity from the perspective of someone looking forward from 15th-century Europe and getting a vision of what happened between 1800 and 2000, even though the basic groundwork for that future was already being laid.

comment by Ian_C. · 2008-06-11T08:00:26.000Z · LW(p) · GW(p)

Nick: "what makes you think the things humans do aren't implied by its initial state, nature, and inputs?"

What humans do is determined by their nature, just like with a computer. The difference is, human nature is to be able to choose, and computer nature is not.

"The form of choice reason demands (different outputs given different inputs) is perfectly compatible with determinism, in fact it requires determinism, since nondeterministic factors would imply less entanglement between beliefs and reality. If your conclusion is not totally determined by priors and evidence, you're doing something wrong."

You're not doing something wrong, because I don't think reason is pure discipline, pure modus-ponens. I think it's more like tempered creativity - utilizing mental actions such as choice, focus, imagination as well as pure logic. The computer just doesn't have what it takes.

But the point I was making is that the whole idea of reason wouldn't arise in the first place without prior acceptance of free will. It is only by accepting that we control our minds that the question of how best to do so arises, and ideas like reason, deduction etc. come to be.

All these ideas therefore presuppose free will in their very genesis, and cannot validly be used to argue against it. It would be like trying to use the concept "stealing" in a proof against the validity of "property" - there is no such thing as stealing without property. Likewise, there is no such thing as reason without free will.

comment by Tim_Tyler · 2008-07-12T08:57:58.000Z · LW(p) · GW(p)

From the BH comments:

I've been reading overcomingbias.com for a long time, more out of interest than because I agree with their world view. It's certainly one of the most pretentious and elitist blogs on the internet. They need to learn humility.

comment by Tim_Tyler · 2008-07-20T14:22:21.000Z · LW(p) · GW(p)

See also Michael Anissimov's dissection of this discussion.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-13T15:22:02.000Z · LW(p) · GW(p)

If re-asked the question "What's the strongest criticism you've seen of Singularity ideas?" I would now be able to unhesitatingly answer "Robin Hanson's critique of hard takeoff."

comment by themusicgod1 · 2017-04-25T22:47:56.184Z · LW(p) · GW(p)

My concern isn't with the interview per se (everything I would add would best be put in another thread). It's with the reaction here in the comments.

That 90% wasn't a waste any more than overcomingbias as a blog is a waste. Horgan is hardly alone in remembering the Fifth Generation Project, and it was worth it to get Yudkowsky to hammer out, once more, to a new audience, why what happened in the '80s is not representative of what is to come in the 10ky timeframe. Those of you who are hard on Horgan: he is not one of you. You cannot hold him to LW standards. Yudkowsky has spent a lot of time and effort trying to get other people not to make mistakes (for example, mislabeling broad singularitarian thought onto him, as if he's Kurzweil, Vinge, the entirety of MIRI, and whatnot personified), so it's understandable why he might be annoyed, but at the same time... the average person is not going to bother with the finer details. He probably put in about as much or more journalistic work as the average topic requires. This just goes to drive home how different intelligence is from other fields, and how hard science journalism can be in a world with AI research.

It's frustrating because it's hard. It's hard for many reasons, but one reason is that the layman's priors are very wrong. This it shares (for good reason) with economics and psychology more generally: people who are not in the field bring to the table a lot of preconceptions that have to be dismantled. Dismantling them all is a lot of work for a one-hour podcast. Like those who answer Yahoo! Answers questions, Horgan is a critical point of contact who needs to be convinced on his own terms, standing between Yudkowsky and a substantial chunk of the billion-plus people who lived through the '80s and are not following where science is being taken here.