[POLL] AI-FOOM Debate in Sequence Reruns?

post by MinibearRex · 2012-11-01T04:12:51.457Z · score: 11 (12 votes) · LW · GW · Legacy · 16 comments

We're now at the point in the sequences when the AI-FOOM Debate took place between Eliezer Yudkowsky and Robin Hanson. Do people want me to include them in the sequence rerun posts, and if so, how? Should I make one post a day? Should I post all of the posts that were made in one day back in 2008, so that we would possibly get one Yudkowsky and one Hanson post in a single day of reruns? If I'm rerunning two posts a day, should I make one rerun post or two? When rerunning a Hanson post, how should the standard rerun template be adjusted? 

I titled this as a poll, but that's not really what I want to do, since I'm not sure I have come up with all of the relevant options. I've got my own thoughts on all of the questions I just asked, but I'm going to hold off on mentioning them, in the interests of sparking discussion. Whatever I do, I will need to decide fairly quickly (as in, next three days at the latest).

I'm sorry that this is such short notice; I've been busy and wasn't really looking that far ahead.

How should I do this?

16 comments

Comments sorted by top scores.

comment by David_Gerard · 2012-11-01T10:50:16.485Z · score: 20 (20 votes) · LW · GW

Run both sides. It's a good worked example of two smart people talking past each other.

comment by fortyeridania · 2012-11-01T14:24:12.561Z · score: 2 (2 votes) · LW · GW

Yes, I agree.

comment by Lapsed_Lurker · 2012-11-01T23:48:17.677Z · score: 0 (0 votes) · LW · GW

I remember seeing a few debates about AI (and sometimes other things), mostly on YouTube, where they'd just be getting to the point of clarifying what each person actually believes, and you get: 'agree to disagree'. The end.

Just when the really interesting part seemed to be approaching! :(

For text-based discussions that fail to go anywhere, that brings to mind the 'talking past each other' you mention, or 'appears to be deliberately misinterpreting the other person'.

comment by Raemon · 2012-11-01T04:26:43.252Z · score: 14 (14 votes) · LW · GW

I have no strong preference (I guess two a day, one of each, sounds good). But I'd like to take this opportunity to thank you for doing this, in general.

comment by fortyeridania · 2012-11-01T14:25:08.628Z · score: 3 (3 votes) · LW · GW

But I'd like to take this opportunity to thank you for doing this, in general.

Seconded.

comment by MinibearRex · 2012-11-02T05:32:57.297Z · score: 1 (1 votes) · LW · GW

Thanks. I appreciate that.

comment by Lapsed_Lurker · 2012-11-01T10:36:16.013Z · score: 5 (5 votes) · LW · GW

Has there been any evolution in either of their positions since 2008, or is that the latest we have?

edit Credit to XiXiDu for sending me this OB link, which contains in the comments this YouTube video of a Hanson-Yudkowsky AI debate in 2011. Boiling it down to one sentence, I'd say it amounts to Hanson thinking that a singleton Foom is a lot less likely than Yudkowsky thinks.

Is that more or less what it was in 2008?

comment by MinibearRex · 2012-11-02T05:24:37.864Z · score: 0 (0 votes) · LW · GW

I think so, but truth be told, I've never actually read through all of it myself. All of the bits of it I've seen seem to indicate that they hold similar positions in those debates to their positions in the original argument.

comment by RobertLumley · 2012-11-01T15:42:44.315Z · score: 4 (6 votes) · LW · GW

I would say stick with one post per day and alternate between Hanson and Yudkowsky.

comment by Decius · 2012-11-01T16:06:17.510Z · score: 0 (0 votes) · LW · GW

Concur

comment by Furcas · 2012-11-01T04:32:04.045Z · score: 3 (11 votes) · LW · GW

I'd say no, don't re-post the debate at all. First, it teaches nothing about rationality. Second, it was... kind of bad.

comment by wedrifid · 2012-11-01T05:42:11.800Z · score: 3 (7 votes) · LW · GW

First, it teaches nothing about rationality.

Yes it does. In fact, memory suggests that part of the problem with Eliezer's posts was that he was stuck explaining foundational concepts of how to reason rather than shooting out carefully crafted conclusions.

comment by Furcas · 2012-11-01T06:12:24.036Z · score: 7 (9 votes) · LW · GW

Looking at the list of posts, you're right, there is some stuff about rationality, like Is that your true rejection? It just has very little to do with the AI-foom debate.

So I'll amend my previous post: Don't post any of the actual debate, but extract the posts from the sequence that are about rationality.

comment by MinibearRex · 2012-11-02T05:31:06.229Z · score: 0 (0 votes) · LW · GW

What do people think of this idea? I'm personally interested in reading all of the debate, and I think I will, no matter what I wind up posting, so nobody else needs to feel lonely if they want to see all of it.

comment by chaosmosis · 2012-11-02T04:02:51.732Z · score: 1 (3 votes) · LW · GW

Concur. Hanson didn't apply his knowledge of calculus, or felt it was unjustified to do so because he believes too strongly in empirical data and not strongly enough in analytical arguments. Yudkowsky repeated himself over and over and talked about side issues that weren't the cause of Hanson's rejection.

comment by NancyLebovitz · 2012-11-03T03:50:52.016Z · score: 2 (2 votes) · LW · GW

I think "analytical argument" is the phrase I was looking for.

In Brunner's The Sheep Look Up (a pollution dystopia), someone figures out that there isn't enough clean land to produce the amount of clean food they're selling. At that point, you don't have to check the details of their production methods (assuming that hydroponics aren't feasible), though you still might want to.

Are there comparable terms for other sorts of arguments?