Self-skepticism: the first principle of rationality

post by aaronsw · 2012-08-06T00:51:32.688Z · LW · GW · Legacy · 86 comments

When Richard Feynman started investigating irrationality in the 1970s, he quickly began to realize that the problem wasn't limited to the obvious irrationalists.

Uri Geller claimed he could bend keys with his mind. But was he really any different from the academics who insisted their special techniques could teach children to read? Both failed the crucial scientific test of skeptical experiment: Geller's keys failed to bend in Feynman's hands; outside tests showed the new techniques only caused reading scores to go down.

What mattered was not how smart the people were, or whether they wore lab coats or used long words, but whether they followed what he concluded was the crucial principle of truly scientific thought: "a kind of utter honesty--a kind of leaning over backwards" to prove yourself wrong. In a word: self-skepticism.

As Feynman wrote, "The first principle is that you must not fool yourself -- and you are the easiest person to fool." Our beliefs always seem correct to us -- after all, that's why they're our beliefs -- so we have to work extra-hard to try to prove them wrong. This means constantly looking for ways to test them against reality and to think of reasons our tests might be insufficient.

When I think of the most rational people I know, it's this quality of theirs that's most pronounced. They are constantly trying to prove themselves wrong -- they attack their beliefs with everything they can find and when they run out of weapons they go out and search for more. The result is that by the time I come around, they not only acknowledge all my criticisms but propose several more I hadn't even thought of.

And when I think of the least rational people I know, what's striking is how they do the exact opposite: instead of viciously attacking their beliefs, they try desperately to defend them. They too have responses to all my critiques, but instead of acknowledging and agreeing, they viciously attack my critique so it never touches their precious belief.

Since these two can be hard to distinguish, it's best to look at some examples. The Cochrane Collaboration argues that support from hospital nurses may be helpful in getting people to quit smoking. How do they know that? you might ask. Well, they found this was the result of a meta-analysis of 31 different studies. But maybe they chose a biased selection of studies? Well, they systematically searched "MEDLINE, EMBASE and PsycINFO [along with] hand searching of specialist journals, conference proceedings, and reference lists of previous trials and overviews." But did the studies they picked suffer from selection bias? Well, they searched for that -- along with three other kinds of systematic bias. And so on. But even after all this careful work, they are still only confident enough to conclude "the results…support a modest but positive effect…with caution … these meta-analysis findings need to be interpreted carefully in light of the methodological limitations".
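
For concreteness, the headline number in a meta-analysis like this is usually just a precision-weighted average of the individual study effects. A minimal sketch, assuming a generic fixed-effect inverse-variance model (not necessarily the exact model the Cochrane authors used):

```latex
% Generic fixed-effect inverse-variance pooling over the 31 studies
% (an illustrative sketch, not Cochrane's exact model)
\hat{\theta}_{\text{pooled}} = \frac{\sum_{i=1}^{31} w_i \,\hat{\theta}_i}{\sum_{i=1}^{31} w_i},
\qquad
w_i = \frac{1}{\operatorname{SE}(\hat{\theta}_i)^2},
\qquad
\operatorname{SE}(\hat{\theta}_{\text{pooled}}) = \Bigl(\sum_{i=1}^{31} w_i\Bigr)^{-1/2}
```

Every term in that sum comes from one of the included studies, which is why the bias checks matter: a skewed selection of studies skews the pooled estimate no matter how carefully the averaging is done.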

Compare this to the Heritage Foundation's argument for the bipartisan Wyden–Ryan premium support plan. Their report also discusses lots of objections to the proposal, but confidently knocks down each one: "this analysis relies on two highly implausible assumptions ... All these predictions were dead wrong. ... this perspective completely ignores the history of Medicare". Their conclusion is similarly confident: "The arguments used by opponents of premium support are weak and flawed." Apparently there's just not a single reason to be cautious about their enormous government policy proposal!

Now, of course, the Cochrane authors might be secretly quite confident and the Heritage Foundation might be wringing their hands with self-skepticism behind the scenes. But let's imagine for a moment that these aren't just reports intended to persuade others of a belief but instead accurate portrayals of how these two different groups approached the question. Now ask: which style of thinking is more likely to lead the authors to the right answer? Which attitude seems more like Richard Feynman? Which seems more like Uri Geller?

86 comments

Comments sorted by top scores.

comment by Randaly · 2012-08-06T03:54:58.607Z · LW(p) · GW(p)

I agree- the answer given in the FAQ isn't a complete and valid response to the critics of the Singularity. But it was never meant to be; it was meant to be "short answers to common questions." The SI's longer responses to critics of the Singularity are mostly in peer-reviewed research; for example, in:

Luke Muehlhauser and Anna Salamon (2012). Intelligence Explosion: Evidence and Import. In The Singularity Hypothesis, Springer. (http://singularity.org/files/IE-EI.pdf)

Carl Shulman and Nick Bostrom (2012). How Hard is Artificial Intelligence? In Journal of Consciousness Studies, Imprint Academic. (http://www.nickbostrom.com/aievolution.pdf)

Chalmers, D. (2010). "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies 17:7-65. (http://consc.net/papers/singularity.pdf)

Sotala, Kaj (2012) Advantages of Artificial Intelligences, Uploads, and Digital Minds. International Journal of Machine Consciousness 4 (1), 275-291. ( http://kajsotala.fi/Papers/DigitalAdvantages.pdf )

Of course, now I feel pretty bad for linking you to several hundred pages of arguments- which are often overlapping and repetitive, and which still don't represent everything the SI has written on the subject (or even a majority, I think). If you have any specific criticisms of the SI's ideas, it might be faster for you to post them here.

I feel even (slightly) worse, because none of the SI's arguments have reached the level of evidence you're comparing them to - e.g. GiveWell's analyses, and the disproofs of spoon-bending and new reading methods. But the SI isn't capable of providing evidence that strong, because whether or not its claims were accurate, they would still be predictions of an abrupt future change, as opposed to claims about the efficacy of past actions. I do think that, where possible, the SI has tried to be very transparent - for example, in my opinion the SI's last yearly progress report was around as thorough as GiveWell's last yearly progress reports - part 1, part 2, part 3, part 4.

(On a side note, it might interest you that Holden Karnofsky, co-founder of Givewell, also analyzed the SI and came to a very negative conclusion- posted here. His post is currently the most upvoted post of all time on LessWrong.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-08-06T10:12:39.396Z · LW(p) · GW(p)

Also, in an internal publication:

Kaj Sotala (2010). From Mostly Harmless to Civilization-Threatening. The Singularity Institute. (http://singularity.org/files/MostlyHarmless.pdf)

Well, not technically in an internal publication, at least not an SI one - it was presented at the VIIIth European Conference on Computing and Philosophy, and published in their proceedings.

For an extended version of that paper in a peer-reviewed journal, see

Sotala, Kaj (2012) Advantages of Artificial Intelligences, Uploads, and Digital Minds. International Journal of Machine Consciousness 4 (1), 275-291. ( http://kajsotala.fi/Papers/DigitalAdvantages.pdf )

(That one's not actually an SI publication, but then neither was the Chalmers one. Nor is it specifically an answer to critics, but rather an elaboration of the SI argument.)

Replies from: DaFranker, Randaly
comment by DaFranker · 2012-08-06T15:01:32.460Z · LW(p) · GW(p)

For an extended version of that paper in a peer-reviewed journal, see Sotala, Kaj (2012) Advantages of Artificial Intelligences, Uploads, and Digital Minds. International Journal of Machine Consciousness 4 (1), 275-291. ( http://kajsotala.fi/Papers/DigitalAdvantages.pdf )

Oooh, thanks! I'd been wondering if there had been follow-up on that. I've found the first to be an interesting read, so I was curious to know if there were plans to expand on the topic and use it in movement-building strategies.

comment by Randaly · 2012-08-06T16:41:14.359Z · LW(p) · GW(p)

Thanks, fixed!

comment by bramflakes · 2012-08-06T17:32:39.014Z · LW(p) · GW(p)

Was the main post edited? The comments seem entirely disconnected from the article.

Replies from: wedrifid, David_Gerard, DaFranker
comment by wedrifid · 2012-08-06T17:44:45.344Z · LW(p) · GW(p)

Was the main post edited?

Yes, this version can for most intents and purposes be considered an entirely new post. I can imagine your confusion!

The comments seem entirely disconnected from the article.

We could perhaps consider this a rather startling success for those comments. Usually it is only possible to influence future posts and current perceptions.

comment by David_Gerard · 2012-08-06T17:39:52.195Z · LW(p) · GW(p)

Yes, as noted in comments.

Replies from: lukeprog
comment by lukeprog · 2012-08-07T04:22:39.553Z · LW(p) · GW(p)

This radical change should be noted in the OP somewhere.

Replies from: handoflixue
comment by handoflixue · 2012-08-07T19:21:33.760Z · LW(p) · GW(p)

Agreed

comment by DaFranker · 2012-08-06T17:43:00.004Z · LW(p) · GW(p)

There seems to have been at least one revision to a more contested argument.

There also seems to have been an argument based on the SIAI's public FAQ that might have been ninja'd from the OP.

comment by wedrifid · 2012-08-06T05:02:06.192Z · LW(p) · GW(p)

It might be hard at first to tell the difference, so I'm going to have to use some examples. I'd ask that you try and suspend any emotional reactions you have to the examples I chose and just look at which approach seems more rational.

Bullshit. You aren't providing an example because it is "hard to tell the difference at first". You started with an intent to associate SIAI with self delusion and then tried to find a way to package it as some kind of rationality related general point.

Contrast this with the Singularity Institute. A skeptic might well ask whether the Singularity is actually going to occur. Well, the SIAI FAQ addresses this, but only to summarily dismiss a couple objections in a cursory paragraph (that evades most of the force of the objections). And that's the closest the FAQ gets to any sort of skepticism, the rest of it is just a straight and confident summary that tries to persuade you of SIAI beliefs.

The FAQ on the website is not the place to signal humility and argue against your own conclusions. All that would demonstrate is naivety and incompetence. You are demanding something that should not exist. This isn't to say that there aren't valid criticisms to be made of SIAI and their FAQ. You just haven't made them.

Which attitude seems more like a serious scientist? Which seems more like Uri Geller?

Am I the only person who is outright nauseated by the quality of reasoning in these recent mud-slinging posts by aaronsw? What I see is a hastily selected bottom line along the lines of "SingInst sux" or perhaps "SingInst folks are too arrogant", then whatever hastily conceived rhetoric he can think of to support it. The problem isn't in the conclusions---it is that the arguments used either don't support or outright undermine the conclusion.

Competent criticism is encouraged. But the mere fact that a post is intended to be critical or 'cynical' isn't sufficient. It needs to meet some kind of minimum intellectual standard too. If it did not represent an appeal to second-order contrarians and was evaluated based on actual content, this post would probably end up mildly negative, even in the discussion section.

Replies from: John_Maxwell_IV, aaronsw, MugaSofer, hankx7787
comment by John_Maxwell (John_Maxwell_IV) · 2012-08-06T06:19:54.082Z · LW(p) · GW(p)

You started with an intent to associate SIAI with self delusion

I see, he must be one of those innately evil enemies of ours, eh?

My current model of aaronsw is something like this: He's a fairly rational person who's a fan of Givewell. He's read about SI and thinks the singularity is woo, but he's self-skeptical enough to start reading SI's website. He finds a question in their FAQ where they fail to address points made by those who disagree, reinforcing the woo impression. At this point he could just say "yeah, they're woo like I thought". But he's heard they run a blog on rationality, so he makes a post pointing out the self-skepticism failure in case there's something he's missing.

The FAQ on the website is not the place to signal humility and argue against your own conclusions.

Why not? I think it's an excellent place to do that. Signalling humility and arguing against your own conclusions is a good way to be taken seriously.

Overall, I thought aaronsw's post had a much higher information to accusations ratio than your comment, for whatever that's worth. As criticism goes his is pretty polite and intelligent.

Also, aaronsw is not the first person I've seen on the internet complaining about lack of self-skepticism on LW, and I agree with him that it's something we could stand to work on. Or at least signalling self-skepticism; it's possible that we're already plenty self-skeptical and all we need to do is project typical self-skeptical attitudes.

For example, Eliezer Yudkowsky seems to think that the rational virtue of "humility" is about "taking specific actions in anticipation of your own errors", not actually acting humble. (Presumably self-skepticism counts as humility by this definition.) But I suspect that observing how humble someone seems is a typical way to gauge the degree to which they take specific actions in anticipation of their own errors. If this is the case, it's best for signalling purposes to actually act humble as well.

(I also suspect that acting humble makes it easier to publicly change your mind, since the status loss for doing so becomes lower. So that's another reason to actually act humble.)

(Yes, I'm aware that I don't always act humble. Unfortunately, acting humble by always using words like "I suspect" everywhere makes my comments harder to read and write. I'm not sure what the best solution to this is.)

Replies from: aaronsw, gjm, wedrifid, David_Gerard
comment by aaronsw · 2012-08-06T11:57:34.489Z · LW(p) · GW(p)

FWIW, I don't think the Singularity Institute is woo and my current view is that giving money to lukeprog is probably a better idea than the vast majority of charitable contributions.

Replies from: wedrifid
comment by wedrifid · 2012-08-06T13:57:19.649Z · LW(p) · GW(p)

my current view is that giving money to lukeprog is probably a better idea than the vast majority of charitable contributions.

I like the way you phrase it (the "lukeprog" charity). Probably true at that.

comment by gjm · 2012-08-06T08:29:29.748Z · LW(p) · GW(p)

I agree with your model of aaronsw, and think wedrifid's comments are over the top. But wedrifid is surely dead right about one important thing: aaronsw presented his article as "here is a general point about rationality, and I find that I have to think up some examples so here they are ..." but it's extremely obvious (especially if you look at a few of his other recent articles and comments) that that's simply dishonest: he started with the examples and fitted the general point about rationality around them.

(I have no idea what sort of process would make someone as smart as aaronsw think that was a good approach.)

Replies from: John_Maxwell_IV, MugaSofer
comment by John_Maxwell (John_Maxwell_IV) · 2012-08-06T22:04:22.895Z · LW(p) · GW(p)

(I have no idea what sort of process would make someone as smart as aaronsw think that was a good approach.)

Well, he is heavily involved in the US politics scene, and may have picked up bad habits like focusing on rhetoric over facts, etc.

Replies from: aaronsw
comment by aaronsw · 2012-08-06T22:57:42.892Z · LW(p) · GW(p)

Unlike, say, wedrifid, whose highly-rated comment was just full of facts!

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-08-07T00:53:43.706Z · LW(p) · GW(p)

Well, he is heavily involved in the US politics scene, and may have picked up bad habits like focusing on rhetoric over facts, etc.

...

Unlike, say, wedrifid, whose highly-rated comment was just full of facts!

If you find yourself responding with tu quoque, then it is probably about time you re-evaluated the hypothesis that you are in mind-kill territory.

Replies from: DaFranker
comment by DaFranker · 2012-08-07T01:09:46.432Z · LW(p) · GW(p)

If you find yourself responding with tu quoque, then it is probably about time you re-evaluated the hypothesis that you are in mind-kill territory.

In this particular context, I think a more appropriate label would be the "Appeal to Come on, gimme a friggen' break!"

The comment he was responding to was quite loaded with connotation, voluntarily or not, despite the "mostly true" and "arguably within the realm of likely possibilities" denotations that would make the assertion technically valid.

Being compared, even as a metaphorical hypothesis, to sophistry-flinging rhetoric-centric politicians is just about the most mind-killer-loaded subtext assault you could throw at someone.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-08-14T21:18:34.343Z · LW(p) · GW(p)

How could I have phrased the point better? Or should I have dropped it altogether?

comment by MugaSofer · 2013-01-10T14:27:13.414Z · LW(p) · GW(p)

it's extremely obvious (especially if you look at a few of his other recent articles and comments) that that's simply dishonest: he started with the examples and fitted the general point about rationality around them.

Considering he has changed the example, I find this unlikely. In any event, the post would appear to stand on its own.

Replies from: gjm
comment by gjm · 2013-01-10T15:52:25.035Z · LW(p) · GW(p)

The fact that he changed the example doesn't seem to me very strong evidence that the example wasn't originally the motivation for writing the article.

I made no comment on whether the post stands well on its own; only on wedrifid's accusation of dishonesty.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-11T13:07:52.990Z · LW(p) · GW(p)

Well, he could just be very good at it, I suppose. I had a much lower prior anyway, so I may be misjudging the strength of the evidence here.

comment by wedrifid · 2012-08-06T09:07:29.931Z · LW(p) · GW(p)

I see, he must be one of those innately evil enemies of ours, eh?

I made no such claim. I do claim that the specific quote I was replying to is a transparent falsehood. Do you actually disagree?

Far from being innately evil aaronsw appears to be acting just like any reasonably socially competent human with some debate skills can be expected to act when they wish to persuade people of something. It just so happens that doing so violates norms against bullshit, undesired forms of rhetoric and the use of arguments as soldiers without applying those same arguments to his own position.

Forget "innately evil". In fact, forget about the author entirely. What matters is that the post and the reasoning contained therein is below the standard I would like to see on lesswrong. Posts like it need to be weeded out to make room for better posts. This includes room for better reasoned criticisms of SIAI or lesswrong, if people are sufficiently interested (whether authored by aaronsw or someone new).

The FAQ on the website is not the place to signal humility and argue against your own conclusions.

Why not? I think it's an excellent place to do that. Signalling humility and arguing against your own conclusions is a good way to be taken seriously.

If you sincerely believe that the optimal use of a FAQ on an organisation's website is to argue against your own conclusions, then I suggest that you have a fundamentally broken model of how the world---and people---work. It would be an attempt at counter-signalling that would fail abysmally. I'd actually feel vicarious embarrassment just reading it.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-08-06T21:41:00.115Z · LW(p) · GW(p)

Far from being innately evil aaronsw appears to be acting just like any reasonably socially competent human with some debate skills can be expected to act when they wish to persuade people of something. It just so happens that doing so violates norms against bullshit, undesired forms of rhetoric and the use of arguments as soldiers without applying those same arguments to his own position.

Forget "innately evil". In fact, forget about the author entirely. What matters is that the post and the reasoning contained therein is below the standard I would like to see on lesswrong. Posts like it need to be weeded out to make room for better posts. This includes room for better reasoned criticisms of SIAI or lesswrong, if people are sufficiently interested (whether authored by aaronsw or someone new).

Hm. Maybe you're right that I'm giving him too much credit just because he's presenting a view unpopular on LW. (Although, come to think of it, having a double standard that favors unpopular conclusions might actually be a good idea.) In any case, it looks like he rewrote his post.

If you sincerely believe that the optimal use of a FAQ on an organisation's website is to argue against your own conclusions, then I suggest that you have a fundamentally broken model of how the world---and people---work. It would be an attempt at counter-signalling that would fail abysmally. I'd actually feel vicarious embarrassment just reading it.

I think the optimal use of an FAQ is to give informed and persuasive answers to the questions it poses, and that an informed and persuasive answer will acknowledge, steel-man, and carefully refute opposing positions.

I'm not sure why everyone seems to think the answers to the questions in an FAQ should be short. FAQs are indexed by question, so it's easy for someone to click on just those questions that interest them and ignore the rest. lukeprog:

the linear format is not ideal for analyzing such a complex thing as AI risk

...

What we need is a modular presentation of the evidence and the arguments, so that those who accept physicalism, near-term AI, and the orthogonality thesis can jump right to the sections on why various AI boxing methods may not work, while those who aren't sure what to think of AI timelines can jump to those articles, and those who accept most of the concern for AI risk but think there's no reason to assert humane values over arbitrary machine values can jump to the article on that subject.

I even suggested creating a question-and-answer site as a supplement to lukeprog's proposed wiki.

I don't fault SI much for having short answers in the current FAQ, but it seems to me that FAQs are ideal tools for presenting longer answers relative to other media.

One option is for each question in the FAQ to have a page dedicated to answering it in depth. Then the main FAQ page could give a one-paragraph summary of SI's response along with a link to the longer answer. Maybe this would achieve the benefits of both a long and a short FAQ?

comment by David_Gerard · 2012-08-06T14:05:49.166Z · LW(p) · GW(p)

He's also someone with an actual track record of achievement. Could we do with some of those on LW?

Replies from: gwern, Eliezer_Yudkowsky, DaFranker
comment by gwern · 2012-08-06T20:39:37.234Z · LW(p) · GW(p)

Some of which are quite dangerous. Either the JSTOR or PACER incidents could have killed any associated small nonprofit with legal bills. (JSTOR's annual revenue is something like 53x that of SIAI.)

As fun as it is to watch Swartz's activities (from a safe distance), I would not want such antics conducted on a website I enjoy reading and would like to see continue.

Replies from: wedrifid, Will_Sawin, David_Gerard
comment by wedrifid · 2012-08-07T12:22:10.222Z · LW(p) · GW(p)

As fun as it is to watch Swartz's activities (from a safe distance), I would not want such antics conducted on a website I enjoy reading and would like to see continue.

Wait, are you saying this aaronsw is the same guy as the guy currently being (tragically, comically) prosecuted for fraud? That's kinda cool!

comment by Will_Sawin · 2012-08-06T23:49:36.723Z · LW(p) · GW(p)

What are the JSTOR and PACER incidents?

Replies from: gwern
comment by gwern · 2012-08-06T23:57:08.445Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Aaron_Swartz#Controversies

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-07T00:17:53.213Z · LW(p) · GW(p)

I don't think it's fair - I think it's a bit motivated - to mention these as mysterious controversies and antics, without also mentioning that his actions could reasonably be interpreted as heroic. I was applauding when I read the JSTOR incident, and only wish he'd gotten away with downloading the whole thing and distributing it.

Replies from: gwern
comment by gwern · 2012-08-07T00:27:56.220Z · LW(p) · GW(p)

I agree they were heroic and good things, and I was disgusted when I looked into JSTOR's financial filings (not that I was happy with the WMF either).

But there's a difference between admiring the first penguin off the ice and noting that this is a good thing to do, and wanting to be that penguin or near enough to that penguin that one might fall off as well. And this is especially true for organizations.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-07T00:46:23.917Z · LW(p) · GW(p)

Even if so, one should still at least mention, in a debate on character, that the controversy in question just happened to be about an attempted heroic good deed.

comment by David_Gerard · 2012-08-07T11:54:03.906Z · LW(p) · GW(p)

Did you really just assert that having Swartz post to LessWrong puts SIAI at serious legal and financial risk?

Replies from: gwern
comment by gwern · 2012-08-07T15:02:16.298Z · LW(p) · GW(p)

Good grief. You said, 'Aaron's achievements of type X are really awesome and we could use more achievements on LW!' Me: 'But type X stuff is incredibly dangerous and could kill the website or SIAI, and it's a little amazing Swartz has escaped both past X incidents with apparently as little damage as he has*.' You: 'zomg did you just seriously say Swartz posting to LW endangers SIAI?!'

Er, no, I didn't, unless Swartz posting to LW is now the 'actual track record of achievement' that you are vaunting, which seems unlikely. I said his accomplishments like JSTOR or PACER (to name 2 specific examples, again, to make it impossible to misunderstand me, again) endanger any organization or website they are associated with.

EDIT: * Note that I wrote this comment several months before Aaron Swartz committed suicide due to the prosecution over the JSTOR incident.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-07T13:02:58.767Z · LW(p) · GW(p)

I did once suggest a similar heuristic; but I feel the need to point out that there are many people in this world with track records of achievement, including, like, Mitt Romney or something, and that the heuristic is supposed to be, "Pay attention to rationalists with track records outside rationality", e.g. Dawkins and Feynman.

Replies from: Vaniver, DaFranker, David_Gerard
comment by Vaniver · 2012-08-07T18:33:38.026Z · LW(p) · GW(p)

Mitt Romney strikes me as a fairly poor example, since from my knowledge of his pre-political life, he seems like a strong rationalist. He looks much better on the instrumental rationality side than the epistemic rationality side, but I think I would rather hang out with Mormon management consultants than atheist waiters. (At least, I think I have more to learn from the former than the latter.)

Replies from: None, None
comment by [deleted] · 2012-08-07T18:59:00.898Z · LW(p) · GW(p)

If: 1) being more rational makes you more moral

2) he's saying things during this campaign he doesn't really believe

3) dishonesty, especially dishonesty in the context of a political campaign, is immoral

Then: c) His recent behavior is evidence against his rationality, in the same sense his pre-political success is evidence for it.

Replies from: Vaniver, MugaSofer
comment by Vaniver · 2012-08-07T19:07:27.907Z · LW(p) · GW(p)

1 seems true only in the sense that, in general, immorality is more attractive to bad decision-makers than to good decision-makers, but I would be reluctant to extend beyond that.

Replies from: None
comment by [deleted] · 2012-08-07T19:23:52.532Z · LW(p) · GW(p)

This is probably not something we should argue about here, but I think the whole project of rationality stands or falls on the truth of premise 1.

Replies from: MarkusRamikin, None
comment by MarkusRamikin · 2012-08-07T19:35:00.168Z · LW(p) · GW(p)

Why?

comment by [deleted] · 2012-08-09T06:45:51.395Z · LW(p) · GW(p)

What if it had no effect on morality, but just made people more effective? As long as the sign bit on people's actions is already usually positive, rationality would still be a good idea.

Replies from: None
comment by [deleted] · 2012-08-09T18:33:39.389Z · LW(p) · GW(p)

Well, if you don't mind me answering a question with a question: more effective at what? If it just makes you more effective at getting what you want, whether or not what you want is the right thing to want, then it's only helpful to the extent that you want the right things, and harmful to the extent that you want the wrong things. That's nothing very great, and certainly nothing to spend a lot of time improving.

But if rationality can make you want, and make you more effective at getting, good things only, then it's an inestimable treasure, and worth a lifetime's pursuit. The 'morally good' seems to me the right word for what is in every possible case good, and never bad.

comment by MugaSofer · 2013-01-10T13:41:10.392Z · LW(p) · GW(p)

dishonesty, especially dishonesty in the context of a political campaign, is immoral

He could expect to do enough good as president to outweigh that.

I doubt it, though.

comment by [deleted] · 2012-08-07T18:56:33.007Z · LW(p) · GW(p)

He looks much better on the instrumental rationality side than the epistemic rationality side, but I think I would rather hang out with Mormon management consultants than atheist waiters.

He has no epistemic rationality to speak of. He can convince himself that anything is true, no matter what the evidence.

Replies from: Vaniver
comment by Vaniver · 2012-08-07T19:04:47.141Z · LW(p) · GW(p)

He has no epistemic rationality to speak of. He can convince himself that anything is true, no matter what the evidence.

Having only interacted with his public persona, I am unwilling to comment on his private beliefs.

His professional life gives a great example of looking into the dark, though, in insisting on a "heads I win tails you lose" deal with Bain to start Bain Capital. I don't know if that was because he was generally cautious or because he stopped and asked "what if our theories are wrong?", but the latter seems more likely to me.

Replies from: wedrifid
comment by wedrifid · 2012-08-08T03:27:35.673Z · LW(p) · GW(p)

He has no epistemic rationality to speak of. He can convince himself that anything is true, no matter what the evidence.

Having only interacted with his public persona, I am unwilling to comment on his private beliefs.

The public persona, that which you can actually interact with, is the only one that matters for the purpose of choosing whether to believe what people say. If this person (I assume he is an American political figure of some sort?) happens to be a brilliant epistemic rationalist merely pretending convincingly that he is utterly (epistemically) irrational, then you still shouldn't pay any attention to what he says.

Replies from: Vaniver
comment by Vaniver · 2012-08-08T06:03:35.879Z · LW(p) · GW(p)

I agree that, in general, public statements by politicians should not be taken very seriously, and Romney is no exception. I think the examples of actions he's taken, particularly in his pre-political life, are more informative.

I assume he is an American political figure of some sort?

Yes. Previously, he was a management consultant who helped develop the practice of buying companies explicitly to reshape them, which was a great application of "wait, if we believe that we actually help companies, then we're in a perfect position to buy low and sell high."

comment by DaFranker · 2012-08-07T14:11:57.696Z · LW(p) · GW(p)

I fail to see how finding more already-rationalists with a track record would benefit LW specifically*, unless those individuals are public figures of some renown that can attract public attention to LW and related organisations or can directly contribute content, insight and training methods. Perhaps I'm just missing some evidence here, but my priors place the usefulness of already-rationalists within the same error margin as non-rationalists who are public figures that would bother to read / post on LW.

Paying attention to (rationalists with track records outside rationality)** seems like it would be mostly useful for demonstrating to aware but uninterested/unconvinced people that training rationality and "raising the sanity waterline" are effective strategies that do have real-world usefulness outside "philosophical"*** word problems.

* Any more than, say, anyone else or people with any visible track record who are also public figures.

** Perhaps someone could coin a term for this? It seems like a personspace subgroup relevant enough to have a less annoying label. Perhaps something playing on Beisutsukai or a variation of the Masked Hero imagery?

*** Used here in the layman's definition of "philosophical": airy, cloud-head, idealist, based on pretty assumptions and "clean" models where everything just works the way it's "supposed to" rather than how-things-are-in-real-life. AKA the "Philosophy is a stupid waste of time" view.

Replies from: Will_Sawin
comment by Will_Sawin · 2012-08-16T18:31:34.847Z · LW(p) · GW(p)

I think the idea here is to find people who have found the types of rationality that lead to actual life success - found a replicable method for succeeding at things. Such an individual is expected to be a rationalist and to have a track record of achievement.

comment by David_Gerard · 2012-08-07T13:35:50.995Z · LW(p) · GW(p)

See, even as no fan of his whatsoever, I suspect Mitt Romney is a very smart fellow I would be foolish to pay no heed to in the general case, and who probably has a fair bit of tried and tested knowledge he's gained in the pursuit of thinking about thinking. Even given qualms I have about the quality of some things he's been quoted as saying of late, but then presidential campaigns select for bullshit.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-07T14:52:49.232Z · LW(p) · GW(p)

There are too many accomplished people in the world contradicting each other not to filter them somehow.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2012-08-07T17:37:40.985Z · LW(p) · GW(p)

My filtering criterion (maybe flawed) is "people whose biographies are still read after a few decades". This way a "non-rationalist" like Churchill gets read; looking for "rationalists" will end up selecting people too similar to yourself to learn interesting things from.

comment by DaFranker · 2012-08-06T20:57:04.433Z · LW(p) · GW(p)

Here's one: Attracting sufficient attention from people with track records of achievements for said people to begin engaging in active discussion that will further improve LW and related endeavors, namely through public exposure and bringing fresh outside perspective. Example: aaronsw

The whole point is in the detail about getting more people into it, and admitting that stuff is wrong so we can make it less so.

Less... Wrong.

comment by aaronsw · 2012-08-06T11:54:42.783Z · LW(p) · GW(p)

You started with an intent to associate SIAI with self delusion and then tried to find a way to package it as some kind of rationality related general point.

No, I'd love another example to use so that people don't have this kind of emotional reaction. Please suggest one if you have one.

UPDATE: I thought of a better example on the train today and changed it.

Replies from: magfrump
comment by magfrump · 2012-08-07T18:59:37.418Z · LW(p) · GW(p)

Upvoted the main article due to this.

comment by MugaSofer · 2013-01-10T13:30:59.222Z · LW(p) · GW(p)

You aren't providing an example because it is "hard to tell the difference at first". You started with an intent to associate SIAI with self delusion and then tried to find a way to package it as some kind of rationality related general point.

This is needlessly inflammatory, far too overconfident and, as it turned out, wrong. Making deductions about intent from their writing is not nearly as easy as you seem to think. Making wild accusations of nefarious attempts to insert subtext critical of you and your interests - indeed all our interests - makes you look hostile, paranoid and irrational, and for good reason.

Replies from: wedrifid
comment by wedrifid · 2013-01-10T14:18:22.947Z · LW(p) · GW(p)

This is needlessly inflammatory, far too overconfident and, as it turned out, wrong. Making deductions about intent from their writing is not nearly as easy as you seem to think. Making wild accusations of nefarious attempts to insert subtext critical of you and your interests - indeed all our interests - makes you look hostile, paranoid and irrational, and for good reason.

To put it as politely as I can manage, this reply, being a reply to something so many months old, strikes me as odd. If my memory from that far back serves me (and I don't expect reliability of anyone's memory over that period) this post was one of a series of three within the space of a week by the author with a common theme.

The comment you are replying to is also in response to a post that has been fundamentally edited in response to this (and other) feedback. Apart from making judgement of the appropriateness of a reply difficult, this is also a rare example of someone (aaronsw) being able to update and improve his contribution in response to feedback. Once again it is too long ago for me to remember whether I expressed appreciation and respect for aaronsw's willingness to improve his post, but I recall experiencing that and evaluating whether aaronsw would consider such a comment to be positive or merely condescending.

as it turned out, wrong

That isn't something you demonstrated by making that link.

Making wild accusations of nefarious attempts to insert subtext critical of you and your interests

I don't seem to be talking about subtext critical of me or my interests at all. If you use Wei_Dai's user comments script and sort by top posts you might observe contributions that are a mix of support of SingInst and criticism of SingInst, depending on my evaluation of the object level issue in question. (The 'top contributions' sample is of course biased towards criticisms since such criticisms would be in response to, for example, Eliezer and so the conversations get more attention.) The point of this is that it is utterly absurd to be accusing me of making biased hysterical defenses of my personal interests when they aren't my interests at all.

I endorse the grandparent wholeheartedly as a response to the version of the post it was made to at the time, and hope that others will make similar contributions fighting against bullshit so that I can upvote them. However, since it is so long ago and especially since the post has since been improved I consider it rather counterproductive to draw attention to it.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T14:36:44.187Z · LW(p) · GW(p)

I may have phrased that too strongly. However, I do think that your deduction regarding the original post - that it was written as an excuse to bash SI - is incompatible with the evidence as it stands and as it stood then, and should not have been presented in such a hostile manner. I appreciate this was some time ago, but it does seem like a good chance for calibration and so on. I know I have made similar mistakes.

To put it as politely as I can manage, this reply, being a reply to something so many months old, strikes me as odd. If my memory from that far back serves me (and I don't expect reliability of anyone's memory over that period) this post was one of a series of three within the space of a week by the author with a common theme.

I was, in fact, reading through old comments of his, in order to get a better idea of his positions, contributions, and, well, possible troll status. I do this pretty often, and indeed I regularly reply to comments made years ago. No-one else has objected. I know people sometimes change their minds, of course, but since you have not changed your mind (or so you claim) I see no reason not to criticize the position you held.

However, since it is so long ago and especially since the post has since been improved I consider it rather counterproductive to draw attention to it.

Why, are you worried that people will fail to realize this was in response to an earlier version and downvote you for misquoting the post?

comment by hankx7787 · 2012-08-06T06:20:25.566Z · LW(p) · GW(p)

Good god man, this!

Replies from: DaFranker
comment by DaFranker · 2012-08-06T19:47:08.303Z · LW(p) · GW(p)

That's what the "Thumbs up" button is there for, hence the thumbs down. In case you were wondering.

Replies from: thomblake, hankx7787
comment by thomblake · 2012-08-07T15:06:24.995Z · LW(p) · GW(p)

That's what the "Thumbs up" button is there for, hence the thumbs down.

NO. Why our kind can't cooperate.

Replies from: DaFranker
comment by DaFranker · 2012-08-07T15:19:34.732Z · LW(p) · GW(p)

I stated a belief with minimal decorative fluff ("I think this implies that most people who would read this comment are of the opinion that...") in an attempt to explain the reaction.

Independently, I also believe the support could have been better phrased.

Thanks for pointing this out though. I'll try to make it a point to voice agreement and positively reinforce agreement when it's not borne of confirmation bias.

comment by hankx7787 · 2012-08-07T08:30:28.875Z · LW(p) · GW(p)

I just wanted to state my appreciation for this comment... someone FINALLY called out aaronsw for what he is doing.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-07T06:17:05.737Z · LW(p) · GW(p)

Reply to revised OP: Policy debates should not appear one-sided (because there's no systematic reason why good policies should have only positive consequences) but epistemic debates are frequently normatively one-sided (because if your picture of likelihood ratios is at all well-calibrated, your updates should trend in a particular direction and you should almost never find strong evidence on more than one side of an epistemic debate; "strong evidence" by Bayesian definition is just that sort of evidence which we almost never see when the hypothesis is wrong).
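
A minimal sketch of the Bayesian point in that last clause, in symbols: if evidence E favors hypothesis H by a likelihood ratio of at least k, then E can only rarely occur when H is false.

```latex
% Strong evidence (likelihood ratio >= k) is necessarily rare when H is false:
\frac{P(E \mid H)}{P(E \mid \lnot H)} \ge k
\;\Longrightarrow\;
P(E \mid \lnot H) \le \frac{P(E \mid H)}{k} \le \frac{1}{k}
```

So a well-calibrated reasoner should almost never find themselves holding strong evidence on both sides of the same factual question.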

comment by Crystalist · 2012-08-07T12:47:18.833Z · LW(p) · GW(p)

In my own experience, self-skepticism isn't sufficient. It's bloody useful of course, but it's also an exceptional time sink -- occasionally to the point where I'll forget to actually think of solutions to the problem.

Does anyone have any algorithms they use to balance self-skepticism with actually solving the problem?

Replies from: MTGandP
comment by MTGandP · 2013-01-17T00:25:56.615Z · LW(p) · GW(p)

Use more concentrated and in-depth self-skepticism for positions that affect more of your life. For example, I spend a great deal of time criticizing my own ethical beliefs because they significantly affect my actions.

comment by Jonathan_Graehl · 2012-08-06T21:03:12.062Z · LW(p) · GW(p)

Responding to the new version of this article, I'll observe that the intent and competence of the Cochrane vs. Heritage folks seem to suggest:

CC signal intellectual honesty and are willing to sacrifice their impact on the beliefs of the masses, expecting in compensation to have better academic reputations, and perhaps influence a few more rationalists.

Heritage signal "don't oppose us or we'll embarrass you" and "our side is going to win. don't even think about switching" and perhaps "we all understand that politics is war. we'll be happy to share our true beliefs with you at the victory orgy, once you've proven your loyalty".

You can tell almost nothing about their competence or rationality without checking the work of the Cochrane analysis (this is straightforward but costly), or judging the effectiveness of Heritage propaganda, compared to what you'd expect given their goals and a given level of hidden rationality and intelligence (this is tricky).

comment by Douglas_Knight · 2012-08-07T21:06:23.429Z · LW(p) · GW(p)

outside tests showed the new techniques only caused reading scores to go down.

The Feynman article you link says they didn't test them, not that they tested them and failed.

comment by Wei Dai (Wei_Dai) · 2012-08-07T01:36:32.913Z · LW(p) · GW(p)

Related posts: The Proper Use of Humility and How To Be More Confident... That You're Wrong.

BTW, what justifies calling self-skepticism "the first principle of rationality"? Feynman called it that, but Eliezer doesn't seem to think that self-skepticism is as important as, for example, having something to protect.

comment by jimrandomh · 2012-08-06T03:18:05.087Z · LW(p) · GW(p)

I agree with the general principle that self-skepticism is vital, as a personality trait, for rationalism. However, I don't think that a single written work necessarily gives very much evidence of its author's self-skepticism; it's common to make lots of objections and fail to put them in writing, or to put them in different places. I have also noticed that objections to futurist and singularitarian topics, moreso than other things, tend to lead into clouds of ambiguity and uncertainty which cannot be resolved in either direction, which don't suggest such straightforward approaches as surveying people to find out about nets in use, and which take a lot of words to adequately deal with. (This last issue is especially relevant to your particular example; FAQs are a bad medium for long arguments, and while some links would be nice, their absence is not very informative.)

comment by DaFranker · 2012-08-06T18:47:42.145Z · LW(p) · GW(p)

...I really should have listened to that inner voice that said "Don't waste your time!"

comment by DaFranker · 2012-08-07T11:46:13.715Z · LW(p) · GW(p)

The reason I used the term "Dark Arts" is that your post cleverly generalizes from unknown examples, infers this to be true for an overwhelming majority of cases, and then proposes this as a Fully General Counterargument that any LWer attempting to revise themselves is merely plugging in "self-doubt" while generalizing.

Your argument effectively proves by axiom that there are very few LWers if any actually using real epistemic rationality skills, and by hidden connotation also shows that we're all Gray, and thus because Gray isn't White our thoughts and work aren't worth any more than those of any random Internet user.

I'm somewhat impressed at how steeply we were downvoted for this thread, though. Perhaps we were instantly pattern-matched to "Troll" and "Naive guy replying to troll" in some way?

comment by Jonathan_Graehl · 2012-08-06T02:05:28.317Z · LW(p) · GW(p)

I hope SI will agree that the FAQ answer you linked is inadequate (either overlooking some common objections, or lumping them together dismissively as unspecified obstacles that will be revealed in the future). For example, "building an AI seems hard. no human (even given much longer lifespans) or team of humans will ever be smart enough to build something that leads to an intelligence explosion", or "computing devices that can realistically model an entire human brain (even taking shortcuts on parts that turn out to be irrelevant to intelligence) will be prohibitively expensive and slow" are both plausible.

And yes, even if the answer is improved, it does suggest a possible pattern. It could just be a lack of resources available to create high quality, comprehensive answers to objections. Or it could be that SI is slightly more like Uri Geller in not doubting itself than GiveWell is.

Is GiveWell really doubting itself or its premise - that it's worth spending extra money evaluating where to give money? (actually, I think it is worth it, but that's not my point).

Replies from: lukeprog, fubarobfusco, fubarobfusco
comment by lukeprog · 2012-08-06T04:22:30.116Z · LW(p) · GW(p)

I hope SI will agree that the FAQ answer you linked is inadequate

As Randaly notes, an FAQ of short answers to common questions is the wrong place to look for in-depth analysis and detailed self-skepticism! Also, the FAQ links directly to papers that do respond in some detail to the objections mentioned.

Another point to make is that SI has enough of a culture of self-skepticism that its current mission (something like "put off the singularity until we can make it go well") is nearly the opposite of its original mission ("make the singularity happen as quickly as possible"). The story of that transition is here.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2012-08-06T20:53:34.320Z · LW(p) · GW(p)

Nonetheless, I object to the FAQ answer on the grounds that its ontology of objections to the likelihood of singularity lacks a category that you'd expect to contain my objections.

I admit that the problem is only in not perfecting the FAQ entry, not that the objections weren't considered in detail elsewhere. Thus, no evidence of Uri-Gellerism, and more evidence of a lack of resources spent on it.

Replies from: DaFranker
comment by DaFranker · 2012-08-06T21:34:41.486Z · LW(p) · GW(p)

The SIAI seems very open to volunteer work and any offer of improvement on their current methodologies and strategies, given that the change can be shown to be an improvement.

Perhaps you'd like to curate a large library of objection precedents, as well as given historical responses to those objections, so as to facilitate their work in incrementally giving more and more responses to more and more objections?

Please keep in mind that anyone not trained in The Way that comes across the SIAI and finds that its claims conflict with their beliefs will often do their utmost to find the first unanswered criticism to declare the matter "Closed for not taking into account my objection!". If what is currently qualified as "the most common objections" is answered and the answers displayed prominently, future newcomers will read those and then formulate new objections, which will then be that time's "most common objections", and then repeat.

I'm sure this argument was made in better form somewhere else before, but I'm not sure the inherent difficulty in formulating a comprehensive perfected objection-proof FAQ was clearly communicated.

To (very poorly) paraphrase Eliezer*: "The obvious solution to you just isn't. It wasn't obvious to X, it wasn't obvious to Y, and it certainly wasn't obvious to [Insert list of prominent specialists in the field] either, who all thought they had the obvious solution to building "safe" AIs."

This also holds true of objections to SIAI, AFAICT. What seems like an "obvious" rebuttal, objection, etc. or a "common" complaint to one person might not be to the next person that comes along. Perhaps a more comprehensive list of "common objections" and official SIAI responses might help, but is it cost-efficient in the overall strategy? Factor in the likelihood that any particular objector who still objects after having read the FAQ would really be more convinced by a longer list of responses... I believe simple movement-building or even mere propaganda might be more cost-effective in raw counts of "people made aware of the issue" and "donators gained" and maybe even "researchers sensitized to the issue".

* Edit: Correct quote in reply by Grognor, thanks!

Replies from: Grognor
comment by Grognor · 2012-08-06T22:38:07.230Z · LW(p) · GW(p)

Whether or not a non-self-modifying planning Oracle is the best solution in the end, it's not such an obvious privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it. This is empirically verifiable in the sense that 'tool AI' wasn't the obvious solution to e.g. John McCarthy, Marvin Minsky, I. J. Good, Peter Norvig, Vernor Vinge, or for that matter Isaac Asimov.

-Reply to Holden on Tool AI

comment by fubarobfusco · 2012-08-06T03:48:21.961Z · LW(p) · GW(p)

Is GiveWell really doubting itself or its premise - that it's worth spending extra money evaluating where to give money?

That would be a cost-of-information question, which is really quite tricky. For instance, one might ask "Which of these charities would we prefer based on the amount of data we already have?" and then go and gather more data and see if the earlier data was actually sufficiently predictive, or some such ....
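
One way to make that cost-of-information question precise is the standard value-of-information calculation: gathering extra data D before choosing among charities is worth it only if the expected improvement in the best choice exceeds the cost of gathering it. A rough sketch under generic decision-theoretic assumptions (U is utility, a ranges over charities; this is not anything GiveWell has published):

```latex
% Gather data D before choosing a charity a iff the expected value of
% information exceeds the cost c of gathering it (generic sketch):
\mathbb{E}_{D}\!\left[\max_{a}\,\mathbb{E}[U \mid a, D]\right]
\;-\;
\max_{a}\,\mathbb{E}[U \mid a]
\;>\; c
```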

comment by fubarobfusco · 2012-08-06T21:28:19.166Z · LW(p) · GW(p)

For example, "building an AI seems hard. no human (even given much longer lifespans) or team of humans will ever be smart enough to build something that leads to an intelligence explosion", or "computing devices that can realistically model an entire human brain (even taking shortcuts on parts that turn out to be irrelevant to intelligence) will be prohibitively expensive and slow" are both plausible.

A short answer to these might note the advancement of applied AI techniques in fields where it was previously common knowledge that "only humans can do that" — e.g. high-quality speech recognition or self-driving cars. I would propose limiting such an answer to "serious" endeavors — ones where humans highly value the outcome, such as understanding a translated message correctly or driving safely — as opposed to games such as chess or TV game-shows.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2012-08-07T01:17:08.281Z · LW(p) · GW(p)

I agree that people raising the AI-objection I mentioned are often making that mistake. But even after that bit of hopeful perspective, there's still a real concern about (unknown - maybe it will turn out to be a cinch in hindsight once discoveries are made) difficulty. It's not as though we already have AI that is nowhere deficient relative to human intelligence and expertise but merely runs 10^10 times too slowly.

comment by Ben Pace (Benito) · 2012-08-07T11:41:25.983Z · LW(p) · GW(p)

A TED Talk centred on this subject came out today: Dare to Disagree

Replies from: DaFranker
comment by DaFranker · 2012-08-07T15:55:03.969Z · LW(p) · GW(p)

I very much enjoyed the TED Talk. However, while related, it seems not to be directly on this subject. The Talk is about being a hero and promoting intelligent thoughts, while this is on the topic of one individual or organisation countering itself. The examples and techniques stated in the talk are very reminiscent, but sufficiently different that you can't really say this article and the talk are "centered" on the same subject.

They're pretty close in the space of critical thought behaviors beneficial to us, but not quite close enough to be the same point, at least not on my map.

Replies from: Benito
comment by Ben Pace (Benito) · 2012-08-13T15:11:37.727Z · LW(p) · GW(p)

I've just gotten out my magnifying glass, and, by golly, you're right. My apologies. I'll be more scrupulous with my map reading techniques in the future. Also, I'm glad you like it.

comment by DaFranker · 2012-08-06T15:04:05.742Z · LW(p) · GW(p)

Please don't use Dark Arts and Fully General Counterarguments while trolling. Someone might mistake you for a serious poster that just didn't read the sequences.

comment by Robert Miles (robert-miles) · 2012-08-06T16:22:48.433Z · LW(p) · GW(p)

[WARNING: Industrial-strength mind-killing here, tread carefully]

Edit: Well, it was. Post appears to have been heavily edited.