A question of rationality

post by mormon2 · 2009-12-13T02:37:41.722Z · LW · GW · Legacy · 94 comments

Contents

  Thank You for Your Participation
  Response
  Post

Thank You for Your Participation

I would like to thank you all for your unwitting and unwilling participation in my little social experiment. If I do say so myself, you all performed as I had hoped. I found some of the responses interesting; many of them are goofy. I was honestly hoping that a budding rationalist community like this one would have stopped this experiment midway, but I thank you all for not being that rational. I really did appreciate all the mormon2 bashing; it was quite amusing, and some of the attempts to discredit me were humorous, though unsuccessful. In terms of the questions I asked, I was curious about the answers, though I did not expect to get any, nor do I really need them, since I have a good idea of what the answers are just from simple deductive reasoning. I really do hope EY is working on FAI and actually is able to do it, though I certainly will not stake my hopes or money on it.

Lest there be any suspicion, I am being sincere here.

 

Response

Because I can, I am going to make one final response to this thread I started:

Since none of you understand what I am doing I will spell it out for you. My posts are formatted, written and styled intentionally for the response I desire. The point is to give you guys easy ways to avoid answering my questions (things like the tone of the post, spelling, grammar, being "hostile" (not really), etc.). I just wanted to see if anyone here could actually look past that, specifically EY, and post some honest answers to the questions (real answers, again from EY, not pawns on LW). Obviously this was too much to ask, since the general responses, not completely but for the most part, were cop-outs. I am well aware that EY probably would never answer any challenge to what he thinks; people like EY typically won't (I have dealt with many people like EY). I think the responses here speak volumes about LW and the people who post here (if you can't look past the way the content is posted, then you are going to have a hard time in life, since not everyone is going to meet your standards for how they speak or write). You guys may not be trying to form a cult, but the way you respond to a post like this screams cultish, with even some circle-jerk mentality mixed in there.

 

Post

I would like to float an argument and a series of questions. Now, before you guys vote me down, please do me the courtesy of reading the post. I am also aware that some, and maybe even many, of you think that I am a troll just out to bash SIAI and Eliezer; that is in fact not my intent. This group is supposed to be about improving rationality, so let's improve our rationality.

SIAI has the goal of raising awareness of the dangers of AI, as well as trying to create its own FAI solution to the problem. This task has fallen to Eliezer as the paid researcher working on FAI. What I would like to point out is a bit of a disconnect between what SIAI is supposed to be doing and what EY is doing.

According to EY, FAI is an extremely important problem with global implications that must be solved. It is both a hard math problem and a problem that needs to be solved first by people who take FAI seriously. To that end SIAI was started, with EY as an AI researcher there.

Until about 2006 EY was working on papers like CEV and working on designs for FAI which he has now discarded as being wrong for the most part. He then went on a long period of blogging on Overcoming Bias and LessWrong and is now working on a book on rationality as his stated main focus. If this be accurate I would ask how does this make sense from someone who has made such a big deal about FAI, its importance, being first to make AI and ensure it is FAI? If FAI is so important, then where does a book on rationality fit? Does that even play into SIAI's chief goals? SIAI spends huge amounts of time talking about the risks and rewards of FAI, and the person who is supposed to be making the FAI is writing a book on rationality instead of solving FAI. How does this square with being paid to research FAI? How can one justify EY's reasons for not publishing the math of TDT, coming from someone who is committed to FAI? If one is committed to solving that hard a problem, then I would think that the publication of one's ideas on it would be a primary goal to advance the cause of FAI.

If this doesn't make sense, then I would ask: how rational is it to spend time helping SIAI if it is not focused on FAI? Can one justify giving to an organization like that when the chief FAI researcher is distracted by writing a book on rationality instead of solving the myriad hard math problems that need to be solved for FAI? If this somehow makes sense, then can one also say that FAI is not nearly as important as it has been made out to be, since the champion of FAI feels comfortable taking a break from solving the problem to write a book on rationality (in other words, the world really isn't at stake)?

Am I off base? If this group is devoted to rationality then everyone should be subjected to rational analysis.

94 comments

Comments sorted by top scores.

comment by Kaj_Sotala · 2009-12-14T18:47:31.695Z · LW(p) · GW(p)

Upvoted because every group wants to be a cult and this seems to be a topic particularly susceptible to groupthink when faced with criticism. (That is, for this community.)

I also note posters making comments that deny the question because of who posed it or because of the tone in which it is made, while admitting the basic validity of the question, which is a rationality fail of epic proportions. If the question is valid, then it should be discussed, no matter how unpopular the poster happens to be within the inner circle. Period. Even if he isn't as eloquent and well-blessed with social skills as the most vocal people.

Nor does it look good to outsiders to see a valid question dismissed for such superficial reasons. This is separate from the direct effect the dismissal has on our own rationality. It will weaken both the rationalist and SIAI causes in the long run, as outsiders will see the causes as hypocritical and incapable of answering tough questions, respectively. Thus a smaller influx of interested outsiders.

In addition to the main post being voted down and the very validity of the question being denied, mormon2 has made at least one comment containing good points (even if I disagree with some of them). Before I gave it an upvote, it had been voted down to -2, apparently just because of the author.

(All of this being said, I find the answers that were provided to the OP's question to be mostly adequate.)

Replies from: wedrifid
comment by wedrifid · 2009-12-14T20:03:39.341Z · LW(p) · GW(p)

I also note posters making comments that deny the question because of who posed it or because of the tone in which it is made, while admitting the basic validity of the question, which is a rationality fail of epic proportions.

No. You are either mistaken or you are using some arbitrary definition of 'rationality' that I reject. It is not rational to ignore all social implications of a question.

It matters who is asking. It matters how it is asked. And it matters why it is asked. It also matters what you predict the effect of 'feeding' another answer to the questioner will be.

Period.

No. Just no. Nobody is obliged to answer any question on pain of epic rationality fail. Nobody is obliged to accept belligerent disrespect and respond to it as though a rapport were present. And nobody, not even Eliezer, is required to justify their professional goals at the whim of a detractor.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2009-12-14T22:12:34.807Z · LW(p) · GW(p)

You are either mistaken or you are using some arbitrary definition of 'rationality' that I reject. It is not rational to ignore all social implications of a question.

Of the two definitions of rationality, I was going for e-rationality. It can certainly be i-rational to take the social implications of a question into account to such a degree that one won't even consider it for fear of the consequences. Simply deciding that evolution must be false since the social consequences of believing otherwise would be too unpleasant, for instance. Or you could admit, in the safety of your head, that evolution must be true, but hide this belief because you know what the consequences would be. That might be rational both ways. But it is always a failure of e-rationality to refuse to consider a valid question because of social consequences. In the case of communities and not individuals, "to consider" means discussion within the community.

"Taking into account the social consequences" is fine in theory, but it's an easy path towards rationalizing away every argument from every person you don't like. I would be a bit more understanding if the poster in question would have been really abrasive and heaped scorn upon scorn in the opening post. I do agree that the one comment of his that got really voted down was over the line and possibly deserved it. (-8 is a bit overkill, though.) But the opening post was, if not exactly polite, not any harsher than critique in general is. Even if a person has a history of being slightly abrasive, by the very least he merits a hearing when he does compose a non-abrasive criticism. Especially so since there is, in my experience, a certain correlation between (real or perceived) abrasiveness and a resistance to groupthink - either because the person in question just happens to care little for social norms, or because their brain is atypically wired, giving them both weak social skills and a tendency not to fall victim to groupthink.

Nobody is obliged to answer any question on pain of epic rationality fail.

I never said anything about needing to answer any question. But here we have a situation where people are basically saying that the question is good and valid, but they don't like the person who asked it, so in order to slight him there won't be any discussion of it this time around. That's a different story.

And sure, certainly nobody is required to justify their professional goals to anyone who isn't paying their wage. That doesn't necessarily mean it's a good idea to refuse to justify those goals if the question is a good one. I am a SIAI donor; I'm not sure of the exact amount, but I think I've donated something in the region of $1500 so far. The feeling that SIAI doesn't really seem to be accomplishing much has caused several moments in the past when I've reconsidered whether I should donate or if my money would do more good elsewhere. Recently that worry has abated, but more because of SIAI folks other than Eliezer doing things. If I came across SIAI now and wasn't a donor yet, I can't imagine anything that'd throw up a bigger red flag than a refusal to answer the question "how can I know my money is actually helping the cause you're claiming to advance".

(Tangential, but since we got on the topic... even now, the lack of reporting of what SIAI people actually do remains one of my greatest annoyances with the organization. Anna Salamon posted a great report - as a comment in a discussion thread most potential SIAI donors will never read. My request to have it reposted on the SIAI blog was ignored. This is not the way to attract more donors, people. Not necessarily even the way to keep old ones.)

Replies from: AnnaSalamon, wedrifid, MichaelAnissimov
comment by AnnaSalamon · 2009-12-15T04:13:48.357Z · LW(p) · GW(p)

Anna Salamon posted a great report - as a comment in a discussion thread most potential SIAI donors will never read. My request to have it reposted on the SIAI blog was ignored. This is not the way to attract more donors, people. Not necessarily even the way to keep old ones.

Sorry, Kaj. We have been working on a more fleshed out "what we've done, what we're doing, and what more money would let us do" webpage, which should be up within the next week. We have had a large backlog of worthwhile activities and a comparative shortage of already-trained person-hours lately. That's part of the idea behind the visiting fellows program.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2009-12-15T13:09:58.959Z · LW(p) · GW(p)

No problem - I do know you folks are probably overworked; I know how it is myself from plenty of volunteer work for various groups over the years. Not to mention I still have at least one project for SIAI from at least a year back that I never got finished. Just do your best. :)

comment by wedrifid · 2009-12-15T03:10:04.629Z · LW(p) · GW(p)

But it is always a failure of e-rationality to refuse to consider a valid question because of social consequences.

It is a failure of e-rationality to alter your beliefs for social purposes. It is not an epic failure of e-rationality to not accept a particular social challenge. Moreover e-rationality makes no normative claims at all. "If the question is valid, then it should be discussed" is about your preferences and not something required by e-rationality to the degree 'epic fail period'. You can have different preferences to me, that's fine. But I take offence at your accusation of an epic failure of rationality based on advocating ignoring a question that you would choose to answer. It is nonsensical.

I never said anything about needing to answer any question.

It seems my assertion was ambiguous. I don't mean "need to answer any possible question". I insist that nobody is required to answer any question whatsoever.

But here we have a situation where people are basically saying that the question is good and valid, but they don't like the person who asked it, so in order to slight him there won't be any discussion of it this time around. That's a different story.

Substitute "in order to slight him" with "in order not to slight oneself" and that is exactly the story under consideration. It isn't about ignoring a question as a rhetorical ploy to counter an argument. In fact, saying that you would answer such a question under different circumstances serves to waive such a rhetorical use.

You are advocating a norm about the social obligations of people to engage with the challenges and you are advocating it using the threat of being considered 'epically irrational'. I absolutely refuse to submit myself to the norm you advocate and take umbrage at the manner of your assertion of it upon me (as a subset of 'us').

And sure, certainly nobody is required to justify their professional goals to anyone who isn't paying their wage. That doesn't necessarily mean it's a good idea to refuse to justify those goals if the question is a good one.

I have no objection to you suggesting that answering this particular question may be better than not answering it. You may even be right. I cannot claim to be a master of the intricacies of social politics by any stretch of the imagination.

If I came across SIAI now and wasn't a donor yet, I can't imagine anything that'd throw up a bigger red flag than a refusal to answer the question "how can I know my money is actually helping the cause you're claiming to advance".

I would like to see some more details of SIAI's approach and progress made public. Perhaps in the form of some extra PR on the SIAI website and posts here that link to it to allow discussion from the many interested lesswrong participants.

Before I gave it an upvote, it had been voted down to -2, apparently just because of the author.

I don't consider this inappropriate. Karma serves as an organic form of moderation that can alleviate the need for moderators to delete unhelpful posts. Having it serve as a collectively implemented approximation of banning is perhaps better than requiring an administrator to do the dirty work. This is better than having an official arbitrate such unpleasantness, particularly when the motivating conflict involves someone with administrative status! I expect a trend of improved posting, even a trend of toning down posts to the level of the OP, would quickly eliminate the tendency for mormon2's posts to be downvoted freely.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2009-12-15T13:03:11.100Z · LW(p) · GW(p)

I think we are talking at cross purposes here. And it seems I may have misunderstood part of the comments which led to that "epic rationality fail" line, in which case I apologize.

This line of yours first led me to see that we really are talking about two different things:

It is not an epic failure of e-rationality to not accept a particular social challenge.

I am puzzled as to why you would want to consider this a "social challenge". The opening post was formulated in a reasonable tone, asking reasonable and fully warranted questions. I had automatically assumed that any aspiring rationalist would, well, treat such a post like any other post by someone else. I certainly hadn't assumed that people would instead prefer to interpret it as a maneuver in some elaborate social battle, and I am utterly puzzled over why anyone would want to take it that way. Not only does that run a major risk of misinterpretation in case the person in question actually meant what they said in the post, it's also stooping down to their level and making things worse in case they did intend it as a social attack.

It seems my assertion was ambiguous. I don't mean "need to answer any possible question". I insist that nobody is required to answer any question whatsoever.

Okay, it appears I was ambiguous as well. I didn't mean that anyone would be required to answer any question, either. The tone I got from the comments was something along the lines of "this is an important question, and I do find it interesting and worthy enough to discuss and consider, but now that you have brought it up, I'll push it out of my mind, or at least delay discussion of it until later".

Does this mean people find this an important topic and would like to discuss it, but will now forever avoid the question? That would indeed be a rationality fail. Does it mean that some poster of a higher status should reword the same questions in his own words, and post them in the open thread / as his own top-level post, and then it would be acceptable to discuss? That just seems petty and pointless, when it could just as well be discussed here.

Certainly there's no requirement on anybody to answer any questions if they don't feel like it. But, how should I put this... it's certainly a worrying sign if they can be deflected from a question they'd otherwise consider, simply because the wrong person also happens to ask it.

Replies from: wedrifid
comment by wedrifid · 2009-12-15T15:06:29.877Z · LW(p) · GW(p)

I am not ignoring this, but I will not engage fully with all of it, because to do so effectively would require us to write several posts' worth of background just to be using the same meanings for words.

Does this mean people find this an important topic and would like to discuss it, but will now forever avoid the question? That would indeed be a rationality fail.

I agree, and rather hope not.

Certainly there's no requirement on anybody to answer any questions if they don't feel like it. But, how should I put this... it's certainly a worrying sign if they can be deflected from a question they'd otherwise consider, simply because the wrong person also happens to ask it.

I would not be quite as worried and would perhaps frame it slightly differently. I may share an underlying concern with maintaining an acceptable 'bullshit to content' ratio. Things like ignoring people or arguments can sometimes fall into that bullshit category and oft times are a concern to me. I think I have a somewhat different conception than you when it comes to "times when ignoring stuff is undesirable".

comment by MichaelAnissimov · 2009-12-15T23:46:03.543Z · LW(p) · GW(p)

I have been mentioning SIAI and what we're up to on my blog, which has about 3K readers, roughly every other day for several months. It may not be an "official" SIAI news source, but many people read it and gain knowledge about SIAI via that route.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2009-12-16T18:28:19.256Z · LW(p) · GW(p)

Now that you mention it, yes, your blog is probably the place with the most reporting about what SIAI does. Not enough even there, though.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-13T21:21:09.635Z · LW(p) · GW(p)

Vote this up if, as a matter of policy, when a post like this gets voted down far enough (though for some reason it still shows as 0), it's okay to remove it and tell the poster to resubmit as an Open Thread comment. I would like posts like this to automatically disappear when voted down far enough, but that would take a code change and those are hard to get.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-13T21:21:30.248Z · LW(p) · GW(p)

Vote this up if you disagree with the above policy.

Replies from: wedrifid, MrHen
comment by wedrifid · 2009-12-13T22:11:11.701Z · LW(p) · GW(p)

I am reluctant to agree with the aforementioned policy because I do not want to lose the comments on such posts. There have been cases where a '<= 0' post has been flawed but the replies have provided worthwhile insight into the topic being covered. I often search for things I can remember reading months in the past and it would frustrate me if they were not there to be found.

I like the sound of the (unfortunately code requiring) idea of having them not visible in the side bar.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-14T18:37:42.226Z · LW(p) · GW(p)

The mechanism I have for post removal wouldn't prevent the comments from being visible if you knew the old link.

Replies from: wedrifid
comment by wedrifid · 2009-12-14T19:22:02.422Z · LW(p) · GW(p)

Would search still work? (I assume it probably would...)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-14T19:35:00.790Z · LW(p) · GW(p)

I don't know. I too assume it probably would.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2009-12-14T20:04:26.243Z · LW(p) · GW(p)

If there's a link to it from somewhere, Google ought to index it and keep it indexed. A quick google returned a discussion suggesting that orphan pages with no incoming links will eventually be de-indexed (assuming the bot had the time to index them before they got orphaned), though it might take a long time. It's from 2004, though, and Google has revamped their systems plenty of times afterwards.

comment by MrHen · 2009-12-17T19:34:17.963Z · LW(p) · GW(p)

Why isn't this essentially included in the Preferences? There is a box stating, "Don't show me articles with a score less than X." Is that box not working? Or am I misunderstanding what it does? Or...?

comment by Psy-Kosh · 2009-12-13T03:39:00.005Z · LW(p) · GW(p)

My understanding is the reasoning goes something like this: This is a difficult problem. Eliezer, on his own, might not be smart enough to do this. Fundamentally smart people he can't quite create more of yet. But maybe he can create rationalists out of some of them, and then some of those may join SIAI. Besides, boosting human rationality overall is a good thing anyway.

comment by wedrifid · 2009-12-15T22:51:12.185Z · LW(p) · GW(p)

Since none of you understand what I am doing I will spell it out for you. My posts are formatted, written and styled intentionally for the response I desire.

You are claiming to be a troll?

Replies from: mormon2
comment by mormon2 · 2009-12-16T22:21:30.080Z · LW(p) · GW(p)

Maybe read a bit more carefully:

"I just wanted to see if anyone here could actually look past that (being the issues like spelling, grammar and tone etc.), specifically EY, and post some honest answers to the questions"

comment by Tyrrell_McAllister · 2009-12-17T01:31:38.617Z · LW(p) · GW(p)

mormon2, have you ever read other people on the Internet who write in the style and tone that you've adopted here? Are these people, in your experience, usually writing sense or nonsense? Are they, in your experience, usually worth the time that it takes to read them?

Replies from: Cyan
comment by Cyan · 2009-12-17T02:21:29.488Z · LW(p) · GW(p)

Either mormon2 is a deliberate troll, or we can expect that the Dunning-Kruger effect will prevent him from being able to answer your questions adequately.

comment by Scott Alexander (Yvain) · 2009-12-13T14:04:30.516Z · LW(p) · GW(p)

Let him who has never used time in a less than maximally-utility-producing way cast the first stone.

Replies from: Eliezer_Yudkowsky, Kaj_Sotala
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-13T14:23:02.619Z · LW(p) · GW(p)

Spending a year writing a book isn't exactly watching an episode of anime. The question would, under other circumstances, be just - but I don't care to justify myself here. Elsewhere, perhaps.

Replies from: Liron, wedrifid
comment by Liron · 2009-12-13T20:37:04.303Z · LW(p) · GW(p)

Estimated number of copies of said book I will buy: 30

Just putting it out there.

comment by wedrifid · 2009-12-13T22:48:18.024Z · LW(p) · GW(p)

The question would, under other circumstances, be just - but I don't care to justify myself here. Elsewhere, perhaps.

(My upvote unfortunately only brings the parent to 0.) This is the approach that I would have taken (and recommended). The author and context of a post matters. Coming from an author who is consistently arrogant and disrespectful changes the meaning of the question from curiosity to challenge. Justifying oneself in response serves to validate the challenge, signalling that such challenges are acceptable and lowering one's status. It is far better to convey, usually through minimal action, that the 'question' is inappropriate.

I would not have even framed the topic with the phrase 'be just'. You can explain the reasons motivating a decision (and even explain such reasoning for the purposes of PR) without it being a justification. It's just an interesting question.

comment by Kaj_Sotala · 2009-12-14T18:16:54.399Z · LW(p) · GW(p)

This sounds like a dodge - yes, we can certainly all agree that there are worse uses for SIAI's time, but the question of whether SIAI is working most effectively is still a valid and important one.

comment by Blueberry · 2009-12-13T07:27:36.839Z · LW(p) · GW(p)

Until about 2006 EY was working on papers like CEV and working on designs for FAI which he has now discarded as being wrong for the most part. He then went on a long period of blogging on Overcoming Bias and LessWrong and is now working on a book on rationality as his stated main focus. If this be accurate I would ask how does this make sense from someone who has made such a big deal about FAI, its importance, being first to make AI and ensure it is FAI?

I know! It's a Xanatos Gambit!

Replies from: wedrifid, Eliezer_Yudkowsky
comment by wedrifid · 2009-12-13T11:57:30.231Z · LW(p) · GW(p)

I know! It's a Xanatos Gambit!

You @#$@! I want those hours of my life back. Although I must admit Batman and his Crazy Prepared is kinda cool and I do love later series Wesley, seriously Badass. I was also tempted to create an account and add Althalus and Talen to the Divine Date page.

What is it about that site that is such a trap? We must anticipate a serious social payoff for engaging with stories and identifying the abstractions they represent.

I had been reading for half an hour before I even noticed I was on the TvTropes page and the warning sprang to mind.

Replies from: kpreid
comment by kpreid · 2009-12-13T12:51:44.780Z · LW(p) · GW(p)

Anyone want to write a LW post about (discussing) How To Be Less Unreasonably Attracted To TV Tropes And Other Such Sites? It's certainly a matter of rationality-considered-as-winning, since I've been losing somewhat as a result of wasting time on TV Tropes / Wikipedia / Everything2 / reading the archives of new-to-me webcomics.

Replies from: wedrifid
comment by wedrifid · 2009-12-13T13:08:24.311Z · LW(p) · GW(p)

Anyone want to write a LW post about (discussing) How To Be Less Unreasonably Attracted To TV Tropes And Other Such Sites?

I have had a lot of success with the LeechBlock plugin for Firefox. It provides enough of a stimulus to trigger executive control. Unfortunately Chrome is just a whole lot faster than my Rigged-For-Web-Development Firefox these days, and so I've lost that tool. I've actually been considering uninstalling Chrome for more or less this reason.

Incidentally, one of the sites I have blocked on LeechBlock is Lesswrong.com. This habit kills far more time than tropes does. I then read the posts from the RSS feed, but it block-reminds me of my self-imposed policy when I try to click through to comment.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-13T14:18:02.898Z · LW(p) · GW(p)

The key to strategy is not to choose a path to victory, but to choose so that all paths lead to a victory.

comment by Roko · 2009-12-17T09:42:22.752Z · LW(p) · GW(p)

Mormon2:

How would you act if you were Eliezer?

Bear in mind that you could either work directly on the problem, or you could try to cause others to work on it. If you think that you could cause an average of 10 smart people to work on the problem for every 6 months you spend writing/blogging, how much of your life would you spend writing/blogging versus direct work on FAI?

Replies from: mormon2
comment by mormon2 · 2009-12-18T01:53:01.395Z · LW(p) · GW(p)

"How would you act if you were Eliezer?"

If I made claims of having a TDT, I would post the math. I would publish papers. I would be sure I had accomplishments to back up the authority with which I speak. I would not spend a single second blogging about rationality. If I used a blog, it would be to discuss the current status of my AI work and to have a select group of intelligent people who could read and comment on it. If I thought FAI was that important, I would be spending as much time as possible finding the best people possible to work with, and would never resort to a blog to try to attract the right sort of people (I cite LW as evidence of the failure of blogging to attract the right people).

Oh, and for the record, I would never start a non-profit to do FAI research. I also would do away with the Singularity Summit and replace it with more AGI conferences. I would also do away with most of SIAI's programs and replace them, and the money they cost, with researchers and scientists along with some devoted angel funders.

Replies from: Mitchell_Porter, Roko, wedrifid
comment by Mitchell_Porter · 2009-12-18T12:02:07.586Z · LW(p) · GW(p)

I can see reasons for proceeding indirectly. Eliezer is 30. He thinks his powers may decline after age 40. It's said that it takes 10 years to become expert in a subject. So if solving the problems of FAI requires modes of thought which do not come naturally, writing his book on rationality now is his one chance to find and train people appropriately.

It is also possible that he makes mistakes. Eliezer and SIAI are inadequately supported and have always been inadequately supported. People do make mistakes under such conditions. If you wish to see how seriously the mainstream of AI takes the problem of Friendliness, just search the recent announcements from MIT, about a renewed AI research effort, for the part where they talk about safety issues.

I have a suggestion: Offer to donate to SIAI if Eliezer can give you a satisfactory answer. (The terms of such a deal may need to be negotiated first.)

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-12-19T18:38:45.994Z · LW(p) · GW(p)

search the recent announcements from MIT, about a renewed AI research effort, for the part where they talk about safety issues.

Do you have a link? I can't find anything seemingly relevant with a little searching.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2009-12-20T03:03:53.001Z · LW(p) · GW(p)

I can't find anything seemingly relevant with a little searching.

Neither can I - that's the point.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-12-20T03:06:59.340Z · LW(p) · GW(p)

No, I mean anything about the renewed AI research effort.

Replies from: Mitchell_Porter
comment by Roko · 2009-12-19T06:37:11.052Z · LW(p) · GW(p)

A good rationalist exercise is to try to predict what those who do not adopt your position would say in response to your arguments.

What criticisms do you think I will make of the statement:

I also would do away with the Singularity Summit and replace it with more AGI conferences.

?

comment by wedrifid · 2009-12-18T02:18:26.278Z · LW(p) · GW(p)

(I cite LW as evidence of the failure of blogging to attract the right people).

Couldn't help yourself. The remainder is a reasonable answer.

comment by zero_call · 2009-12-16T04:23:13.230Z · LW(p) · GW(p)

Well, first of all, the tone of your post is very passive-aggressive and defensive, and if you are trying to encourage a good, rational discussion like you say, then you should maybe be a little more self-conscious about your behavior.

Regarding the content of your post, I think it's a fair question. However, you seem to be quite a bit closed-minded and EY-centric about the entire issue. This person is just one employee of SIAI, which presumably can manage its own business, and no doubt has a much better idea of what its employees are doing than any of us do. Just because this one individual employee is engaging in these generalist activities doesn't mean the entire organization is unfocused and uncommitted to its stated purpose. So I would say your claim is wrong, or at least very unsubstantiated. Now, if you said that EY's salary was 50% of the SIAI budget, then that would be a whole different story, and so on and so forth.

Edit: And just FYI, I've actually voted this post up because I think your question is an interesting, fundamental application of rationality to an obviously important internal issue. So I hope you don't feel the need to think everyone is just a cultist -- try to be more positive, my friend!

comment by wedrifid · 2009-12-19T04:02:06.522Z · LW(p) · GW(p)

I believe it would be greatly informative to mormon2's experimental result if the profile were blocked. Mormon2 could confirm a hypothesis, and we would be rid of him: everybody wins!

comment by CronoDAS · 2009-12-14T21:59:57.111Z · LW(p) · GW(p)

Short answer to the original post:

SIAI has a big Human Resources problem. Eliezer had a really difficult time finding anyone to hire as an assistant/coworker at SIAI who didn't immediately set out to do something really, really stupid. So he's blogging and writing a book on rationality in the hope of finding someone worthwhile to work with.

Replies from: Eliezer_Yudkowsky, Blueberry
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-15T00:55:25.041Z · LW(p) · GW(p)

Michael Vassar is much, much better at the H.R. thing. We still have H.R. problems but could now actually expand at a decent clip given more funding.

Unless you're talking about directly working on the core FAI problem, in which case, yes, we have a huge H.R. problem. Phrasing above might sound somewhat misleading; it's not that I hired people for A.I. research but they failed at once, or that I couldn't find anyone above the level of basic stupid failures. Rather that it takes a lot more than "beyond the basic stupid failures" to avoid clever failures and actually get stuff done, and the basic stupid failures give you some idea of the baseline level of competence beyond which we need some number of sds.

Replies from: CronoDAS
comment by CronoDAS · 2009-12-15T07:45:33.807Z · LW(p) · GW(p)

Yeah, sorry for phrasing it wrong. I guess I should have said

Eliezer had a really difficult time finding anyone to hire as an assistant/coworker at SIAI who didn't immediately suggest something really, really stupid when told about what they were working on.

And yes, I did mean that you had trouble finding people to work directly on the core FAI problem.

comment by Blueberry · 2009-12-14T22:04:39.400Z · LW(p) · GW(p)

Now I'm really curious: what were the "really, really stupid" things that were attempted?

Replies from: CronoDAS
comment by Paul Crowley (ciphergoth) · 2009-12-14T08:51:24.321Z · LW(p) · GW(p)

The way you use "rationality" here reminds me of the way that commenters at Overcoming Bias so often say "But isn't it a bias that... (you disagree with me about X)". When you speak of rationality or bias, you should be talking about systematic, general means by which you can bend towards or away from the truth. Just invoking the words to put a white coat on whatever position you are defending devalues them.

comment by smoofra · 2009-12-14T23:56:13.402Z · LW(p) · GW(p)

I believe EY has already explained that he's trying to make more rationalists, so they can go and solve FAI.

comment by Tyrrell_McAllister · 2009-12-19T05:12:22.516Z · LW(p) · GW(p)

Irrelevant questions

The questions are relevant to how you ought to interpret your results. You need to answer them to know what to infer from the reaction to your experiment.

comment by MrHen · 2009-12-19T03:11:42.463Z · LW(p) · GW(p)

While they may have been irrelevant, the questions were certainly interesting. I could probably think of other irrelevant, interesting questions. I don't suppose you'd be willing to answer them?

comment by Tyrrell_McAllister · 2009-12-18T18:07:16.160Z · LW(p) · GW(p)

I am conducting a social experiment as I already explained. The posts are a performance for effect as part my experiment.

Have you yourself participated in this kind of experiment when it was being performed on you by a stranger on the Internet who used the style and tone that you've adopted here? If so, what payoff did you anticipate for doing so?

comment by AndrewKemendo · 2009-12-13T03:27:41.802Z · LW(p) · GW(p)

From my understanding, Mr. Yudkowsky has two separate but linked interests: rationality, which predominates in his writings and blog posts, and designing AI, which is his interaction with SIAI. While I disagree with their particular approach (or lack thereof), I can see how it is rational to follow both simultaneously toward similar ends.

I would argue that rationality and AI are really the same project at different levels and different stated outcomes. Even if an AI never develops, increasing rationality is a good enough goal in and of itself.

comment by Roko · 2009-12-19T18:59:30.901Z · LW(p) · GW(p)

You seem emotionally resistant to seeking out criticisms of your own arguments. In the long run, reality will punish you for this. Sorry.

comment by wedrifid · 2009-12-18T02:14:04.719Z · LW(p) · GW(p)

I am conducting a social experiment as I already explained. The posts are a performance for effect as part my experiment.

Some of Robin's recent posts have commented on how giving the appearance of trying hard to secure one's status actually lowers one's status. Now you are being exemplary.

comment by [deleted] · 2009-12-13T06:41:58.098Z · LW(p) · GW(p)

If this be accurate I would ask how does this make sense from someone who has made such a big deal about FAI, its importance, being first to make AI and ensure it is FAI?

This sentence confused me; it should probably be reworded. Something like:

"If this be accurate, I would ask how this makes sense from someone who has made such a big deal about FAI and about how important it is to both be the first to make AI and ensure that it is Friendly."

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-13T03:43:15.723Z · LW(p) · GW(p)

This belongs as a comment on the SIAI blog, not a post on Less Wrong.

Replies from: PeterS
comment by PeterS · 2009-12-13T11:05:44.490Z · LW(p) · GW(p)

Why?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-13T11:44:14.682Z · LW(p) · GW(p)

Because Less Wrong is about human rationality, not the Singularity Institute, and not me.

Replies from: PeterS, Wei_Dai, mormon2
comment by PeterS · 2009-12-13T20:55:04.893Z · LW(p) · GW(p)

Then from whence came the Q&A with Eliezer Yudkowsky, your fiction submissions (which I think lately have become of questionable value to LW), and other such posts which properly belong on either your personal blog or the SIAI blog?

I don't think that if any other organization were posting classified ads here that it would be tolerated.

However ugly it sounds, you've been using Less Wrong as a soap box. Regardless of our statement of purpose, you have made it, in part, about you and SIAI.

So I for one think that the OP's post isn't particularly out of place.

Edit: For the record I like most of your fiction. I just don't think it belongs here anymore.

Replies from: Blueberry, CarlShulman, Eliezer_Yudkowsky
comment by Blueberry · 2009-12-14T21:44:44.459Z · LW(p) · GW(p)

Edit: For the record I like most of your fiction. I just don't think it belongs here anymore.

That's like saying the Dialogues don't belong in Godel, Escher, Bach.

Replies from: PeterS
comment by PeterS · 2009-12-14T23:26:17.936Z · LW(p) · GW(p)

To be honest, maybe they didn't. Those crude analogies interspersed between the chapters - some as long as a chapter themselves! - were too often unnecessary. The book was long enough without them... but with them? Most could have been summed up in a paragraph.

If you need magical stories about turtles and crabs drinking hot tea before a rabbit shows up with a device which allows him to enter paintings to understand recursion, then you're never going to get it.

On the other hand, if the author's introduction of stories in that manner is necessary to explain his subject or thesis, then something is wrong either with the subject or with his exposition of it.

I know GEB is like the Book around Less Wrong, but what I'm saying here isn't heresy. Admittedly, Hofstadter had to write I Am a Strange Loop because people couldn't understand GEB.

Replies from: Vladimir_Nesov, Blueberry
comment by Vladimir_Nesov · 2009-12-15T00:25:03.197Z · LW(p) · GW(p)

It's a question of aesthetics. Of course math doesn't have to be presented this way, but a lot of people like the presentation.

You should make explicit what you are arguing. It seems to me that the cause of your argument is simply "I don't like the presentation", but you are trying to argue (rationalize) it as a universal. There is a proper generalization somewhere in between, like "it's not an efficient way to [something specific]".

comment by Blueberry · 2009-12-15T01:44:20.081Z · LW(p) · GW(p)

Admittedly, Hofstadter had to write I Am a Strange Loop because people couldn't understand GEB.

Wait, what? I Am a Strange Loop was written about 30 years later. Hofstadter wrote four other books on mind and pattern in the meantime, so this doesn't make any sense.

Replies from: PeterS
comment by PeterS · 2009-12-15T01:54:38.388Z · LW(p) · GW(p)

An interview with Douglas R. Hofstadter

What led you to write the book? (I Am a Strange Loop)

. . . two philosophers [Ken Williford and Uriah Kriegel] asked me if I would write about my thoughts about what an "I" is. They said that they had appreciated what I had said of these ideas in Gödel, Escher, Bach many years ago, but that they knew that I felt that my message had not really been absorbed—that Gödel, Escher, Bach had become popular but that the driving force behind the book had not really been perceived by most readers, let alone absorbed by a large number of people, and I was frustrated with this. I felt I had reached people, but not exactly as I had hoped. I had greater success with the book than I'd ever expected, but I didn't have the exact type of success that I wanted. . .

I thought, "This is a good opportunity to at least address the world of philosophers of mind. It's a narrow world, but if I can say it well, at least they'll know what I intended to do in my book GEB almost 30 years ago."

comment by CarlShulman · 2009-12-17T02:57:21.354Z · LW(p) · GW(p)

I don't think that if any other organization were posting classified ads here that it would be tolerated.

Actually, that's not true: classified ads for both SIAI and the Future of Humanity Institute have been posted. The sponsors of Overcoming Bias and Less Wrong have posted such announcements, and others haven't, which is an intelligible and not particularly ugly principle.

Replies from: PeterS
comment by PeterS · 2009-12-17T03:42:50.763Z · LW(p) · GW(p)

You're right. It is the sponsor's prerogative.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-13T21:16:50.616Z · LW(p) · GW(p)

I'm having some slight difficulty putting perceptions into words - just as I can't describe in full detail everything I do to craft my fictions - but I can certainly tell the difference between that and this.

Since I haven't spent a lot of time here talking about ideas along the lines of Pirsig's Quality, there are readers who will think this is a copout. And if I wanted to be manipulative, I would go ahead and offer up a decoy reason they can verbally acknowledge in order to justify their intuitive perceptions of difference - something along the lines of "Demanding that a specific person justify specific decisions in a top-level post doesn't encourage the spreading threads of casual conversation about rationality" or "In the end, every OBLW post was about rationality even if it didn't look that way at the time, just as much as the Quantum Physics Sequence amazingly ended up being about rationality after all." Heck, if I was a less practiced rationalist, I would be inventing verbal excuses like that to justify my intuitive perceptions to myself. As it is, though, I'll just say that I can see the difference perceptually, and leave it at that - after adding some unnecessary ornaments to prevent this reply from being voted down by people who are still too focused on the verbal.

PS: We post classified ads for FHI, too.

Replies from: PeterS, Eliezer_Yudkowsky
comment by PeterS · 2009-12-13T22:07:55.086Z · LW(p) · GW(p)

You could have just not replied at all. It would have saved me the time spent trying to write up a response to a reply which is nearly devoid of any content.

Incidentally, I don't have "intuitive" perceptions of difference here. It's pretty clear to me, and I can explain why. Though in my estimation, you don't care.

Replies from: wedrifid
comment by wedrifid · 2009-12-14T00:43:56.146Z · LW(p) · GW(p)

Incidentally, I don't have "intuitive" perceptions of difference here. It's pretty clear to me, and I can explain why. Though in my estimation, you don't care.

When I read Eliezer's fiction, the concepts from dozens of lesswrong posts float to the surface of my mind and are processed, and the implications become more intuitively grasped. Your brain may be wired somewhat differently, but for me fiction is useful.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-13T21:23:38.223Z · LW(p) · GW(p)

PPS: Probing my intuitions further, I suspect that if the above post had been questioning e.g. komponisto's rationality in the same tone and manner, I would have had around the same reaction of offtopicness for around the same reason.

comment by Wei Dai (Wei_Dai) · 2009-12-13T12:33:30.858Z · LW(p) · GW(p)

I can see a couple of reasons why the post does belong here:

  • It concerns Less Wrong itself, specifically its origin and motivation. This should be of interest to community members.
  • You (Eliezer) are the most visible advocate and practitioner of human rationality improvement. If it turns out that you are not particularly rational, then perhaps the techniques you have developed are not worth learning.

Psy-Kosh's answer seems perfectly reasonable to me. I wonder why you don't just give that answer, instead of saying the post doesn't belong here. Actually if I had known this was one of the reasons for starting OB/LW, I probably would have paid more attention earlier, because at the beginning I was thinking "Why is Eliezer talking so much about human biases now? That doesn't seem so interesting, compared to the Singularity/FAI stuff he used to talk about."

Replies from: timtyler
comment by timtyler · 2009-12-13T13:21:21.783Z · LW(p) · GW(p)

E.Y. has given that answer before:

Rationality: Common Interest of Many Causes

comment by mormon2 · 2009-12-13T16:55:04.818Z · LW(p) · GW(p)

I am going to respond to the general overall direction of your responses.

That is feeble, and for those who don't understand why, let me explain it.

Eliezer works for SIAI, which is a non-profit where his pay depends on donations. Many people on LW are interested in SIAI and some even donate to SIAI; others potentially could donate. When your pay depends on convincing people that your work is worthwhile, it is always worth justifying what you are doing. This becomes even more important when it looks like you're distracted from what you are being paid to do. (If you ever work with a VC and their money you'll know what I mean.)

When it comes to ensuring that SIAI continues to pay you, especially when you are the FAI researcher there, justifying why you are writing a book on rationality which in no way solves FAI becomes extremely important.

EY, ask yourself this: what percentage of the people who are interested in SIAI and donate are interested in FAI? Then ask what percentage are interested in rationality with no clear plan for how that gets to FAI. If the answer to the first is greater than the second, then you have a big problem, because one could interpret the use of your time writing this book on rationality as wasting donated money unless there is a clear reason how rationality books get you to FAI.

P.S. If you want to educate people to help you out as someone speculated you'd be better off teaching them computer science and mathematics.

Remember, my post drew no conclusions; so, for Yvain, I have cast no stones - I merely ask questions.

Replies from: Zack_M_Davis, Eliezer_Yudkowsky, Kaj_Sotala
comment by Zack_M_Davis · 2009-12-13T19:41:49.757Z · LW(p) · GW(p)

If you want to educate people to help you out as someone speculated you'd be better off teaching them computer science and mathematics.

Even on the margin? There are already lots of standard textbooks and curricula for mathematics and computer science, whereas I'm not aware of anything else that fills the function of Less Wrong.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-13T19:18:31.150Z · LW(p) · GW(p)

If you are previously a donor to SIAI, I'll be happy to answer you elsewhere.

If not, I am not interested in what you think SIAI donors think. Given your other behavior, I'm also not interested in any statements on your part that you might donate if only circumstances were X. Experience tells me better.

Replies from: mormon2
comment by mormon2 · 2009-12-13T20:36:05.840Z · LW(p) · GW(p)

I apologize for rippling your pond.

"If not, I am not interested in what you think SIAI donors think."

I never claimed to know what SIAI donors think; I asked you to think about that. But I think the fact that SIAI has as little money as it does after all these years speaks volumes about SIAI.

"Given your other behavior, "

Why because I ask questions that when answered honestly you don't like? Or is it because I don't blindly hang on every word you speak?

"I'm also not interested in any statements on your part that you might donate if only circumstances were X. Experience tells me better."

I never claimed I would donate nor will I ever as long as I live. As for experience telling you better, you have none, and considering the lack of money SIAI has and your arrogance you probably never will, so I will keep my own counsel on that part.

"If you are previously a donor to SIAI, I'll be happy to answer you elsewhere."

Why, because you don't want to disrupt the LW image of Eliezer the genius? Or is it because you really are distracted, as I suspect, or have given up because you cannot solve the problem of FAI, another good possibility? These questions are simple and easy to answer, and I see no real reason you can't answer them here and now. If you find the answers embarrassing then change; if not, then what have you got to lose?

If your next response is as feeble as the last ones have been, don't bother posting it for my sake. You claim you want to be a rationalist; then try applying reason to your own actions and answer the questions asked honestly.

Replies from: Zack_M_Davis, Dustin
comment by Zack_M_Davis · 2009-12-13T21:16:59.597Z · LW(p) · GW(p)

Why because I ask questions that when answered honestly you don't like?

The questions are fine. I think it's the repetitiveness, obvious hostility, and poor grammar and spelling that get on people's nerves.

Replies from: Nick_Tarleton, Eliezer_Yudkowsky
comment by Nick_Tarleton · 2009-12-13T21:33:31.809Z · LW(p) · GW(p)

Not to mention barely responding to repeated presentations of the correct answer.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-13T21:19:44.262Z · LW(p) · GW(p)

...and it being a top-level post instead of an Open Thread comment. People probably would've been a lot more forgiving if it'd been an Open Thread comment.

comment by Dustin · 2009-12-13T23:02:05.258Z · LW(p) · GW(p)

I never claimed I would donate nor will I ever as long as I live.

You're that sure that at this point in time you have all the information you'd ever need to make that decision?

comment by Kaj_Sotala · 2009-12-14T18:41:39.979Z · LW(p) · GW(p)

why you are writing a book on rationality which in no way solves FAI

Rationality is the art of not screwing up - seeing what is there instead of what you want to see, or are evolutionarily susceptible to seeing. When working on a task that may have (literally) earth-shattering consequences, there may not be a skill that's more important. Getting people educated about rationality is of prime importance for FAI.

comment by Tiiba · 2009-12-14T10:17:38.578Z · LW(p) · GW(p)

Eliezer wrote once that he wants to teach a little tiibaism to other FAI designers, so they're less likely to tile the world with smileys. He is an influential person among FAI designers, so perhaps he'll succeed at this. His book will probably be popular among AI programmers, who are the people who matter. And if he doesn't, he'll be the only tiibaist in the world working on a problem that suffers atiibaism poorly. So yes, teaching people how to properly exploit their hundred billion neurons can actually save the world.

Of course, if politicians or judges or advice columnists become more tiibal, that's great. I expect, however, that Eliezer will most influence people who already think he's got something good to say.

(This post assumes that many of Eliezer's fans are AI programmers. But is that true? How many AI programmers are reading this post?)

Replies from: wedrifid
comment by wedrifid · 2009-12-14T10:20:45.953Z · LW(p) · GW(p)

Ok, Tiiba is evidently significant enough to you to inspire your name... but what the? I'm confused.

Replies from: Tiiba
comment by Tiiba · 2009-12-14T16:15:10.964Z · LW(p) · GW(p)

What desquires clarifaction?

Replies from: wedrifid, Kaj_Sotala
comment by wedrifid · 2009-12-14T17:17:25.841Z · LW(p) · GW(p)

Tiiba, tiibaism, tiibaist, atiibaism and tiibal. More specifically I was wondering whether your use thereof in places where some would use derivatives of 'rational' was making a particular statement about possible misuse of said word or were perhaps making a reference to a meme that I have not been exposed to. Also, I was slightly curious as to whether you were using words like 'desquires' just to be a d@#$ or whether it was more along the lines of good natured quirkiness. So I know whether to banter or block.

Replies from: Tiiba, Morendil, Zack_M_Davis
comment by Tiiba · 2009-12-14T21:18:28.682Z · LW(p) · GW(p)

All right, all right. I tried to make myself look rational by defining rationality so that I would have it by definition.

I'm not sure how saying "desquires" makes or can make me a dathashdollar, but I assure you that I hold no malice for anyone. Except people who make intrusive advertising. And people who mis-program traffic lights. And people who make DRM. And bullies. And people who embed auto-starting music in their Web pages. And people who eat chicken.

Tiiba is the name of a fairly low-level demon from the anime Slayers. He was imprisoned by the wizard-priest Rezo in a body that looks like an oversized chicken. He stuck in my mind after I read a fanfic where he appeared, briefly but hilariously.

comment by Morendil · 2009-12-14T17:46:34.232Z · LW(p) · GW(p)

a meme that I have not been exposed to

Oh, that's likely to be in TVTropes then... (Hint!)

Replies from: wedrifid
comment by wedrifid · 2009-12-14T17:58:37.898Z · LW(p) · GW(p)

Oh, that's likely to be in TVTropes then... (Hint!)

Yeah, my googling took me to a TV trope page. Something about an Anime demon chicken. Was it an especially rational demon chicken?

Replies from: Tiiba
comment by Tiiba · 2009-12-14T21:19:55.187Z · LW(p) · GW(p)

By chicken standards, I guess so.

comment by Zack_M_Davis · 2009-12-14T17:32:53.274Z · LW(p) · GW(p)

The "D" and "S" keys are close to the "R" ...?

comment by Kaj_Sotala · 2009-12-14T18:27:44.888Z · LW(p) · GW(p)

I think the question was "what heck is Tiiba"?