The Singularity Institute's Arrogance Problem

post by lukeprog · 2012-01-18T22:30:56.058Z · LW · GW · Legacy · 307 comments

I intended Leveling Up in Rationality to communicate this:

Despite worries that extreme rationality isn't that great, I think there's reason to hope that it can be great if some other causal factors are flipped the right way (e.g. mastery over akrasia). Here are some detailed examples I can share because they're from my own life...

But some people seem to have read it and heard this instead:

I'm super-awesome. Don't you wish you were more like me? Yay rationality!

This failure (on my part) fits into a larger pattern of the Singularity Institute seeming too arrogant and (perhaps) being too arrogant. As one friend recently told me:

At least among Caltech undergrads and academic mathematicians, it's taboo to toot your own horn. In these worlds, one's achievements speak for themselves, so whether one is a Fields Medalist or a failure, one gains status purely passively, and must appear not to care about being smart or accomplished. I think because you and Eliezer don't have formal technical training, you don't instinctively grasp this taboo. Thus Eliezer's claim of world-class mathematical ability, in combination with his lack of technical publications, makes it hard for a mathematician to take him seriously, because his social stance doesn't pattern-match to anything good. Reading Eliezer's arrogance as evidence of technical cluelessness was one of the reasons I didn't donate until I met [someone at SI in person]. So for instance, your boast that at SI discussions "everyone at the table knows and applies an insane amount of all the major sciences" would make any Caltech undergrad roll their eyes; your standard of an "insane amount" seems to be relative to the general population, not relative to actual scientists. And posting a list of powers you've acquired doesn't make anyone any more impressed than they already were, and isn't a high-status move.

So, I have a few questions:

 

  1. What are the most egregious examples of SI's arrogance?
  2. On which subjects and in which ways is SI too arrogant? Are there subjects and ways in which SI isn't arrogant enough?
  3. What should SI do about this?

 

307 comments

Comments sorted by top scores.

comment by [deleted] · 2012-01-18T23:50:12.843Z · LW(p) · GW(p)

(I hope this doesn't come across as overly critical, because I'd love to see this problem fixed. I'm not dissing rationality, just its current implementation. You have declared Crocker's Rules before, so I'm giving you an emotional impression of what your recent rationality propaganda articles look like to me, and I hope it reads not as an attack but as something that can be improved upon.)

I think many of your claims of rationality powers (about yourself and other SIAI members) look really self-congratulatory and, well, lame. SIAI plainly doesn't appear all that awesome to me, except at explaining how some old philosophical problems have been solved somewhat recently.

You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?! Frankly, the only publicly visible person who strikes me as having some awesome powers is you, and from reading CSA, you seem to have had high productivity (in writing and summarizing) before you ever met LW.

Maybe there are all these awesome feats I just never get to see because I'm not at SIAI, but I've seen similar levels of confidence paired with similarly weak results in the New Age circles I hung out in years ago. Your beliefs are much saner, but as long as you can't be more effective than they are, I'll always have a problem taking you seriously.

In short, as you yourself noted, you lack a Tim Ferriss. Even for technical skills, there isn't much I can point at and say, "holy shit, this is amazing and original, I wanna learn how to do that, have all my monies!".

(This has little to do with the soundness of SIAI's claims about the Intelligence Explosion and so on, but it does decrease my confidence that conclusions reached through your epistemic rationality are to be trusted when the present results seem so lacking.)

Replies from: FiftyTwo, lukeprog, beoShaffer, Solvent
comment by FiftyTwo · 2012-01-19T00:32:05.987Z · LW(p) · GW(p)

Thought experiment

If the SIAI were a group of self-interested/self-deceiving individuals, similar to New Age groups, who had made up all this stuff about rationality and FAI as a cover for fundraising, what different observations would we expect?

Replies from: katydee, FAWS, RobertLumley
comment by katydee · 2012-01-19T17:34:39.476Z · LW(p) · GW(p)

I would expect them to:

  • 1- Never hire anybody or hire only very rarely
  • 2- Not release information about their finances
  • 3- Avoid high-profile individuals or events
  • 4- Laud their accomplishments a lot without producing concrete results
  • 5- Charge large amounts of money for classes/training
  • 6- Censor dissent on official areas, refuse to even think about the possibility of being a cult, etc.
  • 7- Not produce useful results

SIAI does not appear to fit 1 (I'm not sure what the standard is here), certainly does not fit 2 or 3, debatably fits 4, and certainly does not fit 5 or 6. 7 is highly debatable but I would argue that the Sequences and other rationality material are clearly valuable, if somewhat obtuse.

Replies from: private_messaging
comment by private_messaging · 2012-07-27T15:50:03.319Z · LW(p) · GW(p)

That goes for self-interested individuals with high rationality, purely material goals, and very low self-deception. The self-deceived case, on the other hand, covers people whose self-interest includes 'feeling important' and 'believing oneself to be awesome' and perhaps even 'taking a shot at becoming the saviour of mankind'. In that case you should expect them to see awesomeness in anything that might possibly be awesome (various philosophy, various confused texts that might be becoming mainstream for all we know, you get the idea), combined with an absence of anything that is definitely awesome and can't be trivial (a new algorithmic solution to a long-standing, well-known problem that others have worked on, something practically important enough, etc.).

comment by FAWS · 2012-01-19T00:49:42.482Z · LW(p) · GW(p)

I wouldn't have expected them to hire Luke. If Luke had been a member all along and everything had been planned just to make them look more convincing, that would imply a level of competence at such things from which I'd expect all-round better execution (which would have helped more than the slightly improved believability gained from faking a lower level of PR and other competence).

comment by RobertLumley · 2012-01-19T01:53:11.540Z · LW(p) · GW(p)

I would not expect their brand of rationality to work in my own life. Which it does.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-01-19T02:43:20.624Z · LW(p) · GW(p)

What evidence have you? Lots of New Age practitioners claim that New Age practices work for them. Scientology does not allow members to claim levels of advancement until they attest to "wins".

For my part, the single biggest influence that "their brand of rationality" (i.e. the Sequences) has had on me may very well be that I now know how to effectively disengage from dictionary arguments.

Replies from: FiftyTwo, RobertLumley
comment by FiftyTwo · 2012-01-19T19:58:59.437Z · LW(p) · GW(p)

Even if certain rationality techniques are effective, that's separate from the claims about the rest of the organisation: similar to the early level Scientology classes being useful social hacks but the overall structure less so.

Replies from: Blueberry
comment by Blueberry · 2012-03-27T12:17:45.542Z · LW(p) · GW(p)

the early level Scientology classes being useful social hacks

They are? Do you have a reference? I thought they were weird nonsense about pointing to things and repeating pairs of words and starting at corners of rooms and so on.

comment by RobertLumley · 2012-01-19T13:08:23.033Z · LW(p) · GW(p)

Markedly increased general satisfaction in life, better success at relationships, both intimate and otherwise, noticing systematic errors in thinking, etc.

I haven't bothered to collect actual data (which wouldn't do much good since I don't have pre-LW data anyway) but I am at least twice as happy with my life as I have been in previous years.

Replies from: Karmakaiser
comment by Karmakaiser · 2012-01-19T15:34:52.194Z · LW(p) · GW(p)

I haven't bothered to collect actual data

This is the core issue with rationality at present. Until and unless some intrepid self-trackers collect data on their personal lives post-Sequences, all we have is a collection of smart people who post nice anecdotes. I admit that, like you, I didn't have the presence of mind to start collecting data, as I can't keep a diary current. But without real data we will have continued trouble convincing people that this works.

Replies from: RobertLumley, gwern
comment by RobertLumley · 2012-01-19T16:24:34.228Z · LW(p) · GW(p)

I was thinking the other day that I desperately wished I had written down my cached thoughts (and more importantly, cached feelings) about things like cryonics (in particular), politics, or [insert LW topic of choice here] before reading LW so that I could compare them now. I don't think I had ever really thought about cryonics, or if I had, I had a node linking it to crazy people.

Actually, now that I think about it, that's not true. I remember thinking about it once when I first started in research and we were unfreezing lab samples, and considering whether or not cryonicists have a point. I don't remember what I felt about it, though.

Replies from: Karmakaiser
comment by Karmakaiser · 2012-01-19T16:28:14.788Z · LW(p) · GW(p)

One of the useful things about the internet is its record-keeping abilities and humans' natural tendency to comment on things they know nothing about. Are you aware of being on record on a forum or social media site pre-LW on issues that LW has dealt with?

Replies from: RobertLumley, gwern
comment by RobertLumley · 2012-01-19T16:41:09.404Z · LW(p) · GW(p)

Useful and harmful. ;-)

Yes, to an extent. I've had Facebook for about six years (I found HPMOR about 8 months ago, and LW about 7?), but I deleted the majority of easily accessible content and do not post anything particularly introspective on there. I know, generally, how I felt about more culturally popular memes; what I really wish I remembered, though, is how I felt about things like cryonics or the singularity, to which I never gave serious consideration before LW.

Edit: At one point, I wrote a program to click the "Older posts" button on Facebook so I could go back and read all of my old posts, but it's been made largely obsolete by the timeline feature.

comment by gwern · 2012-02-10T23:04:29.262Z · LW(p) · GW(p)

It's probably a bit late for many attitudes of mine, but I have made a stab at this by keeping copies of all my YourMorals.org answers and listing other psychometric data at http://www.gwern.net/Links#profile

(And I've retrospectively listed in an essay the big shifts that I can remember; hopefully I can keep it up to date and obtain a fairly complete list over my life.)

comment by gwern · 2012-01-25T03:55:35.392Z · LW(p) · GW(p)

Until and unless some intrepid self-trackers collect data on their personal lives post-Sequences, all we have is a collection of smart people who post nice anecdotes

IIRC, wasn't a bunch of data-collection done for the Bootcamp attendees, which was aimed at resolving precisely that issue?

comment by lukeprog · 2012-01-19T01:08:40.821Z · LW(p) · GW(p)

I appreciate the tone and content of your comment. Responding to a few specific points...

You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?!

There are many things we aren't (yet) good at. There are too many things about which to check the science, test, and update all at once. In fact, our ability to collaborate successfully with volunteers has greatly improved in the last month, in part because we implemented some advice from the GWWC gang, who are very good at collaborating with volunteers.

the only publicly visible person who strikes me as having some awesome powers is you

Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.

Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them and Carl Shulman. But you'd have to visit the Bay Area, and we can't afford to have him do nothing but conversations, anyway. If you want a taste, you can read his comment history, which consists of him writing the exactly correct thing to say in almost every comment he's made for the past several years.

Aaaaaaaaaand now Carl will slap me for setting expectations too high. But I don't think I'm exaggerating that much. Maybe I'll get by with another winky-face.

;)

Replies from: None, Karmakaiser
comment by [deleted] · 2012-01-19T02:47:19.874Z · LW(p) · GW(p)

I don't think you're taking enough of an outside view. Here's how these accomplishments look to "regular" people:

CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team.

You wrote something 11 years ago which you now consider defunct and which still is not a mainstream view in any field.

The Sequences are simply awesome.

You wrote a series of esoteric blog posts that some people like.

And he did manage to write the most popular Harry Potter fanfic of all time.

You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?

Finally, I suspect many people's doubts about SIAI's horsepower could be best addressed by arranging a single 2-hour conversation between them and Carl Shulman. But you'd have to visit the Bay Area, and we can't afford to have him do nothing but conversations, anyway. If you want a taste, you can read his comment history, which consists of him writing the exactly correct thing to say in almost every comment he's made for the past several years.

You have a guy who is pretty smart. Ok...

The point I'm trying to make is, muflax's diagnosis of "lame" isn't far off the mark. There's nothing here with the ability to wow someone who hasn't heard of SIAI before, or to encourage people to not be put off by arguments like the one Eliezer makes in the Q&A.

Replies from: atucker
comment by atucker · 2012-01-19T10:53:59.280Z · LW(p) · GW(p)

You re-wrote the story of Harry Potter. How is this relevant to saving the world, again?

It's actually been incredibly useful in establishing the credibility of every x-risk argument that I've had with people my age.

"Have you read Harry Potter and the Methods of Rationality?"

"YES!"

"Ah, awesome!"

merriment ensues

topic changes to something about things that people are doing

"So anyway the guy who wrote that also does...."

Replies from: None, private_messaging
comment by [deleted] · 2012-01-19T12:23:54.698Z · LW(p) · GW(p)

Again, take the outside view even further out. The kind of conversation you described only happens with people who have already read HPMoR; just telling people about the fic isn't really impressive. (Especially if we are talking about the 90+% of the population who know nothing about fanfiction.) Ditto for the Sequences; they're only impressive after the fact. Compare this to publishing a number of papers in a mainstream journal, which is a huge status boost even to people who have never actually read the papers.

Replies from: atucker
comment by atucker · 2012-01-19T12:38:24.655Z · LW(p) · GW(p)

I don't think that that kind of status converts nearly as well as establishing a niche of people who start adopting your values, and then talking to them.

Replies from: None
comment by [deleted] · 2012-01-19T13:46:01.835Z · LW(p) · GW(p)

Perhaps not, but Luke was using HPMoR as an example of an accomplishment that would help negate accusations of arrogance, and for the majority of "regular" people, hearing that SIAI published journal articles does that better than hearing that they published Harry Potter fanfiction.

Replies from: pjeby
comment by pjeby · 2012-01-23T06:37:32.395Z · LW(p) · GW(p)

for the majority of "regular" people, hearing that SIAI published journal articles does that better than hearing that they published Harry Potter fanfiction

The majority of "regular" people don't know what journals are; apart from the Wall Street Journal and the New England Journal of Medicine, they mostly haven't heard of any. If asked about journal articles, many would say, "you mean like a blog?" (if younger) or think you were talking about a diary or a newspaper (if older).

They have, however, heard of Harry Potter. ;-)

comment by private_messaging · 2012-07-27T16:02:23.776Z · LW(p) · GW(p)

You know what would be awesome: if Eliezer had written the original Harry Potter to obtain funding for SI.

Seriously, there are plenty of people whom I would not pay to work on AI who have accomplished far more than anyone at SI, in more relevant fields.

comment by Karmakaiser · 2012-01-19T15:31:14.155Z · LW(p) · GW(p)

Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.

I wasn't aware of Google's AGI team accepting CFAI. Is there a list of organizations that consider the Friendly AI issue important?

Replies from: jacob_cannell
comment by jacob_cannell · 2012-03-03T05:19:30.178Z · LW(p) · GW(p)

I wasn't even aware of "Google's AGI team"...

Replies from: lukeprog
comment by lukeprog · 2012-10-19T00:55:02.016Z · LW(p) · GW(p)

Update: please see here.

comment by beoShaffer · 2012-01-19T00:56:06.857Z · LW(p) · GW(p)

Building off of this and my previous comment, I think that more (and more visible) rationality verification could help. First off, opening your ideas up to tests generally reduces perceptions of arrogance. Secondly, successful results would have effects similar to those of the technical accomplishments I mentioned above. (Note that I expect wide-scale rationality verification to increase the amount of pro-LW evidence that can be easily presented to outsiders, not to increase my own confidence. Thus this isn't in conflict with conservation of evidence.)

comment by Solvent · 2012-01-19T11:18:53.677Z · LW(p) · GW(p)

In short, as you yourself noted, you lack a Tim Ferriss. Even for technical skills, there isn't much I can point at and say, "holy shit, this is amazing and original, I wanna learn how to do that, have all my monies!".

Eliezer is pretty amazing. He's written some brilliant fiction, and some amazing stuff in the Sequences, plus CFAI, CEV, and TDT.

comment by cousin_it · 2012-01-19T09:36:20.675Z · LW(p) · GW(p)

My #1 suggestion, by a big margin, is to generate more new formal math results.

My #2 suggestion is to communicate more carefully, like Holden Karnofsky or Carl Shulman. Eliezer's tone is sometimes too preachy.

comment by Viliam_Bur · 2012-01-19T15:28:22.736Z · LW(p) · GW(p)

SI is arrogant because it pretends to be even better than science, while failing to publish in significant scientific journals. If this does not seem like pseudoscience or a cult, I don't know what would.

So please either stop pretending to be so great or prove it! For starters, it is not necessary to publish a paper about AI; you can choose any other topic.

No offense; I honestly think you are all awesome. But there are some traditional ways to prove one's skills, and if you don't accept the challenge, you look like wimps. Even if the ritual is largely a waste of time (all signals are costly), there are thousands of people who have passed it, so a group of x-rational gurus should be able to use their magical powers and do it in five minutes, right?

Replies from: Bugmaster, DuncanS
comment by Bugmaster · 2012-01-21T00:41:04.593Z · LW(p) · GW(p)

Yeah. The best way to dispel the aura of arrogance is to actually accomplish something amazing. So, SIAI should publish some awesome papers, or create a powerful (1) AI capable of some impressive task like playing Go (2), or end poverty in Haiti (3), or something. Until they do, and as long as they're claiming to be super-awesome despite the lack of any non-meta achievements, they'll be perceived as arrogant.

(1) But not too powerful, I suppose.
(2) Seeing as Jeopardy is taken.
(3) In a non-destructive way.

Replies from: Regex
comment by Regex · 2016-05-12T17:09:28.029Z · LW(p) · GW(p)

2016 update: Go is now also taken.

The number of impressive tasks remaining approaches zero as (t -> inf)!

If not to AI or heat death, we're doomed to having already done everything amazing.

comment by DuncanS · 2012-01-19T22:00:08.616Z · LW(p) · GW(p)

There are indeed times you can get the right answer in five minutes (no, seconds), but it still takes the same length of time as for everyone else to write the thing up into a paper.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-01-20T08:15:27.897Z · LW(p) · GW(p)

How much is that "same length of time"? Hours? Days? If 5 days of work could make LW acceptable in scientific circles, is it not worth doing? Is it better to complain about why oh why more people don't take SI seriously?

Can some part of that work be outsourced? Just write the outline of the answer, then find some smart guy in India and pay him like $100 to write it up? Or, if money is not enough for people who could write the paper well, could you bribe someone by offering them co-authorship? Graduate students have to publish papers anyway, so if you give them a complete solution, they should be happy to cooperate.

Or set up a "scientific wiki" on the SI site, where the smartest people write the outlines of their articles and the lesser brains contribute by completing the texts.

These are my solutions, which seem rather obvious to me. I am not sure they would work, but I guess trying them is better than doing nothing. Could a group of x-rational gurus find seven more solutions in five minutes?

From outside, this seems like: "Yeah, I totally could do it, but I will not. Now explain to me why people who can do it are perceived as more skilled than me." -- "Because they showed everyone they can do it, duh."

Replies from: Benja
comment by Benya (Benja) · 2012-08-30T09:32:44.516Z · LW(p) · GW(p)

Upvoted for clearly pointing out the tradeoff (yes, publicly visible accomplishments that are easy to recognize as accomplishments may not be the most useful thing to work on, but not looking awesome is a price paid for that and needs to be taken into account in deciding what's useful). However, I want to point out that if I heard that an important paper was written by someone who was paid $100 and doesn't appear on the author list, my crackpot/fraud meter (as it relates to the people on the author list) would go ping-Ping-PING, whether that's fair or not. This makes me worry that there's still a real danger of SIAI sending the wrong signals to people in academia (for reasons similar to, but different from, those in the OP).

comment by beoShaffer · 2012-01-18T23:26:18.675Z · LW(p) · GW(p)

in combination with his lack of technical publication

I think it would help for EY to submit more of his technical work for public judgment. Clear proof of technical skill in a related domain makes claims less likely to come off as arrogant. For that matter it also makes people more willing to accept actions that they do perceive as arrogant.

comment by FiftyTwo · 2012-01-18T23:51:11.545Z · LW(p) · GW(p)

The claim that donating to SIAI is the charity donation with the highest expected return* always struck me as rather arrogant, though I can see the logic behind it.

The problem is, firstly, that it's an extremely self-serving statement (equivalent to "giving us money is the best thing you can ever possibly do"); even if true, its credibility is reduced by the claim coming from the same people who would benefit from it.

Secondly, it requires me to believe a number of claims which individually carry a burden of proof, and whose conjunction carries an even greater one. These include: "Strong AI is possible," "friendly AI is possible," "The actions of the SIAI will significantly affect the results of investigations into FAI," and "the money I donate will significantly improve the effectiveness of the SIAI's research" (I expect the relationship between research effectiveness and funding isn't linear). All of which I only have your word for.

Thirdly, contrast this with other charities that are known to be very effective and can prove it, and whose results affect presently suffering people (e.g. the Against Malaria Foundation).

Caveat: I'm not arguing that any of the claims are wrong, but all the arguments I have for them come from people with an incentive to get me to donate, so I have reasonable grounds for questioning the whole construct from outside the argument.

*Can't remember the exact wording but that was the takeaway of a headline in the last fundraiser.

Replies from: lukeprog
comment by lukeprog · 2012-01-19T01:26:54.249Z · LW(p) · GW(p)

The claim that donating to SIAI is the charity donation with the highest expected return* always struck me as rather arrogant

I feel like I've heard this claimed, too, but... where? I can't find it.

Can't remember the exact wording but that was the takeaway of a headline in the last fundraiser.

Here is the latest fundraiser; which line were you thinking of? I don't see it.

Replies from: None, Barry_Cotter
comment by [deleted] · 2012-01-19T01:35:19.025Z · LW(p) · GW(p)

I feel like I've heard this claimed, too, but... where? I can't find it.

Question #5.

Replies from: lukeprog
comment by lukeprog · 2012-01-19T01:53:28.828Z · LW(p) · GW(p)

Yup, there it is! Thanks.

Eliezer tends to be more forceful on this than I am, though. I seem to be less certain about how much x-risk reduction is purchased by donating to SI as opposed to donating to FHI or GWWC (because GWWC's members are significantly x-risk focused). But when this video was recorded, FHI wasn't working as much on AI risk (as it is now), and GWWC barely existed.

I am happy to report that I'm more optimistic about the x-risk reduction purchased per dollar when donating to SI now than I was 6 months ago. Because of stuff like this. We're getting the org into better shape as quickly as possible.

Replies from: curiousepic
comment by curiousepic · 2012-01-19T16:42:07.438Z · LW(p) · GW(p)

because GWWC's members are significantly x-risk focused

Where is this established? As far as I can tell, one cannot donate "to" GWWC, and none of their recommended charities are x-risk focused.

Replies from: Thrasymachus
comment by Thrasymachus · 2012-02-05T19:44:33.242Z · LW(p) · GW(p)

(Belated reply): I can only offer anecdotal data here, but as one of the members of GWWC I can say that many of the members are interested. Also, from listening to the directors, most of them are interested in x-risk issues as well.

You are right that GWWC isn't a charity (although it is likely to turn into one), and their recommendations are non-x-risk. Their rationale for recommending charities depends on reliable data, and x-risk is one of those areas where a robust "here's how much more likely a happy singularity will be if you give to us" analysis looks very hard.

comment by Barry_Cotter · 2012-01-19T01:35:38.482Z · LW(p) · GW(p)

I feel like I've heard this claimed, too, but... where? I can't find it.

Neither can I, but IIRC Anna Salamon did an expected-utility calculation which came up with eight lives saved per dollar donated, no doubt impressively caveated and with error bars aplenty.

Replies from: lukeprog
comment by lukeprog · 2012-01-19T01:52:57.162Z · LW(p) · GW(p)

I think you're talking about this video. Without watching it again, I can't remember if Anna says that SI donation could buy something like eight lives per dollar, or whether donation to x-risk reduction in general could buy something like eight lives per dollar.
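
For readers wondering what a figure like that even means, the rough shape of this kind of estimate is as follows (the symbols below are generic placeholders, not Anna's actual model):

\[
\frac{\text{expected lives saved}}{\text{dollar}} \;\approx\; \underbrace{\frac{\Delta P(\text{existential catastrophe averted})}{\text{dollar donated}}}_{\text{marginal effect of a donation}} \;\times\; \underbrace{N_{\text{lives at stake}}}_{\text{current population or more}}
\]

So, purely as illustrative arithmetic, a probability shift on the order of one-in-a-billion per dollar multiplied by billions of lives at stake yields figures in this range; the caveats and error bars live almost entirely in estimating that first factor.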

comment by Shmi (shminux) · 2012-01-18T23:11:04.513Z · LW(p) · GW(p)

Having been through physics grad school (albeit not of Caltech caliber), I can confirm that a lack of (real or false) modesty is a major red flag, and a tell-tale sign of a crank. Hawking does not refer to black-hole radiation as Hawking radiation, and Feynman did not call his diagrams Feynman diagrams, at least not in public. A thorough literature review in the introduction section of any worthwhile paper is a must, unless you are Einstein, or can reference your previous relevant paper where you dealt with it.

Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org (cs.DM or similar), properly referenced and formatted to conform with the prevailing standard (probably LaTeXed), and submitting them to conference proceedings and/or peer-reviewed journals. Anything less would be less than rational.

Replies from: XiXiDu, None
comment by XiXiDu · 2012-01-19T10:08:26.936Z · LW(p) · GW(p)

Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org...

Even Greg Egan managed to copublish papers on arxiv.org :-)

ETA

Here is what John Baez thinks about Greg Egan (science fiction author):

He's incredibly smart, and whenever I work with him I feel like I'm a slacker. We wrote a paper together on numerical simulations of quantum gravity along with my friend Dan Christensen, and not only did they do all the programming, Egan was the one who figured out a great approximation to a certain high-dimensional integral that was the key thing we were studying. He also more recently came up with some very nice observations on techniques for calculating square roots, in my post with Richard Elwes on a Babylonian approximation of sqrt(2). And so on!

That's actually what academics should be saying about Eliezer Yudkowsky, if the claims about him are true. How does an SF author manage to earn such a reputation instead?

Replies from: gwern, Pablo_Stafforini, mwengler, Viliam_Bur
comment by gwern · 2012-01-25T04:05:35.144Z · LW(p) · GW(p)

That actually explains a lot for me - when I was reading The Clockwork Rocket, I kept thinking to myself, 'how the deuce could anyone without a physics degree follow the math/physics in this story?' Well, here's my answer - he's still up on his math, and now that I check, I see he has a BS in math too.

Replies from: arundelo
comment by arundelo · 2012-01-26T16:23:46.344Z · LW(p) · GW(p)

I thought this comment by Egan said something interesting about his approach to fiction:

A few reviewers [of Incandescence] complained that they had trouble keeping straight the physical meanings of the Splinterites' [direction words]. This leaves me wondering if they've really never encountered a book before that benefits from being read with a pad of paper and a pen beside it, or whether they're just so hung up on the idea that only non-fiction should be accompanied by note-taking and diagram-scribbling that it never even occurred to them to do this. I realise that some people do much of their reading with one hand on a strap in a crowded bus or train carriage, but books simply don't come with a guarantee that they can be properly enjoyed under such conditions.

(I enjoyed Incandescence without taking notes. If, while I was reading it, I had been quizzed on the direction words, I would have done OK but not great.)

Edit: The other end of the above link contains spoilers for Incandescence. To understand the portion I quoted, it suffices to know that some characters in the story have their own set of six direction words (instead of "up", "down", "north", "south", "east", and "west").

Edit 2: I have a bit of trouble keeping track of characters in novels. When I read on my iPhone, I highlight characters' names as they're introduced, so I can easily refresh my memory when I forget who someone is.

Replies from: gwern
comment by gwern · 2012-01-26T16:45:52.558Z · LW(p) · GW(p)

Yes, he's pretty unapologetic about his elitism - if you aren't already able to follow his concepts or willing to do the work so you can, you are not his audience and he doesn't care about you. Which isn't a problem with Incandescence, whose directions sound perfectly comprehensible, but is much more of an issue with TCR, which builds up an entire alternate physics.

comment by Pablo (Pablo_Stafforini) · 2014-03-21T19:28:09.522Z · LW(p) · GW(p)

What's the source for that quote? A quick Google search failed to yield any relevant results.

Replies from: XiXiDu
comment by XiXiDu · 2014-03-22T09:41:15.979Z · LW(p) · GW(p)

What's the source for that quote? A quick Google search failed to yield any relevant results.

Private conversation with John Baez (I asked him if I am allowed to quote him on it). You can ask him to verify it.

comment by mwengler · 2012-01-19T15:55:49.873Z · LW(p) · GW(p)

To be fair, Eliezer gets good press from Professor Robin Hanson. This is one of the main bulwarks of my opinion of Eliezer and SIAI. (Other bulwarks include having had the distinct pleasure of meeting lukeprog at a few meetups and meeting Anna at the first meetup I ever attended. Whatever else is going on at SIAI, there is a significant amount of firepower in the rooms.)

Replies from: ScottMessick
comment by ScottMessick · 2012-01-21T23:47:14.667Z · LW(p) · GW(p)

Yes, and isn't it interesting to note that Robin Hanson sought his own higher degrees for the express purpose of giving his smart contrarian ideas (and way of thinking) more credibility?

comment by Viliam_Bur · 2012-01-19T15:09:16.874Z · LW(p) · GW(p)

How does an SF author manage to get such a reputation instead?

By publishing his results in the places where scientists publish.

comment by [deleted] · 2012-01-19T01:15:49.217Z · LW(p) · GW(p)

Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org (cs.DM or similar), properly referenced and formatted to conform with the prevailing standard (probably LaTeXed), and submitting them to conference proceedings and/or peer-reviewed journals. Anything less would be less than rational.

I agree wholeheartedly, of course -- except with the last sentence. There's a not-very-good argument that the opportunity cost of EY learning LaTeX is greater than the opportunity cost of having others edit afterward. There's also a not-very-good argument that EY doesn't lose terribly much from his lack of academic signalling credentials. Together these combine into a weak argument that the current course is in line with what EY wants, or perhaps would want if he knew all the relevant details.

Replies from: Maelin, lukeprog
comment by Maelin · 2012-01-19T01:30:18.869Z · LW(p) · GW(p)

For someone who knows how to program, learning LaTeX to a perfectly serviceable level should take at most one day's worth of effort, and most likely that effort would be spread diffusely throughout the process of using it, with maybe a couple of hours' dedicated introduction to begin with.

It is quite possible that, considering the effort required to find an editor and organise for that editor to edit an entire paper into LaTeX, compared with the effort required to write the paper in LaTeX in the first place, the additional effort cost of learning LaTeX may in fact pay for itself after less than one whole paper. It's very unlikely that it would take more than two.

Replies from: dbaupp
comment by dbaupp · 2012-01-19T02:12:35.927Z · LW(p) · GW(p)

It is quite possible that, considering the effort required to find an editor and organise for that editor to edit an entire paper into LaTeX, compared with the effort required to write the paper in LaTeX in the first place, the additional effort cost of learning LaTeX may in fact pay for itself after less than one whole paper

And one gets all the benefits of a text document while writing it (grep-able, version control, etc.).

(It should be noted that if one is writing LaTeX, it is much easier with a LaTeX-specific editor, or one with an advanced LaTeX mode.)
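
For a sense of how little is involved, a serviceable paper skeleton is only a handful of lines (a minimal sketch using common defaults; the class, packages, and file names here are illustrative, not anything SI actually uses):

    \documentclass{article}
    \usepackage{amsmath,amssymb}     % standard math environments and symbols
    \usepackage{graphicx}            % figures
    \usepackage[numbers]{natbib}     % numbered citations, e.g. \citep{pearl1988}

    \title{Paper Title}
    \author{Author One \and Author Two}

    \begin{document}
    \maketitle

    \begin{abstract}
    One paragraph summarizing the result.
    \end{abstract}

    \section{Introduction}
    Body text, with inline math like $P(A \mid B)$ and displayed equations:
    \begin{equation}
      P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.
    \end{equation}

    \bibliographystyle{plainnat}
    \bibliography{references}        % assumes a references.bib file alongside
    \end{document}

Journal-specific details (two-column layout, margins, reference style) usually come from swapping the document class for the one the journal or conference distributes, which is largely mechanical.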

comment by lukeprog · 2012-01-19T01:28:56.337Z · LW(p) · GW(p)

I'm not at all confident that writing (or collaborating on) academic papers is the most x-risk-reducing way for Eliezer to spend his time.

Replies from: Bugmaster, mwengler, None, shminux
comment by Bugmaster · 2012-01-21T03:44:12.438Z · LW(p) · GW(p)

Speaking of arrogance and communication skills: your comment sounds very similar to, "Since Eliezer is always right about everything, there's no need for him to waste time on seeking validation from the unwashed academic masses, who likely won't comprehend his profound ideas anyway". Yes, I am fully aware that this is not what you meant, but this is what it sounds like to me.

Replies from: lukeprog
comment by lukeprog · 2012-01-21T03:48:48.444Z · LW(p) · GW(p)

Interesting. That is a long way from what I meant. I just meant that there are many, many ways to reduce x-risk, and it's not at all clear that writing papers is the optimal way to do so, and it's even less clear that having Eliezer write them is.

Replies from: Bugmaster
comment by Bugmaster · 2012-01-21T03:58:25.983Z · LW(p) · GW(p)

Yes, I understood what you meant; my comment was about style, not substance.

Most people (myself included, to some non-trivial degree) view publication in academic journals as a very strong test of one's ideas. Once you publish your paper (or so the belief goes), the best scholars in the field will do their best to pick it apart, looking for weaknesses that you might have missed. Until that happens, you can't really be sure whether your ideas are correct.

Thus, by saying "it would be a waste of Eliezer's time to publish papers", what you appear to be saying is, "we already know that Eliezer is right about everything". And by combining this statement with saying that Eliezer's time is very valuable because he's reducing x-risk, you appear to be saying that either the other academics don't care about x-risk (in which case they're clearly ignorant or stupid), or that they would be unable to recognize Eliezer's x-risk-reducing ideas as being correct. Hence, my comment above.

Again, I am merely commenting on the appearance of your post, as it could be perceived by someone with an "outside view". I realize that you did not mean to imply these things.

Replies from: wedrifid
comment by wedrifid · 2012-01-21T04:34:27.140Z · LW(p) · GW(p)

Thus, by saying "it would be a waste of Eliezer's time to publish papers", what you appear to be saying is, "we already know that Eliezer is right about everything".

That really isn't what Luke appears to be saying. It would be fairer to say "a particularly aggressive reader could twist this so that it means..."

It may sometimes be worth optimising speech such that it is hard to even willfully misinterpret what you say (or to interpret it based on an already particularly high prior for "this statement will be arrogant"), but this is a different consideration from trying not to (unintentionally) appear arrogant to a neutral audience.

Replies from: JoshuaZ, Bugmaster
comment by JoshuaZ · 2012-01-27T20:27:49.427Z · LW(p) · GW(p)

That really isn't what Luke appears to be saying. It would be fairer to say "a particularly aggressive reader could twist this so that it means..."

For what it is worth, I had an almost identical reaction when reading the statement.

comment by Bugmaster · 2012-01-21T04:48:39.946Z · LW(p) · GW(p)

Fair enough; it's quite possible that my interpretation was too aggressive.

Replies from: wedrifid
comment by wedrifid · 2012-01-21T04:51:57.037Z · LW(p) · GW(p)

It's the right place for erring on the side of aggressive interpretation. We've been encouraged (and primed) to do so!

comment by mwengler · 2012-01-19T16:11:45.105Z · LW(p) · GW(p)

I think the evolution is towards a democratization of the academic process. One could say the cost of academia was so high in the Middle Ages that the smart move was filtering the heck out of participants, to at least have a chance of maximizing the utility of those scarce resources. Now those costs have been driven to nearly zero, with the largest remaining cost being the signal-to-noise problem: how does a smart person choose what to look at?

I think putting your signal into locations where the kind of person you would like to attract gathers is the best bet. Web publication of papers is one. Scientific meetings are another. I don't think you can find existing institutions more chock-full of people you would like to be involved with than the math-science-engineering academic institutions. Market within them.

If no one who can write an academic math paper is interested enough in EY's work to translate it into something his peers would recognize as valuable, then the emperor is wearing no clothes.

As a Caltech PhD applied physicist who has worked with optical interferometers both in real life and in QM calculations (published in journals), I find EY's stuff on interferometers incomprehensible. I would venture to say "wrong," but I wouldn't go that far without discussing it in person with someone.

Robin Hanson's endorsement of EY is the best credential he has for me. I am a Caltech grad and I love Hanson's "freakonomics of the future" approach, but his success at being associated with great institutions is not a trivial factor in my thinking I am right to respect him.

Get EY or lukeprog or Anna or someone else from SIAI on Russ Roberts' podcast. Robin has done it.

Overall, SIAI serves my purposes pretty well as is. But I tend to view SIAI as pushing a radical position about some sort of existential risk and beliefs about AI, where the real value is probably not quite as radical as what they push. An example from history would be BF Skinner and behaviorism. No doubt behavioral concepts and findings have been very valuable, but the extreme "behaviorism is the only thing, there are no internal states" behaviorism of its genius pusher BF Skinner is way less valuable than an eclectic theory that includes behaviorism as one piece.

This is a core dump, since you asked. I don't claim to be the best person to evaluate EY's interferometry claims, as my work was all single-photon (or linear, anyway) stuff and I have worked only a small bit with two-photon formalisms. And I am unsophisticated enough to think MWI doesn't pass the smell test, no matter how much LessWrong I've read.

Replies from: Adele_L
comment by Adele_L · 2012-06-03T08:05:17.752Z · LW(p) · GW(p)

Robin Hanson's endorsement of EY is the best credential he has for me.

Similarly, the fact that Scott Aaronson and John Baez seem to take him seriously is a significant credential for me.

comment by [deleted] · 2012-01-19T01:30:45.983Z · LW(p) · GW(p)

I thought we were talking about the view from outside the SIAI?

Replies from: lukeprog
comment by lukeprog · 2012-01-19T01:45:22.460Z · LW(p) · GW(p)

Clearly, Eliezer publishing technical papers would improve SI's credibility. I'm just pointing out that this doesn't mean that publishing papers is the best use of Eliezer's time. I wasn't disagreeing with you; just making a different point.

Replies from: shminux, None
comment by Shmi (shminux) · 2012-01-19T02:55:40.934Z · LW(p) · GW(p)

Publishing technical papers would be one of the better uses of his time; editing and formatting them probably is not. If you have no volunteers, you can easily find a starving grad student who would do it for peanuts.

Replies from: None
comment by [deleted] · 2012-01-20T15:50:40.593Z · LW(p) · GW(p)

Well, they've got me for free.

Replies from: shminux
comment by Shmi (shminux) · 2012-01-20T18:26:24.352Z · LW(p) · GW(p)

You must be allergic to peanuts.

Replies from: None
comment by [deleted] · 2012-01-20T18:38:51.374Z · LW(p) · GW(p)

Not allergic, per se. But I doubt they would willingly throw peanuts at me, unless perhaps I did a trick with an elephant.

comment by [deleted] · 2012-01-19T01:47:00.207Z · LW(p) · GW(p)

I'm not disagreeing with you either.

comment by Shmi (shminux) · 2012-01-19T02:51:18.636Z · LW(p) · GW(p)

I would see what the formatting standards are in the relevant journals and find a matching document class or a LyX template. Someone other than Eliezer can certainly do that.

comment by [deleted] · 2012-01-19T01:18:05.765Z · LW(p) · GW(p)

I've asked around a bit, and we can't recall when exactly EY claimed "world-class mathematical ability". As far as I can remember, he's been pretty up-front about wishing he were better at math. I seem to remember him looking for a math-savvy assistant at one point.

If this is the case, it sounds like EY has a Chuck Norris problem, i.e., his mythos has spread beyond its reality.

Replies from: lukeprog, Tyrrell_McAllister
comment by lukeprog · 2012-01-19T01:36:49.772Z · LW(p) · GW(p)

Yes. At various times we've considered hiring an advanced math tutor for EY, to take him to the next level more quickly. He's pretty damn good at math, but he's not Terence Tao.

Replies from: None
comment by [deleted] · 2012-01-19T01:37:40.868Z · LW(p) · GW(p)

So did you ask your friend where this notion of theirs came from?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-01-19T10:05:02.657Z · LW(p) · GW(p)

I have a memory of EY boasting about how he learned to solve high school/college level math before the age of ten, but I couldn't track down where I read that.

Replies from: Kaj_Sotala, Desrtopa, mwengler
comment by Kaj_Sotala · 2012-01-19T18:36:14.302Z · LW(p) · GW(p)

Ah, here is the bit I was thinking about:

I don't think I'd have had any trouble following that problem at age 7, which is when I was taught to solve systems of equations.

comment by Desrtopa · 2012-01-19T14:34:23.679Z · LW(p) · GW(p)

I don't remember the post, but I'm pretty sure I remember that Eliezer described himself as a coddled math prodigy, not having been made to train seriously and compete, so he lags behind math prodigies who were made to hone their skills that way, like Marcello.

comment by mwengler · 2012-01-19T15:38:34.446Z · LW(p) · GW(p)

It's in the Wayback Machine link in the post you are commenting on!

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-01-19T18:31:42.693Z · LW(p) · GW(p)

I hadn't read that link before, so it was somewhere else, too.

comment by Tyrrell_McAllister · 2012-01-19T14:23:42.727Z · LW(p) · GW(p)

I've asked around a bit, and we can't recall when exactly EY claimed "world-class mathematical ability". As far as I can remember, he's been pretty up-front about wishing he were better at math. I seem to remember him looking for a math-savvy assistant at one point.

I too don't remember that he ever claimed to have remarkable math ability. He's said that he was a "spoiled math prodigy" (or something like that), meaning that he showed precocious math ability while young but wasn't really challenged to develop it. Right now, his knowledge seems to be around the level of a third- or fourth-year math major, and he's never claimed otherwise. He surely has the capacity to go much further (as many people who reach that level do), but he hasn't even claimed that much, has he?

Replies from: private_messaging
comment by private_messaging · 2012-07-27T16:16:36.981Z · LW(p) · GW(p)

This leaves one wondering how the hell one could be this concerned about AI risk but not study math properly. How the hell can one go on about Bayesian this and Bayesian that but not study? How can one trust one's intuitions about how much computational power is needed for AGI, and not want to improve those intuitions?

I've speculated elsewhere that he would likely be unable to implement general Bayesian belief propagation on a graph, or even know what is involved (it's an NP-complete problem in general, and the accuracy of the solution is up to heuristics. Yes, heuristics. Biased ones, too). That's very bad when it comes to understanding rationality, as you will start going on with maxims like "update all your beliefs" etc., which look outright stupid to, e.g., me (I assure you I can implement a Bayesian belief propagation graph), and it triggers my "it's another annoying person who talks about things he has no clue about" reflex.

If you talk about Bayesian this and Bayesian that, you had better know the mathematics very well, because in practice all those equations get awfully hairy on general graphs (not just trees). If you don't know relevant math very well and you call yourself Bayesian, you are professing a belief in belief. If you do not make a claim of extreme mathematical skills and knowledge, and you go on about Bayesian this and that, other people will have to assume extreme mathematical skills and knowledge out of politeness.
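
For reference, the updates such an implementation has to compute are the standard sum-product messages on a factor graph (textbook form; $N(\cdot)$ denotes a node's neighbours):

\[
\mu_{x \to f}(x) = \prod_{g \in N(x) \setminus \{f\}} \mu_{g \to x}(x),
\qquad
\mu_{f \to x}(x) = \sum_{\mathbf{x}_{N(f) \setminus \{x\}}} f\!\left(\mathbf{x}_{N(f)}\right) \prod_{y \in N(f) \setminus \{x\}} \mu_{y \to f}(y).
\]

On a tree these messages converge to exact marginals; on a graph with cycles ("loopy" propagation) the same updates are only a heuristic approximation, which is the point about heuristics above.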

Replies from: David_Gerard
comment by David_Gerard · 2012-07-27T20:46:17.259Z · LW(p) · GW(p)

If you don't know relevant math very well and you call yourself Bayesian, you are professing a belief in belief.

Yes.

comment by malthrin · 2012-01-19T00:48:45.128Z · LW(p) · GW(p)

There's a phrase that the tech world uses to describe the kind of people you want to hire: "smart, and gets things done." I'm willing to grant "smart", but what about the other one?

The Sequences and HPMoR are fantastic introductory/outreach writing, but they're all a few years old at this point. The rhetoric about SI being more awesome than ever doesn't square with the trend I observe* in your actual productivity. To be blunt, why are you happy that you're doing less with more?

*I'm sure I don't know everything SI has actually done in the last year, but that's a problem too.

Replies from: malthrin, ciphergoth
comment by malthrin · 2012-01-19T01:18:42.085Z · LW(p) · GW(p)

To educate myself, I visited the SI site and read your December progress report. I should note that I've never visited the SI site before, despite having donated twice in the past two years. Here are my two impressions:

  • Many of these bullet points are about work in progress and (paywalled?) journal articles. If I can't link it to my friends and say, "Check out this cool thing," I don't care. Tell me what you've finished that I can share with people who might be interested.
  • Lots on transparency and progress reporting. In general, your communication strategy seems focused on people who already are aware of and follow SIAI closely. These people are loud, but they're a small minority of your potential donors.
Replies from: lukeprog
comment by lukeprog · 2012-01-19T01:33:41.013Z · LW(p) · GW(p)

Tell me what you've finished that I can share with people who might be interested.

Of course, things we finished before December 2011 aren't in the progress report. E.g. The Singularity and Machine Ethics.

In general, your communication strategy seems focused on people who already are aware of and follow SIAI closely.

Not really. We're also working on many things accessible to a wider crowd, like Facing the Singularity and the new website. Once the new website is up we plan to write some articles for mainstream magazines and so on.

comment by Paul Crowley (ciphergoth) · 2012-01-20T08:09:40.497Z · LW(p) · GW(p)

"smart and gets things done" I think originates with Joel Spolsky:

http://www.joelonsoftware.com/articles/fog0000000073.html

comment by RolfAndreassen · 2012-01-19T06:33:35.664Z · LW(p) · GW(p)

I agree with what has been said about the modesty norm of academia; I speculate that it arises because if you can avoid washing out of the first-year math courses, you're already one or two standard deviations above average, and thus you are in a population in which achievements that stood out in a high school (even a good one) are just not that special. Bragging about your SAT scores, or even your grades, begins to feel a bit like bragging about your "Participant" ribbon from sports day. There's also the point that the IQ distribution in a good physics department is not Gaussian; it is the top end of a Gaussian, sliced off. In other words, there's a lower bound and an exponential frequency decay from there. Thus, most people in a physics department are on the lower end of their local peer group. I speculate that this discourages bragging because the mass of ordinary plus-two-SDs doesn't want to be reminded that they're not all that bright.
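
To make the "sliced-off Gaussian" picture concrete (a one-dimensional sketch under the usual normality assumption, with the admission threshold $a$ purely illustrative): if ability $X \sim \mathcal{N}(\mu, \sigma^2)$ in the general population and a department only admits people with $X \ge a$, then inside the department

\[
p(x \mid X \ge a) = \frac{\phi\!\left(\tfrac{x-\mu}{\sigma}\right)}{\sigma \left(1 - \Phi\!\left(\tfrac{a-\mu}{\sigma}\right)\right)}, \qquad x \ge a,
\]

which just above the cutoff falls off roughly like an exponential with rate $(a-\mu)/\sigma^2$; hence most members sit only a little above the admission threshold.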

However, all that aside: Are academics the target of this blog, or of lukeprog's posts? Propaganda, to be effective, should reach the masses, not the elite - although there's something to be said for "Get the elite and the masses will follow", to be sure. Although academics are no doubt over-represented among LessWrong readers and indeed among regular blog readers, still they are not the whole world. Can we show that a glowing listing of not-very-specific awesomenesses is counterproductive to the average LW reader, or the average prospective recruit who might be pointed to lukeprog's post? If not, the criticism rather misses its mark. Academics can always be pointed to the Sequences instead; what we're missing is a quick introduction for the plus-one-SD who is not going to read three years of blog output.

Replies from: Karmakaiser
comment by Karmakaiser · 2012-01-19T15:52:07.820Z · LW(p) · GW(p)

So if I could restate the norms of academia vis-à-vis modesty: "Do the impossible. But don't forget to shut up as well."

Is that a fair characterization?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-01-19T22:45:45.574Z · LW(p) · GW(p)

Well, no, I don't think so. Most academics do not work on impossible problems, or think of this as a worthy goal. So it should be more like "Do cool stuff, but let it speak for itself".

Moderately related: I was just today in a meeting to discuss a presentation that an undergraduate student in our group will be giving to show her work to the larger collaboration. On her first page she had

Subject

Her name

Grad student helping her

Dr supervisor no 1

Dr supervisor no 2

And to start off our critique, supervisor 1 mentioned that, in the subculture of particle physics, it is not the custom to list titles, at least for internal presentations. (If you're talking to a general audience the rules change.) Everyone knows who you are and what you've done! Thus, he gave the specific example that, if you mention "Leon", everyone knows you speak of Leon Lederman, the Nobel-Prize winner. But as for "Dr Lederman", pff, what's a doctorate? Any idiot can be a doctor and many idiots (by physics standards, that is) are; if you're not a PhD it's at least assumed that you're a larval version of one. It's just not a very unusual accomplishment in these circles. To have your first name instantly recognised is a much greater accolade. Doctors are thirteen to the dozen, but there is only one Leon.

Of course this is not really modesty, as such; it's a particular form of status recognition. We don't make much overt show of it, but everyone knows their position in the hierarchy!

Replies from: asr, jsteinhardt
comment by asr · 2012-01-20T02:44:09.456Z · LW(p) · GW(p)

I have seen this elsewhere in the academy as well.

At many elite universities, professors are never referred to as Dr-so-and-so. Everybody on the faculty has a doctorate. They are Professor-so-and-so. At some schools, I'm told they are referred to as Mr or Mrs-so-and-so. Similar effect: "we know who's cool and high-status and don't need to draw attention to it."

comment by jsteinhardt · 2012-01-20T08:26:25.817Z · LW(p) · GW(p)

Wow, I didn't even consciously recognize this convention, although I would definitely never, for instance, add titles to the author list of a paper. So I seem to have somehow picked it up without explicitly deciding to.

comment by Solvent · 2012-01-19T11:10:08.644Z · LW(p) · GW(p)

I've recommended this before, I think.

I think that you should get Eliezer to say the accurate but arrogant-sounding things, because everyone already knows he's like that. You yourself, Luke, should be more careful to come across as humble.

If you need people to say arrogant things, make them ghost-write for Eliezer.

Personally, I think that a lot of Eliezer's arrogance is deserved. He's explained most of the big questions in philosophy either by personally solving them or by brilliantly summarizing other people's solutions. CFAI was way ahead of its time, as TDT still is. So he can feel smug. He's got a reputation as an arrogant eccentric genius anyway.

But the rest of the organisation should try to be more careful. You should imitate Carl Shulman rather than Eliezer.

Replies from: mwengler, J_Taylor
comment by mwengler · 2012-01-19T15:34:29.546Z · LW(p) · GW(p)

I think having people ghost-write for Eliezer is a distinctly suboptimal solution in the long run. It removes integrity from the process. SI would become insufficiently distinguishable from Scientology or a political party if it did this.

Eliezer is a real person. He is not "big brother" or some other fictional figurehead used to manipulate the followers. The kind of people you want, and have, following SI or lesswrong will discount Eliezer too much when (not if) they find out he has become a fiction employed to manipulate them.

Replies from: Solvent
comment by Solvent · 2012-01-20T00:55:01.852Z · LW(p) · GW(p)

Yeah, I kinda agree. I was slightly exaggerating my position for clarity.

Maybe not full on ghost-writing. But occasionally, having someone around who can say what he wants without further offending anybody can be useful. Like, part of the reason the Sequences are awesome is that he personally claims that they are. Also, Eliezer says:

I should note that if I'm teaching deep things, then I view it as important to make people feel like they're learning deep things, because otherwise, they will still have a hole in their mind for "deep truths" that needs filling, and they will go off and fill their heads with complete nonsense that has been written in a more satisfying style.

So occasionally SingInst needs to say something that sounds arrogant.

I just think that when possible, Eliezer should say those things.

comment by J_Taylor · 2012-01-19T21:38:33.269Z · LW(p) · GW(p)

He's explained most of the big questions in philosophy either by personally solving them or by brilliantly summarizing other people's solutions.

As a curiosity, what would the world look like if this were not the case? I mean, I'm not even sure what it means for such a sentence to be true or false.

Addendum: Sorry, that was way too hostile. I accidentally pattern-matched your post to something that an Objectivist would say. It's just that, in professional philosophy, there does not seem to be a consensus on what a "problem of philosophy" is. Likewise, there does not seem to be a consensus on what a solution to one would look like. It seems that most "problems" of philosophy are dismissed, rather than ever solved.

Replies from: Solvent
comment by Solvent · 2012-01-20T00:50:52.129Z · LW(p) · GW(p)

Here are examples of these philosophical solutions. I don't know which of these he solved personally, and which he simply summarized others' answers to:

  • What is free will? Ooops, wrong question. Free will is what a decision-making algorithm feels like from the inside.

  • What is intelligence? The ability to optimize things.

  • What is knowledge? The ability to constrain your expectations.

  • What should I do with the Newcomb's Box problem? TDT answers this.

...other examples include inventing Fun theory, using CEV to make a better version of utilitarianism, and arguing for ethical injunctions using TDT.

And so on. I know he didn't come up with these on his own, but at the least he brought them all together and argued convincingly for his answers in the Sequences.

I've been trying to figure out these problems for years. So have lots of philosophers. I have read these various philosophers' proposed solutions, and disagreed with them all. Then I read Eliezer, and agreed with him. I feel that this is strong evidence that Eliezer has actually created something of value.

Replies from: J_Taylor, lessdazed
comment by J_Taylor · 2012-01-20T08:45:26.646Z · LW(p) · GW(p)

What is free will? Ooops, wrong question. Free will is what a decision-making algorithm feels like from the inside.

I admire the phrase "what an algorithm feels like from the inside". This is certainly one of Yudkowsky's better ideas, if it is one of his. I think that one can see the roots of it in G.E.B. Still, this may well count as something novel.

Nonetheless, Yudkowsky is not the first compatibilist.

What is intelligence? The ability to optimize things.

One could define the term in such a way. I tend to take an instrumentalist view on intelligence. However, "the ability to optimize things" may well be a thing. You may as well call it intelligence, if you are so inclined.

This, nonetheless, may not be a solution to the question "what is intelligence?". It seems as though most competent naturalists have moved past the question.

What is knowledge? The ability to constrain your expectations.

I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate?

What should I do with the Newcomb's Box problem? TDT answers this.

I have absolutely no knowledge of the history of Newcomb's problem. I apologize.

Further apologies for the following terse statements:

I don't think Fun theory is known by academia. Also, it looks like, at best, a contemporary version of eudaimonia.

The concept of CEV is neat. However, I think if one were to create an ethical version of the pragmatic definition of truth, "The good is the end of inquiry" would essentially encapsulate CEV. Well, as far as one can encapsulate a complex theory with a brief statement.

TDT is awesome. Anticipated by Hofstadter's superrationality, but so what?

I don't mean to discount the intelligence of Yudkowsky. Further, it is extremely unkind of me to be so critical of him, considering how much he has influenced my own thoughts and beliefs. However, he has never written a "Two Dogmas of Empiricism" or a Naming and Necessity. Philosophical influence is something that probably can only be seen, if at all, in retrospect.

Of course, none of this really matters. He's not trying to be a good philosopher. He's trying to save the world.

Replies from: Solvent, MatthewBaker
comment by Solvent · 2012-01-21T00:14:49.019Z · LW(p) · GW(p)

I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate?

Okay, the Gettier problem. I can explain the Gettier problem, but it's just my explanation, not Eliezer's.

The Gettier problem is pointing out problems with the definition of knowledge as justified true belief. "Justified true belief" (JTB) is an attempt at defining knowledge. However, it falls into the classic philosophical trap of misusing intuition, and has a variety of other issues. Lukeprog discusses the weakness of conceptual analysis here.

Also, it's only for irrational beings like humans that there is a distinction between 'justified' and 'belief.' An AI would simply have degrees of belief in something according to the strength of the justification, using Bayesian rules. So JTB is clearly a human-centered definition, which doesn't usefully define knowledge anyway.

Incidentally, I just re-read this post, which says:

Yudkowsky once wrote, "If there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it." When I read that I thought: What? That's Quinean naturalism! That's Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!

So perhaps Eliezer didn't create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Leibniz and calculus, really.

Replies from: asr, J_Taylor
comment by asr · 2012-01-21T01:03:09.925Z · LW(p) · GW(p)

Also, it's only for irrational beings like humans that there is a distinction between 'justified' and 'belief.' An AI would simply have degrees of belief in something according to the strength of the justification, using Bayesian rules. So JTB is clearly a human-centered definition, which doesn't usefully define knowledge anyway.

I am skeptical that AIs will do pure Bayesian updates -- it's computationally intractable. An AI is very likely to have beliefs or behaviors that are irrational, to have rational beliefs that cannot be effectively proved to be such, and no reliable way to distinguish the two.
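(To make the contrast concrete, here is a minimal sketch with an invented three-hypothesis coin example: exact Bayesian updating is trivial over a tiny explicit hypothesis space, and becomes hopeless as the space grows combinatorially.)

```python
# Exact Bayesian updating over an explicit hypothesis space. Easy for three
# hypotheses; hopeless when the space is every joint setting of a few hundred
# binary variables, which is why real systems must approximate.

priors = {"fair coin": 0.90, "two-headed coin": 0.05, "two-tailed coin": 0.05}
p_heads = {"fair coin": 0.5, "two-headed coin": 1.0, "two-tailed coin": 0.0}

def update(beliefs, observed_heads):
    # Bayes' rule: posterior is proportional to prior times likelihood.
    unnormalized = {
        h: p * (p_heads[h] if observed_heads else 1 - p_heads[h])
        for h, p in beliefs.items()
    }
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

beliefs = priors
for _ in range(5):  # observe five heads in a row
    beliefs = update(beliefs, observed_heads=True)

print(beliefs)  # "two-headed coin" now holds most of the probability mass;
                # "two-tailed coin" has been ruled out entirely.
```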

Replies from: XiXiDu
comment by XiXiDu · 2012-01-21T10:25:44.066Z · LW(p) · GW(p)

I am skeptical that AIs will do pure Bayesian updates -- it's computationally intractable.

Isn't this also true for expected utility-maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.

An AI is very likely to have beliefs or behaviors that are irrational...

Yes, I wonder why there is almost no talk about biases in AI systems. Ideal AIs might be perfectly rational, but computationally limited artificial systems will have completely new sets of biases. As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are none, just as humans do, but on very different occasions. Or take the answers of IBM Watson. Some were wrong, but in completely new ways. That's a real danger in my opinion.

Replies from: wedrifid, lessdazed
comment by wedrifid · 2012-01-22T22:52:11.342Z · LW(p) · GW(p)

Is a definition of utility that is precise enough to be usable even possible? Honest question.

Honest answer: Yes. For example 1 utilon per paperclip.
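(Sketched out with invented action names and numbers, since the definition really is that simple; whether anything could tractably maximize it over real-world actions is a separate question.)

```python
# A toy paperclip utility function: precise enough to compute with, even though
# the actions and probabilities below are made up purely for illustration.

def utility(world_state):
    return world_state["paperclips"]  # 1 utilon per paperclip; nothing else counts

def expected_utility(lottery):
    # A lottery is a list of (probability, resulting world state) pairs.
    return sum(p * utility(state) for p, state in lottery)

build_factory = [(0.6, {"paperclips": 1_000_000}), (0.4, {"paperclips": 0})]
fold_by_hand = [(1.0, {"paperclips": 500})]

print(expected_utility(build_factory))  # 600000.0
print(expected_utility(fold_by_hand))   # 500.0
```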

comment by lessdazed · 2012-01-23T15:29:42.673Z · LW(p) · GW(p)

As a simple example take my digicam, which can detect faces. It sometimes recognizes faces where indeed there are no faces, just like humans do but yet on very different occasions.

I appreciate the example. It will serve me well. Upvoted.

comment by J_Taylor · 2012-01-23T21:43:11.204Z · LW(p) · GW(p)

I am aware of the Gettier Problem. I just do not see the phrase "the ability to constrain one's expectations" as a proper conceptual analysis of "knowledge." If it were a conceptual analysis of "knowledge", it probably would be vulnerable to Gettierization. I love Bayesian epistemology. However, most Bayesian accounts I have encountered either do away with knowledge-terms or redefine them in such a way that they entirely fail to match the folk term "knowledge". Attempting to define "knowledge" is probably attempting to solve the wrong problem. This is a significant weakness of traditional epistemology.

So perhaps Eliezer didn't create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Hooke and calculus, really.

I am not entirely familiar with Eliezer's history. However, he is clearly influenced by Hofstadter, Dennett, and Jaynes. From just the first two, one could probably assemble a working account which is weaker than, but bears surface resemblances to, Eliezer's espoused beliefs.

Also, I have never heard of Hooke independently inventing calculus. It sounds interesting however. Still, are you certain you are not thinking of Leibniz?

Replies from: Solvent
comment by Solvent · 2012-01-23T23:04:46.173Z · LW(p) · GW(p)

Still, are you certain you are not thinking of Leibniz?

ooops, fixed.

I'll respond to the rest of what you said later.

comment by MatthewBaker · 2012-01-20T17:14:44.756Z · LW(p) · GW(p)

To quickly sum up Newcomb's problem: it's a decision problem in which the choice that a traditional probability-based decision theory labels "rational" -- taking both boxes -- predictably leaves you with far less money. TDT takes steps to avoid getting stuck two-boxing, while still giving the standard answers in the vast majority of other situations.
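(A toy expected-value calculation, using the standard $1,000,000/$1,000 amounts and an assumed 99%-accurate predictor, shows the payoff asymmetry; whether this conditional expectation is even the right quantity to compute is exactly what the rival decision theories argue about.)

```python
# Expected payoffs in Newcomb's problem, conditioning on the agent's own choice.
# This is the "evidential" calculation; causal decision theory disputes that it
# is the right expectation to take, which is what makes the problem contentious.

def newcomb_payoffs(predictor_accuracy=0.99, big=1_000_000, small=1_000):
    # One-boxing: you get the big prize iff the predictor foresaw one-boxing.
    one_box = predictor_accuracy * big
    # Two-boxing: you always get the small prize, plus the big prize only when
    # the predictor wrongly expected you to one-box.
    two_box = small + (1 - predictor_accuracy) * big
    return one_box, two_box

one_box, two_box = newcomb_payoffs()
print(f"one-boxing: ${one_box:,.0f}")  # $990,000
print(f"two-boxing: ${two_box:,.0f}")  # $11,000
```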

Replies from: J_Taylor
comment by J_Taylor · 2012-01-20T18:29:51.336Z · LW(p) · GW(p)

Apologies, I know what Newcomb's problem is. I simply do not know anything about its history and the history of its attempted solutions.

comment by lessdazed · 2012-01-23T15:26:56.292Z · LW(p) · GW(p)

The ability to optimize things

...efficiently.

What is knowledge? The ability to constrain your expectations.

Most readers will misinterpret that.

What should I do with the Newcomb's Box problem? TDT answers this.

The question for most was/is instead "Formally, why should I one-box on Newcomb's problem?"

comment by Shmi (shminux) · 2012-01-19T00:06:49.731Z · LW(p) · GW(p)

What should SI do about this?

I think that separating instrumental rationality from the Singularity/FAI ideas will help. Hopefully this project is coming along nicely.

Replies from: lukeprog
comment by lukeprog · 2012-01-19T02:06:50.520Z · LW(p) · GW(p)

Hopefully this project is coming along nicely.

Yes, we're full steam ahead on this one.

comment by Thrasymachus · 2012-02-05T20:32:12.158Z · LW(p) · GW(p)

(I was going to write a post on 'why I'm skeptical about SIAI', but I guess this thread is a good place to put it. This was written in a bit of a rush - if it sounds like I am dissing you guys, that isn't my intention.)

I think the issue isn't so much 'arrogance' per se - I don't think many of your audience would care about accurate boasts - but rather your arrogance isn't backed up with any substantial achievement:

You say you're right on the bleeding edge in very hard bits of technical mathematics ("we have 30-40 papers which could be published on decision theory" in one of lukeprog's Q&As, wasn't it?), yet as far as I can see none of you have published anything in any field of science. The problem is (as far as I can tell) you've been making the same boasts about all these advances you are making for years, and they've never been substantiated.

You say you've solved all these important philosophical questions (Newcomb, Quantum mechanics, Free will, physicalism, etc.), yet your answers are never published, and never particularly impress those who are actual domain experts in these things - indeed, a complaint I've heard commonly is that Lesswrong just simply misunderstand the basics. An example: I'm pretty good at philosophy of religion, and the sort of arguments Lesswrong seems to take as slam-dunks for Atheism ("biases!" "Kolmogorov complexity!") just aren't impressive, or even close to the level of discussion seen in academia. This itself is no big deal (ditto the MWI, phil of mind), but it makes for an impression of being intellectual dilettantes spouting off on matters you aren't that competent in. (I'm pretty sure most analytic philosophers roll their eyes at all the 'tabooing' and 'dissolving problems' - they were trying to solve philosophy that way 80 years ago!) Worse, my (admittedly anecdotal) survey suggests a pretty mixed reception from domain-experts in stuff that really matters to your project, like probability theory, decision theory etc.

You also generally talk about how awesome you all are via the powers of rationalism, yet none of you have done anything particularly awesome by standard measures of achievement. Writing a forest of blog posts widely reputed to be pretty good doesn't count. Nor does writing lots of summaries of modern cogsci and stuff.

It is not all bad. Because there are lots of people who are awesome by conventional metrics and do awesome things who take you guys seriously, and meeting these people has raised my confidence that you guys are doing something interesting. But reflected esteem can only take you so far.

So my feeling is basically 'put up or shut up'. You guys need to build a record of tangible/'real world' achievements, like writing some breakthrough papers on decision theory (or any papers on anything) which are published and taken seriously in mainstream science, a really popular book on 'everyday rationality', going off and using rationality to make zillions from the stock market, or whatever. I gather you folks are trying to do some of these: great! Until then, though, your 'arrogance problem' is simply that you promise lots and do little.

Replies from: lukeprog
comment by lukeprog · 2012-04-01T07:58:40.333Z · LW(p) · GW(p)

"we have 30-40 papers which could be published on decision theory"

No, that wasn't it. I said 30-40 papers of research. Most of that is strategic research, like Carl Shulman's papers, not decision theory work.

Otherwise, I almost entirely agree with your comments.

comment by Dr_Manhattan · 2012-01-19T14:38:19.043Z · LW(p) · GW(p)

I think Eli, as the main representative of SI, should be more careful about how he does things, and resist his natural instinct to declare people stupid (-> Especially <- if he's basically right)

Case in point: http://www.sl4.org/archive/0608/15895.html That could have been handled more politically and with more face-saving for the victim. Now you have this guy and at least one "friend" with loads of free time going around putting down anything associated with Eliezer or SI on the Internet. With five minutes of extra thinking instead of typing, this could have been largely avoided. Eli has to realize that he's in a good position to needlessly hurt his (and our) own causes.

Another case in point was handling the Roko affair. There is doing the right thing, but you can do it without being an asshole (also IMO the "ownership" of LW policies is still an unresolved issue, but at least it's mostly "between friends"). If something like this needs to be done, Eli needs to pass the keyboard to cooler heads.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2012-01-19T15:56:11.929Z · LW(p) · GW(p)

Case in point: http://www.sl4.org/archive/0608/15895.html That could have been handled more politically and with more face-saving for the victim.

Note: happened five years ago

Replies from: Multiheaded, Dr_Manhattan
comment by Multiheaded · 2012-01-19T18:16:27.608Z · LW(p) · GW(p)

Certainly anyone building a Serious & Official image for themselves should avoid mentioning any posteriors not of the probability kind in their public things.

comment by Dr_Manhattan · 2012-01-19T16:49:46.541Z · LW(p) · GW(p)

Already noted, and I'm guessing the situation improved. But it's still a symptom of a harmful personality trait.

comment by Incorrect · 2012-01-19T01:11:10.100Z · LW(p) · GW(p)

Why don't SIAI researchers decide to definitively solve some difficult unsolved mathematics, programming, or engineering problem as proof of their abilities?

Yes, it would waste time that could have been spent on AI-related philosophy, but it would unambiguously demonstrate the competence of SIAI.

Replies from: WrongBot
comment by WrongBot · 2012-01-19T08:33:49.739Z · LW(p) · GW(p)

You mean, like decision theory? Both Timeless Decision Theory (which Eliezer developed) and Updateless Decision Theory (developed mostly by folks who are now SI Research Associates) are groundbreaking work in the field, and both are currently being written up for publication, I believe.

comment by Vaniver · 2012-01-20T01:39:09.683Z · LW(p) · GW(p)

There are two recurring themes: peer-reviewed technical results, and intellectual firepower.

If you want to show people intellectual firepower and the awesomeness of your conversations, tape the conversations. Just walk around with a recorder going all day, find the interesting bits later, and put them up for people to listen to.

But... you're not selling "we're super bright," you're selling "we're super effective." And for that you need effectiveness. Earnest, bright people wasting their effort is an old thing, and with goals as large as yours it's difficult to see the difference between progress and floundering.

comment by Raemon · 2012-01-18T23:07:04.666Z · LW(p) · GW(p)

I don't know how to address your particular signalling problem. But a question I need answered for myself: I wouldn't be able to tell the difference between the SIAI folks being "reasonably good at math and science" and "actually being really good - the kind of good they'd need to be for me to give them my money."

ARE there straightforward tests you could hypothetically take (or which some of you may have taken) which probably wouldn't actually satisfy academics, but which are perfectly reasonable benchmarks we should expect you to be able to complete to demonstrate your equivalent education?

Replies from: abramdemski
comment by abramdemski · 2012-01-19T00:57:11.633Z · LW(p) · GW(p)

Why shouldn't the tests satisfy academics?

Why not use something like the GRE with subject tests, plus an IQ test and other relevant tests?

Replies from: Nick_Tarleton, asr, Raemon
comment by Nick_Tarleton · 2012-01-19T15:47:35.852Z · LW(p) · GW(p)

Crackpot Index:

10 points for pointing out that you have gone to school, as if this were evidence of sanity.

I'm not sure, but I think this is roughly how "look, I did great on the GRE!" would sound to someone already skeptical. It's the sort of accomplishment that sounds childish to point out outside of a very limited context.

comment by asr · 2012-01-19T02:29:13.441Z · LW(p) · GW(p)

There are two big problems with standardized tests.

First, the standard tests are badly calibrated for measuring the high-performing tail of the distribution. Something like 6% of all GRE takers get a perfect score on the math portion. So GREs won't separate good from very good.
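(Rough arithmetic behind that claim, assuming quant scores among test-takers are approximately normal and taking the ~6% figure at face value:)

```python
# If ~6% of test-takers hit the maximum score, then everyone above roughly the
# 94th percentile of the test-taking population gets the same (perfect) score
# and cannot be distinguished from one another. The 6% figure is the one cited
# above and is an assumption for this illustration; it varies by year.
from scipy.stats import norm

ceiling_z = norm.ppf(1 - 0.06)
print(f"Ceiling sits at about {ceiling_z:.2f} standard deviations")  # ~1.55
```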

Second, aptitude for doing GRE-style or IQ-style math problems isn't known to be a close correlate for real ability. Universities are full of people with stellar test scores who don't ever amount to anything. On the other hand, Richard Feynman, who was very smart and very hard working, had a measured IQ of something like 125, which is not all that impressive as a test score.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2012-01-19T11:57:51.179Z · LW(p) · GW(p)

125???! Sh*t, I've got to start working harder. (source?)

Replies from: billswift, Karmakaiser
comment by billswift · 2012-01-19T14:47:29.494Z · LW(p) · GW(p)

I don't know a source for the number, but in one of his popular books he mentioned that Mensa contacted him and he responded that his IQ wasn't high enough, which means it was less than 130.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2012-01-19T15:52:10.780Z · LW(p) · GW(p)

Knowing Feynman, this might well have been a joke at their expense.

Replies from: arundelo, wedrifid
comment by arundelo · 2012-01-19T16:42:18.990Z · LW(p) · GW(p)

According to Feynman, he tested at 125 when he was a schoolboy. (Search for "IQ" in the Gleick biography.)

Gwern says:

There are a couple reasons to not care about this factoid:

  • Feynman was younger than 15 when he took it [....]
  • [I]t was one of the 'ratio' based IQ tests - utterly outdated and incorrect by modern standards.
  • Finally, it's well known that IQ tests are very unreliable in childhood; kids can easily bounce around compared to their stable adult scores.

Steve Hsu says:

I suspect that this test emphasized verbal, as opposed to mathematical, ability. Feynman received the highest score in the country by a large margin on the notoriously difficult Putnam mathematics competition exam, although he joined the MIT team on short notice and did not prepare for the test. [...] It seems quite possible to me that Feynman's cognitive abilities might have been a bit lopsided -- his vocabulary and verbal ability were well above average, but perhaps not as great as his mathematical abilities. I recall looking at excerpts from a notebook Feynman kept as an undergraduate. While the notes covered very advanced topics -- including general relativity and the Dirac equation -- they also contained a number of misspellings and grammatical errors. I doubt Feynman cared very much about such things.

comment by wedrifid · 2012-01-19T16:22:23.739Z · LW(p) · GW(p)

Knowing Feynman, this might well have been a joke at their expense.

It is a joke at their expense. The question is whether he based it on a true premise.

comment by Karmakaiser · 2012-01-19T15:47:43.901Z · LW(p) · GW(p)

125 is the average IQ of a Ph.D. I'm not sure what the average is for specific domains, so I can't say if that is incredibly low for a Physics Ph.D.

comment by Raemon · 2012-01-19T01:58:23.315Z · LW(p) · GW(p)

Why shouldn't the tests satisfy academics?

Because people aren't rational and it's silly to pretend otherwise?

comment by wedrifid · 2012-01-20T17:30:30.629Z · LW(p) · GW(p)

What are the most egregious examples of SI's arrogance?

Public tantrums, shouting and verbal abuse. Those are status displays that pay off for tribal chieftains and some styles of gang leader. They aren't appropriate for leaders of intellectually oriented charities. Eliezer thinking he can get away with that is the biggest indicator of arrogance that I've noticed thus far.

Replies from: Bugmaster
comment by Bugmaster · 2012-01-21T01:16:26.151Z · LW(p) · GW(p)

To be fair, while I personally do perceive the SIAI as being arrogant, I haven't seen any public tantrums. As far as I can tell, all their public discourse has been quite civil.

Replies from: wedrifid
comment by wedrifid · 2012-01-21T02:06:49.880Z · LW(p) · GW(p)

To be fair, while I personally do perceive the SIAI as being arrogant, I haven't seen any public tantrums. As far as I can tell, all their public discourse has been quite civil.

The most significant example was the Roko incident. The relevant threads and comments were all censored during the later part of his tantrum. Not a good day in the life of Eliezer's reputation.

Replies from: Bugmaster, Solvent
comment by Bugmaster · 2012-01-21T02:14:19.114Z · LW(p) · GW(p)

Fair enough; I was unaware of the Roko incident (understandably so, since apparently it was Sovieted from history). I have now looked it up elsewhere, though. Thanks for the info.

comment by Solvent · 2012-01-21T05:52:07.349Z · LW(p) · GW(p)

I tried to look up this Roko incident, and from what I could see, Eliezer just reacted wildly to someone saying something he thought was dangerous. So his deleting everything could be justified without him necessarily being egotistical.

But can you elaborate on what happened, please?

Replies from: wedrifid
comment by wedrifid · 2012-01-21T06:15:50.860Z · LW(p) · GW(p)

So him deleting everything could be justified without him necessarily being egotistical.

Oh, of course. The deletion just explains Bug's unfamiliarity, it isn't an arrogance example itself.

But can you elaborate on what happened, please?

Rationalwiki.

Replies from: Prismattic, Solvent
comment by Prismattic · 2012-01-21T19:18:42.555Z · LW(p) · GW(p)

I'm sort of pleased to see that I guessed roughly what this episode was about despite having arrived at LessWrong well after it unhappened.+ But if the Rationalwiki description is accurate, I'm now really confused about something new.

I was under the impression that Lesswrong was fairly big on the Litany of Gendlin. But an AI that could do the things Roko proposed (something on which I place vanishingly small probability, fortunately) could also retrospectively figure out who was being willfully ignorant or failing to reach rational conclusions for which they had sufficient priors.

It's disconcerting, after watching so much criticism of the rest of humanity finding ways to rationalize around the "inevitability" of death, to see transhumanists finding ways to hide their minds from their own "inevitable" conclusions.

+Since most people who would care about this subject at all have probably read Three Worlds Collide, I think this episode should be referred to as The Confessar Vanishes, but my humor may be idiosyncratic even for this crowd.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-01-27T20:35:07.967Z · LW(p) · GW(p)

The primary issue with the Roko matter wasn't so much what an AI might actually do as that the relevant memes could cause some degree of stress in neurotic individuals. At the time when it occurred there were at least two people in the general SI/LW cluster who were apparently deeply disturbed by the thought. I expect that the sort who would be vulnerable would be the same sort who, if they were religious, would lose sleep over the possibility of going to hell.

Replies from: Humbug
comment by Humbug · 2012-01-28T10:52:20.687Z · LW(p) · GW(p)

The primary issue with the Roko matter wasn't so much what an AI might actually do as that the relevant memes could cause some degree of stress in neurotic individuals.

The original reasons given:

Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)

...and further:

For those who have no idea why I'm using capital letters for something that just sounds like a random crazy idea, and worry that it means I'm as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.

(emphasis mine)

comment by Solvent · 2012-01-21T06:33:10.658Z · LW(p) · GW(p)

I should have known that Rationalwiki would be the place to look for dirt on Eliezer. Thanks for the link.

Wow, that was fascinating reading. I still don't think that we could call it a tantrum of Eliezer's. I mean, I have no doubt he acted like a dick, but he probably at least thought that the Roko guy was being stupid.

Replies from: wedrifid
comment by wedrifid · 2012-01-21T06:59:56.688Z · LW(p) · GW(p)

I still don't think that we could call it a tantrum of Eliezer's.

Whatever you choose to call it, the act of shouting at people and calling them names is the kind of thing that looks bad to me. I think Eliezer would look better if he didn't shout or call people names.

but he probably at least thought that the Roko guy was being stupid.

Of course he did. Lack of sincerity is not the problem here. The belief that the other person is stupid and, more importantly, the belief that if he thinks other people are being stupid it is right and appropriate for him to launch into an abusive hysterical tirade is the arrogance problem in this case.

Replies from: Solvent
comment by Solvent · 2012-01-21T07:05:55.632Z · LW(p) · GW(p)

I think Eliezer would look better if he didn't shout or call people names.

I agree. Eliezer is occasionally a jerk, and it looks like this was one of those times. Also, I have no idea what went on and you do, so any disagreement from me is pretty dubious.

Nitpicking: I don't think that's how we should use the word tantrum. Tantrum makes it sound like someone criticized Eliezer and he got mad at them. (I suppose that might have happened, though...) I dunno. I just dislike your choice of words. I would have phrased it as "Eliezer should put more effort into not occasionally being an arrogant dick."

Replies from: wedrifid
comment by wedrifid · 2012-01-21T07:22:47.161Z · LW(p) · GW(p)

The word 'Tantrum' invokes in my mind a picture of either a child or someone with an overwhelmingly high perception of their status responding to things not going their way by acting out emotionally in violation of usual norms of behavior that apply to everyone else.

I would have phrased it as "Eliezer should put more effort into not occasionally being an arrogant dick."

I did not want to make that point. Acting out when things don't go his way is a distinctly different behavior pattern with different connotations with respect to arrogance. I'm going to stick with tantrum because it just seems to be exactly what I'm trying to convey.

Replies from: Raw_Power
comment by Raw_Power · 2012-01-24T01:39:55.029Z · LW(p) · GW(p)

I think he did the right thing there. He did it badly and clumsily, but had I been in his place I'd have had a hard time getting a grip on my emotions, and we know how sensitive and emotional he is.

Rational Wiki are great guys. We try to watch our own step, but it's nice to have someone else watching us too, who can understand and sympathize with what we do.

comment by Risto_Saarelma · 2012-01-20T06:23:55.522Z · LW(p) · GW(p)

What SIAI could do to help the image problem: Get credible grown-ups on board.

The main team looks to be in their early thirties, and the visiting fellows mostly students in their twenties. With the claims of importance SIAI is making, people go looking for people over forty who are well-established as serious thinkers, AI experts or similarly known-competent folk in a relevant field. There should either be some who are sufficiently sold on the SIAI agenda to be actually on board full-time, or quite a few more in some kind of endorsing partnership role. Currently there's just Ray Kurzweil on the team page, and beyond "Singularity Summit Co-Founder", there's nothing there saying just what his relation to SIAI is, exactly. SIAI doesn't appear to be suitably convincing to have gotten any credible grown-ups as full-time team members.

There are probably good reasons why this isn't useful for what SIAI is actually trying to do, but the demographic of thirty-somethings leading the way and twenty-somethings doing stuff looks way iffier at a glance for "support us in solving the most important philosophical, societal and technological problem humanity has ever faced once and for all!" than it does for "we're doing a revolutionary Web 3.0 SaaS multi mobile OS cloud computing platform!"

comment by brilee · 2012-01-19T15:59:24.236Z · LW(p) · GW(p)

To be honest, I've only ever felt SI/EY/LW's "arrogance" once, and I think that LW in general is pretty damn awesome. (I realize I'm equating LW with SI, but I don't really know what SI does)

The one time is while reading through the Free Will (http://wiki.lesswrong.com/wiki/Free_will) page, which I've copied here: "One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own. "

This smacks strongly of "oh look, there's a classic stumper, and I'm the ONLY ONE who's solved it (naa naa naa). If you want to be a true rationalist/join the tribe, you better solve it on your own, too"

I've also heard others mention that HP from HPMoR is an insufferable little twat, which I assume is the same attitude they would have if they were to read LW.

I've written some of my thoughts up about the arrogance issue here. The short version is that some people have strongly developed identities as "not one of those pretentious people" and have strong immune responses when encountering intelligence. http://moderndescartes.blogspot.com/2011/07/turn-other-cheek.html

Replies from: wedrifid, ArisKatsaris
comment by wedrifid · 2012-01-19T16:20:23.646Z · LW(p) · GW(p)

The one time is while reading through the Free Will (http://wiki.lesswrong.com/wiki/Free_will) page, which I've copied here: "One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own. "

Ewww! That's hideous. It seems to be totally subverting the point of the wiki. I actually just went as far as to log in planning to remove the offending passage until I noticed that Eliezer put it there himself.

I'm actually somewhat embarrassed by that page now that you've brought it to our attention. I rather hope we can remove it and replace it with either just a summary of what free will looks like dissolved or a placeholder with the links to relevant blog posts.

Replies from: thomblake
comment by thomblake · 2012-01-19T20:53:10.932Z · LW(p) · GW(p)

I rather hope we can remove it and replace it with either just a summary of what free will looks like dissolved or a placeholder with the links to relevant blog posts.

The point of that was that dissolving free will is an exercise (a rather easy one once you know what you're doing), and it probably shouldn't be short-circuited.

Replies from: wedrifid
comment by wedrifid · 2012-01-20T03:34:01.048Z · LW(p) · GW(p)

The point of that was that dissolving free will is an exercise (a rather easy one once you know what you're doing), and it probably shouldn't be short-circuited.

My point was that I didn't approve of making that point in that manner in that place.

I refrained from nuking the page myself but I don't have to like it. I support Brilee's observation that going around and doing that sort of thing is bad PR for Eliezer Yudkowsky, which has non-trivial relevance to SingInst's arrogance problem.

Replies from: ScottMessick
comment by ScottMessick · 2012-01-21T23:28:04.042Z · LW(p) · GW(p)

One issue is that the same writing sends different signals to different people. I remember thinking about free will early in life (my parents thought they'd tease me with the age-old philosophical question) and, a little later in life, thinking that I had basically solved it--that people were simply thinking about it the wrong way. People around me often didn't accept my solution, but I was never convinced that they even understood it (not due to stupidity, but failure to adjust their perspective in the right way), so my confidence remained high.

Later I noticed that my solution is a standard kind of "compatibilist" position, which is given equal attention by philosophers as many other positions and sub-positions, fiercely yet politely discussed without the slightest suggestion that it is a solution, or even more valid than other positions except as the one a particular author happens to prefer.

Later I noticed that my solution was also independently reached and exposited by Eliezer Yudkowsky (on Overcoming Bias before LW was created, if I remember correctly). The solution was clearly presented as such--a solution--and one which is easy to find with the right shift in perspective--that is, an answer to a wrong question. I immediately significantly updated the likelihood of the same author having further useful intellectual contributions, to my taste at least, and found the honesty thoroughly refreshing.

comment by ArisKatsaris · 2012-01-20T00:33:35.470Z · LW(p) · GW(p)

I've also heard others mention that HP from HPMoR is an insufferable little twat, which I assume is the same attitude they would have if they were to read LW.

I also think that HJPEV is an insufferable little twat / horrible little jerk, but I love LW and have donated hundreds of dollars to SIAI. And I've strongly recommended HPMOR itself even when I warn people it has something of a jerk for a protagonist. Why shouldn't I? Is anyone disputing that he's much less nice than e.g. Hermione is, and that he often treats other people with horribly bad manners? If he's not insufferable, who is actually suffering him other than Hermione (who has also had to punish him by not speaking to him for a week) or Draco (who found him so insufferable on occasion that he locked him up and Gom-Jabbared him...)?

Replies from: Bugmaster, CronoDAS, None, wedrifid
comment by Bugmaster · 2012-01-21T01:19:08.896Z · LW(p) · GW(p)

I also think that HJPEV is an insufferable little twat / horrible little jerk...

I always assumed that this character detail was intentional, especially since some other characters call HP out on it explicitly.

comment by CronoDAS · 2012-01-21T01:21:40.707Z · LW(p) · GW(p)

Well, Professor Quirrell seems to have taken quite a liking to him, but I don't think he counts...

comment by [deleted] · 2012-01-22T19:21:10.451Z · LW(p) · GW(p)

I also think that HJPEV is an insufferable little twat / horrible little jerk, but I love LW and have donated hundreds of dollars to SIAI.

Got a similar reaction. Well, except the donating dollars part. Though I'm not bothered so much by the way that HJPEV interacts with people as by his unique-snowflake/superhero/God-wannabe complex.

comment by wedrifid · 2012-01-21T04:50:07.398Z · LW(p) · GW(p)

If he's not insufferable, who is actually suffering him other than Hermione (who has also had to punish him by not speaking to him for a week)

And Hermione's tendency to pull this sort of stunt makes her even more insufferable than Harry. While I might choose to tolerate those two as allies and associate with them for the sake of gaining power or saving the world, I'd say Neville is the only actually likable character that Eliezer has managed to include.

Writing about characters that are arrogant prats does seem to come naturally to Eliezer for some reason.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-01-21T14:15:05.539Z · LW(p) · GW(p)

And Hermione's tendency to pull this sort of stunt makes her even more insufferable than Harry.

To you maybe, but Hermione is well-liked by lots of other characters, SPHEW and her army and the professors. "Insufferable know-it-all" is what Ron calls her in canon. In HPMOR she actually is nicer, less dogmatic and has many more friends than in canon. Compare canon SPEW with SPHEW, and how she goes about doing each.

Replies from: wedrifid
comment by wedrifid · 2012-01-22T02:26:22.138Z · LW(p) · GW(p)

To you maybe

Yes.

, but Hermione is well-liked by lots of other characters

It is one thing to write about a character who is an arrogant prat and is perceived as an arrogant prat by the other characters. It is far more telling when obnoxious or poorly considered behavior is portrayed within the story as appropriate or wise and so accepted by all the other characters.

In HPMOR she actually is nicer, less dogmatic and has many more friends than in canon. Compare canon SPEW with SPHEW, and how she goes about doing each.

I'm not a huge fan of either of them, to be honest. Although MoR!Hermione does get points for doing whichever of those two acronyms is the one that involved beating up bullies. Although now I'm having vague memories of her having a tantrum when Harry saved the lives of the girls she put at risk. Yeah, she's a prat. A dangerous prat. Apart from making her controlling and unpleasant to be around, that ego of hers could get people killed! And what makes it worse is that Hermione's idiotic behavior seems to be more implicitly endorsed as appropriate by the author than Harry's idiotic behavior.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-01-22T03:01:55.519Z · LW(p) · GW(p)

Although MoR!Hermione does get points for doing whichever of those two acronyms is the one that involved beating up bullies.

I don't understand you. The rest of the paragraph seems to be arguing that this was irresponsible idiotic behavior on her part; this sentence seems to be saying it's a point in her favor.

Although now I'm having vague memories about her having a tantrum when Harry saved the lives of the girls she put at risk.

I think you're significantly misremembering what she said -- she explicitly didn't mind Harry saving them, she minded that he scared the bejeezus out of her. Do you belong in that small minority of HPMOR readers who only read each chapter once? :-)

Replies from: wedrifid
comment by wedrifid · 2012-01-22T06:26:26.147Z · LW(p) · GW(p)

I don't understand you. The rest of the paragraph seems to be arguing that this was irresponsible idiotic behavior on her part; this sentence seems to be saying it's a point in her favor.

I approve of fighting bullying. I don't approve of initiating conflict when Harry saves their lives by pulling a Harry. Because his actions in that situation aren't really any of her business. Harry's actions in that scene are in accordance with Harry's Harriness and he would have done them without her involvement. They aren't about her (making this situation different in nature from the earlier incident of pretending to be a ghost to stop a gossip).

I think you're significantly misremembering what she said -- she explicitly didn't mind Harry saving them, she minded that he scared the bejeezus out of her.

Citation needed. Actually for realz, not as the typical 'nerd comeback'. I want to know what chapter to start reading to review the incident. Both because that is one of the most awesome things Harry has done and because I do actually recall Hermione engaging in behavior in the aftermath of the incident that makes me think less of her.

Most significantly she makes Harry give an oath that makes me think less of Harry (and MoR) for submitting to. Because he made a promise the adherence to which could make him lose the fight for the universe! I've actually had a discussion with Eliezer on the subject and was somewhat relieved when he admitted that he wrote in the necessary clauses but omitted them only for stylistic reasons.

Replies from: pengvado
comment by pengvado · 2012-01-23T13:37:12.088Z · LW(p) · GW(p)

Citation needed.

Chapter 75:

"Great!" said Hermione. "So, have you worked out why I was upset, Mr. Potter?"
There was a pause. "You wanted me to keep out of your affairs?" [...]
"No, that part's fine," said Hermione. "We were in over our heads, honestly. Please guess again, Mr. Potter."

comment by Bugmaster · 2012-01-21T00:35:00.192Z · LW(p) · GW(p)

What are the most egregious examples of SI's arrogance?

Well, you do tend to talk about "saving the world" a lot. That makes it sound like you, Eliezer Yudkowsky, plus a few other people are the new Justice League. That sounds at least a little arrogant...

comment by TheOtherDave · 2012-01-19T21:24:42.269Z · LW(p) · GW(p)

If it helps at all, another data point (not quite answers to your questions):

  • I'm a complete SI outsider. My exposure to it is entirely indirectly through Less Wrong, which from time to time seems to function as a PR/fundraising/visibility tool for SI.
  • I have no particular opinion about SI's arrogance or non-arrogance as an organization, or EY's arrogance or non-arrogance as an individual. They certainly don't demonstrate humility, nor do they claim to, but there's a wide middle ground between the two.
  • I doubt I would be noticeably more likely to donate money, or to encourage others to donate money, if SI convinced me that it was now 50% less arrogant than it was in 2011.
  • One thing that significantly lowers my likelihood of donating to SI is my estimate that the expected value of SI's work is negligible, and that the increase/decrease in that EV based on my donations is even more so. It's not clear what SI can really do to increase my EV-of-donating, though.
  • Similar to the comment you quote, someone's boasts:accomplishments ratio is directly proportional to my estimate that they are crackpots. OTOH, I find it likely that without the boasting and related monkey dynamics, SI would not receive the funding it has today, so it's not clear that adopting a less boastful stance is actually a good idea from SI's perspective. (I'm taking as given that SI wants to continue to exist and to increase its funding.)
  • Just to be clear what I mean by "boasts," here... throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral. I don't think that much is at all controversial, but if you really want specific instances I might be motivated to go back through and find some. (Probably not, though.)
Replies from: Vaniver, wedrifid
comment by Vaniver · 2012-01-20T00:44:13.228Z · LW(p) · GW(p)

EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral.

I am not impressed by those sorts of ploys.

comment by wedrifid · 2012-01-20T13:59:14.600Z · LW(p) · GW(p)

throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral.

I cannot think of one example of a claim along those lines.

Replies from: XiXiDu, TheOtherDave
comment by XiXiDu · 2012-01-20T14:26:18.247Z · LW(p) · GW(p)

...throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways...

I cannot think of one example of a claim along those lines.

The closest I can think of right now is the following quote from Eliezer's January 2010 video Q&A:

So if I got hit by a meteor right now, what would happen is that Michael Vassar would take over responsibility for seeing the planet through to safety, and say ‘Yeah I’m personally just going to get this done, not going to rely on anyone else to do it for me, this is my problem, I have to handle it.’ And Marcello Herreshoff would be the one who would be tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don’t know of any other person who could do that, or I’d be working with them. There’s not really much of a motive in a project like this one to have the project split into pieces; whoever can do work on it is likely to work on it together.

ETA

Skimming over the CEV document I see some hints that could explain where the idea comes from that Eliezer believes that he has the wisdom to transform the world:

This seems obvious, until you realize that only the Singularity Institute has even tried to address this issue. [...] Once I acknowledged the problem existed, I didn't waste time planning the New World Order.

Replies from: wedrifid, Craig_Heldreth
comment by wedrifid · 2012-01-20T14:43:38.959Z · LW(p) · GW(p)

...throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways...

I cannot think of one example of a claim along those lines.

The closest I can think of right now is the following quote from Eliezer's January 2010 video Q&A:

You quoted the context of my statement but edited out the part my reply was based on. Don't do that.

and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral.

The very quote of Eliezer that you supply in the parent demonstrates that Eliezer presents himself as actually trying to do those "impossible" transformations, not refraining from doing them for moral reasons. That part just comes totally out of left field, and since it is presented as a conjunction, the whole thing just ends up false.

Replies from: TheOtherDave, XiXiDu
comment by TheOtherDave · 2012-01-20T16:13:43.638Z · LW(p) · GW(p)

Thanks for clarifying what part of my statement you were objecting to.

Mostly what I was thinking of on that side was the idea that actually building a powerful AI, or even taking tangible steps that make the problem of building a powerful AI easier, would result in the destruction of the world (or, at best, the creation of various "failed utopias"), and therefore the moral thing to do (which most AI researchers, to say nothing of lesser mortals, aren't wise enough to realize is absolutely critical) is to hold off on that stuff and instead work on moral philosophy and decision theory.

I recall a long wave of exchanges of the form "Show us some code!" "You know, I could show you code... it's not that hard a problem, really, for one with the proper level of vampiric aura, once the one understands the powerful simplicity of the Bayes-structure of the entire universe and finds something to protect important enough to motivate the one to shut up and do the impossible. But it would be immoral for me to write AI code right now, because we haven't made enough progress in philosophy and decision theory to do it safely."

But looking at your clarification, I will admit I got sloppy in my formulation, given that that's only one example (albeit a pervasive one). What I should have said was "throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in "impossible" ways, one obvious tangible expression of which (that is, actual AI design) he holds back from creating only because he possesses the unusual wisdom to realize that doing so is immoral."

Replies from: wedrifid
comment by wedrifid · 2012-01-20T16:46:06.609Z · LW(p) · GW(p)

"You know, I could show you code... it's not that hard a problem, really,

I'd actually be very surprised if Eliezer had ever said that - since it is plainly wrong and as far as I know Eliezer isn't quite that insane. I can imagine him saying that it is (probably) an order of magnitude easier than making the coded AI friendly, but that still just places it at the easier end of a scale of 'impossible'. Eliezer says many things that qualify for the label arrogant but I doubt this is one of them.

If Eliezer thought AI wasn't a hard problem he wouldn't be comfortable dismissing (particular instances of) AI researchers who don't care about friendliness as "Mostly Harmless"!

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-20T17:31:11.154Z · LW(p) · GW(p)

What I wrote was "it's not that hard a problem, really, for one with (list of qualifications most people don't have)," which is importantly different from what you quote.

Incidentally, I didn't claim it was arrogant. I claimed it was a boast, and I brought boasts up in the context of judging whether someone is a crackpot. I explicitly said, and I repeat here, that I don't really have an opinion about EY's supposed arrogance. Neither do I think it especially important.

Replies from: wedrifid
comment by wedrifid · 2012-01-20T17:36:35.917Z · LW(p) · GW(p)

What I wrote was "it's not that hard a problem, really, for one with (list of qualifications most people don't have)," which is importantly different from what you quote.

I extend my denial to the full list. I do not believe Eliezer has made the claim that you allege he has made, even with the list of qualifications. It would be a plainly wrong claim and I believe you have made a mistake in your recollection.

The flip side is that if Eliezer has actually claimed that it isn't a hard problem (with the list of qualifications) then I assert that said claim significantly undermines Eliezer's credibility in my eyes.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-20T18:09:30.136Z · LW(p) · GW(p)

OK, cool.

Do you also still maintain that if he thought it wasn't a hard problem for people with the right qualifications, he wouldn't be comfortable dismissing particular instances of AI researchers as mostly harmless?

Replies from: wedrifid
comment by wedrifid · 2012-01-21T02:00:49.695Z · LW(p) · GW(p)

Do you also still maintain that if he thought it wasn't a hard problem for people with the right qualifications, he wouldn't be comfortable dismissing particular instances of AI researchers as mostly harmless?

Yes. And again, if Eliezer did consider the problem easy (with qualifications) but still dismissed the aforementioned folks as mostly harmless, it would constitute dramatically enhanced boastful arrogance!

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-21T03:22:56.880Z · LW(p) · GW(p)

OK, that's clear. I don't know if I'll bother to do the research to confirm one way or the other, but in either case your confidence that I'm misremembering has reduced my confidence in my recollection.

comment by XiXiDu · 2012-01-20T18:27:58.561Z · LW(p) · GW(p)

You quoted the context of my statement but edited out the part my reply was based on. Don't do that.

My apologies, it wasn't my intention to do that. Careless oversight.

comment by Craig_Heldreth · 2012-01-20T18:12:49.075Z · LW(p) · GW(p)

Yeah I remember that and it was certainly a megalomaniacal slip.

But I do not agree that arrogant is the correct term. I suspect "arrogant" may be a brief and inaccurate substitute for: "unappealing, but I cannot be bothered to come up with anything specific". In my dictionaries (I checked Merriam-Webster and American Heritage), arrogant is necessarily overbearing. If you are clicking on their website or reading their literature or attending their public function there isn't any easy way for them to overbear upon you.

When Terrell Owens does a touchdown dance in the endzone and the cameras are on him for fifteen seconds until the next play your attention is under his thumb and he is being arrogant. Eliezer's little slip of on-webcam megalomania is not arrogant. It would be arrogant if he was running for public office and he said that in a debate and the voters felt they had to watch it, but not when the viewer has surfed to that information and getting away is free of any cost and as easy as a click.

Almost all of us do megalomaniacal stuff all the time when nobody is looking and almost all of us expend some deliberate effort trying to not do it when people are looking.

comment by TheOtherDave · 2012-01-20T14:08:53.041Z · LW(p) · GW(p)

OK; I stand corrected about the controversiality.

comment by moridinamael · 2012-01-19T01:14:54.067Z · LW(p) · GW(p)

There are two obvious options:

The first, boring option is to make fewer bold claims. I personally would not prefer that you take this tack. It would be akin to shooting yourselves in the foot. If all of your claims vis-a-vis saving the world are couched in extremely humble signaling packages, no one will want to ever give you any money.

The second, much better option is to start doing amazing, high-visibility things worthy of that arrogance. Muflax points out that you don't have a Tim Ferriss. Tim Ferriss is an interesting case specifically because he is a huge self-promoter who people actually like despite the fact that he makes his living largely by boasting entertainingly. The reason Tim Ferriss can do this is because he delivers. He has accomplished the things he is making claims about - or at least he convinces you that he is qualified to talk about it.

I really want a Rationality Tim Ferriss who I can use as a model for my own development. You could nominate yourself or Eliezer for this role, but if you did so, you would have to sell that role.

Replies from: lukeprog
comment by lukeprog · 2012-01-19T01:40:36.791Z · LW(p) · GW(p)

I like the second option better, too.

I'm certainly going to try to be a Rationality Tim Ferriss, but I have a ways to go.

Eliezer is still hampered by the cognitive exhaustion problem that he described way back in 2000. He's tried dozens of things and still tries new diets, sleeping patterns, etc., but we haven't kicked it yet. That said, he's pretty damn productive each day before cognitive exhaustion sets in.

Replies from: Caspian, WrongBot, jswan, Solvent, Kaj_Sotala, NancyLebovitz
comment by Caspian · 2012-01-19T10:02:49.046Z · LW(p) · GW(p)

I had the impression of Tim Ferriss as being no more trustworthy than anyone else who was trying to sell you something. I would expect him to exaggerate how easy something is, exaggerate how likely something is to help, etc. Now, not having read his stuff, that's secondhand and not well informed, but you are asking about how you come across, so it's relevant. The doing-amazing-things part is great if you can manage it.

Replies from: NihilCredo
comment by NihilCredo · 2012-01-20T21:46:18.030Z · LW(p) · GW(p)

I have read about half of his book and skimmed the rest, and I pretty much share that impression. To put it succinctly, that man works a 4-hour workweek only if you adopt a very restrictive definition of what counts as "work".

comment by WrongBot · 2012-01-19T08:46:51.933Z · LW(p) · GW(p)

For what it's worth, that sounds virtually identical to a problem psychologists have told me is ADHD. (I also had a catastrophic school attendance failure in seventh grade, funnily enough.) Adderall has unpleasant side-effects but actually allows me to sit down and work for eight or ten consecutive hours, whenever I want to. Not perfectly, but the effect is remarkable.

Replies from: CronoDAS, thomblake
comment by CronoDAS · 2012-01-21T00:33:52.407Z · LW(p) · GW(p)

I think prescription antidepressants also tend to have a similar energy-boosting effect.

comment by thomblake · 2012-01-19T20:37:32.173Z · LW(p) · GW(p)

I've observed the same problem and solution as well.

comment by jswan · 2012-01-21T02:49:18.194Z · LW(p) · GW(p)

I'm certainly going to try to be a Rationality Tim Ferriss, but I have a ways to go.

Please no. Here's an example. When you say stuff like:

"As an autodidact who now consumes whole fields of knowledge in mere weeks, I've developed efficient habits that allow me to research topics quickly."

http://lesswrong.com/lw/5me/scholarship_how_to_do_it_efficiently/

You sound like Tim Ferriss and you make me want to ignore you in the same way I ignore him. I don't want to do this because you seem like a good person with a genuine ability to help others. Don't lose that.

Replies from: wedrifid
comment by wedrifid · 2012-01-21T03:39:21.925Z · LW(p) · GW(p)

When you say stuff like:

"As an autodidact who now consumes whole fields of knowledge in mere weeks, I've developed efficient habits that allow me to research topics quickly."

http://lesswrong.com/lw/5me/scholarship_how_to_do_it_efficiently/

You sound like Tim Ferriss and you make me want to ignore you in the same way I ignore him.

It sounds like you place high importance on public image. In particular, on maintaining a public image that is self-effacing or humble. I wonder whether, overall, it is more effective for Luke to convey confidence and be up front about his achievements and capabilities, and so gain influence with a wide range of people, or whether it is best to optimize his image for that group of people who place high importance on humble decorum.

I don't want to do this because you seem like a good person with a genuine ability to help others. Don't lose that.

Tim Ferriss is a good person (as far as people go), and he has been able to positively influence far more people by mastering self-promotion than he ever could have if he had restrained himself. Is this about "being a good person and helping others" or about keeping your approval? The two seem to be conflated here.

Fortunately for you, when Luke says "try to be a Rationality Tim Ferriss" he does not mean anything at all along the lines of "talk like Tim Ferriss". He is talking about being as productive, efficient and resourceful as Tim Ferriss. He's talking about Tim's strong capability for instrumental rationality, not his even stronger capability for self-promotion.

(Incidentally, I don't think Tim would make the kind of boast that Luke made there, simply because it is an awkward and poorly implemented boast. Tim boasts by giving a specific example of the awesome thing he has done rather than just making abstract assertions. At least give Tim the credit of knowing how to implement arrogance and boasting somewhat effectively!)

Replies from: jswan
comment by jswan · 2012-01-21T04:55:28.027Z · LW(p) · GW(p)

Yeah, I think you pretty much called it. It doesn't really work for me, but I guess that if such a communication style is the most effective way to go, drive on.

comment by Solvent · 2012-01-19T11:49:26.494Z · LW(p) · GW(p)

That was fascinating to read. Eliezer certainly has toned down the arrogance a bit recently.

I'm certainly going to try to be a Rationality Tim Ferriss, but I have a ways to go.

I look forward to watching this.

comment by Kaj_Sotala · 2012-01-19T10:42:54.349Z · LW(p) · GW(p)

Wow, that link is really interesting. Especially this bit:

I was, once again, pondering the question of why I didn't have any mental energy, and I tried thinking about the occasions when I did find mental energy. It occurred to me that when I started a new project, my energy level went up briefly before crashing. Maybe, I thought, energy was produced by new ideas. And that's when the light went on. "Maybe both the genius and the energy deficit were produced by overloading a single force, the force that resists thoughts moving repeatedly in the same channel." (24). And then I thought: "Maybe that's why my genius isn't an evolutionary advantage."

I don't know if that hypothesis is true, but if it is, I probably have a mild version of it. It would explain a lot about my akrasia issues.

comment by NancyLebovitz · 2012-01-19T07:50:47.173Z · LW(p) · GW(p)

Has he tried anything related to breaking movement/tension habits?

comment by RobertLumley · 2012-01-18T22:39:20.723Z · LW(p) · GW(p)

I unfortunately don't have much to offer that can actually be helpful. I (and I feel like this probably applies to many LWers) am not at all turned off by arrogance, and actually find it somewhat refreshing. But this reminds me of something that a friend of mine said after I got her to read HPMOR:

"after finishing chapter 5 of hpmor I have deduced that harry is a complete smarmy shit that I want to punch in the face. no kid is that disrespectful. also he reminds me of a young voldemort....please don't tell me he actually tries taking over the world/embezzling funds/whatever"

ETA: she goes on in another comment (On Facebook), after I told her to give it to chapter 10, like EY suggests, "yeah I'm at chapter 17 and still don't really like harry (he seems a bit too much of a projection of the author perhaps? or the fact that he siriusly thinks he's the greatest thing evarrr/is a timelord) but I'm still reading for some reason?"

Seems to be the same general sentiment, to me. Not specifically about SI, but of course tangentially related. For what it's worth, I disagree. Harry's awesome. ;-)

comment by multifoliaterose · 2012-01-19T13:50:47.331Z · LW(p) · GW(p)

(a) My experience with the sociology of academia has been very much in line with what Lukeprog's friend, Shminux, and RolfAndreassen describe. This is the culture that I was coming from in writing my post titled Existential Risk and Public Relations. Retrospectively I realize that the modesty norm is unusually strong in academia, and to that extent I was off-base in my criticism.

The modesty norms have some advantages and disadvantages. I think that it's appropriate for even the best people to take the view "I'm part of a vast undertaking; if I hadn't gotten there first it's not unlikely that someone else would have gotten there within a few decades." However, I'm bothered by the fact that the norm is so strong that innocuous questions/comments which are quite weak signals of immodesty are frowned upon.

(b) I agree with cousin it that it would be good for SIAI staff to "communicate more carefully, like Holden Karnofsky or Carl Shulman."

Replies from: XiXiDu
comment by XiXiDu · 2012-01-19T17:17:37.948Z · LW(p) · GW(p)

I agree with cousin it that it would be good for SIAI staff to "communicate more carefully, like Holden Karnofsky or Carl Shulman."

I agree with this. I probably would never have voiced any skepticism/criticism if most SI/LW folks were more like Holden Karnofsky, Carl Shulman, Nick Bostrom or cousin_it.

comment by lsparrish · 2012-01-20T02:42:31.845Z · LW(p) · GW(p)

I'm pretty sure most everyone here already knows this, but the perception of arrogance is basically a signalling/counter-signalling problem. If you boast (produce expensive signals of your own fitness), that tells people you are not so badly off that you have nothing to boast about. But it can also signal that you have a need to brag to be noticed, which in turn can be interpreted to mean you aren't truly the best of the best. The basic question is context.

Is there a serious danger your potential contributions will be missed? If so, it is wisest to boast. Is there already an arms race of other boasts to compete with? Is boasting so cheap nobody will pay it any attention? In that case, the best strategy is to stun people with unexpected modesty. You can also save resources that way, as long as nobody interprets that as a need to save resources.

Pulling off the modesty trick can turn out to be harder than an effective boast, which is of course related to why it works. People have to receive the information that you are competent somehow -- a subtle nudge of some kind, preexisting reputation, etc. It also comes to a point of saturation, just like loud/direct boasting does; it's just harder to notice when it does.

So when someone unexpectedly acts arrogant in a niche where modesty has become commonplace, my theory is that it can actually act as a counter-counter-signal. To pull it off they would have to somehow distinguish their arrogance from that of a low-status blowhard who is only making noise because otherwise they wouldn't be noticed.

Logically extrapolating this, we might then get the more seemingly modest counter-counter-counter signaler, who is able to signal (through a supremely sophisticated mechanism) that they don't need to signal arrogance and separate themselves from modest folk who are so pretentious as to signal their modesty by keeping quiet in order to prevent themselves from being confused with blowhards who signal expensively. However, for counter(3)-signaling to be an advantage there would first need to be a significant population of counter(2)-signalers to compete against. I'm guessing this probably just sort of slides into different kinds of signal/counter-signal forms rather than going infinitely meta.
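
The trade-off described in this comment can be made concrete with a small Bayesian toy model. The sketch below is an editorial illustration rather than anything the comment specifies: the three sender types, the uniform prior, the "reputation cue" probabilities, and the fixed behavior of the other types are all invented assumptions. It asks how a genuinely competent sender comes across to a Bayesian observer when boasting pools them with blowhards and modesty pools them with the genuinely unimpressive.

```python
# A small toy model of the signaling/counter-signaling trade-off described
# above. This is an illustrative sketch, not part of the original comment:
# the three sender types, the uniform prior, and the cue probabilities are
# all invented assumptions.

TYPES = ["low", "mid", "high"]
PRIOR = {t: 1.0 / 3 for t in TYPES}

def posterior_high(strategy, action, cue_impressive, cue_prob):
    """Observer's P(type = high | action, cue), assuming the strategy profile."""
    weights = {}
    for t in TYPES:
        p_action = 1.0 if strategy[t] == action else 0.0
        p_cue = cue_prob[t] if cue_impressive else 1.0 - cue_prob[t]
        weights[t] = PRIOR[t] * p_action * p_cue
    total = sum(weights.values())
    return weights["high"] / total if total > 0 else 0.0

def perceived_status(strategy, cue_prob):
    """Expected observer belief in 'high', from the high type's point of view."""
    action = strategy["high"]
    p_imp = cue_prob["high"]
    return (p_imp * posterior_high(strategy, action, True, cue_prob)
            + (1.0 - p_imp) * posterior_high(strategy, action, False, cue_prob))

# The unimpressive types stay quiet and the blowhards boast; the question is
# what a genuinely high type should do, given how informative the observer's
# independent "reputation cue" is about type.
informative_cue = {"low": 0.1, "mid": 0.5, "high": 0.9}
useless_cue = {"low": 0.5, "mid": 0.5, "high": 0.5}

for cue_name, cue_prob in [("informative cue", informative_cue),
                           ("useless cue", useless_cue)]:
    for choice in ["boast", "modest"]:
        strategy = {"low": "modest", "mid": "boast", "high": choice}
        print(f"{cue_name}, high type plays {choice}: "
              f"perceived status = {perceived_status(strategy, cue_prob):.2f}")
```

Under these assumed numbers the modest high type is perceived far more favorably than the boasting one (about 0.82 versus 0.60) when the observer has an informative side channel, and the advantage vanishes entirely (0.50 versus 0.50) when the cue is useless, which matches the point above that the modesty trick only works if information about competence arrives some other way.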

comment by Wei Dai (Wei_Dai) · 2012-02-21T23:43:58.068Z · LW(p) · GW(p)

I intended [...]

But some people seem to have read it and heard this instead [...]

When I write posts, I'd often be tempted to use examples from my own life, but then I'd think:

  1. Do I really just intend to use myself to illustrate some point of rationality, or do I subconsciously also want to raise my social status by pointing out my accomplishments?
  2. Regardless of what I "really intend", others will probably see those examples as boasting, and there's no excuse (e.g., that I couldn't find any better examples) I can make to prevent that.

This usually stops me from using myself as examples, sometimes with the result that the post stays unwritten or unpublished. I'm not saying that you should do the same since you have different costs and benefits to consider (or I could well be wrong myself and shouldn't care so much about not being seen as boasting), but the fact that people interpret your posts filled with personal examples/accomplishments as being arrogant shouldn't have come as a surprise.

Another point I haven't seen brought up yet is that social conventions seem to allow organizations to be more boastful than individuals. You'd often see press releases or annual reports talking up an organization's own accomplishments, while an individual doing the same thing would be considered arrogant. So an idea to consider is that when you want to boast of some accomplishment, link it to the Institute and not to an individual.

Replies from: Bongo
comment by Bongo · 2012-02-22T20:15:43.345Z · LW(p) · GW(p)

This usually stops me from using myself as examples, sometimes with the result that the post stays unwritten or unpublished.

You could just tell the story with "me" replaced by "my friend" or "someone I know" or "Bob". I'd hate to miss a W_D post because of a trivial thing like this.

comment by Aleksei_Riikonen · 2012-01-19T12:51:56.478Z · LW(p) · GW(p)

So, I have a few questions:

  1. What are the most egregious examples of SI's arrogance?

Since you explicitly ask a question phrased thus, I feel obligated to mention that last April I witnessed a certain email incident that I thought was, in some ways, extremely bad.

I do believe that lessons have been learned since then, though. Probably there's no need to bring the matter up again, and I only mention it since according to my ethics it's the required thing to do when asked such an explicit question as above.

(Some readers may wonder why I'm not providing details here. That's because after some thought, I for my part decided against making the incident public, since I expect it might subsequently get misrepresented to look worse than what's fair. (There might be value in showing records of the incident to new SIAI employees as an example of how not to do things, though.))

Replies from: Aleksei_Riikonen
comment by Aleksei_Riikonen · 2012-01-20T13:04:24.815Z · LW(p) · GW(p)

Curse me for presenting myself as someone having interesting secret knowledge. Now I get several PMs asking for details.

In short, this "incident" was about one or two SIAI folks making a couple of obvious errors of judgment and, in the case of the error that sparked the whole thing, getting heatedly defensive about it for a moment. Other SIAI folks, however, recognized the obvious mistakes as such, so the issue was resolved, even though unprofessional conduct was observed for a moment.

The actual mistakes were rather minor, nothing dramatic. The surprising thing was that heated defensiveness took place on the way to those mistakes getting corrected.

(And since Eliezer is the SIAI guy most often accused of arrogance, I'll additionally state that, here, that was not the case. Eliezer was very professional in the email exchange in question.)

comment by thomblake · 2012-01-20T19:25:50.921Z · LW(p) · GW(p)

A lot of people are suggesting something like "SIAI should publish more papers", but I'm not sure anyone (including those who are making the suggestion) would actually change their behavior based on that. It sounds an awful lot like "SIAI should hire a PhD".

Replies from: Kaj_Sotala, antigonus, None, TheOtherDave
comment by Kaj_Sotala · 2012-01-20T21:44:17.896Z · LW(p) · GW(p)

I've been a donor for a long time, but every now and then I've wondered whether I should be - and the fact that they don't publish more has been one of the main reasons why I've felt those doubts.

I do expect the paper thing to actually be the true rejection of a lot of people. I mean, demanding some outputs is one of the most basic expectations you could have.

Replies from: CronoDAS
comment by CronoDAS · 2012-01-21T01:27:45.829Z · LW(p) · GW(p)

I consider "donating to SIAI" to be on the same level as "donating to webcomics" - I pay Eliezer for the entertainment value of his writing, in the same spirit as when I bought G.E.B. and thereby paid Douglas Hofstadter for the entertainment value of his writing.

comment by antigonus · 2012-01-22T08:07:21.027Z · LW(p) · GW(p)

Of course it depends on the specific papers and the nature of the publications. "Publish more papers" seems like shorthand for "Demonstrate that you are capable of rigorously defending your novel/controversial ideas well enough that very many experts outside of the transhumanism movement will take them seriously." It seems to me that doing this would change a lot of people's behavior.

comment by [deleted] · 2012-01-20T23:13:58.280Z · LW(p) · GW(p)

How would someone convince you that it was their true rejection?

Replies from: faul_sname
comment by faul_sname · 2012-01-21T22:17:05.347Z · LW(p) · GW(p)

Donate to groups that actually demonstrate results.

Replies from: None
comment by [deleted] · 2012-01-21T23:19:52.669Z · LW(p) · GW(p)

Like who? I don't know any other non-profit working on FAI.

Replies from: faul_sname
comment by faul_sname · 2012-01-21T23:41:55.582Z · LW(p) · GW(p)

If you limit your choice of charity to one working on FAI, I am not aware of any others. However, for a group that has demonstrated results in their domain: Schistosomiasis Control Initiative.

Replies from: None
comment by [deleted] · 2012-01-21T23:52:18.627Z · LW(p) · GW(p)

I don't see why donating to SCI would convince people with thomblake's skepticism.

Replies from: faul_sname
comment by faul_sname · 2012-01-21T23:56:36.666Z · LW(p) · GW(p)

It would convince them that at least some people donate to organizations with visible outputs (like SCI). (Disclaimer: the lack of publications actually is not my true rejection of donating to SIAI, which has more to do with the lack of evidence that SIAI's cause is not only important, but urgent.)

Replies from: None
comment by [deleted] · 2012-01-22T00:01:29.937Z · LW(p) · GW(p)

Many people already do that through GiveWell, and yet he appears unconvinced.

comment by TheOtherDave · 2012-01-20T22:02:55.806Z · LW(p) · GW(p)

Agreed. Then again, the OP didn't actually pose the question "What would change your behavior?" (Which I assume translates to "What would cause you to donate more to SI and encourage others to do so?")

comment by tetsuo55 · 2012-01-19T20:10:44.671Z · LW(p) · GW(p)

People tell me SI is arrogant, but I don't see it myself. When you tell someone something and open it up to falsification and criticism, I no longer see it as arrogance (but apparently I am wrong there for some reason).

In any case, what annoys me about the claims made is that they're mostly based on anecdotal evidence and very little has come from research. Also, as a regular guy and not a scientist or engineer, I've noticed a distinct lack of any discussion of SI's viewpoints in the news.

I don't see anyone actively trying to falsify any of the claims in the sequences, for example, and I think it's because you cannot really take them all that seriously.

A second problem is that there are many typos, little mistakes and (due to new experimental evidence) wrong things in the sequences, and they never get updated. I'd rather see the sequences as part of a continually updated wiki-like lesson plan, where feedback is reviewed by a kind of board that changes the texts accordingly.

The nitpicks mentioned on rationalwiki also contribute to the feeling of cultishness and arrogance:

http://rationalwiki.org/wiki/LessWrong
The part about quantum mechanics could use some extra posts, especially since EY does explain why he makes the claim when you take the whole of the sequences into account. He uses evidence from unrelated fields to prove many worlds.

EDIT: For some unknown reason people are downvoting my comment. If you downvote(d) this post or can see why others have, please tell me why so I can learn and improve future posts. Private messages are OK if you don't want to do it through a response here.

Replies from: prase
comment by prase · 2012-01-20T14:36:28.526Z · LW(p) · GW(p)

there are many typo's

Murphy's law: a sentence criticising typos will contain a typo itself.

Replies from: tetsuo55
comment by tetsuo55 · 2012-01-20T16:35:43.242Z · LW(p) · GW(p)

Thanks. Google Docs is not flagging any typos; could you point some out for me?

Replies from: arundelo
comment by arundelo · 2012-01-20T16:47:11.847Z · LW(p) · GW(p)

Apostrophes are not used to form plurals. (Some style guides give some exceptions, but this is not one of them.) The plural of "typo" is "typos". "Typo's" is a word, but it's the possessive form of "typo" (so it's not the word you want here).

(Ninja edit: better link.)

Replies from: tetsuo55, prase
comment by tetsuo55 · 2012-01-20T18:50:43.841Z · LW(p) · GW(p)

Thanks, that helped. Too bad the spellchecker missed it.

comment by prase · 2012-01-20T16:58:44.715Z · LW(p) · GW(p)

In what circumstances do we use 's to form a plural? The link doesn't appear to suggest any.

Replies from: arundelo
comment by arundelo · 2012-01-20T17:22:35.304Z · LW(p) · GW(p)

Rule 11:

Exception:
Use apostrophes with capital letters [sic -- the first example uses a lowercase letter] and numbers when the meaning would be unclear otherwise.

Examples:
Please dot your i's.
You don't mean is.

If you were looking at the link I posted before editing my comment, search for "tired" and "DO use the apostrophe to form the plural".

My 1992 Little, Brown Handbook says:

Use an apostrophe plus -s to form the plurals of letters, numbers, and words named as words.

That sentence has too many but's.

Remember to dot your i's and cross your t's, or your readers may not be able to distinguish them from e's and l's.

At the end of each chapter the author had mysteriously written two 3's and two &'s.

[...]

Exception: References to the years in a decade are not underlined [italicized] and often omit the apostrophe. Thus either 1960's or 1960s is acceptable as long as usage is consistent.

Replies from: wedrifid, prase
comment by wedrifid · 2012-01-21T04:42:30.000Z · LW(p) · GW(p)

Examples: Please dot your i's. You don't mean is.

Correct or not, the style guide is lame. A clearly superior way to prevent the ambiguity (which has an unfortunately clear default reading) is to use single quotes on both sides of the 'i'. So 'i's, not i's.

comment by prase · 2012-01-20T22:32:53.123Z · LW(p) · GW(p)

I've missed that, thanks.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-19T14:45:33.823Z · LW(p) · GW(p)

Find someplace I call myself a mathematical genius, anywhere.

(I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern. I don't know what this alarm feels like, so it's hard to guess what sets it off.)

Replies from: XiXiDu, Matt_Simpson, sixes_and_sevens, wedrifid, jeremysalwen
comment by XiXiDu · 2012-01-19T17:24:10.797Z · LW(p) · GW(p)

I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern.

Some quotes by you that might highlight why some people think you/SI is arrogant:

I tried - once - going to an interesting-sounding mainstream AI conference that happened to be in my area. I met ordinary research scholars and looked at their posterboards and read some of their papers. I watched their presentations and talked to them at lunch. And they were way below the level of the big names. I mean, they weren't visibly incompetent, they had their various research interests and I'm sure they were doing passable work on them. And I gave up and left before the conference was over, because I kept thinking "What am I even doing here?" (Competent Elites)

More:

I don't mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, that can learn from each other's partial successes and accumulate hacks as a community. (Above-Average AI Scientists)

Even more:

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified. (So You Want To Be A Seed AI Programmer)

And:

If you haven't read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't. (Eliezer_Yudkowsky August 2010 03:57:30PM)

Replies from: lukeprog, None, amcknight, wedrifid
comment by lukeprog · 2012-01-19T20:19:50.987Z · LW(p) · GW(p)

I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?

Replies from: XiXiDu, Bugmaster, erratio
comment by XiXiDu · 2012-01-20T10:46:56.039Z · LW(p) · GW(p)

I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?

I am the wrong person to ask if "a doctorate in AI would be negatively useful". I guess it is technically useful. And I am pretty sure that it is wrong to say that others are "not remotely close to the rationality standards of Less Wrong". That's of course the case for most humans, but I think that there are quite a few people out there who are at least at the same level. I further think that it is quite funny to criticize people on whose work your arguments for risks from AI depend.

But that's beside the point. Whether or not they are true, those statements are clearly bad public relations.

If you want to win in this world, as a human being, you either have to be smart enough to overpower everyone else, or you have to get involved in a fair amount of social engineering and signaling games and refine your public relations.

Are you able to solve friendly AI, without much more money and without hiring top-notch mathematicians, and then solve general intelligence to implement it and take over the world? If not, then you will at some point either need much more money or need to convince actual academics to work for you for free. And, most importantly, if you don't think that you will be the first to invent AGI, then you need to talk to a lot of academics, companies and probably politicians to convince them that there is a real risk and that they need to implement your friendly AI theorem.

It is of utmost importance to have an academic degree and reputation to make people listen to you, because at some point it won't be enough to say, "I am a research fellow of the Singularity Institute who wrote a lot about rationality and cognitive biases and you are not remotely close to our rationality standards." At the point that you utter the word "Singularity" you have already lost. The very name of your charity already shows that you underestimate the importance of signaling.

Do you think IBM, Apple or DARPA care about a blog and a popular fanfic? Do you think that you can even talk to DARPA without first getting involved in some amount of politics, making powerful people aware of the risks? And do you think you can talk to them as a "research fellow of the Singularity Institute"? If you are lucky then they might ask someone from their staff about you. And if you are really lucky then they will say that you are for the most part well-meaning and thoughtful individuals who never quite grew out of their science-fiction addiction as adolescents (I didn't write that line myself; it's actually from an email conversation with a top-notch person who didn't give me permission to publish it). In any case, you won't make them listen to you, let alone do what you want.

Compare the following:

Eliezer Yudkowsky, research fellow of the Singularity Institute.

Education: -

Professional Experience: -

Awards and Honors: A lot of karma on lesswrong and many people like his Harry Potter fanfiction.

vs.

Eliezer Yudkowsky, chief of research at the Institute for AI Ethics.

Education: He holds three degrees from the Massachusetts Institute of Technology: a Ph.D. in mathematics, a BS in electrical engineering and computer science, and an MS in physics and computer science.

Professional Experience: He worked on various projects with renowned people making genuine insights. He is the author of numerous studies and papers.

Awards and Honors: He holds various awards and is listed in the Who's Who in computer science.

Who are people going to listen to? Well, okay... the first Eliezer might receive a lot of karma on lesswrong; the other doesn't have enough time for that.

Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing. Many won't even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens die by pacifism, others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.

Think about it. Imagine how easy it would have been for me to cause serious damage to SI and the idea of risks from AI by writing different kinds of emails.

Why does that rational wiki entry about lesswrong exist? You are just lucky that they are the only people who really care about lesswrong/SI. What do you think will happen if you continue to act like you do and real experts feel uncomfortable with, or even threatened by, your statements? It just takes one top-notch person who becomes seriously bothered to damage your reputation permanently.

Replies from: Viliam_Bur, Rain, FeepingCreature, wedrifid, None, TrE, lessdazed, None
comment by Viliam_Bur · 2012-01-20T15:17:40.165Z · LW(p) · GW(p)

I mostly agree with the first 3/4 of your post. However...

Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing. Many won't even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens die by pacifism, others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.

You can't make everyone happy. Whatever policy a website has, some people will leave. I have run away from a few websites that have a "no censorship, except in extreme cases" policy, because the typical consequence of such a policy is some users attacking other users (weighing the attack carefully to avoid moderator action) and some users producing huge amounts of noise. And that just wastes my time.

People leaving LW should be considered on a case-by-case basis. They are not all in the same category.

Why does that rational wiki entry about lesswrong exist?

To express opinions of rationalwiki authors about lesswrong, probably. And that opinion seems to be that "belief in many worlds + criticism of science = pseudoscience".

I agree with them that "nonstandard belief + criticism of science = high probability of pseudoscience". Except that: (1) among quantum physicists the belief in many worlds is not completely foreign; (2) the criticism of science seems rational to me, and to be fair, don't forget that scholarship is an officially recognized virtue at LW; (3) the criticism of naive Friendly AI approaches is correct, though I doubt the SI's ability to produce something better (so this part really may be crank), but the rest of LW again seems rational to me.

Now, how rational are the arguments on the talk page of rational wiki? See: "the [HP:MoR link] is to a bunch of crap", "he explicitly wrote [HP:MoR] as propaganda and LessWrong readers are pretty much expected to have read it", "The stuff about 'luminosity' and self-help is definitely highly questionable", "they casually throw physics and chemistry out the window and talk about nanobots as if they can exist", "I have seen lots of examples of 'smart' writing, but have yet to encounter one of 'intelligent' writing", "bunch of scholastic idiots who think they matter somehow", "Esoteric discussions that are hard to understand without knowing a lot about math, decision theory, and most of all the exalted sequences", "Poor writing (in terms of clarity)", "[the word 'emergence'] is treated as disallowed vocabulary", "I wonder how many oracular-looking posts by EY that have become commonplaces were reactions to an AI researcher that had annoyed him that day" etc. To be fair, there are also some positive voices, such as: "Say what you like about the esoteric AI stuff, but that man knows his shit when it comes to cognitive biases and thinking", "I believe we have a wiki here about people who pursue ideas past the point of actual wrongness".

Seems to me like someone has a hammer (a wiki for criticizing pseudoscience) and suddenly everything unusual becomes a nail.

You are just lucky that they are the only people who really care about lesswrong/SI.

Frankly, most people don't care about lesswrong or SI or rational wiki.

comment by Rain · 2012-01-21T04:16:07.946Z · LW(p) · GW(p)

I wish I could decompile my statements of "they need to do a much better job at marketing" into paragraphs like this. Thanks.

Replies from: wedrifid
comment by wedrifid · 2012-01-21T06:19:58.212Z · LW(p) · GW(p)

I wish I could decompile my statements of "they need to do a much better job at marketing" into paragraphs like this.

Practice makes perfect!

comment by FeepingCreature · 2012-01-20T13:20:51.426Z · LW(p) · GW(p)

Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing.

I hope you understand that this is not an argument against LW's policy in this matter.

comment by wedrifid · 2012-01-20T11:20:36.128Z · LW(p) · GW(p)

Concepts like "Well-Kept Gardens Die By Pacifism" will at some point explode in your face.

Counterprediction: The optimal degree of implementation of that policy for the purpose of PR maximisation is somewhat higher than it currently is.

You don't secure an ideal public image by being gentle.

Replies from: XiXiDu
comment by XiXiDu · 2012-01-20T12:39:18.504Z · LW(p) · GW(p)

You don't secure an ideal public image by being gentle.

Don't start a war if you don't expect to be able to win it. It is much easier to damage a reputation than to build one, especially if you support a cause that can easily trigger the absurdity heuristic in third parties.

Being rude to people who don't get it will just cause them to reinforce their opinion and tell everyone that you are wrong instead. Which will work, because your arguments are complex and in support of something that sounds a lot like science fiction.

A better route is to just ignore them, if you are not willing to talk the matter over or to explain how exactly they are wrong. And if you consider both routes to be undesirable, then do it like FHI and don't host a public forum.

Replies from: wedrifid
comment by wedrifid · 2012-01-20T13:07:24.747Z · LW(p) · GW(p)

Being rude to people

Being gratuitously rude to people isn't the point. 'Maintaining a garden' for the purpose of optimal PR involves far more targeted and ruthless intervention. "Weeds" (those who are likely to try to sabotage your reputation, otherwise interfere with your goals, or significantly provoke 'rudeness' from others) are removed early, before they have a chance to take root.

comment by [deleted] · 2012-01-20T13:07:59.096Z · LW(p) · GW(p)

I've had these thoughts for a while, but I undoubtedly would have done much worse in writing them down than you have. Well done.

comment by TrE · 2012-01-20T13:48:48.582Z · LW(p) · GW(p)

Related: http://www.overcomingbias.com/2012/01/dear-young-eccentric.html

Don't appear like a rebel, be a rebel. Don't signal rebel-ness; instead, be part of the system and infiltrate it with your ideas. If those ideas are decent, this has a good chance of working.

comment by lessdazed · 2012-01-23T15:54:01.376Z · LW(p) · GW(p)

members are unwilling to talk to them in a charitable way

The problem is will?

comment by [deleted] · 2012-01-22T20:28:21.923Z · LW(p) · GW(p)

Do you think IBM, Apple or DARPA care about a blog and a popular fanfic? Do you think that you can even talk to DARPA without first getting involved in some amount of politics, making powerful people aware of the risks?

Organizations are made of people. People in highly technical or scientific lines of work are likely to pay less attention to social signaling bullshit and more to actual validity of arguments or quality of insights. By writing the sequences Eliezer was talking to those people and by extension to the organizations that employ them.

A somewhat funny example: there's an alternative keyboard layout, called Colemak, that was developed about 5 years ago by people from the Internet and later promoted by enthusiasts on the Internet. Absolutely no institutional muscle to back it up. Yet it somehow ended up included in the latest version of Mac OS X. Does that mean that Apple started caring about Colemak? I don't think the execs had a meeting about it. Maybe the question of whether an organization "cares" about something isn't that well defined.

Replies from: asr
comment by asr · 2012-01-22T20:43:26.726Z · LW(p) · GW(p)

Organizations are made of people. People in highly technical or scientific lines of work are likely to pay less attention to social signaling bullshit and more to actual validity of arguments or quality of insights. By writing the sequences Eliezer was talking to those people and by extension to the organizations that employ them.

I am skeptical of this claim and would like evidence. My experience is that scientists are just as tribal, status-conscious and signalling-driven as anybody else. (I am a graduate student in the sciences at a major research university.)

comment by Bugmaster · 2012-01-21T01:53:54.437Z · LW(p) · GW(p)

The first three statements can be boiled down to saying, "I, Eliezer, am much better at understanding and developing AI than the overwhelming majority of professional AI researchers".

Is that statement true, or false ? Is Eliezer (or, if you prefer, the average SIAI member) better at AI than everyone else (plus or minus epsilon) who is working in the field of AI ?

The prior probability for such a claim is quite low, especially since the field is quite large, and includes companies such as Google and IBM who have accomplished great things. In order to sway my belief in favor of Eliezer, I'll need to witness some great things that he has accomplished; and these great things should be significantly greater than those accomplished by the mainstream AI researchers. The same sentiment applies to SIAI as a whole.

comment by erratio · 2012-01-19T21:48:12.360Z · LW(p) · GW(p)

To repeat something I said in the other thread, truth values have nothing to do with tone. It's the same issue some people downthread have with Tim Ferriss - no one denies that he seems very effective, but he communicates in a way that gives many people an unpleasant vibe. Same goes if you communicate in a way that pattern-matches to 'arrogant'.

Replies from: lukeprog
comment by lukeprog · 2012-01-19T22:18:29.161Z · LW(p) · GW(p)

Of course. That's why I said I can "smell the arrogance," and then went on to ask a different question about whether XiXiDu thought the claims were false.

Replies from: kbaxter, jmmcd
comment by kbaxter · 2012-01-19T22:57:06.799Z · LW(p) · GW(p)

I can smell the "arrogance," but do you think any of the claims in these paragraphs is false?

When I read that, I interpreted it to mean something like "Yes, he does come across as arrogant, but it's okay because everything he's saying is actually true." It didn't come across to me like a separate question - it read to me like a rhetorical question which was used to make a point. Maybe that's not how you intended it?

I think erratio is saying that it's important to communicate in a way that doesn't turn people off, regardless of whether what you're saying is true or not.

comment by jmmcd · 2012-01-19T23:19:46.652Z · LW(p) · GW(p)

But I don't get it. You asked for examples and XiXiDu gave some. You can judge whether they were good or bad examples of arrogance. Asking whether the examples qualify under another, different criterion seems a bit defensive.

Also, several of the examples were of the form "I was tempted to say X" or "I thought Y to myself", so where does truth or falsity come into it?

Replies from: lukeprog
comment by lukeprog · 2012-01-20T00:04:13.107Z · LW(p) · GW(p)

Okay, let me try again...

XiXiDu, those are good examples of why people think SI is arrogant. Out of curiosity, do you think the statements you quote are actually false?

Replies from: None
comment by [deleted] · 2012-01-21T04:41:45.886Z · LW(p) · GW(p)

.

comment by [deleted] · 2012-01-20T22:06:09.510Z · LW(p) · GW(p)

(So You Want To Be A Seed AI Programmer)

I hadn't seen that before. Was it written before the sequences?

I ask because it all seemed trivial to my sequenced self and it seemed like it was not supposed to be trivial.

I must say that writing the sequences is starting to look like it was a very good idea.

Replies from: katydee
comment by katydee · 2012-01-21T16:32:53.559Z · LW(p) · GW(p)

I believe so; I also believe that post is now considered obsolete.

comment by amcknight · 2012-01-19T20:25:55.970Z · LW(p) · GW(p)

FWIW, I'm not sure why you added the 2nd quote, and the 3rd is out of context. Also, remember that we're talking about 700+ blog posts and other articles. Just be careful you're not cherry-picking.

Replies from: None
comment by [deleted] · 2012-01-20T13:01:06.503Z · LW(p) · GW(p)

This isn't a useful counterargument when the subject at hand is public relations. Several organizations have been completely pwned by hostile parties cherry-picking quotes.

Replies from: None, amcknight
comment by [deleted] · 2012-01-20T21:17:05.499Z · LW(p) · GW(p)

The point was "you may be quote mining", which is a useful thing to tell a LWer, even if it doesn't mean a thing to "the masses".

comment by amcknight · 2012-01-20T20:38:14.946Z · LW(p) · GW(p)

Good point.

comment by wedrifid · 2012-01-22T06:14:07.945Z · LW(p) · GW(p)

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified. (So You Want To Be A Seed AI Programmer)

I love this quote. Yes, it's totally arrogant, but I love it just the same. It would be a shame if Eliezer had to lose this attitude. (Even though all things considered it may be better if he did.)

Replies from: lessdazed
comment by lessdazed · 2012-01-23T16:12:57.515Z · LW(p) · GW(p)

OK, before we get started, I have one question. Has anyone here passed the Series 7 Exam?

comment by Matt_Simpson · 2012-01-19T17:44:18.183Z · LW(p) · GW(p)

Interestingly, the first sentence of this comment set off my arrogance sensors (whether justified or not). I don't think it's the content of your statement, but rather the way you said it.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-19T19:37:32.419Z · LW(p) · GW(p)

I believe that. My first-pass filter for theories of why some people think SIAI is "arrogant" is whether the theory also explains, in equal quantity, why those same people find Harry James Potter-Evans-Verres to be an unbearably snotty little kid or whatever. If the theory is specialized to SIAI and doesn't explain the large quantities of similar-sounding vitriol gotten by a character in a fanfiction in a widely different situation who happens to be written by the same author, then in all honesty I write it off pretty quickly. I wouldn't mind understanding this better, but I'm looking for the detailed mechanics of the instinctive sub-second ick reaction experienced by a certain fraction of the population, not the verbal reasons they reach for afterward when they have to come up with a serious-sounding justification. I don't believe it, frankly, any more than I believe that someone actually hates hates hates Methods because "Professor McGonagall is acting out of character".

Replies from: CharlesR, thomblake, Matt_Simpson, None
comment by CharlesR · 2012-01-20T18:34:11.995Z · LW(p) · GW(p)

I once read a book on characterization. I forget the exact quote, but it went something like, "If you want to make your villain more believable, make him more intelligent."

I thought my brain had misfired. But apparently, for the average reader it works.

comment by thomblake · 2012-01-19T20:50:45.800Z · LW(p) · GW(p)

I acquired my aversion to modesty before reading your stuff, and I seem to identify that "thing", whatever it is shared by you and Harry, as "awesome" rather than "arrogant".

You're acting too big for your britches. You can't save the world; you're not Superman. Harry can't invent new spells; he's just a student. The proper response to that sort of criticism is to ignore it and (save the world / invent new spells) anyway. I don't think there really is a way to make it go away without actually diminishing your ability to do awesome stuff.

comment by Matt_Simpson · 2012-01-20T01:49:52.107Z · LW(p) · GW(p)

FWIW I don't ever recall having this reaction to Harry, though my memory is pretty bad and I think I'm easily manipulated by stories.

It may have something to do with being terse and blunt - this often makes the speaker seem as though they think they're "better" than their interlocutors. I had a Polish professor for one of my calculus classes in undergrad who, being a Pole speaking English, naturally sounded very blunt to our American ears. There were several students in that class who just thought he was an arrogant asshole who talked down to his students. I'm mostly speculating here though.

comment by [deleted] · 2012-01-19T20:55:19.997Z · LW(p) · GW(p)

Self-reference and any more than a moderate degree of certainty about anything that isn't considered normal by whoever happens to be listening are both (at least, in my experience) considered less than discreet.

Trying to demonstrate that one isn't arrogant probably qualifies as arrogance, too.

I don't know how useful this observation is, but I thought it was at least worth posting.

comment by sixes_and_sevens · 2012-01-19T17:00:44.222Z · LW(p) · GW(p)

"Here is a threat to the existence of humanity which you've likely never even considered. It's probably the most important issue our species has ever faced. We're still working on really defining the ins and outs of the problem, but we figure we're the best people to solve it, so give us some money."

Unless you're a fictional character portrayed by Will Smith, I don't think there's enough social status in the world to cover that.

Replies from: Vladimir_Nesov, amcknight
comment by Vladimir_Nesov · 2012-01-19T17:17:15.734Z · LW(p) · GW(p)

If trying to save the world requires having more social status than is humanly obtainable, then the world is lost, even if it was easy to save...

Replies from: sixes_and_sevens, Epiphany
comment by sixes_and_sevens · 2012-01-19T17:40:13.566Z · LW(p) · GW(p)

The question is one of credibility rather than capability. In the private, public, academic and voluntary sectors it's a fairly standard assumption that if you want people to give you resources, you have to do a little dance to earn it. Yes, it's wasteful and stupid and inefficient, but it's generally easier to do the little dance than to convince people that the little dance is a stupid system. They know that already.

It's not arrogant to say "my time is too precious to do a little dance", and it may even be true. The arrogance would be to expect people to give you those resources without the little dance. I doubt the folk at SIAI expect this to happen, but I do suspect they're probably quite tired of being asked to dance.

Replies from: NihilCredo
comment by NihilCredo · 2012-01-20T21:39:48.542Z · LW(p) · GW(p)

The little dance is not wasteful and stupid and inefficient. For each individual with the ability to provide resources (be they money, manpower, or exposure), there are a thousand projects that would love to be the beneficiaries of said resources. Challenging the applicants to produce some standardised signals of competence is a vastly more efficient approach than expecting the benefactors to be able to thoroughly analyse each and every applicant's exoteric efforts.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-01-21T00:02:51.709Z · LW(p) · GW(p)

I agree that methods of signalling competence are, in principle, a fine mechanism for allowing those with resources to responsibly distribute them between projects.

In practice, I've seen far too many tall, attractive, well-spoken men from affluent backgrounds go up to other tall, attractive, well-spoken men from affluent backgrounds and get them to allocate ridiculous quantities of money and man-hours to projects on the basis of presentations which may as well be written in crayon for all the salient information they contain.

The amount this happens varies from place to place, and in the areas where I see it most there does seem to be an improving trend of competence signalling actually correlating to whatever it is the party in question needs to be competent at, but there is still way too much scope for such signalling being as applicable to the work in question as actually getting up in front of potential benefactors and doing a little dance.

comment by Epiphany · 2012-09-07T06:34:38.304Z · LW(p) · GW(p)

Unless people wake up to the fact that people are requiring an appeal to authority as a prerequisite for important decisions, AND gain the ability to determine for themselves whether something is a good cause. I think the reason people rely on appeals to popularity, authority and the "respect" that comes with status is that they do not feel competent to judge for themselves.

comment by amcknight · 2012-01-19T20:36:17.452Z · LW(p) · GW(p)

This isn't fair. Use a real quote.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-01-19T21:15:44.475Z · LW(p) · GW(p)

Uh...no. It's in quotation marks because it's expressed as dialogue for stylistic purposes, not because I'm attributing it as a direct statement made by another person. That may make it a weaker statement than if I'd used a direct quote, but it doesn't make it invalid.

Replies from: amcknight, Vaniver
comment by amcknight · 2012-01-19T21:55:33.313Z · LW(p) · GW(p)

Arrogance is probably to be found in the way things are said rather than the content. By not using a real example, you've invented the tone of the argument.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-01-19T22:38:35.796Z · LW(p) · GW(p)

It's not supposed to be an example of arrogance, through tone or otherwise. It's a broad paraphrasing of the purpose and intent of SIAI to illustrate the scope, difficulty and nebulousness of same.

Replies from: amcknight
comment by amcknight · 2012-01-20T20:47:59.341Z · LW(p) · GW(p)

OK, sure. But now I'm confused about why you said it. Aren't we specifically talking about arrogance?

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-01-21T00:09:15.011Z · LW(p) · GW(p)

EY made a (quite reasonable) observation that the perceived arrogance of SIAI may be a result of trying to tackle a problem disproportionately large for the organisation's social status. My point was that the problem (FAI) is so large that no-one can realistically claim to have enough social status to try and tackle it.

comment by Vaniver · 2012-01-19T23:59:41.400Z · LW(p) · GW(p)

Typically, when I paraphrase I use apostrophes rather than quotation marks to avoid that confusion. I don't know if that's standard practice or not.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-01-20T00:39:22.789Z · LW(p) · GW(p)

It's my understanding there's no formal semantic distinction between single- or double-quotes as punctuation, and their usage is a typographic style choice. Your distinction does make sense in a couple of different ways, though. The one that immediately leaps to mind is the distinction between literal and interpreted strings in Perl, et al., though that's a bit of a niche association.

Also, single quotes are more commonly used for denoting dialogue, but that has more to do with historical practicalities of the publishing and printing industries than any kind of standard practice. The English language itself doesn't really seem to know what it's doing when it puts something in quotes, hence the dispute over whether trailing commas and full stops belong inside or outside quotations. One makes sense if you're marking up the text itself, while the other makes sense if you're marking up what the text is describing.

I think I may adopt this usage.

Replies from: NihilCredo
comment by NihilCredo · 2012-01-20T21:40:54.989Z · LW(p) · GW(p)

LessWrong, at least, has a markup function that is specifically designed for the purpose of quoting.

- NihilCredo

comment by wedrifid · 2012-01-22T06:10:19.459Z · LW(p) · GW(p)

I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern.

My thinking when I read this post went something along these lines, but where you put "made up because" I put "actually consists of". That is, acting in a way that (the observer perceives) is beyond your station is a damn good first approximation to a practical definition of 'arrogance'. I would go as far as to say that if you weren't being arrogant you wouldn't be able to do your job. Please keep on being arrogant!

The above said, there are other behaviors that will provoke the label 'arrogant' which are not beneficial. For example:

  • Acting like one is too good to have to update based on what other people say. You've commented before that high status can make you stupid. Being arrogant - acting in an exaggerated high-status manner - certainly enhances this phenomenon. As far as high-status people go you aren't too bad along the "too arrogant to be able to comprehend what other people say" axis, but "better than most high-status people" isn't the bar you are aiming for.
  • Acting oblivious to how people think of you isn't usually the optimal approach for people whose success (in, for example, saving the @#%ing world) depends on the perceptions of others (who give you the money).

When I saw Luke make this post I thought: ah, Luke is taking his new role seriously and actively demonstrating that he is committed to being open to feedback and to managing public perception. I expected both him and others from SingInst to actively resist the temptation to engage with the (requested!) criticism, so as to avoid looking defensive and undermining the whole point of what he was attempting.

What was your reasoning when you decided to make this reply? Did you think to yourself "What's the existential-opportunity-maximising approach here? I know! I'm going to reply with aggressive defensiveness and cavalierly dismiss all those calling me arrogant as suffering bias because they are unable to accept how awesome we are!" Of course what you say is essentially correct yet saying it in this context strikes me as a tad naive. It's also (a behavior that will prompt people to think of you as) rather arrogant.

(As a tangent that I find at least mildly curious I've just gone and rather blatantly condescended to Eliezer Yudkowsky. Given that Eliezer is basically superior to me in every aspect (except, I've discovered, those abilities that are useful when doing Parkour) this is the very height of arrogance. But then in my case the very fate of the universe doesn't depend on what people think of me!)

comment by jeremysalwen · 2012-04-02T03:20:52.569Z · LW(p) · GW(p)

Here: http://lesswrong.com/lw/ua/the_level_above_mine/

I was going to go through quote by quote, but I realized I would be quoting the entire thing.

Basically:

A) You imply that you have enough brainpower to consider yourself to be approaching Jaynes's level ("approaching" alluded to in several instances).
B) You were surprised to discover you were not the smartest person Marcello knew (or, if you consider "surprised" too strong a word, compare your reaction to that of the merely very smart people I know, who would certainly not respond with "Darn").
C) Upon hearing someone was smarter than you, the first thing you thought of was how to demonstrate that you were smarter than them.
D) You say that not being a genius like Jaynes and Conway is a "possibility" you must "confess" to.
E) You frame in equally probable terms the possibility that the only thing separating you from genius is that you didn't study quite enough math as a kid.

So basically, yes, you don't explicitly say "I am a mathematical genius", but you certainly position yourself as hanging out on the fringes of this "genius" concept. Maybe I'll call it "Schrödinger's Genius".

Please ignore that this is my first post and it seems hostile. I am a moderate-time lurker and this is the first time that I felt I had relevant information that was not already mentioned.

comment by IlyaShpitser · 2012-03-14T20:57:29.932Z · LW(p) · GW(p)

Hi Luke,

I think you are correct that SI has an image problem, and I agree that it's at least partially due to academic norm violations (and partially due to the personalities involved). And partially due to the fact that out of possible social organizations, SI most readily maps to a kind of secular cult, where a charismatic leader extracts a living from his followers.

If the above is seen as a problem in need of correcting, then some possibilities for change include:

(a) Adopting mainstream academic norms strategically.
(b) Competing in the "mainstream marketplace of ideas" by writing research grant proposals.

comment by NancyLebovitz · 2012-01-19T07:52:40.303Z · LW(p) · GW(p)

There's the signalling problem from boasting in this culture, but should we also be taking a look at whether boasting is a custom that there are rational reasons for encouraging or dropping?

comment by linkhyrule5 · 2013-07-15T06:31:29.024Z · LW(p) · GW(p)

Since it's been seven months, I'm curious - how much of this, if any, has been implemented? TDT has been published, but it doesn't get too many hits outside of LessWrong/MIRI, for example.

comment by Epiphany · 2012-12-25T06:45:40.954Z · LW(p) · GW(p)

This is the best example I've seen so far:

I actually intend to fix the universe (or at least throw some padding atop my local region of it, as disclaimed above)

The padding version seems more reasonable next to the original statement, but neither of these is a very realistic goal for a person to accomplish. There is probably no way to present grandiosity such as this without it coming across as arrogance or worse.

http://lesswrong.com/lw/uk/beyond_the_reach_of_god/nsh

comment by Armok_GoB · 2012-01-25T22:56:52.660Z · LW(p) · GW(p)

I still don't get what's actually supposed to be wrong about being arrogant. In all the examples I've found of actual arrogance, it seems a good and sensible reaction when justified, and in the alleged cases of it causing bad outcomes it's never actually the arrogance itself that does; there is just an overconfidence causing both the arrogance and the bad outcome. Is this just some social taboo because it correlates with overconfidence?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-26T05:19:14.503Z · LW(p) · GW(p)

If I behave arrogantly and as a consequence other people are less willing/able to coordinate effectively with me, would you consider that a bad outcome? If so, do you believe that never happens? Or would you say that in that case the cause of the bad outcome is other people's reactions to my arrogance, rather than the arrogance itself? Or something else?

Replies from: Armok_GoB
comment by Armok_GoB · 2012-01-26T14:56:42.233Z · LW(p) · GW(p)

Yeah, the only case of that bad outcome is people's bad reactions to it, and further I can't see why people should react badly to it. It seems like an arbitrary and unfair taboo against a perfectly valid personality trait/emotional reaction.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-26T15:25:58.051Z · LW(p) · GW(p)

So, just making sure I understand: you acknowledge that it does have these negative consequences at the moment, but you're arguing that it's a mistake to conclude that therefore arrogant people ought to change anything about themselves; the proper conclusion is that arrogance-averse folks should get over it and acknowledge the importance of equal treatment for arrogant people. Yes?

Replies from: Armok_GoB
comment by Armok_GoB · 2012-01-26T16:33:26.862Z · LW(p) · GW(p)

Ideally, yes, but that's obviously not going to happen.

If I were to propose a course of action, it'd be "Realize the question is more complex than it seems, that the situation is likely to require messy compromise and indirection, and to hold off on proposing solutions".

comment by Bugmaster · 2012-01-23T22:33:30.722Z · LW(p) · GW(p)

I'd be curious to see your feedback regarding the comments on this post. Do you believe that the answers to your questions were useful? If so, what are you going to do about it (and if not, why not)? If you have already done something, what was it, and how effective did it end up being?

comment by fburnaby · 2012-01-23T04:24:10.738Z · LW(p) · GW(p)

What about getting some tech/science savvy public-relations practitioners involved? Understanding and interacting effectively with the relevant publics might just be a skill worthy of dedicated consideration and more careful management.

comment by [deleted] · 2012-01-20T00:35:09.878Z · LW(p) · GW(p)

Personally, I don't think SI is arrogant, but rather that it should work harder to publish books/papers so that they would be more accepted by the general scientific (and even non-scientific) community. Not that I think they aren't trying already...

comment by thomblake · 2012-01-19T20:58:22.242Z · LW(p) · GW(p)

Are there subjects and ways in which SI isn't arrogant enough?

Informally, let us suppose perceived arrogance in attempting a task is the perceived competence of the individual divided by the perceived difficulty of the task. SIAI is attempting an impossible task, without infinite competence. Thus, there is no way SIAI can be arrogant enough.

Replies from: None
comment by [deleted] · 2012-01-22T19:03:48.251Z · LW(p) · GW(p)

I'm pretty sure you flipped the fraction upside-down here. Shouldn't it be perceived difficulty of the task divided by perceived competence? A gifted high-school student who boldly declares that he will develop a Theory of Everything over the course of summer vacation is arrogant (low competence, high difficulty). A top-notch theoretical physicist who boldly declares that he will solve a problem from a high-school math contest is not. So SIAI is actually infinitely arrogant, according to your assumptions.
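Restating the corrected ratio symbolically, with invented labels for the thread's informal quantities (purely illustrative):

```latex
% A = perceived arrogance, D = perceived difficulty of the task,
% C = perceived competence of the agent
A = \frac{D}{C}
% Gifted high-schooler: D large, C small \Rightarrow A large (arrogant)
% Top physicist on a contest problem: D small, C large \Rightarrow A small
% SIAI on thomblake's premise: D \to \infty with C finite \Rightarrow A \to \infty
```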

Replies from: thomblake
comment by thomblake · 2012-01-23T17:05:20.698Z · LW(p) · GW(p)

I'm pretty sure I did too. But the whole explanation seems much less intuitive to me now, so I'll retract rather than correct it.

Replies from: komponisto
comment by komponisto · 2012-01-23T17:30:04.704Z · LW(p) · GW(p)

It seems to me that the "perceived arrogance quotient" used by most people is the following: (status asserted by speaker as perceived by listener)/(status assigned to speaker by listener)

However, I think this is wrong and unfair, and it should instead be: (status asserted by speaker as perceived by speaker)/(status assigned to speaker by listener)

That is, before you call someone arrogant, you should have to put in a little work to determine their intention, and what the world looks like from their point of view.
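Written out, the two quotients differ only in whose perception supplies the numerator (again just a symbolic restatement of the comment, with made-up symbol names):

```latex
% S_L = status asserted by the speaker, as perceived by the listener
% S_S = status asserted by the speaker, as perceived by the speaker
% B_L = status the listener assigns to the speaker
\text{common usage:}\quad \frac{S_L}{B_L}
\qquad
\text{komponisto's proposal:}\quad \frac{S_S}{B_L}
```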

comment by roland · 2012-01-18T23:17:01.476Z · LW(p) · GW(p)

You asked for it, so I will raise this topic yet again; I hope not to get downvoted:

In http://lesswrong.com/lw/1ww/undiscriminating_skepticism/

EY wrote:

I don't believe there were explosives planted in the World Trade Center. I don't believe in haunted houses. I don't believe in perpetual motion machines. I believe that all these beliefs are not only wrong but visibly insane.

I presented evidence to the contrary of that claim only to be downvoted: http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1r5v

I asked EY to explain why he thinks so only to be downvoted: http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1t7r

My 9/11 survey was also downvoted. I'm fine with people having their opinion on 9/11 but why would you prevent others from collecting objective data? http://lesswrong.com/lw/8ac/911_survey/

I think LW has a great difficulty in allowing dissenting voices.

Replies from: Nornagest, shminux, faul_sname, roland
comment by Nornagest · 2012-01-18T23:33:22.630Z · LW(p) · GW(p)

You didn't get downvoted for raising a dissenting voice: well-thought-out posts dissenting from the perceived SingInst party line on subjects perceived as on-topic, such as singularitarianism or cryonics, can and do get heavily upvoted. You got downvoted for pushing a fringe opinion on a highly politicized topic, in a forum where it's perceived as inappropriate to do so, and now you're getting downvoted for doing it again.

Politics makes people stupid. Attempts to politicize LW are correctly interpreted as attacks on the site's collective sanity, about half a step up from actual trolling. I'm not sure it's possible to discuss "9/11 Truth" theories in a politically neutral manner, but for your posts to have even a chance of being well received, you'll probably need to exercise much greater caution than you have so far.

I'd recommend not trying.

Replies from: RobertLumley, roland
comment by RobertLumley · 2012-01-18T23:38:41.162Z · LW(p) · GW(p)

This. You are the person every online community has who brings up his own pet topic, even if it's not even remotely related to the thread. That is the primary reason I downvoted. It's nothing but obnoxious.

Replies from: dbaupp
comment by dbaupp · 2012-01-19T00:16:59.317Z · LW(p) · GW(p)

(Is this a reply to the wrong comment?)

Replies from: Nornagest, RobertLumley
comment by Nornagest · 2012-01-19T00:38:54.075Z · LW(p) · GW(p)

I'm pretty sure the "you" refers to my comment's parent and that the "This." refers to mine. It'd have made more sense in a non-threaded forum, granted.

comment by RobertLumley · 2012-01-19T01:25:40.654Z · LW(p) · GW(p)

Yes, sorry. I can see how that was confusing. Nornagest's interpretation of my comment is correct.

comment by roland · 2012-01-19T00:11:23.689Z · LW(p) · GW(p)

I agree politics shouldn't be discussed on LW, but I'm not the one who started the whole 9/11 thing. The first post on that topic was from Robin Hanson on OB; later EY made at least two posts on that topic on LW, and it has since sprung up occasionally in reference to conspiracy theories and the like. If it was possible to raise the topic in the first place, it should also be permissible to present dissenting views.

Replies from: Nornagest
comment by Nornagest · 2012-01-19T00:32:19.239Z · LW(p) · GW(p)

It's become something of a canonical example of a conspiracy theory. Eliezer would have been better served choosing something less topical, but I don't think bringing it up in the context of an example fringe belief is politicized to anywhere near the same extent as bringing it up as a serious proposal would be, for the same reason it isn't particularly controversial to say that, for example, President Obama is not regularly engaged in Satanic ritual abuse.

There's an asymmetry there, but it's one that you should expect when dealing with uncommon beliefs.

comment by Shmi (shminux) · 2012-01-18T23:20:04.715Z · LW(p) · GW(p)

Downvoted for beating the dead horse and straying off topic.

comment by faul_sname · 2012-01-19T00:21:50.659Z · LW(p) · GW(p)

Eliezer probably shouldn't have brought up 9/11 in the first place (it is a mindkiller even more than normal politics). Politics in general, and 9/11 in particular, are off topic for this site. You will note that LW did not shut down to protest SOPA, even though a large number of members oppose the legislation.

comment by roland · 2012-01-19T00:01:23.823Z · LW(p) · GW(p)

What should SI/LW do about this? I think there should be some rules as to when downvoting is allowed. Alternatively, I would suggest some kind of betting market: every time you downvote or upvote something you are making a bet with your karma, so if it later somehow turns out that your bet was "wrong" you will lose karma; but I'm not sure how it would be possible to implement this.
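One way the escrow could work, as a minimal sketch with invented names and a toy payout rule (nothing here corresponds to actual LessWrong code):

```python
# Hypothetical sketch of the "karma bet" idea: voting stakes a small amount
# of karma, and stakes are settled once the disputed claim resolves.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class KarmaBet:
    user: str
    direction: int  # +1 for upvote, -1 for downvote
    stake: int      # karma escrowed when the vote is cast

@dataclass
class DisputedClaim:
    description: str
    bets: List[KarmaBet] = field(default_factory=list)

    def vote(self, user: str, direction: int, stake: int = 1) -> None:
        """Cast a vote, escrowing `stake` karma until the claim resolves."""
        self.bets.append(KarmaBet(user, direction, stake))

    def resolve(self, outcome: int) -> Dict[str, int]:
        """Settle all bets; `outcome` is +1 or -1 for which side was vindicated.

        Losers forfeit their stake; winners get their stake back plus an equal
        share of the losing pool. Returns each user's net karma change.
        """
        winners = [b for b in self.bets if b.direction == outcome]
        losers = [b for b in self.bets if b.direction != outcome]
        losing_pool = sum(b.stake for b in losers)
        net: Dict[str, int] = {b.user: -b.stake for b in losers}
        share = losing_pool // len(winners) if winners else 0
        for b in winners:
            # stake is returned, so the net gain is just the share of the pool
            net[b.user] = net.get(b.user, 0) + share
        return net

# Toy example: three voters disagree; the claim later resolves in favour of +1.
claim = DisputedClaim("post X is accurate")
claim.vote("alice", +1)
claim.vote("bob", -1)
claim.vote("carol", +1)
print(claim.resolve(+1))  # {'bob': -1, 'alice': 0, 'carol': 0}
```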

Disallowing rude language, like calling people insane or implying it, etc. (this should be a given, but it doesn't seem to be the case here), unless it is agreed upon (Crocker's rules).

Replies from: shminux, JoshuaZ, RobertLumley, mwengler
comment by Shmi (shminux) · 2012-01-19T00:14:30.047Z · LW(p) · GW(p)

Given the reception you get here, time and again, consider looking for a more suitable forum for your fringe ideas. Otherwise you will keep finding yourself in this situation.

comment by JoshuaZ · 2012-01-27T20:44:52.901Z · LW(p) · GW(p)

Alternatively, I would suggest some kind of betting market: every time you downvote or upvote something you are making a bet with your karma, so if it later somehow turns out that your bet was "wrong" you will lose karma; but I'm not sure how it would be possible to implement this.

This is a very interesting statement, given that I and others have repeatedly suggested that to get people to listen to you about 9/11 issues you could use Intrade, PredictionBook, or Longbets, or make a specific bet with someone here. (Example discussion one, example discussion two.)

So apparently you want to make other people risk resources but not yourself. Do you see why this sort of attitude is going to prompt downvotes? (And of course, I'm still willing to make a monetary bet along the lines discussed earlier.)

comment by RobertLumley · 2012-01-19T02:05:10.266Z · LW(p) · GW(p)

It is just as rude of you to continue to bring up a topic that we have told you time and time again we have made up our minds on and do not wish to consider any further.

I typically am quick to downvote if I see rudeness, but I will always make an exception when it is responding to you bringing up the same topic again with no new information or evidence.

(Note: This is not a request for more "evidence". I have no interest in whether or not the US government is responsible for 9/11. It is not a helpful use of my time to consider that question as I have nothing to gain from changing my mind, if that were even possible.)

comment by mwengler · 2012-01-19T15:50:00.078Z · LW(p) · GW(p)

comment deleted.