Holden Karnofsky's Singularity Institute critique: Is SI the kind of organization we want to bet on?

post by Paul Crowley (ciphergoth) · 2012-05-11T07:25:56.637Z · LW · GW · Legacy · 11 comments

Contents

  Is SI the kind of organization we want to bet on?
    Wrapup

The sheer length of GiveWell co-founder and co-executive director Holden Karnofsky's excellent critique of the Singularity Institute means that it's hard to keep track of the resulting discussion.  I propose to break out each of his objections into a separate Discussion post so that each receives the attention it deserves.

Is SI the kind of organization we want to bet on?

This part of the post has some risks. For most of GiveWell's history, sticking to our standard criteria - and putting more energy into recommended than non-recommended organizations - has enabled us to share our honest thoughts about charities without appearing to get personal. But when evaluating a group such as SI, I can't avoid placing a heavy weight on (my read on) the general competence, capability and "intangibles" of the people and organization, because SI's mission is not about repeating activities that have worked in the past. Sharing my views on these issues could strike some as personal or mean-spirited and could lead to the misimpression that GiveWell is hostile toward SI. But it is simply necessary in order to be fully transparent about why I hold the views that I hold.

Fortunately, SI is an ideal organization for our first discussion of this type. I believe the staff and supporters of SI would overwhelmingly rather hear the whole truth about my thoughts - so that they can directly engage them and, if warranted, make changes - than have me sugar-coat what I think in order to spare their feelings. People who know me and my attitude toward being honest vs. sparing feelings know that this, itself, is high praise for SI.

One more comment before I continue: our policy is that non-public information provided to us by a charity will not be published or discussed without that charity's prior consent. However, none of the content of this post is based on private information; all of it is based on information that SI has made available to the public.

There are several reasons that I currently have a negative impression of SI's general competence, capability and "intangibles." My mind remains open and I include specifics on how it could be changed.

A couple of positive observations to add context here:

Wrapup

While SI has produced a lot of content that I find interesting and enjoyable, it has not produced what I consider evidence of superior general rationality or of its suitability for the tasks it has set for itself. I see no qualifications or achievements that specifically seem to indicate that SI staff are well-suited to the challenge of understanding the key AI-related issues and/or coordinating the construction of an FAI. And I see specific reasons to be pessimistic about its suitability and general competence.

When estimating the expected value of an endeavor, it is natural to have an implicit "survivorship bias" - to use organizations whose accomplishments one is familiar with (which tend to be relatively effective organizations) as a reference class. Because of this, I would be extremely wary of investing in an organization with apparently poor general competence/suitability to its tasks, even if I bought fully into its mission (which I do not) and saw no other groups working on a comparable mission.

11 comments

Comments sorted by top scores.

comment by Rain · 2012-05-11T12:47:23.188Z · LW(p) · GW(p)

Harsh but true. Luke seems ready to take all this to heart, and make improvements to address each of these points.

Replies from: lukeprog, tenlier
comment by lukeprog · 2012-05-12T02:46:59.824Z · LW(p) · GW(p)

Luke seems ready to take all this to heart, and make improvements to address each of these points.

Yes, especially if by "ready to take all this to heart" you mean "already agreed with most of the stuff on organizational problems before Holden wrote the post." :)

comment by tenlier · 2012-05-11T19:20:28.500Z · LW(p) · GW(p)

That was half my initial reaction as well; the other half:

The critique mostly consists of points that are pretty persistently bubbling beneath the surface around here, and get brought up quite a bit. Don't most people regard this as a great summary of their current views, rather than persuasive in any way? In fact, the only effect I suspect this had on most people's thinking was to increase their willingness to listen to Karnofsky in the future if he should change his mind. Since the post is basically directed at LessWrongians as an audience, I find all of that a bit suspicious (not in the sense that he's doing this deliberately).

Also, the only part of the post that interested me was this one (about the SI as an organization); the other stuff seemed kinda minor - praising with faint damns, relative to true outsiders, and so perhaps slightly misleading to LessWrongians.

Reading this (at least a year old, I believe) makes me devalue current protestations:

http://www.givewell.org/files/MiscCharities/SIAI/siai%202011%2002%20III.doc

I just assume people are pretty good at manipulating my opinion, and honestly, that often seems more the focus in the "academic outreach". People who think about signalling (outside of economics, evolution, etc.) are usually signalling bad stuff. Paying 20K or whatever to have someone write a review of your theory is also really, really interesting, as apparently SI is doing (it's on the balance sheet somewhere for that "commissioned" review; I forget the exact amount). Working on a dozen papers on which you might only have 5% involvement (again: or whatever) is also really, really interesting. I can't evaluate SI, but they smell totally unlike scientists and quite like philosophers. Which is probably true, and only problematic inasmuch as EY thinks other philosophy is mostly bunk. The closest thing to actually performed science I've seen on LW was that bit about rates of evolution, which was rather scatterbrained. If anyone can point me to some science, I'd be grateful. The old joke about Comp Sci (neither about Comp nor Sci) need not apply.

Replies from: David_Gerard
comment by David_Gerard · 2012-05-11T20:19:32.356Z · LW(p) · GW(p)

Apart from the value of having a smart, sincere person who likes you and has seriously tried to appreciate you give you their opinion of you ... Holden's post directly addresses "why the hell should people give money to you?" Particularly as his answer - as a staff member of a charity directory - is "to support your goals, they should not give money to you." That's about as harsh an answer as anyone could give a charity: "you are a net negative."

My small experience is on the fringes of Wikimedia. We get money mostly in the form of lots of small donations from readers. We have a few large donations (and we are very grateful indeed for them!), but we actively look for more small donations (a) to make ourselves less susceptible to the influence of large donors, and (b) to recruit co-conspirators: if people donate even a small amount, they feel like part of the team, and that's worth quite a lot to us.

The thing is that Wikimedia has never been very good at playing the game. We run this website and we run programmes associated with it. Getting money out of people has been a matter of shoving a banner up. We do A/B testing on the banners! But if we wanted to get rabid about money, there's a lot more we could be doing. (At possible expense of the actual mission.)

SIAI doesn't have the same wide reader base to get donations from. But the goal of a charity that cares about its objectives should be independence. I wonder how far they can go in this direction: to be able to say "well, we don't care what you say about us, us and our readers are enough." I wonder how far the CMR will go.

Replies from: tenlier
comment by tenlier · 2012-05-11T20:44:59.425Z · LW(p) · GW(p)

Sorry, I'm not quite understanding your first paragraph. The subsequent piece I agree with completely, and I think it applies in principle to a lot of SI activities (even if they're not looking for small donors). The same idea could roughly guide their outlook on "academic outreach", except it's a donation of time rather than money. For example, gaining credibility from a few big names is probably a bad idea, as is trying to play the game of seeking credibility.

On the first paragraph, apologies for repeating, but just clarifying: I'm assuming that everyone already should know that even if you're sympathetic to SI's goals, it's a bad idea to donate to them. Maybe it was a useful article for SI to better understand why people might feel that way. I'm just saying I don't think it was strictly speaking "persuasive" to anyone. Except, I was initially somewhat persuaded that Karnofsky is worth listening to in evaluating SI. I'm just claiming, I guess, that I was way more persuaded that it was worth listening to Karnofsky on this topic than I should have been, since I think everything he says is too obvious to imply shared values with me. So, in a few years, if he changes his mind on SI, I've now decided that I won't weight that as very important in my own evaluation. I don't mean that as a criticism of Karnofsky (his write-up was obviously fantastic). I'm just explicating my own thought process.

Replies from: Rain, Suryc11
comment by Rain · 2012-05-12T03:54:40.591Z · LW(p) · GW(p)

I don't think it was strictly speaking "persuasive" to anyone

I felt it was very persuasive.

comment by Suryc11 · 2012-05-12T01:51:22.490Z · LW(p) · GW(p)

I don't think it was strictly speaking "persuasive" to anyone

Just as a data point, I was rather greatly persuaded by Karnofsky's argument here. As someone who reads LW more often for the cognitive science/philosophy stuff and not so much for the FAI/Singularity stuff, I did not have a very coherent opinion of the SI, particularly one that incorporated objective critiques (such as Karnofsky's).

Furthermore, I certainly did not, as you assert, know that it is a bad idea to donate to the Singularity Institute. In fact, I had often heard the opposite here.

Replies from: tenlier
comment by tenlier · 2012-05-12T02:52:54.207Z · LW(p) · GW(p)

Thanks. That's very interesting to me, even as an anecdote. I've heard the opposite here too; that's why I made it a normative statement ("everyone already should know"). Between the missing money and the publication record, I can't imagine what would make SI look worth investing in to me. Yes, that would sometimes lead you astray. But even posts like, oh: http://lesswrong.com/lw/43m/optimal_employment/?sort=top

are pretty much the norm around here (I picked that since Luke helped write it). Basically, an insufficient attempt to engage with the conventional wisdom.

How much should you like this place just because they're hardliners on issues you believe in? (Generic you.) There are lots of compatibilists, materialists, consequentialists, MWIers, or whatever in the world. Less Wrong seems unusual in being rather hardline on these issues, but that's usually more a sign that people have turned them into social issues than a matter of intellectual conviction (or better, competence). Anyway, I've probably gone inappropriately off topic for this page; I'm just rambling. To say at least something on topic: a few months back there was an issue of Nature talking about philanthropy in science (a cover article and a few other pieces, as I recall); easily searchable, I'm sure, and it may have some relevance (both as SI tries to get money and as it "commissions" pieces).

comment by Viliam_Bur · 2012-05-13T15:43:12.368Z · LW(p) · GW(p)

By the way, was the 2009 theft resolved successfully, preferably in a "money back in SI" way?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-05-13T18:30:11.450Z · LW(p) · GW(p)

Luke mentioned this in his long list of recent improvements made by SI:

Note that we have won stipulated judgments to get much of this back, and have upcoming court dates to argue for stipulated judgments to get the rest back.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-05-14T07:58:17.531Z · LW(p) · GW(p)

Sounds like the criminal case didn't work out :(