GiveWell.org interviews SIAI

post by Paul Crowley (ciphergoth) · 2011-05-05T16:29:09.944Z · LW · GW · Legacy · 17 comments

Holden Karnofsky of GiveWell.org interviewed Jasen Murray of SIAI and published his notes (Edit: PDF, thanks lukeprog!), with updates from later conversations. Lots of stuff to take an interest in there - thanks to jsalvatier for drawing our attention to it. One new bit of information stands out in particular:

17 comments

Comments sorted by top scores.

comment by [deleted] · 2011-05-06T05:30:42.381Z · LW(p) · GW(p)

I'm surprised that no one has addressed Karnofsky's main point: even if we accept all of the premises on which SIAI was founded, it doesn't look like SIAI is doing anything particularly impressive (from an Outside View). Jasen's reply seemed to consist mostly of "we're working on it." What does the community think about this?

I was also rather disappointed that SIAI doesn't have a definitive plan for what it would do with more money. This might end up being a self-fulfilling prophecy that works against them: GiveWell may lower SIAI's rating because SIAI apparently hasn't planned out how it would scale up its operation. (But I think this is definitely a smaller concern than the previous one.)

Replies from: None, curiousepic, syllogism
comment by [deleted] · 2011-05-08T13:31:24.441Z · LW(p) · GW(p)

That's been one of the reasons I've not donated either (apart from one £10 donation made for odd reasons). Unfortunately, the SIAI seem to have decided that they need to keep the work they're doing secret. That's understandable, but it means we have absolutely no idea what the actual effect of a donation is.

comment by curiousepic · 2011-05-06T15:50:23.479Z · LW(p) · GW(p)

I don't know how formal this interview was, but quite honestly the interviewee seemed unprepared to give thorough answers to these questions.

comment by syllogism · 2011-05-06T05:58:25.867Z · LW(p) · GW(p)

Honesty is important, and the truth is that it's about the people. They need to have enough funding to snap up the right people as they come along, but they can't just "scale up" the way you can scale up, say, a health intervention in the third world.

Replies from: jsalvatier
comment by jsalvatier · 2011-05-06T14:51:14.472Z · LW(p) · GW(p)

This seems very plausible to me, but they need to articulate it and make a convincing case that there are people they could hire who would do good work on existential risk reduction. What would a good answer look like? "The visiting fellows program has been quite successful. Visiting fellows (who were not hired) have come up with ideas X, Y, and Z and accomplished a, b, and c, and there are people we would like to employ but cannot because we don't have the funding to do so."

comment by Vladimir_Nesov · 2011-05-05T17:03:07.448Z · LW(p) · GW(p)

it would also be a demonstration that there is such a thing as "skill at making sense of the world."

There probably isn't such a thing, at least not with a visible advantage, for most topics that enough people have already worked on.

Replies from: torekp
comment by torekp · 2011-05-06T00:05:28.740Z · LW(p) · GW(p)

A summary of the diet/nutrition literature which sorts the well-established points from the chaff would be extremely useful. Is there already a comprehensive online user-friendly treatment, written by extremely rational thinker(s) who have done thorough research? I would be surprised. The area is a minefield of cleverly disguised bad research.

Replies from: ciphergoth, Vaniver
comment by Paul Crowley (ciphergoth) · 2011-05-06T12:58:22.670Z · LW(p) · GW(p)

How will people be able to tell that you've done the job well?

My feeling is that there is generally very impressive rationality already on the market in fields of endeavour where you quickly get a clear answer on whether you were right, so it's hard for rationalists to show off their skills there. But when you start to compete outside those fields, no-one can tell that you're doing better.

Replies from: jsalvatier
comment by jsalvatier · 2011-05-06T14:58:55.486Z · LW(p) · GW(p)

Excellent point.

Is that a fixable problem? Maybe, though I agree it seems hard. One way to show you're more right than others (at least to other rationalists) might be to explicitly describe contradicting evidence and ways in which you could be wrong.

comment by Vaniver · 2011-05-06T04:36:27.970Z · LW(p) · GW(p)

The problem, though, is that what the rationalists will probably conclude is "there's no solid evidence, and it seems likely that any advice works well for a subset of the population and hurts other subsets of the population. You should quantify yourself and experiment."

Which is advice that people are going to have a hard time taking. Giving up specialization of labor is rather hard and rarely worth it.

Now, they might find a few gems (like "you should figure out diaphragmatic breathing"), but it'll be hard to separate those from fads or overbroad advice ("this diet agrees with my evolutionary views, thus I suspect it's good for everyone").

Replies from: jsalvatier
comment by jsalvatier · 2011-05-06T15:00:29.089Z · LW(p) · GW(p)

I think even this would be pretty useful; it basically means 'we don't know anything of use; the average person should ignore this field for at least 10 years'.

comment by lukeprog · 2011-05-05T16:58:36.024Z · LW(p) · GW(p)

PDF.

comment by XFrequentist · 2011-05-05T17:32:34.356Z · LW(p) · GW(p)

My comment:

Eric Drexler made what sounds to me like a very similar proposal, and something like this is already done by a few groups, unless I'm missing some distinguishing feature.

I'd be very interested in seeing what this particular group's conclusions were, as well as which methods they would choose to approach these questions. It does seem a little tangential to the SIAI's stated mission, though.

Replies from: ata
comment by ata · 2011-05-05T19:42:26.559Z · LW(p) · GW(p)

unless I'm missing some distinguishing feature.

I think (at least part of) the idea is to have young, skilled rationalists work with experienced scientists, in the expectation that their combined powers will be more effective at making sense of some topic than either group working separately.

comment by Kutta · 2011-05-06T06:41:04.419Z · LW(p) · GW(p)

It seems odd to me that the answers on SIAI's behalf are so short. Is there any particular reason for this? Most possible lines of explanation were left unexplored, and it felt a bit like an interrogation.

comment by Vladimir_Golovin · 2011-05-06T06:41:58.116Z · LW(p) · GW(p)

Michael Vassar is working on an idea he calls the "Persistent Problems Group" or PPG. The idea is to assemble a blue-ribbon panel of recognizable experts to make sense of the academic literature on very applicable, popular, but poorly understood topics such as diet/nutrition.

I like this idea a lot, and I think it deserves at least a top-level discussion post.

Michael, could you post a list of topics the PPG is planning to address, and perhaps an example of the final product (e.g. a summary, tl;dr, mini-paper, presentation, or whatever the planned output format is)?

comment by DanielLC · 2011-05-05T23:48:45.685Z · LW(p) · GW(p)

I can't seem to open the doc or the pdf.

EDIT: It's working now.