The Future of Humanity Institute could make use of your money

post by danieldewey · 2014-09-26T22:53:37.931Z · LW · GW · Legacy · 25 comments


Many people have an incorrect view of the Future of Humanity Institute's funding situation, so this is a brief note to correct that; think of it as a spiritual successor to this post. As John Maxwell puts it, FHI is "one of the three organizations co-sponsoring LW [and] a group within the University of Oxford's philosophy department that tackles important, large-scale problems for humanity like how to go about reducing existential risk." (If you're not familiar with our work, this article is a nice, readable introduction, and our director, Nick Bostrom, wrote Superintelligence.) Though we are a research institute in an ancient and venerable institution, this does not guarantee funding or long-term stability.

Academic research is generally funded through grants, but because the FHI is researching important but unusual problems, and because this research is multi-disciplinary, we've found it difficult to attract funding from the usual grant bodies. As a result, we've had to prioritise some projects that are not ideal for existential risk reduction, but that allow us to attract funding from interested institutions.

With more assets, we could both liberate our long-term researchers to do more "pure Xrisk" research, and hire or commission new experts when needed to look into particular issues (such as synthetic biology, the future of politics, and the likelihood of recovery after civilizational collapse).

We are not in any immediate funding crunch, nor are we arguing that the FHI would be a better donation target than MIRI, CSER, or the FLI. But any donations would be both gratefully received and put to effective use. If you'd like to, you can donate to FHI here. Thank you!

25 comments

Comments sorted by top scores.

comment by lukeprog · 2014-09-23T19:10:27.636Z · LW(p) · GW(p)

In case this helps and isn't obvious to everyone, I'll briefly mention that I'm the Executive Director of MIRI and I agree with what Daniel wrote above.

Also, the linked Ross Andersen piece on FHI is really good and people should read it.

comment by Stabilizer · 2014-09-23T20:30:02.835Z · LW(p) · GW(p)

$30 donated. It may become quasi-regular, monthly.

Thanks for letting us know. I wanted to donate to x-risk, but I didn't really want to give to MIRI (even though I like their goals and the people) because I worry that MIRI's approach is too narrow. FHI's broader approach, I feel, is more appropriate given our current ignorance about the vast possible varieties of existential threats.

Replies from: Stuart_Armstrong, danieldewey
comment by Stuart_Armstrong · 2014-09-24T09:47:36.660Z · LW(p) · GW(p)

Thanks!

comment by danieldewey · 2014-09-24T23:09:33.922Z · LW(p) · GW(p)

Yes, thank you!

comment by Evan_Gaensbauer · 2014-09-23T23:56:38.185Z · LW(p) · GW(p)

A heuristic I've previously encountered being thrown around about whether to donate to MIRI or to FHI is to fund whichever one has more room for more funding, or whichever one is experiencing more of a funding crunch at a given time. As Less Wrong is a hub for an unusually large number of donors to each of these organizations, it might be nice if there were a (semi-)annual discussion on these matters with representatives from the various organizations. How feasible would this be?

Replies from: danieldewey, Sean_o_h
comment by danieldewey · 2014-09-24T23:14:30.872Z · LW(p) · GW(p)

This is worth thinking about in the future, thanks. I think right now, it's good to take advantage of MIRI's matched giving opportunities when they arise, and I'd expect either organization to announce if they were under a particular crunch or aiming to hit a particular target.

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2014-09-26T08:17:58.452Z · LW(p) · GW(p)

.impact is a volunteer task force of effective altruists who take on projects not tied to any one organization. .impact deals in particular with implementing open-source software resources that are useful to effective altruists. Well, that's what it's trying to specialize in; the decentralized coordination of remote volunteers is very difficult.

Anyway, on the effective altruism forum, I was involved in a discussion about building an interactive visual map that updates with the status of projects and funding for effective altruist organizations. Anybody trying to reduce existential risk would fall under effective altruism, so ostensibly they'd be included on such a map, too. This would solve most of the problem I posed above.

I'll update Less Wrong in the future if I get wind of any progress on such a project. Anyone: send me a private message if you want more information.

comment by Sean_o_h · 2014-09-29T20:12:40.521Z · LW(p) · GW(p)

I agree that this would be a good idea, and agree with the points below. Some discussion of this took place in this thread last Christmas: http://lesswrong.com/r/discussion/lw/je9/donating_to_miri_vs_fhi_vs_cea_vs_cfar/

On that thread I provided information about FHI's room for more funding (accurate as of start of 2014) plus the rationale for FHI's other, less Xrisk/Future of Humanity-specific projects (externally funded). I'd be happy to do the same at the end of this year, but instead representing CSER's financial situation and room for more funding.

comment by AshwinV · 2014-09-27T14:34:32.250Z · LW(p) · GW(p)

I have a suspicion that one of the factors holding back donations from big names (think Peter Thiel level) is the absence of visibility. Partly this is because it isn't as "cool" as the Bill and Melinda Gates Foundation (that is, there isn't already an existing public opinion that issues such as x-risk are charity-worthy, as opposed to something like donating for underprivileged children to take part in a sporting event), and partly because it isn't as "visible" (to continue with the donation-to-children example, a lot of publicity can be obtained by putting up photos of apparently malnourished children sitting together in a line, full of smiles for the camera).

The distinction I have made between the two is artificial, but I thought it was the best way to illustrate that the disadvantages suffered by FHI, MIRI, and that cluster of institutes are happening on two different levels.

However, the second point about visibility is actually a bit concerning. The MIRI has been criticized for not doing much except publishing papers. That doesn't look good, and it is hard for a layman to feel that giving away a portion of his salary just to see a new set of math formulas (looking much like the same formulas he saw last month) is a good use of his money, especially if he doesn't see it directly helping anyone out.

I understand that, by the nature of the research being undertaken, this may be all that we can hope for, but if there is a better way that MIRI can signal its accountability, then I think that it should be done. Pronto.

Also, could someone who is so inclined take the math/code being produced and dumb it down enough that an average LW-er such as yours truly could make more sense of it?

Replies from: Kaj_Sotala, ChristianKl, EHeller, V_V, AshwinV
comment by Kaj_Sotala · 2014-09-29T05:14:52.084Z · LW(p) · GW(p)

The MIRI has been criticized for not doing much except publishing papers.

Really? Before, MIRI was being constantly criticized for not publishing any papers.

Replies from: AshwinV
comment by AshwinV · 2014-09-29T15:34:04.487Z · LW(p) · GW(p)

I see.

I take it that this is a damned if you do and damned if you don't kind of situation.

I'm not able to find the source right now (that criticized MIRI on said grounds), but I'm pretty certain it wasn't a very authentic/respectable source to begin with. As far as I can recall, it was Stephen Bond, the same guy who wrote the article on "the cult of Bayes' theorem"; there was a link to his page from Yudkowsky's Wikipedia page, which is not there anymore.

I simply brought up this example to show how easy it is to tarnish an image, something I'm sure you're well aware of. Nonetheless, my point still stands. IMAGE MATTERS.

It doesn't make a difference that the good (and ingenious) folk at MIRI are doing some of the most important work there is, work that may at any given moment solve a large number of headaches for the human race. There are others out there making that same claim. And because some of those others are politicians wearing fancy suits, people will listen to them. (Don't even get me started on the saints and priests who successfully manage to make decent, hard-working folk part with large portions of their lifetime's savings, but those cases are a little beyond the scope of this particular argument.)

A real estate agent can point to a rising skyscraper as evidence of money being put to good use. A NASA-type organisation (slightly tongue in cheek, just indicating a cluster) can point to a satellite orbiting Mars. A biotech company may one day point to a fully lab-grown human with perfect glowing skin. A nanotech company can one day point to the world's smallest robot doing "the robot".

The above examples have two things in common: first, they are visible in the most literal sense of the word; second, (I believe) most people have a ready intuition by which they can see how achieving any of the above would require a large amount of cash/funding.

Software is harder to impress people with. Even harder if the software is genuinely complicated. To make matters worse, the media has flooded the imagination of newspaper readers all over the world with rags-to-riches stories of entrepreneurs who made it big and were OK with being only ramen-profitable for long years.

And yet institutions that are ostensibly purely academic and research-oriented also require funding. And I don't disagree. I've read HPMoR and I've read portions of the LW site as well. I know that this is likely for real, and that there is more than enough credibility built up by the proponents of research into these areas.

Unfortunately, I'm in the minority. And as of now, I'm a far cry from being financially sound. If MIRI/FHI have to accelerate their research and they need funding for it, then it is not a bad idea to make their progress seem more tangible, even if they can't deliver every single detail every single time.

One possible major downside of this approach of course is that it might eat into valuable time which could otherwise be spent making the real progress that these institutions were created for in the first place.

comment by ChristianKl · 2014-10-02T09:37:45.109Z · LW(p) · GW(p)

I have a suspicion that one of the factors holding back donations from big names (think Peter Thiel level), is the absence of visibility.

I don't think you can call Nick Bostrom not visible. He made Foreign Policy's Top 100 Global Thinkers list. He also wrote the book, Superintelligence, last year.

comment by EHeller · 2014-09-29T05:38:03.063Z · LW(p) · GW(p)

The MIRI has been criticized for not doing much except publishing papers.

By whom? By the traditional metric of published papers, MIRI is an exceptionally unproductive research organization: only a few low-impact peer-reviewed papers, mostly in the last few years, despite a decade of funding. It's probably fair to say that donations to the old SIAI were more likely to go toward blog posts and fanfic than toward research papers.

Replies from: AshwinV, Princess_Stargirl
comment by AshwinV · 2014-09-29T15:38:46.308Z · LW(p) · GW(p)

Yeah, I was actually trying to say that they need to do other stuff too, not cut down on publishing papers.

You might wanna weigh in on this: http://lesswrong.com/lw/l13/the_future_of_humanity_institute_could_make_use/be4o

comment by Princess_Stargirl · 2014-10-02T20:58:20.822Z · LW(p) · GW(p)

Before dismissing blog posts keep in mind the Sequences were blog posts. And they are probably much more useful and important than all but the best academic papers. If current donations happened to lead to blog posts of that caliber, the donations would be money well spent.

Replies from: EHeller
comment by EHeller · 2014-10-03T04:35:08.790Z · LW(p) · GW(p)

Before dismissing blog posts keep in mind the Sequences were blog posts. And they are probably much more useful and important than all but the best academic papers.

How are we measuring useful or important? The Sequences are entertaining, but it's not clear to me they do much to actually help with the core goals of MIRI (besides the goal of entertaining people enough to fund MIRI, perhaps).

The advantage of a high-impact academic paper is that it shapes the culture of academic research. A good idea in a well-received research paper will almost instantly lead to lots of other researchers working on the same problems. A great idea in a well-received research paper can get an entire sub-field working on the same problem.

The Sequences are more advertisements than formalized research. It's papers like the one on Löb's obstacle that get researchers interested in working on these problems.

Replies from: AshwinV
comment by AshwinV · 2014-10-03T04:57:27.258Z · LW(p) · GW(p)

The Sequences are more advertisements than formalized research. It's papers like the one on Löb's obstacle that get researchers interested in working on these problems.

I think that's up for debate.

And the sequences aren't "just advertisements".

I don't know any LW-ers in person, but I'm sure that at least some people have benefited from reading the sequences.

Can't really speak on behalf of researchers, but their motivations could literally be anything, from simply finding the work interesting to altruistic reasons or financial incentives.

Replies from: EHeller
comment by EHeller · 2014-10-03T05:11:55.149Z · LW(p) · GW(p)

I don't know any LW-ers in person, but I'm sure that at least some people have benefited from reading the sequences.

You miss my meaning. The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.

With regards to their core goal, the Sequences matter if (1) they lead to people donating to MIRI, or (2) they lead to people working on friendly AI.

I view point 1 as advertising, and I think research papers are obviously better than the sequences for point 2.

Replies from: Cyan, RobbBB
comment by Cyan · 2014-10-03T05:53:31.918Z · LW(p) · GW(p)

The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.

Kinda... more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses, but who never finds out about FAI research at all.

comment by Rob Bensinger (RobbBB) · 2014-10-05T10:02:54.838Z · LW(p) · GW(p)

A big part of the purpose of the Sequences is to kill likely mistakes and missteps from smart people trying to think about AI. 'Friendly AI' is a sufficiently difficult problem that it may be more urgent to raise the sanity waterline, filter for technical and philosophical insight, and amplify that insight (e.g., through CFAR), than to merely inform academia that AI is risky. Given people's tendencies to leap on the first solution that pops into their head, indulge in anthropomorphism and optimism, and become inoculated to arguments that don't fully persuade them on the first go, there's a case to be made for improving people's epistemic rationality, and honing the MIRI arguments more carefully, before diving into outreach.

comment by V_V · 2014-10-03T09:21:59.253Z · LW(p) · GW(p)

The MIRI has been criticized for not doing much except publishing papers.

By whom? I mean, what should MIRI do other than publishing research papers?

comment by AshwinV · 2014-09-27T14:37:09.990Z · LW(p) · GW(p)

Of course, if I did get such a version of the code, I might end up tinkering with it and inadvertently creating a paperclip maximiser.

Though if I ended up creating Quirinus Quirrell, I'm not sure if it would be a good thing or not.

P.S. This was meant as a joke.

comment by Mitchell_Porter · 2014-09-25T02:14:25.184Z · LW(p) · GW(p)

What a coincidence - I could make use of the Future of Humanity Institute's money, too.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-09-26T13:41:29.948Z · LW(p) · GW(p)

By donating it to the top altruistic cause, I assume ;-)

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2014-09-28T07:35:22.558Z · LW(p) · GW(p)

It will be the job of my new Institute for Verifiably Estimating, Guessing, and Extrapolating the Most Important Thing Ever (Subject to Availability of a Nutritious Diet, with Wholesome Ingredients in Culturally and Historically Expedient Servings) to figure out what that is.

ETA: Of course, we shall be affiliated with the University of Woolloomooloo.