Long Term Future Fund applications open until June 28th

post by habryka (habryka4) · 2019-06-10T20:39:58.183Z · LW · GW · 9 comments

Contents

  Apply to the Long Term Future Fund
  What kind of applications can we fund?

The Long Term Future Fund just reopened its applications. You can apply here:

Apply to the Long Term Future Fund

From now on we will have rolling applications, with roughly 3-4 months between response windows. The application window for the coming round will end on the 28th of June 2019. Any application received after that will receive a response around four months later, during the next evaluation period (unless it indicates that it is urgent, though we are less likely to fund out-of-cycle applications).

We continue to be particularly interested in small teams and individuals who are trying to get projects off the ground, or who need less money than existing grant-making institutions are likely to give out (i.e. more than $10k but less than ~$100k, since we can’t give grants below $10k). Here are some concrete examples:

You may also find it valuable to read the writeups of our past grant decisions, to help you decide whether your project is a good fit:

Apply Here

What kind of applications can we fund?

After the last round, CEA clarified what kinds of grants we are likely able to make, which includes the vast majority of applications we have received in past rounds. In general you should err on the side of applying, since I think it is very likely we will be able to make something work. However, because of organizational overhead we are more likely to fund grants to registered charities and less likely to fund projects that require complicated arrangements to comply with charity law.

For grants to individuals, we can definitely fund the following:

We will likely not be able to make the following types of grants:

If you have any questions about the application process or other questions related to the funds, feel free to submit them in the comments. You can also contact me directly at ealongtermfuture@gmail.com.

9 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2019-06-11T21:27:22.778Z · LW(p) · GW(p)

Have you thought about what the currently most neglected areas of x-risk are, and how to encourage more activities in those areas specifically? Some neglected areas that I can see are:

  1. metaphilosophy in relation to AI safety [LW · GW]
  2. economics of AI risk
  3. human-AI safety problems
  4. better coordination / exchange of ideas between different groups working on AI risk (see this question; I also have a draft post about this)

Maybe we do need some sort of management layer in x-risk, where there are people who specialize in looking at the big picture and saying "hey, here's an opportunity that seems to be neglected, how can we recruit more people to work on it?", instead of the current situation where we just wait for people to notice such opportunities on their own (which might not be where their comparative advantage lies) and then apply for funding. Maybe this management layer is something that LTFF could help fund, or organize, or grow into (since you're already thinking about similar issues while making grant decisions)?

Second question is, do you do post-evaluations of your past grants, to see how successful they were?

(Edit: Added links and reformatted in response to comments.)

Replies from: habryka4, calebo, Chris_Leong, Raemon
comment by habryka (habryka4) · 2019-06-12T00:53:57.258Z · LW(p) · GW(p)

Yeah, I have a bunch of thoughts on that. I think I am hesitant about a management layer for a variety of reasons, including viewpoint diversity, the corrupting effects of power, and people not doing super good work if they are told what to do vs. figuring out what to do themselves.

My current perspective on this is that I want to solicit what projects are missing from the best people in the field, and then do public writeups for the LTF-Fund where I summarize that and also add my own perspective. Trying to improve the current situation on this axis is one of the big reasons why I am investing so much time on writing up things for the LTF-Fund.

Re. second question: I expect I will do at least some post-evaluation, but probably nothing super formal, mostly because of time constraints. I wrote some more things in response to the same question here [EA(p) · GW(p)].

Replies from: Chris_Leong
comment by Chris_Leong · 2019-06-12T11:46:17.674Z · LW(p) · GW(p)

Perhaps it'd be useful if there were a group that took more of a dialectical approach, such as in a philosophy class? For example, it could collect different perspectives on what needs to happen for AI to go well, and try to help people understand the assumptions under which the project they are considering would be valuable.

comment by calebo · 2019-06-11T22:31:34.316Z · LW(p) · GW(p)

Can you say more about what you mean by 1. metaphilosophy in relation to AI safety? Thanks.

comment by Chris_Leong · 2019-06-12T11:51:29.900Z · LW(p) · GW(p)

How strongly do you think improving human meta-philosophy would improve computational meta-philosophy?

comment by Raemon · 2019-06-11T22:53:51.031Z · LW(p) · GW(p)

Minor – I found the formatting of that comment slightly hard to read. Would have preferred more paragraphs, and possibly breaking the numbered items into separate lines.

comment by calebo · 2019-06-11T18:53:12.231Z · LW(p) · GW(p)

Have there been explicit requests for web apps that may solve an operations bottleneck at x-risk organisations? Pointers towards potential projects would be appreciated.

Lists of operations problems at x-risk orgs would also be useful.

Replies from: habryka4, ozziegooen
comment by habryka (habryka4) · 2019-06-11T19:30:20.192Z · LW(p) · GW(p)

I am actually not a huge fan of the "operations bottleneck" framing, and so don't really have a great response to that. Maybe I can write something longer on this at some point, but the very short summary is that I've never seen the term "operations" used in any consistent way. Instead I've seen it refer to a very wide range of barely-overlapping skillsets, often involving very high-skill tasks for which people hope to find someone who is willing to work with very little autonomy and for comparatively little compensation.

I think many orgs have very concrete needs for specific skillsets and for good people to fill them, but I don't think there is something like a general and uniform "operations skillset" missing at EA orgs, which makes building infrastructure for this a lot harder.