Request for proposals for Musk/FLI grants
post by danieldewey · 2015-02-05T17:04:35.652Z
As a follow-on to the recent thread on purchasing research effectively, I thought it'd make sense to post the request for proposals for projects to be funded by Musk's $10M donation. LessWrong has been a place for discussing long-term AI safety and related research for quite some time, so I'd be happy to see some applications come from LW members.
Here's the full Request for Proposals.
If you have questions, feel free to ask them in the comments or to contact me!
Here's the email FLI has been sending around:
Initial proposals (300–1000 words) due March 1, 2015
The Future of Life Institute, based in Cambridge, MA and headed by Max Tegmark (MIT), is seeking proposals for research projects aimed at maximizing the future societal benefit of artificial intelligence while avoiding potential hazards. Projects may fall in the fields of computer science, AI, machine learning, public policy, law, ethics, economics, or education and outreach. This 2015 grants competition will award funds totaling $6M USD.
This funding call is limited to research that explicitly focuses not on the standard goal of making AI more capable, but on making AI more robust and/or beneficial; for example, research could focus on making machine learning systems more interpretable, on making high-confidence assertions about AI systems' behavior, or on ensuring that autonomous systems fail gracefully. Funding priority will be given to research aimed at keeping AI robust and beneficial even if it comes to greatly supersede current capabilities, either by explicitly focusing on issues related to advanced future AI or by focusing on near-term problems, the solutions of which are likely to be important first steps toward long-term solutions.
Please do forward this email to any colleagues and mailing lists that you think would be appropriate.
Proposals
Before applying, please read the complete RFP and list of example topics, which can be found online along with the application form:
http://futureoflife.org/grants/large/initial
As explained there, most of the funding is for $100K–$500K project grants, which will each support a small group of collaborators on a focused research project of up to three years' duration. For a list of suggested topics, see the complete RFP [1] and the Research Priorities document [2]. Initial proposals, which are intended to require merely a modest amount of preparation time, must be received on our website [1] on or before March 1, 2015.
Initial proposals should include a brief project summary, a draft budget, the principal investigator’s CV, and co-investigators’ brief biographies. After initial proposals are reviewed, some projects will advance to the next round, completing a Full Proposal by May 17, 2015. Public award recommendations will be made on or about July 1, 2015, and successful proposals will begin receiving funding in September 2015.
References and further resources
[1] Complete request for proposals and application form: http://futureoflife.org/grants/large/initial
[2] Research Priorities document: http://futureoflife.org/static/data/documents/research_priorities.pdf
[3] An open letter from AI scientists on research priorities for robust and beneficial AI: http://futureoflife.org/misc/open_letter
[4] Initial funding announcement: http://futureoflife.org/misc/AI
Questions about Project Grants: dewey@futureoflife.org
Media inquiries: tegmark@mit.edu
11 comments
comment by [deleted] · 2015-02-08T01:09:56.477Z
I certainly hope that Kaj Sotala's recent proposal is considered for funding as part of this programme. (Kaj, you around?)
http://intelligence.org/files/ConceptLearning.pdf
↑ comment by Kaj_Sotala · 2015-02-08T08:03:45.975Z
Thanks, Mark. I'm definitely thinking about applying, but my current problem is that I have too many potential proposals that I could write:
- There's the concept learning research program that you mentioned.
- I also have an interest in figuring out just what exactly human values and preferences are. This seems like a topic with plenty of low-hanging fruit, given how many fields touch upon it: it relates to concept learning, AI has work on preference learning, philosophers have long asked what value is, neuroscience has a lot to say about motivation and preference, much of the emotion research I've been reading lately is relevant, economics has its own definition of preferences and a research tradition around them, and sociology and anthropology offer intercultural comparisons of values... It seems like if someone went through all of this while keeping in mind the question "okay, so exactly what part would we want an AI to extrapolate, and how, and why?", they should be able to make considerable progress.
- I was just recently talking to some friends about ontology identification and ontological crises, and based on some preliminary discussion, it seemed to us like one could make progress on it using an approach inspired by conceptual metaphors. Briefly, a conceptual metaphor is a mapping from one domain to another, "that allows us to use the inferential structure of one conceptual domain (say, geometry) to reason about another (say, arithmetic)" (Lakoff & Núñez, p. 6). Intuitively, the physicists who discovered quantum mechanics knew that their discovery didn't make everything they knew of classical mechanics obsolete - all the observations supporting CM were still there, so there had to exist some mapping from CM concepts to QM concepts that allowed us to preserve most of what we knew of CM. That would suggest that an AI that discovered that its current understanding of the world was lacking could take its old world-model and identify the things that it valued in the new model by looking for the inferential rules of the old model that could still be mapped into the new one, then use those rules to work out which entities in the new model correspond to the valued ones in the old. One of the friends I was talking with indicated that there's research on the deep learning side that might be able to do something like this. (A toy sketch of this idea follows after this list.)
- At this rate, tomorrow I'll probably come up with a fourth thing that seems like a promising research direction.
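To make the third bullet slightly more concrete, here is a minimal toy sketch of what that structure-preserving mapping could look like. Everything in it (the cart/truck world-models, the relations, and the scoring rule) is a made-up illustrative assumption, not part of any actual proposal: the point is only to show how "keep the inferential rules that still hold" can be cast as a search for a concept mapping, with valued concepts then carried across the best mapping found.

```python
# Toy sketch: ontology mapping by preserved inferential structure.
# World-models are sets of (relation, concept_a, concept_b) facts; we search for
# an injective mapping from old concepts to new concepts that preserves as many
# old facts as possible, then carry valued concepts across that mapping.
# All concept names and relations below are illustrative assumptions.

from itertools import permutations

old_model = {
    ("part_of", "wheel", "cart"),
    ("pulls", "horse", "cart"),
    ("carries", "cart", "grain"),
}
new_model = {
    ("part_of", "tire", "truck"),
    ("pulls", "engine", "truck"),
    ("carries", "truck", "grain"),
    ("burns", "engine", "fuel"),
}

old_concepts = sorted({c for (_, a, b) in old_model for c in (a, b)})
new_concepts = sorted({c for (_, a, b) in new_model for c in (a, b)})

def preserved_facts(mapping):
    """Count old facts whose images under the mapping also hold in the new model."""
    return sum(
        (rel, mapping[a], mapping[b]) in new_model
        for (rel, a, b) in old_model
    )

# Brute-force search over injective mappings (only feasible for toy ontologies).
best_mapping, best_score = None, -1
for image in permutations(new_concepts, len(old_concepts)):
    mapping = dict(zip(old_concepts, image))
    score = preserved_facts(mapping)
    if score > best_score:
        best_mapping, best_score = mapping, score

valued_old = {"cart", "grain"}  # things the agent valued under the old ontology
valued_new = {best_mapping[c] for c in valued_old}

print("best mapping:", best_mapping)  # e.g. cart -> truck, horse -> engine, wheel -> tire
print("valued concepts in new ontology:", valued_new)
```

A real system would obviously need soft or partial matches and a far better search than brute force, but the toy version at least makes the "preserve the inferential rules, then re-identify what you valued" move concrete.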
I guess I could just write proposals for each of these and let FLI decide which one they find the most promising - their FAQ says that one can submit several proposals but they'll only invite one Full Proposal from a single PI.
↑ comment by danieldewey · 2015-02-09T18:57:11.444Z
I would encourage you to apply; these ideas seem reasonable!
As for choosing, I would advise you to pick the idea for which you can make the strongest case that it is Topical and Impactful, as defined in the RFP.
↑ comment by [deleted] · 2015-02-08T17:02:58.059Z
The concept learning proposal is basically finished, right? Submit it now. (OK, I'm biased -- I really like this proposal.) Start writing up the others in the order you think is most important / most likely to be funded and submit those as well.
As some feedback, I don't think "what are human values?" is likely to get much funding from FLI, although obviously a representative of that organization should correct me if I'm wrong. It seems they have a preference for projects more directly connected to code.
Regarding your third idea, I'm pretty sure there is already some published work in this area. I certainly recall some discussion in the OpenCog community about the nature of creativity and concept formation via conceptual metaphors. I'm pretty sure that was in response to some published academic papers, but I'll have to dig those up...
↑ comment by Kaj_Sotala · 2015-02-10T09:01:03.128Z
> The concept learning proposal is basically finished, right? Submit it now.
Good point, that makes sense.
I guess "can't choose the right one" wasn't actually my true rejection, rather I'm hesitating because I'm not sure whether this field is actually where my comparative advantage lies, and whether this is the kind of thing that I'd want to be doing. I do fine when it comes to vague philosophizing at the level of my original concept learning paper, but I'm much less certain of my ability to do actual rigorous technical work. Meanwhile I seem to be getting promising feedback of doing well on some other (non-technical) high-impact projects I've been pursuing.
Though I guess I could apply for the first stage of the grants anyway and decide later, since it doesn't commit me to anything yet...
↑ comment by [deleted] · 2015-02-11T18:54:33.268Z
> I'm not sure whether this field is actually where my comparative advantage lies.
What else are you considering?
I would note that's only half the equation, though. You should also weight by how unique that contribution would be. We simply don't have enough people doing AGI work like concept formation. Not to place too much pressure, but if you don't work on this then it's not clear who would. It's an underfunded area academically (hence these grants are a great opportunity), and too long-term to be part of industrial research efforts...
↑ comment by Kaj_Sotala · 2015-03-01T18:41:54.150Z
> What else are you considering?
Rationality training and community-building, basically.
But I just submitted my FLI grant application for the concept learning project anyway. :-)
↑ comment by [deleted] · 2015-03-09T22:47:06.823Z
Rationality training by itself is worse than useless. Apply things in practice or you risk building free-floating castles detached from any practical application. A basic rule of thumb: if you spend more than 10-15% of your time on meta improvements, you are probably accomplishing less in your life than you could be. That means 85% to 90% of your time should be spent doing actual work.
As for community building, if that floats your boat, sure, why not. I'm hoping you choose the FLI grant instead, however. :)
↑ comment by Kaj_Sotala · 2015-04-04T16:47:23.501Z
Oh yeah, forgot to say that my initial grant application on concept learning was accepted to the second round of proposals.
Working on the full-length proposal now.
↑ comment by Kaj_Sotala · 2015-03-10T16:52:14.322Z
> Rationality training by itself is worse than useless. Apply things in practice or you risk building free-floating castles detached from any practical application. A basic rule of thumb: if you spend more than 10-15% of your time on meta improvements, you are probably accomplishing less in your life than you could be. That means 85% to 90% of your time should be spent doing actual work.
Yeah, CFAR-style rationality training is the goal: training carried out by actually troubleshooting and solving one's real-life problems, while also building a community of like-minded people who remind you to actually think about your problems instead of doing whatever default thing comes to mind.