Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial

post by Paul Crowley (ciphergoth) · 2015-01-15T16:33:48.640Z · LW · GW · Legacy · 52 comments

We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity. 

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI researchers have signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk's donation aims to support precisely this type of research: "Here are all these leading AI researchers saying that AI safety is important", says Elon Musk. "I agree with them, so I'm today committing $10M to support research aimed at keeping AI beneficial for humanity."

[...] The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI researchers Stuart Russell and Francesca Rossi. [...]

The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy (a detailed list of examples can be found here [PDF]). "Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere", says FLI co-founder Viktoriya Krakovna.

[...] Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories. 

Elon Musk donates $10M to keep AI beneficial, Future of Life Institute, Thursday January 15, 2015

52 comments

Comments sorted by top scores.

comment by RyanCarey · 2015-01-15T20:59:49.997Z · LW(p) · GW(p)

Excellent news. Considered together with the announcement of AI scientists endorsing a statement in favour of researching how to make AI beneficial, this is the best week for AI safety that I can remember.

Taken together with the publication of Superintelligence, the founding of FLI and CSER, and the transition of SI into the research organisation MIRI, it's becoming clearer that the last few years have started to usher in a new chapter in AI safety.

I know that machine learning capabilities are also increasing, but let's celebrate successes like these!

Replies from: Kaj_Sotala, ciphergoth, James_Miller
comment by Kaj_Sotala · 2015-01-16T18:41:24.372Z · LW(p) · GW(p)

Now we can say that we were into AI risk before it was cool.

comment by Paul Crowley (ciphergoth) · 2015-01-15T21:23:50.561Z · LW(p) · GW(p)

If you'd asked me two years ago I would have put today's situation in the most optimistic 10% of outcomes. It's nice to be wrong in that direction :)

Replies from: Benito
comment by Ben Pace (Benito) · 2015-01-16T06:10:52.029Z · LW(p) · GW(p)

Damn. Good point. Woo!

comment by James_Miller · 2015-01-16T07:00:00.466Z · LW(p) · GW(p)

Is it excellent news? Ignoring the good that will come from the money, shouldn't the fact that Musk is donating the funds increase our estimate that AI is indeed an existential threat? Imagine you have a condition that a very few people think will probably kill you, but most think is harmless. Then a really smart doctor examines you, says you should be worried, and pays for part of your treatment. Although this doctor has helped you, he has also lowered your estimate of how long you are going to live.

Replies from: JoshuaFox, ciphergoth, None
comment by JoshuaFox · 2015-01-16T08:14:23.947Z · LW(p) · GW(p)

Musk's position on AI risk is useful because he is contributing his social status and money to the cause.

However, other than being smart, he has no special qualifications in the subject -- he got his ideas from other people.

So, his opinion should not update our beliefs very much.

Replies from: gjm, SanguineEmpiricist, James_Miller
comment by gjm · 2015-01-16T11:19:03.284Z · LW(p) · GW(p)

Should not update our beliefs much. Musk is a smart guy; he has access to roughly the same information as we do, and his interpretation of that information is that the danger is enough to justify him in spending millions on it. But not, e.g., enough for him to drop everything else and dedicate his life to trying to solve it.

I think most of us should adjust our beliefs about the danger either up or down a little in response to that.

comment by SanguineEmpiricist · 2015-01-17T04:52:27.023Z · LW(p) · GW(p)

Disagree. Meet a lot of the Less Wrong-style people in real life and a totally different respectable elite emerges than what you see on the forums, and some people collapse. Musk is far more trustworthy. Less Wrong people overestimate themselves.

Replies from: Brillyant
comment by Brillyant · 2015-01-17T05:21:18.769Z · LW(p) · GW(p)

Will you elaborate?

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2015-01-17T06:05:26.350Z · LW(p) · GW(p)

Uh. I don't know; you see many more dimensions that cause you to harshly devalue a significant number of individuals, while finding you missed out on many good people. Less Wrong people are incredibly hit or miss, and many are "effective narcissists" who have highly acute issues that they use their high verbal intelligence to argue against.

There is also a tendency towards speaking in extreme declarative statements and using meta-navigation in conversations as a crutch for a lack of fundamental social skills. Furthermore, I have met many quasi-famous LW people who are unethical in a straightforward fashion.

A large chunk of the Less Wrong people you meet, including named individuals, turn out to be not so great, or great in ways other than intelligence that you can appreciate them for. The great people you do meet, however, significantly make up for and surpass the losses.

When people talk about "smart LW people" they often judge via forum posts or something, when that turns out to be only a moderately useful metric. If you ever meet the extended community I'm sure you will agree. It's hard for me to explain.

tl;dr Musk is just more trustworthy and competent overall unless you are restricting yourself to a strict subset of Less Wrong people. Also, LW people tend to overestimate how advanced they are compared to other epistemic blocs that are as elite, or more elite.

http://lesswrong.com/user/pengvado/ <---- is someone I would trust. Not every other LW regular.

Replies from: John_Maxwell_IV, FourFire, None
comment by John_Maxwell (John_Maxwell_IV) · 2015-01-18T05:45:54.553Z · LW(p) · GW(p)

The halo effect is when your brain tricks you into collapsing all of a person's varied attributes and abilities into a single dimension of how much you respect them. Dan Quayle's success in politics provides limited evidence that he's a good speller. Satoshi Nakamoto's high status as the inventor of Bitcoin provides limited evidence that he is good looking. Justin Bieber's success as a pop star provides limited evidence that he's good at math. Etc.

  • Elon Musk is famous for being extremely accomplished in the hi-tech world. This provides strong evidence that Musk is "competent". "Trustworthy" I'm not as sure about.

  • Less Wrong users can be highly rational and make accurate predictions worth listening to while lacking "fundamental social skills".

  • An individual Less Wronger who has lots of great ideas and writes well online might be too socially anxious to be a good conversationalist.

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2015-01-18T22:18:49.784Z · LW(p) · GW(p)

I was being very generous in my post. Less Wrong has many people who have megalomaniac tendencies. This would be almost impossible to argue against. I gave a wide margin and said there are many great people, but to pretend that there aren't illegitimate people who hide and are many in number is something else entirely.

Elon Musk is certainly trustworthy. You can calculate trustworthiness via the amount of accumulated capital, because of the effect trust has on such serial accumulation.

You have referenced relatively elementary mistakes that do not apply in this situation. Your examples are extremely off base.

  • Dan Quayle as a good speller is irrelevant and unbelievable; that is not even in the same ballpark as what I was saying.

  • Satoshi Nakamoto being good looking because of his invention is ridiculous.

  • The Justin Bieber example is worse.

Social approximation is far more robust than people's assumed identities and online personas. I am not foolishly judging people randomly. Most of the "effective altruists" you meet are complete pushovers, and if they wish to justify their megalomaniac tendencies by constantly going on and on about how effective they are, they should be able to put their foot down, stop bad people, and set boundaries, or cut the effective act until they are able to.

They use their "effective altruism" to make up for the huge ethical opportunity costs of what they DO NOT do. They then engage in extremely obscurantist arguments as cover.

As an example, see the lack of ethics that many people complain about in mathematics, e.g. Grothendieck or Perelman.

Mathematicians trend towards passivity and probably are "good people", but they do not stop their peers from engaging in unethical behavior, hence the sordid state of mathematics. Stopping bad people is primary; doing good things is secondary. Effective altruism is incomplete until its adherents admit that the point is not doing good things but stopping bad things, and you need a robust personality structure to do so.

comment by FourFire · 2015-01-18T01:19:22.486Z · LW(p) · GW(p)

Your comment is enlightening, thanks for sharing your thoughts.

comment by [deleted] · 2015-01-18T22:23:44.493Z · LW(p) · GW(p)

Test

comment by James_Miller · 2015-01-16T15:08:16.470Z · LW(p) · GW(p)

he has no special qualifications in the subject

What about determining how much money investors will spend attempting to develop an AI, how difficult it would be to coordinate the activities of programmers to create an AI, or how much to trust the people who claim that unfriendly AI is a threat?

comment by Paul Crowley (ciphergoth) · 2015-01-16T07:45:41.681Z · LW(p) · GW(p)

I was already pretty convinced it was a problem, but I was very pessimistic about the chances of anyone taking it seriously, so the effect on the latter greatly outweighs the effect on the former for me.

comment by [deleted] · 2015-01-17T02:00:26.930Z · LW(p) · GW(p)

Map, territory.

Replies from: James_Miller
comment by James_Miller · 2015-01-17T02:14:26.746Z · LW(p) · GW(p)

Sorry, General, the map you have been using is wrong; the correct one shows that the enemy is about to destroy us. This would be horrible news.

Replies from: None
comment by [deleted] · 2015-01-17T07:24:38.900Z · LW(p) · GW(p)

Would it be better to remain ignorant? It's a false choice if you think the comparison is between being told the enemy is about to destroy us vs the enemy being where we thought they were. The enemy is about to destroy us, whether we know about it or not. The real alternative is remaining ignorant until the very last moment. It is better to be told the truth, no matter how much you hope for reality to be otherwise.

Replies from: James_Miller
comment by James_Miller · 2015-01-17T16:03:00.828Z · LW(p) · GW(p)

Would it be better to remain ignorant?

No, and I think Musk is doing a great thing, but the fact that he thinks it needs to be done is not "excellent news". I think we are talking past each other.

comment by JoshuaFox · 2015-01-16T08:17:47.221Z · LW(p) · GW(p)

I think that this is almost as much money as has gone into AI existential risk research to all organizations ever.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2015-01-16T10:49:46.037Z · LW(p) · GW(p)

Yep. Check out the MIRI top donors list to put the amount in perspective.

The survey indicates that LW has nontrivial experience with academia: 7% of LW has a PhD and 9.9% do academic computer science. I wonder if it'd be useful to create an "awarding effective grants" repository type thread on LW, to pool thoughts on how grant money can be promoted and awarded to effectively achieve research goals. For example, my understanding is that there is a skill called "grantwriting" that is not the same as research ability that makes it easier to be awarded grants; I assume one would want to control for grantwriting ability if one wanted to hand out grants with maximum effectiveness. I don't have much practical experience with academia though... maybe someone who does could frame the problem better and go ahead and create the thread? (Or alternatively tell me why this thread is a bad idea. For example, maybe grantwriting skill consists mostly of knowing what the institutions that typically hand out grants like to see, and FLI is an atypical institution.)

An example of the kind of question we could discuss in such a thread: would it be a good idea for grant proposals to be posted for public commentary on FLI's website, to help them better evaluate grants and spur idea sharing on AI risk reduction in general?

Edit: Here's the thread I created.

comment by JoshuaFox · 2015-01-16T08:17:01.087Z · LW(p) · GW(p)

Do we know why he chose to donate in this way: donating to FLI (rather than FHI, MIRI, CSER, some university, or a new organization), and setting up a grant fund (rather than directly to researchers or other grantees)?

Replies from: Sean_o_h, John_Maxwell_IV
comment by Sean_o_h · 2015-01-16T10:44:10.634Z · LW(p) · GW(p)

An FLI person would be best placed to answer. However, I believe the proposal came from Max Tegmark and/or his team, and I fully support it as an excellent way of making progress on AI safety.

(i) All of the above organisations are now in a position to develop specific relevant research plans and apply to get them funded - rather than the money going to one organisation over another. (ii) Given the number of "non-risk" AI researchers at the conference, and many more signing the letter, this is a wonderful opportunity to follow up on that by encouraging them to get involved with safety research and apply. This seems like something that really needs to happen at this stage.

There will be a lot more excellent projects submitted for this than the funding will cover, and this will be a great way to demonstrate that there are a lot of tractable problems and immediately undertakable work to be done in this area - this should hopefully both attract more AI researchers to the field and additional funders who see how timely and worthy of funding this work is.

Consider it seed funding for the whole field of AI safety!

Sean (CSER)

Replies from: Vika
comment by Vika · 2015-01-16T18:20:52.765Z · LW(p) · GW(p)

Seconded (as an FLI person)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2015-01-17T08:40:35.820Z · LW(p) · GW(p)

Vika, thank you and all at FLI so much for all you've done recently. Three amazing announcements from FLI on each others' heels, each a gigantic contribution to increasing the chances that we'll all see a better future. Really extraordinary work.

Replies from: Vika
comment by Vika · 2015-01-17T20:33:51.167Z · LW(p) · GW(p)

Thanks Paul! We are super excited about how everything is working out (except the alarmist media coverage full of Terminators, but that was likely unavoidable).

comment by John_Maxwell (John_Maxwell_IV) · 2015-01-16T10:38:10.191Z · LW(p) · GW(p)

My guesses: he chose to donate to FLI because their star-studded advisory board makes them a good public face of the AI safety movement. Yes, they are a relatively young organization, but it looks like they did a good job putting the research priorities letter together (I'm counting 3685 signatures, which is quite impressive... does anyone know how they promoted it?) Also, since they will only be distributing grants, not spending the money themselves, organizational track record is a bit less important. (And they may rely heavily on folks from MIRI/FHI/etc. to figure out how to award the money anyway.) The money will be distributed as grants because grant money is the main thing that motivates researchers, and Musk wants to change the priorities of the AI research community in general, not just add a few new AI safety researchers on the margin. And holding a competition for grants means you can gather more proposals from a wider variety of people. (In particular, people who currently hold prestigious academic jobs and don't want to leave them for a fledgling new institute.)

Replies from: Vika
comment by Vika · 2015-01-16T18:22:06.692Z · LW(p) · GW(p)

Most of the signatures came in after Elon Musk tweeted about the open letter.

comment by BenLowell · 2015-01-16T03:25:18.976Z · LW(p) · GW(p)

This is awesome!!!

Replies from: folkTheory
comment by folkTheory · 2015-01-16T03:25:41.378Z · LW(p) · GW(p)

Upvoted for phaticness

comment by [deleted] · 2015-01-18T03:32:25.255Z · LW(p) · GW(p)

Is FLI using this money to fund research proposals? Where would one send such a proposal for consideration?

Replies from: Sean_o_h
comment by Sean_o_h · 2015-01-18T10:06:19.185Z · LW(p) · GW(p)

Yes. The link with guidelines and the grant portal should be on the FLI website within the coming week or so.

comment by gjm · 2015-01-15T17:25:53.677Z · LW(p) · GW(p)

Interesting. Will MIRI be applying for grants?

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2015-01-15T18:32:54.622Z · LW(p) · GW(p)

Using this picture http://futureoflife.org/images/conference150104.jpg as evidence, I imagine they will (and should).

Replies from: drethelin
comment by drethelin · 2015-01-15T22:28:14.771Z · LW(p) · GW(p)

wow everyone is so squinty

Replies from: lukeprog
comment by lukeprog · 2015-01-15T22:40:36.806Z · LW(p) · GW(p)

It was so bright out! The photo has my eyes completely closed, unfortunately. :)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2015-01-16T07:51:50.312Z · LW(p) · GW(p)

Next time take lots of pictures and release a composite :)

comment by JoshuaZ · 2015-01-15T23:26:00.187Z · LW(p) · GW(p)

This is good news. In general, since all forms of existential risk seem underfunded as a whole, funding more to any one of them is a good thing. But a donation of this size for AI specifically makes me now start to wonder if people should identify other existential risks which are now more underfunded. In general, it takes a very large amount of money to change what has the highest marginal return, but this is a pretty large donation.

Replies from: Sean_o_h, CarlShulman
comment by Sean_o_h · 2015-01-16T11:13:59.010Z · LW(p) · GW(p)

This will depend on how many other funders are "swayed" towards the area by this funding and the research that starts coming out of it. This is a great bit of progress, but alone is nowhere near the amount needed to make optimal progress on AI. It's important people don't get the impression that this funding has "solved" the AI problem (I know you're not saying this yourself).

Consider that Xrisk research in e.g. biology draws usefully on technical and domain-specific work in biosafety and biosecurity being done more widely. Until now, AI safety research hasn't had that body of work to draw on in the same way, and has instead focused on fundamental issues around the development of general AI, as well as outlining the challenges that will be faced. Given that much of this funding will go towards technical work by AI researchers, this will hopefully get this side of things going in a big way, and help build a body of support and involvement from the non-risk AI/CS community, which is essential at this moment in time.

But there's a tremendous amount of work that will need to be done - and funded - in the technical, fundamental, and broader (policy, etc.) areas. Even if FHI/CSER are successful in applying, the funds likely to be allocated from this pot to the work we're doing are not going to be near what we would need for our respective AI research programmes (I can't speak for MIRI, but I presume this to be the case also). But it will certainly help!

comment by CarlShulman · 2015-01-17T00:11:10.895Z · LW(p) · GW(p)

GiveWell is on the case, and has said it is looking at bio threats (as well as nukes, solar storms, interruptions of agriculture). See their blog post on global catastrophic risks potential focus areas.

The open letter is an indication that GiveWell should take AI risk more seriously, while the Musk donation is an indication that near-term room for more funding will be lower. That could go either way.

On the room for more funding question, it's worth noting that GiveWell and Good Ventures are now moving tens of millions of dollars per year, and have been talking about moving quite a bit more than Musk's donation to the areas the Open Philanthropy Project winds up prioritizing.

However, even if the amount of money does not exhaust the field, there may be limits on how fast it can be digested, and the efficient growth path may favor gradually increasing activity.

comment by Gondolinian · 2015-01-15T23:53:56.603Z · LW(p) · GW(p)

Elon Musk donates $10M to keep AI beneficial, Future of Life Institute, Thursday February 15, 2015

Did you mean to write January instead of February?

Replies from: Dorikka, Vika, ciphergoth
comment by Dorikka · 2015-01-16T04:19:13.634Z · LW(p) · GW(p)

Damn those time travelers, always forgetting the current date. >.>

Replies from: None
comment by [deleted] · 2015-01-16T22:47:11.795Z · LW(p) · GW(p)

Elon Musk donates $10M and it only takes a month from that point to invent an AI capable of time-travel. Truly, money makes the world go round.

comment by Vika · 2015-01-16T18:13:10.286Z · LW(p) · GW(p)

It was a typo on the FLI website, which has now been corrected to January.

comment by Paul Crowley (ciphergoth) · 2015-01-16T07:47:08.784Z · LW(p) · GW(p)

I'm sure I cut and pasted the date from the FLI announcement, so I can only assume that mistake was present there at one point!

comment by Gondolinian · 2015-01-15T21:39:08.083Z · LW(p) · GW(p)

This seems like pretty big news. Anyone think this post should be moved to Main?

[pollid:810]

ETA: Is anyone in favor of Nightspacer's idea below?

[pollid:811]

Replies from: Nightspacer
comment by Nightspacer · 2015-01-15T23:35:17.671Z · LW(p) · GW(p)

My only problem with moving this to Main is that fewer people check Main as often (and not promoting it would be worst of all). But I could see a case that after a week it could be moved there, as it would be a big piece of news that could stay at the top for a while.

Replies from: MakoYass, TylerJay, Gondolinian
comment by mako yass (MakoYass) · 2015-01-22T07:57:52.965Z · LW(p) · GW(p)

Is the proliferation of a policy of not checking main a problem? Shouldn't we do something about it? Something like posting extremely relevant articles to main?

comment by TylerJay · 2015-01-18T22:06:58.961Z · LW(p) · GW(p)

Seconded. I don't check Main anymore. Maybe once a month.

comment by Gondolinian · 2015-01-16T00:02:37.084Z · LW(p) · GW(p)

My only problem with moving this to Main is that fewer people check Main as often (and not promoting it would be worst of all).

Ah, good point.

But I could see a case that after a week it could be moved there, as it would be a big piece of news that could stay at the top for a while.

That sounds like it could work. I'll add it to the poll.

comment by advancedatheist · 2015-01-16T00:16:37.122Z · LW(p) · GW(p)

Dale Carrico weighs in:

Futurology's Shortsighted Foresight on AI

http://www.wfs.org/blogs/dale-carrico/futurologys-shortsighted-foresight-ai

Carrico puts quite a bit of work into some of his posts on his blog, but I wonder why he bothers, given how few comments his posts receive compared with bloggers who have significant followings, like Megan McArdle, Vox Day, Roosh Valizadeh, Heartiste, or Steve Sailer.