An OpenAI board seat is surprisingly expensive
post by Benquo · 2017-04-19T09:05:04.032Z · LW · GW · Legacy · 16 comments
The Open Philanthropy Project recently bought a seat on the board of the billion-dollar nonprofit AI research organization OpenAI for $30 million. Some people have said that this was surprisingly cheap, because the price in dollars was such a low share of OpenAI's eventual endowment: 3%.
To the contrary, this seat on OpenAI's board is very expensive, not because the nominal price is high, but precisely because it is so low.
If OpenAI hasn’t extracted a meaningful-to-it amount of money, then it follows that it is getting something other than money out of the deal. The obvious thing it is getting is buy-in for OpenAI as an AI safety and capacity venture. In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, by taking the position of a material supporter of the project. The important thing is mutual validation, and a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.
By my count, the grant is larger than all the Open Philanthropy Project's other AI safety grants combined.
(Cross-posted at my personal blog.)
16 comments
Comments sorted by top scores.
comment by gwern · 2017-04-21T01:15:16.786Z · LW(p) · GW(p)
because the price in dollars was such a low share of OpenAI's eventual endowment: 3%.
↑ comment by ESRogs · 2020-01-21T20:07:03.731Z · LW(p) · GW(p)
For others who didn't get the reference: https://en.wikipedia.org/wiki/Jam_tomorrow
comment by tristanm · 2017-04-19T14:50:39.467Z · LW(p) · GW(p)
a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.
Why would negging them be useful?
↑ comment by Benquo · 2017-04-19T19:00:14.799Z · LW(p) · GW(p)
Otherwise OpenAI's status would be reduced towards their level, by accepting a similarly-sized grant from OpenPhil as though they were just another supplicant.
↑ comment by tristanm · 2017-04-19T20:19:24.101Z · LW(p) · GW(p)
But is this really even a "neg" to begin with? My understanding is that MIRI's approach to AI safety is substantially different, and that they are primarily doing pure mathematical research rather than software development and actual AI implementation. That would mean their overhead costs are substantially lower than OpenAI's. Additionally, OpenAI might have a shot at attracting more of the big-shot AI researchers whose market value is extremely high at the moment; to do this it would need a great deal of money to offer the appropriate financial incentives. For MIRI, by contrast, convincing mathematicians to join depends more on whether it can persuade them that working on its problems is both important and interesting, and my guess is that this is a lot cheaper, since mathematicians are in general paid quite a bit less than ML researchers on average. So a $30 million grant might be able to accomplish a lot more at OpenAI than at MIRI, at least in the short term.
↑ comment by Benquo · 2017-04-20T04:45:42.623Z · LW(p) · GW(p)
The grant writeup says that the main benefit of the grant is to buy influence, not to scale up OpenAI. I'm ready to believe OpenAI thinks it can do more with more money. I'm sure MIRI thinks it has uses for more money too (at least freeing up staff time from fundraising). If money's not especially scarce, and AI risk is so important, why not just give MIRI as much as it thinks it can use?
↑ comment by tristanm · 2017-04-20T16:23:56.366Z · LW(p) · GW(p)
Hmm. I'm reading OPP's grant write-up for MIRI from August 2016, and in that context I can see why it seems a little odd. For one thing, they say:
this research agenda has little potential to decrease potential risks from advanced AI in comparison with other research directions that we would consider supporting.
This in particular strikes me as strange, because: 1) If MIRI's approach can be summarized as "finding method(s) to ensure guaranteed safe AI and proving them rigorously", then technically speaking that approach should have nearly unlimited "potential", although I suppose it could be argued that progress would be slow compared to the speed at which practical AI improves. 2) "Other research directions" is quite vague. Can they point to where these other directions are outlined, to a summary of what has been accomplished in them, and to why they feel those directions have better potential?
My feeling is that, given that all current approaches to AI safety are fairly speculative and there is no general consensus on how the problem should specifically be approached, in order to conclude that MIRI's overall approach lacks potential, the technical advisors at OPP must have a very specific approach to AI safety they are pushing very hard to get support for, but are unwilling or unable to articulate why they prefer theirs so strongly. I also speculate that they might have unstated reasons for being skeptical of MIRI's approach.
All of what I've said above is highly speculative and is based on my current, fairly uninformed outsider view.
↑ comment by jsteinhardt · 2017-04-20T21:03:58.348Z · LW(p) · GW(p)
the technical advisors at OPP must have a very specific approach to AI safety they are pushing very hard to get support for, but are unwilling or unable to articulate why they prefer theirs so strongly.
I don't think there is consensus among technical advisors on what directions are most promising. Also, Paul has written substantially about his preferred approach (see here for instance), and I've started to do the same, although so far I've been mostly talking about obstacles rather than positive approaches. But you can see some of my writing here and here. Also my thoughts in slide form here, although those slides are aimed at ML experts.
↑ comment by tristanm · 2017-04-22T22:21:35.415Z · LW(p) · GW(p)
I haven't seen that either your approach or Paul's necessarily conflicts with MIRI's. There may be some difference of opinion on which is more likely to be feasible, but seeing as Paul works closely with MIRI researchers and they seem to have a favorable opinion of him, I would be surprised if it were really true that OpenPhil's technical advisors were that pessimistic about MIRI's prospects. If they aren't that pessimistic, then it would imply that Holden is acting somewhat against the advice of his advisors, or that he has strong priors against MIRI that were not overcome by the information he was receiving from them.
↑ comment by ChristianKl · 2017-04-20T18:12:26.119Z · LW(p) · GW(p)
I also speculate that they might have unstated reasons for being skeptical of MIRI's approach.
Holden spent a lot of effort stating reasons in http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/
↑ comment by Benquo · 2017-04-20T21:22:56.472Z · LW(p) · GW(p)
"We think MIRI is literally useless" is a decent reason not to fund MIRI at all, and is broadly consistent with Holden's early thoughts on the matter. But it's a weird reason to give MIRI $500K but OpenAI $30M. It's possible that no one has the capacity to do direct work on the long-run AI alignment problem right now. In that case, backwards-chaining to how to build the capacity seems really important.
↑ comment by Raemon · 2017-04-20T23:03:55.590Z · LW(p) · GW(p)
While I disagree with Holden that MIRI is near-useless, I think his stated reasons for giving MIRI $500k are pretty good ones, and I'd do the same if I had that money and thought MIRI was near-useless.
(Namely, that MIRI has so far had a lot of positive impact regardless of the quality of its research, in terms of general community building, and that this should be rewarded so that other orgs are incentivized to do similar things.)
↑ comment by tristanm · 2017-04-20T23:36:05.096Z · LW(p) · GW(p)
True, but 2012 is long enough ago that many of the concerns he had then may no longer be relevant. In addition, based on my understanding of MIRI's current approach and their arguments for it, I feel that many of his concerns either represent fundamental misunderstandings or are based on viewpoints that have significantly changed within MIRI since that time. For example, I have a hard time wrapping my head around this objection:
Objection 1: it seems to me that any AGI that was set to maximize a "Friendly" utility function would be extraordinarily dangerous.
This seems to be precisely the same concern expressed by MIRI, and one of the fundamental arguments their Agent Foundations approach is based on, in particular what they call the Value Specification problem. And I believe Yudkowsky has used this as a primary argument for AI safety in general for quite a while, very likely since before 2012.
There is also the "tool/agent" distinction cited as objection 2, which I think is well addressed in MIRI's publications as well as in Bostrom's Superintelligence, where it's made pretty clear that the distinction is not so clear-cut (and gets even blurrier the more intelligent the "tool AI" becomes).
Given that MIRI has had quite some time to refine their views as well as their arguments, as well as having gone through a restructuring and hiring quite a few new researchers since that time, what is the likelihood that Holden holds the same objections that were stated in the 2012 review?
comment by tukabel · 2017-04-19T18:12:13.033Z · LW(p) · GW(p)
oh boy, FacebookFilantropy buying a seat in OpenNuke
honestly, I don't know what's worse: Old Evil (govt/military/intelligence hawks) or New Evil (especially those pretending they are not evil) doing this (AI/AGI, etc.)
with Old Evil we are at least more or less sure that they will screw it up, and roughly how... but New Evil may screw it up much more royally, as they are much more effective and faster