Singularity Institute Strategic Plan 2011
post by MichaelAnissimov · 2011-08-26T23:34:13.479Z · LW · GW
Thanks to the hard work and cooperation of Singularity Institute staff and volunteers, especially Louie Helm and Luke Muehlhauser (lukeprog), we now have a Strategic Plan, which outlines the near-term goals and vision of the Institute, and concrete actions we can take to fulfill those goals.
http://singinst.org/blog/2011/08/26/singularity-institute-strategic-plan-2011/
We welcome your feedback. You can send any comments to institute@intelligence.org.
The release of this Strategic Plan is part of an overall effort to increase transparency at Singularity Institute.
21 comments
Comments sorted by top scores.
comment by Alexandros · 2011-08-27T11:59:19.959Z · LW(p) · GW(p)
Thank you so much for doing this. It makes a very big difference.
Some comments:
Strategy #1, Point 2e seems to cover things that should be in either point 3 or point 4. Also, points 3 and 4 seem to bleed into each other.
If the Rationality training is being spun off to allow Singinst to focus on FAI, why isn't the same done with the Singularity Summit? The slightly bad-faith interpretation of the lack of explanation would be that retaining the training arm has internal opposition while the Summit does not. If this is not an inference you like, this should be addressed.
The level 2 plan includes "Offer large financial prizes for solving important problems related to our core mission". I remember cousin_it mentioning that he's had very good success asking for answers in communities like MathOverflow, but that the main cost was in formalizing the problems. It seems intuitive that geeks are not strongly motivated by cash, but are very much motivated by a delicious open problem (and the status solving it brings). Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?
Thank you again for publishing a document so that this discussion can be had.
Replies from: wedrifid, JoshuaZ, jimrandomh, lukeprog, aletheilia
↑ comment by wedrifid · 2011-08-29T02:50:10.501Z · LW(p) · GW(p)
If the Rationality training is being spun off to allow Singinst to focus on FAI, why isn't the same done with the Singularity Summit? The slightly bad-faith interpretation of the lack of explanation would be that retaining the training arm has internal opposition while the Summit does not. If this is not an inference you like, this should be addressed.
Just throwing it out there: it's the SIAI, not the RIAI.
Right now one could legitimately be confused, given that Eliezer is working on rationality books and some of their most visible programs are rationality training.
↑ comment by JoshuaZ · 2011-08-29T03:14:49.449Z · LW(p) · GW(p)
This spin-off makes sense: the SIAI's goal is not improving human rationality. The SIAI's goal is to try to make sure that if a Singularity occurs, it is one that doesn't destroy humanity or change us into something completely counter to what we want.
This is not the same thing as improving human rationality. The vast majority of humans will do absolutely nothing connected to AI research. Improving their rationality is a great goal, and probably has a high pay-off, but it is not the goal of the SIAI. When people give money to the SIAI, they expect that money to go towards AI research and related issues, including the summits. Moreover, many people who are favorable to rational thinking don't necessarily see a singularity-type event as at all likely. Many even in the saner end of the internet (e.g. the atheist and skeptic movements) consider it one more fringe belief, so associating it with careful rational thinking is more likely to bring down LW-style rationality's status than to raise the status of singularity beliefs.
From my own perspective, as someone who agrees with a lot of the rationality material, considers a fast hard takeoff of AI to be unlikely, but thinks it is likely enough that someone should be paying attention to it, this seems like a good strategy.
↑ comment by jimrandomh · 2011-08-28T16:52:09.237Z · LW(p) · GW(p)
If the Rationality training is being spun off to allow Singinst to focus on FAI, why isn't the same done with the Singularity Summit? The slightly bad-faith interpretation of the lack of explanation would be that retaining the training arm has internal opposition while the Summit does not. If this is not an inference you like, this should be addressed.
Just speculation here, but the rationality training seems to have very different scalability properties from the rest of Singinst; in the best case, there could end up being a self-supporting rationality training program in every major city. That would be awesome, but it could also dominate Singinst's attention at the expense of everything else if it weren't partitioned off.
↑ comment by lukeprog · 2011-08-28T19:15:23.666Z · LW(p) · GW(p)
Thanks for your comments.
It may be the case that the Singularity Summit is spun off at some point, but the higher priority is to spin off rationality training. Also see jimrandomh's comment. People within SI seem to generally agree that rationality training should be spun off, but we're still working out how best to do that.
Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?
Yes. I'm working (with others, including Eliezer) on that project right now, and am quite excited about it. That project falls under strategy 1.1.
Replies from: Alexandros
↑ comment by Alexandros · 2011-08-29T07:57:32.296Z · LW(p) · GW(p)
It appears that all the responses to my comment perceive me to be recommending that the Summit be spun off. I am not saying anything like that. I am commenting on the document and presenting what I think is a reasonable question in the mind of a reader. So the point is not to convince me that keeping the Summit is a good idea. The point is to correct the shape of the document so that this question does not arise. Explaining how the Summit fits into the re-focused mission while the rationality training does not would do the trick.
I'm particularly happy that you are working on formalizing the problems. Does this represent a change (or compromise) in E's stance on doing research in the open?
Replies from: lukeprog
↑ comment by lukeprog · 2011-08-29T16:11:28.279Z · LW(p) · GW(p)
I'm particularly happy that you are working on formalizing the problems. Does this represent a change (or compromise) in E's stance on doing research in the open?
I don't think it was ever Eliezer's position that all research had to be done in secret. There is a lot of Friendliness research that can be done in the open, and the 'FAI Open Problems' document will outline what that work is.
↑ comment by aletheilia · 2011-08-28T15:37:15.305Z · LW(p) · GW(p)
Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?
The trouble is, 'formalizing open problems' seems like by far the toughest part here, and it would thus be nice if we could employ collaborative problem-solving to somehow crack this part of the problem... by formalizing how to formalize various confusing FAI-related subproblems and throwing those on MathOverflow? :) Actually, I think LW is a more appropriate environment for at least attempting this endeavor, since it is, after all, what a large part of Eliezer's Sequences tried to prepare us for...
comment by XiXiDu · 2011-08-27T10:07:28.787Z · LW(p) · GW(p)
I especially like the following points:
- 1.1. Clarify the open problems relevant to our core mission.
- 1.5. Estimate current AI risk levels.
- 2.2.b. Make use of LessWrong.com for collaborative problem-solving (in the manner of the earlier LessWrong.com progress on decision theory).
- 2.3. Spread our message and clarify our arguments with public-facing academic deliverables.
What I would add to the list is to directly and publicly engage people like Holden Karnofsky from GiveWell or John Baez. They seem to have the necessary background knowledge and know the math. If you can convince them, or show that they are wrong, you will have defeated your strongest critics. Other people include Katja Grace and Robin Hanson. All of them are highly educated, have read the Sequences, and disagree with the Singularity Institute.
I admit that you have pretty much defeated Hanson and Baez, as they haven't been able or willing to put forth much substantive criticism regarding the general importance of an organisation like the Singularity Institute. I am unable to judge the arguments made by Grace and Karnofsky, as they largely exceed my current ability to grasp the math involved, but judging by the upvotes on Karnofsky's latest post and by his position, I suppose it might be a productive exercise to refute his objections.
Replies from: lukeprog
↑ comment by lukeprog · 2011-08-28T19:18:08.901Z · LW(p) · GW(p)
SI has an internal roadmap of papers it would like to publish to clarify and extend our standard arguments, and these would at the same time address many public objections. At the same time, we don't want to be sidetracked from pursuing our core mission by taking the time to respond to every critic. It's a tough thing to balance.
comment by curiousepic · 2011-08-28T14:09:10.482Z · LW(p) · GW(p)
Having contributed a significant amount (for me; $500) during the last matching drive in January, I was not considering donating during this round, especially after reading the disappointing interviews with GiveWell. This document changes that, especially seeing action points for increasing transparency and efficiency, and outreach to other organizations. I'm very pleased to see SI reacting to the criticisms. I have just donated another $500.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-08-28T14:46:54.523Z · LW(p) · GW(p)
What disappointed you in the GiveWell interviews?
comment by ahartell · 2011-08-27T01:58:19.697Z · LW(p) · GW(p)
Awesome! I know a lot of people were (are?) wary of donating without a clearer understanding of what the money will do and how SI will ACTUALLY mitigate risk from Unfriendly AI. I didn't voice this opinion personally, but I was curious.
Not to be a jerk, but on the fifth page there is a typo. The fourth part of Strategy 2 says,
"4. Build more relationships the optimal philanthropy, humanist, and critical thinking communities, which share many of our values."
Shouldn't there be a "with" after the word "relationships"?
Replies from: lukeprog
comment by Normal_Anomaly · 2011-08-27T03:12:34.485Z · LW(p) · GW(p)
I'm glad that you've done this! I look forward to seeing the list of open problems you intend to work on.
Replies from: aletheilia
↑ comment by aletheilia · 2011-08-27T09:35:26.295Z · LW(p) · GW(p)
...open problems you intend to work on.
You mean we? :)
...and we can start by trying to make a list like this, which is actually a pretty hard and important problem all by itself.
Replies from: Normal_Anomaly
↑ comment by Normal_Anomaly · 2011-08-27T14:36:18.924Z · LW(p) · GW(p)
I said "you" because I don't see myself as competent to work on decision theory-type problems.
Replies from: aletheilia
↑ comment by aletheilia · 2011-08-28T15:42:08.875Z · LW(p) · GW(p)
Time to level-up then, eh? :)
(Just sticking to my plan of trying to encourage people to do this kind of work.)
Replies from: Dorikka
↑ comment by Dorikka · 2011-08-28T16:01:54.685Z · LW(p) · GW(p)
Or such problems are not Normal Anomaly's comparative advantage, and her time is actually better spent on other things. :P
Replies from: Normal_Anomaly
↑ comment by Normal_Anomaly · 2011-08-28T20:41:12.998Z · LW(p) · GW(p)
Yeah, I'm actually leveling toward working in neuroscience.