Revisiting SI's 2011 strategic plan: How are we doing?
post by lukeprog · 2012-07-16T09:10:43.863Z
Progress updates are nice, but without a previously defined metric for success it's hard to know whether an organization's achievements are noteworthy or not. Is SI making good progress, or underwhelming progress?
Luckily, in August 2011 we published a strategic plan that outlined lots of specific goals. It's now almost August 2012, so we can check our progress against the standard set nearly one year ago. The plan doesn't specify a timeline for the stated goals, but I remember hoping that we could do most of them by the end of 2012, while understanding that we should list more goals than we could actually accomplish given current resources.
Let's walk through the goals in that strategic plan, one by one. (Or, you can skip to the "summary and path forward" section at the end.)
1.1. Clarify the open problems relevant to our core mission.
This was accomplished to some degree with So You Want to Save the World, and is on track to be accomplished to a greater degree with Eliezer's sequence "Open Problems in Friendly AI," which you should begin seeing late in August.
1.2. Identify and recruit researcher candidates who can solve research problems.
Several strategies for doing this were listed, but the only ones worth pursuing at our current level of funding were to recruit more research associates and hire more researchers. Since August 2011 we have done both, adding half a dozen research associates and hiring nearly a dozen remote researchers, including a few who are working full-time on papers and other projects (e.g. Kaj Sotala).
1.3. Use researchers and research associates to solve open problems related to Friendly AI theory.
I never planned to be doing this by the end of 2012; it's more of a long-term goal. A first step in this direction is to have Eliezer transition back to FAI work, e.g. with his "Open Problems in Friendly AI" Summit 2011 talk and forthcoming blog sequence. And actually, SI research associate Vladimir Slepnev has been making interesting progress in LW-style decision theory, and is working on a paper explicating one of his results. (Some credit is due to Vladimir Nesov and others.)
1.4. Estimate current AI risk levels.
Alas, we haven't done much of this. There's some analysis in Intelligence Explosion: Evidence and Import, Reply to Holden on Tool AI, and Reply to Holden on The Singularity Institute. Also, Anna is working on a simple model of AI risk in MATLAB (or some similar program). But I would have liked to have the cash to hire a researcher to continue things like AI Risk and Opportunity: A Strategic Analysis.
2.1. Continue operation of the Singularity Summit, which is beginning to yield a profit while also reaching more people with our message.
We did run Singularity Summit 2011, and Singularity Summit 2012 is on track to be noticeably more fun and professional than all past Summits. (So, register now!)
The strategic plan listed subgoals of gaining corporate sponsors and possibly expanding the Summit outside the USA. We gained corporate sponsors for Summit 2011, and are on track to gain even more of them for Summit 2012. Early in 2011 we also pursued an opportunity to host the first Singularity Summit in Europe, but the financing didn't quite come through.
2.2. Cultivate LessWrong.com and the greater rationality community as a resource for Singularity Institute.
The strategic plan lists 5 subgoals, and we made progress on all of them. SI (a) used LessWrong to recruit additional supporters, (b) made use of LessWrong for collaborative problem solving (e.g. this and this), (c) published lots of top-level posts, and (d) published How to Run a Successful Less Wrong Meetup Group. The early efforts of CFAR, and our presence at events like Skepticon IV, made headway on 2.2.e: "Encourage improvements in critical thinking in the wider world. We need a larger community of critical thinkers for use in recruiting, project implementation, and fundraising."
2.3. Spread our message and clarify our arguments with public-facing academic deliverables.
We did exceptionally well on this, though much more is needed. In addition to detailed posts like Reply to Holden on Tool AI and Reply to Holden on the Singularity Institute, SI has more peer-reviewed publications in 2012 than in all past years combined.
2.4. Build more relationships with the optimal philanthropy, humanist, and critical thinking communities, which share many of our values.
Though this work has been mostly invisible, Carl Shulman has spent dozens of hours on building relationships with the optimal philanthropy community. We've also built relationships with the humanist and critical thinking communities, through our presence at Skepticon IV but especially through the early activities of CFAR.
2.5. Cultivate and expand Singularity Institute’s Volunteer Program.
SI's volunteer program got a new website (though we'd like to launch another redesign soon), and we estimate that SI volunteers have done 2x-5x more work per month this year than in the past few years.
2.6. Improve Singularity Institute’s web presence.
Done. We got a new domain, Singularity.org, and put up a new website there. We produced additional introductory materials, like Friendly-AI.com and IntelligenceExplosion.com. We produced lots of "landing pages," for example our tech summaries. We did not, however, complete subgoals (d) and (e) — "Continue to produce articles on targeted websites and other venues" and "Produce high-quality videos to explain Singularity Institute’s mission" — because their ROI isn't high enough to justify the effort at our current funding level.
2.7. Apply for grants, especially ones that are given to other organizations and researchers concerned with the safety of future technologies (e.g. synthetic biology and nanotechnology).
This one was always meant as a longer-range goal. SI still needs to be "fixed up" in certain ways before this is worth trying.
2.8. Continue targeted interactions with the public.
We didn't do much of this, either. In particular, Eliezer's rationality books are on hold for now; we have the author of a best-selling science book on retainer to take a crack at Eliezer's rationality books this fall, after he completes his current project.
2.9. Improve interactions with current and past donors.
Success. We created and cleaned up our donor database, communicated more regularly with our support base (previously via monthly updates and now our shiny new newsletter, which you can sign up for here), and updated our top donors list.
3.1. Encourage a new organization to begin rationality instruction similar to what Singularity Institute did in 2011 with Rationality Minicamp and Rationality Boot Camp.
This is perhaps the single most impressive thing we did this year, in the sense that it required dozens of smaller pieces to all work, and work together. The organization is now called the Center for Applied Rationality (CFAR), and it was recently approved for 501c3 status. It has its own website, has been running extremely well-reviewed rationality retreats, and has lots more exciting stuff going on that hasn't been described online yet. Sign up for CFAR's newsletter to get these juicy details when they are written up.
3.2. Use Charity Navigator’s guidelines to improve financial and organizational transparency and efficiency.
There are 9 subgoals listed here. We've since decided we don't want to grow to five independent board members (subgoal b) at this time, because a smaller board runs more efficiently. (I've now heard too many nightmare stories about trying to get things done with a large board.) We did achieve (a), (d), (e), (g), (h), and (i). Subgoal (c) is a longer term goal that we are working toward (we need a professional bookkeeper to clean up our internal processes before we can have a hired CPA audit, and we're interviewing bookkeepers now). Subgoal (f) — a records retention policy — is in the works.
3.3. Ensure a proper orientation for new Singularity Institute staff and visiting fellows.
This is in process; we're creating orientation materials.
3.4. Secure lines of credit to increase liquidity and smooth out the recurring cash-flow pinches that result from having to do things like make payroll and rent event spaces.
We've done this.
3.5. Improve safe return on financial reserves.
For starters, we put a large chunk of our resources in an ING Direct high-interest savings account.
3.6. Ensure high standards for staff effectiveness.
There are two subgoals here. Subgoal (b) was to have staff maintain work logs, which we've been doing for many months now. Subgoal (a) is more ambiguous. We haven't given people job descriptions because at such a small organization, roles change quickly. But I do provide stronger management of SI staff and projects than ever before, and this clarifies expectations for our staff, often including task and project deadlines.
3.7. When hiring, advertise for applications to find the best candidates.
We've been doing this for several months now, e.g. here and here.
Summary
That's it for the main list! Now let's check in on what we said our top priorities for 2011-2012 were:
- Public-facing research on creating a positive singularity. Check. SI has more peer-reviewed publications in 2012 than in all past years combined.
- Outreach / education / fundraising. Check. Especially, through CFAR.
- Improved organizational effectiveness. Check. Lots of good progress on this.
- Singularity Summit. Check.
In summary, I think SI is a bit behind where I hoped we'd be by now, though this is largely because we've poured so much into launching CFAR, and as a result CFAR has turned out to be significantly cooler at launch than I had anticipated.
Fundraising has been a challenge. One donor failed to actually give their $46,000 pledge despite repeated reminders and requests, and our support base is (understandably) anxious to see a shift from movement-building work to FAI research, a shift I have been fighting for since I was made Executive Director. (Note that spinning off rationality work to CFAR is a substantial part of trimming SI down into being primarily an FAI research institute.)
Reforming SI into a more efficient, effective organization has been my greatest challenge. Frankly, SI was in pretty bad shape when Louie and I arrived as interns in April 2011, and there have been an incredible number of holes to dig SI out of — and several more remain. (In contrast, it has been a joy to help set up CFAR properly from the very beginning, with all the right organizational tools and processes in place.) Reforming SI also presents a fundraising problem, because the work is time-consuming and sometimes costly, but generally unexciting to donors. I can see the light at the end of the tunnel, though. We won't reach it unless we improve our fundraising success in the next 3-6 months, but it's close enough to see.
SI's path forward, from my point of view, looks like this:
- We finish launching CFAR, which takes over the rationality work SI was doing. (Before January 2013.)
- We change how the Singularity Summit is planned and run so that it pulls our core staff away from core mission work to a lesser degree. (Before January 2013.)
- Eliezer writes the "Open Problems in Friendly AI" sequence. (Before January 2013.)
- We hire 1-2 researchers to produce technical write-ups from Eliezer's TDT article and from his "Open Problems in Friendly AI" sequence. (Beginning September 2012, though right now we don't have the cash to hire the 1-2 people I know of who could do this work and who want to start as soon as we have the money to hire them.)
- With the "Open FAI Problems" sequence and the technical write-ups in hand, we greatly expand our efforts to show math/compsci researchers that there is a tractable, technical research program in FAI theory, and as a result some researchers work on the sexiest of these problems from their departments, and some other math researchers take more seriously the prospect of being hired by SI to do technical research in FAI theory. (Beginning, roughly, in April 2013.) Also: There won't be classes on x-risk at SPARC (rationality camp for young elite math talent), but some SPARC students might end up being interested in FAI stuff by osmosis.
- With a more tightly honed SI, improved fundraising practices, and visible mission-central research happening, SI is able to attract more funding and hire even more FAI researchers. (Beginning, roughly, in September 2013.)
If you want to help us make this happen, please donate during our July matching drive!
Comments
comment by Rain · 2012-07-16T12:28:40.288Z
Compare/contrast with SI's previous attempt at setting goals, the 2010 Singularity Research Challenge (Wayback Machine), which I said would be a good test of SI's ability to make progress: status from 9 June 2011.
Luke is really turning things around.
comment by Mitchell_Porter · 2012-07-16T12:37:55.807Z
This looks like a happy ending to one phase of SI's history, and the beginning of the next. You managed to spin off CFAR, which should have a bright future, and also managed to maintain in existence an organization recognizably devoted to the FAI problem, without the organization collapsing into dogma or dissolving into nebulosity, both of which are serious risks when there are so many imponderables still to solve. When the dust settles, we'll all be able to get on with the task of seeing how SI fits into the ecology of organizations out there that are pushing at the threshold of AI, and what may lie ahead.
comment by KPier · 2012-07-17T03:24:50.463Z
The July matching drive was news to me; I wonder how many other readers hadn't even heard about it.
Is there a reason this hasn't been published on LessWrong, i.e. with the usual public-commitment thread?
Also, if a donation is earmarked for CFAR, does the "matching" donation also go to CFAR?
comment by Paul Crowley (ciphergoth) · 2012-07-16T11:02:24.713Z
I'm very pleased to hear you've hired a pop science author to write the rationality books - takes pressure off Eliezer, moves the book forward and gives it name recognition. Good thinking!
comment by ChrisHallquist · 2012-07-16T13:35:14.108Z
...our support base is (understandably) anxious to see a shift from movement-building work to FAI research, a shift I have been fighting for since I was made Executive Director.
This raises several questions for me:
- How do you divide movement building/FAI research? What is the Singularity Summit? What is "writing papers like the two for the Singularity Hypothesis volume"? I worry about these questions because it seems to me that so far, the Singularity Institute's big successes are in things that could be considered "movement building," and I worry moving away from those could actually make SIAI less effective.
- What are the expectations of the donor base? In particular, do a substantial number of them feel that the main reason to donate to SIAI is in hopes that SIAI will develop FAI relatively soon? I worry about these questions because I worry about SIAI being in the position of having to appease donors with unrealistic expectations.
↑ comment by JGWeissman · 2012-07-16T17:20:03.088Z
I like to see some object level FAI research to ground meta level strategies like movement building. But movement building is very important right now, as SI needs to recruit and fund additional FAI researchers. Publishing a sequence on open problems in FAI seems like a very well grounded form of movement building.
My expectation on timeframes is a wide distribution. I am uncertain about how much work is needed to solve the problem, and how fast SI will be able to recruit researchers.
↑ comment by Paul Crowley (ciphergoth) · 2012-07-16T17:40:49.868Z
An explicit list of open problems seems closer to the metal of object level progress than, say, a better website. (I'm not implying that the less object level things aren't very worth doing - they make the object level work possible).
comment by Sly · 2012-07-17T01:39:55.585Z
"2.8. Continue targeted interactions with the public.
We didn't do much of this, either. In particular, Eliezer's rationality books are on hold for now; we have the author of a best-selling science book on retainer to take a crack at Eliezer's rationality books this fall, after he completes his current project."
This was the one part that really disappointed me. I expected a lot more progress on the book front.
comment by Shmi (shminux) · 2012-07-16T17:20:19.495Z
Kudos, Luke! It's nice to see some serious applied rationality at the executive levels of SI.
One thing I'd love to see is some more risk analysis, which seems natural, given that SI is all about x-risk. What can go wrong in the next year? What are the odds? What would be the consequences if a certain risk is left unmitigated? What would such mitigation look like, how much would it cost and what would be the odds of its success? What potential risks are existential risks for SI as an organization (and can you list them without flinching away)?
↑ comment by Paul Crowley (ciphergoth) · 2012-07-16T18:07:14.295Z
In general, next year is too soon. What the risks are next year will be affected most by what you did five or more years ago.
comment by [deleted] · 2012-09-23T03:52:31.835Z
Eliezer's sequence "Open Problems in Friendly AI," which you should begin seeing late in August.
It is now late September. Is an Open Problems sequence still being developed?
comment by buybuydandavis · 2012-07-16T09:59:48.241Z
In all this research by SI into making a friendly super intelligence, how much effort is being expended on making us the friendly super intelligence? Is there any institute particularly looking into that?
From the limited amount I've seen, SI seems to be planning on the super AI as more a descendant of a PC than a descendant of Homo Sapiens.
↑ comment by lukeprog · 2012-07-16T10:06:16.811Z
Do you mean whole brain emulation as opposed to what Anna and I called "de novo" AI? E.g. the sort of thing Carl discussed in Whole Brain Emulation and the Evolution of Superorganisms?
↑ comment by buybuydandavis · 2012-07-16T19:22:07.956Z
I mean neither, although whole brain emulation would be one avenue for what I mean. If you could upload yourself into silicon, then interfacing and merging with other programs would be simplified, and effectively you(now) could more easily transform into the you(later) Super Intelligence.
I mean improving the wetware through drugs, training, genetic modification, and improving wetware/hardware interfaces. Watson beat humans, but Watson + Human would beat Watson. Currently, the most capable intelligent system is human+machine, and that will remain so for a while. How do we make that a while longer, until we have effectively merged with machines, and the Super Intelligence is literally us? Making existing human friendliness the base of the solution seems more promising than trying to build friendliness from scratch.
I'm skeptical of our ability to use Mathemagic to constrain a Super Intelligence. I wish SingInst luck in the endeavor, but I expect any Super Intelligence to display emergent properties likely to subvert our best laid plans - not woo woo emergence, just functional interactions creating capabilities that we won't anticipate and don't understand.
If you think silicon will win, that's fine, but improving the wetware is at least a risk mitigation strategy versus a silicon SI - the smarter we are, the more likely we can keep up, and the more capable we'll be to create a SI that doesn't turn us into paper clips.
Maybe the same thing happens to us if we merge with technology; we change in ways we don't anticipate, and become paper clip maximizers. Oh well. Better a paper clip maximizer than a paper clip.
↑ comment by Rain · 2012-07-16T19:37:30.754Z
"Merging" is vague. And homo sapiens aren't Friendly.
↑ comment by buybuydandavis · 2012-07-17T02:55:23.396Z
I like my chances with homo sapiens better than an alien intelligence designed by us.
↑ comment by MatthewBaker · 2012-07-18T16:44:45.600Z
If it's an alien intelligence and doesn't have a global association table like Starmap-AI, we are already doomed.