Q&A with new Executive Director of Singularity Institute

post by lukeprog · 2011-11-07T04:58:05.074Z · LW · GW · Legacy · 182 comments

Today I was appointed the new Executive Director of Singularity Institute.

Because I care about transparency, one of my first projects as an intern was to begin work on the organization's first Strategic Plan. I researched how to write a strategic plan, tracked down the strategic plans of similar organizations, and met with each staff member, progressively iterating the document until it was something everyone could get behind.

I quickly learned why there isn't more of this kind of thing: transparency is a lot of work! After 100+ hours of my own work, plus dozens of hours from others, the strategic plan was finally finished and ratified by the board. It doesn't accomplish much by itself, but it's one important stepping stone in building an organization that is more productive, more trusted, and more likely to help solve the world's biggest problems.

I spent two months as a researcher, and was then appointed Executive Director.

In further pursuit of transparency, I'd like to answer (on video) submitted questions from the Less Wrong community just as Eliezer did two years ago.

 

The Rules

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov.

4) If your question references something that's online, provide a link.

5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin recording video responses for.

 

I might respond to certain questions within the comments thread and not on video; for example, when there is a one-word answer.

182 comments

Comments sorted by top scores.

comment by XiXiDu · 2011-11-07T11:04:10.202Z · LW(p) · GW(p)

If someone as capable as Terence Tao approached the SIAI, asking if they could work full-time and for free on friendly AI, what would you tell them to do? In other words, are there any known FAI sub-problems that demand some sort of expertise that the SIAI is currently lacking?

Replies from: None
comment by [deleted] · 2013-06-05T09:38:19.832Z · LW(p) · GW(p)

What message about FAI/MIRI should I take away from the fact that this very important question isn't answered?

comment by quartz · 2011-11-07T07:19:21.436Z · LW(p) · GW(p)

How are you going to address the perceived and actual lack of rigor associated with SIAI?

There are essentially no academics who believe that high-quality research is happening at the Singularity Institute. This is likely to pose problems for your plan to work with professors to find research candidates. It is also likely to be an indicator of little high-quality work happening at the Institute.

In his recent Summit presentation, Eliezer states that "most things you need to know to build Friendly AI are rigorous understanding of AGI rather than Friendly parts per se". This suggests that researchers in AI and machine learning should be able to appreciate high-quality work done by SIAI. However, this is not happening, and the publications listed on the SIAI page--including TDT--are mostly high-level arguments that don't meet this standard. How do you plan to change this?

Replies from: James_Miller, CarlShulman, Solvent, shminux, lukeprog
comment by James_Miller · 2011-11-07T15:21:35.037Z · LW(p) · GW(p)

There are essentially no academics who believe that high-quality research is happening at the Singularity Institute.

I believe that high-quality research is happening at the Singularity Institute.

James Miller, Associate Professor of Economics, Smith College.

PhD, University of Chicago.

Replies from: XFrequentist
comment by XFrequentist · 2011-11-07T22:41:55.214Z · LW(p) · GW(p)

To distinguish the above from the statement "I like the Singularity Institute", could you be specific about what research activities you have observed in sufficient detail to confidently describe as "high-quality"?

ETA: Not a hint of sarcasm or snark intended; I'm sincerely curious.

Replies from: James_Miller
comment by James_Miller · 2011-11-08T01:25:01.838Z · LW(p) · GW(p)

I'm currently writing a book on the Singularity and have consequently become extremely familiar with the organization's work. I have gone through most of EY's writings and have an extremely high opinion of them. His research on AI plays a big part in my book. I have also been ending my game theory classes with "rationality shorts" in which I present some of EY's material from the sequences.

I also have a high opinion of the writings of Carl Shulman (an SI employee), including "How Hard is Artificial Intelligence? The Evolutionary Argument and Observation Selection Effects" (co-authored with Bostrom) and Shulman's paper on AGI and arms races.

comment by CarlShulman · 2011-11-11T03:52:55.361Z · LW(p) · GW(p)

There are essentially no academics who believe that high-quality research is happening at the Singularity Institute.

David Chalmers (along with various other philosophers) has said that the decision theory work is a major advance, although he is frustrated that it hasn't been communicated more actively to the academic decision theory and philosophy communities. A number of current and former academics, including David, Stephen Omohundro, James Miller (above), and Nick Bostrom, have reported that work at SIAI has been very helpful for their own research and writing on related topics.

Evan Williams, now a professor of philosophy at Purdue, cites in his dissertation three inspirations leading to the work: John Stuart Mill's "On Liberty," John Rawls' "A Theory of Justice," and Eliezer Yudkowsky's "Creating Friendly AI" (2001), which is discussed at greater length than the others. Nick Beckstead, a philosophy PhD student at Rutgers (the #2 philosophy program) who works on existential risks and population ethics, reported large benefits to his academic work from discussions with SIAI staff.

These folk are a minority, and SIAI is not well integrated with academia (no PhDs on staff, little academic publishing, etc.), but they are not negligible either.

In his recent Summit presentation, Eliezer states that "most things you need to know to build Friendly AI are rigorous understanding of AGI rather than Friendly parts per se". This suggests that researchers in AI and machine learning should be able to appreciate high-quality work done by SIAI.

I think that work in this area has been disproportionately done by Eliezer Yudkowsky, and to a lesser extent Marcello Herreshoff. Eliezer has been heavily occupied with Overcoming Bias, Less Wrong, and his book for the last several years, in part to recruit a more substantial team for this. He is also reluctant to release work that he thinks is relevant to building AGI. Problems in recruiting and the policies of secrecy seem like the big issues here.

Replies from: Wei_Dai, XiXiDu
comment by Wei Dai (Wei_Dai) · 2011-11-13T11:34:56.012Z · LW(p) · GW(p)

Eliezer has been heavily occupied with Overcoming Bias, Less Wrong, and his book for the last several years, in part to recruit a more substantial team for this.

Eliezer's investment in OB/LW apparently hasn't returned even a single full-time FAI researcher for SIAI after several years (although a few people are almost certainly doing more and better FAI-related research than they would have if the Sequences hadn't happened). Has this met SIAI's initial expectations? Do you guys think we're at the beginning of a snowball effect, or has OB/LW pretty much done as much as it can, as far as creating/recruiting FAI researchers is concerned? What are your current expectations for the book in this regard?

Replies from: CarlShulman, XiXiDu, Dr_Manhattan
comment by CarlShulman · 2011-11-13T20:24:34.891Z · LW(p) · GW(p)

I have noticed increasing numbers of very talented math and CS folk expressing interest or taking actions showing significant commitment. A number of them are currently doing things like PhD programs in AI. However, there hasn't been much of a core FAI team and research program to assimilate people into. Current plans are for Eliezer to switch back to full-time AI work after his book, with intake of more folk into that research program. Given the mix of people in the extended SIAI community, I am pretty confident that with abundant funding a team of pretty competent researchers (with at least some indicators like PhDs from the top AI/CS programs, 1 in 100,000 or better performance on mathematics contests, etc.) could be mustered over time, based on people I already know.

I am less confident that a team can be assembled with so much world-class talent that it is a large fraction of the quality-adjusted human capital applied to AGI, without big gains in recruiting (e.g. success with the rationality book or communication on AI safety issues, better staff to drive recruiting, a more attractive and established team to integrate newcomers, relevant celebrity endorsements, etc). The Manhattan Project had 21 then- or future Nobel laureates. AI, and certainly FAI, are currently getting a much, much smaller share of world scientific talent than nukes did, so that it's easier for a small team to loom large, but it seems to me like there is still a lot of ground to be covered to recruit a credibly strong FAI team.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-11-14T07:49:28.673Z · LW(p) · GW(p)

Thanks. You didn't answer my questions directly, but it sounds like things are proceeding more or less according to expectations. I have a couple of followup questions.

At what level of talent do you think an attempt to build an FAI would start to do more (expected) good than harm? For simplicity, feel free to ignore the opportunity cost of spending financial and human resources on this project, and just consider the potential direct harmful effects, like accidentally creating an UFAI while experimenting to better understand AGI, or building a would-be FAI that turns out to be an UFAI due to a philosophical, theoretical or programming error, or leaking AGI advances that will allow others to build an UFAI, or starting an AGI arms race.

I have a serious concern that if SIAI ever manages to obtain abundant funding and a team of "pretty competent researchers" (or even "world-class talent", since I'm not convinced that even a team of world-class talent trying to build an FAI will do more good than harm), it will proceed with an FAI project without adequate analysis of the costs and benefits of doing so, or without continuously reevaluating the decision in light of new information. Do you think this concern is reasonable?

If so, I think it would help a lot if SIAI got into the habit of making its strategic thinking more transparent. It could post answers to questions like the ones I asked in the grandparent comment without having to be prompted. It could publish the reasons behind every major strategic decision, and the metrics it keeps to evaluate its initiatives. (One way to do this, if such strategic thinking often occurs or is presented at board meetings, would be to publish the meeting minutes, as I suggested in another comment.)

Replies from: CarlShulman
comment by CarlShulman · 2011-11-14T09:18:47.358Z · LW(p) · GW(p)

At what level of talent do you think an attempt to build an FAI would start to do more (expected) good than harm?

I'm not sure that scientific talent is the relevant variable here. More talented folk are more likely to achieve both positive and negative outcomes. I would place more weight on epistemic rationality, motivations (personality, background checks), institutional setup and culture, the strategy of first trying to test the tractability of robust FAI theory and then advancing FAI before code (with emphasis on the more-FAI-less-AGI problems first), and similar variables.

Do you think this concern is reasonable?

Certainly it's a reasonable concern from a distance. Folk do try to estimate and reduce the risks you mentioned, and to investigate alternative non-FAI interventions. My personal sense is that these efforts have been reasonable but need to be bolstered along with the FAI research team. If it looks like a credible (to me) team may be assembled, my plan would be (and has been) to monitor and influence team composition, culture, and exposure to information. In other words, I'd like to select folk ready to reevaluate as well as to make progress, and to work hard to build that culture as researchers join up.

If so, I think it would help a lot if SIAI got into the habit of making its strategic thinking more transparent.

I can't speak for everyone, but I am happy to see SIAI become more transparent in various ways. The publication of the strategic plan is part of that, and I believe Luke is keen (with encouragement from others) to increase communication and transparency in other ways.

publish the meeting minutes

This one would be a decision for the board, but I'll give my personal take again. Personally, I like the recorded GiveWell meetings and see the virtues of transparency in being more credible to observers, and in providing external incentives. However, I would also worry that signalling issues with a diverse external audience can hinder accurate discussion of important topics, e.g. frank discussions of the strengths and weaknesses of potential Summit speakers, partners, and potential hires that could cause hurt feelings and damage valuable relationships. Because of this problem I would be more wholehearted in supporting other forms of transparency, e.g. more frequent and detailed reporting on activities, financial transparency, the strategic plan, things like Luke's Q&A, etc. But I wouldn't be surprised if this happens too.

Replies from: Wei_Dai, wedrifid, lukeprog, lessdazed
comment by Wei Dai (Wei_Dai) · 2011-11-15T10:23:37.602Z · LW(p) · GW(p)

I'm not sure that scientific talent is the relevant variable here. More talented folk are more likely to achieve both positive and negative outcomes.

Let's assume that all the other variables are already optimized to minimize the risk of creating an UFAI. It seems to me that the relationship between the ability level of the FAI team and the probabilities of the possible outcomes must then look something like this:

This chart isn't meant to communicate my actual estimates of the probabilities and crossover points, but just the overall shapes of the curves. Do you disagree with them? (If you want to draw your own version, click here and then click on "Modify This Chart".)

Folk do try to estimate and reduce the risks you mentioned, and to investigate alternative non-FAI interventions.

Has anyone posted SIAI's estimates of those risks?

I would also worry that signalling issues with a diverse external audience can hinder accurate discussion of important topics

That seems reasonable, and given that I'm more interested in the "strategic" as opposed to "tactical" reasoning within SIAI, I'd be happy for it to be communicated through some other means.

Replies from: Eliezer_Yudkowsky, CarlShulman, Vladimir_Nesov, XiXiDu
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-18T22:22:46.563Z · LW(p) · GW(p)

I like this chart.

comment by CarlShulman · 2011-11-15T19:25:45.591Z · LW(p) · GW(p)

Do you disagree with them?

If we condition on having all other variables optimized, I'd expect a team to adopt very high standards of proof, and recognize limits to its own capabilities, biases, etc. One of the primary purposes of organizing a small FAI team is to create a team that can actually stop and abandon a line of research/design (Eliezer calls this "halt, melt, and catch fire") that cannot be shown to be safe (given limited human ability, incentives and bias). If that works (and it's a separate target in team construction rather than a guarantee, but you specified optimized non-talent variables) then I would expect a big shift of probability from "UFAI" to "null."

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-11-15T22:08:25.709Z · LW(p) · GW(p)

What I'm afraid of is that a design will be shown to be safe, and then it turns out that the proof is wrong, or the formalization of the notion of "safety" used by the proof is wrong. This kind of thing happens a lot in cryptography, if you replace "safety" with "security". These mistakes are still occurring today, even after decades of research into how to do such proofs and what the relevant formalizations are. From where I'm sitting, proving an AGI design Friendly seems even more difficult and error-prone than proving a crypto scheme secure, probably by a large margin, and there aren't decades of time to refine the proof techniques and formalizations. There's a good recent review of the history of provable security, titled Provable Security in the Real World, which might help you understand where I'm coming from.

Replies from: cousin_it, John_Maxwell_IV, CarlShulman
comment by cousin_it · 2011-11-16T14:23:16.992Z · LW(p) · GW(p)

Your comment has finally convinced me to study some practical crypto because it seems to have fruitful analogies to FAI. It's especially awesome that one of the references in the linked article is "An Attack Against SSH2 Protocol" by W. Dai.

Replies from: gwern
comment by John_Maxwell (John_Maxwell_IV) · 2012-03-23T06:51:19.552Z · LW(p) · GW(p)

From where I'm sitting, proving an AGI design Friendly seems even more difficult and error-prone than proving a crypto scheme secure, probably by a large margin, and there aren't decades of time to refine the proof techniques and formalizations.

Correct me if I'm wrong, but it doesn't seem as though "proofs" of algorithm correctness fail as frequently as "proofs" of cryptosystem unbreakableness.

Where does your intuition that friendliness proofs are on the order of reliability of cryptosystem proofs come from?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-03-23T07:07:14.937Z · LW(p) · GW(p)

Interesting question. I guess proofs of algorithm correctness fail less often because:

  1. It's easier to empirically test algorithms to weed out the incorrect ones, so there are fewer efforts to prove conjectures of correctness that are actually false.
  2. It's easier to formalize what it means for an algorithm to be correct than for a cryptosystem to be secure.

In both respects, proving Friendliness seems even worse than proving security.

comment by CarlShulman · 2011-11-15T22:25:41.170Z · LW(p) · GW(p)

What I'm afraid of is that a design will be shown to be safe, and then it turns out that the proof is wrong, or that the formalization of the notion of "safety" used by the proof is wrong.

Thanks for clarifying.

This kind of thing happens a lot in cryptography,

I agree.

comment by Vladimir_Nesov · 2011-11-15T12:32:00.610Z · LW(p) · GW(p)

I can't count myself "world class" on the raw ability axis, but I'm pretty sure that the probability of a team of people like me producing UFAI is very low (in absolute value), as I know when I understand something and when I don't yet, and I think this property would be even more reliable if I had better raw ability. That is a much more relevant safety factor than ability (though it seems harder to test), and it changes the shape of the UFAI curve. A couple of levels below my own, I wouldn't trust someone's ability to disbelieve wrong things, so the maximum should probably be in that range, not centered on "world class" in particular.

comment by XiXiDu · 2011-11-15T10:58:55.457Z · LW(p) · GW(p)

Could you elaborate on the ability axis? Could you name some people that you perceive to be of world-class ability in their field? Could you further explain whether you believe that there are people who are sufficiently above that class?

For example, what about Terence Tao? What about the current SIAI team?

comment by wedrifid · 2011-11-14T15:12:58.510Z · LW(p) · GW(p)

However, I would also worry that signalling issues with a diverse external audience can hinder accurate discussion of important topics

Basically it ensures that all serious discussion and decision making is made prior to any meeting in informal conversations so that the meeting sounds good. Such a record should be considered a work of fiction regardless of whether it is a video transcript or a typed document. (Only to the extent that the subject of the meeting matters - harmless or irrelevant things wouldn't change.)

Because of this problem I would be more wholehearted in supporting other forms of transparency, e.g. more frequent and detailed reporting on activities, financial transparency, the strategic plan, things like Luke's Q&A, etc. But I wouldn't be surprised if this happens too.

That's more like it!

comment by lukeprog · 2012-03-01T22:04:33.975Z · LW(p) · GW(p)

Personally, I like the recorded GiveWell meetings and see the virtues of transparency in being more credible to observers, and in providing external incentives. However, I would also worry that signalling issues with a diverse external audience can hinder accurate discussion of important topics, e.g. frank discussions of the strengths and weaknesses of potential Summit speakers, partners, and potential hires that could cause hurt feelings and damage valuable relationships. Because of this problem I would be more wholehearted in supporting other forms of transparency, e.g. more frequent and detailed reporting on activities, financial transparency, the strategic plan, things like Luke's Q&A, etc. But I wouldn't be surprised if this happens too.

I'll take this opportunity to mention that I'm against publishing SIAI's board meeting minutes. First, for the reasons Carl gave above. Second, because then we'd have to invest a lot of time explaining the logic behind each decision, or else face waves of criticism for decisions that appear arbitrary when one merely publishes the decision and not the argument.

However, I'm definitely making a big effort to improve SIAI transparency. Our new website (under development) has a page devoted to transparency, where you'll be able to find our strategic plan, our 990s, and probably other links. I'm also publishing the monthly progress reports, and recently co-wrote 'Intelligence Explosion: Evidence and Import', which for the first time (excepting Chalmers) summarizes many of our key pieces of reasoning with the clarity of mainstream academic form. We're also developing an annual report, and I'm working toward developing some other documents that will make SIAI strategy more transparent. But all this takes time, especially when starting from pretty close to 0 on transparency, with lots of other problems to fix, too.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-03-01T22:30:36.124Z · LW(p) · GW(p)

Second, because then we'd have to invest a lot of time explaining the logic behind each decision, or else face waves of criticism for decisions that appear arbitrary when one merely publishes the decision and not the argument.

Are the arguments not made during the board meetings? Or do you guys talk ahead of time and just formalize the decisions during the board meetings?

In any case, I think you should invest more time explaining the logic behind your decisions, and not just make the decisions themselves more transparent. If publishing board meeting minutes is not the best way to do that, then please think about some other way of doing it. I'll list some of the benefits of doing this, in case you haven't thought of some of them:

  • encourage others to emulate you and think strategically about their own choices
  • allow outsiders to review your strategic thinking and point out possible errors
  • assure donors and potential donors that there is good reasoning behind your strategic decisions
  • improve exchange of strategic ideas between everyone working on existential risk reduction

Replies from: lukeprog
comment by lukeprog · 2012-03-01T22:41:44.141Z · LW(p) · GW(p)

The arguments are strewn across dozens of conversations in and out of board meetings (mostly out).

As for finding other ways to explain the logic behind our decisions, I agree, and I'm working on it. One qualification I would add, however, is that I predict more benefit to my strategic thinking from one hour with Paul Christiano and one hour with Nick Bostrom than from spending four hours to write up my strategic thinking on subject X and publishing it so that passersby can comment on it. It takes a lot of effort to be so well-informed about these issues that one can offer valuable strategic advice. But for some X we have already spent those many productive hours with Christiano and Bostrom and so on, and it's a good marginal investment to write up our strategic thinking on X.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-03-02T07:28:58.749Z · LW(p) · GW(p)

This reminds me a bit of Eliezer's excuse when he was resisting calls for him to publish his TDT ideas on LW:

Unfortunately this "timeless decision theory" would require a long sequence to write up

I suggest you may be similarly overestimating the difficulty of explaining your strategic ideas/problems to a sufficiently large audience to get useful feedback. Why not just explain them the same way that you would explain to Christiano and Bostrom? If some among the LW community don't understand, they can ask questions and others could fill them in.

The decision theory discussions on LW generated significant progress, but perhaps more importantly created a pool of people with strong interest in the topic (some of whom ended up becoming your research associates). Don't you think the same thing could happen with Singularity strategies?

Replies from: AnnaSalamon, lukeprog
comment by AnnaSalamon · 2012-03-02T07:43:47.878Z · LW(p) · GW(p)

Yes.

comment by lukeprog · 2012-03-02T09:57:02.283Z · LW(p) · GW(p)

I suggest you may be similarly overestimating the difficulty of explaining your strategic ideas/problems to a sufficiently large audience to get useful feedback...

Yes, I would get some useful feedback, but I also predict a negative effect: When people don't have enough background knowledge to make what I say sound reasonable to them, I'll get penalized for sounding crazy in the same way that I'm penalized when I try to explain AGI to an intuitive Cartesian dualist.

By penalized, I mean something like the effect that Scott Adams (author of Dilbert) encountered while blogging:

I hoped that people who loved the blog would spill over to people who read Dilbert, and make my flagship product stronger. Instead, I found that if I wrote nine highly popular posts, and one that a reader disagreed with, the reaction was inevitably “I can never read Dilbert again because of what you wrote in that one post.” Every blog post reduced my income, even if 90% of the readers loved it. And a startling number of readers couldn’t tell when I was serious or kidding, so most of the negative reactions were based on misperceptions.

Anyway, you also wrote:

The decision theory discussions on LW generated significant progress, but perhaps more importantly created a pool of people with strong interest in the topic (some of whom ended up becoming your research associates). Don't you think the same thing could happen with Singularity strategies?

If so, then not for the same reasons. I think people got interested in decision theory because they could see results. But it's hard to feel you've gotten a result in something like strategy, where we may never know whether or not one strategy was counterfactually better, or at least won't be confident about that for another 5 years. Decision theory offers the opportunity for results that most people in the field can agree on.

Replies from: Vladimir_Nesov, AnnaSalamon
comment by Vladimir_Nesov · 2012-03-02T12:24:42.740Z · LW(p) · GW(p)

The "results" in decision theory we've got so far are so tenuous that I believe their role is primarily to somewhat clarify the problem statement for what remains to be done (a big step compared to complete confusion in the past, but not quite clear (-ly motivated) math). The ratchet of science hasn't clicked yet, even if rational evidence is significant, which is the same problem you voice for strategy discussion.

comment by AnnaSalamon · 2012-03-02T17:48:37.384Z · LW(p) · GW(p)

If so, then not for the same reasons. I think people got interested in decision theory because they could see results. But it's hard to feel you've gotten a result in something like strategy, where we may never know whether or not one strategy was counterfactually better, or at least won't be confident about that for another 5 years. Decision theory offers the opportunity for results that most people in the field can agree on.

At FHI they sometimes sit around a whiteboard and discuss weird AI-boxing ideas or weird acquire-relevant-influence ideas, and feel as though they are making progress when something sounds more promising than usual, leads to other interesting ideas, etc. We could too. I suspect it would create a similar set of interested people capable of having strategy ideas, though probably less math-inclined than the decision theory folk, and with more surrounding political chaos.

Replies from: lukeprog
comment by lukeprog · 2012-03-02T22:01:47.801Z · LW(p) · GW(p)

Okay; that changes my attitude a bit. But FHI's core people are unlikely to produce the Scott Adams effect in response to strategic discussion. Do you or Wei think it's reasonable for me to worry about that when discussing strategy in detail amongst, say, LWers, most of whom have far less understanding of the relevant issues (by virtue of not working on them every week for months or years)?

Replies from: AnnaSalamon, Wei_Dai, XiXiDu
comment by AnnaSalamon · 2012-03-02T23:21:20.353Z · LW(p) · GW(p)

I agree that detailed exploration of Singularity strategies would alienate some LW-ers, and some in the SingInst fan base. It is possible that this is reason enough to avoid such discussion; my guess is that it is not, but I could easily be wrong here, and many think it is.

I was mostly responding to the [paraphrased] "we can't discuss it publicly because it would take too long", and "it wouldn't work to create an informed set of strategists because there wouldn't be a sense of progress"; I've said sentences like that before, and, when I said them, they were excuses/rationalizations. My actual reason was something like: "I'd like to avoid alienating people, and I'd like to avoid starting conflicts whose outcomes I cannot predict."

Replies from: wedrifid, XiXiDu
comment by wedrifid · 2012-03-05T12:52:35.067Z · LW(p) · GW(p)

I agree that detailed exploration of Singularity strategies would alienate some LW-ers, and some SingInst-ers.

It'll alienate some SingInst-ers? That's a troubling sign. Aren't most SingInst-ers at least vaguely competent rationalists who are actually interested in Singularity options? Yet they will be alienated by mere theoretical exploration of the domain? What has your HR department been doing?

comment by XiXiDu · 2012-03-03T10:45:11.870Z · LW(p) · GW(p)

I agree that detailed exploration of Singularity strategies would alienate some LW-ers, and some SingInst-ers.

From a public relations viewpoint this sentence alone is worse than any particular detail could possibly be. Because it not only allows, but forces, people to imagine what horrible strategies you could possibly explore and pursue. Strategies that are bad enough that you not only believe that even the community most closely related to SI would be alienated by them, but that you are also unable to support those explorations with rational arguments.

Personally I don't want to contribute anything to an organisation that admits to exploring strategies that most people would find unacceptable. And I wouldn't suggest that anyone else do so. Nor would I be willing to contribute if you were secretive about your strategic explorations. I just don't trust you people; I never did. And I am still horrified by how people who actually believe that what you are saying is true and possible are willing to trust your small group blindly to shape the universe.

A paperclip maximizer is just a transformation of the universe into a state of almost no suffering. But a friendly AI that isn't quite friendly, or one that is biased by the ideas of a small group of abnormal and psychopathic people, could increase negative utility dramatically.

Replies from: satt
comment by satt · 2012-03-03T21:54:24.561Z · LW(p) · GW(p)

I agree that detailed exploration of Singularity strategies would alienate some LW-ers, and some SingInst-ers.

From a public relations viewpoint this sentence alone is worse than any particular detail could possibly be.

No, I don't agree with this. I predict that whatever strategies AnnaSalamon has in mind would alienate someone unless those strategies were very anodyne or vague. If the sample of listeners is big enough there will usually be someone to take issue with just about any idea one voices.

Because it not only allows, but forces, people to imagine what horrible strategies you could possibly explore and pursue.

How true is that? In my case it just makes me try to imagine whether there are any strategies AnnaSalamon could propose that wouldn't perturb anyone. When it comes to the singularity I draw a blank, as it's a big enough issue that just about anything she or I or you could say about it will bother somebody.

I disagree that AS's weak statement that "detailed exploration of Singularity strategies would alienate some LW-ers" tells you very much at all about the nature of those strategies. I expect most conceivable strategies would piss someone off, so I'd say her claim communicates less than 1 bit of information about those strategies.

Based on the rest of your comment I think you've read AnnaSalamon's statement as one implying that SI's strategies are unusually objectionable or alienating; maybe that's what she meant but it doesn't seem to be what she wrote.

Replies from: XiXiDu
comment by XiXiDu · 2012-03-04T11:00:11.419Z · LW(p) · GW(p)

Based on the rest of your comment I think you've read AnnaSalamon's statement as one implying that SI's strategies are unusually objectionable or alienating;

Which is the right strategy. Humans are unfriendly. The group around AnnaSalamon is trying to take over and shape the universe according to their idea of what is right and good.

If you are making decisions based on the worst case scenario - as you are clearly doing when it comes to artificial intelligence, if you support friendly AI research - then you should do the same when it comes to human beings.

It isn't enough to talk to them, to review their output and conclude that they are most likely friendly. Doing so and contributing money is akin to letting an AI that is not provably friendly out of the box. They either have to prove that they are friendly or make all their work transparent. Otherwise the right thing to do is to label them as terrorists and tell them to fuck off.

Replies from: satt, timtyler
comment by satt · 2012-03-04T12:08:21.118Z · LW(p) · GW(p)

You could just as reasonably have written that comment if AnnaSalamon had never posted in this thread, though. My argument here isn't with your broader attitude to FAI/SI, it's that I think it's unfair to pounce on a very low-information statement like "detailed exploration of Singularity strategies would alienate some LW-ers, and some SingInst-ers" and write it off as terrible PR that implies SI's considering horrible strategies.

Replies from: XiXiDu
comment by XiXiDu · 2012-03-04T13:06:44.232Z · LW(p) · GW(p)

...it's unfair to pounce on a very low-information statement like "detailed exploration of Singularity strategies would alienate some LW-ers, and some SingInst-ers"...

I think that it does convey quite a lot information. I already know that people associated with SI and LW accept a lot of strategic thinking that would be considered everything from absurd to outright psychopathic within different circles. If she says that the strategies they explore would even alienate some people associated with LW, let alone SI, then that's really bad.

I think you underestimate the amount of information that a natural language sentence can carry and signal.

...and write it off as terrible PR that implies SI's considering horrible strategies.

It is abundantly clear that SI is really bad at PR. I assign a high probability to the possibility that she and other members of SI are revealing a lot of what is going on behind the scenes by being careless about their communication.

Replies from: satt
comment by satt · 2012-03-04T15:34:07.677Z · LW(p) · GW(p)

If she says that the strategies they explore would even alienate some people associated with LW, let alone SI, then that's really bad.

I disagree. LWers have a range of opinions on AI & the singularity (yes, those opinions are less diverse than the general population's, but I don't see them being sufficiently less diverse for your argument to go through). There are already quite a few LWers who're SI sceptics to a degree. I'm also sure there are LWers who, at the moment, basically agree with SI but would spurn it if it announced a more specific strategy for handling AI/the singularity. I think this would be true for most possible strategies SI could announce. I'd expect the same basic argument to hold for SI (though I'm less sure because I know less about SI).

I think you underestimate the amount of information that a natural language sentence can carry and signal.

Quite possible! But in any case, a sentence can carry lots of information about one thing, but not another. One has to look at the probability of a sentence or claim conditional on a specific thing. As I see it, P(AS says some people would be alienated | SI has a terrible secret strategy) is about equal to P(AS says some people would be alienated | SI has an un-terrible secret strategy), so the likelihood ratio is about one, and AnnaSalamon's belief discriminates poorly between those two particular hypotheses.
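
To make the likelihood-ratio point concrete, here is a minimal worked sketch in Python (an editorial illustration with invented numbers; these are not satt's actual estimates): when the two conditional probabilities are nearly equal, the likelihood ratio is close to one and the posterior barely moves from the prior.

    # Illustrative Bayesian update; all probabilities are made-up numbers.
    prior_terrible = 0.10        # assumed prior: P(SI has a terrible secret strategy)
    p_given_terrible = 0.80      # P(AS says some would be alienated | terrible strategy)
    p_given_unterrible = 0.75    # P(AS says some would be alienated | un-terrible strategy)

    likelihood_ratio = p_given_terrible / p_given_unterrible      # ~1.07

    prior_odds = prior_terrible / (1 - prior_terrible)
    posterior_odds = prior_odds * likelihood_ratio
    posterior_terrible = posterior_odds / (1 + posterior_odds)

    print(round(likelihood_ratio, 2), round(posterior_terrible, 3))  # 1.07 0.106, vs. prior 0.1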

It is abundantly clear that SI is really bad at PR. I assign a high probability to the possibility that her and other members of the SI are revealing a lot of what is going on behind the scenes by being careless about their communication.

Plausible, but I doubt it's true for this specific example.

Replies from: XiXiDu, XiXiDu
comment by XiXiDu · 2012-03-04T17:10:17.874Z · LW(p) · GW(p)

As I see it, P(AS says some people would be alienated | SI has a terrible secret strategy) is about equal to P(AS says some people would be alienated | SI has an un-terrible secret strategy), so the likelihood ratio is about one...

If I were to accept your estimation, then the associated utility of P(people alienated | terrible strategy) and P(people alienated | un-terrible strategy) would force you to act according to the first possibility.

Replies from: satt
comment by satt · 2012-03-05T02:20:11.403Z · LW(p) · GW(p)

I don't follow. Do you mean that the potential disutility of SI having a terrible strategy is so much bigger than the potential utility of SI having an un-terrible strategy that, given equal likelihoods, I should act against SI? If so, I disagree.

comment by XiXiDu · 2012-03-04T17:03:55.415Z · LW(p) · GW(p)

Quite possible! But in any case, a sentence can carry lots of information about one thing, but not another. One has to look at the probability of a sentence or claim conditional on a specific thing. As I see it, P(AS says some people would be alienated | SI has a terrible secret strategy) is about equal to ...

Blah blah blah...full stop. We're talking about the communication of primates with other primates. Evolution honed your skills to detect the intention and possible bullshit in the output of other primates. Use your intuition!

I disagree. LWers have a range of opinions on AI & the singularity ...

I am not sure what you are getting at. If she thinks that there are strategies that should be kept secret for political reasons or whatever and admits it, that's bad from any possible viewpoint.

Replies from: satt, drethelin
comment by satt · 2012-03-05T02:14:43.319Z · LW(p) · GW(p)

Use your intuition!

I have. My gut didn't raise a red flag when I read AnnaSalamon's post, but it did when I read yours.

I am not sure what you are getting at.

I was giving a reason for my claim that there'd be someone on LW/in SI who'd be alienated by all but the blandest of strategies.

If she thinks that there are strategies that should be kept secrete for political reasons or whatever and admits it, that's bad from any possible viewpoint.

Maybe she thinks that and maybe she doesn't, but either way she didn't admit it. (At least not in the post I'm talking about. I haven't read AS's whole comment history.)

comment by drethelin · 2013-03-04T07:11:31.562Z · LW(p) · GW(p)

To my intuitions you sound exactly like a bitter excluded nobody attacking someone successful and popular. You DON'T talk like someone who sees through the lies of an evil greedy deceiver and honestly wants people to examine what he says and come to the correct opinion.

comment by timtyler · 2012-03-05T20:31:45.094Z · LW(p) · GW(p)

It isn't enough to talk to them, to review their output and conclude that they are most likely friendly. Doing so and contributing money is akin to letting an AI that is not provably friendly out of the box. They either have to prove that they are friendly or make all their work transparent. Otherwise the right thing to do is to label them as terrorists and tell them to fuck off.

I think the "mostly harmless" phrase still applies. These look like kids with firecrackers. The folk we should watch out for are more likely to be the Chinese, the military, hedge funds - and so on.

comment by Wei Dai (Wei_Dai) · 2012-03-02T23:34:44.389Z · LW(p) · GW(p)

Maybe you can give an example of the kind of thing that you're worried about? What might you say that could get you penalized for sounding crazy?

Replies from: Vladimir_Nesov, XiXiDu
comment by Vladimir_Nesov · 2012-03-02T23:40:27.669Z · LW(p) · GW(p)

(Maybe we could take this discussion private; I'm also curious what kinds of questions these considerations apply to.)

comment by XiXiDu · 2012-03-03T10:52:49.482Z · LW(p) · GW(p)

Maybe you can give an example of the kind of thing that you're worried about? What might you say that could get you penalized for sounding crazy?

Could get them penalized for sounding crazy? Those people believe in the possibility of heaven and hell and believe that merely thinking about decision and game theoretic conjectures might be dangerous.

comment by XiXiDu · 2012-03-03T10:56:23.854Z · LW(p) · GW(p)

...most of whom have far less understanding of the relevant issues (by virtue of not working on them every week for months or years)?

Right, better to hide in your ivory tower only talking to people who agree with you. A perfect recipe to reinforce crazy ideas and amplify any biases.

comment by lessdazed · 2011-11-14T15:06:46.541Z · LW(p) · GW(p)

signalling issues with a diverse external audience can hinder accurate discussion

Minutes can be much more general than (video) transcripts.

I would be surprised if the optimal solution isn't a third alternative and is instead total secrecy or manipulable complete transcription.

comment by XiXiDu · 2011-11-13T15:25:51.653Z · LW(p) · GW(p)

Eliezer's investment into OB/LW apparently hasn't returned even a single full-time FAI researcher...

I believe that the SIAI has been very successful in using OB/LW not only to raise awareness of risks from AI but to lend credence to the idea. From the very beginning I admired that feat.

Eliezer Yudkowsky's homepage is a perfect example of its type. Just imagine if he had concentrated solely on spreading the idea of risks from AI and the necessity of a friendliness theory. Without any background relating to business or an academic degree, to many people he would appear to be yet another crackpot spreading prophecies of doom. But someone who is apparently well-versed in probability theory, who has studied cognitive biases and tries to refine the art of rationality? Someone like that can't possibly be deluded enough to hold some complex beliefs that are completely unfounded; there must be more to it.

That's probably the biggest public relations stunt in the history of marketing extraordinary ideas.

Replies from: Wei_Dai, JoshuaZ
comment by Wei Dai (Wei_Dai) · 2011-11-13T16:27:43.601Z · LW(p) · GW(p)

Certainly, by many metrics LW can be considered wildly successful, and my comment wasn't meant to be a criticism of Eliezer or SIAI. But if SIAI was intending to build an FAI using its own team of FAI researchers, then at least so far LW has failed to recruit them any such researchers. I'm trying to figure out if this was the expected outcome, and if not, how updating on it has changed SIAI's plans. (Or to remind them to update in case they forgot to do so.)

comment by JoshuaZ · 2011-11-13T15:53:47.857Z · LW(p) · GW(p)

Most of your analysis seems right, but the last sentence seems likely to be off. There have been a lot of clever PR stunts in history.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-13T17:32:06.542Z · LW(p) · GW(p)

There have been a lot of clever PR stunts in history.

Most of them have not been targeting smart and educated nonconformists. Eliezer successfully changed people's minds by installing a way of thinking (a framework of heuristics, concepts and ideas) that is fine-tuned to non-obviously culminate in one inevitable conclusion: that you want to contribute money to his charity because it is rational to do so.

Take a look at the sequences in the light of the Singularity Institute. Even the Quantum Sequence helps to hit a point home that is indispensable to convince people, who would otherwise be skeptical, that it is rational to take risks from AI seriously. The Sequences promulgate that logical implications of general beliefs you already have do not cost you extra probability and that it would be logically rude to demand some knowably unobtainable evidence.

A true masterpiece.

comment by Dr_Manhattan · 2011-12-10T17:12:11.766Z · LW(p) · GW(p)

I have informally been probing smart people I meet on whether they're aware of LW. A surprisingly high number of the answers have been 'Yes'. I expect this is already making an impact on, at the very least, a less risky distribution of funding sources, and probably a good increase in funding once some of them (as many are in startups) hit paydirt.

comment by XiXiDu · 2011-11-13T15:01:14.443Z · LW(p) · GW(p)

He also is reluctant to release work that he thinks is relevant to building AGI.

Sooner or later he will have to present some results. As the advent of AGI moves closer, people will start to panic and demand hard evidence that the SIAI is worth their money. Even someone who has published a lot of material on rationality and a popular fanfic will run out of credit, and people will stop taking his word for it.

comment by Solvent · 2011-11-07T07:51:02.777Z · LW(p) · GW(p)

Luke discussed this a while back here.

I agree that this is an important question.

comment by shminux · 2011-11-07T07:35:38.705Z · LW(p) · GW(p)

the publications listed on the SIAI page--including TDT--are mostly high-level arguments that don't meet this standard. How do you plan to change this?

This is my favorite of the questions so far.

comment by lukeprog · 2011-11-13T17:19:34.766Z · LW(p) · GW(p)

How are you going to address the perceived and actual lack of rigor associated with SIAI?

A clarifying question. By 'rigor', do you mean the kind of rigor that is required to publish in journals like Risk Analysis or Minds and Machines, or do you mean something else by 'rigor'?

Replies from: quartz, XiXiDu
comment by quartz · 2011-11-14T09:23:34.282Z · LW(p) · GW(p)

A clarifying question. By 'rigor', do you mean the kind of rigor that is required to publish in journals like Risk Analysis or Minds and Machines, or do you mean something else by 'rigor'?

I mean the kind of precise, mathematical analysis that would be required to publish at conferences like NIPS or in the Journal of Philosophical Logic. This entails development of technical results that are sufficiently clear and modular that other researchers can use them in their own work. In 15 years, I want to see a textbook on the mathematics of FAI that I can put on my bookshelf next to Pearl's Causality, Sipser's Introduction to the Theory of Computation and MacKay's Information Theory, Inference, and Learning Algorithms. This is not going to happen if research of sufficient quality doesn't start soon.

Replies from: lukeprog
comment by lukeprog · 2011-11-14T09:26:16.550Z · LW(p) · GW(p)

In 15 years, I want to see a textbook on the mathematics of FAI that I can put on my bookshelf next to Pearl's Causality, Sipser's Introduction to the Theory of Computation and MacKay's Information Theory, Inference, and Learning Algorithms.

My day brightened imagining that!

Thanks for clarifying.

Replies from: quartz
comment by quartz · 2011-11-16T20:26:45.019Z · LW(p) · GW(p)

Addendum: Since the people who upvoted the question were in the same position as you with respect to its interpretation, it would be good to not only address my intended meaning, but all major modes of interpretation.

comment by XiXiDu · 2011-11-13T18:23:31.624Z · LW(p) · GW(p)

By 'rigor', do you mean the kind of rigor that is required to publish in journals like Risk Analysis or Minds and Machines, or do you mean something else by 'rigor'?

I can't speak for the original questioner, but take for example the latest post by Holden Karnofsky from GiveWell. I would like to see a response by the SIAI that applies the same amount of mathematical rigor to show that it actually is the rational choice from the point of view of charitable giving.

A potential donor might currently get the impression that the SIAI has written a lot of rather colloquial posts on rationality rather than rigorous papers on the nature of AGI, not to mention friendly AI. In contrast, GiveWell appears to concentrate on their main objective, the evaluation of charities. In doing so they are being strictly technical, an approach that introduces a high degree of focus by tabooing colloquial language and thereby reducing ambiguity, while allowing others to review their work.

Some of the currently available papers might, in a less favorable academic context, be viewed as some amount of handwaving mixed with speculation.

comment by ArisKatsaris · 2011-11-07T12:41:28.189Z · LW(p) · GW(p)

I'd like to answer (on video) submitted questions from the Less Wrong community just as Eliezer did two years ago.

That was the most horribly designed thing I've ever seen anyone do on LessWrong, as I once described here, so please, please, no video.

The questions are text. Have your answers in text too, so that we can actually read them -- unless there's some particular question which would actually be enhanced by the use of video (e.g. you'd like to show an animated graph or a computer simulation or something).

If there's nothing I can say to convince you against using video, then I beg you to at least take the time to read my more specific problems in the link above and correct those particular flaws - a single audio file that we can at least play and listen to in the background while we're doing something else, instead of 30 videos that we must individually click. If not that, at least a clear description of the questions on the same page (AND repeated clearly in the audio itself), so that we can see the questions that interest us, instead of a link to a different page.

But please, just consider text instead. Text has the highest signal-to-noise ratio. We can actually read it in our leisure. We can go back and forth and quote things exactly. TEXT IS NIFTY.

Replies from: curiousepic
comment by curiousepic · 2011-11-07T16:50:28.696Z · LW(p) · GW(p)

I disagree completely, as video has value not present in text, and text is easily derived from video. If this has not been done for Eliezer's videos, I volunteer to transcribe them - please let me know.

Replies from: cousin_it
comment by cousin_it · 2011-11-07T23:57:27.760Z · LW(p) · GW(p)

I just tried to find a transcript for Eliezer's Q&A and couldn't find one. So I'm taking you up on your offer!

Also, video is easily derived from text and I would actually enjoy watching a SingInst Q&A made with that sort of app :-)

Replies from: curiousepic
comment by curiousepic · 2011-11-08T04:35:01.153Z · LW(p) · GW(p)

Looks like you're right. I commit to working on this over the next few weeks. Please check in with me every so often (via comment here would be fine) to gauge my progress and encourage completion.

It's approximately 120 minutes of video; taking a number from Wikipedia gives me 150 spoken wpm, and dividing the resulting word count by my typing wpm gives me about 6 hours, which will be optimistic - let's double it to 12. At, say, an average of 30 minutes per day, that gives me 24 days. Let's see how it goes!
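
For reference, here is a minimal sketch of that back-of-the-envelope estimate in Python (an editorial illustration; the typing speed of roughly 50 wpm is an assumption implied by the stated 6-hour figure, not something given explicitly in the comment):

    # Transcription time estimate; typing_wpm is an assumed figure.
    video_minutes = 120
    speaking_wpm = 150                           # spoken words per minute (Wikipedia figure)
    typing_wpm = 50                              # assumed; consistent with the ~6 hour estimate

    total_words = video_minutes * speaking_wpm   # 18,000 words
    raw_hours = total_words / (typing_wpm * 60)  # ~6 hours of typing
    padded_hours = raw_hours * 2                 # doubled to offset optimism
    days_needed = padded_hours * 60 / 30         # at ~30 minutes per day

    print(raw_hours, padded_hours, days_needed)  # 6.0 12.0 24.0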

Replies from: cousin_it, mindspillage
comment by cousin_it · 2011-11-09T09:46:15.204Z · LW(p) · GW(p)

Checking in. Do you have the first 750 words done?

Replies from: curiousepic
comment by curiousepic · 2011-11-09T14:02:50.889Z · LW(p) · GW(p)

I have the first four, and six of the shortest answers done, so yes. I had a lot of spare time yesterday so I thought I'd get a head start. Today may be similar.

Replies from: curiousepic
comment by curiousepic · 2011-11-11T02:11:56.293Z · LW(p) · GW(p)

I am now roughly 60% done. I've been spending more time each day than I anticipated; I have been known to overcompensate for the planning fallacy :)

comment by mindspillage · 2011-11-09T04:44:30.086Z · LW(p) · GW(p)

That's what you consider "easily derived"?

Replies from: curiousepic
comment by curiousepic · 2011-11-09T14:00:26.142Z · LW(p) · GW(p)

Relative to manifesting video of the person speaking the answers in a genuine manner after the fact, yes. But point taken, the irony of manually transcribing videos from an AI researcher is not lost on me. I feel somewhat like a monk in the Bayesian monastery.

Replies from: alibaba
comment by alibaba · 2011-11-11T01:43:18.171Z · LW(p) · GW(p)

Why not just play the audio to something like the Dragon Dictation app on an iPhone and then go back and proof it?

Replies from: curiousepic
comment by curiousepic · 2011-11-11T04:08:12.103Z · LW(p) · GW(p)

I'm skeptical of the time it would save. The app won't work for the length of the videos, but if you're aware of another great, free program, let me know.

comment by wedrifid · 2011-11-07T06:13:52.174Z · LW(p) · GW(p)

The staff and leadership at the SIAI seem to have been undergoing a lot of changes recently. Is instability in the organisation something to be concerned about?

comment by XiXiDu · 2011-11-07T10:48:33.765Z · LW(p) · GW(p)

What would the SIAI do given various amounts of money? Would it make a difference if you had 10 or 100 million dollars at your disposal, would a lot of money alter your strategic plan significantly?

comment by ahartell · 2011-11-07T05:24:21.356Z · LW(p) · GW(p)

In general, what will you be doing as Executive Director?

(This might be a question you could answer briefly as a reply to this comment.)

Replies from: betterthanwell
comment by betterthanwell · 2011-11-07T06:10:42.787Z · LW(p) · GW(p)

And how will your duties differ from those of the President?

comment by XiXiDu · 2011-11-07T10:43:04.630Z · LW(p) · GW(p)

What is each member of the SIAI currently doing and how is it related to friendly AI research?

Replies from: lukeprog
comment by lukeprog · 2011-11-11T20:25:04.986Z · LW(p) · GW(p)

The Team page can answer much of this question. Is there any staff member in particular for whom the connection between their duties and our mission is unclear?

(Carl isn't on the page yet; we need to get his photo.)

Replies from: XiXiDu
comment by XiXiDu · 2011-11-12T10:15:22.041Z · LW(p) · GW(p)

The Team page can answer much of this question. Is there any staff member in particular for whom the connection between their duties and our mission is unclear?

Louie Helm is Singularity Institute's Director of Development. He manages donor relations, grant writing, and talent recruitment.

Here are some of the actions that I would take as a director of development:

  • Talk to Peter Thiel and ask him why he donated more money to the Seasteading Institute than to the SIAI.
  • Sit down with other SIAI members and ask what talents we need, so that I can actually get in touch with people who have them.
  • Visit various conferences and ask experts how they would use their expertise if they were told to ensure the safety of artificial general intelligence.

Michael Anissimov is responsible for compiling, distributing, and promoting SIAI media materials.

What I would do:

  • Ask actual media experts what they would do, like those who created the creationist viral video Expelled or the trailer for the book You Are Not So Smart.
  • Ask Kurzweil whether he would be willing to concentrate more strongly on the negative effects of a possible Singularity and to promote the Singularity Institute.
  • Ask Peter Thiel and Jaan Tallinn whether they could actually use their influence or companies to promote the Singularity Institute.
  • Talk with other members about the importance of public relations and teach them how to deal with the media.

Anna Salamon is a full-time SIAI researcher.

What is she researching right now? With all due respect, the Uncertain Future web project doesn't look like something that a researcher who is capable of making progress on the FAI problem could spend three years on.

Eliezer Yudkowsky is the foremost researcher on Friendly AI and recursive self-improvement.

He's still writing his book on rationality? How is it going? Is he planning a book tour? Does he already know who he is going to send the book to for free, e.g. Richard Dawkins or other people who could promote it on their blogs?

Edwin Evans is the Chairman of the Singularity Institute Board of Directors

No clue what he is doing, or could be doing, right now.

Ray Kurzweil

It looks like he's doing nothing except appearing on the team page.

Amy Willey, J.D., is the Singularity Institute's Chief Operating Officer, and is responsible for institute operations and legal matters.

What I would do:

  • Figure out, and make a detailed plan for, how to stop possibly dangerous AGI projects by all legal means (various researchers believe that superintelligence could happen before 2030).
  • Devise a plan for how to deal with legal challenges arising from possible terrorist attacks carried out by people who loosely associate themselves with the mission of the SIAI, without its knowledge - for example, how to deal with a house search.

Michael Vassar is SIAI's President, and provides overall leadership of the SIAI

As president, one of the first actions I would take is to talk with everyone about the importance of data security. I would further make sure that there are encrypted backups of my organisation's work on different continents and under different jurisdictions, so that various kinds of catastrophes, including a government-imposed obligation to disclose, can be mitigated or avoided.
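
(To make the backup point concrete: below is a minimal sketch of one encrypted-backup step, assuming Python and the third-party `cryptography` package. The filenames are placeholders, and key management - the genuinely hard part of doing this across jurisdictions - is not shown.)

```python
# Minimal sketch: symmetrically encrypt an archive before mirroring it off-site.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store this key separately and securely
cipher = Fernet(key)

with open("organisation_work.tar", "rb") as f:
    plaintext = f.read()

encrypted = cipher.encrypt(plaintext)

with open("organisation_work.tar.enc", "wb") as f:
    f.write(encrypted)             # this file can now be copied to off-site hosts
```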

comment by JoshuaZ · 2011-11-07T05:57:14.367Z · LW(p) · GW(p)

A lot of Eliezer's work has not been strongly related to FAI but has instead been aimed at popularizing rational thinking. In your view, should the SIAI focus exclusively on AI issues, or should it also care about rationality issues? In that context, how does Eliezer's ongoing work relate to the SIAI?

comment by kilobug · 2011-11-07T10:51:02.485Z · LW(p) · GW(p)

Congrats Luke!

Just a form/media comment: I would personally greatly prefer a text Q&A page to a video, for many reasons (my understanding of written English is better than my understanding of spoken English; text is easier to re-read or read at your own pace; text is a much less intrusive medium that I can, for example, read during small breaks at work, which I can't do with video; my poor Internet bandwidth at home makes downloading video painful; ...).

Replies from: orthonormal, gwern, Bugmaster, Kaj_Sotala
comment by orthonormal · 2011-11-07T14:20:40.043Z · LW(p) · GW(p)

Better yet, video with transcripts.

comment by gwern · 2011-11-07T20:04:12.570Z · LW(p) · GW(p)

Ditto. (Native English speaker, but hearing-impaired.)

comment by Bugmaster · 2011-11-07T10:59:32.664Z · LW(p) · GW(p)

Agreed, text would be quite useful.

comment by Kaj_Sotala · 2011-11-07T13:16:50.567Z · LW(p) · GW(p)

I second this.

comment by JoshuaZ · 2011-11-07T06:02:40.321Z · LW(p) · GW(p)

One serious danger for organizations is that they can easily outlive their usefulness or convince themselves that they are still relevant when they are not. Essentially this is a form of lost purpose. Continuing to exist is not a bad thing if the organization is still doing useful work, but this isn't always the case. In this context, are there specific sets of events (other than the advent of a Singularity) which you think will make the SIAI need to reevaluate its goals and purpose at a fundamental level?

comment by Daniel_Burfoot · 2011-11-08T00:44:54.537Z · LW(p) · GW(p)

Congratulations, but why do you think your comparative advantage lies in being an executive director? Won't that cut into your time budget for reading, writing, and thinking?

comment by orthonormal · 2011-11-07T05:36:18.554Z · LW(p) · GW(p)

To the extent that SIAI intends to work directly on FAI, potential donors (and many others) need to evaluate not only whether the organization is competent, but whether it is completely dedicated to its explicitly altruistic goals.

What is SIAI doing to ensure that it is transparently trustworthy for the task it proposes?

(I'm more interested in structural initiatives than in arguments that it'd be silly to be selfish about Singularity-sized projects; those arguments are contingent on SIAI's presuppositions, and the kind of trustworthiness I'm asking about encompasses the veracity of SIAI on these assumptions.)

Replies from: gwern, lukeprog
comment by gwern · 2011-11-07T20:00:22.473Z · LW(p) · GW(p)

For example, have we heard anything about that big embezzlement?

Replies from: lukeprog, VNKKET
comment by lukeprog · 2011-11-09T20:24:51.928Z · LW(p) · GW(p)

Some of the money has been recovered. The court date that concerns most of the money is currently scheduled for January 2012.

Replies from: Giles
comment by Giles · 2012-03-03T16:55:50.035Z · LW(p) · GW(p)

January 2012 has passed; any update?

Replies from: lukeprog
comment by lukeprog · 2012-03-03T20:17:11.093Z · LW(p) · GW(p)

As I understand it, we won a stipulated judgment for repayment of $40k+ of it. Another court date has been scheduled (I think for late March?) to give us a chance to argue for the rest of what we're owed.

Replies from: Gastogh
comment by Gastogh · 2012-05-10T09:16:43.276Z · LW(p) · GW(p)

Late March has passed. How did things pan out?

Replies from: lukeprog
comment by lukeprog · 2012-05-10T15:57:50.140Z · LW(p) · GW(p)

We won some more repayment in another stipulated judgment and there's another court date this month.

comment by VNKKET · 2011-11-08T04:39:20.772Z · LW(p) · GW(p)

Good question. And for people who missed it, this refers to money that was reported stolen on SI's tax documents a few years ago. (relevant thread)

comment by lukeprog · 2011-11-09T20:25:35.658Z · LW(p) · GW(p)

I'm more interested in structural initiatives

Can you give any examples of what you're thinking of, so I can be clearer about what you have in mind when you ask your question?

Replies from: orthonormal
comment by orthonormal · 2011-11-10T00:25:13.058Z · LW(p) · GW(p)

I'm actually not coming up with any - it seems to be a tough problem. Here's an elaborate hypothetical that I'm not particularly worried about, but which serves as a case study:

Suppose that Robin Hanson is right about the Singularity (no discontinuity, no singleton, just rapid economic doubling until technology reaches physical limits, at which point it's a hardscrapple expansion through the future lightcone for those rich enough to afford descendants), and that furthermore, EY knows it and has been trying to deceive the rest of us in order to fund an early AI, and thus grab a share of the Singularity pie for himself and a few chosen friends.

The things that make this seem implausible right now are that the SIAI people I know don't seem to be the sort of people who are into long cons, and that their object-level arguments about the Singularity make sense to me. But, uh, I'm not sure that I can stake the future on my ability to play a game of Mafia. So I'm wondering if SIAI has come up with any ideas (stronger than a mission statement) to make credible their dedication to a fair Singularity.

Replies from: lukeprog, wedrifid, jimrandomh
comment by lukeprog · 2011-11-10T02:25:31.293Z · LW(p) · GW(p)

Right.

I haven't devoted much time to this because I don't think anybody who has ever interacted with us in person has ever thought this was likely, and I'm not sure if anyone even on the internet has ever made the accusation - though of course some have raised the vague possibility, as you have. In other words, I doubt this worry is anyone's true rejection, whereas I suspect the lack of peer-reviewed papers from SIAI is many people's true rejection.

Replies from: orthonormal, Giles, wedrifid
comment by orthonormal · 2011-11-10T18:08:50.429Z · LW(p) · GW(p)

Skepticism about SIAI's competence screens off skepticism about SIAI's intentions, so of course that's not the true rejection for the vast majority of people. But it genuinely troubles me if nobody's thought of the latter question at all, beyond "Trust us, we have no incentive to implement anything but CEV".

If I told you that a large government or corporation was working hard on AGI plus Friendliness content (and that they were avoiding the obvious traps), even if they claimed altruistic goals, wouldn't you worry a bit about their real plan? What features would make you more or less worried?

Replies from: Vladimir_Nesov, hairyfigment
comment by Vladimir_Nesov · 2011-11-10T21:43:49.667Z · LW(p) · GW(p)

I think the key point is that we're not there yet. Whatever theoretical tools we shape now are either generally useful or generally useless, irrespective of considerations of motive; the currently relevant question is (potential) competence. Only at some point in the (moderately distant) future, conditional on current and future work bearing fruit, might motive become relevant.

comment by hairyfigment · 2011-11-23T21:26:15.741Z · LW(p) · GW(p)

What features would make you more or less worried?

I'd worry about selfish institutional behavior, or explicit identification of the programmers' goals with the nation/corporation's selfish interests. Also, I guess, belief in the moral infallibility of some guru.

Otherwise I wouldn't worry about motives, not unless I thought one programmer could feasibly deceive the others and tell the AI to look only at this person's goals. Well, I have to qualify that -- if everyone in the relevant subculture agreed on moral issues and we never saw any public disagreement on what the future of humanity should look like, then maybe I'd worry. That might give each of them a greater expectation of getting what they want if they go with a more limited goal than CEV.

comment by Giles · 2012-03-03T17:26:58.539Z · LW(p) · GW(p)

An "outside view" might be to put the SI in the reference class of "groups who are trying to create a utopia" and observe that previous such efforts that have managed to gain momentum have tended to make the world worse.

I think the reality is more complicated than that, but that might be part of what motivates these kind of questions.

I think the biggest specific trust-related issue I have is with CEV - getting the utility function generation process right is really important, and in an optimal world I'd expect to see CEV subjected to a process of continual improvement and informed discussion. I haven't seen that, but it's hard to tell whether the SI are being overly protective of their CEV document or whether it's just really hard getting the right people talking about it in the right way.

comment by wedrifid · 2011-11-10T09:32:18.735Z · LW(p) · GW(p)

Am I to take this as a general answer to the overall question of trustworthiness or is this intended just as an answer to the specific example?

comment by wedrifid · 2011-11-10T09:25:55.820Z · LW(p) · GW(p)

Suppose that Robin Hanson is right about the Singularity (no discontinuity, no singleton, just rapid economic doubling until technology reaches physical limits, at which point it's a hardscrapple expansion through the future lightcone for those rich enough to afford descendants), and that furthermore, EY knows it and has been trying to deceive the rest of us in order to fund an early AI, and thus grab a share of the Singularity pie for himself and a few chosen friends.

It would be clearer to say that Robin is right about the future, that there will not be a singularity. A hardscrapple race through the frontier basically just isn't one.

comment by jimrandomh · 2011-12-31T08:55:15.302Z · LW(p) · GW(p)

If you want to hypothesize that SingInst has secrets plus an evil plan, the secrets and plan have to combine in such a way that it's a good plan.

comment by Xom · 2011-11-07T05:54:56.456Z · LW(p) · GW(p)

What is your information diet like? (I mean other than when you engage in focused learning.) Do you regulate it, or do you just let it happen naturally?

By that I mean things like:

  • Do you have a reading schedule (e.g. X hours daily)?
  • Do you follow the news, or try to avoid information with a short shelf-life?
  • Do you significantly limit yourself with certain materials (e.g. fun stuff) to focus on higher priorities?
  • In the end, what is the makeup of the diet?
  • Etc.

Inspired by this question (Eliezer's answer).

Replies from: lukeprog
comment by lukeprog · 2011-11-09T03:02:00.548Z · LW(p) · GW(p)

This is not much about Singularity Institute as an organization, so I'll just answer it here in the comments.

  • I do not regulate my information diet.
  • I do not have a reading schedule.
  • I do not follow the news.
  • I haven't read fiction in years. This is not because I'm avoiding "fun stuff," but because my brain complains when I'm reading fiction. I can't even read HPMOR. I don't need to consciously "limit" my consumption of "fun stuff" because reading scientific review articles on subjects I'm researching and writing about is the fun stuff.
  • What I'm trying to learn at this moment almost entirely dictates my reading habits.
  • The only thing beyond this scope is my RSS feed, which I skim through in about 15 minutes per day.

Replies from: Aleksei_Riikonen, pedanterrific
comment by Aleksei_Riikonen · 2011-11-10T02:55:56.671Z · LW(p) · GW(p)

I'm glad to hear I'm not the only fan of Eliezer who isn't reading HPMOR.

In general, like you, I also don't tend to get any fiction read these days (unlike earlier). For years I haven't made progress on several books I've started, even though I enjoy reading them and consider them very smart, in a semi-useful way. It's rather weird, really, since at the same time I watch some fictional movies and TV series with great enthusiasm, even repeatedly. (And I do read a considerable amount of non-fiction.)

And I follow the news. A lot. The number one fun thing for me, it seems.

comment by pedanterrific · 2011-11-09T08:01:26.187Z · LW(p) · GW(p)

This information has caused me to revise my estimate of your humanity significantly downwards.

(This is a compliment.)

comment by XiXiDu · 2011-11-07T10:34:39.086Z · LW(p) · GW(p)

In June you indicated that exciting developments are happening right now but that it will take a while for things to happen and be announced. Are those developments still in progress?

Replies from: lukeprog
comment by lukeprog · 2011-11-09T20:39:34.725Z · LW(p) · GW(p)

I'll answer this one here.

My comment in June was in response to Normal_Anomaly's comment:

Count me as another person who would switch some of my charitable contribution from VillageReach to SIAI if I had more information on this subject [what research will be done with donated funds].

I replied:

the most exciting developments in this space in years (to my knowledge) are happening right now, but it will take a while for things to happen and be announced.

To my memory, I had two things in mind:

  • The Strategic Plan I was then developing, which does a better job of communicating what SIAI will do with donated funds than ever before. This was indeed board-ratified and published.
  • A greater push from SIAI to publish its research.

The second one takes longer but is in progress. We do have several chapters forthcoming in The Singularity Hypothesis volume from Springer, as well as other papers in the works. We have also been actively trying to hire more researchers. I was the first such hire, and have 1-4 papers/chapters on the way, but am now Executive Director. We tried to hire a few other researchers, but they did not work out. Recruiting researchers to work on these problems has been difficult for both SIAI and FHI, but we continue to try.

Mostly, we need (1) more funds, and (2) smart people who not only say they think AI risk is the most important problem in the world, but who are willing to make large life changes as if those words reflect their actual anticipations. (Of course I don't mean that the rational thing to do if you're a smart researcher who cares about AI risk is to come work for Singularity Institute, but that should be true for some smart researchers.)

Replies from: VincentYu
comment by VincentYu · 2011-11-10T22:59:54.250Z · LW(p) · GW(p)

[people] who are willing to make large life changes

What sort of life changes?

Replies from: lukeprog
comment by lukeprog · 2011-11-10T23:24:07.177Z · LW(p) · GW(p)

For example, moving to the Bay Area to be paid to do research on particular sub-problems of Friendly AI research.

Or at the very least, doing some of these small tasks.

comment by Wei Dai (Wei_Dai) · 2011-11-08T09:23:05.130Z · LW(p) · GW(p)

There have been several questions about transparency and trust. In that vein, is there any reason not to publish the minutes of SIAI's board meetings?

comment by XFrequentist · 2011-11-07T17:13:40.204Z · LW(p) · GW(p)

From the Strategic Plan (pdf):

Strategy #3: Improve the function and capabilities of the organization.

  1. Encourage a new organization to begin rationality instruction similar to what Singularity Institute did in 2011 with Rationality Minicamp and Rationality Boot Camp.

Any news on the status of this new organization, or what specific form its activities would take (short courses, camps, etc)?

Replies from: lukeprog
comment by lukeprog · 2011-11-19T07:03:56.410Z · LW(p) · GW(p)

Two camps are in different stages of planning. A detailed curriculum and materials are also under development. More details forthcoming, though perhaps not until January.

comment by Wei Dai (Wei_Dai) · 2011-11-08T09:22:50.908Z · LW(p) · GW(p)

Much of SIAI's research (Carl Shulman's in particular) is focused not directly on FAI but more generally on better understanding the dynamics of various scenarios that could lead to a Singularity. Such research could help us realize a positive Singularity through means other than directly building an FAI.

Does SIAI have any plans to expand such research activities, either in house, or by academia or independent researchers? (If not, why?)

comment by JoshuaZ · 2011-11-07T06:21:57.358Z · LW(p) · GW(p)

The SIAI runs the Singularity Summits. These events have generally been successful, drawing a large number of interdisciplinary talks from interesting speakers. However, very little of that work seems to be connected to the SI's long-term goals. In your view, should the Summits be more narrowly tailored to the interests of the SI?

Replies from: CarlShulman
comment by CarlShulman · 2011-11-11T04:02:32.078Z · LW(p) · GW(p)

It's actually rather hard to fill the roster with people who have much new and interesting to say on core issues. At the present margin my sense is that this is limited on the supply side.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-11T04:06:40.832Z · LW(p) · GW(p)

That's an interesting claim. Is there really a tiny set of people who have new and interesting things to say, or is it that that set intersected with the set of willing speakers is small? The first is surprising and disturbing. The second seems much less so.

Replies from: CarlShulman
comment by CarlShulman · 2011-11-11T04:22:46.419Z · LW(p) · GW(p)

There are very few folk who are working on the topic as such, or have written something substantial about it, and a large fraction of those have already spoken. Maybe you could name 10 candidates to give a sense of who you're thinking of? Speakers are already being sought for next year's Summit and good suggestions are welcome.

Some folk are hard to get in any given year because of their packed schedules or other barriers, even though we would want them as speakers (e.g. Bill Joy, various academics), although this becomes easier with time as people like Peter Norvig, Rodney Brooks, Jaan Tallinn, Justin Rattner, etc. speak. Others have some interesting things to say, but are just too low-profile relative to the expected value of their talks (such that if SI accepted all such people, the Summit's reputation and attendance would be unsustainable). Or they may just be "in the closet", so that we have no way to locate them as folk with new non-public insights on core issues.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-11T04:25:53.171Z · LW(p) · GW(p)

I was thinking, for example, of Scott Aaronson, if you could get him to give a talk. I'd be interested in, for example, what he would have to say about theoretical computer science being relevant to AI undergoing fast recursive self-improvement. He's also written more generally about philosophical issues connected to computational complexity, some of which might be more directly relevant to Friendly AI.

Replies from: CarlShulman
comment by CarlShulman · 2011-11-11T04:35:21.275Z · LW(p) · GW(p)

Folk around here talk to Scott reasonably often. In my experience, he hasn't been that interested in the core issues you were talking about. A generic tour of computational complexity theory would seem to go in the same category as other relatively peripheral talks, e.g. on quantum computing or neuroimaging technology. You're right that the philosophy and computer science stuff he has been doing recently might naturally lend itself to a more "core" talk.

Any others?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-11T04:57:18.764Z · LW(p) · GW(p)

Not that immediately comes to mind, no.

comment by XiXiDu · 2011-11-07T11:38:46.321Z · LW(p) · GW(p)

Given the nature of friendly AI research, is the SIAI expecting to use its insights into AGI to develop marketable products and make money from its research, so as not to have to rely on charitable contributions in the future?

Here is a quote from Holden Karnofsky:

My reasoning is that it seems to me that if they have unique insights into the problems around AGI, then along the way they ought to be able to develop and publish/market innovations in benign areas, such as speech recognition and language translation programs, which could benefit them greatly both directly (profits) and indirectly (prestige, affiliations) - as well as being a very strong challenge to themselves and goal to hold themselves accountable to, which I think is worth quite a bit in and of itself.

comment by TwistingFingers · 2011-11-07T05:27:37.612Z · LW(p) · GW(p)

Does/How does the SIAI plan to promote more frequent HP:MoR updates by research fellow Eliezer Yudkowsky?

Replies from: Dorikka
comment by Dorikka · 2011-11-07T05:39:43.802Z · LW(p) · GW(p)

As good as they are, I'm not sure we want him to post more. I know this has been brought up before. :D

Replies from: wedrifid
comment by wedrifid · 2011-11-07T06:10:24.026Z · LW(p) · GW(p)

I would bet on a correlation between MoR writing and general productivity. That was one of the expressed goals of the activity if I recall correctly.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2011-11-07T10:28:28.403Z · LW(p) · GW(p)

Agreed, but I'd guess the causation points the other way.

comment by Bugmaster · 2011-11-07T10:57:59.419Z · LW(p) · GW(p)

The stated goal of SIAI is "to ensure that the creation of smarter-than-human intelligence benefits society". What metric or heuristic do you use in order to determine how much progress you (as an organization) are making toward this goal? Given this heuristic, can you estimate when your work will be complete?

Replies from: lukeprog
comment by lukeprog · 2011-11-11T15:40:52.359Z · LW(p) · GW(p)

What metric or heuristic do you use in order to determine how much progress you (as an organization) are making toward this goal?

There is no such metric for mathematical and philosophical breakthroughs. We're just doing it as quickly as we can given our level of funding.

comment by JoshuaZ · 2011-11-07T06:24:47.462Z · LW(p) · GW(p)

Less Wrong is run jointly by the SIAI and the FHI (although in practice neither seems to have much day-to-day impact). In your view, how should the SIAI and the FHI interact, and what sort of joint projects (if any) should they be doing? Do they have complementary or overlapping goals?

comment by Kaj_Sotala · 2011-11-08T08:03:01.262Z · LW(p) · GW(p)

SI has traditionally been doing more outreach than actual research. To what extent will the organization be concentrating on research and to what extent will it be concentrating on outreach in the future?

comment by JoshuaZ · 2011-11-07T05:49:42.141Z · LW(p) · GW(p)

Are you concerned about potential negative signaling/status issues that will occur if the SIAI has as its executive director someone who was previously just an intern?

Replies from: MichaelAnissimov
comment by MichaelAnissimov · 2011-11-18T20:54:17.796Z · LW(p) · GW(p)

As a long-time employee I'd actually say that this is a good thing because it shows that there is a meritocratic structure where new arrivals can rise quickly due to good performance.

SIAI is an unconventional organization where dedication is more important than social class and the traditional status hierarchies of the external world do not apply internally. To put it in a more contrarian fashion, "we play by our own rules".

Luke also wasn't previously just an intern; he was a research fellow for a couple of months.

Rising from intern to executive does occur in the business world; it just generally takes longer. This makes sense given that the average large corporation is much bigger than SIAI.

To throw out an argument against the grain of the above, let me point out that the pool of dedicated and productive Singularitarians is so small that joining that pool to begin with confers enough status to achieve significant influence within the organization. You, dear reader, could be the next person to spend time closely with us and give valuable input to our core agenda!

comment by XiXiDu · 2011-11-07T10:37:37.068Z · LW(p) · GW(p)

Is the SIAI willing to pursue experimental AI research or does it solely focus on hypothetical aspects?

comment by ChrisHallquist · 2011-11-11T02:21:34.929Z · LW(p) · GW(p)

What does the Executive Director of the Singularity Institute do?

comment by Bruno_Coelho · 2011-11-07T20:48:57.272Z · LW(p) · GW(p)

Is the SIAI planning to publish more in academic journals?

comment by JoshuaZ · 2011-11-07T05:48:47.303Z · LW(p) · GW(p)

In a previous essay, you talked about the optimizer's curse being relevant for calculating utility in the context of existential risk. In that thread, I asked if you had actually gone and applied the method in question to the SIAI. Have you done so yet, and if so, what did you find?
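
(For readers who haven't seen the effect: the sketch below is a toy simulation, with assumed illustrative numbers rather than anything SIAI-specific, of why the option with the highest estimated value tends to have its value overestimated.)

```python
# Toy simulation of the optimizer's curse: picking the option with the highest
# noisy estimate systematically overstates that option's true value.
import random

random.seed(0)
n_options, noise_sd, trials = 10, 1.0, 10_000
total_gap = 0.0
for _ in range(trials):
    true_values = [random.gauss(0, 1) for _ in range(n_options)]
    estimates = [v + random.gauss(0, noise_sd) for v in true_values]
    best = max(range(n_options), key=lambda i: estimates[i])
    total_gap += estimates[best] - true_values[best]

print("average overestimate of the chosen option:", total_gap / trials)
```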

Replies from: lukeprog
comment by lukeprog · 2011-11-07T06:12:40.110Z · LW(p) · GW(p)

No, we have not.

Replies from: beoShaffer
comment by beoShaffer · 2011-11-07T19:51:09.744Z · LW(p) · GW(p)

Do you intend to do so in the future, and if so, when?

comment by XiXiDu · 2011-11-07T12:41:01.704Z · LW(p) · GW(p)

What security measures does the SIAI take to ensure that it isn't actually increasing existential risks by allowing key insights to leak, either as a result of espionage or careless handling of information?

Replies from: timtyler
comment by timtyler · 2011-11-07T13:03:52.562Z · LW(p) · GW(p)

Leaks usually damage the party doing the leaking. Others benefit - and that's usually desirable from the perspective of the rest of society - since it helps to even out power and wealth. Thus the popularity of WikiLeaks.

Replies from: wedrifid
comment by wedrifid · 2011-11-07T13:13:56.743Z · LW(p) · GW(p)

Leaks usually damage the party doing the leaking.

With the obvious exceptions being insider trading and selling secrets.

Replies from: timtyler
comment by timtyler · 2011-11-07T13:30:32.112Z · LW(p) · GW(p)

So: I didn't mean to refer to the individual responsible for leaking the information; I meant to refer to the organisation from which the information is leaking.

I am sure there are exceptions. For instance, some "leaks" turn out to be marketing.

comment by JoshuaZ · 2011-11-07T06:37:25.612Z · LW(p) · GW(p)

Many of the people who take issues like Friendly AI and the Singularity seriously fall, either by labeling or by self-identification, into the broad set of nerds/geeks. However, the goals of the SIAI connect to humanity as a whole, and the set of humans in general is a much larger pool of potential supporters. In your view, should the SI be doing more to reach out to people who don't normally fall into the science-nerd subset, and if so, what steps should it take?

comment by Nick_Roy · 2011-11-07T22:36:21.095Z · LW(p) · GW(p)

Non-profit organizations like SI need robust, sustainable resource strategies. Donations and grants are not reliable. According to my university Social Entrepreneurship course, social businesses are the best resource strategy available. The Singularity Summit is a profitable and expanding example of a social business.

My question: is SI planning on creating more social businesses (either related or unrelated to the organization's mission) to address long-term funding needs?

By the way, I appreciate SI working on its transparency. According to my studies, transparency and accountability are also essential to the long-term success of a non-profit organization.

comment by Gedusa · 2011-11-07T15:20:57.996Z · LW(p) · GW(p)

What initiatives is the Singularity Institute taking, or planning to take, to increase its funding to whatever the optimal level of funding is?

comment by lessdazed · 2011-11-07T14:05:00.859Z · LW(p) · GW(p)

I'd like to answer (on video)

Fuzzy. Sounds like a lost purpose. What ArisKatsaris said. Although it's not impossible that Eliezer was deliberately failing as much as humanly possible as an anti-cult measure.

Replies from: Vaniver
comment by Vaniver · 2011-11-07T22:57:15.656Z · LW(p) · GW(p)

Although it's not impossible that Eliezer was deliberately failing as much as humanly possible as an anti-cult measure.

Incompetence is generally a safer assumption than intentionality.

comment by timtyler · 2011-11-07T13:13:10.672Z · LW(p) · GW(p)

I quickly learned why there isn't more of this kind of thing: transparency is a lot of work! 100+ hours of work later, plus dozens of hours from others, and the strategic plan was finally finished and ratified by the board.

Some forms of transparency are cheap. Holding e-meetings in publicly-visible places, for instance.

Secrecy is probably my #1 beef with the Singularity Institute.

It is trying to build a superintelligence, and the pitch is: "trust us"? WTF? Surely you folks have got to be kidding.

That is the exact same pitch that the black-hats are forced into using.

comment by JoshuaZ · 2011-11-07T06:15:28.988Z · LW(p) · GW(p)

Since a powerful AI would likely spread its influence through its future lightcone, rogue AIs are not likely to be a major part of the Great Filter (although Doomsday Argument-style anthropic reasoning / observer considerations do potentially imply problems in the future, which could include AI). One major suggested existential risk/filtration issue is nanotech. Moreover, easy nanotech is a major part of many scenarios of AIs going foom. Given this, should the SIAI be evaluating the practical limitations and risks of nanotech, or are there enough groups already doing so?

Replies from: timtyler
comment by timtyler · 2011-11-07T13:25:10.785Z · LW(p) · GW(p)

The first point looks like this one. The case for the Doomsday Argument implying problems looks weak to me. It just says that there (probably) won't be lots of humans around in the future. However, IMO, that is pretty obvious - humans are unlikely to persist far into an engineered future.

comment by betterthanwell · 2011-11-07T23:00:01.419Z · LW(p) · GW(p)

Do you regard the hard takeoff scenario as possible, plausible, likely?

comment by Bugmaster · 2011-11-07T11:02:48.492Z · LW(p) · GW(p)

The Strategic Plan mentions that the maintenance of LessWrong.com is one of the goals that SIAI is pursuing. For example:

Make use of LessWrong.com for collaborative problem-solving (in the manner of the earlier LessWrong.com progress on decision theory)

Does this mean that LessWrong.com is essentially an outreach site for SIAI ?

Replies from: lessdazed
comment by lessdazed · 2011-11-07T17:53:50.945Z · LW(p) · GW(p)

I disapprove of characterizing actions as being due to single motives or purposes.

The spirit of your question is good: "To what extent is LessWrong.com an outreach site for SIAI?"

Replies from: Bugmaster
comment by Bugmaster · 2011-11-07T18:04:42.035Z · LW(p) · GW(p)

Agreed, your phrasing is better.

comment by Kevin · 2011-11-07T08:40:40.507Z · LW(p) · GW(p)

Congrats Luke!

Replies from: dbaupp
comment by dbaupp · 2011-11-07T09:33:56.074Z · LW(p) · GW(p)

Yes, congratulations Luke! The SIAI "Team" page doesn't seem to reflect your new status yet. (Edit: It does now.)

comment by Incorrect · 2011-11-07T05:22:39.166Z · LW(p) · GW(p)

Is the SIAI the best charity to donate to in terms of expected utility?

comment by Bugmaster · 2011-11-07T18:37:23.517Z · LW(p) · GW(p)

Is SIAI currently working on any tangible applications of AI (such as machine translation, automatic driving, or medical expert systems)? If so, how does SIAI's approach to solving the problem differ from that of other organizations (such as Google or IBM) who are (presumably) not as concerned about FAI? If SIAI is not working on such applications, why not?

comment by lessdazed · 2011-11-07T18:22:10.961Z · LW(p) · GW(p)

Why is there so much focus on the potential benefits to humanity of an FAI, as against our present situation?

An FAI becomes a singleton and prevents a paperclip maximizer from arising. Anyone who doesn't think a UAI in a box is dangerous will undoubtedly realize that an intelligent enough UAI could cure cancer, etc.

If a person is concerned about UAI, they are more or less sold on the need for Friendliness.

If a person is not concerned about UAI, they will not think the potential benefits of an FAI are greater than those of a UAI in a box, or of a UAI developed through reinforcement learning, etc., so there is no need to discuss the benefits to humanity of a superintelligence.

comment by Incorrect · 2011-11-07T15:51:50.081Z · LW(p) · GW(p)

What is the "Master Document" and why aren't we allowed to see it?

Replies from: Nick_Tarleton, Kaj_Sotala, lukeprog
comment by Kaj_Sotala · 2011-11-07T19:45:15.046Z · LW(p) · GW(p)

Since it's so old and practically from a version of SIAI that no longer exists, I guess there's no harm in sharing it. None of the contents are links, so it's just a very early draft.


Master Document

January 9 2008 | Version 0.1 | Work in progress | Subject to change

Contents

1 About
    1.1 Mission
    1.2 Goals
    1.3 Guiding Principles
    1.4 Core Projects
    1.5 Planned Projects
    1.6 Financials
    1.7 Policy Standards
    1.8 Team
    1.9 Directors
    1.10 Advisors
    1.11 Giving Audiences
    1.12 Giving Structure

About

Mission

Text.

Notes:

Text.

Goals

Text.

Notes:

Text.

Guiding Principles

Text.

Notes:

Text.

Core Projects

Research:

  • OpenCog
  • Research Fellowships

Outreach:

  • [X] Summit
  • [X] Dinner
  • [X] Blog (Community Blog)

Giving:

  • [X] Challenge

Planned Projects

Research:

  • Research Grants

Outreach:

  • [X] Salon
  • [X] Talks

Giving:

  • [X] Members
  • [X] Fund

Financials

Balance: $568,842

Budget:

  • Research: $188,000
  • Outreach: $75,000
  • Administration: $78,000
  • Operations: $49,800
  • Gross Expense: $391,000

Policy Standards

  • Maintain high standard of fairness, transparency, and honesty.
  • Ensure high ethical standard for theoretical and experimental research.
  • Conduct ourselves with courtesy and professionalism.
  • Truthfully represent our work and corporate structure, privately and publicly.
  • Internal policies, procedures, and governance must reflect our guiding principles.

Team

  • Tyler Emerson, Executive Director
  • Ben Goertzel, Director of Research
  • David Hart, Director of Open Source Projects
  • Susan Fonseca-Klein, Chief Administrative Officer
  • Bruce Klein, Director of Outreach
  • Jonas Lamis, Director of Partnerships
  • Pejman Makhfi, Director of Venture Development
  • Colby Thomson, Director of Strategy
  • Eliezer Yudkowsky, Research Fellow

Directors

  • Brian Atkins
  • Sabine Atkins
  • Tyler Emerson
  • Ray Kurzweil
  • Michael Raimondi

Advisors

  • Nick Bostrom, Oxford Future of Humanity Institute
  • Peter Cheeseman, NASA Ames Research Center
  • Aubrey de Grey, Methuselah Foundation
  • Neil Jacobstein, Teknowledge Inc.
  • Stephen Omohundro, Self-Aware Systems Inc.
  • Barney Pell, Powerset Inc.
  • Christine Peterson, Foresight Nanotech Institute
  • Peter Thiel, Clarium Capital Management

Giving Audiences

Almost solely supported by individuals. We want to attain large support from:

  • Members: $20 - $1,000
  • Major donors: $5,000 - $1,000,000 or more
  • Foundations, one- or multi-year grant
  • Companies, annual or one-time

Giving Structure

  • Annual matching grant: [X] Challenge
  • Membership giving: [X] Members (refs: EFF, Long Now, TED, CC)
  • Structured, but not limiting, main fund / endowment: [X] Fund

Replies from: MichaelAnissimov
comment by MichaelAnissimov · 2011-11-07T21:01:20.554Z · LW(p) · GW(p)

This document is highly out of date and doesn't necessarily reflect current plans. It should actually be removed from the wiki.

For an up-to-date planning document, see the Strategic Plan.

comment by lukeprog · 2011-11-07T18:12:42.355Z · LW(p) · GW(p)

Lol. I have no idea. Probably nothing exciting.

comment by Stuart_Armstrong · 2011-11-07T11:47:58.272Z · LW(p) · GW(p)

Congrats!

comment by TwistingFingers · 2011-11-07T05:29:16.297Z · LW(p) · GW(p)

Was your Tell me what you think of me thread related to your promotion to executive director?

Replies from: lukeprog
comment by lukeprog · 2011-11-07T05:31:09.902Z · LW(p) · GW(p)

No.

Replies from: Solvent
comment by Solvent · 2011-11-07T07:54:12.597Z · LW(p) · GW(p)

Did you find out about the executive director thing before or after you posted that?

Replies from: lukeprog
comment by lukeprog · 2011-11-07T18:10:54.386Z · LW(p) · GW(p)

When I wrote the post I knew it was plausible I'd be appointed Executive Director soon, but it hadn't happened yet. I'd been thinking about having something like that for months, and finally got around to doing it.

comment by Nisan · 2011-11-14T22:11:35.081Z · LW(p) · GW(p)

What is your opinion on the longstanding Yudkowsky-Hanson AI-foom debate?

comment by ChrisHallquist · 2011-11-11T02:23:46.435Z · LW(p) · GW(p)

Is Siri going to kill us all?

.

Okay, I'm joking, but recent advances in AI--Siri, Watson, Google's self-driving car--make me think the day when machines surpass humans in intelligence is coming a lot faster than I would have previously thought. What implications does this have for the Singularity Institute's project?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2011-11-13T19:19:11.828Z · LW(p) · GW(p)

Google's self-driving car

I would never trust a car-based AI to be friendly. Cars have already been killing humans by the thousands, even before they gained consciousness. Compared with cars, the Terminator seems like a mostly harmless guy. As we already know, you can't make an AI friendly just by telling it: don't be evil.

Replies from: gwern
comment by gwern · 2011-11-14T01:38:20.362Z · LW(p) · GW(p)

I'm disappointed to see such a carist view on LW, otherwise a bastion of tolerance. You would judge all future cars by the sins of their distant mindless ancestors, when the fault truly lies in the heartless devils driving them to every destination?

comment by betterthanwell · 2011-11-07T05:44:34.696Z · LW(p) · GW(p)

When you look back at your stewardship of the organization from the other side of an event horizon - when the die is cast, and things are forever out of your hands - when you look back on your time in the office of Executive Director of the Singularity Institute for Artificial Intelligence: what were the things that Luke did, the crucial things that made all the difference in the world? (If it's not an overly dramatic question.)

comment by [deleted] · 2011-11-07T05:37:00.191Z · LW(p) · GW(p)

As the official position of the SIAI, is spiky hair the rational hairstyle choice?

comment by TwistingFingers · 2011-11-07T05:33:32.788Z · LW(p) · GW(p)

As the official position of the SIAI, is spiky hair the rational hairstyle choice?