Q&A #2 with Singularity Institute Executive Director

post by lukeprog · 2011-12-13T06:48:20.199Z · LW · GW · Legacy · 48 comments

Contents

  The Rules (same as before)
48 comments

Just over a month ago I posted a call for questions about the Singularity Institute. The reaction to my video response was positive enough that I'd like to do another one — though I can't promise video this time. I think that the Singularity Institute has a lot of transparency "catching up" to do.

 

The Rules (same as before)

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about the Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov and in Eliezer's Singularity Summit 2011 talk.

4) Please provide links to things referenced by your question.

5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin preparing responses to.

 

I might respond to certain questions within the comments thread; for example, when there is a one-word answer to the question.

You may repeat questions that I did not answer in the first round, and you may ask follow-up questions to the answers I gave in round one.

48 comments

Comments sorted by top scores.

comment by Bugmaster · 2011-12-13T07:55:02.271Z · LW(p) · GW(p)

In the previous video, you said that publishing in mainstream journals might be a waste of time, due to the amount of "post-production" involved. In addition, you said that SIAI would prefer to keep its AGI research secret -- otherwise, someone might read it, implement the un-Friendly AGI, and doom us all. You followed that up by saying that SIAI is more interested in "technical problems in mathematics, computer science, and philosophy" than in experimental AI research.

In light of the above, what does the SIAI actually do? You don't submit your work to rigorous scrutiny by your peers in the field (you'd need peer review for that); you either aren't doing any AGI research, or are keeping it so secret that no one knows about it (which makes it impossible to gauge your progress, if any); and you aren't developing any practical applications of AI, either (since you'd need experimentation for that). So, what is it that you are actually working on, other than growing the SIAI itself?

Replies from: None
comment by [deleted] · 2013-06-05T10:43:49.974Z · LW(p) · GW(p)

What is the point of having a Q&A if you avoid answering difficult questions like this?

comment by FAWS · 2011-12-13T08:59:32.024Z · LW(p) · GW(p)

Presumably you relatively recently gained access to whatever research SingInst does not make public, if any exists. Were you surprised at the level of progress already made, either positively or negatively?

Replies from: lukeprog
comment by lukeprog · 2011-12-13T10:59:50.398Z · LW(p) · GW(p)

Yes, strongly and positively.

Replies from: None, None
comment by [deleted] · 2011-12-14T02:06:07.288Z · LW(p) · GW(p)

This is encouraging. From outside it looks like SIAI is stuck.

Replies from: lukeprog
comment by lukeprog · 2011-12-14T02:15:25.955Z · LW(p) · GW(p)

That was my impression too, and then I landed in Berkeley and thought, "Woah! What the hell? Why haven't you guys published all that shit?"

And then I started trying to write it up and I was like, "Oh yeah. Writing stuff up takes lots of time and effort."

Replies from: None, katydee
comment by [deleted] · 2011-12-14T02:21:19.228Z · LW(p) · GW(p)

So you really do need more journal-monkeys, eh? Maybe I should think about the visiting fellows thing. (I'm poor, so I can't give money yet.)

Why can't you just post a quick blurb that you've solved such-and-such problem and the solution is along these lines? Surely it doesn't have to be journal articles? Maybe there is a component of secrecy?

Replies from: lukeprog
comment by lukeprog · 2011-12-14T02:25:55.195Z · LW(p) · GW(p)

By 'writing these things up' I don't mean journal articles, I mean blog posts or working papers. The problem is that it takes significant time and effort just to explain the problem and our results somewhat clearly.

Replies from: Vaniver, None
comment by Vaniver · 2011-12-14T17:49:04.402Z · LW(p) · GW(p)

If you haven't explained your results, are you sure you actually have them? That sounds to me like "I already figured out the algorithm, I won't learn anything by coding it."

Replies from: lukeprog
comment by lukeprog · 2011-12-14T17:52:12.769Z · LW(p) · GW(p)

I tend to agree with this, too, though my own brain does "thinking by writing" more than other brains, I think.

comment by [deleted] · 2011-12-14T02:29:25.574Z · LW(p) · GW(p)

That bad, eh? See you next year.

comment by katydee · 2011-12-18T20:51:00.699Z · LW(p) · GW(p)

Do you think that the same thing might be the case for other x-risks organizations? I recall that the previous analysis of other future tech safety/x-risks organizations didn't seem to find anything very promising -- might it be the case that those organizations also have stuff going on behind the scenes? If so, this seems like it might be a significant barrier to the greater x-risks community, since these organizations may be duplicating one another's results or otherwise inefficiently allocating their respective resources, volunteers, etc.

Replies from: lukeprog
comment by lukeprog · 2011-12-19T00:41:40.916Z · LW(p) · GW(p)

It's always the case that more research is being done than gets published. I know it's true for FHI, too. It's just especially true of SI.

Replies from: katydee
comment by katydee · 2011-12-19T17:59:39.980Z · LW(p) · GW(p)

I was thinking more about groups like Lifeboat or IEET, who don't really appear to be doing any research at all, as opposed to FHI/SIAI, who do at least occasionally publish.

comment by [deleted] · 2011-12-13T12:46:18.884Z · LW(p) · GW(p)

Is that research going to be made public?

Replies from: khafra
comment by khafra · 2011-12-13T13:44:07.321Z · LW(p) · GW(p)

The more precise question would be "what schedules are you considering for making that research public," since presumably after SI successfully builds their basement GAI they'll publish everything.

Replies from: RomeoStevens, lukeprog
comment by RomeoStevens · 2011-12-14T06:58:53.246Z · LW(p) · GW(p)

Presumably if SI builds a basement GAI publishing will not be a priority as we will either be busy bobsledding down rainbows or not being alive.

comment by lukeprog · 2011-12-13T17:48:26.866Z · LW(p) · GW(p)

It's all a matter of funding and recruiting. With no increase in funding, it will remain very difficult to publish all that research, as unpublished conceptual research will easily outpace published research unless we have a dedicated writer or two to write things up as we discover them.

comment by XiXiDu · 2011-12-13T11:33:33.381Z · LW(p) · GW(p)

What would SI do if it became apparent that AGI is at most 10 years away? For example, some researchers demonstrate the feasibility of AGI and show that they only need a few years to implement it.

(Some AGI researchers like Shane Legg assign a 10% chance of AGI by ~2018.)

comment by Kaj_Sotala · 2011-12-13T18:04:04.031Z · LW(p) · GW(p)

I find this a little odd, since there are still several highly-voted (>= 15 points, say) questions from last time unanswered. Why not answer them first? Also, is there any reason why someone shouldn't just take all of them and repost them in this thread (e.g. if you're unwilling to answer many of them, in which case it would mostly be a wasted effort and clutter the page needlessly)?

Replies from: lukeprog
comment by lukeprog · 2011-12-14T01:45:24.869Z · LW(p) · GW(p)

In my last paragraph I encouraged people to re-post from the last round. Some of them might not be voted highly in the second round even if they were voted highly in the first round, because of the answers I gave in round 1.

comment by James_Blair · 2011-12-13T08:35:25.960Z · LW(p) · GW(p)

Would the Institute consider hiring telecommuters (both in and out the US)?

Update: this question was left unanswered in the second Q&A.

comment by MileyCyrus · 2011-12-13T07:11:37.658Z · LW(p) · GW(p)

What kind of budget would be required to solve the friendly AI problem? Are we talking millions, billions or trillions?

Replies from: Curiouskid
comment by Curiouskid · 2011-12-15T01:57:33.819Z · LW(p) · GW(p)

Related question: Which groups or organizations are likely to develop AGI first and how is SIAI planning on reaching out to them?

comment by Larks · 2011-12-14T00:51:22.453Z · LW(p) · GW(p)

What is SIAI's discount rate? If I offered you $100 today in return for r*$100 in a year's time, for what r are you indifferent? Are you borrowing money, saving it, or neither?

EDIT: For context, I seem to recall Vassar once suggesting 40%.

Replies from: Vaniver
comment by Vaniver · 2011-12-14T17:45:37.160Z · LW(p) · GW(p)

I don't think you're putting the r in the right place. If their discount rate is 40%, you should be comparing $100 now with $250 next year, or $40 now with $100 next year.
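
For concreteness, here is the arithmetic behind that correction, as a hedged sketch: it assumes the recalled "40%" names an annual discount factor rather than the r in the question as posed, which is only a guess about what Vassar meant.

```latex
% Hypothetical numbers: assume the recalled "40%" names an annual discount
% factor \delta = 0.4, i.e. a dollar next year is worth $0.40 today.
\[
  \$100 \text{ today} \;\sim\; \frac{\$100}{\delta} = \$250 \text{ next year},
  \qquad
  \$100 \text{ next year} \;\sim\; \delta \times \$100 = \$40 \text{ today}.
\]
% In the question as posed ($100 today vs. r \cdot $100 next year),
% indifference then gives r = 1/\delta = 2.5, not 0.4 -- which is
% Vaniver's point about where the r belongs.
```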

comment by [deleted] · 2011-12-14T02:23:54.498Z · LW(p) · GW(p)

Given the huge potential of FAI for changing the world, are you worried that existing governments might see SIAI as a revolutionary threat once it starts to look like you have a serious chance of completing the goal?

comment by antigonus · 2011-12-13T09:13:34.251Z · LW(p) · GW(p)

You mentioned recently that SIAI is pushing toward publishing an "Open Problems in FAI" document. How much impact do you expect this document to have? Do you intend to keep track? If so, and if it's less impactful than expected, what lesson(s) might you draw from this?

comment by [deleted] · 2011-12-13T15:35:55.556Z · LW(p) · GW(p)

How much do members' predictions of when the singularity will happen differ within the Singularity Institute?

Replies from: XiXiDu
comment by XiXiDu · 2011-12-13T16:20:46.577Z · LW(p) · GW(p)

How much do members' predictions of when the singularity will happen differ within the Singularity Institute?

Eliezer Yudkowsky wrote:

John did ask about timescales and my answer was that I had no logical way of knowing the answer to that question and was reluctant to just make one up.

...

As for guessing the timescales, that actually seems to me much harder than guessing the qualitative answer to the question “Will an intelligence explosion occur?”

There is more there; best to start here and read all the way down to the bottom of that thread. I think that discussion captures some of the best arguments in favor of friendly AI in the most concise form you can currently find.

comment by Nick_Roy · 2011-12-13T19:41:52.924Z · LW(p) · GW(p)

Since it's difficult to predict the date of the invention of AGI, has SI thought about/made plans for how to work on the FAI problem for many decades, or perhaps even centuries, if necessary?

Replies from: Curiouskid
comment by Curiouskid · 2011-12-15T01:55:15.681Z · LW(p) · GW(p)

As a subset of this question, do you think that establishing a school with the express purpose of training future rationalists/AGI programmers from an early age is a good idea? Don't you think that people who've been raised with strong epistemic hygiene should be building AGI rather than people who didn't acquire such hygiene until later in life?

The only reasons I can see for it not working would be:

  1. predictions that AGIs will come before the next generation of rationalists comes along. (which is also a question of how early to start such an education program).
  2. belief that our current researchers are up to the challenge. (even then, having lots of people who've had a structured education designed to produce the best FAI researchers would undeniably reduce existential risk. no?)

EDIT (for clarification): Eliezer has said:

"I think that saving the human species eventually comes down to, metaphorically speaking, nine people and a brain in a box in a basement"

Just as they would be building an intelligence greater than themselves, so too must we build human intelligences greater than ourselves.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-12-26T19:55:15.756Z · LW(p) · GW(p)

The only reasons I can see for it not working would be:

  1. predictions that AGIs will come before the next generation of rationalists comes along. (which is also a question of how early to start such an education program).
  2. belief that our current researchers are up to the challenge. (even then, having lots of people who've had a structured education designed to produce the best FAI researchers would undeniably reduce existential risk. no?)

I can't speak for the SIAI, but to me this sounds like a suboptimal use of resources, and bad PR. It trips my "this would sound cultish to the average person" buzzer. Starting a school that claimed it "emphasized critical thinking" to teach rationalists might be a good idea for someone with administrative talents who wanted to work on x-risk, but I can't see SIAI doing it.

Replies from: Curiouskid
comment by Curiouskid · 2011-12-27T03:31:46.372Z · LW(p) · GW(p)

How would you distribute resources? I think this is a natural response if one accepts the premise that the main bottleneck to AGI is a few key insights by geniuses (as Eliezer says).

Why do we care if people who aren't logical enough to see the reasoning behind the school think we're cultish?

comment by James_Miller · 2011-12-13T17:49:14.427Z · LW(p) · GW(p)

In 2009 EY asked "What's the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?"

rhollerith_dot_com responded "That the EV of the humans is coherent and does not care how much suffering exists in the universe."

Vassar responded to this with the scariest thing I've read on LessWrong which was:

"But you believe that, don't you? I certainly place a MUCH higher probability on that than on the sort of claims some people have proposed."

Do you agree with Vassar's reply?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-12-14T00:19:58.251Z · LW(p) · GW(p)

Vassar's purpose with the first of the two sentences you quote is to point out that I was playing the game wrong. Specifically, the mere fact that I was replying with something to which I had already assigned significant probability before starting the exercise was evidence to Vassar that I had not properly grasped the spirit of the exercise.

The second sentence of the quote can be interpreted as a continuation of the theme of "You're playing the game wrong, Hollerith," if, as seems likely to me now, Vassar saw the purpose (or one of the purposes) of the game as coming up with a statement whose probability (as judged by the player himself) outside the context of the game is as low as possible.

Vassar is very skilled at understanding other people's points of view. Moreover, he saw his job at this time in large part as a negotiator among the singularitarians, which probably caused him to try to get even better at understanding unusual points of view. Finally, during the two years leading up to this exchange that you quote I had been spamming Overcoming Bias pretty hard with my outre system of valuing things (which by the way I have since abandoned -- I am pretty much a humanist now) so of course Vassar had had plenty of exposure to my point of view.

Have you asked Vassar what he meant by the two sentences you quoted?

Living in the Bay Area as I do, I have had a couple of conversations with Vassar, and I applied to the visiting fellows program when Vassar was the main determiner of who got in (I did not get in). I have absolutely no evidence that the above sentence means anything more than that Vassar, at the time he wrote it, spent a lot of time trying to understand many different points of view -- the more different from his own, the better -- and perhaps that, like some other extremely bright people (Bernard Shaw being one), he gets a kick out of pursuing lines of thought with people that, despite seeming absurd or monstrous at first, have a certain odd or subtle integrity or a faint ring of truth to them.

comment by Daniel_Burfoot · 2011-12-14T04:28:38.073Z · LW(p) · GW(p)

If an infallible oracle told you humanity was about to enter a period of extended stagnation comparable to the Dark Ages, what projects would you prioritize right now to ensure humanity's long term survival?

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2011-12-15T17:05:11.731Z · LW(p) · GW(p)

Upvoted out of curiosity, but this question seems along the lines of "what books would you take if you were marooned on an island" (I do not see a connection to the SIAI mission, unless you give this scenario high probability).

comment by [deleted] · 2011-12-15T21:42:26.264Z · LW(p) · GW(p)

If SIAI cannot perform any kind of "Trinity Test" (as the Manhattan Project could), and demands secrecy to protect the world from anyone evil obtaining FAI intelligence, how does the organization quell the idea that it is augmenting Pascal's Wager, while keeping the fear paradox, for technology?

Since SIAI needs secrecy, transparency cannot be absolute, yet you will need considerable funding. So, how does SIAI plan to avoid looking like Enron when asking for funding to research a much-needed "Negative Ionic Tractor Disruptor" (South Park, 311), and when the philanthropists, some of whom may be "evil", want to see how the sausage is made?

Sorry if this is a juvenile question; it is really the same question twice. I will try to get to the literature you recommend on the "So You Want To Save The World" site soon. How do you navigate this necessary tension between secrecy and transparency and stay a viable organization?

comment by robertzk (Technoguyrob) · 2011-12-14T12:48:23.149Z · LW(p) · GW(p)

Minds are not chronologically commutative with respect to input data. Reading libertarian philosophy followed by Marxist philosophy will give you a different connectome than reading them in the reverse order. As a result, you will have distinct values in each scenario and act accordingly. Put another way, human values are extremely dependent on initial input parameters (your early social and educational history). Childhood brainwashing can give the resulting adult arbitrary values (as evinced by such quirks as suicide bombers and voluntary eunuchs). However, by providing such a malleable organism, evolution found a very cute trick that allowed for seemingly impossible computation (the development of mathematics, science, etc.).

I assume that in the definition of GAI it is implicit that the AI can do mathematics and science as well as or better than humans can, so as to achieve goals that require a physical restructuring of reality. Since the only example of a computational process capable of generating these things (humans) is so malleable in its values, what basis (mathematical or otherwise) does the SIAI have for assuming that Friendliness is achievable? Keep in mind that a GAI should be able to think and comprehend all the things humans can and have thought (including the architectural problems in Friendliness), or at least something functionally isomorphic.

comment by roland · 2011-12-13T19:13:34.871Z · LW(p) · GW(p)

I see a problem with LW (I don't know if you consider this part of SI) in that non-conforming comments are often downvoted, regardless of whether they are right or wrong. I think part of the blame lies with this article by EY:

http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/

My big concern: what safeguards are in place to distinguish real "weeds" from those that only look like "weeds" because they go against incorrect beliefs on LW? In other words, assuming that there are incorrect beliefs in LW/SI, shouldn't there be more room for contrarian POVs to be expressed?

Replies from: Kaj_Sotala, lessdazed
comment by Kaj_Sotala · 2011-12-13T23:10:38.442Z · LW(p) · GW(p)

Examples? Well-argued contrarian comments and posts seem to get pretty regularly upvoted, as far as I've observed.

Replies from: saturn, roland
comment by saturn · 2011-12-14T08:59:24.306Z · LW(p) · GW(p)

Considering the author of the comment, I would guess the examples have to do with 9/11 being an inside job.

Replies from: Incorrect
comment by Incorrect · 2011-12-16T20:54:18.825Z · LW(p) · GW(p)

Hah, what a low status belief.

comment by roland · 2011-12-14T19:43:58.844Z · LW(p) · GW(p)

I won't get into an argument about this, especially since we have both already argued a lot about certain issues which I don't want to get into here (in fact, I am not even allowed to, according to some LWers who have taken on the role of judges), and LW has many smart people who could argue as effectively for either side if they wished to do so. I wrote my comment based on personal experience on this site, and I've been a member here since the days when it was still overcomingbias.com.

You see, here is my point: someone writes a comment expressing a certain sentiment or POV, and you can either start arguing over it or consider that "maybe he has a point and we should give some consideration to this issue."

Btw, I don't know what is going on but this is my first comment of the day and as soon as I try to post it I get the message "You are trying to submit too fast. try again in 5 minutes." The same happens every time I try to comment. Is this some kind of filter? Please turn it off.

Replies from: Emile, Normal_Anomaly
comment by Emile · 2011-12-16T20:59:52.371Z · LW(p) · GW(p)

I think there's a filter that depends on karma, so that heavily downvoted posters have to slow down their posting rate, but since you have positive karma I'm not sure why it's triggering for you. Maybe there's a bug, dunno.
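
As an aside on the mechanism Emile is describing, here is a minimal sketch of how a karma-dependent posting throttle might work; the class name, thresholds, and five-minute interval are illustrative assumptions, not LessWrong's actual code.

```python
import time

# Hypothetical sketch of a karma-dependent posting rate limit, along the lines
# Emile describes. Thresholds, intervals, and names are made up for illustration.

class PostRateLimiter:
    def __init__(self, negative_karma_interval=300, default_interval=0):
        # Users with negative karma must wait `negative_karma_interval` seconds
        # between comments; everyone else waits `default_interval` seconds.
        self.negative_karma_interval = negative_karma_interval
        self.default_interval = default_interval
        self.last_post_time = {}  # user id -> timestamp of last accepted comment

    def required_interval(self, karma):
        return self.negative_karma_interval if karma < 0 else self.default_interval

    def try_post(self, user_id, karma, now=None):
        """Return True if the comment is accepted, False if the user must wait."""
        now = time.time() if now is None else now
        last = self.last_post_time.get(user_id)
        if last is not None and now - last < self.required_interval(karma):
            return False  # "You are trying to submit too fast."
        self.last_post_time[user_id] = now
        return True

# Under this sketch, a user with positive karma is never throttled, which is
# why the message roland saw would suggest a bug rather than the filter.
limiter = PostRateLimiter()
assert limiter.try_post("roland", karma=500)
assert limiter.try_post("roland", karma=500)     # accepted immediately
assert limiter.try_post("troll", karma=-30)
assert not limiter.try_post("troll", karma=-30)  # throttled for five minutes
```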

comment by Normal_Anomaly · 2011-12-26T19:59:21.308Z · LW(p) · GW(p)

If it hasn't stopped giving you that message, try different combinations of browser, computer, and throwaway account. If it's a bug, one of the former should help. If it's a filter, which sounds unlikely given your karma and the message text, that won't help but you should be able to post through a throwaway account without getting the message.

comment by lessdazed · 2011-12-23T18:27:28.157Z · LW(p) · GW(p)

don't know if you consider this part of SI

It would be proper to say "how much" or similar rather than "if."