Timescales Matter

post by paulfchristiano · 2011-03-30T02:50:36.129Z · LW · GW · Legacy · 14 comments

In an interview with John Baez, Eliezer responds:

I’ll try to answer the question about timescales, but first let me explain in some detail why I don’t think the decision should be dominated by that question.

He was in part addressing the tradeoff between environmental work and work on technology related to AGI or other existential risks. In this context I agree with his position.

But more broadly, as a person setting out into the world and deciding what I should do with each moment, the question of timescales is one of the most important issues bearing on my decision, and my uncertainty about it (coupled with the difficulty of acquiring evidence) is almost physically painful.

If AGI is likely in the next couple of decades (I am rather skeptical) then long-term activism or outreach are probably pointless. If AGI is not likely within this century (which also seems unlikely) then working on AGI is probably pointless.

I believe it is quite possible that I am smart enough to have a significant effect on the course of whatever field I participate in.  I also believe I could have a significant impact on the number of altruistic rationalists in the world. It seems likely that one of these options is way better than the other, and spending some time figuring out which one (and answering related, more specific questions) seems important. One of the most important ingredients in that calculation is a question of timescales. I don't trust the opinion of anyone involved with the SIAI. I don't trust the opinion of anyone in the mainstream. (In both cases I am happy to update on evidence they provide.) I don't have any good ideas on how to improve my estimate, but it feels like I should be able to.

I encounter relatively smart people giving estimates completely out of line with mine which would radically alter my behavior if I believed them. What argument have I not thought through? What evidence have I not seen? I like to believe that smart, rational people don't disagree too dramatically about questions of fact that they have huge stakes in. General confusion about AI was fine when I had it walled off in a corner of my brain with other abstruse speculation, but now that the question matters to me my uncertainty seems more dire.

14 comments

comment by Vaniver · 2011-03-30T04:31:05.228Z · LW(p) · GW(p)

It seems to me that the difference between your estimate of the timescale and other people's estimates is that everyone has different garbage and the computation is GIGO (garbage in, garbage out). People aren't disagreeing about facts; they're disagreeing about the weights of fit parameters, which have so little data to work from that the uncertainties dwarf the values.
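
A toy numerical sketch of that point (a minimal illustration; the numbers below are invented, not drawn from anyone's actual estimates): with only a handful of inputs, resampling them moves the answer around by decades.

```python
# Toy illustration: how unstable an estimate is when it rests on a handful of
# noisy inputs. The "guesses" are made-up placeholders, not real forecasts.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: five guesses for "years until AGI".
guesses = np.array([15.0, 30.0, 45.0, 80.0, 120.0])

# Bootstrap the mean to see how much the central estimate wobbles.
boot_means = np.array([
    rng.choice(guesses, size=len(guesses), replace=True).mean()
    for _ in range(10_000)
])

print(f"point estimate: {guesses.mean():.0f} years")
print(f"bootstrap 90% interval: {np.percentile(boot_means, 5):.0f}"
      f" to {np.percentile(boot_means, 95):.0f} years")
# The interval spans decades: the inputs, not the method, dominate the output.
```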

It seems like the resolution is to get rid of your feeling that you can improve your timescale guess, and to get rid of any paralysis you have about making decisions with imperfect information. That's a necessary part of life.

comment by Vladimir_Nesov · 2011-03-30T10:43:32.991Z · LW(p) · GW(p)

If AGI is not likely within this century (which also seems unlikely) then working on AGI is probably pointless.

Ideally, Friendly AI should be understood long before AGI is feasible. Like, 200 years ago. It'll be too late when AGI is possible.

Replies from: paulfchristiano
comment by paulfchristiano · 2011-03-30T15:37:36.805Z · LW(p) · GW(p)

This seems fair. But I am unconvinced that the marginal benefit of understanding AGI now is even close to the marginal benefit of spreading and training rationality now. Unless you believe that either FAI research is very non-parallelizable or that AGI will be feasible in the near future, it seems likely that spreading rationality is more important today (even to the instrumental goal of understanding FAI before AGI is feasible). But if you do believe that AGI will be feasible in the near future, and in particular before long-term efforts to produce more rationalists will have a significant effect on the world, then working on FAI directly (or, more likely, funding current research on FAI, or attempting to influence the AI research community, or trying to do research which is likely to lead to understandable AGI, etc.) is urgent.

So I concede that understanding FAI should precede the feasibility of AGI, but without some additional argument about the timescales on which AGI is feasible, the difficulty of parallelizing FAI research, or some unexpected obstruction to spreading rationality, I am not yet convinced that I should work on or fund FAI research or do anything related to AI.

Replies from: JGWeissman
comment by JGWeissman · 2011-03-30T18:23:30.569Z · LW(p) · GW(p)

I am very skeptical about causes that engage exclusively in spreading awareness. By directing the efforts of a small proportion of the rationalists we produce towards direct work on FAI, we validate that we are in fact producing people capable of working on the problem, as opposed to merely having an exponentially growing group of people who profess "Yay FAI!".

Replies from: paulfchristiano
comment by paulfchristiano · 2011-03-31T17:01:50.739Z · LW(p) · GW(p)

I am very skeptical about causes that engage exclusively in spreading awareness.

As am I. However, here are some things I believe about the SIAI and FAI:

  1. To the average well-educated person, the efforts of the SIAI are indistinguishable from a particularly emphatic declaration of "Yay FAI!" To the average person who cares strongly about FAI, the performance of the SIAI still does not validate that "we are in fact producing people capable of working on the problem," because there are essentially no standards to judge against, no concrete theoretical results in evidence, and no suggestion that impressive theoretical advances are forthcoming. Saying "the problem is difficult" is a perfectly fine defense, but it does not give the work being done any more value as validation.

  2. The average intelligent (and even abnormally rational) non-singulatarian has little respect for the work of the SIAI, to the extent that the affiliation of the SIAI with outreach significantly reduces its credibility with the most important audience, and the (even quite vague) affiliation of an individual with SIAI makes it significantly more difficult for that individual to argue credibly about the future of humanity.

  3. It is not at all obvious that FAI is the most urgent technical problem currently in view. For example, pushing better physical understanding of the brain, better algorithmic understanding of cognition, and technology for interfacing with human brains all seem like they could have a much larger effect on the probability of a positive singularity. The real argument for normal humans working on FAI is extremely complicated and uncertain.

  4. I place fairly little value on an exponentially growing group of people interested in FAI, except insofar as they can be converted into an exponentially large group of people who care about the future of humanity and act rationally on that preference. I think there are easier ways to accomplish this goal; and on the flip side I think "merely" having an exponentially large group of rational people who care about humanity is incredibly valuable.

  5. My main concern in the direction you are pointing is the difficulty of effective outreach when the rationality on offer appears to be disconnected from reality (in particular the risk that what you are spreading will almost certainly cease to be "rationality" without some good grounding). I believe working on FAI is a uniquely bad way to overcome this difficulty, because most of the target audience (really smart people whose help is incredibly valuable) considers work on FAI even more disconnected from reality than rationality outreach itself, and because the quality or relevance of work on FAI is essentially impossible for almost anyone not directly involved with that work to assess.

comment by JoshuaZ · 2011-03-30T04:30:31.199Z · LW(p) · GW(p)

Timescales don't just matter in this way. The more time that passes before AGI, the faster it will be when it starts off, since the default hardware will be so much better. That means that if AGI is restricted primarily by the right insights rather than by general processing power, and it takes a while to hit on those insights, then the chance of bad things happening goes up. So even as the timescale extends outward, that is arguably more of a reason to focus on Friendliness-related issues.
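
A rough back-of-the-envelope of that hardware-overhang point, assuming (purely for illustration) that compute per dollar keeps doubling every couple of years: the later the key insights arrive, the more hardware the first AGI starts with.

```python
# Back-of-the-envelope only: the doubling time is an assumption chosen for
# illustration, not a claim made in the comment above.
doubling_time_years = 2.0  # assumed doubling time of compute per dollar

for delay in (10, 20, 40):  # years until the key insights arrive
    speedup = 2 ** (delay / doubling_time_years)
    print(f"insights {delay} years later -> "
          f"first AGI starts on ~{speedup:,.0f}x the hardware")
```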

That said, I have absolutely no idea how this balances with the timescale issues discussed in your post.

comment by Vladimir_Nesov · 2011-03-30T10:30:27.447Z · LW(p) · GW(p)

Please link to the interview or to the discussion post about the interview in the post.

comment by Zvi · 2011-03-31T00:44:50.401Z · LW(p) · GW(p)

Time scales matter a lot. As opposed to all the genres the HPatMoR characters think they're in, I think I'm living in a 4-X game like Civilization or Master of Orion. You don't work directly on the Science Victory Condition (in this case, FAI) if you can do better by growing your research and production capacities to build the FAI faster, but you also don't waste your time building up capacity when it's time to race for the finish line. A median of 2030 is, by my very rough and not at all accurate estimate, annoyingly close to the borderline, although mine is more along the lines of 2050.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2011-03-31T01:08:27.141Z · LW(p) · GW(p)

I think I'm living in a 4-X game like Civilization or Master of Orion.

Beware the Ludic Fallacy.

There's a great section in "The Black Swan" where Taleb is called in to consult for a casino or something, and he discovers that the biggest losses the casino ever suffered weren't due to bad luck in the blackjack pits or anything like that. One loss involved a lawsuit caused by a stage tiger that got loose and hurt a bunch of people; another loss was caused when an employee failed, for inexplicable reasons, to send a special tax form to the IRS, so the casino was hit with a big penalty.

comment by atucker · 2011-03-30T03:28:47.072Z · LW(p) · GW(p)

I think that your marginal impact is also important to consider.

It seems conceivable that you could hedge your bets by arranging to work on one thing, while someone else you have confidence in works on something else. Like, you do AGI and Bob does activism.

Does adding Bob to AGI once you're there help? By how much? If (for some odd reason) it's negligible, then it's probably better to split up.

How dedicated does Bob have to be to AGI to be helpful? If an hour a week of work on it gets 80% of the utility of his full effort, then it's also probably better for him to be doing other things. And vice versa.

comment by Wei Dai (Wei_Dai) · 2012-04-18T10:55:41.753Z · LW(p) · GW(p)

If AGI is likely in the next couple of decades (I am rather skeptical) then long-term activism or outreach are probably pointless. If AGI is not likely within this century (which also seems unlikely) then working on AGI is probably pointless.

Paul, it looks like you ended up deciding to go into FAI research instead of long-term activism or outreach. I'm curious how you reached that decision. Have you explained it anywhere?

Replies from: paulfchristiano
comment by paulfchristiano · 2012-04-21T20:17:51.138Z · LW(p) · GW(p)

Paul, it looks like you ended up deciding to go into FAI research instead of long-term activism or outreach.

I haven't made any commitments, but I'm not currently doing much FAI research.

comment by JenniferRM · 2011-03-30T19:42:03.110Z · LW(p) · GW(p)

I would be very interested in reading about your timeline opinions after you play with the Uncertain Future Web App which was created to help clarify some of these timing issues. Also, the software behind the site was open sourced a few months ago so if you are frustrated by that tool it might be possible to improve it based on feedback.

Replies from: paulfchristiano
comment by paulfchristiano · 2011-03-31T17:07:47.593Z · LW(p) · GW(p)

I didn't find the app very helpful in refining my estimate. There are too many particular ingredients (especially: how hard is AI? How well do you have to simulate a brain?) with incredible uncertainty. Not coincidentally, this is the same reason that I can't come up with a good estimate on my own.
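
For what it's worth, the shape of that complaint is easy to see in a quick Monte Carlo (a minimal sketch; the ranges below are placeholders I made up, not numbers from the app or the thread): multiplying a few ingredients that are each uncertain over several orders of magnitude produces an answer that is uncertain over even more of them.

```python
# Minimal sketch of uncertainty propagation with made-up ranges; nothing here
# reflects the Uncertain Future app's actual model.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# "How hard is AI?" -- a software-efficiency factor, log-uniform over 4 orders of magnitude
software_factor = 10 ** rng.uniform(0, 4, n)
# "How well do you have to simulate a brain?" -- required ops/sec, log-uniform over 6 orders of magnitude
compute_needed = 10 ** rng.uniform(16, 22, n)

effective_requirement = compute_needed * software_factor
low, high = np.percentile(effective_requirement, [5, 95])
print(f"90% of samples span {np.log10(high) - np.log10(low):.1f} orders of magnitude")
```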