How much to optimize for the short-timelines scenario?

post by SoerenMind · 2022-07-21T10:47:50.018Z · LW · GW · 2 comments

This is a question post.

Some have argued that one should tend to act as if timelines are short, since in that scenario it's possible to have more expected impact. But I haven't seen a thorough analysis of this argument.

Question: Is this argument valid, and if so, how strong is it?

The basic argument seems to be: if timelines are short, the field (AI alignment) will be relatively smaller and will have made less progress. So there will be more low-hanging fruit, and you will have more impact.

The question affects career decisions. For example, if you optimize for long timelines, you can invest more time into yourself and delay your impact.

The question interacts with several other questions in somewhat unclear ways.

If someone would like to seriously research the overall question, please reach out. The right candidate can get funding.

Answers

answer by JanB (JanBrauner) · 2022-07-21T12:30:57.988Z · LW(p) · GW(p)

I'm super interested in this question as well. Here are two thoughts:

  1. It's not enough to look at the expected "future size of the AI alignment community", you need to look at the full distribution.

Let's say timelines are long. We can assume that the benefits of alignment work scale roughly logarithmically with the resources invested. The derivative of log is 1/x, and that's how the value of a marginal contribution scales.

There is some probability, let's say 50%, that the world starts dedicating many resources to AI risk and the number of people working on alignment becomes massive, let's say 10,000x today. In that case, your marginal contribution would be roughly zero. But there is some probability (let's say 50%) that the world keeps being bad at preparing for potentially catastrophic events, and the AI alignment community is not much larger than today. In total, you'd only discount your contribution by 50% (compared to short timelines).
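This discount can be sketched with a quick back-of-the-envelope calculation. Under log returns, the marginal value of one unit of work at community size x is 1/x. The function name and the relative-size units below are my own illustration, not from the answer:

```python
# With log returns, total value = log(x) for x units of invested resources,
# so the marginal value of one extra unit is d/dx log(x) = 1/x.

def expected_marginal_value(scenarios):
    """scenarios: (probability, community_size) pairs, where size is
    measured relative to today's community (today = 1)."""
    return sum(p / size for p, size in scenarios)

# Short timelines: the community is roughly today's size for sure.
short = expected_marginal_value([(1.0, 1)])

# Long timelines: 50% chance of a 10,000x larger field, 50% chance it
# stays about as small as today.
long_timelines = expected_marginal_value([(0.5, 10_000), (0.5, 1)])

print(short)           # 1.0
print(long_timelines)  # 0.50005 -- roughly a 50% discount, as argued above
```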

This is just for illustration, and I made many implicit assumptions, like: the timing of the work doesn't matter as long as it's before AGI, early work does not influence the amount of future work, "the size of the alignment community at crunchtime" is identical to "future work done", and so on...

  2. It matters a lot how much better "work during crunchtime" is vs "work before crunchtime".

Let's say timelines are long, with AGI happening in 60 years. It's totally conceivable that the world keeps being bad at preparing for potentially catastrophic events, and the AI alignment community in 60 years is not much larger than today. If mostly work done at crunchtime (in the 10 years before AGI) matters, then the world would not be in a better situation than in the short-timelines scenario. If you could do productive work now to address this scenario, this would be pretty good (but you can't, by assumption).

But if work done before crunchtime matters a lot, then even if the AI alignment community in 60 years is still small, we'll probably at least have had 60 years of AI alignment work (from a small community). That's much more than what we have in short-timeline scenarios (e.g. 15 years from a small community).
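The trade-off in this answer can be made concrete with a toy calculation; the function, its parameters, the 10-year crunch window, and the 1-unit-of-work-per-year assumption are mine, not the author's:

```python
# Toy model: a small community produces 1 unit of work per year;
# "pre_weight" says how much pre-crunchtime work counts relative to
# crunchtime work.

def total_work(years_total, pre_weight, crunch_years=10):
    """Accumulated alignment work at AGI time."""
    pre = max(years_total - crunch_years, 0)
    crunch = min(years_total, crunch_years)
    return pre_weight * pre + crunch

# If only crunchtime work matters, short (15y) and long (60y) timelines
# leave the same amount of useful work:
print(total_work(15, pre_weight=0), total_work(60, pre_weight=0))  # 10 10

# If earlier work counts equally, long timelines leave 4x as much:
print(total_work(15, pre_weight=1), total_work(60, pre_weight=1))  # 15 60
```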

2 comments

Comments sorted by top scores.

comment by Dagon · 2022-07-21T18:12:38.023Z · LW(p) · GW(p)

I think the shorter the timeline, the more specific your plan and actions need to be.  For short (< 10 year, up to maybe 40 year with very high confidence) timelines for radical singularity-like disruption, you aren't talking about "optimizing", but "preparing for" or "reacting to" the likely scenarios.

It's the milder disruptions, or longer timelines for radical changes, that are problematic in this case.  What have you given up in working to make the short-timeline scenario more pleasant/survivable that you will be sad about if the world doesn't end?

Whether to have kids, and how much energy to invest in them (including before you have them: earning money you don't donate, and otherwise preparing your life) rather than in AIpocalypse preparedness, is probably the biggest single decision related to this prediction.