How much to optimize for the short-timelines scenario?
post by SoerenMind · 2022-07-21T10:47:50.018Z · LW · GW · 2 comments
This is a question post.
Some have argued that one should tend to act as if timelines are short since in that scenario it's possible to have more expected impact. But I haven't seen a thorough analysis of this argument.
Question: Is this argument valid, and if so, how strong is it?
The basic argument seems to be: if timelines are short, the field (AI alignment) will be relatively smaller and will have made less progress. So there will be more low-hanging fruit, and you can have more impact.
The question affects career decisions. For example, if you optimize for long timelines, you can invest more time into yourself and delay your impact.
The question interacts with the following questions in somewhat unclear ways:
- How fast do returns to more work diminish (or increase)?
  - If returns don't diminish, the argument above fails.
  - If the field will grow very quickly, returns will diminish faster.
- Is your work much more effective when it's early?
  - This may happen because work can be hard to parallelize (‘9 women can't make a baby in 1 month’). Field-building can also be more effective earlier, since the field can compound-grow over time, so someone should start early.
  - If work is most effective earlier, you shouldn't lose too much time investing in yourself.
- Is work much more effective at crunch time?
  - If yes, you should focus more on investing in yourself (or do field-building for crunch time) instead of doing preparatory research.
- If timelines are longer, is this evidence that we'll need a paradigm shift in ML that makes alignment easier/harder?
  - (This question seems less tractable than the others.)
- Is your comparative advantage to optimize for short or long timelines?
  - For example, young people can contribute more easily given longer timelines, and vice versa.
If someone would like to seriously research the overall question, please reach out. The right candidate can get funding.
Answers
answer by JanBrauner
I'm super interested in this question as well. Here are two thoughts:
- It's not enough to look at the expected "future size of the AI alignment community"; you need to look at the full distribution.
Let's say timelines are long. We can assume that the benefits of alignment work scale roughly logarithmically with the resources invested. The derivative of log(x) is 1/x, and that's how the value of a marginal contribution scales.
There is some probability, let's say 50%, that the world starts dedicating many resources to AI risk and the number of people working on alignment becomes massive, say 10,000x today's. In that case, your contribution would be roughly zero. But there is some probability (let's say 50%) that the world keeps being bad at preparing for potentially catastrophic events, and the AI alignment community ends up not much larger than today. In total, you'd only discount your expected contribution by about 50% (compared to short timelines).
This is just for illustration, and I made many implicit assumptions, like: the timing of the work doesn't matter as long as it's before AGI, early work does not influence the amount of future work, "the size of the alignment community at crunch time" is identical to "future work done", and so on.
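A minimal numerical sketch of this point (all numbers are the illustrative assumptions from the paragraphs above, not estimates): with logarithmic returns, the marginal value of one unit of work at field size x is 1/x, so a 50/50 mixture over "huge field" and "field like today" only roughly halves your expected contribution.

```python
# Toy sketch, assuming logarithmic returns to total alignment work:
# the marginal value of one unit of work at field size x is d/dx log(x) = 1/x.
def marginal_value(field_size):
    return 1.0 / field_size

today = 1.0  # normalize today's field size to 1

# Short timelines: the field at crunch time looks roughly like today's.
short_timelines = marginal_value(today)

# Long timelines (illustrative 50/50 mixture from the paragraph above):
# 50% chance the field is 10000x today's (marginal value ~0),
# 50% chance it is still about as small as today.
long_timelines = 0.5 * marginal_value(10000 * today) + 0.5 * marginal_value(today)

print(short_timelines)                   # 1.0
print(long_timelines)                    # 0.50005
print(long_timelines / short_timelines)  # ~0.5: a ~50% discount, not a ~10000x one
```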
- It matters a lot how much better "work during crunch time" is than "work before crunch time".
Let's say timelines are long, with AGI happening in 60 years. It's totally conceivable that the world keeps being bad at preparing for potentially catastrophic events, and that the AI alignment community in 60 years is not much larger than today. If mostly work done at crunch time (the 10 years before AGI) matters, then the world would not be in a better situation than in the short-timelines scenario. If you could do productive work now to address this scenario, that would be pretty good (but you can't, by assumption).
But if work done before crunch time matters a lot, then even if the AI alignment community in 60 years is still small, we'll probably at least have had 60 years of AI alignment work (from a small community). That's much more than what we'd have in short-timeline scenarios (e.g. 15 years from a small community).
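To make this second point concrete, here is a hedged toy model (all parameters are illustrative assumptions, not figures from the post): a small community does one unit of work per year, and a hypothetical `crunch_multiplier` parameter says how much more a crunch-time year counts than an earlier year.

```python
# Toy model, assuming a small community doing 1 unit of work per year.
# crunch_multiplier says how much more a crunch-time year counts than an
# earlier year (the crunch being the last `crunch_years` before AGI).
def total_effective_work(years_to_agi, crunch_years=10,
                         crunch_multiplier=1.0, work_per_year=1.0):
    early_years = max(years_to_agi - crunch_years, 0)
    crunch = min(years_to_agi, crunch_years)
    return work_per_year * (early_years + crunch_multiplier * crunch)

for mult in (1.0, 100.0):
    short = total_effective_work(15, crunch_multiplier=mult)  # short timelines
    long = total_effective_work(60, crunch_multiplier=mult)   # long timelines
    print(f"crunch_multiplier={mult}: long/short = {long / short:.2f}")

# crunch_multiplier=1.0:   long/short = 4.00  (60 vs 15 years of work)
# crunch_multiplier=100.0: long/short = 1.04  (the two worlds look similar)
```

If crunch-time work dominates, long and short timelines leave the field in a similar position at AGI; if earlier work matters comparably, long timelines buy several times more accumulated work.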
2 comments
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-07-21T19:55:33.034Z · LW(p) · GW(p)
Related question: What considerations influence whether I have more influence over short or long timelines? [LW · GW]
comment by Dagon · 2022-07-21T18:12:38.023Z · LW(p) · GW(p)
I think the shorter the timeline, the more specific your plan and actions need to be. For short timelines (< 10 years, up to maybe 40 years with very high confidence) for radical singularity-like disruption, you aren't talking about "optimizing", but "preparing for" or "reacting to" the likely scenarios.
It's the milder disruptions, or longer timelines for radical changes, that are problematic in this case. What have you given up in working to make the short-timeline scenario more pleasant/survivable that you will be sad about if the world doesn't end?
Having kids, and how much energy to invest in them (including before you have them, in earning money you don't donate, and in otherwise preparing your life) rather than in AIpocalypse preparedness, is probably the biggest single decision related to this prediction.