When you plan according to your AI timelines, should you put more weight on the median future, or the median future | eventual AI alignment success? ⚖️
post by Jeffrey Ladish (jeff-ladish) · 2023-01-05T01:21:37.880Z · LW · GW · 10 comments
This is a question I'm puzzling over, and my current answer is that when it comes to decisions about AI alignment strategy, I will put more planning weight on median futures where we survive, making my effective timelines longer for some planning purposes, but not removing urgency.
I think that in most worlds where we manage to build aligned AGI systems, we managed to do this in large part because we bought more time to solve the alignment problem, probably via one of two mechanisms:
- 🤝 coordination to hold off building AGI
- 🦾 solving AI alignment in a limited way, and using a capabilities-limited AGI to negotiate for more time to develop aligned AGI
I think we are likely to buy >5 years of time via one or both of these routes in >80% of worlds where we successfully build aligned AGI.
I have a less confident estimate of my AI timelines for the median future | eventual AGI alignment success. 20 years? 60 years? I haven't thought about it enough to give a good estimate, but I think at least 10 years. Though, I think time bought for additional AI alignment work is not equally useful.[1] 🧐
Implications for me:
- 🗺 For strategy purposes, I should plan according to a distribution of worlds around my median timeline | eventual AI alignment success. For me, that's like ~30 years (it's always 30 years 💀), though I've only done a very cursory estimate of this and plan to think about it more.
- ⌛️ There's still a lot of urgency! My timelines are NOT distributed around ~30 years! If we get that time, it's mostly because we BOUGHT it. So there's urgency to work towards coordination on buying time and/or figuring out how to build aligned-and-corrigible-enough-AGI to buy more time and shepherd alignment research.
Personal considerations
- 🪣 I do have a bucket list, and for the purposes of "things I'd really like to do before I die", I go with my estimated lifespan in my median world
Terms and assumptions:
- 🤖 By AGI I mean general intelligence with significantly greater control / optimization power than human civilization
- 🦾 By capabilities-limited AGI, I mean a general intelligence with significantly greater capabilities than humans in some domains, but corrigible enough not to self improve / seize power to accomplish arbitrary goals
- 😅 My timelines are 70% chance < 20 years
- 💀 I'm assuming AGI + no alignment = human extinction
- 🏆 Solving the alignment problem = building aligned AGI
I'd love to hear how other people answer this question for themselves, and any thoughts / feedback on how I'm thinking about it. 🦜
This post is also on the EA Forum [EA · GW]
[1] I think the time bought by solving AI alignment in a limited way and using that to buy time is likely to make up a greater proportion of the total in the median world where we eventually solve alignment, compared to time obtained through human coordination efforts. However, I also think my own efforts are less important (though potentially still important) in the use-AI-to-buy-time world. Since it's hard to know how to weight these, I'm not distinguishing much between the two types of additional time right now.
10 comments
Comments sorted by top scores.
comment by Jackson Wagner · 2023-01-05T02:15:06.688Z · LW(p) · GW(p)
I would assume it's most impactful to focus on the marginal future where we survive, rather than the median? ie, the futures where humanity barely solves alignment in time, or has a dramatic close-call with AI disaster, or almost fails to build the international agreement needed to suppress certain dangerous technologies, and so on.
IMO, the marginal futures where humanity survives, are the scenarios where our actions have the most impact -- in futures that are totally doomed, it's worthless to try anything, and in other futures that go absurdly well it's similarly unimportant to contribute our own efforts. Just in the same way that our votes are more impactful when we vote in a very close election, our actions to advance AI alignment are most impactful in the scenarios balanced on a knife's edge between survival and disaster.
(I think that is the right logic for your altruistic, AI safety research efforts anyways. If you are making personal plans, like deciding whether to have children or how much to save for retirement, that's a different case with different logic to it.)
↑ comment by Charlie Steiner · 2023-01-05T03:09:42.460Z · LW(p) · GW(p)
I agree that this is accurate but worry that it doesn't help the sort of person who wants just one future to put more weight on. What futures count as marginal depend on the strategy you're considering, and on what actions you expect other people to take - you can't just find some concrete future that is "the marginal future," and only take actions that affect that one future.
If you want to avoid the computational burden of consequentialism, rather than focusing on just one future I think a solid recommendation is the virtue-ethical death with dignity strategy [LW · GW].
comment by Zach Stein-Perlman · 2023-01-05T02:16:09.209Z · LW(p) · GW(p)
Neither
Many factors are relevant to which possible futures you should upweight. For example, the following are all reasons to pay more attention to a possible set of futures (where a "possible set of futures" could be characterized by "AGI in 2050" or any other condition):
- They're more likely
- They're more tractable
- Because you see them more clearly (related: important events occur sooner, short timelines)
- Because other actors won't be paying attention around important events (related: important events occur sooner, short timelines)
- Because you'll have more influence in them
- Because P(doom) is closer to 50%
(Also take into account future research; for example, if you focus on the world in 2030 (or assume that human-level AI is developed in 2030), you can be deferring, not neglecting, work on 2040.)
↑ comment by Jeffrey Ladish (jeff-ladish) · 2023-01-05T05:53:55.163Z · LW(p) · GW(p)
I sort of agree with this abstractly and disagree in practice. I think we're just very limited in what kinds of circumstances we can reasonably estimate / guess at. Even the above claim, "in a big proportion of worlds where we survive, AGI probably gets delayed", is hard to reason about.
But I do kind of need to know the timescale I'm operating in when thinking about health, money, skill investments, etc., so I think you need to reason about it somehow.
comment by Zach Stein-Perlman · 2023-01-05T02:24:20.036Z · LW(p) · GW(p)
If you're just taking into account P(AGI in year t) and P(doom | AGI in year t), I think you should weight by probability times leverage. So weight AGI in year t by P(AGI in year t) * (P(doom | AGI in year t) - P(doom | AGI in year t)^2).
Certainly ignoring P(doom) is wrong, and certainly the asymmetry where you condition on success is wrong, it seems to me. (Conversely: why not condition on alignment failure, since those are the worlds that need you to work on them? Or: notice that you're giving the most weight to the worlds with the lowest P(doom), when a world with extremely low P(doom) doesn't need you much in expectation; you have more influence over worlds with P(doom) close to 50%.)
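The probability-times-leverage weighting above can be sketched numerically. All the numbers below are invented placeholders, not the commenter's estimates; the point is only that P(doom) * (1 - P(doom)) peaks at 50%, so a moderately likely arrival year with P(doom) near 50% can outweigh a more likely year where doom is nearly certain:

```python
def leverage_weight(p_agi_t: float, p_doom_given_t: float) -> float:
    """Probability-times-leverage weight for one arrival-year bucket:
    P(AGI in year t) * (P(doom | AGI in t) - P(doom | AGI in t)^2)."""
    return p_agi_t * (p_doom_given_t - p_doom_given_t ** 2)

# Toy buckets (purely illustrative):
years = [2030, 2040, 2050]
p_agi = [0.5, 0.3, 0.2]    # P(AGI arrives in this bucket)
p_doom = [0.95, 0.5, 0.1]  # P(doom | AGI arrives then)

weights = [leverage_weight(a, d) for a, d in zip(p_agi, p_doom)]
# 2040 gets the most weight even though 2030 is the most likely bucket,
# because its conditional P(doom) sits at the 50% knife's edge.
```

Under these made-up inputs the 2030 bucket, despite holding half the probability mass, is down-weighted because P(doom | AGI in 2030) = 0.95 leaves little leverage.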
comment by JustinShovelain · 2023-01-05T12:09:04.117Z · LW(p) · GW(p)
Roughly speaking, in terms of the actions you take, various timelines should be weighted as P(AGI in year t) * DifferenceYouCanProduceInAGIAlignmentAt(t). This produces a new, non-normalized distribution of how much to prioritize each time (you can renormalize it if you wish to make it more like a "probability").
Note that this is just a first approximation and there are additional subtleties.
- This assumes you are optimizing for each time and possible world orthogonally, but much of the time optimizing for nearby times is very similar to optimizing for a particular time.
- The definition of "you" here depends on the nature of the decision maker which can vary between a group, a person, or even a person at a particular moment.
- Using different definitions of "you" between decision makers can cause a coordination issue where different people are trying to save different potential worlds (because of their different skills and ability to produce change) and their plans may tangle with each other.
- It is difficult to figure out how much of a difference you can produce in different possible worlds and times. You do the best you can, but you might suffer a failure of imagination in finding ways your plans won't work, ways your plans will have larger positive effects, or ways you may improve your plans in the future. For more on the difference one can produce see this [LW · GW] and this [EA · GW].
- Lastly, there is a risk here psychologically and socially of fudging the calculations above to make things more comfortable.
(Meta: I may make a full post on this someday and use this reasoning often)
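The weighting in this comment can be sketched as follows. The year buckets and both sets of numbers are hypothetical placeholders (the comment gives no concrete values); the sketch just multiplies the two factors and renormalizes, as described above:

```python
# Made-up inputs: P(AGI in year t) and the difference you could
# produce in alignment if AGI arrives at t (arbitrary units).
p_agi = {2030: 0.25, 2040: 0.35, 2050: 0.25, 2060: 0.15}
difference_you_can_make = {2030: 0.2, 2040: 1.0, 2050: 0.7, 2060: 0.3}

# Weight each time by probability * difference-you-can-produce.
raw = {t: p_agi[t] * difference_you_can_make[t] for t in p_agi}

# Optional renormalization to make it read like a "probability".
total = sum(raw.values())
priority = {t: w / total for t, w in raw.items()}
```

Note the renormalized `priority` only orders the buckets; whether you act on each one still depends on the coordination and imagination caveats in the bullets above.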
comment by James L (jglamine) · 2023-01-05T01:47:29.428Z · LW(p) · GW(p)
Why are you using your median timeline | success? Maybe I missed it, but I don't see your reason explained in the post.
comment by [deleted] · 2023-01-05T04:59:34.618Z · LW(p) · GW(p)
Strong downvoted for the emojis.
↑ comment by Jeffrey Ladish (jeff-ladish) · 2023-01-05T05:47:58.042Z · LW(p) · GW(p)
Why did you do that?