Focusing your impact on short vs long TAI timelines

post by kuhanj · 2023-09-30T19:34:39.508Z · LW · GW

Contents

  Summary/Key Points 
    Considerations that favor focusing on shorter timelines
    Considerations that favor focusing on longer timelines
    Unclear, Person-Specific, and Neutral Considerations
  Introduction and Context
  Considerations favoring focusing on shorter timelines
    Neglectedness of short-timelines scenarios 
      Fewer resources over less time: 
      You can’t contribute to past timelines in the future
      Allocating resources proportional to each potential timeline scenario entails always disproportionately focusing on shorter timelines scenarios
    Higher predictability and likelihood of useful work
    (Uncertain) Work useful for shorter timelines is more likely to be useful for longer timelines than vice versa
  Considerations favoring focusing on longer timelines
    Many people concerned about TAI x-safety are young and inexperienced 
    Intractability of changing shorter timelines scenarios 
  Neutral, person-specific, and unclear considerations
    Probability distribution over TAI Timelines
    Variance and Influenceability (usefulness of working on) of different timelines scenarios
    Personal considerations
    Takeoff speeds/Continuity of AI progress
    The last-window-of-influence is likely in advance of TAI arrival
  Conclusion
  Acknowledgements

Summary/Key Points 

I compare considerations for prioritizing impact in short vs. long transformative AI timeline scenarios. Though lots of relevant work seems timelines-agnostic, this analysis is primarily intended for work whose impact is more sensitive to AI timelines (e.g. young-people-focused outreach and movement building).  
 

Considerations that favor focusing on shorter timelines

Considerations that favor focusing on longer timelines

Unclear, Person-Specific, and Neutral Considerations

Overall, the considerations in favor of prioritizing short-timelines impact seem moderately stronger to me than those in favor of prioritizing long-timelines impact, though others who read drafts of this post disagreed. In particular, the neglectedness considerations seem stronger than the pro-longer-timelines ones, with the possible exception of the impact discount young and inexperienced people face on short-timelines-focused work. Even then, I'm not convinced there isn't useful work for young or inexperienced people to do under shorter timelines. I would be interested in readers' impressions of the strength of these considerations (including ones I didn't include) in the comments.

Introduction and Context

“Should I prioritize being impactful in shorter or longer transformative AI timeline scenarios?” 

I find myself repeatedly returning to this question when trying to figure out how to best increase my impact (henceforth meaning having an effect on ethically relevant outcomes over the long-run future).

I found it helpful to lay out and compare the important considerations, which I split into three categories: considerations that favor focusing on shorter timelines, considerations that favor focusing on longer timelines, and considerations that are unclear, person-specific, or neutral.

As with all pros-and-cons lists, the strength of each consideration matters a ton, which gives me the opportunity to share one of my favorite graphics (h/t 80,000 Hours): 
 

Caption: Pro and con lists make it easy to put too much weight on an unimportant factor.

Without further ado, let’s go through the considerations!  

Considerations favoring focusing on shorter timelines

Neglectedness of short-timelines scenarios 

Fewer resources over less time: 

People currently concerned about TAI existential-safety will probably form a much larger proportion of relevant work being done in shorter-timeline scenarios than longer ones. Longer timelines imply more time for others to internalize the importance and imminence of advanced AI and focus more on making it go well (as we’re starting to see with e.g. the UK government). 

You can’t contribute to past timelines in the future

Allocating resources proportional to each potential timeline scenario entails always disproportionately focusing on shorter timelines scenarios

Higher predictability and likelihood of useful work

(Uncertain) Work useful for shorter timelines is more likely to be useful for longer timelines than vice versa.  

Considerations favoring focusing on longer timelines

Many people concerned about TAI x-safety are young and inexperienced 

Intractability of changing shorter timelines scenarios 

Neutral, person-specific, and unclear considerations

Probability distribution over TAI Timelines

Variance and Influenceability (usefulness of working on) of different timelines scenarios

Personal considerations

Takeoff speeds/Continuity of AI progress

The last-window-of-influence is likely in advance of TAI arrival

Conclusion

Overall, the considerations in favor of prioritizing short-timelines impact seem moderately stronger to me than those in favor of prioritizing long-timelines impact, though others who read drafts of this post disagreed. In particular, the neglectedness considerations seem stronger to me than the pro-longer-timelines considerations. The main exception that comes to mind is the personal fit consideration for very young or inexperienced individuals, though I would still guess that there is lots of useful work to do in shorter-timelines worlds (e.g. operations, communications, and work in low-barrier-to-entry fields). I might expand on kinds of work that seem especially useful under short vs. long timelines, but other writing [LW · GW] on what kinds of work are useful already exists. I'd recommend going through it and thinking about what work might be a good fit for you and seems most useful under different timelines.
 

There is a good chance I am missing many important considerations, or reasoning incorrectly about the ones mentioned in this post. I'd love to hear suggestions and counter-arguments in the comments, along with readers' assessments of the strength of these considerations and how they lean overall. 

Acknowledgements

Thanks to Michael Aird, Caleb Parikh, and others for sharing helpful comments and related documents. 
