Overview of Rethink Priorities’ work on risks from nuclear weapons
post by MichaelA · 2021-06-11T20:05:18.103Z
This post overviews the work that Rethink Priorities has done and is doing on risks from nuclear weapons, as well as the theory of change for this work. It is intended to help readers understand how our posts on this topic fit together and what impact we ultimately intend them to have. Note that this post isn’t a summary of our work on nuclear risk; we plan to produce such summaries later. This is cross-posted from the EA Forum.
In 2019, Rethink Priorities (RP) began investigating risks from nuclear weapons in order to determine:
- The extent to which nuclear risk reduction should be prioritized by effective altruists (and particularly by longtermists)
- The most effective ways to reduce nuclear risk
This project was led by Luisa Rodriguez until early 2020, was paused when Luisa left Rethink Priorities,[1] and was resumed by me when I joined Rethink Priorities at the end of 2020. New posts related to this project will be published over the coming months.
Our work on nuclear risk is centred on investigating the likelihood of various nuclear conflict scenarios, how harmful those scenarios would be (particularly for the long-term future), what interventions are available for reducing those likelihoods and/or harms, and how cost-effective those interventions are. This research can mostly be divided into three categories:[2]
- Investigation of key uncertainties that are relevant across a range of nuclear war scenarios, such as:
- What are the various ways a nuclear conflict could start and play out?
- If nuclear conflict occurs, how many warheads are likely to be used, and against what kinds of targets?
- If nuclear conflict occurs, how large would the effects on the climate and crop yields be?
- Given various possible crop yield declines, how many people would die?
- How might various future technological developments affect nuclear risk?
- Modelling the likelihood and harms of nuclear conflicts between specific pairs or groups of countries, especially:
- The US and Russia
- China and its potential adversaries (the US, India, or Russia)
- Perhaps India and Pakistan
- Identifying, collating, and evaluating intervention options for reducing nuclear risk, such as diplomacy, treaties, public advocacy, targeted advocacy, technical assistance and capacity building, and research
All published posts related to this project are collected in the sequence Risks from Nuclear Weapons.
We will also be partnering with Metaculus to run a nuclear risk forecasting tournament. Next week, we’ll publish a post announcing that tournament, discussing how it’ll work, and explaining what we hope to achieve with it.
Theory of change
In a nutshell, the main paths to impact for our nuclear risk research involve improving decisions about (i) how much to prioritize reducing nuclear risk, and (ii) what specific actions to take to reduce nuclear risk. We are focused on improving decisions by the following types of actors:
- Funders and those who advise them
- Policymakers, policy advisors, and advocates
- Other researchers
- People deciding on careers or providing career advice
We expect to mostly (i) influence the decisions of actors with some connection to the effective altruism (EA) community, or (ii) via such actors, indirectly influence actors with no connection to the EA community. For example, we have strong relationships with some EA-aligned people and organisations who in turn have relationships with policymakers or politicians who likely have not heard of effective altruism, and those EA-aligned people or organisations may be able to convey key findings from our work to those policymakers or politicians. That said, we also plan to try to more directly influence some actors with no connection to the EA community.
We expect our work to improve decisions through a mixture of:[3]
- Synthesizing, translating, and drawing inferences from existing knowledge, theories, etc. for relevant decision-makers
- Addressing questions that differ from those tackled in existing work on a topic (e.g., questions more targeted at long-term impacts, prioritization, and probabilistic forecasts)
- Better addressing some questions that are tackled in existing work (e.g., because we aren’t starting with the assumption that a given issue is important or a given intervention is effective)
We also expect our nuclear risk research to have indirect benefits such as allowing us to build knowledge, skills, connections, and credibility that will be useful for future work (particularly on longtermism-relevant politics, policy, and security topics).
In addition to that high-level theory of change, we internally have more detailed theories of change and target outcomes, and break them down by different parts of our overall nuclear risk work. For discussion of Rethink Priorities’ theory of change, impact assessment, and plans beyond just this nuclear work, see Rethink Priorities 2020 Impact and 2021 Strategy.
We would welcome questions or feedback on any of the above.
Credits
This research is a project of Rethink Priorities. It was written by Michael Aird. Thanks to Janique Behman, Neil Dullaghan, and Peter Wildeford for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can see more of our work here.
[1] Luisa is now working with Will MacAskill at the Forethought Foundation for Global Priorities Research, will soon join 80,000 Hours, and is a member of Rethink Priorities’ board.
[2] There are also some additional lines of work we may do that wouldn’t neatly fit within those three categories.
[3] I’ll soon publish a post on “Why EAs researching mainstream topics can be useful”, which will elaborate on similar points.