Population Aging as an Impediment to Addressing Global Catastrophic Risks

post by Michael Flood (michael-flood) · 2018-10-18T22:28:20.985Z · score: 7 (12 votes) · LW · GW · 2 comments
[ epistemic status: first Less Wrong post, developing hypothesis, seeking feedback and help fleshing out the hypothesis into something that could be researched and about which a discussion paper can be written. A comment/contribution to Eliezer Yudkowsky's "Cognitive biases potentially affecting judgment of global risks" in Bostrom & Cirkovic's "Global Catastrophic Risks" (2008) ]
Most of the Global Catastrophic Risks we face in the 21st century, like anthropogenic climate change, comet and asteroid impacts, pandemics, and uncontrolled artificial intelligence, are high impact (affecting the majority or all of humanity), of terminal intensity (producing mass death, economic and social disruption, and in some cases potential human extinction), and of highly uncertain probability. This last factor is a major reason it is difficult to bring public attention and political will to bear on mitigating them. This is critical because all of our work and research on AI safety and other issues will be for naught if there is no understanding of, or will to implement, the resulting measures. Implementation may not require public involvement in some cases (AI safety may be manageable by consensus among AI researchers, for example), but others, like the detection of Earth-orbit-crossing asteroids and comets, may require significant expenditure to build detectors and other infrastructure.
My interest at present is in additional factors that make mustering political and public will even more difficult - given that these are hard problems to interest people in to begin with, what makes that even harder? I believe that the aging of populations in the developed world may be a critical factor, progressively redirecting societal resources away from long-term projects like advanced infrastructure or foundational basic science research (which AI Safety arguably counts as) and toward the provision of health care and pensions.
Several considerations suggest that an aging developed-world population blunts long-term planning:
(1) Older people (age 65+), across the developed world, vote more often than younger people
(2) Voters are more readily mobilized to vote to protect entitlements than to make investments for the future
(3) Older voters have access to, and are more aware of, entitlements than are younger people
(4) Expanding on (3): benefits and entitlements are particularly salient to the aged because many have failed to save adequately for retirement. This trend is long-standing and seems unlikely to be due to cognitive biases surrounding future planning.
(5) Long-term investments, research, and other protections/mitigations against Global Catastrophic Risks will require a tradeoff with providing benefits to present people

(6) Older people are more present-focused and less future-focused than younger people (to the extent that younger people are future-focused at all - my anecdotal impression is that most people interested in the far future of humanity are under 50, and they are only a small subset of that under-50 population). Strangely, even people with grandchildren and great-grandchildren express limited interest in how their descendants will live and how safe their futures will be.

#5 is the point on which I am most uncertain (though I welcome questions and challenges suggesting I should be more uncertain about the others as well). Unless artificial intelligence and automation deliver really substantial economic benefits in the near term (15-30 years), enough that adequate Global Catastrophic Risk mitigation could be funded without anyone noticing too much (and even then it may be a hard sell), it seems likely that future economic growth will be slower. My hunch is that older workers are, on average, harder to retrain, and harder to motivate to retrain for new positions, especially if the alternative is state-funded retirement. In a diminished economic future, one not as rich as it would have been with a more stable population pyramid, politics seems likely to focus on zero-sum games of robbing (young) Peter to pay (old) Paul, whether directly through higher taxation or indirectly through under-investment in the future.
Am I jumping ahead of the problem here? Do we not know enough about what it would take to address the different classes of Global Catastrophic and Existential Risk, or is there a reason to focus now on the factors that could prevent us from 'doing something about it'?