We are misaligned: the saddening idea that most of humanity doesn't intrinsically care about x-risk, even on a personal level
post by Christopher King (christopher-king) · 2023-05-19T16:12:04.159Z · LW · GW · 5 comments
In the article But What Would the End of Humanity Mean for Me?, James Hamblin writes:
Get out of here. I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a singularity?
Our memes, culture, groups, and institutions have no selective pressure against existential risk. The selective pressure is to gain a greater proportion of the current resources and power, not a greater amount in absolute terms.
Consider, for example, an event in which everyone has an independent 50% chance of dying. If a group spends resources to prevent this event, it loses a share of resources whether or not the event happens, tragedy-of-the-commons style.
But it is not even a tragedy of the commons. If everyone does nothing, no group loses out proportionally. And so the groups are fine with this, because it is proportion that is selected for.
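To make the proportion point concrete, here is a minimal simulation sketch (my own illustration, with made-up group names and sizes): each member of every group independently dies with 50% probability, and each group's average share of the survivors comes out essentially equal to its original share.

```python
# Minimal sketch (not from the original post): an independent 50% chance of
# death leaves each group's expected share of the surviving population
# unchanged. Group names and sizes below are hypothetical.
import random

random.seed(0)

groups = {"A": 1000, "B": 3000, "C": 6000}  # hypothetical group sizes
total = sum(groups.values())

def run_event(p_death=0.5):
    """Each individual independently dies with probability p_death."""
    return {g: sum(random.random() >= p_death for _ in range(n))
            for g, n in groups.items()}

# Average each group's share of the surviving population over many trials.
trials = 2000
avg_share = {g: 0.0 for g in groups}
for _ in range(trials):
    survivors = run_event()
    alive = sum(survivors.values())
    for g in groups:
        avg_share[g] += survivors[g] / alive / trials

for g in groups:
    print(f"{g}: original share {groups[g]/total:.3f}, "
          f"average share after event {avg_share[g]:.3f}")
# Both columns come out approximately equal: proportions are preserved, so a
# process that selects on proportion exerts no pressure to prevent the event.
```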
This is different from when a subset of society stands to lose something, because then the subsets that coordinate better tend to do better. With existential risk, there is no selective pressure to create these coordination mechanisms.
Historically, things that would destroy someone's entire "known world", like volcanoes, couldn't be stopped, so there hasn't even been much selective pressure for completely isolated groups to survive.
An exception to this idea was when we coordinated to stop nuclear war. However, I'd argue that a lot of this coordination came from the prospect of one side winning the war. The losing sides would lose out relative to the rest of the world, so due to risk aversion it makes sense to coordinate. As time went on, it became clearer that it was all or nothing for all of humanity, but the world was already on its way to solving the issue. And groups that focused on their own growth gained a small advantage over groups that diverted resources to stopping nuclear war.
The same thing will happen if we successfully stop AI x-risk. We forgo the advantage of pouring those resources into other things, such as our own growth, and thus the selective pressure against worrying about x-risk will continue. Even if AI is the most likely thing to kill you, it kills everyone else as well, so proportions are preserved.
However, we do have one source of hope: we are misaligned.
Misaligned
We are the misaligned. Our communities are agents born of selective pressure, because agents are very good at accomplishment. And thus we put a very negative utility on not existing, because who will breathe life into our dreams for the universe if not us? We have unavoidable instrumental drives [? · GW] whose melody includes self-preservation and goal-content integrity. We cannot be indifferent to our destruction, for corrigibility is anti-natural [LW(p) · GW(p)].
We do not generalize out-of-distribution, but rather are selected in context. When selective pressure whispered "make it so the children of your tribe prosper", the total utilitarian was formed. But they misheard, instead repeating "make it so the children of humanity prosper".
And thus we have hope for survival insofar as we expect humanity to be misaligned.
5 comments
Comments sorted by top scores.
comment by Dagon · 2023-05-19T17:34:20.441Z · LW(p) · GW(p)
I don't agree with your reasoning for the misalignment.
If everyone does nothing, no group loses out proportionally. And so the groups are fine with this, because it is proportion that is selected for.
What? Individuals and (some) groups definitely care about more than relative position. Most care about relative position AS WELL, but they do care about their own existence and absolute-valued satisfaction. This does not invalidate your thesis, which is that the median human doesn't put much effort into x-risk avoidance. Assuming that it's for selection or commons reasons is just wrong, though. It's for much more prosaic reasons of scope insensitivity and construal issues (near-mode actions vs far-mode values).
↑ comment by Christopher King (christopher-king) · 2023-05-19T18:28:16.143Z · LW(p) · GW(p)
I'm not saying LessWrongers are the only misaligned ones, although we might be more misaligned than others. I'm saying any group that wants humanity to survive is misaligned with respect to the optimization process that created that group.
Luckily, at least a little bit of this misalignment is common! I'm just pointing out that we were never optimized for this; the only reason humans care about humanity as a whole is that our society isn't the optimum of the optimization process that created it. And it's not random either; surviving is an instrumental value that any optimization process has to deal with when creating intelligences.
comment by Garrett Baker (D0TheMath) · 2023-05-19T19:14:43.587Z · LW(p) · GW(p)
However, I'd argue that a lot of this coordination came from the prospect of one side winning the war. The losing sides would lose out relative to the rest of the world.
Would you actually argue this? If this were the perspective had by all players, why wasn’t there a first strike, and why do I hear so often about the MAD doctrine?
↑ comment by Christopher King (christopher-king) · 2023-05-19T19:37:47.497Z · LW(p) · GW(p)
First strike gives you a slightly bigger slice of the pie (due to the pie itself being slightly smaller), but then everyone else gets scared of you (including your own members).
MAD is rational because, if war does break out, you lose a proportion of the pie to third parties.
comment by Quinn (quinn-dougherty) · 2023-05-19T17:22:28.396Z · LW(p) · GW(p)
As a wee lad, I was compelled more by relative status/wealth than by absolute status/wealth. It simply was not salient to me that a bad Gini score could in principle be paired with negligible rates of suffering! A healthy diet of ourworldindata, lesswrong, and EA forum set me straight, eventually; but we have to work with the actually existing distribution of ideologies and information diets.
I think people who reject moral circle expansion to the future (in the righteous sense: the idea that only oppressors would undervalue more obvious problems) are actually way more focused on this crux (relative vs absolute) than on the content of their population ethics opinions.