Does human (mis)alignment pose a significant and imminent existential threat?
post by jr · 2025-02-23

This is a question post.
(This question was born from my comment on an excellent post, LOVE in a simbox is all you need, by @jacob_cannell.)
Why am I asking this question?
I am personally very troubled by what I would call human misalignment: our deep divisions, our susceptibility to misinformation and manipulation, and our inability to identify and act collectively on our best interests. I am further troubled by the deleterious effects technology has already had in that regard (think social media). I would like to see efforts not only to produce AI that is itself ethical or aligned (which, don't get me wrong, I LOVE and find very encouraging), but also to ensure that AI is harnessed to give humans the support they need to realign themselves. That realignment is critical to achieving the ultimate goal of Alignment of the (Humans + AI) Collaboration as a whole.
However, that's just my current perspective. While I think I have good reasons for it, I realize I have limitations, both in knowledge and experience and in my power to effect change. So I'm curious to hear other perspectives that might help me become more right, or at least understand other viewpoints, and perhaps to connect with like-minded others so we can figure out what we might do about it together.
How is this practical?
If others here share my concerns and believe this is a significant threat that warrants action, I will likely post follow-on questions for discussion toward that end. For instance, I'd love to hear whether there are already efforts underway that address my concerns. Or, if anyone thinks that creating and deploying aligned AI will naturally help humans overcome these issues, I'd be curious to hear their reasoning. I have some ideas of my own too, but I'll save those until I've done a lot more listening, to establish some mutual understanding and trust first.
Answers

answer by Dave Orr
Humans have always been misaligned. Things are probably significantly better now in terms of human alignment than at almost any time in history (citation needed), due to high levels of education and broad agreement about many things we take for granted (e.g., the limits of free trade are debated, but there has never been so much free trade). So you would need to think that something important is different now for there to be some kind of new existential risk.
One candidate is that as tech advances, the amount of damage a small misaligned group could do is growing. The obvious example is bioweapons -- the number of people who could create a lethal engineered global pandemic is steadily going up, and at some point some of them may be evil enough to actually try to do it.
This is one of the arguments in favor of the AGI project. Whether you think it's a good idea probably depends on your credences around human-caused x-risks versus AGI x-risk.