Comment by Ulf M. Pettersson (ulfpettersson) on What Is The Alignment Problem? · 2025-01-22T00:09:31.896Z
> Instead of trying to directly align individual agents' objectives, we could focus on creating environmental conditions and incentive structures that naturally promote collaborative behavior.
I think you are really on to something here. To align AI systems and agents, we could build solutions modeled on the existing institutions that already keep human societies aligned.
Look to the literature in economics and social science that explains how societies manage to align the interests of millions of intelligent human agents, even though each of those agents acts in its own self-interest. A toy sketch of this idea follows below.
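
To make the incentive-design flavor of this concrete, here is a minimal sketch (my illustration, not from the original comment) of a second-price (Vickrey) auction, a standard result from mechanism design: the rules of the mechanism make truthful bidding a dominant strategy, so self-interested agents behave "honestly" without anyone modifying their objectives. The agent names and values are hypothetical.

```python
def second_price_auction(bids):
    """Vickrey auction: the highest bidder wins and pays the
    second-highest bid. Under these rules, bidding one's true
    value is a dominant strategy for every agent."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # price set by the runner-up bid
    return winner, price

# Hypothetical agents with private values for a single resource.
true_values = {"a": 10.0, "b": 7.0, "c": 4.0}

def utility(agent, bid):
    """Payoff for `agent` bidding `bid` while others bid truthfully."""
    bids = dict(true_values, **{agent: bid})
    winner, price = second_price_auction(bids)
    return true_values[agent] - price if winner == agent else 0.0

# Agent "b" (true value 7.0) tries deviating: no deviation ever
# beats truthful bidding, so honesty emerges from the mechanism's
# rules, not from changing the agent's own objective.
for bid in [3.0, 7.0, 9.0, 12.0]:
    print(f"b bids {bid:>4}: utility = {utility('b', bid):+.1f}")
```

Running it shows agent b never does better than bidding its true value: the aligned behavior comes from the environment's incentive structure, which is exactly the kind of institutional design the economics literature studies.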