Long-time reader, first-time poster. My bio covers my background, but I have a few questions that approach AI risk through an economic lens:
- Has anyone deeply engaged with Hayek's "The Use of Knowledge in Society" in relation to AI alignment? I'm particularly interested in how his insights about distributed knowledge and the limitations of central planning might inform our thinking about AI governance and control structures.
- More broadly, I'm curious about historical parallels between different approaches to handling distributed vs. centralized power and knowledge systems. Are there instructive analogies between how 20th-century intellectuals thought about economic planning and how we currently think about AI development and control?
- I'm also interested in how distributed AI development changes the risk landscape compared to early singleton-focused scenarios. Which writings or discussions have best tackled this shift?
Before developing these ideas further, I'd appreciate pointers to any existing work in these directions!