Counter-counterpoint: big groups like bureaucracies are not composed of randomly selected individuals from their respective countries. I strongly doubt that, say, 100 randomly selected Google employees (the largest plausible bureaucracy that might potentially develop AGI in the very near-term future?) would answer extremely similarly to 100 randomly selected Americans.
Of course, in a moderately near-term or median future, something like a Manhattan Project for AI could produce an AGI. That group would still not be identical to 100 random Americans, but the average across the US security & intelligence apparatus, the political-facing portion of the current US executive administration, and the leadership (plus relevant employee influence) of a (mandatory?) collaboration of US frontier labs would be significantly closer. I think it would at least be closer to average Americans than a centralized CCP AGI project would be to average Chinese people, although I admit I'm not very knowledgeable about the gap between Chinese leadership and average Chinese people beyond basics like (somewhat) widespread VPN usage.
If you haven't already, you should consider reading the Timelines Forecast and Takeoff Forecast research supplements linked on the AI 2027 website. But I think there are a good half dozen (not necessarily independent) reasons for thinking that, if AI capabilities start to take off in short-timeline futures, other parts of the overall economy/society aren't likely to change nearly as massively or as quickly.
These include: the jagged capabilities frontier that already exists in AI and will likely widen; Moravec's Paradox; the gap between internal and externally deployed models; the limited compute available for experimentation + training + synthetic data creation + deployment; the gap in ease of obtaining training data for tasks like Whole Brain Emulation versus software development & AI research; the fact that diffusion and use of publicly available model capabilities is relatively slow, for reasons of both human psychology & economic efficiency; and so on.
Basically, the fact that the most pivotal moments of AI 2027 are written as occurring mostly within 2027, rather than, say, across 2029-2034, means that substantial recursive self-improvement (RSI) in AI capabilities can happen before substantial transformations occur in society overall. I think the most likely way AI 2027 is wrong on this matter is that the "intelligence explosion" turns out not nearly as fast, not that it underestimates the speed of the societal impacts occurring simultaneously. The reasons for thinking this are basically taking scaling seriously, plus priors (which are informed by things like the Industrial Revolution).