I was also struck by this huge discrepancy between the HLMI and FAOL predictions. I think the FAOL predictions in particular are unreliable. My interpretation is that when respondents push their timelines so far into the future, some of them may in fact be trying to avoid admitting the possibility of AI takeover.
The key question is what "automating all tasks" really means. "All tasks" includes, in particular, all decision-making: managerial, political, strategic, small and large, all of it. It includes all the agency, long-term planning, and execution of one's own plans. Automating all tasks therefore implies AI takeover. But even considering this possibility can easily clash with the view many people hold, namely that AIs are controllable tools rather than uncontrollable agents (see the excellent new paper by Severin Field on this).
And there will surely be strong competitive forces pushing towards full automation once that option becomes technically feasible. For example, if you automate a firm's production processes at every level below the CEO but leave the CEO human, that CEO becomes a bottleneck, slowing down the firm's operations, potentially by orders of magnitude. Your firm may then be pushed out of the market by a competitor who automated their CEO as well.
My logic suggests that FAOL should come only slightly later than HLMI. Of course, feasibility must come first and adoption second. Some lag could follow from cost considerations (AI agents and robot actuators may initially be too expensive) or legal constraints, and perhaps also from human preferences (though I doubt that last point). But once we have FAOL, we have AI takeover, so such a scenario in fact redirects our conversation to the topic of AI x-risk.