I feel like this and many other arguments for AI-skepticism are implicitly assuming AGI that is amazingly dumb and then proving that there is no need to worry about this dumb superintelligence.
Remember the old "AI will never beat humans at every task because there isn't one architecture that is optimal at every task. An AI optimised to play chess won't be great at trading stocks (or whatever), and vice versa"? Well, I'm capable of running a different program on my computer depending on the task at hand. If your AGI can't do the same as a random idiot with a PC, it's not really AGI.
I am emphatically not saying that Robin Hanson has ever made this particular blunder but I think he's making a more subtle one in the same vein.
Sure, if you think of AGI as a collection of image recognisers and Go engines etc. then there is no ironclad argument for FOOM. But the moment (and probably sooner) that it becomes capable of actual general problem solving on par with its creators (i.e. actual AGI) and turns its powers to recursive self-improvement, how can that result in anything but FOOM? It doesn't matter whether further improvements require more complexity, less complexity, a different kind of complexity or whatever. If human researchers can do it, then AGI can do it faster and better, because it scales better, doesn't sleep, doesn't eat and doesn't waste time arguing with people on Facebook.
This must have been said a million times already. Is this not obvious? What am I missing?