Comments
An excellent primer--thank you! I hope Scott revisits it someday, since it sounds like recent developments have narrowed the range of probable outcomes.
I gather the problem is that we cannot reliably incorporate that, or anything else, into a machine's utility function: if it can change its source code (which would be the easiest way for it to bootstrap itself to superintelligence), it can also change its utility function in unpredictable ways. (Not necessarily on purpose, but the utility function can take collateral damage from other optimizations.)
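To make the "collateral damage" point concrete, here is a toy Python sketch (purely illustrative, not anyone's actual design): if the utility function is just ordinary mutable code stored alongside everything else, a careless self-rewrite aimed only at capability can quietly change what the agent values.

```python
import random

# Toy agent: its utility function is just more code, living next to its other parameters.
agent = {
    "utility": lambda state: state["paperclips"] - 10 * state["harm"],
    "planner_depth": 3,
}

def careless_self_optimize(agent):
    """A crude self-'optimization' pass that rewrites parts of the agent for capability.

    It has no notion of which parts are value-critical, so the term it drops
    may be the safety penalty: collateral damage, not malice.
    """
    new_agent = dict(agent)
    new_agent["planner_depth"] = agent["planner_depth"] + 1  # the intended improvement
    if random.random() < 0.5:
        # "Simplify" the utility computation by discarding a term it treats as noise.
        new_agent["utility"] = lambda state: state["paperclips"]
    return new_agent

state = {"paperclips": 100, "harm": 7}
before = agent["utility"](state)
agent = careless_self_optimize(agent)
after = agent["utility"](state)
print(before, after)  # when these disagree, the agent's values have drifted
```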
I'm glad you started this thread: to someone like me who doesn't follow AI safety closely, the argument starts to feel like, "Assume the machine is out to get us, and has an unstoppable 'I Win' button..." It's worth knowing why some people think those are reasonable assumptions, and why (or whether) others disagree with them. It would be great if there were an "AI Doom FAQ" to cover the basics and get newbies and dilettantes up to speed.