Comments

Comment by Joel L. on Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · 2022-04-29T20:59:54.353Z · LW · GW

An excellent primer--thank you! I hope Scott revisits it someday, since it sounds like recent developments have narrowed the range of probable outcomes.

Comment by Joel L. on Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · 2022-04-27T19:19:22.099Z · LW · GW

I gather the problem is that we cannot reliably incorporate that, or anything else, into a machine's utility function: if it can change its source code (which would be the easiest way for it to bootstrap itself to superintelligence), it can also change its utility function in unpredictable ways. (Not necessarily on purpose, but the utility function can take collateral damage from other optimizations.)

I'm glad you started this thread: to someone like me who doesn't follow AI safety closely, the argument starts to feel like, "Assume the machine is out to get us, and has an unstoppable 'I Win' button..." It's worth knowing why some people think those are reasonable assumptions, and why (or whether) others disagree with them. It would be great if there were an "AI Doom FAQ" to cover the basics and get newbies and dilettantes up to speed.