Posts

My Plan to Build Aligned Superintelligence 2022-08-21T13:16:32.697Z
Can We Align AI by Having It Learn Human Preferences? I’m Scared (summary of last third of Human Compatible) 2022-06-29T04:09:06.213Z

Comments

Comment by apollonianblues on My Plan to Build Aligned Superintelligence · 2022-08-21T19:27:25.280Z · LW · GW

TBH my naive thought is that if John's project succeeds, it'll solve most of what I think of as the hard part of alignment, so it seems like one of the more promising approaches to me. But on my model of the world, it seems quite unlikely that natural abstractions exist in the way John seems to think they do.

Comment by apollonianblues on My Plan to Build Aligned Superintelligence · 2022-08-21T19:25:05.811Z · LW · GW

I have LOL thanks tho

Comment by apollonianblues on My Plan to Build Aligned Superintelligence · 2022-08-21T19:24:13.437Z · LW · GW

My assumption is that it would do this to prevent other people from building superintelligences that are unaligned. At least Eliezer thinks you need to do this (see bullet point 6 in this post), and the idea generally comes up in conversations people have about pivotal acts. Some people think that if you come up with an alignment solution that's good and easy to implement, everyone building AGI will use it, so you won't have to prevent anyone from building unaligned AGI; that seems unrealistic and risky to me.