Potential alignment targets for a sovereign superintelligent AI
Post by Paul Colognese (paul-colognese) · 2023-10-03T15:09:59.529Z · LW · GW

This is a question post.
Contents

Answers
- 14 · Tamsin Leake
- 6 · Steven Byrnes
- 5 · Nathan Helm-Burger
- 3 · Chipmonk
I'd like to compile a list of potential alignment targets for a sovereign [LW · GW] superintelligent AI.
By an alignment target, I mean something like the goals, values, or utility function we might want to instill in a sovereign superintelligent AI (assuming we've solved the alignment problem).
Here are some alignment targets I've come across:
- Alignment to a human user (or group of human users).
- Ambitious value learning [LW · GW].
- Coherent extrapolated volition [LW · GW].
Examples, reviews, critiques, and comparisons of alignment targets are welcome.
Answers
Answer by Tamsin Leake

The QACI [LW · GW] target sort of aims to be an implementation of CEV. There are also PreDCA and UAT, listed on my old list of (formal) alignment targets [LW · GW].
Answer by Steven Byrnes

If there’s a powerful AI not under the close control of a human, then I currently think that the least bad realistic option to shoot for is this: the AI is motivated to set up some kind of “long reflection” [? · GW] or atomic communitarian thing, or whatever—something where humans, not the AI directly, would be making the decisions about how the future will go. In other words, the AI would be motivated to set up a process / system (or a process / system to create a process / system…) and then cede power to that process / system (or at least settle into a role as police rather than decision-maker). Hopefully the process / system would be sufficiently good that it would be stable, prevent war and oppression, be compatible with moral progress, and so on.
Like, if I were given extraordinary power (say, an army of millions of super-speed clones of myself), I would hope to eventually wind up in a place like that, instead of directly trying to figure out what the future should be, a prospect which terrifies me.
This is pretty vague. I imagine that lots of devils are in the details.
Answer by Nathan Helm-Burger

Corrigibility. Namely, the desire to be corrected if wrong, to be turned off if the operators want to turn it off, and otherwise to enact the operators' desires. Caution and obedience. https://www.lesswrong.com/posts/ZxHfuCyfAiHAy9Mds/desiderata-for-an-ai [LW · GW]
Answer by Chipmonk

«Boundaries»/membranes [LW · GW].
Eg: «Boundaries» for formalizing an MVP morality [LW · GW]
Also: see the recap in Formalizing «Boundaries» with Markov blankets + Criticism of this approach [LW · GW]
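For context, the Markov-blanket formalization referenced above builds on the standard conditional-independence definition from Bayesian networks. A minimal sketch (notation mine, not taken from the linked post):

```latex
% For a node X in a Bayesian network over variables V, the Markov
% blanket MB(X) is the set of X's parents, children, and the other
% parents of its children. Conditioned on MB(X), X is independent
% of everything outside itself and its blanket:
\[
  X \;\perp\!\!\!\perp\; V \setminus \bigl(\{X\} \cup \mathrm{MB}(X)\bigr)
  \;\bigm|\; \mathrm{MB}(X)
\]
% The «boundaries» approach treats an agent's boundary as, roughly,
% the Markov blanket separating its internal states from its
% environment, so that "respecting a boundary" can be cashed out as
% preserving this independence structure.
```

The criticism post linked above discusses where this identification of agent boundaries with Markov blankets runs into trouble.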
Note also that there are (at least) two ways to do this, which I need to write a post about (or let me know if you want to review my draft). One way is like "be a Nanny AI and protect the «boundaries» of humans"; the other is like "mind your own business and you will automatically not cause any problems for anyone else". The former is more like Davidad's approach (at least as of earlier this year) [LW · GW]; the latter is more like Mark Miller's thoughts on AI safety and security.
No comments