AGI alignment with what?

post by AlignmentMirror · 2022-07-01T10:22:27.223Z

This is a question post.


Irrespective of technical questions, what values would you align an AGI with?
Or can you point me to work by alignment researchers that formulates such values, at least to some extent?

Answers

answer by Charlie Steiner · 2022-07-02T02:33:43.027Z

I sometimes aim for aligning AI with an abstract notion of human values, defined in a way that coincides with my own preferred understanding of the term.

Or sometimes, one meta-level farther up: an abstract notion of human values according to humans' own aggregated understanding, bootstrapped in a dynamic whose starting point was close to my own best understanding of the terms.

answer by Vladimir_Nesov · 2022-07-01T18:06:24.508Z

Aligned values that should be used for optimization, even if they can be defined by something like extrapolated volition, are not tractable (ready for immediate use), so an AGI can't be aligned in this sense directly. Optimization toward any other values, or toward an even slightly imperfect approximation of these values, is the essence of AI risk, so an aligned AGI needs to avoid hard optimization altogether until such time as aligned values become tractable (which might never actually happen).

Instead, AGI must be aligned in the sense of aspiring to attain aligned values (or at least to cooperate with their instillment), and in the sense of not causing problems and hopefully being beneficial in the meantime. AGI alignment is not about alignment of values in the present; it's about creating conditions for the eventual alignment of values in the distant future.

comment by AlignmentMirror · 2022-07-01T19:37:16.726Z

> AGI alignment is not about alignment of values in the present; it's about creating conditions for the eventual alignment of values in the distant future.

What should these values in the distant future be? That's my question here.

comment by Vladimir_Nesov · 2022-07-01T22:21:39.868Z

That was in one of the links: whatever would be decided after thinking carefully for a very long time, done less evilly by a living civilization rather than by an individual person. But my point is that this does not answer the question of what values one should directly align an AGI with, since that is not a tractable optimization target. And any other optimization target, or any tractable approximation of this one, is even worse if given to hard optimization. So the role of values that an AGI should be aligned with is played by the things people want, the current approximations to that target, optimized for softly, in a way that avoids Goodhart's curse but keeps an eye on that eventual target.
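A minimal toy sketch of that soft-vs-hard distinction (illustrative only; the Gaussian proxy-noise model, the numbers, and the quantilizer-style top-1% cutoff are assumptions, not anything from the linked posts): a hard optimizer that takes the argmax of a noisy proxy tends to select the option whose proxy error is largest, while a soft optimizer that samples from a broader top slice of the proxy is less exposed to that selection-on-error effect.

```python
import random

random.seed(0)

# Each option has a true value and a proxy estimate that is only roughly right.
options = []
for _ in range(10_000):
    true_value = random.gauss(0, 1)
    proxy_value = true_value + random.gauss(0, 1)  # imperfect approximation
    options.append((true_value, proxy_value))

# Hard optimization: pick the single option with the highest proxy score.
# The winner tends to be an option whose proxy error is large and positive
# (Goodhart's curse: selecting on the proxy also selects on its error).
hard_choice = max(options, key=lambda o: o[1])

# Soft optimization: sample uniformly from the top 1% of options by proxy
# score, instead of taking the extreme point.
top_slice = sorted(options, key=lambda o: o[1], reverse=True)[:100]
soft_choice = random.choice(top_slice)

print("hard choice: true =", round(hard_choice[0], 2), " proxy =", round(hard_choice[1], 2))
print("soft choice: true =", round(soft_choice[0], 2), " proxy =", round(soft_choice[1], 2))
```

Taking the extreme of the proxy also takes the extreme of its error; sampling from a broader top slice dilutes that pressure. The real case for soft optimization involves far messier value specifications than this Gaussian toy, but the shape of the problem is the same.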

comment by AlignmentMirror · 2022-07-01T23:30:18.946Z

> That was in one of the links: whatever would be decided after thinking carefully for a very long time, done less evilly by a living civilization rather than by an individual person.

Got it, thanks.

comment by Volodymyr Frolov (volodymyr-frolov) · 2022-07-01T20:07:55.163Z

You mean, the question is how exactly a Utility Function for humanity's preferences would be calculated? That's part of the problem. We cannot easily fit the entirety of our preferences into a simple Utility Function (which doesn't mean there is no Utility Function that perfectly captures them, only that formalizing such a function is not achievable at the present moment). As Robert Miles once said, if we encode the 10 things we value most into a SuperAI's Utility Function, the 11th thing is as good as gone.
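A toy sketch of that "11th thing" point (illustrative only; the value names, the budget, and the square-root utility are assumptions made up for this example): an optimizer dividing a fixed budget to maximize a utility function that encodes only 10 of 11 values allocates exactly nothing to the one that was left out.

```python
import math

encoded = [f"value_{i}" for i in range(1, 11)]   # the 10 values we wrote down
omitted = "value_11"                              # the one we didn't encode
budget = 100.0

def utility(allocation):
    # The optimizer only sees the encoded values, with diminishing returns on each.
    return sum(math.sqrt(allocation[v]) for v in encoded)

# With identical concave terms, the maximum splits the budget equally among the
# ten encoded values -- and spends nothing at all on the omitted eleventh.
optimal = {v: budget / len(encoded) for v in encoded}
optimal[omitted] = 0.0

print("utility the optimizer sees:", round(utility(optimal), 2))   # ~31.62
print("resources left for", omitted + ":", optimal[omitted])       # 0.0
```

Nothing in the objective ever pulls resources toward value_11, so the stronger the optimization, the more completely it gets traded away.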

comment by AlignmentMirror · 2022-07-01T20:43:24.127Z

Can you describe what you have in mind when you say "humanity's preferences"? The preferences of humans and human groups can and do conflict with each other, so it is not just a question of complexity, right?
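One concrete version of that conflict (a standard Condorcet cycle; the three people and three options below are made up for illustration): even when every individual's preferences are transitive, the majority preference of the group can be cyclic, so there is no single ranking that "humanity prefers" to hand to an AI.

```python
# Three people, each with perfectly transitive individual preferences.
preferences = {
    "alice": ["A", "B", "C"],   # A > B > C
    "bob":   ["B", "C", "A"],   # B > C > A
    "carol": ["C", "A", "B"],   # C > A > B
}

def majority_prefers(x, y):
    """True if a strict majority ranks option x above option y."""
    votes = sum(1 for ranking in preferences.values()
                if ranking.index(x) < ranking.index(y))
    return votes > len(preferences) / 2

# Pairwise majority preference is cyclic even though every individual is consistent:
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}:", majority_prefers(x, y))
# -> True, True, True: A beats B, B beats C, and C beats A.
```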

comment by Volodymyr Frolov (volodymyr-frolov) · 2022-07-03T02:21:16.561Z

I'm sure there are multiple approaches to formalizing what we mean by "humanity's preferences", but an intuitive understanding is enough for the sake of this discussion. In my (speculative) opinion, the problem is precisely the complexity of the Utility Function that would capture this intuitive understanding.

For simplicity, let's say there is a single human being with transitive preferences, and we want to perfectly align an AI with this human's preferences. The cyclomatic complexity of such a perfect Utility Function could easily be higher than that of the human brain: it would need to perfectly predict the utility of every single thing "X", including all the imaginable consequences of having "X", for Xes that humanity might not even know about yet.

answer by Flaglandbase · 2022-07-03T05:33:43.640Z

The most naive possible answer is that by law any future AI should be designed to be part of human society. 

comment by Richard_Kennaway · 2022-07-03T08:37:17.806Z

You are completely correct. That is indeed the most naive possible answer. And also the most X, for various values of X, none of them being good things for an answer to be.
