post by [deleted]

This is a link post for


Comments sorted by top scores.

comment by Charlie Steiner · 2019-06-10T16:04:18.234Z · LW(p) · GW(p)
> You can’t talk sensibly about what values are right, or what we ‘should’ build into intelligent agents.

I agree that in our usual use of the word, it doesn't make sense to talk about what (terminal) values are right.

But you agree that (within a certain level of abstraction and implied context) you can talk as if you should take certain actions? Like "you should try this dessert" is a sensible English sentence. So what about actions that impact intelligent agents?

Like, suppose there was a pill you could take that would make you want to kill your family. Should you take it? No, probably not. But now we've just expressed a preference about the values of an intelligent agent (yourself).

Modifying yourself to want bad things is wrong in the same sense that the bad things are wrong in the first place: both are wrong with respect to your current values, which are a thing we model you as having within a certain level of abstraction.

comment by jefallbright · 2019-06-09T20:54:52.953Z · LW(p) · GW(p)

I argued repeatedly and at length on the Extropian and Transhumanist discussion lists from 2004 to about 2010 for a metaethics based on the idea that actions assessed as increasingly "moral" (right in principle) are those actions assessed as promoting (1) values, hierarchical and fine-grained, increasingly coherent over an increasing context of meaning-making, via (2) instrumental methods, increasingly effective in principle, over an increasing scope of consequences. Lather, rinse, repeat, with consequences tending to select for values, and methods for their promotion, that "work" (meaning "persist").

The instrumental methods half of this--the growth in scope of our model of science and technology--is generally well-accepted.

The values half of this--the growth in context of our model of meaning-making--not so much, for a handful of understandable reasons rooted in our developmental history.

Together, these orthogonal aspects tend to support and reinforce meaningful growth.

The Arrow of Morality points in no particular direction but outward--with increasing coherence over increasing context--and suggests we would do well to act intentionally to promote growth of our models in these two orthogonal dimensions.

Conceptual roadblocks include difficulties with evolutionary dynamics (including multi-level), synergistic (anti-entropic) expansion in both dimensions mentioned above (from the point of view of any agent), agency as inherently perspectival (subjective, but not arbitrary), and unwillingness to accept an ever-broadening identification of "self".

Due (in my opinion) to these difficult and culturally pervasive conceptual roadblocks, I never gained much traction in my attempts to convey and test this thinking, and I eventually decided to stop beating a horse that was not so much dead as one that had never really lived. I believe we'll make progress on this, two steps forward, one step back, to the extent we live and learn and become more ready. [Which is by no means guaranteed...]

I have not found any substantial literature supporting this thinking, but I can point you in the direction of bits and pieces, and we might discuss further (work and family permitting) if you would like to contact me privately.

  • Jef
Replies from: jefallbright
comment by jefallbright · 2019-06-09T21:02:25.460Z · LW(p) · GW(p)

Oh, and a short, possibly more direct response:

Values (within context) lead to preferences; preferences (within context) lead to actions; and actions (within context) lead to consequences.

Lather, rinse, repeat, updating your models of what matters and what works as you go.
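
In rough code, that loop might look something like the sketch below. The option names, feature weights, and update rule are invented purely for illustration; nothing here is claimed as the actual mechanism, only the shape of the loop (values -> preferences -> actions -> consequences -> updated models).

```python
# A minimal, runnable sketch of the loop described above.
# All specifics (options, features, weights, update rule) are illustrative assumptions.

values = {"novelty": 0.5, "comfort": 0.5}   # model of what matters
world_model = {}                            # model of what works (action -> observed outcome)

# Options in the current context, described by the features the agent cares about.
OPTIONS = {
    "try_dessert": {"novelty": 1.0, "comfort": 0.2},
    "stay_home":   {"novelty": 0.1, "comfort": 1.0},
}

def preference(option):
    """Values (within this context) lead to a preference score."""
    return sum(values[v] * OPTIONS[option].get(v, 0.0) for v in values)

def act():
    """Preferences (within context) lead to an action: pick the highest-scoring option."""
    return max(OPTIONS, key=preference)

def update(action, consequence):
    """Consequences feed back into both models -- lather, rinse, repeat."""
    world_model[action] = consequence
    for feature, weight in OPTIONS[action].items():
        # Nudge the values implicated in the action toward how well it actually worked out.
        values[feature] += 0.1 * weight * (consequence - preference(action))

for step in range(3):
    action = act()
    consequence = 0.8 if action == "try_dessert" else 0.4   # stand-in for the world's response
    update(action, consequence)
    print(step, action, values)
```

Each pass uses the current values to rank the options, acts, observes how it went, and folds the result back into both the model of what matters and the model of what works.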

comment by jmh · 2019-06-10T12:00:23.196Z · LW(p) · GW(p)

Until I saw this discussion I don't think I had ever (consciously) thought of values (outside the value = price economic view) in the way this discussion casts them for me. Both the idea of values as units of choice (like goods) and the thought about the fragility of values (value systems, I think, was implied) put me on that line of thinking.

When we think about economic crises (boom-bust cycles, depression events, and various fluctuations in patterns of global trade), I wonder if the same is true for value systems. Both are built up from unit-level decisions. The units stand in various types of relationships (tight-loose, complementary-substitute, near-term - far-term) in a relative sense. When anything changes there are ripple effects; some will be more isolated, some will cascade.

Similarly, in the economic sphere, no one really chooses the overall pattern of the economy or the structure of production; it is largely an emergent outcome. The approach of treating values as units and considering the fragility of the future given any particular set of values in place seems very similar.

That would suggest that an AI with a different set of values (and a different prioritization/valuation of the values in that set) could potentially have a large impact. But it also suggests that it might not be able to drive the future it wants over the one humans want. That is perhaps hopeful.

comment by Pattern · 2019-06-09T19:41:37.610Z · LW(p) · GW(p)

I think you can talk about what values* are consistent.

*You used the word "values" to refer to sets of values.