Value Impact

post by TurnTrout · 2019-09-23T00:47:12.991Z · LW · GW · 10 comments





Being on Earth when this happens is a big deal, no matter your objectives – you can't hoard pebbles if you're dead! People would feel the loss from anywhere in the cosmos. However, Pebblehoarders wouldn't mind if they weren't in harm's way.

Appendix: Contrived Objectives

A natural definitional objection is that a few agents aren't affected by objectively impactful events. If you think every outcome is equally good, then who cares if the meteor hits?

Obviously, our values aren't like this, and any agent we encounter or build is unlikely to be like this (since these agents wouldn't do much). Furthermore, these agents seem contrived in a technical sense (low measure under reasonable distributions in a reasonable formalization), as we'll see later. That is, "most" agents aren't like this.
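The "low measure" claim can be illustrated with a toy sketch (my own construction, not the formalization the post alludes to): if we sample agents by assigning each outcome an independent continuous random utility, the agents that are exactly indifferent between all outcomes, and so don't care whether the meteor hits, form a measure-zero set. The outcome list and distribution below are assumptions for illustration.

```python
import random

def random_utility(outcomes):
    """Assign an i.i.d. uniform-random utility to each outcome."""
    return {o: random.random() for o in outcomes}

outcomes = ["meteor hits", "meteor misses", "pebbles hoarded"]
samples = 10_000

# Count sampled agents that are exactly indifferent between all outcomes,
# i.e. whose utility function is constant.
indifferent = sum(
    1
    for _ in range(samples)
    if len(set(random_utility(outcomes).values())) == 1
)

print(indifferent)  # effectively always 0 under a continuous distribution
```

Under any continuous distribution the probability of an exact tie is zero, which is one concrete sense in which "most" agents are not of the contrived kind.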

From now on, assume we aren't talking about this kind of agent.


10 comments


comment by sayan · 2019-09-24T08:08:21.650Z · LW(p) · GW(p)

As far as I understand, this post decomposes 'impact' into value impact and objective impact. VI is dependent on some agent's ability to reach arbitrary value-driven goals, while OI depends on any agent's ability to reach goals in general.

I'm not sure if there exists a robust distinction between the two - the post doesn't discuss any general demarcation tool.

Maybe I'm wrong, but I think the most important point to note here is that 'objectiveness' of an impact is defined not to be about the 'objective state of the world' - rather about how 'general to all agents' an impact is.

Replies from: TurnTrout
comment by TurnTrout · 2019-09-24T14:14:47.737Z · LW(p) · GW(p)

VI is dependent on some agent's ability to reach arbitrary value-driven goals, while OI depends on any agent's ability to reach goals in general.

VI depends on the ability to achieve one kind of goal in particular, such as human values. OI depends on goals in general.

I'm not sure if there exists a robust distinction between the two - the post doesn't discuss any general demarcation tool.

If I understand correctly, this is wondering whether there are some impacts that count for ~50% of all agents, or 10%, or .01% - where do we draw the line? It seems to me that any natural impact (that doesn't involve something crazy like "if the goal encoding starts with '0', shut them off; otherwise, leave them alone") either affects a very low percentage of agents or a very high percentage of agents. So, I'm not going to draw an exact line, but I think it should be intuitively obvious most of the time.
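The bimodality claim here can be made concrete with a toy sketch (my own construction; the agent population and events are assumptions for illustration): give each sampled agent a single preferred outcome, and compare an event that forecloses every outcome (the meteor) with an event that forecloses just one niche outcome. The first affects essentially all agents, the second only a small fraction, with little in between for "natural" events.

```python
import random

random.seed(0)
outcomes = list(range(10))
# Each toy agent wants exactly one outcome, chosen uniformly at random.
agents = [random.choice(outcomes) for _ in range(10_000)]

def fraction_affected(blocked):
    """Fraction of agents whose goal becomes unreachable
    when the outcomes in `blocked` are removed."""
    return sum(goal in blocked for goal in agents) / len(agents)

print(fraction_affected(set(outcomes)))  # meteor-style event: 1.0
print(fraction_affected({3}))            # niche event: roughly 0.1
```

Drawing an exact percentage threshold would be arbitrary, but in this toy model the interesting events cluster near 0% or 100%, matching the intuition in the reply above.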

Maybe I'm wrong, but I think the most important point to note here is that 'objectiveness' of an impact is defined not to be about the 'objective state of the world' - rather about how 'general to all agents' an impact is

This is exactly it.

comment by DanielFilan · 2019-10-02T19:30:33.408Z · LW(p) · GW(p)

It's interesting to me to consider the case of me getting into a PhD program at UC Berkeley, which felt pretty impactful. It wasn't that I intrinsically valued being a PhD student at Berkeley, and it wasn't just that being a PhD student at Berkeley objectively gave any agent greater ability to achieve their goals (although they pay you, so it's true to some extent), it was that it gave me greater ability to achieve my goals by (a) being able to learn more about AI alignment and (b) getting to hang out with my friends and friends-of-friends in the Bay Area. (a) and (b) weren't automatic consequences of being admitted to the program, I had to do some work to make them happen, and they aren't universally valuable. A simplified example of this kind of thing is somebody giving you a non-transferrable $100 gift voucher for GameStop.

comment by Chipmonk · 2023-08-25T12:26:21.901Z · LW(p) · GW(p)

 Objective impact: it is more important that you survive and maintain the ability to make your own decisions and pursue your goals than it is that you get the specific (subjective) things you want

Individual sovereignty is more important than preference fulfillment

comment by adamShimi · 2020-02-12T13:02:37.300Z · LW(p) · GW(p)

I have one potential criticism of the examples:

Because I was not sure what the concrete implications of the asteroid impact were, the reveal that it was objectively valued negatively by anybody (because they risk death) had little impact on me (pun intended). Had you written that the asteroid strikes near the agent, or that it causes massive catastrophes, then I would probably have thought that it mattered the same for local pebblehoarders and for humans. Also, the asteroid might destroy pebbles (or, depending on your definition of pebble, make new ones).

Also, I feel that some of your examples of objective impact are indeed relevant to agents in general (not dying/being destroyed), while others depend on sharing a common context (cash, which would be utterly useless on Pebblia if the local economy were based on exchanging pebbles for pebbles).

Do you just always consider this context as implicit?

Replies from: TurnTrout
comment by TurnTrout · 2020-02-12T13:44:55.390Z · LW(p) · GW(p)

Also, I feel that some of your examples of objective impact are indeed relevant to agents in general (not dying/being destroyed), while others depend on sharing a common context (cash, which would be utterly useless on Pebblia if the local economy were based on exchanging pebbles for pebbles).

Yeah, in the post I wrote

Even if we were on Pebblia, we'd probably think primarily of the impact on human-Pebblehoarder relations.

Replies from: adamShimi
comment by adamShimi · 2020-02-12T13:49:50.568Z · LW(p) · GW(p)

I don't see the link with my objection: the part of your post you quote is about value impact (which depends on the values of the specific agents), while I am talking about the need for context even for objective impact (which you present as independent of the values and objectives of specific agents).

Replies from: TurnTrout
comment by TurnTrout · 2020-02-12T15:48:10.731Z · LW(p) · GW(p)

Oh, I think I see. Yes, this is explicitly talked about later in the sequence - "resources" like cash are given their importance by how they affect future possibilities, and that's highly context-dependent.

(Let me know if this still isn't addressing your objection)

Replies from: adamShimi
comment by adamShimi · 2020-02-12T16:12:15.405Z · LW(p) · GW(p)

Thanks, I'll keep going then.

comment by martinkunev · 2024-03-28T00:21:41.984Z · LW(p) · GW(p)

It seems to me that objective impact stems from convergent instrumental goals - self-preservation, resource acquisition, etc.