A Toy Model of Hingeyness

post by B Jacobs (Bob Jacobs) · 2020-09-07T17:38:59.826Z · LW · GW · 10 comments

Contents

  Definition of Hingeyness
  Older decisions are hingier
  Decrease in range
  Going extinct quickly isn't necessarily bad
  Is hingeyness related to slack?
  How probability fits in
  How much risk should we take?
  Conclusion

This is a crosspost from the Effective Altruism forum [EA · GW]

Epistemic status: Attempt to clarify a vague concept. This should be seen as a jumping-off point, not as a definitive model.


Definition of Hingeyness

The Hinge of History refers to a time when we have an unusually high amount of influence over the future of civilization, compared to people who lived in the eras before and after ours.

I will use the model I made for my previous question post [EA · GW] to explain why I don't think this definition is very useful. As before, in this model there are only two possible choices per year. The number inside the circle refers to the amount of utility that year experiences, and the two lines are the two options that year has to decide between. The amount of utility each option will add to the next year is written next to the lines. (link to image)
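For concreteness, here is a minimal sketch of the toy model in Python. The `Year` class is my own construction, and the utility numbers are reconstructed from the description in the next section, since the image isn't reproduced here:

```python
from dataclasses import dataclass, field

@dataclass
class Year:
    utility: int  # the amount of utility this year experiences
    options: list["Year"] = field(default_factory=list)  # the (at most two) choices

# The tree from the first image: the year with 1 utility can take the +1
# option (becoming 2) or the +2 option (becoming 3); 2 leads on to 7 or 8,
# and 3 leads on to 0 or 6.
tree = Year(1, [
    Year(2, [Year(7), Year(8)]),
    Year(3, [Year(0), Year(6)]),
])
```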


Older decisions are hingier

I think we all agree that we should try to avoid options that lead to better results in the next year but create less utility in the long run. In this model the year with 1 utility could choose the +2 option, but it should choose the +1 option because it leads to better options the following year. Let's assume that all life dies after the last batch of years. The 1-3-0 path is then the worst, because it generates only 4 utility in total. 1-3-6 is just as good as 1-2-7 (10 utility each), but 1-2-8 is clearly the best path (11 utility).

The implication is that later decisions are never hingier than earlier ones. The year with 1 utility gets a range of outcomes stretching from 4 utility to 11 utility; no later decision point gets a range like that. In fact, it's mathematically impossible for future decisions to have a larger range of options than previous decisions had (assuming the universe will end and isn't some kind of loop): every chain reachable from a later decision was already reachable from the earlier one. It's also mathematically impossible for future decisions to have ranges where both the best and worst case scenarios give you more utility than the ranges of previous years. The exception is if negative utility is possible, which arguably exists when, say, a universe of beings is kept alive and tortured against their will (but that's rare in any case).
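This is easy to check in the sketch above. A helper that collects every reachable total (continuing the `Year` tree; the helper names are mine) shows that a child's range is always contained in its parent's:

```python
def path_totals(node, acc=0):
    """All reachable totals: the sum of utilities along each chain to an ending."""
    total = acc + node.utility
    if not node.options:
        return [total]
    return [t for child in node.options for t in path_totals(child, total)]

def utility_range(node, acc=0):
    totals = path_totals(node, acc)
    return min(totals), max(totals)

print(utility_range(tree))                    # (4, 11) -- the first year's full range
print(utility_range(tree.options[0], acc=1))  # (10, 11) -- after taking the +1 option
print(utility_range(tree.options[1], acc=1))  # (4, 10)  -- after taking the +2 option
```

Every chain below a child also runs through its parent, so the child's set of totals is a subset of the parent's: the minimum can only rise and the maximum can only fall.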


Decrease in range

Does that mean that hingeyness is now a useless concept? Not necessarily. The range will never grow, but the amount by which it narrows from year to year varies widely. Let's look at an extreme example. (link to image)

So the decisions made in 1 will always have the broadest range [204-405], but if you look at the difference in range between 3 [203-311] and 4 [208-311], it's not that much. So hingeyness may still be useful for thinking about how quickly [EA(p) · GW(p)] our range is decreasing. It's even possible that the range doesn't shrink at all.
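One way to put a number on this narrowing, sketched with the helpers above (the metric and its name are my reading of the post, not a canonical definition):

```python
def range_width(node, acc=0):
    lo, hi = utility_range(node, acc)
    return hi - lo

def hinge_reduction(parent, child, acc=0):
    """How much the range narrows when this particular child is chosen."""
    return range_width(parent, acc) - range_width(child, acc + parent.utility)

# In the first tree the +1 option narrows the range a lot (7 wide down to 1 wide),
# while the +2 option barely narrows it at all (7 wide down to 6 wide).
print(hinge_reduction(tree, tree.options[0]))  # 6
print(hinge_reduction(tree, tree.options[1]))  # 1
```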


Going extinct quickly isn't necessarily bad

In the previous post I said that choosing times where we survive for longer is almost always better (assuming you're a positive utilitarian and negative utility is impossible); this is an example of when that is not the case. The 1-2-402 chain gives the world the most utility even though it goes extinct one tick quicker. We (naturally) focus on reducing x-risk, but I wanted to visualize here why dying quickly in a blaze of utility might be better than fizzling on for longer with low amounts of utility (especially if negative utility is possible). It should be noted, though, that this model gives you discrete ticks, which might not exist in real life. Maybe Planck time? Or maybe the time it takes to go from one state of pleasure to another, i.e. the time it takes to fire a neuron? Depending on how you answer that question this argument might fall flat.
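The arithmetic is simple: total utility is just the sum along a chain, however long the chain survives. A toy comparison (the 1-2-402 chain is from the image; the longer "fizzle" chain is made up for contrast):

```python
blaze  = [1, 2, 402]   # ends one tick earlier, in a blaze of utility
fizzle = [1, 2, 3, 4]  # survives one tick longer on low utility

print(sum(blaze), sum(fizzle))  # 405 10 -- the shorter chain generates far more
```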


Is hingeyness related to slack?

I'm starting to see similarities between the range of possible choices you keep open and the amount of slack [LW · GW]. I previously expressed that I see the slack/Moloch [LW · GW] trade-off as similar to the exploration/exploitation trade-off. Since we can't accurately predict which branches will give us the most utility, it might be useful to keep a broad range of options open, i.e. to give yourself a lot of slack. In fact, if we look at the first image, you can see that someone who is purely exploiting will go from 1 to 3 (giving himself a +2 instead of a +1). Since this gives you worse results later, it's basically the same thing as Moloch pushing you into an inadequate equilibrium [? · GW]. Having the slack/exploration to choose a route that is sub-optimal in the short run but better in the long run can only work if you have a lot of hingeyness.


How probability fits in

In reality, of course, you get more than two options, but the principle stays the same. Instead of a range you get a probability distribution. (link to image)

The probability that you get a certain amount of utility is proportional to the number of chains that generate that specific amount of utility (if you think certain chains are inherently less likely to exist, you can weight each chain by its probability). The range we are talking about is the difference between the lowest amount you could possibly generate and the highest, and it will always either stay the same or shrink. This is not necessarily a bad thing, as we would rather face a narrow range of options between several good outcomes than a broad range of options between a lot of bad outcomes.
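A sketch of that counting rule, reusing `path_totals` from above (treating every complete chain as equally likely, as the post does):

```python
from collections import Counter

def total_distribution(node, acc=0):
    """Count how many chains produce each total utility; normalising the
    counts gives the probability distribution over totals."""
    counts = Counter(path_totals(node, acc))
    n = sum(counts.values())
    return {total: count / n for total, count in sorted(counts.items())}

print(total_distribution(tree))  # {4: 0.25, 10: 0.5, 11: 0.25}
```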

But what about a distribution that looks like this (link to image):

This is what I think a lot of people have in mind when we talk about the hinge of history: a time in history where the decisions we make can turn out to have either very good outcomes or very bad outcomes, with very little in between. Our range may be smaller than in previous eras, but the probability that we either gain or lose lots of utility has never been higher. I won't decide what the "true definition of hingeyness" is, since language belongs to its users. I'm just pointing out that "the range of total utility generated", "how quickly that range is decreasing" and "how polarized the probability distribution is" are very different concepts, and we should probably have different labels for them. I will suggest three in the conclusion.


How much risk should we take?

I previously asked:

When you are looking at the potential branches in the future, should you make the choice that will lead you to the cluster of outcomes with the highest average utility or to the cluster with the highest possible utility?

EmmaAbele [EA · GW] answered:

I'd say the one with the highest average utility if they are all equally likely. Basically, go with the one with the highest expected value.

But what about the cluster of branches with the median amount of utility, or the mode, or some other statistic? I don't think these questions have one definitively correct answer. Instead I would argue that we should use meta-preference utilitarianism [LW · GW] to choose the options that most people want to choose.
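In the toy model these decision rules are just different scoring functions over the reachable totals. A sketch, reusing the helpers above (the rule names are mine):

```python
import statistics

def choose(node, rule, acc=0):
    """Pick the child whose reachable totals score best under `rule`, where
    `rule` maps a list of totals to a single number."""
    return max(node.options,
               key=lambda child: rule(path_totals(child, acc + node.utility)))

highest_average = choose(tree, lambda ts: sum(ts) / len(ts))  # expected value
highest_maximum = choose(tree, max)                           # best possible outcome
highest_median  = choose(tree, statistics.median)

# In this tiny tree all three rules happen to agree (they pick the year with
# 2 utility), but on larger trees they can come apart.
print(highest_average.utility, highest_maximum.utility, highest_median.utility)  # 2 2 2
```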


Conclusion

There are three concepts that could be described as Hingeyness:

1) The range of the amount of utility you can potentially generate with your decision (maybe call it 'hinge broadness'?)

2) How much that range will narrow when you make a decision (maybe call it 'hinge reduction'?)

3) How polarized the probability is that you get either a lot or very little utility in the future (maybe call it 'hinge precipiceness [EA · GW]'?)

Having lots of "hinge broadness" is crucial for having slack. This toy model can be used to visualize all of these concepts.

10 comments

Comments sorted by top scores.

comment by Richard_Kennaway · 2020-09-08T11:09:29.408Z · LW(p) · GW(p)
unless negative utility is possible

In all forms of utility theory that I know of, utility is only defined up to arbitrary offset and positive scaling. In that setting, there is no such thing as negative, positive or zero utility (although there are negative, positive, and zero differences of utility). In what setting is there any question of whether negative utility can exist?

comment by Donald Hobson (donald-hobson) · 2020-09-09T11:57:39.932Z · LW(p) · GW(p)

Your definition of Hinginess doesn't match the intuitive idea. In particular, you are talking about the range of possible values.

Suppose that you are at the first tick, making a decision. You have the widest range of "possible" options, but not all of those options are actually in your action space. Let's say that you personally will be dead by the second tick, and that you can't influence the utility functions of any other agents with your decision now. You just get to choose which branch you go down. The actual path within that branch is entirely outside your control.

Consider a tree where the first ten options are ignored entirely, so the tree branches into 1024 identical subtrees. Then let the 11th option completely control the utility. In this context, most of the hinginess is at tick 11. All someone at the first tick gets to choose between is two options, each of which is maybe 0 util or maybe lots of util, as decided by factors outside your control; in other words, all the options have about the same expected utility.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-09-09T14:20:22.813Z · LW(p) · GW(p)

It might be interesting to distinguish between "personal hingeyness" and "utilitarian hingeyness". Humans are not utilitarians, so we care mostly about stuff that's happening in our own lives; when we die, our personal tree stops and we can't get more hinges. But "utilitarian hingeyness" continues, as it describes all possible utility. I made this with population ethics in mind, but you could totally use the same concept for your personal life; then the most hingey time for you and the most hingey time for everyone will be different.

I'm not sure I understand your last paragraph, because you didn't clarify what you meant by the word "hingeyness". If you meant "the range of total utility you can potentially generate" (a.k.a. hinge broadness) or "the amount by which that range shrinks" (a.k.a. hinge reduction): it is possible to draw a tree where the first tick of an 11-tick tree has just as broad a range as an option in the 10th tick. So the hinge broadness and the hinge reduction can be just as big in the 10th tick as in the 1st, but not bigger. I don't think you're talking about "hinge shift", but maybe you were talking about hinge precipiceness, in which case: yes, that can totally be bigger in the 10th tick.

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2020-09-09T19:13:19.999Z · LW(p) · GW(p)

I was just trying to make the point that the breadth of available options doesn't actually mean real-world control.

Imagine a game where 11 people each take a turn to play in order. Each person can play either a 1 or a 0. Each player can see the moves of all the previous players. If the number of 1's played is odd, everyone wins a prize. If you aren't the 11th player, it doesn't matter what you pick; all that matters is whether or not the 11th player wants the prize. (Unless all the people after you are going to pick their favourite numbers regardless of the prize, and you know what those numbers are.)

comment by crl826 · 2020-09-08T00:06:50.889Z · LW(p) · GW(p)
In fact, it's mathematically impossible for future decisions to have a larger range of options than previous decisions had

Can you expand on this? It isn't obvious to me that this is true.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-09-08T09:53:16.474Z · LW(p) · GW(p)

If we draw a tree of all possible timelines (and there is an end to the tree) the older choices will always have more branches that will sprout out because of them. If we are purely looking at the possible endings then the 1 in the first image has a range of 4 possible endings, but 2 only has 2 possible endings. If we're looking at branches then the 1 has a range of 6 possible branches, while 2 only has 2 possible branches. If we're looking at ending utility then 1 has a range of [0-8] while 2 only has [7-8]. If we're looking at the range of possible utility you can experience then 1 has a range from 1->3->0 = 4 utility all the way to 1->2->8 = 11 utility, while 2 only has 2->7 = 9 to 2->8 = 10.

When we talk about the utility of endings it is possible that the range doesn't change. For example:

(I can't post images in comments so here is a link to the image I will use to illustrate this point)

Here the "range of utility in endings" that tick 1 (the first 10) has is [0-10], and the range of endings that the first 0 (tick 2) has is also [0-10]; the same. Of course the probability has changed (getting an ending of 1 utility is no longer even an option), but the minimum and maximum stay the same.

Now the width of the range of the total amount of utility you could potentially experience can also stay the same. For example, the lowest utility tick 1 can experience is 10->0->0 = 10 utility and the highest is 10->0->10 = 20 utility, a difference of 10 utility. The lowest total utility that the 0 on tick 2 can experience is 0->0 = 0 utility and the highest is 0->10 = 10 utility, which is once again a difference of 10 utility. The probability has changed (ending with a weird number like 19 is impossible for tick 2), and the range has shifted downwards from [10-20] to [0-10], but the width stays the same.

It just occurred to me that some people may find the shift in range also important for hingeyness. Maybe call that 'hinge shift'?

Crucially, under none of these definitions is it possible to end up with a wider range further down the line than when you started.
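A quick way to test this is on random trees, negative numbers included (a sketch reusing the `Year` class and helpers from the post above; the random construction is mine):

```python
import random

def random_tree(depth):
    """A random binary utility tree with utilities from -100 to 100."""
    node = Year(random.randint(-100, 100))
    if depth > 0:
        node.options = [random_tree(depth - 1), random_tree(depth - 1)]
    return node

def range_never_widens(node, acc=0):
    lo, hi = utility_range(node, acc)
    return all(
        lo <= utility_range(child, acc + node.utility)[0]
        and utility_range(child, acc + node.utility)[1] <= hi
        and range_never_widens(child, acc + node.utility)
        for child in node.options
    )

# Holds for every tree we try, because every chain below a child also runs
# through its parent: the child's totals are a subset of the parent's.
assert all(range_never_widens(random_tree(6)) for _ in range(1000))
```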

Replies from: crl826
comment by crl826 · 2020-09-08T11:13:01.447Z · LW(p) · GW(p)

It seems like it's only impossible because that is how you've drawn it, not that it's actually mathematically impossible.

Why couldn't one of the final branches in your example be -100?

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-09-08T15:12:43.615Z · LW(p) · GW(p)

Ending in negative numbers wouldn't change anything. The number of endings will still shrink, the number of branches will still shrink, the range of the possible utility of the endings will still shrink or stay the same width, and the range of the total amount of utility you could generate over the future branches will also shrink or stay the same width. Try it! Replace any number in any of my models with a negative number, or draw your own model, and see what happens.

Replies from: crl826
comment by crl826 · 2020-09-08T20:47:55.609Z · LW(p) · GW(p)

It wasn't about being negative or not. My question works just as well with a positive number.

I was trying to get at what happens when one of the final branches has a wider range than another final branch.

If that is the case, then it is mathematically possible for a more recent hinge to be hingier than a hinge further back in time.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-09-09T13:16:12.956Z · LW(p) · GW(p)

If in the first image we replace the 0 with a -100 (much wider), what happens? The number of endings for 1 is still larger than for 3. The number of branches for 1 is still larger than for 3. The width of the range of the possible ending utilities for 1 is [-100 to 8] and for 3 is [-100 to 6] (smaller). The width of the range of the total amount of utility you could generate over the future branches is [1->3->-100 = -96 up to 1->2->8 = 11] for 1 and [3->-100 = -97 up to 3->6 = 9] for 3 (smaller). Is this a good example of what you're trying to convey? If not, could you maybe draw an example tree to show me what you mean?