Pathological utilitometer thought experiment

post by Rain · 2010-10-26T15:13:06.100Z · 30 comments

I've been doing thought experiments involving a utilitometer: a device capable of measuring the utility of the universe, including sums-over-time and counterfactuals (what-if extrapolations), for any given utility function, even generic statements such as "what I value." Things this model ignores: nonutilitarianism, complexity, contradictions, unknowability of true utility functions, inability to simulate and measure counterfactual universes, etc.

Unfortunately, I believe I've run into a pathological mindset from thinking about this utilitometer. Given the abilities of the device, you'd want to input your utility function and then take a sum-over-time from the beginning to the end of the universe and start checking counterfactuals ("I buy a new car", "I donate all my money to nonprofits", "I move to California", etc.) to see if the total goes up or down.

It seems quite obvious that the sum at the end of the universe is the measure that makes the most sense, and I can't see any reason for taking a measure at the end of an action as is done in all typical discussions of utility. Here's an example: "The expected utility from moving to California is negative due to the high cost of living and the fact that I would not have a job." But a sum over all time might show that it was positive utility because I meet someone, or do something, or learn something that improves the rest of my life, and without the utilitometer, I would have missed all of those add-on effects. The device allows me to fill in all of the unknown details and unintended consequences.
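To make the intended use concrete, here is a purely hypothetical interface sketch of the device described above (every name is invented for illustration; nothing like this can actually be built, which is rather the point):

```python
# Hypothetical sketch only: a toy interface for the utilitometer described above.
# Every name here is invented for illustration; the device is impossible by construction.

class Utilitometer:
    """Magic device: evaluates a utility function over an entire (counterfactual)
    history of the universe, from beginning to end, and returns the sum."""

    def __init__(self, utility_fn):
        # utility_fn stands in for "what I value", evaluated on a world snapshot
        self.utility_fn = utility_fn

    def total_utility(self, counterfactual: str) -> float:
        """Sum of utility_fn over every moment of the universe in the world where
        `counterfactual` is made true, unknown consequences and all."""
        raise NotImplementedError("this is the impossible part")

# Intended usage: compare end-of-universe sums rather than end-of-action snapshots.
# meter = Utilitometer(my_values)
# if meter.total_utility("I move to California") > meter.total_utility("I stay put"):
#     print("move")
```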

Where this thinking becomes a problem is when I realize I have no such device, but desperately want one, so I can incorporate the unknown and the unintended, and know what path I should be taking to maximize my life, rather than having the short, narrow view of the future I do now. In essence, it places higher utility on 'being good at calculating expected utility' than on almost any other action I could take. If I could just build a true utilitometer that measures everything, then the expected utility would be enormous! ("push button to improve universe"). And even incremental steps along the way could have amazing payoffs.

Given that a utilitometer as described is impossible, thinking about it has still altered my values to place steps toward creating it above other, seemingly more realistic options (buying a new car, moving to California, etc.). I previously asked the question, "How much time and effort should we put into improving our models and predictions, given we will have to model and predict the answer to this question?" and acknowledged it was circular and unanswerable. The pathology comes from entering the circle and starting a feedback loop; anything less than perfect prediction means wasting the entire future.

30 comments

comment by PeerInfinity · 2010-10-28T20:41:44.497Z · LW(p) · GW(p)

I... had a similar problem, as a result of spending lots of time thinking about these topics.

and... I went as far as starting a project to construct something vaguely similar to a utilitometer.

and... I got stuck in an affective death spiral about this project.

and... I ended up throwing a ridiculous amount of time, money, and effort at this project.

I seem to be mostly recovered from the affective death spiral now, but now I'm having trouble with the sunk cost fallacy.

I was considering writing a top-level post about this project to LW, but I still haven't even managed to do a good job of describing what the project is. And it's really obvious now that the project is a whole lot more difficult and a whole lot less useful than I originally hoped.

I still don't actually have anything to show for my efforts so far. But I still might as well post some links here to what little I do have:

The "official project page" at lifeboat.com

a first attempt to describe the project, on the transhumanist wiki

comment by Eugine_Nier · 2010-10-26T15:42:12.920Z · LW(p) · GW(p)

I suspect you're suffering from availability bias. Specifically, thinking about the utilitometer has caused you to subjectively overestimate how likely you are to succeed.

Replies from: Rain
comment by Rain · 2010-10-26T15:51:04.095Z · LW(p) · GW(p)

Obviously I believe I have no chance of success with the toy as described. But the slightest increase in predictive power seems to have a great deal of benefit. The marginal utility of increases in utility prediction seems quite high to me. Does it not seem that way to others?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2010-10-26T15:58:37.446Z · LW(p) · GW(p)

The marginal utility of increases in utility prediction seems quite high to me.

Yes, well that's sort of the point of this site.

Replies from: Rain
comment by Rain · 2010-10-26T17:17:58.863Z · LW(p) · GW(p)

So how did everyone else avoid the pathological effect of improving prediction taking up more of their thought patterns than 'actual' utility? Or maybe they didn't.

Replies from: Kingreaper, Nick_Tarleton
comment by Kingreaper · 2010-10-26T17:56:50.503Z · LW(p) · GW(p)

Anyone spending time on here clearly believes that improving their ability to predict things is worthwhile.

Either that or they just think this place is kinda fun. Or both.

comment by Nick_Tarleton · 2010-10-26T21:58:44.542Z · LW(p) · GW(p)

It's not necessarily pathological to devote more resources to investment than consumption for the time being. (LW may not be the best form of investment.)

comment by Richard_Kennaway · 2010-10-26T19:50:58.762Z · LW(p) · GW(p)

Some more constructive versions of this thought:

"If I had six hours to cut down a tree, I'd spend four of them sharpening my axe."

Learning how to learn.

Knowing how to know.

Replies from: Rain
comment by Rain · 2010-10-26T23:40:38.233Z · LW(p) · GW(p)

How about "analysis paralysis"? "You think too much"? That's more what I had in mind.

comment by Nick_Tarleton · 2010-10-26T21:59:55.607Z · LW(p) · GW(p)

The pathology comes from entering the circle and starting a feedback loop; anything less than perfect prediction means wasting the entire future.

Why? It seems like your expected utility should steadily increase as your prediction ability does.

Replies from: Rain
comment by Rain · 2010-10-26T23:40:59.558Z · LW(p) · GW(p)

When do you stop attempting to increase the utility and make the decision?

Replies from: Kingreaper, Relsqui
comment by Kingreaper · 2010-10-28T06:52:22.198Z · LW(p) · GW(p)

When d(utility)/d(research) is less than d(utility)/d(action)

That is to say: when the increase in expected utility from research is smaller than the increase in expected utility from the same amount of action.
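A minimal sketch of that rule in code (illustrative only; the hard part, estimating either marginal quantity, is assumed away here):

```python
def should_keep_researching(marginal_utility_per_hour_of_research: float,
                            marginal_utility_per_hour_of_action: float) -> bool:
    """Kingreaper's stopping rule: keep researching only while an hour of research
    is expected to buy more utility (via better later decisions) than an hour of
    acting on your current best guess."""
    return marginal_utility_per_hour_of_research > marginal_utility_per_hour_of_action

# Illustrative numbers only: early on research tends to dominate, eventually action wins.
print(should_keep_researching(10.0, 2.0))  # True: keep researching
print(should_keep_researching(0.5, 2.0))   # False: act
```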

Replies from: Rain
comment by Rain · 2010-10-28T12:54:17.789Z · LW(p) · GW(p)

Yes.

From the number of intuitively obvious answers to this post, I'm beginning to think that others just don't care about the sorts of problems I'm interested in. (Likely alternative: I suck at explaining them.) I see "when to measure (predicting the future utility of actions)" as one of the fundamental flaws of current theory, but everyone else seems to just say "when you calculate you should do so", as if they have some sort of fully functioning ability to step out of the analysis/predictive phase and take concrete action. I don't understand that.

This flows into the other main problem I have, which is "what to value (crafting the proper utility function)". Several times, I've been told that we do not create the function, rather we discover it, in which case I reformulate the problem as "setting the proper instrumental goals (achieving ambiguous or fluctuating terminal values)".

When you're not even sure[1] what it is you want, and you're not sure[1] that doing a particular thing will lead to [very long term] positive results in the direction you want, why take any action other than research? Judgment under uncertainty is extraordinarily difficult for me.

[1] Please note that this use of "not sure" is meant along the lines of wild utility fluctuation in positive and negative directions due to unintended consequences, unknown results, and random events outside of your control. There are many ways in which short term benefits are outdone by long term detriments, which are then negated by even longer term benefits, in nearly impossible to predict patterns. I see almost every action as useless static noise, given X years of consequences.

Replies from: Kingreaper
comment by Kingreaper · 2010-10-28T15:51:26.516Z · LW(p) · GW(p)

[1] Please note that this use of "not sure" is meant along the lines of wild utility fluctuation in positive and negative directions due to unintended consequences, unknown results, and random events outside of your control. There are many ways in which short term benefits are outdone by long term detriments, which are then negated by even longer term benefits, in nearly impossible to predict patterns. I see almost every action as useless static noise, given X years of consequences.

If almost every action is static noise apart from its predictable consequences, is it not a sensible approximation to assume that the static noises are going to be, on average, equal?

In which case, you can value the predictable consequences, and let the unpredictable consequences cancel.

If you fail to do that, you can't get a value of utility for anything, even for the utility of making a better utilitometer.

Replies from: Rain
comment by Rain · 2010-10-28T16:09:23.879Z · LW(p) · GW(p)

If almost every action is static noise apart from its predictable consequences, is it not a sensible approximation to assume that the static noises are going to be, on average, equal?

In my estimation, it seems likely that either the sign of total utility flips between positive and negative based on every act (very large swings, butterfly effect), or all utility is canceled out by noise after the short term (anchoring to null).

In which case, you can value the predictable consequences, and let the unpredictable consequences cancel.

If you fail to do that, you can't get a value of utility for anything, even for the utility of making a better utilitometer.

Hence pathology.

Replies from: Kingreaper
comment by Kingreaper · 2010-10-28T17:07:46.750Z · LW(p) · GW(p)

In my estimation, it seems likely that either the sign of total utility flips between positive and negative based on every act (very large swings, butterfly effect), or all utility is canceled out by noise after the short term (anchoring to null).

This is a strange version of the gambler's fallacy; the random noise doesn't "cancel out" the chosen act. If I set my D20 down with the 1 face up 20 times in a row, that doesn't make it any less likely that I'll roll a 1 during a game.

Imagine a game where you first place a fair coin heads up (winning 5000 utilons) or tails up (losing 5000 utilons) and then flip it 10 million times, winning 500 utilons for every flip that comes up heads and losing 500 utilons for every tails.

Sure, the unpredictable (chaotic) effects are much larger than the predictable effects, but they don't cancel them out.

Putting the coin down heads-up is, on average, 10,000 utilons better.

Just like torturing someone for no reason is, on average, going to produce a worse world-outcome than giving someone chocolate for no reason.
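A quick Monte Carlo sketch of that game (scaled down from 10 million flips to 1,000 per play so it runs quickly in plain Python; the structure is otherwise the same):

```python
import random

def play(heads_up: bool, n_flips: int = 1_000) -> float:
    """One play: place the coin (+/-5000 utilons), then flip it n_flips times
    at +/-500 utilons per flip."""
    total = 5000.0 if heads_up else -5000.0
    for _ in range(n_flips):
        total += 500.0 if random.random() < 0.5 else -500.0
    return total

trials = 2_000
avg_heads = sum(play(True) for _ in range(trials)) / trials
avg_tails = sum(play(False) for _ in range(trials)) / trials

# Any single play is dominated by flip noise (its standard deviation is far larger
# than 5000), but the averages still differ by roughly 10,000: the initial placement
# is the only part that doesn't wash out.
print(avg_heads - avg_tails)
```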

Replies from: Rain
comment by Rain · 2010-10-28T17:42:44.437Z · LW(p) · GW(p)

I disagree, primarily on the grounds of when you take the measure of utility. As usual, you're measuring immediately after the event occurs, whereas all of my previous statements have been about a measure many years after. It is not at all clear to me that short term effects like those you describe end up with long term average effects that can be calculated, or would be of the desired sign. Events are not discrete.

How does giving a random person chocolate for no reason affect them over the course of their whole life?

Replies from: Kingreaper
comment by Kingreaper · 2010-10-28T19:37:18.931Z · LW(p) · GW(p)

I disagree, primarily on the grounds of when you take the measure of utility.

Do you disagree with just my real-world application, or also with my coinflip example?

It is not at all clear to me that short term effects like those you describe end up with long term average effects that can be calculated, or would be of the desired sign.

Let's say you have two choices: one is "+500 utilons and then other stuff"; the other is "-500 utilons and then other stuff", where you don't know anything about the nature of "other stuff". Why can you not cancel out the unknowns? Your best information about both unknowns is identical, is it not?
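Put in expectation terms (a small added working-out of the claim, writing N for the unknown "other stuff" and assuming, as above, that N has the same distribution whichever choice you make):

$$\mathbb{E}[+500 + N] - \mathbb{E}[-500 + N] = \left(500 + \mathbb{E}[N]\right) - \left(-500 + \mathbb{E}[N]\right) = 1000.$$

The unknown term $\mathbb{E}[N]$ drops out of the comparison whatever its value; the only thing that matters is whether the unknowns really are identically distributed under both choices, which is the point at issue in the replies below.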

How does giving a random person chocolate for no reason affect them over the course of their whole life?

On average better than torturing them would. Do you disagree?

Replies from: Rain
comment by Rain · 2010-10-28T20:47:33.432Z · LW(p) · GW(p)

Do you disagree with just my real-world application, or also with my coinflip example?

Both.

Let's say you have two choices: one is "+500 utilons and then other stuff"; the other is "-500 utilons and then other stuff", where you don't know anything about the nature of "other stuff". Why can you not cancel out the unknowns? Your best information about both unknowns is identical, is it not?

Too clean: money is not utilons. I think I can see part of the problem. The standard definition of utility seems to contain the time element within it, rather than allowing context and flow into the future to have an effect on the object (not utilons!) itself. Does using the very word 'utility' create a point-in-time effect?

On average better than torturing them would. Do you disagree?

Maybe. I'm mainly trying to say, "I don't know", because I'm caught in some weird loop of calculation over unknown quantities.

Replies from: Kingreaper
comment by Kingreaper · 2010-10-28T20:50:42.117Z · LW(p) · GW(p)

Both.

Okay, let's concentrate on this for a second: why do you disagree with the coinflip example?

Do you feel that the two sets of coinflips DON'T have the same average utility? Do you feel that the average utility of the coinflips isn't zero?

Do you feel that utility can't be measured? (In which case, whence the utilitometer?)

Replies from: Rain
comment by Rain · 2010-10-28T20:58:27.265Z · LW(p) · GW(p)

I feel that the word 'utilons' needs to be disambiguated or tabooed, and that once I see the actual winnings (money? prestige? sweet, sweet heroin?), I could see how it might be 'utilons' at the point it's won, but negative utility later on.

Replies from: Kingreaper
comment by Kingreaper · 2010-10-28T21:00:35.710Z · LW(p) · GW(p)

Okay, let's make it money, and assume you're a money-optimiser.

Or make it utilons, and you've been told it's utilons by your friend, who has a utilitometer.

EDIT: (I am forced to give such arbitrary, but certain, examples by the nature of the issue you're having; you seem to be seeing anything with an uncertain part as completely indistinguishable, to an extent that makes torture indistinguishable from chocolate.)

Hmmm, perhaps there is one example that could work: replace utilons with "hours' worth of progress on making the utilitometer", but make all the negative amounts 0 instead.

In each of these cases: do the random bits cancel?

Replies from: Rain
comment by Rain · 2010-10-28T21:16:19.317Z · LW(p) · GW(p)

During the flipping of the coin, and the winning of the utilons, yes. If they're taking the measure with the utilitometer at the point-in-time of winning, then it will show 'utilons', but I think that's the wrong place to take the measurement. There's the possibility that more now means less later, or overall. If they take the measurement at end-of-time, then I would expect massive differences between each coin flip, as measured by the utilitometer, or no effect whatsoever.

I still think the problem is inherent in the definition, though, so asking me questions based strictly on that definition is, uh, problematic, even as a thought experiment.

Value is complex. Humans are contradictory. I doubt there is such a thing as a true utilon, or a simplistic optimizer of any kind. I asked Clippy what it valued, and didn't get satisfactory results when talking about prediction and value problems.

Replies from: Kingreaper
comment by Kingreaper · 2010-10-28T21:23:31.611Z · LW(p) · GW(p)

During the flipping of the coin, and the winning of the utilons, yes. If they're taking the measure with the utilitometer at the point-in-time of winning, then it will show 'utilons', but I think that's the wrong place to take the measurement. There's the possibility that more now means less later, or overall. If they take the measurement at end-of-time, then I would expect massive differences between each coin flip, as measured by the utilitometer, or no effect whatsoever.

The friend with the utilitometer set it up so that there are no differences between each flip. One might alter wind flow over the Arctic, the other might kill a fish in the Pacific; total utility is the same.

Value is complex. Humans are contradictory. I doubt there is such a thing as a true utilon, or a simplistic optimizer of any kind.

Then why bother trying to make a utilitometer?

Remember all those unintended consequences? Your making an imperfect utilitometer is as likely to have huge negative effects on the far future as any other action you take.

And your making a perfect utilitometer is impossible; the total future is unbounded.

Again: if you have two possibilities that are, on average, the same apart from a small, known difference (i.e. torturing someone to death or giving them chocolate: both are almost equally likely to prevent the end of the world [there is good reason to think that the chocolate is a better choice in that regard, but the effect is minor], and both are equally likely to decrease the death toll in the year 5583224308, but one gives someone chocolate and the other tortures the person to death), why can't you cancel the bit that's the same, and look at the difference?

Replies from: Rain
comment by Rain · 2010-10-28T21:39:22.233Z · LW(p) · GW(p)

Then why bother trying to make a utilitometer?

Because as described, it would do the impossible :-P Obviously I'm not ever intending to build one, just thinking about it, which led me to the rest of this discussion, and my problems with utility and value.

Remember all those unintended consequences? Your making an imperfect utilitometer is as likely to have huge negative effects on the far future as any other action you take.

And your making a perfect utilitometer is impossible; the total future is unbounded.

Exactly why I feel like the entire future is wasted or random static, regardless of the actions I take.

why can't you cancel the bit that's the same, and look at the difference?

Because I think 'the bit' is different. Time moves in a linear fashion, and effects propagate outward from their point of origin, flipping all sorts of coins all over the place that would have otherwise landed on the other side.

Replies from: Kingreaper
comment by Kingreaper · 2010-10-28T21:57:05.465Z · LW(p) · GW(p)

Because I think 'the bit' is different. Time moves in a linear fashion, and effects propagate outward from their point of origin, flipping all sorts of coins all over the place that would have otherwise landed on the other side.

Of course "the bit" is, in actuality, different. If "the bit" wasn't different, d(utility)/d(work-on-utilitiometer) would be zero. But unless you know the difference, the effective difference to you is zero.

If I present you with two locked boxes, one with a diamond in, the other without, picking one will get you the diamond, the other won't.

But unless you have some way of telling which box contains the diamond, you might as well pick the one that looks nicer.

Likewise with the coins: which way up you put it will affect the series of tosses unpredictably. But on average, it evens out. That's the needed realisation.

Replies from: Rain
comment by Rain · 2010-10-28T22:03:28.769Z · LW(p) · GW(p)

unless you know the difference, the effective difference to you is zero.

I think that's leaving the future out of the calculation because it's otherwise hard to predict, which gets back to the original point that increases in predictive power seem to be worth more than any other kind of utility, to the point where a loop forms.

Replies from: Kingreaper
comment by Kingreaper · 2010-10-28T22:33:55.797Z · LW(p) · GW(p)

As long as you recognise that there must be a point at which that is no longer true (e.g. when your expected remaining rational lifespan is <1 year, will that still be true?), it's not necessarily a problem.

Honing your skills before beginning work is often good. Honing your skills until the day you die is always bad.

But you need to actually pay attention to how effective increases in your prediction are. If 2 years' worth of work makes you 5% better at generating utility, then you need to stop work once you've got 40 or fewer years left.
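A back-of-the-envelope version of that arithmetic (a sketch under the simplifying assumption that the only cost of research is the output forgone during those 2 years, and the 5% boost applies across your remaining years):

```python
def research_is_worth_it(years_left: float,
                         research_years: float = 2.0,
                         improvement: float = 0.05) -> bool:
    """Crude break-even test: the fractional boost, applied over the years you have
    left, must outweigh the years of output lost while researching.
    0.05 * years_left > 2 only when years_left > 40, matching the figure above."""
    return improvement * years_left > research_years

print(research_is_worth_it(50))  # True: with more than 40 years left, the research pays
print(research_is_worth_it(40))  # False: at 40 years or fewer, just act
```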

Replies from: Rain
comment by Rain · 2010-10-28T23:23:47.083Z · LW(p) · GW(p)

Honing your skills before beginning work is often good. Honing your skills until the day you die is always bad.

Not if "do nothing, then die" is the optimal path... otherwise agreed.